India has decided not to regulate the growth of artificial intelligence (AI) technology, saying that the sector is significant and strategic for the country.
This comes just after more than 15,000 signatories, including Elon Musk and Steve Wozniak, signed an open letter urging all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
India’s Ministry of Electronics and Information Technology said in a written response to the letter that it acknowledged numerous ethical concerns around bias and transparency associated with AI’s rapid expansion, but the government “is not considering bringing a law or regulating the growth of artificial intelligence in the country.”
AI enables the digital economy
The ministry referred to AI as a kinetic enabler of the digital economy and asserted that the technology will strengthen entrepreneurship and business and play an important strategic role for the country moving forward.
India is also harnessing AI technology to provide personalized and interactive citizen-centric services through digital public platforms, the ministry said, adding that government officials were working to standardize responsible AI guidelines to drive AI’s development and promote healthy growth in the industry.
“AI has ethical concerns and risks due to issues such as bias and discrimination in decision-making, privacy violations, lack of transparency in AI systems, and questions about responsibility for harm caused by it. These concerns have been highlighted in the National Strategy for AI (NSAI) released in June 2018,” IT and Telecom Minister Ashwini Vaishnaw said.
The upcoming Digital Personal Data Protection Bill 2022 (DPDPB 2022) will apply to developers who build and facilitate AI technologies. Because AI developers collect and use massive amounts of data to train their algorithms and improve their AI solutions, they may be classified as data fiduciaries under the bill.
“This implies that AI developers may comply with the key principles of privacy and data protection like purpose limitation, data minimisation, consensual processing, contextual integrity etc as enshrined in DPDPB 2022,” Kamesh Shekar, programme manager at The Dialogue, a public policy think tank, told Analytics India Magazine.
“Besides, as contoured during Digital India Act (DIA) consultation, the government is also considering having provisions within the act which would define and regulate high-risk AI systems,” Shekar added.
India has guidelines, but they are not legally binding
NITI Aayog, the Indian government’s policy think tank and successor to the Planning Commission, has issued guiding documents on AI, such as the National Strategy for Artificial Intelligence and the Responsible AI for All report. Notably, these are not legally binding.
The documents outline the vision, goals, and principles for developing and deploying AI in India, with an emphasis on social and economic inclusion, innovation, and trustworthiness.
India’s hands-off approach contrasts with the regulatory moves being made by lawmakers in the US and Europe. Italy recently imposed a temporary ban on OpenAI’s ChatGPT, citing concerns that it violated the European Union’s General Data Protection Regulation (GDPR).
The European Union has already proposed legislation known as the European AI Act, which aims to introduce a common regulatory and legal framework for artificial intelligence in the region, covering all types of AI across every sector except the military.
The Act will classify AI tools according to their perceived level of risk, from minimal to unacceptable, and impose corresponding obligations and transparency requirements on those who provide or use them. The AI Act will also work in tandem with other laws such as the GDPR.
The UK has also shared plans for regulating AI, asking regulators in different sectors to apply existing rules to the technology rather than creating a new AI-specific regime.