The Ministry of Electronics and Information Technology (MeitY) has asked intermediaries and platforms to ensure that their computer resources, or the use of artificial intelligence (AI) models, software, algorithms, etc. through such resources, do not permit any bias or discrimination or threaten the integrity of the electoral process.
In an advisory issued on March 15, MeitY also says that under-tested or unreliable AI foundational models, generative AI, or further developments of such models should be made available to users in India only after being appropriately labelled to indicate the possible inherent fallibility or unreliability of the output they generate.
The advisory further states that wherever an intermediary, through its software or any other computer resource, permits or facilitates the synthetic creation, generation or modification of text, audio, visual or audio-visual information in such a manner that the information could potentially be used as misinformation or a deepfake, such information should be labelled or embedded with a permanent unique metadata or identifier.
The ministry stated that the fresh advisory was necessitated after it was found that intermediaries and platforms were often negligent in undertaking the due diligence obligations outlined under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
"The advisory issued by MeitY on 15th March 2024, superseding the one on 1st March, is a step in the right direction. It removes the requirement of getting permission from the government for use of under-tested AI models. This is likely to reduce concerns related to avoidable government interference. Industry will be able to turn its attention and investment back to research, testing and innovation, and not worry about compliance management, at least for now," says Amol Kulkarni, director (Research), CUTS International.
Kulkarni notes that there are still concerns around information overload and 'pop-ups' across the user journey, which may reduce the quality of the user experience. There is thus a need for a multi-pronged approach to address concerns arising in the AI/LLM ecosystem, he says.
"The new advisory also highlights the government's proactiveness in taking stakeholders' feedback into account and course-correcting, which is commendable. The industry should consider this an opportunity to keep the government informed of its efforts to deal with misinformation and other concerns, and to work in a collaborative manner with the government," Kulkarni says.