New AI Labeling Rules Strengthen India’s AI Regulation
India’s AI regulation takes its first step as the government mandates labelling and watermarks for AI-generated content online. The Ministry of Electronics and Information Technology (MeitY) has drafted amendments to the Information Technology Rules, 2021 that would require social media companies to ensure users disclose whether their content is AI-generated or otherwise AI-manipulated.
The proposal requires platforms to watermark AI content, with the watermark visible for more than 10 percent of the content’s length or display size. Platforms may also flag user accounts that do not comply with the new requirements, and those that fail to meet these obligations risk losing their safe-harbour immunity. Comments from industry participants must be submitted by 6 November 2023.

The draft also addresses growing concerns about the rise of deepfakes — a form of content that alters, impersonates, or mimics a person’s likeness, voice, or mannerisms. AI-powered tools such as OpenAI’s ChatGPT and Google’s Gemini make deepfakes easier for the average user to create.
The Union IT Minister, Ashwini Vaishnaw, said the amendments “increase accountability of users, platforms and the government” as deepfake material proliferates. Enforcement of the regulations would fall to central officers at the joint secretary level or equivalent and police officials at the DIG level or equivalent.
A government spokesperson said that platforms, not individual users, are responsible for identifying and reporting deepfakes. The amendments also seek to embed AI content labelling in social media companies’ community standards.
Reactions from the industry and the legal sector
Legal and regulatory experts view India’s AI regulation as an important milestone in transparency. “The amendments to the IT Regulations are a proactive solution to the challenges arising from AI-generated content while promoting trust on the web,” said Dhruv Garg, founder of India Governance and Policy Project (IGAP). Garg also cautioned against an overreach that could stymie creativity, satire, or artistic applications of AI content.
“IT Regulations lend muscle to the need for platforms to take action under the regulation, but I think India would need a specific criminal legislation on AI-related content in order to deter abuse,” said cyberlawyer N.S. Nappinai, adding that damages attributable to deepfakes have risen to a severity that requires enforcement well beyond civil or administrative remedies.
Social Media Compliance under India AI Regulation
India’s courts have begun acting on AI abuses. The Delhi High Court recently issued an order restraining an AI application from replicating the likenesses of actors Karan Johar and Aishwarya Rai Bachchan. In response, Google and YouTube plan to deploy detection methods for AI-generated content. Concerns have also been raised about Google’s new Gemini 2.5 Flash model, which can easily create highly realistic images.
Meanwhile, the parliamentary standing committee on home affairs has suggested a regulation that would place watermarks on all digital media, along with certification standards for determining the origin of content. The committee also recommended that CERT-In issue alerts related to deepfake detection and maintain standard monitoring reports.
Conclusion
India’s AI regulation marks a meaningful step toward responsible digital governance. Mandatory AI content labelling, watermarking, and strengthened platform accountability aim to address the growing harms of deepfakes and other AI-generated content.
The draft amendments seek to balance authenticity, transparency, and freedom of speech under the banner of ‘responsible digital governance,’ with the goal of better protecting social media users.
India’s proactive approach offers a global model for governing AI-generated content and protecting citizens from its abuse.