The three-hour takedown rule signals a major shift in how India balances digital governance with free expression. The amendment arrived quietly, yet it carries huge ramifications: India has cut the compliance deadline for removing flagged content on major digital platforms from 36 hours to 3 hours, with the amended rule taking effect on 20 February. Indian officials have offered no public explanation for such an urgent timeframe.

The new rules apply to the world's largest technology companies, including Meta, YouTube, and X, and extend to AI-generated material. Platforms operating in India already face some of the strictest content restrictions in the world, and critics expect further pressure on these companies to comply quickly and consistently.
The decision places technology companies serving India, a market of roughly one billion users, in a precarious position. It escalates long-standing tension between platforms and regulators, and digital rights organizations fear it will lead to further suppression of digital content. Proponents of the rapid removal of illegal content argue the policy will help maintain public order.
What the Three-Hour Takedown Rule Requires
India's IT regulations from 2021 have been amended so that online platforms and social media sites must remove content flagged in a government notice on national security or public order grounds. The government issues thousands of such notices each year, and technology companies' transparency reports indicate that a single notice can cover many individual takedown requests; Meta recently reported removing more than 28,000 items. Lawyer Akash Karmakar raised concerns about how feasible the new deadline is for companies.
He stated, “Social media companies cannot remove content in 3 hours; there is no time for legal review.” Karmakar also said the government should have consulted the companies for longer before imposing this requirement. Once a platform receives a takedown notice, it must remove the content without challenging the notice's legal validity. Analysts describe this as one of the strictest content-removal regimes in any democracy, and many critics worry the policy will push platforms toward automated removals.
How AI Labelling Rules Reshape Compliance
The amended rules also outlaw unlabelled manipulations of audio and video. These provisions aim to control the spread of deepfakes (footage generated or altered digitally rather than captured on camera) and require companies to label such content in an obvious manner, as well as to embed a permanent, traceable marker identifying it as electronically manipulated. The original proposal would have required the label to cover a set portion of each piece of content; the newest version instead allows the marking to appear anywhere on the content, so long as it is clearly apparent.
Rights advocates warn that reliance on automation increases the risks these proposals carry for companies. “Companies sometimes have difficulty meeting even the 36-hour deadline because of the quality and corrective review required before content is taken down. If automation replaces that review, the result may be over-removal that enables censorship,” Anushka Jain stated.
Prasanto K Roy called the scope of the proposal “perhaps the most extreme of any democracy,” noting that compliance may depend largely on automated processes. As a result, the debate now centres on whether the speed of implementation will come at the cost of individual free expression.