Critics fear the expanded definition of AI content and sharply shortened compliance windows will grant authorities unchecked power over online speech.
India’s digital governance landscape is set to become significantly more restrictive under amendments that demand near-immediate action from social media intermediaries. The new rules, which apply to both standard and AI-generated content, require flagged material to be removed within three hours (180 minutes).
The scope of the regulation has also been widened to include specific definitions for synthetic media. Platforms are now prohibited from removing transparency labels once added to AI-generated content and must employ software to filter out deceptive or dangerous material, such as explosives-related guides or non-consensual deepfakes.
Erosion of Due Process
Legal and technology experts argue that compressing the timeline undermines due process: demanding such rapid action leaves platforms no realistic window in which to assess whether a takedown request is legally sound, so the state effectively bypasses that scrutiny altogether.
With over 28,000 web links already blocked in 2024, the new rules are seen by critics as an extension of the government’s wide-ranging power to curate the online narrative under the guise of national security. The Internet Freedom Foundation has condemned the changes, predicting they will lead to “over-removal” as companies prioritize speed over accuracy to avoid penalties.
SOURCES: Internet Freedom Foundation, Government Transparency Reports, Legal Analysis.
