Starting today, social media platforms will have just three hours to remove unlawful content or risk losing their legal shield. Is this a masterstroke for online safety or the death knell for digital freedom? asks Satyen K. Bordoloi.


When Rashmika Mandanna’s deepfake video went viral on social media at the end of 2023, the nation went into a tizzy. People spoke out, the authorities moved to arrest the creator, and the public mood turned firmly against such violations. Yet barely two years later, with AI tools having improved dramatically, I see deepfakes of almost every prominent female Indian actor on Instagram, doing all manner of obnoxious things.

That shift is the backdrop to a move that has sent shockwaves through the global technology community: the Indian government has notified a stringent set of amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Effective today, February 20, 2026, the amendments fundamentally alter the compliance landscape for social media intermediaries, targeting the rapid spread of AI-generated misinformation, deepfakes, and other harmful content.

The government’s intention seems noble, and it defends the changes as essential to protect citizens’ safety in the age of generative AI. However, the tech industry, legal experts, and digital rights activists are raising red flags over what they perceive as an impossibly tight deadline and a potential threat to constitutional freedoms.

Content teams now race the clock as takedown orders shrink from 36 hours to just three

What the New Rules Require

At the centre of the new regulations is a major acceleration in the timeframe for content removal. Previously, platforms had 36 hours to act on a government notice to remove unlawful content. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, shrink that window to just three hours. The most egregious content, such as material depicting nudity, sexual acts, or impersonation, including deepfake sexual content, must be taken down within two hours.

The amendments are about more than just speed. They explicitly bring “Synthetically Generated Information” (SGI) within the ambit of the law, defining it as audiovisual content that is artificially generated or manipulated to appear authentic, and they impose specific due diligence requirements on Platform Service Providers (PSPs).

India’s rules drag ‘synthetically generated information’ into the legal spotlight and demand it be clearly labelled

Under the amendments, intermediaries must take “reasonable and appropriate technical measures” to prevent the proliferation of unlawful SGI. Crucially, platforms that allow users to produce or share such content must ensure it is “prominently labelled” and embedded with permanent, tamper-proof metadata or a unique identifier.
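To make the labelling-and-metadata obligation concrete, here is a minimal sketch of what embedding an SGI label and a unique identifier into an image could look like. It is purely illustrative: the field names (sgi_label, sgi_provenance) and the SHA-256 content identifier are my assumptions, not anything the rules prescribe.

```python
# Minimal sketch: embed an SGI label and a content-derived identifier
# into a PNG's metadata. Field names and the hashing scheme are
# illustrative assumptions, not something the 2026 amendments specify.
import hashlib
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, generator: str) -> str:
    """Write a labelled copy of a synthetic image and return its identifier."""
    img = Image.open(src_path)
    # Derive a unique identifier from the pixel data itself.
    content_id = hashlib.sha256(img.tobytes()).hexdigest()
    meta = PngInfo()
    meta.add_text("sgi_label", "synthetically-generated")
    meta.add_text("sgi_provenance", json.dumps({
        "generator": generator,
        "content_id": content_id,
    }))
    img.save(dst_path, pnginfo=meta)  # dst_path should end in .png
    return content_id
```

Metadata of this kind is trivially stripped on re-encoding, which is why provenance efforts such as the C2PA standard pair visible labels with cryptographically signed manifests and invisible watermarks; how “tamper-proof” is to be achieved in practice is something the rules leave to the platforms.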

For Significant Social Media Intermediaries (SSMIs), i.e., platforms with a large user base in India, the list of obligations grows further. They must now obtain user declarations on whether content is AI-derived and deploy technical means to verify the accuracy of those declarations. The government has also clarified that removing or disabling access to information in conformity with these rules does not infringe the “safe harbour” principle that immunises intermediaries from liability for user-generated content.

But this immunity is conditional. The amendments warn that non-compliance could lead to the revocation of safe harbour, leaving platforms legally liable for content subsequently posted by their users.

From a single viral scandal to an endless feed of deepfakes, the shock has turned into a grim routine

The Pros: A Necessary Shield in the Era of Artificial Intelligence

Government officials have portrayed the amendments as a common-sense, essential response to the threats posed by generative AI. Abhishek Singh, Additional Secretary at the Ministry of Electronics and Information Technology, defended the rules to The Print through what he called a “user-harm” prism. Thirty-six hours, he said, is too long to wait in an age when AI-generated content can go viral and do irreparable damage to a person’s reputation within minutes.

The government’s stance has hardened after recent cases, especially the use of the Grok AI chatbot on X to produce non-consensual sexualised images of women and children; X blocked thousands of pieces of content only after the government stepped in. “If somebody is doing something similar to what Grok did, and you take that content off after 36 hours, the damage is done,” Singh told The Print, arguing that delayed removals let the harm set in.

Officials also insist the three-hour limit is technologically feasible, pointing to copyright enforcement, where platforms run automated systems that take down infringing content in near real time. If a system can identify an unauthorised cricket clip within minutes, the argument goes, it can also catch a deepfake. By requiring platforms to respond quickly, the rules aim to stop harmful content from going viral in the first place.
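For a sense of how such copyright-style matching works under the hood, here is a rough sketch using perceptual hashing. The imagehash library, the pHash algorithm, and the distance threshold of 8 are illustrative choices on my part; real content-ID systems are far more sophisticated and multi-modal.

```python
# Rough sketch: flag uploads that are perceptually close to content
# already ordered removed, the way copyright content-ID systems do.
import imagehash
from PIL import Image

# Hypothetical registry of hashes for content already subject to takedown.
known_bad_hashes: list[imagehash.ImageHash] = []

def register_takedown(image_path: str) -> None:
    """Record the perceptual hash of content ordered removed."""
    known_bad_hashes.append(imagehash.phash(Image.open(image_path)))

def matches_known_bad(image_path: str, max_distance: int = 8) -> bool:
    """Return True if an upload is a near-duplicate of removed content."""
    candidate = imagehash.phash(Image.open(image_path))
    # Subtracting two pHashes gives the Hamming distance between their
    # 64-bit fingerprints; a small distance means near-duplicate images.
    return any(candidate - known <= max_distance for known in known_bad_hashes)
```

The catch, as critics note below, is that matching known content is a far easier problem than deciding within three hours whether novel content is unlawful in the first place.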

Women and children find themselves disproportionately targeted by deepfake porn

The Cons: A Recipe for Censorship and Chaos?

Despite the government’s reassurances, the amendments have drawn a wave of concern from industry and civil society. The central complaints are that industry was not consulted and that compliance is impractical. Industry associations such as the Broadband India Forum (BIF) and NASSCOM are drafting submissions to the ministry contending that a three-hour timeline for “multiple internal checks, validation and cross-functional approvals” is simply not workable, and cautioning that the rushed timeline could chill legitimate speech.

Legal professionals echo these sentiments. Rahil Chatterjee of Ikigai Law told the Indian Express that there “is often no clear or immediate test for illegality,” and that law enforcement communications are not always straightforward. Platforms will have to make definitive legal judgments within three hours, which will be “extremely difficult to operationalise”.

Platforms must now watermark and fingerprint AI-made clips, turning every synthetic frame into traceable evidence

Nikhil Pahwa, Founder of MediaNama, pointed out to CNBC TV18 that when penalties are harsh and deadlines unattainably short, platforms are motivated to “err on the side of removal,” even for borderline or contested material, turning them into de facto censors.

Ambiguity also surrounds the definitional scope of the rules. Kahaan Mehta, senior legal reform specialist at the Vidhi Centre for Legal Policy, told The Print that the “bare text of the rules does not explicitly confine the three-hour requirement to deepfake content alone,” meaning it could apply to many kinds of ‘unlawful’ content, potentially extending even to political satire or lawful dissent.

Compounding this, the requirement to label AI content depends on detection tools that experts call “far from reliable,” and bad-faith actors are unlikely to declare their content synthetic.

Officials say they view AI rules through a ‘user-harm’ prism—focused on damage, not abstract speech

The Gathering Storm: Legal Challenges and Trade Tensions

The promulgation of these rules is already sparking legal and political controversy. Stand-up comic Kunal Kamra and senior lawyer Haresh Jagtiani have filed petitions in the Bombay High Court challenging the constitutional validity of the amended Rule 3(1)(d) and the government’s Sahyog Portal. They argue that the new rules permit information to be blocked on “wholly vague grounds,” creating a restriction that is “unconstitutional and unreasonable” and goes well beyond the ambit of Article 19(2) of the Constitution.

They contend that the new framework circumvents the safeguards of Section 69A of the IT Act, which requires a reasoned order and a hearing. By enabling thousands of government officers to issue takedown orders through the Sahyog Portal, the Rules “strike at the heart of democracy,” the petitions state. The pleas are expected to be heard by the Bombay High Court on March 16.

Petitions in the Bombay High Court will decide whether the three-hour rule protects citizens, or violates their rights

At the same time, the rules open a new front in global commerce. The mandate comes just days after India and the United States released a framework for an interim trade agreement that included a shared commitment to tackle “burdensome practices” in digital trade. The rules deal a significant blow to American tech companies, many of which, like Meta, Google, and X, are already entrenched in legal fights with the Indian government.

Many have also pointed out that the prescribed two- to three-hour deadline may be the “fastest prescribed by any government in the world,” prompting fears that it could be treated as a major non-tariff barrier. With the US administration quick to punish foreign laws that affect its own companies, this could be a recipe for conflict.

As the February 20 deadline arrives today, the open question is whether these rules will make the digital ecosystem safer or merely shift the problem from viral harm to systemic over-censorship. The answer may ultimately come not from compliance officers but from the courts.


Satyen is an award-winning scriptwriter and journalist based in Mumbai. He loves to let his pen roam the intersection of artificial intelligence, consciousness, and quantum mechanics. His written words have appeared in many Indian and foreign publications.
