The Indian government is making tech companies more accountable by introducing laws to remove harmful content within hours…


For the last two decades, global tech companies have had it easy. Social media platforms hosted content, users created it, and governments intervened only when absolutely required. But that era is changing – and it looks like it is India that is heralding the change.

Earlier this year, the Government of India announced stricter IT rules requiring social media companies to remove ‘unlawful content’ within three hours of official notice – down from the previous window of 36 hours. India’s Information Technology Minister Ashwini Vaishnaw addressed tech giants like Meta, YouTube and Netflix and warned them about the repercussions of non-compliance.

Speaking at the India AI Impact Summit, he said: “It’s very important for the multinationals to understand the cultural context of the country in which they are operating.”

This means that tech companies are no longer seen as mere intermediaries. They now have to monitor content more closely, respond rapidly to takedown orders and take responsibility for what is posted on their platforms.

Information Technology Minister Ashwini Vaishnaw (image caption)

The Three-Hour Rule Raises the Stakes

The biggest challenge for the tech companies is the three-hour rule. Under the revised rules, platforms must respond to harmful content immediately after being notified by the authorities. The regulations are mostly focused on AI-generated content and deepfakes.

For platforms that see millions of posts daily, this poses a major problem. Three hours is a very small timeframe in which to review flagged content properly. The rule could also encourage over-censorship, as companies may take down dubious content without verifying it, just to avoid legal repercussions.

For instance, Meta alone restricted more than 28,000 pieces of content in India in the first six months of 2025 following government requests.

The government, however, believes that stringent rules are needed to prevent misinformation and confusion. The concern is especially acute with AI-generated videos, fake audio clips and manipulated content.

India Wants Greater Control Over Platforms

India has been tightening the rules for years now. The Indian IT Rules, introduced in 2021, required companies to appoint compliance officers and respond to government requests. But lately, enforcement has become a lot more aggressive.

According to various reports, the government is considering several amendments to make legal advisories more binding on tech companies. This would severely weaken the “safe harbour” protection that currently shields these companies from liability for user-generated content uploaded to their sites.

There is no way around this for the tech companies. Because India is one of the biggest internet markets in the world, they have little choice but to comply or risk being banned from the country.

The Last Word

India is not the only country making its IT rules stricter. Across the world, governments are moving fast to address misinformation and harmful content. The European Union’s Digital Services Act, for instance, imposes stricter moderation and transparency requirements on major platforms.

But India stands out for the sheer speed of enforcement it demands. The three-hour window is arguably the shortest compliance window any country has imposed.

Long story short, governments no longer see these platforms as mere hosts of content; they want them to act like publishers – responsible not just for the technology, but for the consequences of what spreads online.


Adarsh hates personal bios, Chelsea football club and Oxford commas. When he's not writing, he's busy playing FIFA on his PlayStation.
