Platforms in the dock: The changing rules of internet liability

Recently, in the Isha Foundation case, the Delhi High Court explicitly tagged Google, X Corp and Meta as respondents, bringing focus to a larger debate: To what extent should platforms be pulled in as co-defendants for simply providing the venue?
Harsh Gour

Harsh Gour is a Columnist at The Leaflet and a law student at NALSAR University of Law, Hyderabad. His research focuses on technology law, constitutional law, and criminal law. He is also the author of the poetry book ज़िंदगी के प्रेम को समर्पित (Dedicated to the Love for Life).


A TREND IN LAWSUITS IS UNFOLDING: suing not just the author of allegedly infringing content, but the platform itself. Consider an ongoing matter, the Isha Foundation case. After a YouTube video allegedly defaming the spiritual leader Jaggi Vasudev was uploaded, the Foundation sued not only the video’s author but explicitly named Google LLC & Ors. as respondents. The Delhi High Court directed Google LLC, X Corp, and Meta Platforms to pull down the video. The question is whether tech giants are being pulled into the arena as co-defendants for third-party content.

Under India’s legal framework, Section 79 of the Information Technology Act, 2000 shields intermediaries only if they act as passive conduits and follow due-diligence rules. Courts have interpreted “actual knowledge” to mean a formal court or government order. Indian platforms must comply with takedown demands from authorities or face liability. In practice, an intermediary “has no choice but to comply with the Centre’s directions to continue operations,” and if it does so it “generally avoid[s] further legal consequences”. Intermediaries cannot pick and choose which orders to follow: if a court orders content to be removed, they must remove it promptly to keep the shield of Section 79.

Listing intermediaries such as Google or Facebook as co-defendants follows a broader pattern of suing platforms and users together over allegedly defamatory or false content. Another example is the Vishakha Industries v. Google litigation: Vishakha sued Google in 2009 for defamation over content posted in a Google group. The High Court, in a ruling later upheld by the Supreme Court, held that the pre-2009 Section 79 did not shield Google at all, because at that time the safe harbor applied only to IT Act violations, not civil defamation (after the 2009 amendment, the law does cover all third-party content, but immunity is lost only once a court order directs removal and the platform fails to comply).


In short, Indian law encourages platforms to comply with court orders under threat of liability, and in practice many plaintiffs will list a platform as co-defendant just to ensure the order can bind it.

A complementary example is a lawsuit filed in the Eastern District of Texas by two families against the creators of the AI chatbot Character AI, along with Google and its parent company Alphabet Inc., claiming that the chatbot created emotionally manipulative relationships that left their children in distress and drove them further apart from their parents.

Should platforms remain “mere conduits” or be treated as active participants when their services enable harm? 

U.S. courts are increasingly probing the limits of platform immunity

U.S. law has long favored the “mere conduit” view, but new cases are probing its limits. Under Section 230 of the 1996 Communications Decency Act, online services are broadly not treated as the “publisher” of what users post. Courts have distilled Section 230(c)(1) into a simple rule: if a party is an “interactive computer service” (like Google or Meta) and a plaintiff’s claim treats it as a publisher of another’s content, the claim is barred. 

In practice, this means platforms enjoy a “formidable” immunity from state-law claims - defamation, negligence, privacy torts, and the like - arising out of user-posted content. Even when a platform is put on notice of illegal speech, Section 230 generally protects it (the Fourth Circuit’s Zeran v. AOL (1997) decision famously held that knowing about defamatory messages doesn’t negate immunity). The intent was to let the internet flourish without platforms getting sued for every bad thing users say.


Section 230 does have exceptions. Notably, it explicitly excludes intellectual property claims: copyright and trademark cases cannot hide behind 230. Instead, Congress provided a separate mechanism for copyright via the Digital Millennium Copyright Act’s safe harbors. Under Section 512 of the Copyright Act (DMCA), an online service provider can avoid monetary liability for copyright infringement by its users so long as it implements a takedown regime and promptly removes infringing material when notified. In other words, the U.S. law says: a social network won’t be held liable for a user’s libel (thanks to 230) and a video site won’t be liable for a user’s pirated movie (thanks to the DMCA) - provided that the platform follows the statutory rules for content moderation. 

Congress has also carved out other narrow limits. For example, the FOSTA/SESTA amendments strip 230 immunity for online platforms that facilitate sex trafficking. But outside these exceptions, platforms have generally been treated as neutral “interactive computer services” rather than speakers or publishers of user content.

However, that protection has begun to crack. In recent years, plaintiffs have gotten creative. They sue under laws not covered by 230 (like RICO, a federal law designed to combat organised crime, or housing statutes), or reframe a claim under a product-defect or antitrust theory. Cases now often focus on what the platform did – its algorithms, design, business model – rather than on a specific piece of user content. For instance, courts are parsing whether a recommendation algorithm means the platform “contributed materially” to the wrongdoing, which could bypass 230 immunity.

Major litigation has arisen, too: families of terrorism victims sued Google after YouTube’s algorithm allegedly promoted ISIS videos, arguing Google had become more than a passive carrier. In Gonzalez v. Google (2021), the Ninth Circuit held that Section 230 barred the suit, but the Supreme Court sidestepped the immunity question, concluding in the companion case Twitter v. Taamneh (2023) that the underlying terrorism claims failed, and leaving Section 230 intact for now. Yet the very fact that the Supreme Court entertained the case illustrates growing interest in rethinking platform immunity. In short, U.S. law still provides sweeping 230 and DMCA shields, but courts seem to be probing their boundaries more aggressively.


In the EU, if a platform helped create or organise content, it loses the safe harbour

In the European Union, the old E-Commerce Directive (2000) exempted online hosts from liability for user content, provided they had no knowledge of illegality and removed illegal content when put on notice. The new Digital Services Act (DSA), fully applicable since 2024, largely preserves this approach. The DSA emphasises that platforms need not “generally monitor” all content – they should merely respond to specific notices – and it clarifies that even if a service suspects some illegal use, general awareness alone does not impose liability. Under the DSA, only when an intermediary controls or curates the content does it have “sufficient knowledge and control…to be held liable”.

In other words, the EU rule is: if you truly don’t know about the illegal content until told, you stay immune – but if the platform actively helped create or organise the content (or offered content under its “authority”), it loses the safe harbor. The DSA also adds new obligations for very large platforms (such as risk assessments and independent audits), but it does not make them automatic co-defendants whenever user content causes harm.

What about the platforms’ intent and knowledge when harm occurs? 

Traditionally, U.S. law did not condition 230 immunity on what the platform knew. In fact, Zeran v. AOL (1997) held that even after notice of defamation, AOL was still immune. Section 230(c)(2) does allow platforms to remove content in “good faith” without losing immunity, but it imposes no requirement that they remove anything.

Indian law is stricter: once an order is issued, refusal is fatal to immunity. In Europe, again, liability hinges on knowledge plus failure to act. All these rules mean that platforms effectively have to play “whack-a-mole” with illegal content or else step into the arena.

A key distinction is whether a platform remains a passive host or becomes an “author”. Courts have long said immunity ends if the site materially contributes to the illegality. A classic example is the Roommates.com case (2008), where the Ninth Circuit held that an online housing site lost Section 230 protection for the portion of its site where it had itself designed the questions that produced discriminatory roommate ads. In the words of the court, Roommates did not “merely provide a framework” – it actively “developed the discriminatory [content]” – so it could not claim to be a neutral tool.

The same logic applies to modern platforms: if a social network’s algorithm curates hate speech, or an AI chatbot generates defamation, it is argued that the platform is acting more like a publisher. As argued recently in Business Law Today, generative AI search tools (like Google’s AI Overviews or ChatGPT) are “increasingly taking on the role of a content creator rather than a neutral platform”. If the AI itself is authoring the output, then the service is no longer just hosting third-party content.

How generative AI is upsetting the old safe-harbor calculus

In fact, generative AI is already upsetting the old safe-harbor calculus. By its plain terms, Section 230 only applies to content “provided by another information content provider” – i.e. something put into the system by a user. But when ChatGPT spins out a novel story or a search engine composes an answer, the words originate with the platform’s code. Courts are beginning to grapple with this. OpenAI itself implicitly acknowledged the issue by not invoking Section 230 in a recent Georgia defamation case (2024) involving a ChatGPT “hallucination”. 


In that case, a radio host sued because ChatGPT falsely accused him of embezzlement. Instead of claiming immunity, OpenAI defended on traditional libel grounds (arguing the statement wasn’t “actual facts” and lacked malice). A judge ultimately granted summary judgment to OpenAI, reasoning that given ChatGPT’s repeated disclaimers about accuracy, “a reasonable reader…could not have concluded that the challenged output communicated ‘actual facts’”. Behind the scenes, however, is the idea that Section 230 couldn’t have saved OpenAI here – the defamatory claim was entirely generated by ChatGPT, not typed by some user. In effect, OpenAI was closer to a publisher than a passive host.

More AI-focused suits will force courts to merge two bodies of law: product liability on the one hand and defamation or privacy law on the other. Section 230 wasn’t designed with AI in mind and it may not fit: if a platform “effectively manufactures the content output via its algorithms,” why should it be immune? 

These arguments have analogues. For example, in the Ninth Circuit’s Grindr case (2025), the plaintiff alleged that Grindr’s “safe environment” promise was breached, but the court held that the broad, aspirational terms were too general to overcome Section 230. The judge noted that only a specific, relied-upon promise (as in Barnes v. Yahoo! (2009), where Yahoo failed to remove nude photos despite a promise to do so) might create liability. Otherwise, “failure to police content” is treated as a publishing choice protected by 230. But if the content is not user-posted at all but AI-created, that logic breaks down.

All of this raises sharp policy questions. How far should we go in holding platforms accountable? 

Media organisations, for one, argue that billions of lines of news content have been scraped for AI training on the theory of open-web “fair use,” effectively turning journalists’ work into “freeware”. Publishers worry that if AI tools are allowed to use their work without compensation, it undermines investment in journalism. On the other hand, tech companies warn that weakening 230 or charging for scraping could chill innovation. Microsoft’s AI chief Mustafa Suleyman recently invoked a decades-old “social contract,” claiming that content on the open web has long been treated as fair game for search and AI.


Legislatures are tentatively exploring answers. In the U.S., bills have been introduced, so far unsuccessfully, to carve algorithmically generated content out of 230. In March 2023, Senator Marco Rubio proposed classifying platforms that amplify information via algorithms as “content providers,” thus stripping immunity for AI-synthesized speech. In June 2023, Senator Josh Hawley proposed waiving 230 immunity altogether for generative AI outputs. Both proposals died in committee, however, leaving it to the courts to weigh these issues.

Abroad, regulators have moved more quickly: the EU’s DSA imposes heavy fines on big platforms that fail to remove illegal content, and countries like the UK and Australia are imposing duty-of-care rules on social networks. India too is tightening its rules: the 2021 guidelines for ‘social media intermediaries’ impose strict takedown timelines and compliance requirements, and failure to meet them can mean loss of Section 79’s protection.

As these debates play out, the key questions remain: To what extent should a platform be treated as a co-defendant simply because it provides the venue? Suppose a user posts an illegal ad or a defamatory video - does that make the platform an innocent host, or a culpable participant? Are algorithmic recommendations akin to issuing the content oneself? The answers will shape the future of online speech. For now, the trend is clear: platforms can no longer assume that they will always be treated like neutral pipes.
