Decoding the proposed IT Amendment Rules, 2025

The newly proposed amendment converts intermediaries from neutral conduits into proactive arbiters of authenticity and departs from the constitutional logic of Shreya Singhal (2015).

Sumukhi Subramanian is a student at the National Law School of India University, Bengaluru, with a particular interest in the intersection of law and technology.

ON OCTOBER 22, THE MINISTRY OF ELECTRONICS AND INFORMATION TECHNOLOGY (‘MeitY’), through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025 (‘Draft Amendment’), initiated an intervention in the governance of online content, ostensibly aiming to mitigate the societal harms of “deepfakes” and other forms of “synthetically generated information”. MeitY has invited comments and suggestions on the Draft Amendment, and the consultation window remains open at the time of writing.

The 2025 Amendments do not merely add a new layer of due diligence for online platforms; they fundamentally alter the legal architecture of intermediary liability that has underpinned the Information Technology Act, 2000 (‘IT Act’) so far. By imposing proactive and automated verification of user-generated content, the Draft Amendment effectively skirts the rationale behind the decision of the Supreme Court in Shreya Singhal v. Union of India (2015), transforming Significant Social Media Intermediaries (‘SSMIs’) from passive conduits into active enforcers and arbiters of authenticity. This imposes a burden that arguably exceeds the mandate of Section 79 of the Act.

In the first part, I examine the Draft Amendment, outlining the obligations it imposes. In the second part, I analyse its doctrinal conflict with the intermediary liability and safe harbour regime under Section 79 of the IT Act. Finally, in the last part, I outline how the European Union’s (‘EU’) framework showcases a potential alternative that sits better with the safe harbour regime, and emphasise the underlying principles that distinguish it from India’s proposed intermediary-centric model.

The 2025 Amendments do not merely add a new layer of due diligence for online platforms; they fundamentally alter the legal architecture of intermediary liability.

The new duty to “verify”

The Draft Amendment seeks to introduce a host of new obligations for intermediaries, centered around the regulation of “synthetically generated information” – information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.  

The Draft Amendment appears to create a bifurcated compliance regime which imposes distinct but interconnected obligations on two classes of intermediaries: (a) those that enable the creation of synthetic content, and (b) those that enable its publication. The focus of this article is on the latter, particularly on the onerous obligations imposed on SSMIs under the proposed Rule 4(1A), which mandates that: 

  1. SSMIs must require users to declare whether their uploaded content is synthetically generated. 

  2. SSMIs must deploy reasonable and appropriate technical measures, including automated tools or other suitable mechanisms, to verify the accuracy of such declarations. 

  3. Where verification confirms that the content is synthetic, the SSMI must ensure it is prominently labelled. 

Therefore, the mandate appears to be explicit and unambiguous in requiring automated content scanning that is not merely reactive but proactive. There are, however, difficulties inherent in this approach.

Safe harbour in peril: The Draft Amendment and Section 79

Section 79 of the IT Act establishes the principles governing intermediary liability. While Section 79(1) establishes the broad principle of immunity, the specific qualifications to be met in order to avail this “safe harbour” are laid out in Section 79(2). 

What is particularly relevant is Section 79(2)(c) which mandates that the intermediary “observes due diligence while discharging his duties under this Act and also observes such other guidelines as the Central Government may prescribe”. Furthermore, the conditions under which safe harbour status is forfeited are specified under Section 79(3). Critically, Section 79(3)(b) states that the immunity shall not apply if, “upon receiving actual knowledge, or on being notified by the appropriate Government or its agency that any information, data or communication link... is being used to commit the unlawful act, the intermediary fails to expeditiously remove or disable access to that material”. 

It is the interpretation of the phrase “actual knowledge” that became a central issue before the Supreme Court in Shreya Singhal, wherein the Court read the phrase narrowly, holding that an intermediary is obligated to act only upon receiving a specific takedown order from a court of competent jurisdiction or a notification from an appropriate government agency (and not merely, say, a private complaint).

I contend that this was a deliberate constitutional choice establishing a clear, bright-line rule: intermediaries are not required to undertake – and, indeed, are implicitly prohibited from undertaking – general or proactive monitoring of the content on their platforms to determine its legality. In fact, the Shreya Singhal Court’s reasoning for striking down Section 66A of the Act was rooted in the vagueness of the terms used by the provision and the resultant likelihood that legitimate speech would be excessively censored, thereby violating Article 19(1)(a) of the Constitution. Therefore, had the Court permitted a broad interpretation of “actual knowledge”, the same structural flaw would have been perpetuated, and intermediaries would be very likely to err on the side of caution and over-censorship.

With regard to the proposed Draft Amendment, I argue that the duty to “verify” user declarations in Rule 4(1A) is a manoeuvre that attempts to circumvent the rationale behind the “actual knowledge” standard in Shreya Singhal, replacing it with what is, in practice, a far more demanding standard of constructive knowledge.

Quite simply, by mandating that SSMIs deploy verification tools, the law presumes they have the means of knowledge. Consequently, if an unlabelled deepfake is found on a platform, the law will impute knowledge to the intermediary. From the perspective of the regulator, there is an argument to be made that the mandatory “reasonable” technical measures should have detected the synthetic nature of the content, and its failure to do so constitutes a failure of due diligence. 

Quite simply, by mandating that SSMIs deploy verification tools, the law presumes they have the means of knowledge.

This is particularly problematic because, from a technical standpoint, reliably detecting synthetic or AI-generated content remains an open challenge, and existing tools report accuracies well below 100 percent. When applied at scale, accuracy is likely to drop further owing to varied content quality, adversarial techniques and evolving AI models, particularly as adversaries continually generate new deepfakes designed to evade detectors.

This argument is bolstered by the Explanation to Rule 4(1A), which explicitly states that the intermediary’s responsibility “shall extend to taking reasonable and proportionate technical measures to verify the correctness of user declarations”. This language, read with Section 79(2)(c), makes the successful operation of these verification systems appear to be a non-negotiable precondition for availing safe harbour. Therefore, the liability of the intermediary is no longer triggered by the failure to act on a specific, lawful order, but by the failure of the intermediary’s own technology to meet an undefined and, arguably, technically challenging standard of accuracy. 

In sum, mandating that platforms “verify” user claims effectively requires constant scanning of user uploads with still-imperfect tools. False negatives (missed detections) could render platforms non-compliant and jeopardise their safe harbour status, whereas false positives could result in the over-censorship of legitimate users.
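To give a sense of the scale involved, consider a rough back-of-the-envelope sketch. Every figure in it – the upload volume, the share of synthetic content, and the detector’s error rates – is a hypothetical assumption chosen purely for illustration, and is not drawn from the Draft Amendment, any platform, or any particular detection tool.

```python
# A minimal, purely illustrative sketch of detector error at platform scale.
# All figures below are hypothetical assumptions chosen for illustration;
# none comes from the Draft Amendment or any real platform or tool.

daily_uploads = 100_000_000      # assumed uploads screened per day by a large SSMI
synthetic_share = 0.01           # assumed fraction of uploads that are synthetic
detection_rate = 0.96            # assumed share of synthetic items the tool catches
false_positive_rate = 0.02       # assumed share of genuine items wrongly flagged

synthetic_items = daily_uploads * synthetic_share
genuine_items = daily_uploads - synthetic_items

missed_deepfakes = synthetic_items * (1 - detection_rate)   # false negatives
wrongly_flagged = genuine_items * false_positive_rate       # false positives

print(f"Synthetic items missed per day:    {missed_deepfakes:,.0f}")   # ~40,000
print(f"Genuine items wrongly flagged/day: {wrongly_flagged:,.0f}")    # ~1,980,000
```

On these assumed numbers, even a detector that sounds accurate on paper would simultaneously miss tens of thousands of deepfakes each day (each a potential failure of “due diligence”) and wrongly flag close to two million genuine posts, which is precisely the dual exposure described above.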

Granted, there has been a legislative attempt to frame the amendments as balanced by inserting a new proviso into Rule 3(1)(b). This proviso clarifies that the “removal or disabling of access to any information, including synthetically generated information... shall not amount to a violation of the conditions of clauses (a) or (b) of sub-section (2) of section 79 of the Act”. This is presented as a shield to reassure intermediaries that they will not lose their safe harbour for being too aggressive in their content moderation.

However, I argue that this proviso is a red herring that obscures the primary legal issue created by the new rules. The existential threat to intermediaries under this regime is not liability for over-removal of content, but the near-automatic loss of safe harbour for under-detection. The rules create a powerful, practically one-way incentive structure: content can be taken down without losing safe harbour, but failing to detect and label synthetic content correctly invites what amounts to strict liability. When faced with this asymmetry, it is reasonable to expect that any rational intermediary will configure its automated systems to be maximally risk-averse. This means lowering the threshold for flagging content, which will inevitably lead to aggressive and pre-emptive censorship of legitimate user expression to minimise the regulatory and financial risks of losing safe harbour protection – a “chilling effect” on free speech (vide Shreya Singhal).

Furthermore, this approach represents the culmination of a regulatory trend in which the due diligence clause of Section 79(2)(c) is weaponised to enforce content regulation by proxy. Section 79 was conceived to protect neutral conduits enabling speech. Arguably, the 2021 IT Rules began the process of expanding “due diligence” from purely procedural duties (like appointing a grievance officer) to substantive content-based obligations (making “reasonable efforts” to prevent users from hosting prohibited content). The 2025 Amendments take this trend to its logical and extreme conclusion, as ‘due diligence’ appears to be redefined to mean the successful deployment and operation of a large-scale, proactive and automated content verification architecture.

Therefore, contrary to the intent behind the narrow judicial interpretation outlined in Shreya Singhal, the safe harbour provision has now been fundamentally inverted in purpose. Once a shield that protects intermediaries, it appears to have been transformed into a sword for the government to control platform behaviour and enforce a regulatory agenda.

An alternative: The EU AI Act’s approach

Nevertheless, the need for regulation in this sector cannot be denied. The rapid proliferation of generative AI and synthetic media undoubtedly warrants a coherent legal response. The question, therefore, is not whether to regulate, but how. Do models of regulation exist that acknowledge the practical and technical limitations of imposing authenticity-verification duties on intermediaries, while remaining faithful to their core function as neutral conduits of information? 

Here, I turn to the European Union’s Artificial Intelligence Act (‘AI Act’), adopted in 2024, which I believe offers a more calibrated approach to the governance of deepfakes and synthetic media. Specifically, Article 50 of the AI Act imposes a rigorous transparency obligation on users (“deployers”) of AI systems that “generate or manipulate image, audio, or video content constituting a deepfake”, requiring that they clearly disclose the artificial nature of such content. Notably, the EU framework provides explicit exemptions for artistic, satirical, fictional, and other analogous uses, and provides for less intrusive disclosure requirements in such situations. This is in line with the overarching ‘risk-based’ regulatory impulse pervading the AI Act.

Notably, the EU framework provides explicit exemptions for artistic, satirical, fictional, and other analogous uses, and provides for less intrusive disclosure requirements in such situations.

Crucially, this obligation is framed as a duty of disclosure on the user, not a duty of verification on the intermediary. The law does not mandate proactive screening or automated detection by hosting platforms, and it places the onus squarely on the deployer of the AI system – that is, the individual or entity disseminating the content – rather than on the intermediary that merely hosts it.

Therefore, I argue that the AI Act maintains the foundational principle of safe harbour for intermediaries established under the Digital Services Act, ensuring that platforms are not transformed into arbiters of authenticity. In contrast, the Indian Draft Amendment adopts a far more prescriptive and intermediary-centric model, imposing mandatory verification and labelling duties on platforms themselves. 

Broadly, the divergence between the two frameworks reflects fundamentally different regulatory philosophies. The AI Act preserves proportionality and practicality by targeting specific actors and calibrating obligations to the nature and risk of harm, whereas the Draft Amendment conflates transparency with control, imposing on intermediaries a blanket duty of continuous and pre-emptive monitoring.

It is notable that the IT Act already embeds a functional mechanism for reactive regulation through its notice-and-takedown framework under Section 79(3)(b) and Rule 3(1)(d) of the 2021 IT Rules. This structure could easily accommodate synthetic media harms without dismantling the safe-harbour architecture. For instance, MeitY could require intermediaries to act upon specific, verifiable notices relating to deepfake content – mirroring the existing unlawful content process – while encouraging voluntary disclosure standards for AI-generated media. This approach could be maintained at least until detection technologies mature and achieve greater reliability. Of course, to address overtly harmful synthetic content (such as pornographic or defamatory deepfakes), the law can and should turn to targeted criminal and civil remedies specifically designed to curb such harms, rather than expanding intermediary liability through continuous monitoring obligations.

Concluding remarks

In conclusion, I have sought to demonstrate that while the Draft Amendment is motivated by a legitimate concern over the harms of deepfakes and synthetically generated information, its approach unsettles the foundational balance of India’s intermediary liability regime. It converts intermediaries from neutral conduits into proactive arbiters of authenticity, departs from the constitutional logic of Shreya Singhal v. Union of India, and stretches Section 79 of the IT Act beyond its originally intended limits. As a result, it imposes a practically and technically onerous burden on intermediaries, risks chilling legitimate expression, and is likely to foster a regime of over-compliance and over-censorship.

Drawing from the EU’s approach, a more proportionate response would be to strengthen the existing notice-and-takedown framework to address synthetic-media harms, promote voluntary disclosure and provenance standards, and rely on targeted criminal or civil remedies for overtly harmful content, at least until detection technology matures. The regulation of deepfakes must certainly evolve, but it must do so in a manner that accurately reflects the role of intermediaries and preserves the safe harbour regime.
