The changing face of intermediary liability in India

INDIA’S regulatory framework around internet intermediaries – understood in this context as social media and messaging platforms like Facebook, Twitter and WhatsApp – is about to get a major makeover. A set of draft amendments was published by the Ministry of Electronics and IT in December 2018, modifying key provisions of the Information Technology (Intermediaries Guidelines) Rules, 2011.

The amended Rules are expected to be notified in January 2020 as per the Ministry’s affidavit in a recent Supreme Court hearing.

Regulating intermediaries

The primary objective of the draft amendments is to make intermediaries more accountable for the content hosted on their platforms.

Most notably, the amendments would require intermediaries to identify the originator of any given piece of content when asked to do so by government agencies, and to proactively identify and remove unlawful content from their platforms using automated tools or other appropriate mechanisms.

It is not yet known how the final amendments will differ from the December 2018 draft, but the Ministry has repeatedly made it clear that it views intermediary platforms as potential vehicles of social disruption, and that the solution lies in holding intermediaries to a higher standard of accountability for user-generated content.

This is a significant departure from existing law on the matter – the safe-harbour regime under Section 79 of the Information Technology Act, 2000 – which grants intermediaries immunity from liability for user-generated content so long as they remove unlawful content when asked to do so by a court order or government directive.

Are intermediaries responsible for misinformation?

The argument that intermediary platforms can be powerful tools in the hands of disruptors is admittedly not without merit. For instance, rumours circulated on WhatsApp have played a major role in instigating mob lynchings across India. Social media is being used to amplify disinformation campaigns targeting democratic processes, and organized online “trolls” make strategic interventions to derail public discourse.

Intermediary platforms are also vulnerable to abuse in far more sinister ways, for example by terrorist groups that use privacy-focused protocols to organize their activities under the radar. Add to this the fact that many intermediaries now make internal determinations about the permissibility of user-generated content, rather than merely providing platforms with no editorial control, and there is a strong case to be made for demanding greater accountability from them.

However, it is also important to ensure that steps taken to enhance accountability do not end up curtailing fundamental rights and freedoms.

Implications of draft amendments

Take the first of the two changes introduced by the draft amendments, i.e. requiring intermediaries to identify the originators of content on request. While the Facebook account or Twitter handle that published a given piece of content might be identified rather easily, this is much more challenging on messaging platforms like WhatsApp, where end-to-end encryption prevents anyone other than the sender and receiver from seeing the contents of a message.

To trace the originators of content on messaging platforms, platform providers would have to build encryption backdoors or do away with end-to-end encryption altogether, both of which would severely weaken user privacy and security, leaving users more vulnerable to cyber-attacks.
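To make the difficulty concrete, here is a minimal sketch of how end-to-end encryption works, written in Python using the PyNaCl library (the library choice and all names below are illustrative assumptions, not a description of WhatsApp’s actual protocol). The point is that the platform only ever relays an opaque ciphertext, so it has nothing legible from which to trace an originator unless it holds a private key it was never meant to have.

# Illustrative sketch of end-to-end encryption (not WhatsApp's actual protocol).
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message for Bob using her private key and Bob's public key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"Meet at noon.")

# This opaque blob is all an intermediary platform ever sees or relays.
relayed_by_platform = bytes(ciphertext)

# Only Bob, holding his own private key, can recover the plaintext.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(relayed_by_platform) == b"Meet at noon."

Identifying the originator of a message in such a scheme would require either giving the platform access to private keys (in effect, a backdoor) or weakening the encryption itself, which is precisely the trade-off critics of the traceability requirement point to.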

The second change mentioned above, i.e. requiring intermediaries to proactively remove unlawful content, is even more problematic. Introducing such a requirement would effectively mean that intermediaries failing to comply could be held liable, along with the responsible users, for unlawful content – a dangerous proposition that imposes an unfair burden on intermediaries.

For starters, even setting aside the fact that messaging platforms would once again need to compromise their encryption protocols to access the contents of messages, no mechanism currently exists, automated or otherwise, that can proactively identify and remove every instance of unlawful content from the terabytes uploaded to intermediary platforms each day. More importantly, the contours of lawfulness can often be difficult to determine, and drawing them is certainly not a task for intermediaries to perform under the threat of legal sanction.

In Shreya Singhal v. Union of India (2015), the Supreme Court ruled that intermediaries may be asked to remove content only through a court order or government directive, precisely so that they are not pushed into over-complying with takedown requests and chilling the fundamental right to free speech. In addition, a legal obligation to continuously monitor users is inconsistent with the Supreme Court’s decision in K.S. Puttaswamy (Retd.) v. Union of India and Ors, which held privacy to be a fundamental right that can be restricted only narrowly.

While the Ministry’s call for greater accountability among intermediaries is not entirely misplaced, its proposed means of achieving this end is far from ideal. Rather than broad regulatory changes that could end up doing more harm than good, what we need is a nuanced approach to intermediary regulation, one that brings together the multi-stakeholder community to conceptualize meaningful solutions and balances national interests with those of the individual.

The final amendments due in January 2020 will hopefully feature more carefully considered changes that address stakeholder concerns and strike that delicate balance.