Social media, content moderation and free speech: A tussle

In the rush to regulate social media while assuming the platforms are working against the Constitution, the Proposed Amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 do not assess the already existing mechanisms of these platforms that may be reconciled with the Rules and codified accordingly.

THERE has been much clamour around the announcement of American business magnate and investor Elon Musk’s potential acquisition of the microblogging and social networking platform Twitter. Musk, a self-proclaimed “free speech absolutist”, has previously defined his idea of free speech to mean “that which matches the law. I am against censorship that goes far beyond the law. If people want less free speech, they will ask [the] government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people”.

Even as his idea of free speech remains elusive, it seems that Musk is willing to abide by regulatory requirements.

Arguably, Musk’s interest in taking over Twitter is not really about his concern for free speech as much as it is about exerting control over his “favourite playground”. His proposals indicate that he wants to be “very hands on” in the way Twitter is run. Musk may face more hurdles than he anticipates in keeping his absolutist stance, particularly in Asia, which is not only a huge market for Twitter but also a region whose approach to free speech diverges from his own.

Twitter has not yet been able to strike a balance between freedom of speech, curbing online abuse and hate speech, content moderation, and jurisdictional legal compliance. In India, the Union Government asked Twitter to take down several posts and accounts (such as those relating to the farmers’ protests); in the U.S., it suo motu banned then-serving President Donald J. Trump from the platform, raising several questions about the power social media companies wield. The guiding principle for such corporate decisions is not a concern for democracy but the “tide of public opinion”, relied on to “cynically manoeuvre whenever the bottom line is at stake”.

With increasing online misinformation, fake news and hate speech, content moderation has flummoxed social media intermediaries and regulators alike. Musk may be conflating content moderation with (presumably invalid) restrictions on free speech. There is a concern that his takeover of Twitter may lead to a reversal of the platform’s moderation standards.

The contours and implications of free speech, which have so far remained subject to regulation within the defined parameters of law (such as harm to public order), are expanding and evolving faster than the law can catch up. Bringing in accountability for these platforms is difficult, largely because of the current legal limitations in constraining them.

The Indian position

On June 6, the Union Government released a set of draft amendments (‘Proposed Amendments’) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘IT Rules’), whose stated purpose is to ensure that the community standards to which digital intermediaries hold their users answerable comply with the law and constitutional principles. The press note prefixed to the Proposed Amendments clarified that the amendments had become necessary because several intermediaries were seen to be acting in violation of the rights of Indian citizens. One of the more controversial proposals is the constitution of a Grievance Appellate Committee (‘GAC’), whose mandate will be to deal with “problematic content” in an expeditious manner. Users dissatisfied with the handling of their complaint will have the liberty to appeal to the GAC, which must resolve the appeal within 30 days.

In doing so, the Proposed Amendments make the safe harbour protection under Section 79 of the Information Technology Act, 2000 contingent on the determination of a GAC that is appointed by the government and reports to a Union Ministry, which raises doubts about its independence. Such an oversight mechanism could curtail citizens’ right to free speech and be used as a tool for government censorship, especially in view of recent events in which intermediaries were pressured by the government to take down content even though it did not violate their community guidelines. It has been suggested that a more suitable mechanism would be an independent self-regulatory body that acts as an appellate forum for all content moderation decisions.

That said, a long-term upside of the GAC may lie in bringing algorithmic transparency. Social media platforms have previously been accused of bias when they are unable to explain the decisions of their moderation algorithms. Decisions by the GAC could help platforms frame policies in conformity with local law by widening their “reference library” for understanding what content can be considered harmful. However, given the lack of transparency mechanisms for GAC decisions under the Proposed Amendments, the creation of erroneous or arbitrary precedents may curtail freedom of expression.

Given the amount of disinformation and hate speech disseminated through social media, there is a need to establish legal mechanisms that mitigate risks and build intermediary accountability, while balancing the rights of individuals against valid restrictions on their speech. This is a difficult line to walk: the pervasiveness of social media, the monopoly of a few platforms and the complexity of content moderation decision-making make the issue hard to resolve. While concerns about having the government as the final arbiter are valid, leaving the decision entirely to private enterprise is also undesirable, which is why there is a rush to regulate social media across the world.

Regulatory efforts abroad

In the U.S., Section 230 of the Communications Decency Act has been under the scanner since the 2016 and 2020 presidential elections. Section 230 contains broad intermediary liability protections aimed at protecting and promoting free expression online. Efforts are being made to create exceptions to Section 230 to tackle the spread of misinformation and hate speech, and to increase platform accountability. Two approaches have been proposed so far: 1) requiring platforms to explain their content moderation approaches and regularly report the content removed or “deprioritised” (the Platform Accountability and Consumer Transparency Act), and 2) mandating that social media companies let independent researchers view more data, including data on user targeting (the Platform Accountability and Transparency Act).

In 2020, the U.S. Department of Justice released a proposal to change Section 230 that would dramatically reduce the law’s scope. Under this proposal, intermediary platforms would not be immune from lawsuits unless their decisions were made “in accordance with plain and particular terms of service and accompanied by a reasonable explanation”.

The European Union has proposed the Digital Services Act (‘DSA’) in its latest effort to regulate the web. The text is yet to be made public, but it reportedly provides, inter alia, for transparency in the operation of algorithms, imposes new obligations around content moderation, and establishes a set of due diligence obligations such as publishing annual content moderation reports. It requires platforms to justify the removal of content, and to provide user appeal mechanisms and internal complaint-handling systems. It imposes a heightened compliance burden not only on smaller platforms, which risk extinction under such regulation, but also on the big platforms. With individual countries taking varied approaches to enforcing its standards, the DSA may lead to ineffective over-regulation.

In the United Kingdom, the Online Safety Bill, somewhat similar to the DSA, includes provisions for risk assessments and a duty of care owed by intermediaries to their users. Risk assessments will require intermediary platforms to “show their homework” to regulators in particular instances. These assessments must be carried out both internally and by independent auditors, and be overseen by regulators, as a mechanism to ensure transparency and accountability.

Facebook has proposed an approach that treats platforms as falling somewhere between telecommunications companies and newspapers, and rests on the following considerations: the globalised and constantly changing nature of platforms, their consequent exposure to different laws and cultural values, the varied nature of intermediaries as compared with traditional publishers, and the inevitability of error in moderation decisions. It also warns against creating perverse incentives, such as a requirement to remove posts within 24 hours, which may discourage platforms from looking for older posts and instead focus them on posts within the 24-hour window.

Unconstitutional censorship

In the Indian context, the Proposed Amendments ignore the judicial reasoning of the Madras High Court on Part III of the IT Rules (pertaining to digital news and over-the-top platforms), and instead extend similar provisions to social media intermediaries (Part II of the Rules). The Madras High Court, in its interim order, was of the opinion that such provisions lead to unconstitutional censorship.

In the rush to regulate social media while assuming the platforms are working against the Constitution, the Proposed Amendments do not assess the already existing mechanisms of these platforms that may be reconciled with the Rules and codified accordingly: for instance, Twitter’s decision to mark questionable posts, and Facebook’s decision to warn users that the articles they are sharing are over three months old. Ironically, time and again, social media platforms have stood up against governmental pressure that fettered free speech.

Any reforms to bring in platform accountability and prevent harmful speech would require research identifying what online spaces should look like, who their beneficiaries are, what hinders achieving these ideals, and how a policy change could overcome those hindrances. Given the globalised nature of social media, this is complicated by several variables, such as those identified by Facebook, and by the varied nature and approaches of different platforms.

Experience from across the globe has shown that any law that makes either the platforms or the government the arbiter of free speech ends up inadvertently fettering free speech (Narayan, 2020). As governments proceed to regulate social media with its harms and lapses in mind, they should not ignore the value these platforms hold as modern “public squares”, and should aim at mutual accountability in upholding citizens’ rights.