When and how will the law wake up to deepfake technology?

Deepfake technology is here to stay, with all its advantages and disadvantages. How will the legal and policy landscape deal with it?

IN a recent post circulating widely on various social media platforms, popular Indian actress Rashmika Mandanna is seen walking into an elevator sporting a black body-hugging swimsuit.

On the face of it, there is nothing wrong with the video. The sight of actresses wearing swimsuits that accentuate their décolletage has gained an edge of banality. Some might even think that it was part of the promotion of her upcoming big-banner film Animal.

But the catch is that Mandanna is not the actual subject of the video. Her face has been morphed onto the body of a British Indian influencer named Zara Patel.

It is a fine (or disquieting, depending on which side of the divide you are on) example of deepfakes: images and videos produced by artificial intelligence (AI)-powered digital tools that depict non-existent humans, or real individuals in fabricated set-ups.

The morphed post elicited a flurry of responses because the actress’s face was so seamlessly blended onto someone else’s physique. As AI technology keeps developing, what lies ahead when the line between fact and fiction is smudged further?

Assessment of potential risks

Anything novel comes with a learning curve. The earliest deepfakes required extensive recordings, several frames of target imagery and sophisticated technical know-how.

Recent progress in generative technologies such as Generative Adversarial Networks (GANs) and Autoencoders (AEs) has democratised access and accelerated output generation, at minimal cost and with minimal skill.
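For readers curious about the mechanics, the sketch below illustrates the adversarial training idea behind GANs: a generator learns to produce images that a discriminator can no longer tell apart from real ones. It is a minimal, illustrative toy in PyTorch, with tiny layer sizes and random tensors standing in for real face data; it is not the code of any actual deepfake tool.

```python
import torch
import torch.nn as nn

# Toy generator: maps 64-dimensional noise to a flattened 28x28 "image".
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())
# Toy discriminator: scores an image as real (high) or fake (low).
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Random tensors stand in for a batch of real training images,
# scaled to [-1, 1] to match the generator's Tanh output range.
real = torch.rand(32, 784) * 2 - 1

for step in range(100):
    # 1. Train the discriminator: real images should score 1, fakes 0.
    fake = G(torch.randn(32, 64)).detach()  # detach: don't update G here
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator: its fakes should now fool D into scoring 1.
    g_loss = loss_fn(D(G(torch.randn(32, 64))), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two networks iterate against each other, the generator’s output becomes progressively harder to distinguish from real imagery, which is precisely what makes the technique so cheap to abuse at scale.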

Considering that human civilisation is on the cusp of an ‘infocalypse’, an era in which society is overrun by disinformation, the proliferation of deepfakes with high-resolution face-swapping, attribute-editing and voice-mimicking features can have deleterious consequences.

Deepfakes are made all the more catastrophic by a lack of accountability and by ambiguity regarding the rights of content producers and suppliers, as well as of the people whose likeness is used.

Also read: Generative AI and the copyright conundrum

People’s identities are their private property. Hence, any AI-related contract concerning the right to one’s image and voice engages that property.

The unauthorised commodification of deepfakes, by generating subconscious acceptance in the targeted consumer base, amounts to a transgression of an individual’s rights.

At the same time, the right to safeguard one’s image is not absolute: the fundamental freedoms and rights of others (such as speech and expression) have to be constantly recognised.

In addition to the creator’s rights, the manner in which the image is used plays a crucial role in determining the legitimacy of its use.

Privacy, defamation and social identity

In the relationship between humans and machines, the freedom to exercise agency is uncertain, raising questions about technological ethics and privacy that are crucial to building public value and the long-term sustainability of AI.

People thus tend to be hesitant or, worse, oblivious performers in a deepfaked, manipulated work. In the attention economy, the growth of the entrepreneurial online self via virality metrics (likes, shares, views, etc.) escalates the risk of exposing the online record of one’s life to automated data accumulation for a series of interlinked infractions: revenge porn, identity theft, defamation, blackmail and harassment.

Under such circumstances, the detrimental effects on one’s persona, and perhaps on the community as a whole, may be unalterable and irretrievable. Hence, knowingly propagating non-consensual, non-veridical representations of an individual alters that person’s social identity, undermining their personality rights and privacy.

The distribution of non-consensual deepfake pornography infringes upon a person’s right that others not meddle with their identity in society. A shocking 96 percent of deepfakes are sexually explicit and objectify unconsenting women, demonstrating the weaponisation of data-fuelled algorithmic content against women and the strengthening of a hierarchy historically centred on toxic masculinity.

Also read: Why India needs a robust content deletion procedure to repress revenge pornography

Deepfakes, which are immediate, easy to digest and emotionally charged, can also be used to falsely depict politicians making bogus remarks, or individuals engaging in controversial or illegal acts, in order to manipulate decisions, taint reputations, disrupt societal cohesion and misdirect public discourse.

For example, a deepfake of Ukrainian President Volodymyr Zelensky appearing to command his army to surrender went viral, as did fake revenge porn of Rana Ayyub, an investigative journalist whose reporting has exposed State corruption and human rights violations in India.

Deepfakes of soldiers engaging in acts of sacrilege in a foreign territory or members of a certain group consuming food that is prohibited by their faith have the potential to instigate civil strife.

Human susceptibility to fake news, astroturfed disinformation campaigns and the resulting manipulation depends on psychological (confirmation) biases and interpersonal dynamics, which algorithmic data surveillance makes visible and exploitable.

In this way, experiencing fake news turns into an act of prosumption instead of mere consumption, with spectators turned into oblivious propagandists, bending reality via their regular social media routines.

Deepfakes propagating harmful untruths affect social image and personal and professional relationships with a phenomenal immediacy, an attribute inversely proportional to the ease with which the depiction can be challenged.

The personal impact of defamatory posts and trolling can take the form of victimisation, trauma and a toll on mental health comparable to the harm caused by conventional forms of privacy invasion, such as stalking or trespass.

In the US, only a few states have implemented legislation prohibiting the transmission of deepfake pornography in any manner, and only a handful have criminalised the conduct.

Also read: Video game avatars: Who owns them, game players or developers?

Meanwhile, in the United Kingdom, the law of England and Wales criminalises non-consensual pornography only when authentic images are used. Scottish legislation, on the other hand, appears broader, as it covers photos that have been altered in any way.

Personality rights and copyright protection

A person’s image constitutes one of the chief attributes of their personality, as it distinguishes the person from others. The European Court of Human Rights declared in a 2009 judgement that the right to preserve one’s image is one of the essential components of personal development and presupposes the right to control the use of that image.

In this context, New York recently approved new legislation granting celebrities, and their heirs, the power to control the commercial use of their name, image and likeness. Circulating a face superimposed on a performer’s body is a misleading portrayal of skill that fraudulently benefits from the actor’s services. Such digital impersonations can be tantamount to identity theft.

Copyright protection, and the economic rights adjoining it, attaches to a work that is original in the sense of being the creator’s own intellectual production.

Deepfakes are ultimately the result of merging existing videos or images, which may or may not be protected under copyright law depending on the jurisdiction.

In India, under Section 57(1)(b) of the Copyright Act, 1957, an author is protected from distortion, mutilation, modification or any other similar act in relation to their work if it would be prejudicial to their honour or reputation.

Deepfakes typically depend on the alteration of copyrighted content, which can be classified as distortion or mutilation and, therefore, regarded as a violation of the author’s moral rights.

Consequently, even in the absence of explicit legislation, India can keep a tight rein on digital fakery through such provisions.

In the United States, deepfakes are considered ‘transformative’ works when they are developed for purposes entirely distinct from those anticipated when the original work was produced. In accordance with Article 2(1) of the Berne Convention, literary and artistic works comprise every production in the literary, scientific and artistic domain, whatever the mode or form of its expression.

Also read: A new report highlights judicial responses to rising cases of online gender-based violence

Hence, US copyright legislation does not impose an absolute ban on deepfakes, even where copyrighted material is used, as long as the end product is transformative in nature.

Legal threats looming ahead

Experts predict that by 2026, up to 90 percent of web material will be created synthetically. As humans have a visceral response to audio and visual media, they rely upon their own perception to tell them what is authentic and what is not.

Auditory and visual records of an occurrence are frequently regarded as accurate representations of what transpired. Falsifying electronic evidence has significant ramifications for the community, the criminal justice system, and law enforcement.

Offenders might invoke the ‘liar’s dividend’, citing the possibility of deepfakes to dismiss legitimate evidence of their misdeeds. Recently, Elon Musk was sued over comments he had made about Tesla’s self-driving feature, in a case concerning a crash that resulted in a boy’s death.

Tesla’s lawyers sought to use the ‘deepfake defence’ to disown Musk’s prior assertions regarding the safety of Tesla’s Autopilot features, even though the video had been accessible on YouTube for more than seven years.

Despite the increasing prevalence of deepfakes at the time, a 2019 UK survey found that almost 72 percent of respondents were unaware of deepfakes and their impact, highlighting the proclivity of an uninformed public to fall for virtual forgeries.

An even graver real-world threat is ‘generalised epistemic anarchy’, an advanced stage of pan-societal distrust. Refuting genuine occurrences raises the prospect of a society in which individuals cease to trust documentary proof of police aggression, human rights breaches or a leader’s erroneous statements, and lose their grip on reality.

Technologists believe that in the advanced stages of the AI revolution, it may be challenging to distinguish between authentic and fraudulent media. When defendants cast doubt on a piece of digital proof, establishing the veracity of evidence will increase the cost and time required for poor plaintiffs to seek justice, while providing an easier way out for the rich to clamp down on the powerless.

While there is an urgent need for methods capable of detecting deepfakes, the task will only grow harder as AI learns from its own mistakes, making it difficult to predict how well detectors will fare against deepfakes produced by upgraded algorithms.

The solution lies in a coordinated response from the international community, technology corporations, educators, legislators and the media, along with societal resilience.