India’s quest for a workable AI legislation

Public anxiety over deepfake technology, impersonation and misinformation has grown in recent times, as AI tools have made these easy and convenient to produce. How will India's legal and policy landscape handle this challenge?

RECENTLY, OpenAI and Microsoft were sued over claims that they used the works of many non-fiction authors without permission to train their chatbot, ChatGPT.

The plaintiffs claim that the companies are making a fortune by using their content while the original authors receive no credit.

This is not the first time the chatbot's makers have found themselves on the accused side of a courtroom. US comedian and writer Sarah Silverman garnered media attention in July when she and two other writers sued OpenAI, the company behind ChatGPT, for copyright violations.

Silverman, author of The Bedwetter, contended that ChatGPT summarised her book's material when asked to do so, and that she had never granted OpenAI permission to use the book's content to train the chatbot.

So a piece of software launched on November 30, 2022 is already no stranger to controversies and lawsuits. Are India and its laws equipped to tackle AI?

Is Indian law equipped to tackle AI?

With the advent of AI, the traditional notion of copyright has been challenged, and questions arise. Who owns content created by AI: the developer, the AI, or both? Can an artificial person be considered an original author in the eyes of the law? Does an AI producing an 'original' work amount to plagiarism?


To answer these questions, we must first understand whether Indian law can recognise an artificial person as the author of an original work. Section 2(d)(vi) of the Indian Copyright Act, 1957 extends the definition of an 'author' to cover computer-generated works.


It specifies that for literary, dramatic, musical or artistic works that are generated by a computer, the ‘author’ is considered to be the person who causes the work to be created.

There are no clear precedents in which the law recognises an AI as an original creator. In one prominent instance, an AI-powered programme called 'Raghav' was initially recognised as a co-creator of a copyrighted work, but the Copyright Office disputed this acknowledgment and moved to annul the registration.

Although an earlier attempt to register Raghav as the sole author was rejected, the Indian Copyright Office had granted an application listing the human creator as a co-author alongside the AI tool.


This lack of clarity can complicate the attribution of responsibility when an offence is committed. For example, determining who holds liability (the creator of a deepfake, the platform hosting it, or other involved parties) remains a significant challenge because of the complexity of AI's involvement.

We saw the chaos last month when a deepfake video of actor Rashmika Mandanna went viral.

An FIR was promptly lodged with the Intelligence Fusion and Strategic Operations unit of the Delhi Police's Special Cell, under Sections 465 and 469 of the Indian Penal Code as well as Sections 66C and 66E of the Information Technology Act, 2000.

A probe was initiated, but there is no word yet on whether the AI tools that aided the perpetrators will attract any responsibility.

Amid this growing anxiety over deepfakes, impersonation and misinformation, people are starting to take steps to protect their assets. Actor Anil Kapoor, for example, moved court to seek protection of his image and voice so that they cannot be exploited through deepfakes.


At present, however, the onus is on the victim to approach the police to have content taken down. As the use of AI for deepfakes, misinformation and other manipulations of reality becomes more rampant, law and policy will have to acknowledge that these crimes affect society at large and need specialised treatment.

Policy recommendations

Some progress has been made around the world in this regard.

Before the 2020 elections in the United States, the Deepfakes Accountability Act, 2019 was introduced in the US Congress, proposing that deepfakes carry a watermark so that they could be identified easily.


The European Union's Artificial Intelligence Act classifies AI systems into high-risk and low-risk categories, enforcing strict requirements for high-risk AI in areas such as healthcare and law enforcement.

The Act emphasises data quality, privacy, transparency and traceability, and requires clear explanations for AI decisions. These measures aim to ensure that AI developed and deployed in the EU is trustworthy and aligned with fundamental rights and ethical standards.

South Korea has made it illegal to create or distribute any deepfake with the potential to disrupt public order, punishing such acts with fines and imprisonment.

Singapore's Protection from Online Falsehoods and Manipulation Act (POFMA) addresses false online information and deters the use of deepfakes, notably during elections. Its Personal Data Protection Act (PDPA) regulates data and indirectly constrains deepfake creation by controlling the use of personal information.

These legislations tackle misinformation swiftly, foster accountability in combating the dissemination of deepfakes and preserve data integrity. They could offer useful templates for the Indian legislative landscape to emulate.

Conclusion

Ashwini Vaishnaw, the Minister of Electronics and Information Technology, has emphasised the need for stronger legislation to stop the spread of deepfakes.

The government is expected to present a comprehensive plan organised around four main pillars: detecting deepfakes, preventing their production, setting up a grievance and reporting mechanism, and raising public awareness.