

On November 19, 2024, the Delhi High Court heard India's first lawsuit against OpenAI, the company behind the well-known artificial intelligence (AI) chatbot ChatGPT.
The court issued a summons to OpenAI following a lawsuit filed by Asian News International (ANI). ANI accuses OpenAI of using its copyrighted news content without permission to train the large language models (LLMs) that power ChatGPT. ANI is seeking ₹2 crore in damages and an injunction restraining OpenAI from using its content in future.
According to ANI, OpenAI used the content published on its web platform without permission. ANI contends that OpenAI has no authority to exploit its content for profit, even though it is freely accessible online. The case is significant because it highlights the ongoing tension between the intellectual property rights of content producers and the data needs of AI businesses.
Background to the case
ANI is one of India's major news agencies, supplying content to media channels across the country. It accuses OpenAI of training its language model, ChatGPT, on ANI's news content without prior authorisation, despite being fully aware that it had no right to do so.
ANI also contends that OpenAI ignored the agency’s request for a licensing agreement. ANI seeks fair remuneration and believes that OpenAI should comply with India’s copyright laws.
The defendant, OpenAI Inc., is a San Francisco-based firm that created ChatGPT and other AI systems. The company is well known in the tech industry for training its AI models on large datasets. OpenAI contends that its technology generates human-like responses by analysing patterns and drawing inferences from vast datasets rather than reproducing specific pieces of material.
OpenAI argues that because its models are trained on data that is publicly available on the internet, it should not need permission to use that data.
Claims and counterpoints: The debate unfolds
ANI's position
During the hearing on November 19, ANI's counsel, advocate Sidhant Kumar, argued that OpenAI had violated India's copyright laws by training its AI models on ANI's content. Even though that content is available online, he contended, only a limited set of authorised users may access or store it.
Kumar further argued that scraping web content to train AI models, with no regard for compensation, amounts to serious theft from the companies that produce that content, and that those companies have the right to decide how their content is used.
He stated that ANI had offered OpenAI a licensing arrangement on October 3, 2024, but that the US-based company had declined the offer. ANI argued that OpenAI's refusal to take a licence shows its disregard for intellectual property principles and copyright law.
OpenAI's defence
OpenAI's counsel, advocate Amit Sibal, raised the question of jurisdiction. He argued that the case may not fall within the jurisdiction of Indian courts because OpenAI has no offices or servers in the country. Sibal pointed out that OpenAI is facing lawsuits in 16 different countries, yet not a single injunction has been issued against it.
Sibal argued that training models on publicly available data is consistent with how AI companies operate. As for the use of that data, he suggested that content producers who do not want their websites scraped for AI training can restrict crawlers' access to them.
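By way of illustration (and not part of the court record), the most common way publishers restrict such crawling is a robots.txt file placed at the root of a website; OpenAI has stated that its GPTBot web crawler respects these directives. A minimal example that asks GPTBot not to fetch any page on a site would look like this:

User-agent: GPTBot
Disallow: /

Such a directive only works if the crawler honours the file, and it does not remove content that has already been collected or obtained from third-party sources.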
Court proceedings
The Delhi High Court appointed an amicus curiae to assist it on the intricacies of AI and the use of data for training. The amicus will brief the court on how ChatGPT and similar AI systems operate, with particular attention to the use of data to build large-scale models.
Through this appointment, the court has recognised the tremendous complexity of the situation and the necessity of specialised knowledge to untangle the legal and technological aspects at play.
Since a holistic assessment of the issues will take time, the court rescheduled the hearing for January 2025; no interim order has yet been pronounced. The court will examine the technicalities of how AI technology should be regulated under existing intellectual property law.
The global context: AI, copyright and regulation
The case contributes to the global discourse on AI and data rights. Liability for the use of copyrighted material in training data is becoming a pressing problem for OpenAI and other AI companies.
Similar complaints about data scraping and content misuse have put OpenAI in the legal crosshairs in the US, where major media firms have filed suits against it. Media organisations such as The Chicago Tribune and The New York Daily News, for example, have sued OpenAI for allegedly using their content without permission.
European regulators have taken a proactive approach to AI regulation. The General Data Protection Regulation (GDPR), which became enforceable in 2018, imposes strict safeguards on how personal data may be processed, including by AI systems.
The European Union's AI Act, adopted in 2024 and being phased in, aims to regulate AI technology and ensure that it is deployed ethically, safely and transparently. These regulations emphasise fairness, accountability and transparency, which are at the crux of the ANI versus OpenAI case.
Similar issues arose in an earlier case in the US, Authors Guild v. Google, in which the courts held that Google's digitisation of books without the rights holders' permission was fair use.
That litigation became an early precedent for how digital technology interacts with copyright law, and it is now frequently cited in the heated debate over what constitutes 'fair use' in the current digital age.
Despite the precedent set in the Google Books case and others, concerns persist about the extent to which AI may draw on publicly available data and about the safeguards content producers ought to have. How courts and regulators handle the ANI versus OpenAI case may also have a bearing on how these issues are resolved.
Global conversations on AI regulation
At the AI Ethics and Regulation Conference in Geneva in April 2024, world leaders discussed forming collective guidelines to address the ethical ramifications of AI.
The central point of discussion was ensuring that AI systems are developed ethically, so that they neither violate individual rights nor misappropriate other people's labour. Transparency and accountability, the focus areas of the conference, are also crucial in the ANI versus OpenAI case.
The Organisation for Economic Co-operation and Development's AI Principles likewise recommend increased international cooperation in the supervision of AI technology. Such guidelines advocate weighing any purported benefits against expected harms, including the misuse of data and breaches of intellectual property rights.
Striking the right balance: Innovation versus intellectual property protection
The ANI versus OpenAI case is likely to become a landmark in the debate between protecting intellectual property and advancing technological innovation.
As AI systems advance rapidly, clear legal frameworks are becoming indispensable. In devising ways to govern AI globally, countries must balance the interests of AI developers with those of the creators whose work these systems draw on, while ensuring that creators are fairly compensated.