

On February 12, 2025, OpenAI filed a 31-page reply before the Delhi High Court, which is currently hearing India’s first lawsuit against AI-driven content usage.
The case was filed by Asian News International (‘ANI’), an Indian news agency, alleging that OpenAI used its copyrighted news content without authorisation to train ChatGPT, its large language model (LLM)-based Artificial Intelligence chatbot.
ANI contends that OpenAI had no right to use its content for profit, even if the material was freely accessible online. The agency is seeking ₹2 crore in damages and an injunction to prevent OpenAI from further using its content. This case spotlights the ongoing conflict between the intellectual property rights of content creators and AI companies' use of publicly available data for model learning.
The next hearing for this case is scheduled for March 18, 2025.
Background
Asian News International (ANI) alleged the unauthorised use of its copyrighted news content to train OpenAI’s Large Language Model (LLM), ChatGPT, after it discovered that ChatGPT was generating responses containing excerpts from its proprietary news articles.
Following this, ANI approached the Delhi High Court and filed a copyright infringement suit in November 2024. The matter was listed before the court, where ANI sought an injunction to prevent OpenAI from further using its news content and demanded damages for the alleged unauthorised usage.
ANI asserts that OpenAI ignored its request for a licensing agreement, which could have legitimised the usage of ANI’s content.
It maintains that OpenAI must comply with copyright laws and compensate content creators fairly for the use of journalistic material.
The defendant, OpenAI Inc., a San Francisco-based AI firm and the developer of ChatGPT and other AI-driven systems, trains its models on publicly available online content. It denies any wrongdoing, however, arguing that its technology does not replicate specific articles but generates responses by analysing linguistic patterns across large datasets.
It further contends that since it trains on publicly available internet data, it does not require permission to use such material. However, similar concerns over AI training data and content scraping have led to legal action against OpenAI by The New York Times in 2023.
In October 2024, in response to ANI’s allegations, OpenAI took pre-emptive action by blocklisting ANI’s domain to prevent further use of its content in AI training. However, OpenAI maintains that ANI has not provided concrete examples of ChatGPT reproducing its copyrighted material verbatim.
Meanwhile, the Federation of Indian Publishers (on January 10, 2025) and the Digital News Publishers Association along with some copyright owners (on January 28, 2025) intervened in the case, highlighting its wider implications for the media industry.
On February 17, 2025, major industry players such as T-Series, Saregama, and Sony also intervened in the case, raising concerns over the unauthorised use of copyrighted music and recordings in AI training.
What are the arguments?
On November 19, 2024, Advocate Sidhant Kumar, on behalf of ANI, argued that OpenAI violated copyright laws by training its AI models on ANI’s news content without permission. He contended that just because ANI’s content was available online did not imply that it could be freely used without consent.
Kumar also alleged that web scraping for AI training without compensation amounts to commercial exploitation of ANI’s intellectual property. Content creators have the exclusive right to decide how their work is used, he contended, and OpenAI’s approach disregards these rights.
On October 3, 2024, ANI had offered OpenAI a licensing agreement which OpenAI had refused to accept. Highlighting this refusal, Kumar argued that it demonstrates OpenAI’s deliberate disregard for intellectual property laws and the fundamental principles of copyright protection.
On November 19, Senior Advocate Amit Sibal, on behalf of OpenAI, challenged the jurisdiction of Indian courts in this matter. He argued that since OpenAI was a U.S.-based company with no physical offices or operational servers in India, the Delhi High Court may lack jurisdiction to hear the case.
Sibal highlighted that OpenAI is facing similar lawsuits in sixteen different countries, yet not a single court has issued an injunction against its operations. He further argued that the current industry practice of AI training on publicly available data aligned with global standards and did not constitute copyright infringement.
In response to ANI’s allegation of unauthorised content usage, Sibal suggested that publishers seeking to protect their data should implement website restrictions, such as paywalls or anti-scraping measures. He argued that placing the burden on AI developers to seek explicit permissions would stifle technological innovation.
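The website restrictions Sibal alluded to rest on standard web mechanisms. For instance, OpenAI documents a crawler user agent, GPTBot, which it says honours the Robots Exclusion Protocol; a publisher wishing to opt out of AI training crawls could therefore add rules like the following to its site’s robots.txt file (a sketch only — the user-agent strings used by other AI crawlers vary and must be checked against each operator’s documentation):

```
# Block OpenAI's training crawler from the entire site
User-agent: GPTBot
Disallow: /

# Other crawlers (e.g. search engines) remain unaffected
User-agent: *
Allow: /
```

Such rules depend on crawlers voluntarily respecting them, which is partly why paywalls and server-side anti-scraping measures were also raised in argument.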
What are the key legal issues?
Whether the storage of ANI’s data by OpenAI (which is in the nature of ‘news’ and claimed to be protected under the Copyright Act, 1957) for training ChatGPT amounts to infringement of ANI’s copyright?
Whether OpenAI’s use of ANI’s copyrighted data to generate responses for its users amounts to infringement of ANI’s copyright?
Whether OpenAI’s use of ANI’s copyrighted data qualifies as ‘fair use’ under Section 52 of the Copyright Act, 1957?
Whether Indian courts have jurisdiction to entertain the present lawsuit considering that OpenAI’s servers are located in the U.S.?
What has happened in the High Court so far?
On the first day of hearing, November 19, 2024, considering the range of issues arising in the suit from recent technological advancements vis-à-vis the rights of various copyright owners, the Court took the view that two Amici Curiae should be appointed to assist it.
The two Amici Curiae appointed are:
Dr. Arul George Scaria – Professor of Law and Co-Director at the Centre for IP Research and Advocacy (CIPRA), NLSIU Bengaluru
Adarsh Ramanujan – a patent agent and independent litigation attorney
By January 28, 2025, both Amici Curiae had submitted written opinions, broadly agreeing on the key legal questions raised by the case. That day, the Court stated that it would first hear the Amici’s submissions and then proceed to arguments from ANI, OpenAI, and the intervenors.
That day, Justice Amit Bansal expressed concerns over the increasing number of parties seeking to intervene. He remarked: “We can’t keep expanding the scope of the suit; you can file your own suit. Hundreds of industries may be affected by it.”
The following are the intervenors in this case:
Federation of Indian Publishers
Indian Music Industry
IGAR Project LLP
Flux Labs AI Private Ltd.
Despite the Court’s reservations about expanding the scope of the case, on February 17, 2025, Senior Advocate C. M. Lall, representing the Indian Music Industry, argued that the industry is directly affected by AI-related copyright concerns. “We will not go a step beyond the scope. We will come in the end, we will only supplement what is left,” Lall pleaded, “Allow us to present arguments on law.”
Acknowledging the potential implications for the music industry, the Court issued notices to all the parties, allowing them to respond to intervention applications.
On February 21, 2025, Professor Arul George Scaria presented his opening submissions, and Adarsh Ramanujan’s submissions were also partly heard.
Scaria submitted that the Delhi High Court has jurisdiction as OpenAI’s services are accessible in India, including at ANI’s headquarters in Delhi. Ramanujan, on the other hand, submitted that the location of OpenAI’s servers was irrelevant; the High Court nonetheless had jurisdiction since ANI’s principal place of business was in New Delhi.
Scaria argued that storing copyrighted material for training AI models was permitted under copyright law. What the High Court had to determine was whether OpenAI used ANI's content for anything beyond training. He noted that restricting such use could hinder knowledge dissemination.
However, Ramanujan argued that copying ANI’s data, even once, without permission constituted a copyright infringement. OpenAI’s use of ANI’s content did not qualify as fair dealing under Indian law since OpenAI is neither a news agency nor using the content for criticism or review.
On use of ANI’s content in AI outputs, Scaria pointed out that ANI must first establish its copyright over the content. If ANI’s material is used to paraphrase facts rather than directly reproduce content, it may not amount to infringement. Ramanujan submitted that reproducing ANI’s content in ChatGPT’s responses constitutes copyright infringement, especially if it results in economic harm to ANI by reducing subscriptions from media outlets.
A similar case: The New York Times saga
In December 2023, The New York Times filed a suit against OpenAI and Microsoft seeking billions of dollars in damages for the unauthorized copying and use of its proprietary journalistic content.
The suit, being heard in the United States District Court for the Southern District of New York, is the first major legal action by an American media organisation against an AI company. The legal battle emerged after failed negotiations between the parties.
Among the key allegations against OpenAI and Microsoft is that The Times’ content was reproduced by AI systems, particularly through Microsoft’s "Browse With Bing" feature, which is powered by ChatGPT. According to the complaint, the AI-generated text was almost verbatim from Wirecutter, The Times’ product review site, and the Bing responses did not provide proper hyperlinks back to the original pieces. The complaint states: “Decreased traffic to Wirecutter articles and, in turn, decreased traffic to affiliate links subsequently lead to a loss of revenue for Wirecutter.”
Beyond financial damages, The Times has also raised concerns about A.I. “hallucinations”—a phenomenon where chatbots generate false or misleading information and attribute it to credible sources. The suit cites multiple examples where Bing Chat produced incorrect information, allegedly sourced from The Times. In one example, Bing Chat listed "The 15 Most Heart-Healthy Foods," but 12 of those items were never mentioned in any New York Times article. This not only damages the credibility of the publication but also raises ethical concerns about AI-generated misinformation.
The Times has hired an Editorial Director for Artificial Intelligence Initiatives, whose focus will be on developing protocols for AI’s integration into journalism and on protecting the integrity of the publication’s content.
Innovation v. Intellectual Property protection
Does this era of Artificial Intelligence come at a significant cost? If so, how much are we willing to pay to ensure a balance between innovation and the protection of rights—both in the digital and non-digital space?
The OpenAI case in the Delhi High Court is one of many examples of this growing tension. As the Court deals with the intersection of AI and copyright law, it must not only address the parties’ specific claims and counterclaims but also acknowledge the broader implications its decision will have on AI policy and investment in India.
One of the strongest arguments in favour of AI training on publicly accessible data—including both open-access and subscription-based content—is that restricting AI’s access to diverse datasets could hinder its development. Limiting AI’s ability to train on comprehensive datasets may exacerbate issues such as AI bias, misinformation, and even the amplification of harmful content.
A more nuanced approach is needed—one that trains AI models to understand specific community behaviors and moral frameworks. Instead of relying on a single global AI model, there should be multiple AI training frameworks tailored to diverse communities. In a country like India, where societal norms and moral perspectives change every few miles, AI must be designed to reflect and respect this diversity.
Why India needs AI regulation
According to a NASSCOM-BCG report, India’s AI market is projected to grow at a compound annual growth rate of 25-35 percent, reaching $22 billion by 2027, up from its current value of $7-10 billion.
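The projection is simple compound-growth arithmetic. As a rough check (a sketch assuming a three-year horizon from the $7-10 billion baseline; the report’s exact base year is not stated here), the quoted CAGR band does bracket the $22 billion figure:

```python
def project(value_billion_usd: float, cagr: float, years: int) -> float:
    """Compound a starting market value at a fixed annual growth rate."""
    return value_billion_usd * (1 + cagr) ** years

# Lower bound: $7B base growing at 25% for 3 years
low = project(7, 0.25, 3)    # ~13.7 ($ billion)
# Upper bound: $10B base growing at 35% for 3 years
high = project(10, 0.35, 3)  # ~24.6 ($ billion)
```

The $22 billion estimate thus sits toward the upper end of the implied 25-35 percent growth band.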
As Artificial Intelligence improves, regulatory frameworks must advance in step, before a new phase of non-compliance emerges. Countries worldwide are facing the challenge of developing AI regulation that balances innovation against the rights of creators.
The Delhi High Court is next set to hear the case on March 18, 2025.
The ChatGPT India case will not only shape how India deals with the issue but may also set a precedent for other countries facing similar questions.
Note:
All Court orders are available on the Delhi High Court’s website.
Case Detail: CS(COMM) 1028/2024.