"Free AI" for data — How can our legal landscape respond to Bharti Airtel-Perplexity Pro type partnerships?

Last month, Bharti Airtel rolled out a scheme to offer Perplexity Pro for free to 360 million customers. Users pay the price with data. Eight years after the Puttaswamy decision on the right to privacy, does India’s patchy data and AI regulatory regime inspire confidence in its ability to oversee such high-stakes partnerships between AI companies and service providers?
Harsh Gour

Harsh Gour is a Columnist at The Leaflet and a law student at NALSAR University of Law, Hyderabad. His research focuses on technology law, constitutional law, and criminal law. He is also the author of the poetry book ज़िंदगी के प्रेम को समर्पित (Dedicated to the Love for Life).


TELECOM OPERATORS AND AI FIRMS are tying up to bundle AI services at no extra cost – effectively offering “free AI” in exchange for user data. 

In July 2025, Bharti Airtel rolled out a scheme giving 360 million customers a year of Perplexity Pro (which normally costs ₹17,000) for free. Reports suggest that Reliance Jio is also in talks to distribute generative AI (ChatGPT, in this case), planning an “AI cloud” for its users. At first glance, these offers democratise cutting-edge tools. But beneath the surface lies a quid pro quo: users gain free AI services while implicitly supplying rich personal data and queries back to the providers.

This nexus – reaching into AI, satellites and cloud – raises core questions of digital privacy, consent and constitutional values. 

Are users knowingly consenting to heavy data harvesting? Does India’s patchwork data regime and absence of a specific AI law give enough clarity? 

What does the Constitution’s right to privacy and dignity demand in an era of ubiquitous data-driven services? And how do these tie-ups compare with global norms like the EU AI Act or U.S. privacy enforcement?

Bundling AI and data

Airtel’s partnership with Perplexity illustrates the model. By clicking into Airtel’s app, a user can “claim” a complimentary Perplexity Pro subscription. The telecom touts this as a user benefit (an annual AI assistant now within reach), but one must note this also turns users into data sources. 

Perplexity’s own terms admit that it collects user queries and inputs to improve its services. Here, the concern is that Perplexity will use users’ data to train its AI models, given that the service routes user prompts through large third-party models such as GPT-4 or Claude.

In other words, every question you ask and document you upload may feed back into commercial AI systems.

Meanwhile, space-based broadband is entering the mix. In March 2025, Mukesh Ambani’s Reliance Jio signed an agreement to bring SpaceX’s Starlink satellite internet into its retail network. Jio will stock Starlink terminals and provide installation support, giving the U.S. satellite constellation direct access to Indian customers. While Starlink’s deal is about connectivity (not free AI per se), it reflects the same trend of convergence between telecom infrastructure and advanced tech services. 

In future, satellite links might even carry AI Application Programming Interface (‘API’) traffic. For now, they highlight regulatory challenges (like spectrum allocation or licensing) that are entwined with digital services.

What unites such deals is the implicit data exchange in them. The telco collects usage information (including even behavioural analytics) and the AI provider gains broad training inputs, often without explicit user awareness. Airtel’s promotion contains no special opt-in beyond general terms of service, and Jio’s partnerships involve leveraging infrastructure rather than clear user consent terms. 

Will millions become empowered AI users, or simply generate data for tech giants while abdicating privacy? It is a perfect modern Pandora's box.


India’s legal framework

India’s laws lag behind these innovations. The Digital Personal Data Protection (‘DPDP’) Act, 2023, was finally passed in August 2023 to introduce a rights-based privacy regime in India. But, crucially, it has not yet come into force. The government is yet to notify the rules and constitute the Data Protection Board before enforcement can begin. Until then, the only legally binding privacy framework is the weak “Sensitive Personal Data” rules under the Information Technology Act, 2000 (‘IT Act’), which offer minimal consent or purpose limitations. 

In short, any personal information on Indian servers is still governed by an outdated law without full data-protection rights.

Moreover, India has no specific AI regulation or tailored rules for model training. New AI-driven offerings are shoehorned into old laws – data privacy, copyright, and contract law – which often fall short of confronting modern AI challenges. Essentially, until the DPDP Act and any AI policies arrive, any data use by these bundled services is policed only by terms of service and broad, residual obligations. 

Even under the proposed DPDP framework, Airtel would act as a “data fiduciary” and Perplexity as a “processor,” with duties to get valid consent, limit uses, and allow withdrawal. But today, those duties are mostly aspirational.

The telecom sector itself has some data rules. Under India’s Unified Licensing regime (governing telecom operators), carriers must ensure privacy of communications and prevent unauthorised interceptions. The licenses also list telecom networks as ‘critical information infrastructure’ under the IT Act. However, nothing in current telecom laws explicitly prohibits sharing anonymised user data with third-party AI providers. 

Similarly, cloud computing (data centers, software services) has no dedicated telecom-like license – providers need only register with the Ministry of Electronics and Information Technology (‘MeitY’) and comply with the IT rules. Thus, a company like Perplexity operating in India is mostly subject to the same loose obligations as any online service.

In spectrum and connectivity, the picture is in flux. Indian law treats airwaves and satellite spectrum as government-owned resources licensed to operators. Starlink’s India rollout still awaits space and spectrum approvals, but its tie-up with Jio effectively uses Jio’s licenses as a distribution network. 

The Telecommunications Act, 2023, is coming but not yet fully notified, and it leaves many questions open (for instance, cloud or AI services are still undefined). In short, the infrastructure layer – radios, satellites, fibre, and cloud – is regulated as telecom/internet but lacks data-specific constraints beyond generic privacy rules.

Constitutional and privacy implications

India’s Constitution and case law enshrine deep privacy protections – at least in principle. In the landmark K.S. Puttaswamy case (2017), decided eight years ago this coming Sunday, the Supreme Court held that the right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21. Privacy was described as a “fundamental right” under Part III, reflecting individual dignity and autonomy. This includes informational privacy: a person’s control over their personal data.

By packaging AI in exchange for data, telecom-AI deals brush against these values. Users technically consent by clicking “accept,” but meaningful informed consent is doubtful when the “price” is nebulous. Are users even aware of what they’re opting for? Most consumers will hardly pore over terms revealing that their chat logs will train corporate AI models across borders. The law requires notice and consent, but here consent risks being only formalistic. 

Even the DPDP draft emphasises “purpose limitation” and the ability to withdraw consent, yet these are missing in the public telecom offer. If Article 21’s promise of privacy is to have real content, it cannot be watered down by opaque digital bundling.

The right to dignity and autonomy is also at stake. Giving away an AI tool should not come at the cost of user trust. Users may unwittingly disclose sensitive information (health, finances, identity, etc.) in their queries, believing the interaction is private or beneficial. 

In reality, such data may be logged, analysed, and used to improve AI algorithms or targeted advertising – a far cry from any noble public interest. Such broad data extraction for training can undermine individual informational self-determination. Under the Constitution’s broader guarantees of liberty, the State has a duty to protect such rights from erosion, even in private sector deals. Without safeguards, digital rights are at risk of being diluted.

Global comparisons

Notably, the EU’s Artificial Intelligence Act (June 2024) explicitly mandates transparency for generative models. It does not outright ban consumer chatbots like ChatGPT, but requires them to label AI-generated content, design models to avoid illegal outputs, and publish summaries of copyrighted training data. This is a direct attempt to shed light on opaque AI pipelines. In India, no equivalent rule forces AI providers to disclose what data trained their models or to watermark content.


On privacy, the EU’s GDPR (and Digital Services Act) would treat a service like Perplexity as processing personal data, demanding a lawful basis and data minimisation. Regulators elsewhere emphasise user control. In the US, the Federal Trade Commission (‘FTC’) has loudly warned that generative AI “requires a massive amount of data inputs”, including highly sensitive material, and exhorted firms to ensure privacy and security by default.

Crucially, the FTC stresses “there is no AI exemption” from consumer protection laws. Hence, companies must not use AI as a cover for deceptive or abusive data practices. There have already been FTC enforcement actions against firms misusing user data in AI settings.

These global movements underscore two norms: one, AI services must build in privacy and security, not degrade them; two, users must be fully informed when their data fuels AI. The Indian bundling deals currently operate far ahead of any codified standards. In effect, they fall short of international norms around consent and fair processing. Compared with the EU and the US, India’s regulatory safeguards – and user rights – lag well behind. This gap is worrisome given the scale (hundreds of millions of users) and foreign reach of the data flows involved.

Accountability and enforceable norms

Are the AI companies bound by any enforceable set of Indian norms when they harvest training data? At present, not really. 

Perplexity and Starlink have no special obligations in India beyond whatever is agreed with local partners. Under even the draft DPDP framework, foreign AI firms could be treated as fiduciaries if they offer services to Indian users, but enforcement will be tenuous. There is no domestic licensing for AI providers, no telecom-like data-retention mandates (unlike voice call metadata, there is no rule to keep or delete user chat logs).

What about telecom regulators? Airtel and Jio’s licenses include broad duties (such as no violation of privacy), but they do not currently regulate bundling of third-party apps or the content of those apps. India’s net neutrality framework, grounded in the TRAI Prohibition of Discriminatory Tariffs for Data Services Regulations, 2016, and the 2018 amendments to the DoT’s Unified License, prohibits blocking, throttling, or preferential pricing of content or services. However, these rules do not address the bundling of ‘free’ services in exchange for user data, such as AI tools tied to mobile plans. This omission creates a clear legal blind spot.

If something goes wrong – say, AI models trained on user inputs produce defamatory or biased results – Indian users have limited redressal avenues. Complaints might be filed with the DPDP Board (once it exists), but enforcement across borders remains tricky when the company is outside India. Perplexity’s own terms often point to foreign courts and law. In practice, users might end up with no effective remedy under Indian law.

Thus, these deals currently exist in a kind of enforcement limbo. We have general obligations under telecom license law and aspirational principles in DPDP drafts, but no robust mechanism policing the actual data extraction and use. Companies can claim they do not sell personal information, yet still share data with affiliates and partners, leaving it ambiguous what exactly is being handed over in the name of a “free” AI service.

Towards consent, transparency and dignity

Going forward, India needs to align the law with the reality of these telecom–AI tie-ups. First, unambiguous consent requirements are essential. Bundled AI tools should not rely on fine-print opt-ins. Telecom operators and AI providers must clearly disclose what data will be collected, how it will be used (especially if for training or monetisation), and offer an easy opt-out that does not strip the user of all basic service. 

The DPDP Act promotes “privacy by design” and data minimisation; telecom–AI bundles must be subject to these ideals. Besides, users must be warned that chat transcripts or uploaded files could become training fodder, not just ephemeral interactions. The spirit of Article 21 demands that this be real and informed consent – not just a box-ticking formality.

Second, regulatory clarity is needed on how these partnerships fit under existing laws. The government should consider telecom license conditions that forbid undisclosed data offloading to AI firms, or that require telcos to audit their partners’ data practices. TRAI and MeitY could issue guidelines to treat bundled AI as a distinct service category. These could mandate, for example, data localisation or in-country model hosting – much like pro-privacy approaches adopted elsewhere.


Third, transparency labels and accountability can be borrowed from the global playbook. The EU’s idea of watermarking AI content and summarising training data is instructive. At a minimum, AI answers provided to Indian consumers should carry a disclaimer (even if generically “I am an AI model”), and companies should publish a high-level summary of training sources. Anything less risks becoming a “labelling illusion”: users might see a free service but not understand the privacy cost.

Fourth, building digital trust is a must. Companies could follow the DPDP’s design principles: process user data under “purpose limitation” and honour deletion requests. Under the DPDP’s design, Airtel would be a data fiduciary with audit responsibilities. Practically, regulators should require that Airtel (or Jio) ensure Perplexity (or Starlink) abides by Indian law, or face penalties. In short, foreign AI players should not be able to evade norms.

The coming era of telecom–AI bundling, and similar tie-ups, may offer enormous potential for ubiquitous AI assistants, rural connectivity, and new services. But innovation must not trample basic rights. India’s constitutional values of liberty and dignity require that any “free” AI access be truly user-centric, with transparent, minimal data use. 

As regulators finally bring the DPDP Act to life, the Airtel–Perplexity and Starlink–Jio deals should serve as a test case: a chance to set benchmarks for consent, transparency, and accountability before potentially hundreds of millions of users are enrolled. With foresight and reform, India can ensure these partnerships empower users rather than exploit them, securing digital dignity even in the age of AI.
