
ARTIFICIAL INTELLIGENCE NOW permeates daily life. From the smartphone assistants many of us carry to credit scoring, healthcare imaging, and government services, AI-driven systems are becoming foundational, invisible, and ubiquitous in our institutions and economy.
AI is set to be deployed not just in consumer chatbots but in serious public services: predictive crop-insurance models for farmers, citywide surveillance networks, technology-enabled welfare delivery, and voice-based legal assistance in local languages. This technological ubiquity inspires both wonder and anxiety. Yet many users and policymakers instinctively frame AI as a tool or assistant – a way to augment human capabilities – rather than as a competitor or replacement.
We seek the benefits of AI’s speed and pattern recognition while expecting humans to remain in the loop. This view – that AI should help us rather than supplant us – is a useful starting point when thinking about its impact. It suggests that as we build laws and policies, we should treat AI as an enabler of human goals, not a separate “being” with rights.
Even so, we must confront a knotty question: what are “digital rights” in an age of AI? The term appears increasingly in policy debates, but its definition is not self-evident. At a minimum, it implies that citizens retain rights and protections in the digital realm – over their data, their devices, their online speech, and their access to digital services. AI governance sits atop a vast array of “digital” issues: not just data privacy and security, but digital property, service rights, contract rights, infrastructure access, and more. In practice, “digital rights” often parallel our traditional civil liberties (privacy, expression, equality, etc.) but take on a new shape when technology is involved.
India’s Supreme Court, for instance, has treated privacy as a fundamental right under Article 21 of the Constitution, and that holding has become the constitutional grounding for digital protections in privacy cases. We may even codify rights like data protection or internet access into constitutions for permanence. But before debating AI ethics or rulemaking, we must first clarify what rights we mean. Should a “right to algorithmic fairness” be elevated to the same level as speech or equality? Do we expect new rights beyond the existing roster of liberties, or are our current rights simply being translated into code?
These questions – whether rights are best derived from constitutional text, new statutes, or soft-law instruments like international guidelines – remain open. Answering them will shape any governance framework that comes after.
In the meantime, our traditional legal tools often struggle to keep up. Consider how evidence law treats digital records. Indian courts continue to regard any computer-generated document as prima facie hearsay, admissible only if it meets stringent statutory conditions. The Indian Evidence Act (Section 65B, now carried into the Bharatiya Sakshya Adhiniyam) still requires a certificate from a responsible officer attesting to every technical detail of how an electronic record was produced. This rule was meant to ensure reliability, but it creates absurdities in the modern world. How does one obtain a certificate for an AI-generated text file? What about an immutable blockchain log, or an image captured by an IoT sensor and instantly uploaded to the cloud?
Courts have reluctantly applied old rules by analogy, often producing a patchwork of mixed rationales on what counts as proper digital evidence. The existing framework is inadequate for the many varieties of emerging digital evidence: a system built for email and server logs strains to handle live data streams and AI outputs. Similarly, digital-signature rules under the Information Technology Act presume static public-key certificates and human signatories, concepts that do not map neatly onto generative systems or decentralised networks. The bottom line is that many familiar legal forms – certificates of authenticity, chain-of-custody affidavits, and original documents – may have no obvious equivalents when evidence lives in the cloud or is churned out by an algorithm.
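To see why the fit is poor, consider what the statutory signature model actually assumes: one identifiable signatory holding a static key pair, with verification against that person’s public key. The minimal sketch below, using the widely available Python cryptography library, illustrates that assumption; the record contents and key handling are illustrative, not a description of any statutory procedure.

```python
# Minimal sketch of the signature model the statute presumes: one identifiable
# signatory, one static key pair, verification against the public key.
# The record contents and key handling here are illustrative assumptions.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # stands in for a certified key pair
record = b"Electronic record as produced on 2024-01-01"

signature = signing_key.sign(record)

# Verification succeeds only if the record is byte-for-byte unchanged.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, record)
    print("record verified against the signatory's public key")
except InvalidSignature:
    print("record altered or signed by someone else")

# The model breaks down when there is no human signatory: an AI pipeline or a
# decentralised network offers no single certified key to anchor this check.
```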
Privacy and data-protection rules show similar stress. India’s new Digital Personal Data Protection Act (‘DPDP Act, 2023’) has been touted as a modern data-rights law, but as of 2025 it remains only partially implemented. Until the DPDP rules are notified and the Data Protection Board is constituted, personal information in India is still governed by the Information Technology Act, 2000, and the IT Rules, 2011 – an outdated regime without full data-protection rights. And there is no bespoke AI law at all. Startups and telcos launch AI features under existing contracts and licences, while privacy, copyright, consumer, and contract law are stretched to cover them; these old frameworks often fall short when confronting modern AI challenges. In practice, use of AI services today is policed mostly by corporate terms of service and a few broad statutes. These legal tools predate IoT sensor networks, edge computing, and machine learning models, so they tend to treat new technology as a problem (hearsay, tampering) to be papered over rather than addressed directly.
This mismatch raises deeper questions about the boundaries of digital rights. If an AI application collects or processes our data, on what legal basis do we claim rights over that process? Do those rights flow from the Constitution (via privacy, equality, speech, etc.), from statute (like the DPDP Act or newer tech laws), or from non-binding norms (like AI ethics principles)? Each approach has its own pros and cons. Enshrining digital rights in a constitution would provide a strong floor: constitutional provisions are harder to amend and typically override ordinary laws. Embedding guarantees of digital privacy, connectivity, or data sovereignty at that level would signal that these rights are as fundamental as free speech or religion.
On the other hand, one could rely on Parliament to legislate specific safeguards: India’s fundamental right to privacy was itself recognised by the Supreme Court in Puttaswamy (2017), not created by constitutional amendment, and it spurred legislative and policy action. Alternatively, governments may lean on soft law – guidelines, standards, advisory codes – to fill gaps. But can trust be engineered by voluntary rules when the stakes are high and the code is invisible? Each strategy – constitutional right, statute, or voluntary norm – shapes who enforces the right, how violations are remedied, and what weight we give to abstract values versus concrete harms.
In practice, a stark tension exists between the pace of innovation and the law’s response. Across the globe, courts have dealt with only a handful of cases involving AI-generated content, mostly by stretching old provisions (for example, using sections on fraud, privacy, or copyright to address deepfakes or automated content). In other words, judges have been content to treat AI-generated problems as human-generated problems in disguise. Yet AI is generating fundamentally new risks and harms that old laws are ill-equipped for. A hospital collects patient data for treatment, and that same data later trains an AI model without fresh consent – a scenario in which consent-based regimes break down, as sketched below. Similarly, data brokers and platforms routinely aggregate profiles in ways no statute explicitly forbids. Without new legal thinking, those affected may have no clear remedy.
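To make the consent problem concrete, here is a minimal sketch of the purpose-limitation check such regimes imply. All names, purposes, and the ConsentRecord structure are hypothetical, chosen only to illustrate why reuse for model training falls outside the original consent.

```python
# Hypothetical purpose-limitation check: data consented to for "treatment"
# cannot be silently reused for "model_training". All names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str
    purposes: frozenset          # purposes the data principal actually agreed to

def may_process(consent: ConsentRecord, requested_purpose: str) -> bool:
    """Allow processing only for an explicitly consented purpose."""
    return requested_purpose in consent.purposes

consent = ConsentRecord("patient-42", frozenset({"treatment", "billing"}))
assert may_process(consent, "treatment")
assert not may_process(consent, "model_training")   # reuse needs fresh consent
```

The point is not the code itself but what it exposes: once data moves into a training pipeline, nothing in the surrounding legal machinery forces this check to run.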
Regulators, too, face a steep learning curve. Existing frameworks aspire to values like transparency and fairness but stop short of enforceable obligations; our AI framework must not be left without any binding guarantee, relying on moral suasion rather than legal compulsion. On the ground, institutions are still grappling with basic issues. Many regulators simply lack staff with data-science or machine-learning expertise to audit algorithms or demand explainability. Even within state and central governments, digital literacy remains uneven, and techno-legal expertise is in short supply.
Thus, bodies like the telecom regulator, the competition authority, or the election commission find themselves several steps behind the companies they oversee. In courtrooms, judges and lawyers may not fully grasp probabilistic models, and novel technologies like neural networks or blockchains may stymie those accustomed to evidence on paper. India’s legal architecture is still fragile on digital issues: privacy case law is just emerging, and even the new data law is only beginning to be enforced. Under these conditions, overlaying AI-based systems on the old scaffolding is bound to create cracks.
This underscores the need for deeper thinking about digital rights and AI – beyond ad hoc fixes or idle curiosity. Artificial intelligence is reshaping society, and with it the terrain of law. Every loan recommendation, predictive-policing alert, or content-moderation decision is now, or will progressively be, powered by opaque algorithms. Each such decision – however opaque, probabilistic, or seemingly neutral – can profoundly affect equality, expression, privacy, and other constitutional rights.
We cannot settle for ticking boxes or piecemeal answers. The academic and policy community needs a sustained research effort: legal scholars must become conversant with machine-learning methods and data architectures, and technologists must understand legal principles. We must move beyond simply adapting old laws to new tools and think structurally about the digital ecosystem. Creative solutions may include regulatory sandboxes for experimentation, public audit APIs that let outsiders test algorithms (one possible shape of such an API is sketched below), and risk-sensitive, phased obligations rather than one-size-fits-all rules – even safe harbors where innovators can test ideas under supervision, while higher-risk AI applications (in criminal justice, healthcare, etc.) face stricter oversight.
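As a rough illustration of what a public audit API might look like, the sketch below exposes a single scoring endpoint that outside researchers could probe without access to the model’s weights. It assumes the FastAPI framework; the endpoint path, input fields, and the stub model are all hypothetical.

```python
# Sketch of a hypothetical public audit API: outsiders submit probe inputs and
# receive decisions plus version metadata, without ever seeing model internals.
# Assumes FastAPI; the endpoint, fields, and stub model are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Probe(BaseModel):
    income: float
    age: int

def stub_score(income: float, age: int) -> float:
    # Stand-in for the regulated model; a real deployment would call it here.
    return round(min(1.0, income / 1_000_000), 3)

@app.post("/audit/score")
def audit_score(probe: Probe) -> dict:
    # Returning version metadata lets auditors tie findings to a model release.
    return {"score": stub_score(probe.income, probe.age),
            "model_version": "demo-0.1"}
```

Served under any ASGI server (uvicorn, for instance), such an endpoint would let auditors submit systematic probes and test for disparate outcomes across groups, which is the regulatory point of exposing it.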
In all cases, our goal should be practical coherence. Law is, and must be, an evolving system: it must absorb the realities of technology without losing sight of values, thereby bridging the gap between abstract rights and technical impact. If privacy is a right, how is it protected when data is fed into an LLM? If free speech is valued, what are the guardrails for AI-filtered content? These are not policy slogans but concrete questions for regulators, scholars, and courts. Ultimately, a rights-conscious approach to AI requires both humility and rigor: humility to admit how quickly technology changes, and rigor to reform our institutions so that rights and responsibilities travel intact into the digital age.
The path forward will not come from hype or fear alone. It will come from grounded, interdisciplinary work – clarifying concepts like digital rights, diagnosing legal misfits, and devising norms that actually influence behavior. We have an opportunity to shape AI’s role not only in terms of jobs or innovation but in terms of justice and dignity. To seize it, we must look beyond mere compliance checklists and simple technical fixes. States must set and promote standards, gradually scaling obligations with risk. More broadly, they must rethink how they understand harm and redress in an algorithmic age.
There is no single right answer yet, but the direction is clear: digital rights and AI should be studied together, so that law and technology reinforce rather than undermine each other. An approach that passes over rights risks leaving behind those with the most to lose. We must move forward soberly, acknowledging the scale of the challenge: investing in judicial and regulatory capacity, and treating AI systems as part of the socio-legal fabric rather than a peripheral novelty. This is a moment for deeper engagement – combining legal abstraction with technical detail – so that digital rights, purposefully defined, have real meaning in the age of AI.
(The purpose of this piece is to contribute a perspective that may help advance the much-needed academic and policy discourse on Artificial Intelligence.)