Artificial Intelligence and the Right to Health: A Normative Framework for Emerging Health Technologies

As AI reshapes health systems worldwide, governance must be grounded in human rights norms to avoid reinforcing inequalities and exclusions that have long defined healthcare.

ARTIFICIAL INTELLIGENCE and digital technologies are rapidly reshaping healthcare and public health worldwide. They offer immense promise for improving care, strengthening public health interventions, and creating more efficient processes. But their use has already led to harms, including discriminatory treatment of minority populations, unreliable clinical tools, and an ever-widening digital divide between wealthy countries and much of the Global South. 

Despite this mix of promise and risk, legal and policy debates often focus narrowly on issues like safety and technical performance, treating the challenges posed by emerging health technologies as primarily technical problems that can be fixed by tweaking algorithms or recalibrating specifications. What is often missing is a broader normative and legal framework for evaluating health technologies that considers not only whether they work, but also how they affect equity, access, and health-related rights.

This article argues that the right to health serves as a foundation for such a framework. The right to health centers critical values, illuminates the promise and risks of emerging health technologies, and offers guidance for their governance. First, I summarize the technologies and their advantages. Second, I address their risks and harms. Finally, I outline the right to health framework and apply it to a use case for digital adherence technologies.

AI in Healthcare: A Rapidly Expanding Landscape

AI and digital technologies now perform a wide range of functions across health systems. While they are often discussed as a single category, they are better understood as a set of distinct, but overlapping, applications.

Diagnostic and clinical decision-support systems analyze medical images and electronic health records to assist clinicians in identifying disease, recommending treatments, and assessing prognosis. These tools are already integrated into specialties such as radiology and pathology.

Predictive analytics and risk modeling systems use machine learning to forecast individual patient risk, such as disease progression or hospital readmission, as well as broader public health trends. These models increasingly shape decisions about resource allocation and intervention strategies.

Patient-facing digital health technologies, such as telemedicine platforms and AI-driven chatbots, enable patients to access health information and care remotely. In some settings, they are expanding access to services that would otherwise be unreachable for some populations.

Public health surveillance and disease control systems use AI-enabled data analytics to monitor outbreaks, forecast epidemics, and support treatment adherence. These tools are increasingly central to national disease responses.

Health systems administration tools, such as ambient AI scribes and workflow optimization platforms, are designed to improve efficiency and reduce administrative burdens on providers.

AI is also increasingly being used in biomedical research and drug discovery and development, but this article focuses on technologies that directly affect individuals in clinical and public health settings.

Taken together, these technologies promise earlier diagnosis, improved treatments, efficient health systems, and more targeted public health interventions. In resource-constrained settings, they may help overcome personnel shortages and other capacity deficits. But their rapid proliferation and deployment also raise important risks.

Emerging Harms and Structural Risks

A growing body of research, including my own, has begun documenting the harms associated with AI and digital health technologies. The risks impact individuals while reflecting deeper structural concerns about the design, deployment, and governance of health systems. 

Biased Data and Unequal Outcomes

Many AI systems are trained on historical health data that reflect existing inequalities in healthcare systems. When those datasets are incomplete or unrepresentative, the resulting algorithms can produce systematically different outcomes across populations. Researchers have shown that such systems can replicate and even amplify disparities. For example, a seminal 2019 study found that a healthcare risk algorithm used on millions of patients in the United States underestimated the health needs of Black patients because it relied on historical healthcare spending as a proxy for medical need. The racially biased algorithm resulted in Black patients receiving substantially less care than similarly situated White patients in U.S. hospitals.

Safety, Reliability, and Transparency Concerns

Some AI tools perform well in controlled studies but less reliably in real-world clinical settings. Researchers have documented cases where diagnostic or predictive algorithms performed significantly worse outside their original development environments. An influential 2021 study of an algorithm used to predict sepsis across hundreds of U.S. hospitals found that the model performed considerably worse in an independent validation than the manufacturer originally reported. The study found that the system generated large numbers of false alerts for clinicians and missed up to 70 percent of patients with the potentially deadly condition. 

Another recent study of computer-aided diagnostic (‘CAD’) tools for tuberculosis used in a community-based screening program in South Africa found “wide variations” in triaging thresholds between different versions of the CAD’s software. This unreliability risked introducing “systematic screening errors” into programs that deployed the popular CAD device.

Safety and reliability concerns are compounded by limited transparency. In my research examining AI-based radiology devices in the European Union, manufacturers disclosed very little information about the products’ training data, validation methods, data protection measures, or their performance across different populations. Without this information, clinicians and regulators cannot meaningfully assess whether AI systems are safe or appropriate in specific contexts.

Barriers to Access and the Digital Divide

About a quarter of the global population, or 2.2 billion people, remained offline in 2025. Yet, most digital health technologies require reliable internet connectivity, access to smartphones or other digital devices, and a level of digital literacy that cannot be assumed across all populations. In research I conducted for the Board of the Global Fund, members of vulnerable communities, civil society organizations, and Global Fund personnel emphasized that access to digital technologies and the ability to use them safely and effectively are shaped by a wide variety of factors, including gender, income, education, and geography. The study participants also highlighted the prohibitive costs of some emerging technologies, including those associated with patented systems or expensive hardware or software. 

So while digital platforms may increase access to health services and information for some, the persistent digital divide means that growing reliance on digital technologies may deepen existing inequalities in access, especially for vulnerable populations and communities in low-resource settings.

Data Privacy, Digital Surveillance, and Trust

Many AI systems and other digital health technologies rely on the large-scale collection and analysis of sensitive personal information, including clinical, behavioral, and location data. A few years ago, I explored the use of digital technologies in the global tuberculosis response with a computational social scientist. Our paper highlights digital tools for promoting treatment adherence, such as video observation platforms and ingestible electronic sensors. We found that these technologies generate extensive digital records, not only about whether patients take their medication, but also about their daily behaviors, where they live, and how they move through their communities. 

My research for the Board of the Global Fund also revealed that concerns about privacy, digital surveillance, and the misuse of personal data often discourage members of vulnerable populations from using digital platforms in the first place. We found that patients fear stigma, discrimination, or even criminal prosecution from the exposure or misuse of sensitive health information they share digitally.

Taken together, these findings underscore that without robust safeguards for digital privacy and security, emerging health technologies may cause real harms that erode patient trust, thereby undermining the very objectives they are meant to advance.

The Right to Health Framework

The risks associated with AI and digital technologies for health require an assessment framework that goes beyond technical performance and situates emerging technologies within systems of law, rights, and governance. The right to health provides the foundation for such a framework. Grounded in international and domestic law, the right to health framework is both a legal and an analytical tool. At the global level, the right is established in multiple treaties, most prominently the International Covenant on Economic, Social and Cultural Rights.

Regional human rights treaties, such as the African Charter on Human and Peoples' Rights, also recognize the right to health. Perhaps most importantly, the right is enshrined or judicially recognized in over 135 national constitutions worldwide. Domestic courts have developed a robust jurisprudence interpreting and applying it, often informed by its normative content under international law.

The right to health framework comprises both substantive dimensions and cross-cutting principles. At its core is the AAAQ framework, which evaluates health systems across four dimensions:

  • Availability—are health facilities, goods, and services present in sufficient quantity?

  • Accessibility—are they accessible to all without discrimination—physically, economically, and informationally?

  • Acceptability—do they respect medical ethics, cultural norms, and patient autonomy?

  • Quality—are they safe, effective, and scientifically appropriate?

The framework’s cross-cutting principles complement these substantive dimensions. They include meaningful participation, equality and non-discrimination, attention to vulnerable groups, and access to remedies and accountability.

The right to health framework broadens the analytical framework for emerging health technologies and offers concrete guidance for their governance.

Applying the Framework: Digital Adherence Technologies

AI-enabled digital adherence technologies, often referred to as DATs, are used for tuberculosis, HIV, mental health disorders, and other chronic diseases, like hypertension, that require long treatment regimens. These systems use mobile phones, video monitoring, “smart” medication devices, and ingestible sensor-enabled pills to track whether patients are taking their medications. Increasingly, these platforms incorporate machine learning, computer vision, and predictive analytics to analyze adherence data, identify risk patterns, and automate provider interventions. In essence, DATs are data-intensive, AI-powered health-monitoring systems that also serve as clinical decision-support tools.

Availability, Accessibility, Acceptability, and Quality

DATs may expand health systems’ capacity to support patients, particularly in settings with limited human resources, thereby advancing the availability of care. But if they are introduced at the expense of community health workers or in-person services, they may weaken health systems over time. A rights-based approach emphasizes that new technologies should complement, not replace, existing forms of care and be integrated in ways that strengthen overall health system capacity, such as by redirecting cost savings into community-based care.

The accessibility dimension illuminates both opportunities and risks. DATs can reduce geographic barriers to services, particularly for patients in remote areas. At the same time, they depend on access to digital devices, internet connectivity, and digital literacy. Intellectual property protections and proprietary systems may also limit their affordability. Without careful policy choices, these technologies risk deepening existing inequalities in care along the digital divide.

Acceptability raises more complex questions. For some patients, remote monitoring may offer greater privacy and reduce stigma. However, digital adherence systems rely on video monitoring, ingestible sensors, and other forms of digital surveillance, generating copious amounts of sensitive personal data. Without appropriate safeguards, DATs may undermine patient autonomy, create a sense of coercive monitoring, and expose patients to harms arising from data breaches or the misuse of their data by authorities.

Algorithmic tools in digital adherence systems may allow providers to identify patients at risk of treatment interruption and intervene to improve health outcomes. In this sense, DATs may promote the quality dimension of the right to health framework. However, systematic reviews of DATs have not consistently found them to be more effective than other approaches, such as self-administered treatment without observation. This raises the question of whether DATs are being integrated into health systems primarily to reduce costs, even when they are no more effective than less intrusive alternatives.

In addition, if the algorithms used in predictive models are inaccurate or trained on biased datasets, they may lead to inappropriate medical or behavioral interventions, thereby diminishing the quality and reliability of care. 

Cross-Cutting Principles

The cross-cutting principles of the right to health framework provide concrete guidance for governing digital adherence technologies. The principle of meaningful participation points to a policy approach that involves patients and their representatives in the development and implementation of DATs within health systems. This may include facilitating patient involvement in technology design processes, requiring the representation of subpopulations in clinical trials and operational research, or mandating their involvement in agency rulemaking governing the use of DATs.

The principle of equality and non-discrimination, together with the focus on vulnerable groups, elevates concerns that digital monitoring systems may be less accessible to marginalized populations while also exposing them to invasive or coercive interventions and risks associated with digital surveillance and the collection of sensitive personal data. This perspective also points toward governance approaches that prioritize community-informed technology design and underscores the importance of investments in digital literacy and ensuring patient access to devices, connectivity, and the infrastructure on which AI-based tools operate.

Finally, the remedies and accountability principle requires that patients using DATs have meaningful avenues for redress for harms and rights violations, including those arising from data breaches or inappropriate interventions triggered by algorithmic decision-making. This includes access to courts and other adjudicatory mechanisms capable of providing effective remedies. In practice, this may involve statutory private rights of action, collective remedies like class actions, or access to regulatory bodies with the authority to investigate harms and enforce rights.

Conclusion

AI will continue to shape health systems worldwide. Emerging governance approaches tend to focus too narrowly on technical performance and market deployment. In doing so, they risk reinforcing existing inequalities while introducing new forms of digital exclusion and surveillance. A rights-based approach broadens the focus to consider whether technological innovations serve the public interest. It demands attention to equity and community participation as central criteria for evaluating and governing emerging health technologies. The right to health also requires effective legal pathways for redress and accountability for harms. In this way, the right to health framework offers a powerful tool for ensuring that AI and other digital technologies advance, rather than undermine, health and human rights. 

The Leaflet
theleaflet.in