AI Technologies: Putting human rights at the forefront

ARTIFICIAL Intelligence (AI) is no longer in the realm of science fiction; it is increasingly being deployed across industries and within public systems. In the last couple of decades, AI-driven technologies have been adopted not just by industries but also by governments across the world.

The fact that powerful computers and AI-driven technologies offer significant benefits to society is not disputed. However, if AI systems are not understood and regulated, they can undermine many established human rights principles and pose serious threats to the civil liberties enshrined in our Constitution.


AI-induced technology and human rights challenges


AI has the potential to seep into many societal processes – education, employment, health, policing, the military and governance. AI can impact a broad array of human rights, including the right to privacy, freedom of expression, participation in cultural life, the right to remedy, and the right to life.

As I discussed in an earlier piece, Artificial Intelligence: An explainer for beginners, AI-induced technologies could have serious implications for society through their use in surveillance, through biased and discriminatory data design that enables profiling, and through the absence of clear provisions on accountability and liability.


Under the Constitution of India, the State has a duty to prevent discriminatory practices and the use of methods that infringe the fundamental rights of citizens. It will be interesting to see how this jurisprudence develops to protect our fundamental rights from technological advancements in the form of AI.

In 2015, the Government of India challenged the existence of the right to privacy as a fundamental right before the Supreme Court in order to continue with the Aadhaar project. Aadhaar posed several privacy hazards: risks to data security, threats to bodily integrity arising from the use of biometrics, and personal data mining that further enables mass government and corporate surveillance.

In Justice KS Puttaswamy (Retd.) v Union of India, a nine-judge bench of the Supreme Court of India on August 24, 2017, unanimously held the right to privacy to be a fundamental right intrinsic to the rights to life, liberty, freedom and dignity.

However, on September 26, 2018, in Justice K S Puttaswamy (Retd) and Another v Union of India, the Apex Court, by a 4:1 majority, upheld the constitutional validity of Aadhaar on the basis that the project had privacy and security safeguards built into the system. Yet, in light of the right to privacy, Section 57 of the Aadhaar Act, 2016 was held unconstitutional. This section had enabled the State, body corporates and individuals to seek an individual's information for any purpose; the Act now only allows the government to use Aadhaar for social welfare schemes. Justice Chandrachud, in his dissenting opinion, said that Aadhaar failed “to protect the individual right to informational privacy”. The right to privacy and Aadhaar judgements are the beginning of the legal discourse on the nuances evolving out of the infinite prospects of using AI and data technologies.




How will AI impact human rights?


Human rights are the fundamental rights guaranteed to every human being and are codified under various national and international laws. The United Nations Guiding Principles on Business and Human Rights provide that both governments and body corporates are required to respect and practise human rights, although governments have additional obligations to protect and fulfil human rights.

The present AI technologies have created a new form of repression and raised the vulnerability of the marginalised sections of society. The ability of AI to identify, classify and discriminate magnifies the potential extent and scale of human rights abuses. Among many others, the rights discussed below are the most susceptible to AI-induced technologies:

1. Right to equality and non-discrimination

  • Article 14 of the Constitution of India – Equality before law
  • Article 15 of the Constitution of India – Prohibition of discrimination on grounds of religion, race, caste, sex or place of birth

The use of AI in the criminal justice system could lead to discrimination, with human biases contaminating AI design. Law enforcement should not base its decisions to detain or prosecute any person entirely on AI-generated information, as the underlying algorithms could carry inherent biases against a religion, caste, race or sex.

The United States criminal justice system has started using software to predict future criminals; such tools now function across the country to inform early detainment decisions, the assignment of bail, and criminal sentencing. Many researchers have contended that this software is biased against people of colour.


AI-induced technology is already in use to recruit employees at multinational corporations. AI-based technology platforms mine big data to make quick, intelligent decisions and automate repetitive hiring processes, but the same could be disruptive when the AI is designed with biases towards a gender or a particular race or religion. E-commerce giant Amazon ditched its AI recruitment tool after the company found that it was gender biased.

Data is used to train AI machines and technologies through algorithms. Biased data therefore leads to a biased algorithm, which ultimately produces biased AI technology. Unfortunately, biased data is the rule rather than the exception: because data is produced by humans, it carries all the natural human biases within it. For the time being, there is no cure for bias in AI systems.
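To illustrate the mechanism described above, here is a minimal sketch, with entirely hypothetical numbers, of how historical bias in training data can propagate straight into an automated hiring decision. No real recruitment system is this simple; the point is only that a model learning from biased past decisions will reproduce them.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (gender, hired).
# Past human decisions favoured male candidates.
historical_data = [("M", 1)] * 80 + [("M", 0)] * 20 + \
                  [("F", 1)] * 30 + [("F", 0)] * 70

def train(records):
    """'Train' a naive model by estimating P(hired | gender) from past decisions."""
    hired, total = defaultdict(int), defaultdict(int)
    for gender, label in records:
        total[gender] += 1
        hired[gender] += label
    return {g: hired[g] / total[g] for g in total}

def predict(model, gender, threshold=0.5):
    """Recommend hiring when the learned rate clears the threshold."""
    return model[gender] >= threshold

model = train(historical_data)
print(predict(model, "M"))  # True: the model reproduces the past bias
print(predict(model, "F"))  # False: rejected regardless of individual merit
```

Note that no malicious design is needed: the model simply learns the pattern present in its data, which is why biased data yields biased decisions.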

Major AI players, including Google, Microsoft, and DeepMind, have developed ethical principles to guide their AI initiatives.

2. Right to information, freedom of expression, political participation and livelihood

  • Article 19 of the Constitution of India – Protection of certain rights regarding freedom of speech, etc.

The use of AI in surveillance infringes the right to privacy and has a chilling effect on freedom of expression. Round-the-clock surveillance instils in citizens the fear of being monitored, raising the likelihood that people will not exercise their basic fundamental rights, including freedom of speech and expression.

AI-driven digital robots are the new tool for online harassment of marginalised and dissenting voices. Hard-to-recognise bot accounts masquerade as real users and send automated responses to targeted accounts, or to anyone who shares a certain opinion, infringing on their freedom of expression.

It has been argued that in many recent elections around the world, political parties have been using AI to create and spread misinformation about their political rivals, threatening democratic values and challenging the notion of free and fair elections.

Access Now in a recent report said that “AI-powered surveillance could be used to restrict and inhibit political participation, including by identifying and discouraging certain groups of people from voting. Use of facial recognition in polling places or voting booths could compromise the secrecy of the ballot … the mere signification of surveillance could be sufficient to convince voters that their ballots are not secret and could influence their voting decisions accordingly”.

The predictive power of AI is already in use to predict and help prevent armed conflict. If the same approach were used pre-emptively by governments to predict and prevent public demonstrations or protests before they take place, it would be a major blow to the right to protest and dissent against the government.

Another major impact of AI-induced technology is on the labour market, as many industries, primarily in the manufacturing sectors, are adopting automation at a large scale. Automation of jobs has posed a real threat to the right to work and livelihood and has already resulted in job loss in many sectors.

In a recent report, the International Labour Organization said that 51.8 per cent of total job activities in India can be automated. Another report, by the McKinsey Global Institute, has predicted that up to 12 million women in India will lose their jobs by 2030 due to automation. In the near future, most jobs involving repetitive tasks or low skills could be automated, risking an unemployment crisis.

3. Right to life, livelihood and privacy

  • Article 21 of the Constitution of India – Protection of life and personal liberty: No person shall be deprived of his life or personal liberty except according to procedure established by law.

AI-induced autonomous machines are replacing traditional weapons and are under development in many countries today. Autonomous weapons operate without human control, attacking their targets on the basis of the data and algorithms they were designed with. In the near future, these weapons are likely to suffer from AI's inability to deal with nuance or unexpected events, putting many lives at risk.

For example, in a conflict situation, an autonomous AI weapon trained to attack combatants could attack a civilian population if civilians have a similar appearance or are located at a combatant's position. AI could thus cause the deaths of innocent civilians, and large-scale destruction, that a human operator might have been able to avoid. Every human being has a right to life and should be guaranteed a safe and secure environment, one protected from weapons of mass destruction.



Privacy is a fundamental human right and is essential to human dignity. In the age of digital technology, information is the new gold, and it must be guarded to protect one's fundamental right to privacy. Even though the Supreme Court of India has recognised privacy as a fundamental right, there is as yet no legislation protecting an individual's privacy and digital data. AI-induced technologies are trained to access and analyse large sets of personal data. This data is collected from various digital platforms, at times without consent, and can be used by AI software even to predict a person's behaviour.


Regulating AI – How are governments preparing? 


Many countries, including India, have either initiated the legislative process or have adopted a policy to control and regulate the use of general public data.

France and Mexico have highlighted the importance of creating data policies and resilient open data infrastructure in their national policies. The United Kingdom's AI review proposes a framework of 'data trusts' to enable confidence in data sharing between organisations, whereas the German Government's AI strategy advocates voluntary data sharing, albeit in a protected environment.

India’s NITI Aayog, in its ‘National Strategy for AI’, has suggested establishing a data protection framework with legal backing, establishing sectoral regulatory frameworks, benchmarking national data protection and privacy laws against international standards, encouraging self-regulation, investing and collaborating in privacy-preserving AI research, and spreading awareness.



The Srikrishna report proposes comprehensive data protection legislation. However, while the report identifies data protection principles, it fails to put individual rights and liberties ahead of the digital economy. While framing the data protection law, the government must ensure the following:

  1. Government use of personal data vis-à-vis AI should be governed by open procedures and be transparent and accountable, i.e., all such governmental acts must come under the Right to Information Act.
  2. The government should endorse more research on the societal impacts of AI, including its effects on fundamental rights and civil liberties.
  3. Private sector enterprises should adhere to ethical policies, and potential applications of AI should be benchmarked against constitutional principles.


Toronto Declaration


Human Rights Watch, Amnesty International, Access Now and other rights and technology groups released a statement in May 2018, known as the Toronto Declaration, articulating norms to safeguard human rights standards in the age of AI for both the public and private sectors.

The Toronto Declaration is a set of new human rights principles focused on AI and its impacts on human rights, including the rights to dignity and non-discrimination, the right to privacy, freedom of expression and, most importantly, the right to life.

“Digital rights, privacy rights, access rights, these are not optional rights – they are fundamental rights. We shouldn’t have to beg, plead and become technical wizards to exercise our fundamental rights”, said Zeynep Tufekci, a keynote speaker at RightsCon, the annual global meeting on tech and human rights at which the Toronto Declaration was launched in 2018.


Putting human rights at the forefront of the development and application of AI technologies, the Toronto Declaration is also the first set of international principles framed to guide policy on how to build and regulate AI for a future society in which technology does not harm our basic fundamental human rights.

The government needs to promote a balanced relationship between the utility of AI technologies and human rights and civil liberties. Only technologies that ease the work of human beings for good, without making them vulnerable, should be promoted.

The Leaflet