John McCarthy, Emeritus Professor of Computer Science at Stanford University, coined the term “Artificial Intelligence” (AI) in 1956, defining it as “the science and engineering of making intelligent machines” during a summer workshop known as the Dartmouth Summer Research Project on Artificial Intelligence. Today, AI has become the umbrella term for computational and synthetic intelligence.
AI can be defined as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. A more modern definition of AI is “the study and design of intelligent agents”, where an intelligent agent is a system that perceives its environment and takes actions that maximise its chances of success.
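The “intelligent agent” definition above can be sketched in code. The following is a minimal, illustrative example (the `ThermostatAgent` name and its behaviour are assumptions for illustration, not from any specific system): the agent perceives one aspect of its environment, the temperature, and chooses the action that moves it toward its measure of success.

```python
class ThermostatAgent:
    """Toy intelligent agent: perceives room temperature and acts to keep it
    near a target, i.e. chooses actions that maximise its chance of success."""

    def __init__(self, target=21.0):
        self.target = target

    def act(self, perceived_temp):
        # Perception comes in; the agent picks the action that moves the
        # environment toward its goal state.
        if perceived_temp < self.target - 0.5:
            return "heat"
        if perceived_temp > self.target + 0.5:
            return "cool"
        return "idle"

agent = ThermostatAgent()
print(agent.act(18.0))  # heat
print(agent.act(25.0))  # cool
print(agent.act(21.2))  # idle
```

A thermostat is, of course, at the trivial end of the agent spectrum; the definition scales up to systems whose perceptions and action spaces are vastly richer.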
In performing intelligent tasks, AI systems draw on insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimisation and logic. The fundamental and global nature of the AI revolution means it will affect, and be influenced by, all countries, economies, sectors and people.
What is the current state of development of AI?
AI experts David Chalmers and Hans Moravec have argued that human-level AI is not only theoretically possible but feasible within the 21st century. With advances in a range of computing and information technologies, AI is no longer just a futuristic concept; AI systems are now used in diverse applications.
AI systems, which increasingly focus on self-learning, reducing uncertainty and concept formation, are used for everything from assessing job applications, predicting loan defaults and diagnosing disease to recognising voice and handwriting. AI has become a versatile tool for daily operations and tasks in multinational corporations, private sector enterprises and governments across the world.
Technology giants, including IBM, Google, Amazon, Tesla and Facebook, are exploring unique capabilities that would let AI operate much like human beings.
The dangers of AI?
Nick Bostrom, in his book on the possible dangers of AI, writes that software using AI and machine learning techniques, though it has some ability to find solutions the programmers had not anticipated, functions for all practical purposes like a tool and poses no existential risk. “We would enter the danger zone only when the methods used in the search for solutions become extremely powerful and general: that is, when they begin to amount to general intelligence – and especially when they begin to amount to superintelligence.” We are still far from developing AI with superintelligence.
At present, there are only a few but important implications of AI-induced technologies for society, primary among these being:
Surveillance,
Bias and discrimination, and
Accountability and liability.
a. AI in Surveillance
AI technologies are used extensively by corporations and governments to collect individuals’ personal data through the internet and social media, which can then be used to monitor individuals’ behaviour and choices, sometimes without even informing them.
In 2014, as part of a mass surveillance effort, the Chinese government launched a data-driven social credit system which automatically generates a rating for each Chinese citizen, business and authority based on whether the government and their fellow citizens consider them trustworthy. This rating system now affects everything from loan approvals to permission to board flights.
b. Bias and Discrimination in AI
AI works on strategies designed and developed through algorithms – step-by-step instructions that tell a computer or machine how to perform the required task. AI systems are trained on existing data, and because that data reflects existing social biases, those biases get reproduced and codified in the rules the AI system learns.
For example, an AI system designed for recruitment can be biased against women if the data used to train the algorithm is biased. Similarly, an AI system used in policing in the US is likely to be biased against black men, because existing prison and crime data are skewed against people of colour.
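The mechanism by which historical bias gets codified can be shown with a toy sketch. The data and the naive “model” below are entirely hypothetical, constructed only to illustrate the point: a system that learns from skewed hiring records simply reproduces the skew as a rule.

```python
# Hypothetical illustration: a naive model trained on biased hiring records.
# The bias is baked into the data: equally plausible candidates, but group B
# was hired far less often in the past.
from collections import defaultdict

history = ([("A", 1)] * 80 + [("A", 0)] * 20 +   # group A: 80% hired
           [("B", 1)] * 30 + [("B", 0)] * 70)    # group B: 30% hired

def train(rows):
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in rows:
        hires[group] += hired
        totals[group] += 1
    # The "learned" rule: recommend hiring iff the group's historical
    # hire rate exceeds 50% -- the past bias becomes the future policy.
    rates = {g: hires[g] / totals[g] for g in totals}
    return lambda group: rates[group] > 0.5

model = train(history)
print(model("A"))  # True  -- group A candidates recommended
print(model("B"))  # False -- group B candidates rejected; bias codified
```

Real recruitment and policing systems use far richer features, but the failure mode is the same: nothing in the training process distinguishes genuine signal from historical discrimination.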
c. Accountability and Liability
The biggest legal issue arising from the spread of AI systems in everyday operations is the question of liability and accountability when an AI system makes a mistake. For example, when AI is used to screen for diseases or in medical procedures, it is not clear whether liability for an error rests with the developer of the AI or with the medical practitioner.
In Hong Kong, an investor lost 20 million dollars because of an AI error. Closer to home, in Telangana, an AI system that was rounding off evaluation data declared wrong class 12 exam results, leading to 25 students committing suicide.
In a report released on May 21, 2019, the International Development Research Centre and Oxford Insights placed India at number 19 out of 194 countries in their AI readiness ranking. The ranking assessed countries on governance, infrastructure and data, skills and education, and government and public services, as measures of how well prepared they are to manage the potentially transformative impacts of AI.
The popular imagination of a loss of human control and an AI technological singularity may be far-fetched. Yet AI is entering our social systems, including banking, medical facilities, consultancy services, education and scientific research, in a subtle but significant way.
1 David J. Chalmers, “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies, 17 (9-10): 7-65 (2010).
2 Hans Moravec, Mind Children: The Future of Robot and Human Intelligence (Harvard University Press, Cambridge, MA: 1988).
3 Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, Oxford, 2017 (Reprint)).