Without any legal safeguards in place, the widespread deployment of this technology by the Indian State makes it a tool to gain collective control over society.
In the 1960s, American mathematician and computer scientist Woodrow Wilson Bledsoe developed a mechanism – a system of measurement, really – that would sort faces into categories based on their identifiable features. After decades of exponential development, that mechanism has evolved into what is now called facial recognition technology [FRT], perhaps the most sophisticated surveillance tool ever made. FRT is a form of biometric identification system used to identify or verify the identity of an individual by analysing various differentiating facial features.
However, like any technology, FRT has its pros and cons. It can be used to identify criminals, expedite security checks and find missing persons; it can also be misused to impinge on one's privacy and freedom.
After the global internet revolution and widespread digitisation, companies have been collecting data to segment their target users and make profits, at times by selling the data to other entities that can potentially [mis]use it. Facebook revealed its DeepFace recognition software to tag photos in 2014. In 2015, both Microsoft and Android unveiled facial recognition features in their products, followed by Apple in 2017, enabling users to log into their devices through FaceLock/FaceID. The same mechanism, when fed with big data, functions as FRT surveillance.
FRT uses biometrics to map facial features from a photo or video, which are then run against an existing database to identify a matching profile, with the use of artificial intelligence [AI] and machine learning [ML]. FRT is distinct from facial analysis/characterisation, which examines an image and characterises it by gender, age or race.
Although FRT systems run on different algorithms, a few basic principles are common to all of them. First, a picture of a face is extracted from a photo or video. Second, the software reads the geometry of the face – such as the eye-to-eye and forehead-to-chin distances – and creates a 'faceprint'. Third, the faceprint is run against a database of images. Finally, FRT determines whether the database has a match.
The faceprint, also called a 'face template', is a mathematical representation designed to capture the details that distinguish one face from another. The technology allows verification (1:1) as well as identification (1:n) – the former confirms a person's claimed identity, while the latter searches the entire database to identify the person in the subject image.
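The two modes described above can be sketched in a few lines of code. The sketch below assumes an upstream model has already converted each face into a numeric 'faceprint' vector; the names, values and threshold are illustrative, and matching then reduces to comparing vectors – a minimal illustration, not a production FRT system:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two faceprint vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, claimed_template, threshold=0.9):
    """1:1 verification: does the probe face match the claimed identity?"""
    return cosine_similarity(probe, claimed_template) >= threshold

def identify(probe, database, threshold=0.9):
    """1:n identification: search the whole database for the best match."""
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name  # None when no template clears the threshold

# Toy three-dimensional faceprints (real systems use 128-512 dimensions).
database = {
    "person_a": [0.90, 0.10, 0.20],
    "person_b": [0.10, 0.90, 0.30],
}
probe = [0.88, 0.12, 0.21]  # geometrically close to person_a's template

print(verify(probe, database["person_a"]))  # True
print(identify(probe, database))            # person_a
```

The threshold is the policy lever: lowering it makes the system flag more matches (and more false positives), which is one reason accuracy claims about FRT deployments are hard to evaluate from the outside.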
There can be several use cases of FRT, but they can broadly be classified into three types: commercial, governmental and law enforcement.
From a consumer perspective, FRT is useful for securing gadgets, enhancing security and speeding up payments. For the State, its possible uses are endless. However, the technology becomes problematic when it is used against the very people it was meant to benefit. For instance, the public might not know how their data is being used (unlawfully) for commercial or law enforcement purposes. American cyber security expert Megan Gates has written that "[p]roblems arise…when that citizen-to-state legibility is not paired with equal visibility into how the information collected via surveillance is used."
Privacy is important, and individuals would ordinarily want control over their data – in this case, the faceprint, which is easy for agencies to collect and difficult for the public to withhold. This gives rise to a highly contentious debate on whether to regulate, not regulate, or ban FRT.
Research has shown that FRT poses "complex ethical and technical challenges." Due to its ability to monitor and track the citizenry, it is prone to misuse, bias and discrimination. It has been argued that the use of FRT by police poses dangers to human rights that outweigh any purported law-and-order benefits. A report authored by Jai Vipra, Senior Resident Fellow at the Vidhi Centre for Legal Policy, said that "[t]he evident over-policing of Muslim areas can result in the use of FRT in policing in Delhi disproportionately targeting Muslims."
FRT can be, and has been, used by governments to identify and target protestors, and it has the potential to track one's movements, since the human face is open to the public sphere. It functions similarly to the tracking of vehicles, with the faceprint replacing the number plate. With respect to automated FRT, Ameen Jauhar, Senior Resident Fellow at Vidhi, writes: "Even if one was to assume the effectiveness, such radical and disruptive technologies cannot and should not be utilised in an unregulated and arbitrary manner."
Table I: World's most surveilled cities (cameras per sq. mile)
India has continuously been deploying new technologies and is one of their biggest markets. Its deployment of FRT ranges from arresting alleged criminals to disbursing pensions. Delhi is the world's most surveilled city in terms of cameras per square mile, while Chennai and Mumbai rank third and 18th, respectively. Across India, at least 20 States/union territories either seek to deploy or have already deployed FRT systems, with Maharashtra, Telangana, Delhi, Tamil Nadu, Andhra Pradesh and Gujarat leading the chart.
In Hyderabad, activist S.Q. Maqsood filed a suit before the Telangana High Court challenging the state's FRT deployment – the first suit of its kind in India. The petition inter alia makes the point that FRT has been deployed without an enabling law, procedural safeguards, or probable/reasonable cause. The Supreme Court, in its Puttaswamy I and II verdicts of 2017 and 2018 respectively, held that the State must establish legality, legitimate aim, proportionality and procedural safeguards when restricting a fundamental right – all of which the petition alleges are absent.
The Internet Freedom Foundation [IFF], which launched Project Panoptic with the aim of bringing transparency and accountability to FRT in India, helped Maqsood file the petition. Anushka Jain, Associate Counsel at IFF, speaking to The Leaflet, said that the deployment of FRT is unlawful insofar as it fails to satisfy the four-fold requirement of Puttaswamy. It becomes even more problematic, she said, because India has neither a data protection law nor any statutory protection against government surveillance. She vehemently opposed the use of FRT by law enforcement agencies, citing its inaccuracy, bias, discriminatory tendencies, and potential for [mis]use in mass surveillance.
In a co-authored research paper, Faizan Mustafa, Vice-Chancellor of the NALSAR University of Law, Hyderabad, aptly wrote that "a technology is just a tool — the end result depends on how the tool is used, who uses it, and what is it used for?" After analysing the technology, the paper calls for an immediate moratorium on the use of live FRT, as there are "justified concerns about surveillance and fundamental right violations."
Notably, courts in India have not reached a consensus even on the installation of CCTV cameras. For instance, after two recent divergent decisions on the issue – one judge declaring such installation unlawful and another approving it – the Madras High Court has referred the question to its Chief Justice for potential adjudication by a larger bench. Further, a petition has been filed before the Delhi High Court challenging the installation of CCTV cameras in government school classrooms, with live streaming of the footage to a third party.
As per one report, 109 countries have either deployed or approved the use of FRT for surveillance. Within the U.S., FRT is banned in at least 15 states, yet 19 U.S. government agencies were found using it – and there is no federal legislation governing its use. In Europe, a majority of countries have deployed it, even as the European Parliament has called for a ban on FRT. The world is thus divided into two camps: countries that allow the use of FRT with proper safeguards, and those that allow it without.
In the U.S., the software company Clearview AI came under scrutiny when it was revealed to be building a tracking and surveillance tool from more than three billion images scraped from the internet. The company was then sued in ACLU vs. Clearview AI (2020) for alleged violation of the Illinois privacy law, the Biometric Information Privacy Act. As of October 2021, the company was reported to have developed stronger tools, with its database extending to over 10 billion images.
Floyd Abrams, the American lawyer representing Clearview, believes that FRT should be utilised by the appropriate agencies for post-crime investigations and identity-theft detection, and argues that Clearview is not at fault since it heavily "self-regulates" in the absence of any federal law on FRT – its database being accessible only to enforcement agencies. However, reports suggest that the company's clientele spans the sports, entertainment and education industries, and even wealthy individuals. Thus, self-regulation might not be the solution for a business this "lucrative".
In Sweden, the police authority was slapped with a €250,000 fine for unlawfully identifying individuals using Clearview, while countries like Australia and Canada have held Clearview to be in violation of their respective privacy laws.
At the other end, China is building one of the most sophisticated surveillance networks in the world, with millions of cameras in public places. Kai Strittmatter, a German journalist who has studied China for three decades, says that China has amassed heavy data about its citizens "to punish people for even minor deviations from expected norms." Surveillance projects like the Golden Shield Project, Safe Cities, Skynet, Smart Cities and Sharp Eyes enable China to scan or track almost its entire population. The infamous Skynet, launched in 2005, is a police system combining video surveillance, FRT and AI, whereas Sharp Eyes aimed to surveil 100 per cent of public space by 2020.
Based on the surveillance data, the Chinese government implements a "social credit" system, which categorises individuals on the basis of their social behaviour and rates their "trustworthiness." Under the guise of COVID-19 prevention, the world has been flooded with Chinese surveillance equipment, since governments across the world want to gain more control over their subjects – a lesson they may have learnt from China.
India is keen on deploying new technologies to govern its massive population. At times, this technology turns against the very citizens it was meant to benefit and leads to [un]intended consequences. For instance, Aadhaar, Aarogya Setu and Pegasus all have one point in common: surveillance by the government. FRT is just another measure to gain collective control over society, however laudable its purpose may be.
The government should realise that national security and public order are insufficient grounds for deploying surveillance tools like FRT on the entire population without any statutory or legal framework in place. With FRT, the State cultivates the ability to trace, track or surveil anyone and everyone in real time. Is this not the dystopian world prophesied by English writers Aldous Huxley and George Orwell? It is – unless proper statutory safeguards are put in place.