
Regulating Artificial Intelligence in the judiciary and the myth of judicial exceptionalism

With the continued adoption of artificial intelligence in courts of law, can efficiency and effectiveness trump the concerns of legitimacy and justice? 


Academics and researchers gathered recently to discuss the findings of a new report on algorithms and their possibilities in the judicial system. Prepared and presented by DAKSH, a research centre that works on access to justice and judicial reforms, the report has been described as a superlative introduction to the various problems that ail our courts and to how the use of algorithms and allied technologies complicates them.

Artificial Intelligence (“AI”) systems have seen increased use in the Indian justice system with the introduction of the Supreme Court Vidhik Anuvaad Software (SUVAS), which translates judgments from English into other Indian languages, and the Supreme Court Portal for Assistance in Courts Efficiency (SUPACE), which assists judges in conducting legal research.

However, such systems are shrouded in secrecy, as their rules, regulations, internal policies and functioning have not been properly documented or made publicly available. Since these systems directly impact the efficiency and accessibility of the justice system in India, a framework that promotes accountability and transparency is warranted.

The new report examines the various domains of the judicial process where AI has been or can potentially be deployed, including predictive tools, risk assessment, dispute resolution, file management, and language recognition. It elaborates on the various ethical principles of regulating AI in the judicial space, and enumerates the challenges to regulation as observed in foreign jurisdictions. It also suggests several institutional mechanisms that would aid in regulating AI and making it a force for good.

The panel comprised Dr Sarayu Natarajan, founder of the Aapti Institute; Prof Naveen Thayyil, associate professor in the Department of Humanities and Social Sciences at IIT-Delhi; and Jhalak Kakkar, executive director of the Centre for Communication Governance at NLU-Delhi. The discussion was moderated by Sandhya PR, senior research fellow at DAKSH.

The discussion traversed the domains of algorithmic accountability and the ethics of deploying such tools in a judicial system that seldom stays on an even keel.

Regulating technology

Dr Natarajan praised the report for its comprehensive overview of the subject. She stated that the use of algorithms in availing judicial remedies should be understood in relation to social categories such as caste, religion and economic background, which shape access. AI runs the risk of further alienating or marginalizing these groups as far as access to justice is concerned.

Prof Thayyil said he believes AI will shape the course of the Indian judiciary over the coming two or three decades, and that this impact will escalate as the judicial system expands its use of such technologies across various facets of its functioning. Regulation of these technologies is therefore crucial.

At present, no clear guidelines are available on the control and effective management of AI and other tools in the justice system, so professionals would have to draw on the experience of other countries to adopt best practices.

To evaluate the desirability and degree of control required, one needs to examine the impact of such technology in the real world: its effectiveness, the avoidance of bias, ethical considerations and questions of access, among others.

To measure such impact, Prof Thayyil preferred the lens of regulatory ethics. He expressed strong faith in the parameter of legitimacy, which is usually ignored during impact assessments of such tools in favour of the more popular parameters of effectiveness and efficiency. He also stated that an ethics-based scrutiny of these systems would have to go beyond their procedures and into the norms and values that inform them.

Creating accountability

Notably, all panellists were cautious about stating which specific parts of the judicial process would be best optimised by such technologies, or were most likely to see them trialled. They explained this caution by pointing to the variety of parameters and access issues that must be considered before deployment.

The lack of public consensus and the widespread distrust of AI would have to be addressed through public consultations and reviews by industry experts.

Sandhya referred to the lack of explainability in many of the tasks these tools are touted to accomplish. Explainability refers to the features of an AI system that affect the capacity of humans to understand and trust its results; its absence deeply undermines legitimacy. Dr Natarajan expressed concern about the impact of technological intervention on the worst off among us.

Kakkar pointed out another challenge that complicates the deployment of such technologies: they are usually developed by private parties and then deployed by the State, which makes it difficult to ensure their accountability and transparency.

Hindering justice

AI systems learn from the data fed to them, and this could perpetuate the discriminatory tendencies and practices already present within the judicial system. The need for transparency, Kakkar emphasized, was therefore crucial, and it could be refined and enforced by subjecting the question to public scrutiny and expert audits.

Prof Thayyil echoed these views and commented on the perception that the use of technology increases efficiency. The reality, he suggested, may be the opposite: by reducing access and introducing bias, such technology may in fact decrease efficiency.

There is also the argument that such technologization will become the norm in the near future, making a return to a non-AI system difficult. Sandhya addressed the lack of transparency and accountability of such systems by referring to them as black boxes.

Developing policies on AI tools in India would have to return to the basics of an open justice framework, so that such technologies cohere with the ends of justice being contemplated. Such a framework would necessitate disclosure of the functioning of, and guidelines on, such technology, and would also subject it to effective control.

A framework to regulate AI?

A cautious approach to such questions was reiterated by Kakkar, who stated that designing policies and managing data, as means to regulation, were inherently complex problems. The Indian experiment with regulation has so far been mixed, she suggested.

Since regulators function under legislation, the crucial question is whether it is too early or too late for a country like ours to have regulatory mechanisms for AI in general.

If such a framework comes too early, the legislation would be unable to capture the nuances of systems that have yet to find use in the Indian justice system but eventually may. If it comes too late, there is a chance the regulation will prove ineffective, with AI already irreversibly embedded in the way the judiciary functions.

The possibility and desirability of such a regulatory mechanism, and of framing policies for it, would depend on the goals sought to be achieved. For example, a target of enhanced security would necessitate an autonomous regulator with the capacity to question both public institutions, which deploy such tools, and private institutions, which build them.

Kakkar reiterated India’s lack of a substantive data protection law, which raises the critical question of what framework would be used to protect the fundamental and human rights of people whose data such systems use. There are also data gaps, as marginalized communities are generally neglected in the building of such technologies.

Trusting the judiciary

There is also the aforementioned possibility of perpetuating bias if such a regulatory mechanism is attempted by the courts themselves in the absence of regulatory legislation. Kakkar also shared Prof Thayyil’s anxiety about path dependency, the idea that the future course of AI depends on how it is deployed and percolates at present, and about function creep, the risk that data may be used for ends other than those demonstrated.

These issues may entrench and expand such systems, widening the scope of possibly harmful practices.

Dr Natarajan believed that if such a regulatory function were left to the courts, the myth of judicial exceptionalism would have to have sufficient heft to pass muster. To regular observers of the courts of law, it is obvious that such exceptionalism is hardly the norm, she observed.

As such, the judiciary cannot be solely trusted with such a regulatory task. While suggesting that it might be a little early for regulatory legislation on such technologies, Dr Natarajan affirmed her belief in the need for some basic regulatory mechanisms. These would, among other things, examine the background of the developers of such technologies and guard against bias.

Interdisciplinary AI

The panellists discussed the regulation of similar tools in other domains, and the need to cull out a regulatory principle for AI that would be more or less uniform across varied fields.

Best practices from other domains, such as healthcare, would have to be adapted rather than adopted wholesale, because the ends of the fields differ: while accuracy is the goal of such technology in healthcare, in law it is not the end but only a means to one.

Similarly, adopting practices from other countries would have to take into account the resource settings of various jurisdictions; a low-resource country like ours would have to make certain adjustments before adopting practices from high-resource jurisdictions such as China or Germany, it was felt.