Law and Technology

Overreliance on legal AI chatbots and the erosion of reasoning skills

AI can be used for routine tasks, such as document review or proofreading - this can save time. But interpreting its results should trigger critical thinking, not blind faith.

This is an extension of an earlier essay on the difficulties surrounding legal AI chatbots published here.

BEYOND SYSTEMIC ISSUES, THERE IS A HUMAN COST to misusing AI in law. Many lawyers worry that treating AI as an oracle will erode their own legal reasoning. If an AI can quickly draft a petition or a plaint, or summarize a case, will younger lawyers still learn how to do those tasks themselves? Professors caution that legal reasoning is a skill honed by years of learning and of struggling through messy, difficult problems; handing that work to an AI chatbot “on autopilot mode” may lead to deskilling.

There has to be a balance. Keith Porcaro notes that errors [from LLMs] look different from human errors, and even savvy users may fail to catch them. Without deliberate safeguards - like cross-checking AI output against trusted sources - users may become complacent. Over time, junior lawyers who lean heavily on AI may miss the chance to develop core competencies. The American Bar Association has recognized this risk in its ethics opinions: it reminds lawyers that they cannot abdicate professional judgment to software. The tool must assist the lawyer’s mind, not replace it.

This leads to a normative stance: AI should be supplemental, not determinative. In practical terms, courts and law firms can adopt workflows in which AI suggestions are treated as draft proposals requiring human review before submission to the court. AI can be used for routine tasks, such as document review or proofreading - this can save time. But interpreting its results should trigger critical thinking, not blind faith.
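
To make the “draft proposal” workflow concrete, here is a minimal sketch of a human-in-the-loop review gate. It is purely illustrative: the names AIDraft, ReviewGate, and file_with_court are invented for this example and do not describe any real filing system or product.

```python
# Illustrative sketch only: a hypothetical human-in-the-loop gate for AI-drafted filings.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AIDraft:
    text: str
    reviewed_by: Optional[str] = None             # bar-licensed reviewer, if any
    history: list = field(default_factory=list)   # audit trail of earlier versions


class ReviewGate:
    """Blocks filing until a licensed lawyer has reviewed and revised the AI draft."""

    def record_review(self, draft: AIDraft, reviewer: str, revised_text: str) -> AIDraft:
        draft.history.append(draft.text)   # preserve the original AI draft for the record
        draft.text = revised_text          # the lawyer's revision supersedes the AI text
        draft.reviewed_by = reviewer
        return draft

    def file_with_court(self, draft: AIDraft) -> str:
        if draft.reviewed_by is None:
            raise PermissionError("An unreviewed AI draft cannot be filed.")
        return f"Filed after review by {draft.reviewed_by}"
```

The point of the sketch is the refusal path: an unreviewed AI draft has no route into the court file, so professional judgment remains the final step rather than an optional one.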

If courts and law firms do not encourage this dual approach - using AI to flag potential issues while still doing the deeper analysis of how those issues map to current law and ethics - generations of lawyers may lose confidence in their own reasoning and over-rely on technology.

Ethical and Institutional Risks: Bias, Privacy, and Misinformation

Beyond AI inaccuracy, there are broader ethical and societal risks whenever AI attempts, or is allowed, to supplement the law.

Bias replication: AI models learn from human-generated data, and any biases in that data can be amplified. ChatGPT and similar models can regurgitate racist, sexist, or otherwise prejudiced stereotypes. Risk-assessment algorithms may misclassify some defendants at twice the rate of others along discriminatory lines such as caste or religion.

If a legal AI chatbot inherits such biases, it could generate advice that disadvantages certain groups. For instance, it might implicitly assume some default facts about defendants that reflect societal prejudice, conflicting with legal ethics and constitutional equal protection principles. 

As the world relies more on technology, AI could amplify existing inequalities, eroding public trust in legal institutions.
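
To make the “twice the rate” concern concrete, here is a minimal sketch of the kind of audit that surfaces such disparities. It is purely illustrative: the records, group labels, and fields are hypothetical and do not refer to any real risk-assessment tool or dataset.

```python
# Illustrative sketch only: auditing a hypothetical risk tool for unequal error rates
# across groups. All data and labels below are made up for demonstration.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) - hypothetical audit data.
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, predicted_high, reoffended in records:
    if not reoffended:                 # only non-reoffenders can be false positives
        negatives[group] += 1
        if predicted_high:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false-positive rate {rate:.0%}")
# If one group's rate is roughly double another's, the tool is misclassifying
# defendants along group lines - exactly the kind of disparity described above.
```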

Data privacy and confidentiality: AI models often require vast datasets for training. Imagine: When a lawyer inputs client details into a third-party chatbot, who controls that information?

Misinformation as an institutional hazard: Courts and regulators depend on accurate public information, from legal briefs to news reports. AI-generated content can clog these channels. The spectre of “deepfake law” - text that is convincingly written and looks agreeable but rests on fake or wrong statutes and jurisprudence - might force courts to spend extra time verifying basic facts. Misplaced trust in chatbots could also reduce demand for independent journalism and scholarship on law. For instance, if judges, lawyers, and legislators begin relying on AI-generated summaries instead of referring directly to the source materials, checks and balances might erode.

These risks suggest that the use of AI in law cannot be left unregulated. Legal processes adhere to principles of justice, transparency, and dignity. Allowing unfettered use of opaque chatbots would undermine these principles. In a democratic society, the law is the tool through which citizens claim their rights and seek justice. Our political-legal tradition demands accountability: when a person’s life, liberty, or property is on the line, judgments must be explainable and contestable.

Replacing human judgment with inscrutable algorithms risks conflict with constitutional values. A judge cannot meet the “reasoned explanation” requirement of due process if the reasoning was done by a black-box model. As one commentator puts it, reliance on AI “must be conditioned on a satisfactory showing that the AI system is valid, reliable, and lawful.” Any integration of AI into legal settings must therefore be carefully governed.

Regulatory and policy landscape

Policymakers must recognise the above-noted challenges. In some jurisdictions, they are already moving to impose safeguards.

For instance, the European Union’s AI Act is particularly relevant. It explicitly treats AI used in the justice system as high-risk - any AI "intended to be used by a judicial authority… to assist [it] in researching and interpreting facts and the law" is listed among the high-risk categories.

Such high-risk AI must meet stringent requirements: transparency, human oversight, accuracy testing, and documentation. In practice, this means an AI legal assistant would need to meet certified quality standards before being put to use in courts or in government legal advice. The Act also prohibits uses of AI deemed inherently harmful (e.g. covert biometric identification) and forbids “social scoring” - both of which hint at legal-context issues (e.g. digital evidence or surveillance) even if they are not specific to chatbots. Similarly, the proposed AI Liability Directive (EU) would make firms liable for AI errors in many contexts, creating financial incentives for reliability. Together, these European rules push developers to integrate ethics and governance from the ground up.

India, which has no AI framework in policy or in place, has taken a different approach. The Digital Personal Data Protection Act, 2023 (‘DPDPA’) primarily focuses on personal data, not AI per se. The DPDPA contains no provisions on algorithmic governance, which means that automated legal tools could slip through regulatory cracks. India needs an AI-specific regulatory regime - for instance, one mandating impact assessments for high-risk systems and explicit audit rights for individuals. It can adopt elements of the EU model (e.g., addressing “lack of transparency” and bias). Until such measures are codified, any large-scale deployment of legal AI in India will remain precarious. In other countries, piecemeal steps are emerging.

The legal sector will increasingly expect AI tools to meet higher standards than general consumer apps, given the risks to fundamental rights and the administration of justice.

Justice, efficacy, and constitutional alignment

Beyond technical fixes, we must ask normative questions: Should we fully adopt AI in the justice system? If yes, whose interests are served, and at what cost? AI chatbots could, in theory, make legal help more affordable and accessible. A litigant using AI might get instant answers to basic questions online, potentially improving access to justice. Yet if those answers are unreliable, the outcome might worsen the litigant’s situation. Poorer citizens (the ones most likely to skip hiring a lawyer) are ironically those most at risk from “free” AI legal advice. In this sense, unregulated legal AI could exacerbate inequality.

Moreover, the use of AI in law calls for scrutiny under constitutional and civil-liberty principles. Imagine - if an AI plays a role in, say, predictive sentencing, then questions of free speech, equal protection, and due process arise. In India, for instance, the right to life and liberty under Article 21 has been interpreted to include fair procedure; an opaque AI decision-making process would be suspect under that doctrine. In the US, the Fifth and Fourteenth Amendments guard against deprivation of liberty without due process; any AI that influences bail, trial, or punishment decisions could be challenged under these guarantees. Across democracies, the principle is the same: justice must not be blinded by convenient tech. AI in law must align with constitutional values, which means demanding explainability, human accountability, and the right to contest automated outputs.

Given the above-noted stakes, we cannot rely on market forces alone. We need robust guardrails.

Possible reforms may include:

Transparency as a norm: Any legal AI chatbot or service should disclose how it works (e.g. what data it was trained on) and cite its sources. This could be enforced by law or by bar rules. For example, an AI-generated memo could be required to footnote its claims with citations to current law.

The human element: Laws could mandate that AI does not make final decisions in critical legal processes. Similar to the EU’s model for high-risk AI, every AI output could require review by a licensed lawyer before action. Lawyers must remain ethically responsible.

Certification and Testing: Regulatory bodies or professional associations might create testing protocols for legal AI (much as medical devices undergo trials).

Professional Training and Rules: Bar associations should update legal ethics rules to address AI explicitly. Lawyers must be trained not only in how to use AI tools but also in their limits. Legal education requirements should likewise cover AI competence and bias.

Right to Explanation and Redressal: Data protection laws should recognise people’s rights when interacting with AI. For instance, India’s DPDPA or any future AI law should guarantee that a person can request an explanation of an AI-based decision and has the option to contest it. This is crucial for procedural fairness - no one should be trapped by an error in an “automated process.”

Standardization of AI Outputs: The legal community could develop standards (like authoring guidelines) for how AI should phrase legal advice and what disclaimers to attach. This might include clearly stating when an answer or document was machine-generated and suggesting verification with a human lawyer. A rough sketch of what such a standard could look like in software follows this list.
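
As a minimal sketch of that standard, and assuming a hypothetical service (the function standardise_output, the LegalAnswer fields, and the disclaimer text are invented for illustration, not drawn from any real product, rule, or law), the citation and disclosure norms above might be enforced like this:

```python
# Illustrative sketch only: one way a legal AI service could enforce the citation and
# disclosure norms discussed above. All names here are hypothetical.
from dataclasses import dataclass


DISCLAIMER = (
    "This response was machine-generated. Verify it against the cited primary sources "
    "and with a licensed lawyer before relying on it."
)


@dataclass
class LegalAnswer:
    text: str
    citations: list   # e.g. statute sections or case citations supplied with the answer


def standardise_output(answer: LegalAnswer) -> str:
    """Refuse to emit uncited answers; always attach a machine-generation disclaimer."""
    if not answer.citations:
        raise ValueError("Answer withheld: no citations to primary legal sources were provided.")
    sources = "; ".join(answer.citations)
    return f"{answer.text}\n\nSources: {sources}\n\n{DISCLAIMER}"
```

The design choice to refuse, rather than silently answer without sources, mirrors the review gate sketched earlier: the default is caution, and the burden is on the system to show its work.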

Our goal should not be to ban AI (that would ignore its undeniable benefits for efficiency and for the analysis of large volumes of data), but to integrate it in ways that respect justice. For instance, AI can expedite document review, spot drafting inconsistencies, and democratise law, but only with the right human checks in place.

Relying on opaque chatbots without safeguards breaks trust. Citizen-centric policy demands that ordinary people can trust legal processes.

Conclusion

AI chatbots illustrate both the promise and the peril of new technologies in law. Tools like ChatGPT have sparked creativity, enabling lawyers to draft faster and helping laypersons explore legal queries. But the administration of justice is not a simple text-completion task. Current LLMs excel at generating plausible-sounding language, yet that is no substitute for knowing the law. Plausibility can be dangerously misleading when lives and liberties are at stake. Our society demands that legal information be grounded in fact, principle, and accountability.

As noted in the previous piece and above, AI chatbots fall short in many ways: from fundamental pattern-matching limits, data gaps, and bias, to the erosion of professional skills and conflicts with ethical and constitutional norms.

A sober, multidisciplinary approach is needed. Policymakers must tailor AI regulations to the law’s special needs, bar associations must update professional rules, and technologists must build systems with explicit guardrails and audit capabilities.

For citizens, the message is caution and engagement. Those seeking legal help should cross-check any advice. We must seek to center people in this transition and ensure AI serves the public good.

Looking forward, research and policy should focus on closing these gaps. For instance, developing explainable AI for legal contexts would align technology with due process. We need publicly owned legal AI models (trained on open legal text). Reform ideas might also include an “AI Ombudsman” within legal institutions to monitor use. AI’s role in law must remain subsidiary: augmenting lawyers’ work, not automating justice.