Article Review
Joshua Yuvaraj, a legal scholar who specializes in the implications of emerging technologies, has recently published a draft version of his forthcoming paper, “The Verification-Value Paradox: A Normative Critique of Gen AI Use in Legal Practice”.
The paper argues that all generative AI models suffer from a “reality flaw”, whereby outputs are not guaranteed to be factually or legally accurate and can include hallucinations, as well as from a “transparency flaw” that renders the workings of the model opaque, making it difficult to understand or explain how its conclusions are reached.
These flaws persist regardless of training data quality or model sophistication, and, the paper argues, together they challenge the belief that AI’s risks can always be safely “managed”. While AI is promoted as a time-saving tool for legal research, drafting, and analysis, the evidence shows that AI-generated errors, fake citations, and misleading outputs frequently end up in client submissions and even in court, leading to professional reprimands, fines, and other procedural consequences.
Yuvaraj cites multiple studies showing high rates of hallucinated content and citations in highly ranked AI legal tools, including those trained on legal data. Indeed, comparative legal research has documented AI-hallucinated case citations appearing in court records in the UK, Australia, Pakistan, and Canada, with resulting professional misconduct inquiries and clarifications to bar association guidelines. Moreover, rates of errors and non-existent content in generative-AI legal research far exceed anything that would be acceptable in responsible legal practice.
But Yuvaraj goes a step further, propounding what he terms the Verification-Value Paradox: the notion that the efficiency gains obtained by using AI – for example, to automate research and drafting – are outweighed by the increased burden of human verification, since lawyers are ethically required to ensure that all facts, citations, and arguments in any legal document are true and accurate. Moreover, the heavier the use of AI in legal contexts, the heavier the burden – and cost – of human verification.
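The paradox can be illustrated with a toy cost model (an illustrative sketch, not taken from Yuvaraj's paper; all rates below are hypothetical): if line-by-line verification time grows in direct proportion to the volume of AI-generated output, the net time saved per page is a fixed constant, and turns negative whenever verification costs more than the drafting time AI saves.

```python
# Toy model of the Verification-Value Paradox.
# Illustrative only: the per-page rates are hypothetical assumptions,
# not figures from Yuvaraj's paper.

def net_time_saved(pages, draft_rate=30.0, ai_rate=1.0, verify_rate=20.0):
    """Minutes saved by AI drafting, net of mandatory human verification.

    pages       -- pages of legal text produced
    draft_rate  -- minutes per page to draft manually (hypothetical)
    ai_rate     -- minutes per page to generate with AI (hypothetical)
    verify_rate -- minutes per page of line-by-line checking (hypothetical)
    """
    gross_gain = (draft_rate - ai_rate) * pages   # efficiency gain from AI
    verify_cost = verify_rate * pages             # verification scales linearly
    return gross_gain - verify_cost

# Both terms grow linearly with output volume, so heavier AI use multiplies
# the verification burden at exactly the same rate as the gains: there is
# no point at which verification amortizes.
for pages in (1, 10, 100):
    print(pages, net_time_saved(pages))

# If verification is costlier than the drafting time saved, net value is
# negative at every scale:
print(net_time_saved(10, verify_rate=35.0))
```

The design point is the linearity: unlike fixed setup costs, a per-page verification duty yields no economies of scale, which is why the paradox sharpens rather than dissolves as AI use expands.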
As a result, he suggests that legal professionals should adopt a more skeptical attitude toward AI tools and avoid their unquestioned use in legal workflows. Manual, line-by-line verification remains mandatory for any substantive output, as courts hold lawyers entirely responsible for the veracity of all the materials they submit, including AI-generated content. This comes at a cost that, depending on circumstances, may well be prohibitive.
Yuvaraj, who is himself an academic, also argues that law schools should not merely teach students AI literacy or encourage AI use. Rather, they should focus on critical assessment of technology, emphasizing truth, professional responsibility, and rigorous verification skills over courses on “How to use AI”.
***
It is well known that legal professionals have recently been disciplined in growing numbers for submitting filings with fake citations created by generative AI tools such as ChatGPT, Microsoft Copilot, and Google Gemini. With hundreds of incidents worldwide, the trend continues to grow rather than diminish with awareness, despite the fact that AI tools now explicitly warn users about their own limitations.
The Verification-Value Paradox thus finds strong support in recent legal experience. Courts sanction lawyers for unverified citations and hallucinated content regardless of intent, and the burden of human verification of AI-generated content enjoys no economies of scale: it remains directly proportional to the volume of AI output.
The conclusion would seem to be that generative AI cannot simply be treated as an easy time-saving solution in any workflow involving legal documents – including translation – unless every output is manually and rigorously checked by human experts. While AI can automate certain tasks, documentary workflows must be designed to ensure the stringent level of quality required while remaining cost-effective.