Why Legal AI Needs Better AI Architecture, Not Generative Guesswork

Miguel Jette
VP of AI
December 16, 2025
Conceptual diagram of a closed-loop AI architecture, with multiple data inputs processed in a continuous feedback loop to improve outputs.

When someone’s liberty hangs on a brief riddled with fabricated legal citations, we’re not looking at a “bad prompt.” We’re looking at an architecture problem.

In early 2025, Kyle Kjoller, a welder in Nevada County, California, was held without bail on gun possession charges. Prosecutors opposed his release in an 11-page brief that his lawyers say showed generative-AI fingerprints: misread law, fabricated quotations, and misstated constitutional provisions.

A coalition of legal and technology scholars later urged the California Supreme Court to scrutinize unchecked generative AI use in prosecutions, warning that it risks due-process harms and wrongful convictions.

The Hidden Danger: Generative “Helpfulness”

Generative AI systems are optimized to be persuasive. That’s why they can become dangerous in law. When a model is asked to support a position, it tends to produce confident-sounding authority that matches the argument — whether or not that authority exists, or even if it says the opposite.

This isn’t a fringe risk: in September 2025, a California Court of Appeal imposed sanctions over AI-fabricated citations in a filing, and Stanford’s 2025 AI Index flagged the same pattern as a persistent real-world risk.

The Wrapper Wave Problem

At the same time, legal AI adoption is accelerating. Many firms are rolling out tools built on top of large language models (LLMs) — often wrapped with legal-friendly UX and guardrails. Harvey, for example, is now used by more than half of the top U.S. law firms and is explicitly an LLM-powered generative platform for drafting/review workflows.

To be clear: these tools can be genuinely useful. But most of them share the same underlying risk because they are still generative-first systems. Even when they add retrieval or internal knowledge grounding, the model is typically still allowed to “free-generate” around what it retrieved — which means hallucinations remain possible. That’s not a knock on the vendors; it’s a limitation of the architecture.

So the question isn’t “Should firms use these tools?”

It’s “Do firms understand the difference between generative help and closed-loop trust?”

Generative vs. Closed-Loop AI

Here’s what the legal profession needs to understand: generative AI is architecturally unsuited for high-stakes legal work.

Generative models like ChatGPT are trained to produce plausible text. They predict what should come next based on patterns in training data. When asked to draft a brief, they can produce reasoning that looks right — including citations that look real — while quietly drifting into fiction. Courts are now treating that drift as a duty-of-candor and competence problem, not a simple tech hiccup.

Closed-loop AI works differently. In a closed-loop system, the AI is constrained to the verified source material you provide — transcripts, exhibits, discovery files, case records — and every output is grounded in that record. It can summarize, extract, classify, and connect what’s there, but it cannot introduce new authority (citations, quotes, or factual claims) that isn’t supported by a source.

That doesn’t mean closed-loop AI can never be wrong — a summary can still miss nuance or misread a tense exchange — but it materially reduces hallucination risk, because the model is confined to the record you provide rather than free to draw on open-ended training data or the web the way general-purpose models can.

It’s an architectural choice that determines whether the system is allowed to generate beyond the record — and therefore how likely hallucinations are to show up in real legal work.
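
To make that architectural difference concrete, here is a minimal sketch of the kind of grounding gate a closed-loop pipeline can enforce before anything reaches an attorney. This Python toy is not Rev’s implementation: the Claim structure, the verbatim-quote check, and the record format are all assumptions, chosen only to show the core idea that output which can’t be tied back to the source record gets flagged instead of emitted.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str          # statement the model wants to surface
    source_quote: str  # exact passage it cites as support
    source_id: str     # which transcript or exhibit the quote came from

def ground_claims(claims: list[Claim], record: dict[str, str]):
    """Split AI output into claims verifiable against the record and claims
    that are not. Unverifiable claims are flagged for review, never emitted."""
    grounded, flagged = [], []
    for claim in claims:
        source_text = record.get(claim.source_id, "")
        # Closed-loop rule: the supporting quote must appear verbatim in the
        # customer-owned source file. No quote in the record, no claim.
        if claim.source_quote and claim.source_quote in source_text:
            grounded.append(claim)
        else:
            flagged.append(claim)
    return grounded, flagged

# Toy example: one claim is supported by the transcript, one is not.
record = {"depo-001": "Q: Did you see the car? A: I never saw the car that night."}
claims = [
    Claim("Witness denies seeing the car.", "I never saw the car that night", "depo-001"),
    Claim("Witness admits to speeding.", "I was going well over the limit", "depo-001"),
]
grounded, flagged = ground_claims(claims, record)
print(f"{len(grounded)} grounded claim(s), {len(flagged)} flagged for attorney review")
```

The specifics don’t matter; what matters is that the “stay inside the record” rule lives in the system itself, not in a prompt the model is free to drift away from.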

What This Means for Everyday Legal Work

The Kjoller matter shows why architecture is so central to this conversation. Prosecutors blamed workload and speed, then pointed to training and safeguards. But legal work is overwhelming by default, and tools that rely only on humans to catch hallucinations won’t scale.

The answer isn’t banning AI — it’s using AI that’s constrained by design. In deposition or evidence review, generative systems can produce a clean story of what a witness “probably meant,” even when the record doesn’t support it. Closed-loop systems stay anchored to verified sources and surface uncertainty when support is thin, helping attorneys move faster without drifting beyond the record.

That creates a different trust foundation (see the sketch after this list):

  • Verifiability: every statement links to the source
  • Auditability: clear trail from output back to record
  • Defensibility: you can show receipts, not just say “we checked”
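
One hypothetical way to picture that trail in code: attach provenance metadata to every AI-assisted statement, so any output can be walked back to the exact moment in the source file and to the human who reviewed it. The schema and field names below are illustrative assumptions, not a Rev data format.

```python
import json

# Hypothetical audit entry: each AI-assisted statement carries a pointer back
# to its source and the human sign-off. Field names are illustrative only.
audit_entry = {
    "statement": "Witness denies seeing the car.",
    "source_file": "deposition_2025-03-14.mp4",
    "transcript_span": {"start_sec": 1842.3, "end_sec": 1849.0},
    "supporting_quote": "I never saw the car that night.",
    "reviewed_by": "associate@example-firm.com",
}

print(json.dumps(audit_entry, indent=2))
```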

Rev’s Approach to Closed-Loop AI

Closed-loop AI isn’t a buzzword — it’s what legal work has always required. At Rev, our technology has long been grounded in customer-owned source files. That means when AI assists with summarizing testimony, extracting issues, or surfacing key moments, those outputs stay tethered to the record — with provenance that legal teams can verify and defend.

We’ve seen this play out in real trials. In one recent criminal defense case, Greening Law Group used Rev to spot contradictions across body-cam and interview footage, build cross-examination quickly, and navigate evidence in real time. The firm credits that closed-loop workflow with helping secure dramatically reduced sentences for their client.

In other words, Rev isn’t trying to make legal work “more generative.” We’re helping teams move faster inside the boundaries of what’s actually been said and recorded — and doing it in a deployment model that gives attorneys more confidence in what they produce.

The Path Forward

Every hallucinated citation erodes trust — not only in AI-assisted work, but in the legal system itself. Done right, AI can help overwhelmed public defenders, help prosecutors manage evidence responsibly, and make legal services more accessible. Done wrong, we’ll either see a retreat from AI or a dangerous normalization of fabricated authority.

The threat isn’t AI itself. It’s AI that optimizes for plausibility rather than verifiability. The legal profession runs on the integrity of the record and the integrity of citations. Generative AI breaks that trust by producing authority that sounds right but isn’t. Closed-loop AI rebuilds it by tethering outputs to verified source material.

And even with the right architecture, the duty doesn’t disappear. Attorneys still need to validate AI-assisted work, understand its limits, and apply the same professional stewardship they would to any tool that might influence a client’s outcome. Closed-loop systems make that job safer and more defensible — but they don’t make it optional.

Kyle Kjoller’s liberty shouldn’t depend on whether an AI hallucinated his prosecutor’s research. Neither should anyone else’s.
