King Vanga on Combining AI and Human Judgment in High-Stakes Litigation

High-stakes litigation has a way of speeding up before anyone feels ready. Records arrive, questions follow, and lawyers are pushed to take positions while the factual picture is still incomplete. That kind of pressure isn’t a temporary problem. It’s part of how these cases now unfold.

This change is increasingly visible to technologists working at the intersection of AI and professional services. King Vanga, a founder committed to shaping AI’s future in the service of society, believes that AI has already begun changing how legal services are delivered, particularly in how information is processed and evaluated at the outset of complex matters.

“AI has entered this environment as a practical response to scale,” says Vanga. “It can organize information faster than human teams ever could.” For many firms, that capability has become part of how work is expected to happen, not an experiment running in the background. Once faster analysis becomes available, waiting starts to look like hesitation.

Speed, however, has side effects. Early insight often carries more authority than it deserves. The first framing of a dispute tends to influence everything that comes after, even when later information complicates the picture. In high-stakes matters, early framing can lock teams into paths that are hard to reverse.

The problem is not automation itself. The problem arises when speed begins to stand in for judgment. Litigation still requires decisions made by people who answer for the consequences. When faster answers feel like better answers, responsibility can slip without anyone noticing.

What AI Actually Changes in a High-Stakes Case

The biggest change shows up at the beginning. Initial review phases that once unfolded slowly can now happen almost immediately. Large volumes of material are grouped, summarized, and compared before a legal theory has fully formed. And that alters how confidence develops and how quickly teams feel pressure to commit.

This shift is no longer theoretical. A growing share of litigation professionals are already incorporating these tools into early review and assessment work. A 2025 industry survey reported that 37 percent of professionals involved in e-discovery are actively using generative AI, a sharp increase from just a few years earlier. At that level of adoption, early AI-assisted analysis is quickly becoming a normal part of case intake rather than an edge case.

Early confidence matters. It shapes how teams staff cases, how they talk to clients, and how aggressively they posture. When AI accelerates early analysis, it also accelerates commitment. Positions harden sooner, sometimes before their weaknesses are fully understood or tested against opposing narratives.

That compression of judgment carries risk. “When insight arrives instantly, people confuse speed with clarity,” says Vanga. “AI can surface information quickly, but judgment still has to slow the process down enough for meaning to emerge.”

AI does not decide relevance in the legal sense. It identifies patterns, similarities, and frequency. Those signals are useful, but they do not map cleanly onto legal risk or narrative impact. Litigation depends on judgment about what carries weight, not just what appears often.

As a result, AI changes timing more than outcomes. It moves certainty forward in the process. That shift can help teams act decisively, but it can also amplify early misjudgments that would have been tempered by slower, more deliberative review.

The First Place Things Go Wrong: Context

Context in litigation is not decorative. It determines how facts interact with claims, defenses, and procedure. AI systems process information without awareness of those pressures. They recognize relationships, not stakes, which creates an early gap between what is summarized and what actually matters.

That gap is compounded by how unevenly these tools are understood and used. According to the American Bar Association’s 2025 Legal Technology Report, only 31 percent of legal professionals say they personally use generative AI in their work, a figure that underscores how often AI outputs may be reviewed by people who are still developing familiarity with their limitations.

Trouble starts when summaries look complete. A document overview may accurately reflect portions of the record while still missing why certain details carry disproportionate legal or strategic weight. Without a strong contextual filter, relevance becomes mechanical instead of judgment-driven.

Under time pressure, those gaps tend to harden. Teams rely on early outputs to make decisions and rarely revisit the assumptions underneath them. Each step builds on the last, even when the foundation is thinner than it appears.

Nothing fails loudly. The analysis feels orderly. The process looks efficient. Only later does it become clear that something essential was never fully weighed.

King Vanga: Where AI Earns Its Place and Where It Should Stop

AI adds value when it reduces cognitive load. Organizing large records, grouping related material, and flagging inconsistencies can free lawyers to focus on decisions that require judgment. Used this way, automation supports expertise rather than replacing it, especially in the early stages of analysis.

At the same time, skepticism inside the profession remains high when AI moves beyond support and into judgment. A recent survey of law firm professionals found that 79 percent expressed concern about generative AI’s imperfect understanding of legal ethics and professional standards, reflecting persistent hesitation about relying on these systems for substantive legal decisions.

The line is crossed when outputs are treated as conclusions rather than inputs. “AI should reduce the burden of sorting, not assume the burden of deciding,” says Vanga. “The moment a system starts standing in for professional responsibility, it creates risk that no amount of efficiency can justify.”

Tasks involving credibility, intent, or narrative positioning require evaluation that goes beyond internal coherence. Assigning those judgments to systems creates confidence without accountability, which is especially dangerous when decisions cannot be easily undone.

Oversight only works when authority is explicit. Human review must include the ability to override, discard, or pause based on judgment alone. Without that authority, review becomes procedural rather than substantive.

Clear boundaries protect both efficiency and responsibility. When teams decide in advance where automation stops, they reduce ambiguity and keep judgment active rather than deferred.

Judgment Under Pressure: How Strategy, Governance, and Culture Collide

AI systems are designed to produce internally consistent outputs. They align information into patterns that make sense within the material provided. Litigation strategy operates under different constraints. It must anticipate reaction, reinterpretation, and second-order effects that unfold over time, often in unpredictable ways.

Human decision-makers account for these pressures instinctively. They consider how arguments will be received, where opposing counsel may probe for weakness, and how credibility accumulates or erodes across filings and hearings. These considerations rarely appear in structured outputs, yet they shape outcomes as much as any fact pattern.

Under deadline pressure, the risk shifts. Outputs that appear complete can crowd out slower judgment. Review becomes confirmation instead of evaluation. “AI produces answers that feel finished,” says Vanga. “The danger is assuming completeness means soundness, especially when no one pauses long enough to test the reasoning behind it.”

This is where governance becomes strategic rather than administrative. Without clear boundaries around how AI informs decisions, authority diffuses. Teams may struggle to explain who approved a position, why an output was trusted, or where judgment intervened. In high-stakes litigation, those questions rarely stay internal.

Culture determines whether governance holds. Junior lawyers may lean on outputs without understanding their limits. Senior lawyers may step back too early, assuming systems will surface issues that once required experience. “People aren’t resisting tools,” Vanga says. “They’re reacting to a shift in who decides and who carries the risk when something goes wrong.”

Training that focuses only on tool usage misses this tension. Teams need shared clarity about responsibility. Judgment must be reinforced as a skill, not assumed as a default. When culture and governance align, AI becomes support rather than substitution.

Competitive Advantage in High-Stakes Litigation Is Still Human

AI can speed up how teams access and organize information, but it does not absorb responsibility for the choices that follow. In high-stakes litigation, decisions still rest with people who must explain them, defend them, and live with their consequences. Confidence formed too early can feel reassuring while masking unresolved risks that surface later under scrutiny. The teams that perform best are not the fastest or the most automated. They are the ones that know when to move forward and when to slow down, taking the time to test assumptions before committing to a course they will have to stand behind.