What almost everyone gets wrong about AI in legal and regulatory compliance

Category: Federal & State Compliance

Written by Myriad on April 8, 2025

In the growing excitement around generative AI, it’s easy to believe the technology is poised to revolutionize every industry overnight. Nowhere is this more visible than in legal and regulatory compliance—where the promise of automated document review, instant insights, and reduced workloads has never sounded more appealing. And yet, despite all the attention, most attempts to apply large language models (LLMs) in this domain fall short.

The problem isn’t that AI lacks potential here. It’s that the current generation of generic tools is being misapplied and oversold.

Let’s get something out of the way: generic LLM tools like ChatGPT, Gemini, or Copilot are incredibly capable at surface-level tasks. They’re excellent at summarizing content, drafting memos, or even assisting with simple legal templates. Some more tailored solutions—like the emerging crop of AI products for law firms—package this in a more lawyerly interface or integrate more easily into existing workflows. They may even cite case law or enforcement actions. But fundamentally, they all rely on the same underlying engine: a probabilistic prediction model trained to generate text that sounds correct, not necessarily text that is correct in a domain-specific and context-aware way.

To understand why that’s a problem in legal and regulatory work, it’s helpful to know how generic LLMs actually function. Just as they have trouble understanding sarcasm, they struggle to grasp the subtleties of law. These models don’t “understand” the law in any traditional sense. They pattern-match across billions of data points to predict the next most likely word in a sequence. They can mimic legal tone. They can parrot common interpretations. But they don’t possess a structured grasp of specific obligations, the conditions under which they apply, or how they relate to each other across a sprawling web of regulations, rules, and real-time changes.
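To make that concrete, here is a toy sketch of next-token prediction, the mechanism every generic LLM ultimately rests on. Everything in it—the three-word vocabulary, the scoring function—is invented purely for illustration; a real model scores tens of thousands of tokens with a trained neural network. But the core limitation is the same: the output is chosen for statistical likelihood, not legal correctness.

```python
import math
import random

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next_token(context, vocab, score_fn):
    """Sample the next token from the model's distribution.

    `score_fn` stands in for a trained network mapping (context, token)
    to a raw score. Nothing here checks whether the chosen token is
    legally correct -- only whether it is statistically plausible.
    """
    probs = softmax([score_fn(context, tok) for tok in vocab])
    return random.choices(vocab, weights=probs, k=1)[0]

# Hypothetical example: a model finishing a compliance sentence.
vocab = ["compliant", "non-compliant", "exempt"]
fake_scores = {"compliant": 2.0, "non-compliant": 0.5, "exempt": 0.1}
print(predict_next_token("The firm's ICT contract is", vocab,
                         lambda ctx, tok: fake_scores[tok]))
```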

That’s what lawyers, auditors, and compliance officers actually do.

Ask even the best-performing generic LLM to take a recent regulation like DORA (Digital Operational Resilience Act) and assess whether your firm’s existing ICT contracts or controls are compliant, and it will fail. It won’t fail by being wildly off-base—it will fail by being subtly wrong, confidently wrong, or simply generic. This is the kind of failure that leads to exposure, missed obligations, or reputational risk.

This isn’t just a theoretical issue. Every day, regulators publish hundreds of updates—over 200 per day, by some counts—across jurisdictions, sectors, and risk areas. These aren’t just additions to a database. They often include nuanced shifts in interpretation, new enforcement trends, or guidance that affects how a rule should be applied in context. Capturing this in real time, filtering for what matters to a given firm, and translating that into updated operational guidance is something no LLM can do today.
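The filtering half of that problem is at least easy to sketch. The schema and relevance heuristic below are assumptions made up for illustration; the hard part in practice is precisely what a naive tag-matching filter like this misses, namely shifts in interpretation and enforcement that arrive without clean topic labels.

```python
from dataclasses import dataclass

@dataclass
class RegulatoryUpdate:
    regulator: str
    jurisdiction: str
    topics: set[str]        # illustrative: real updates rarely arrive pre-tagged
    summary: str

@dataclass
class FirmProfile:
    jurisdictions: set[str]
    risk_areas: set[str]

def relevant_updates(updates: list[RegulatoryUpdate],
                     firm: FirmProfile) -> list[RegulatoryUpdate]:
    """Keep only updates touching the firm's jurisdictions and risk areas."""
    return [
        u for u in updates
        if u.jurisdiction in firm.jurisdictions and u.topics & firm.risk_areas
    ]

# Of 200+ daily updates, only the intersection with the firm's profile survives.
firm = FirmProfile(jurisdictions={"EU"}, risk_areas={"ICT risk", "outsourcing"})
updates = [
    RegulatoryUpdate("ESMA", "EU", {"ICT risk"}, "Guidance on ICT incident reporting"),
    RegulatoryUpdate("SEC", "US", {"marketing"}, "Marketing rule FAQ update"),
]
print([u.summary for u in relevant_updates(updates, firm)])
```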

Real regulatory work requires breaking each law or rule into its obligations under specific conditions, and then connecting those across documents, across agencies, across jurisdictions. It requires a structured, dynamic knowledge base that evolves with regulatory change—and an interpretive engine that reasons not like a writer, but like a seasoned attorney.
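As a rough illustration of what “breaking a rule into its obligations” might look like as a data structure—this schema is an assumption for the sake of the example, not our actual model—consider something like:

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    source: str                  # citation, e.g. "DORA Art. 28" (illustrative)
    requirement: str             # what must be done
    conditions: list[str]        # when the obligation applies
    related: list[str] = field(default_factory=list)  # cross-references forming the graph

# Hypothetical decomposition of one DORA provision into a graph node.
register_duty = Obligation(
    source="DORA Art. 28 (illustrative)",
    requirement="Maintain a register of contractual arrangements with ICT third-party providers",
    conditions=["entity is a financial entity in scope of DORA"],
    related=["DORA Art. 29 (illustrative)"],
)
```

Linking those `related` references across documents, agencies, and jurisdictions is what turns a pile of rules into a graph a reasoning engine can traverse—and keeping it current as the daily update stream flows in is the dynamic part.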

To get there, we need fundamentally new approaches.

Breakthroughs in AI for other high-risk domains—like cancer detection or autonomous vehicles—weren’t powered by general-purpose LLMs. They relied on precise, non-generic, domain-specific architectures with feedback loops and ground-truth data to enable real-world reliability. In compliance, we need the same.

At Myriad, we’ve recently validated a system that brings together two critical ingredients: a real-time regulatory change monitor capable of updating firm-specific knowledge graphs on the fly, and a reasoning engine modeled on the workflows of top-tier legal professionals. This isn’t about summarizing the law. It’s about enabling AI to apply it accurately, consistently, and in the face of change.
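At a very high level, and purely as a hypothetical sketch—none of these classes reflect our implementation, and the keyword check is a deliberately naive placeholder for the reasoning step—the two ingredients compose like this:

```python
class KnowledgeGraph:
    """Toy firm-specific graph: citation -> requirement text."""
    def __init__(self):
        self.obligations = {}

    def apply(self, update):
        """Fold one regulatory update into the graph; return affected citations."""
        self.obligations[update["source"]] = update["requirement"]
        return [update["source"]]

class ReasoningEngine:
    """Stand-in for the attorney-style step; real reasoning goes far
    beyond the naive substring check used here for illustration."""
    def assess(self, source, requirement, firm_controls):
        covered = any(requirement.lower() in c.lower() for c in firm_controls)
        return {"source": source, "gap": not covered}

def on_update(update, graph, engine, firm_controls):
    """React the moment a change lands: refresh the graph, re-assess impact."""
    for source in graph.apply(update):
        finding = engine.assess(source, graph.obligations[source], firm_controls)
        if finding["gap"]:
            print(f"Review needed: {finding['source']}")

graph, engine = KnowledgeGraph(), ReasoningEngine()
on_update(
    {"source": "DORA Art. 28 (illustrative)",
     "requirement": "register of ICT third-party contractual arrangements"},
    graph, engine,
    firm_controls=["Vendor onboarding policy"],
)
```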

We’ve shown that, under this non-generic model, it’s possible to achieve levels of accuracy and coverage previously considered out of reach for automation. Whether it’s DORA, MiFID, SEC marketing rules, or consumer protection laws, the right architecture can do more than assist—it can ensure nothing falls through the cracks.

None of this is easy. But it is necessary. Because what’s at stake isn’t just productivity—it’s trust, legal exposure, and the credibility of AI in one of its most high-stakes applications.

The next wave of AI in compliance won’t look like a chatbot. It’ll look like infrastructure, deeply embedded in your existing workflows.