OVERVIEW
This piece argues that the focus of AI innovation in compliance, particularly for ad review, is shifting away from developing and training customized large language models (LLMs). Rather than striving for incrementally "smarter" models, it emphasizes that modern, readily available LLMs are already highly capable thanks to their extensive training. The core argument is that "agentic AI" architecture, which builds intelligent layers and structured communication frameworks around these powerful foundational models, offers a more effective and adaptable approach. This method prioritizes clear communication and strategic prompting to guide the LLM's behavior, producing better results, greater efficiency, and lower operational overhead than the outdated custom-model-training paradigm.
CRITICAL QUESTIONS POWERED BY RED OAK
While training models on firm-specific data once seemed like the gold standard, the reality is that today’s large language models (LLMs) are already highly capable. Custom models introduce high operational overhead, require compliance staff to act as AI trainers, and often suffer from poor generalization or vendor lock-in. In practice, these models can actually reduce effectiveness when compared with leveraging foundational LLMs supported by smarter system design.
Agentic AI doesn’t try to “retrain” the model itself. Instead, it builds intelligent, structured frameworks around the model—using agents configured with firm policies, workflows, and decision trees. These agents act as specialists, guiding the LLM with precision prompts and rules. The result is greater fluency, reliability, and transparency—delivering faster reviews, consistent compliance, and audit-ready documentation.
The real differentiator isn’t the model—it’s how you communicate with it. Success depends on AI fluency: building systems that give models clear instructions, structured prompts, and contextual rules. Firms that focus on architecture and communication see shorter review cycles, quicker time to market, reduced risk, and stronger regulatory alignment—all without the burden of endless retraining or black-box outputs.
Speaker 2
Definitely.
Speaker 1
Especially in areas like compliance, ad review—places where it's critical.
Speaker 2
Absolutely.
Speaker 1
For years all the talk was about who had the smarter AI model. Right.
Speaker 2
The custom built one, that was the race.
Speaker 1
So our mission today is to unpack why that's kind of flipped. Why we're now talking about something called agentic AI. You might actually be surprised to hear that in a way, the model wars, they might be over.
Speaker 2
It's a really fascinating turn, isn't it? Because for so long companies were pouring, I mean, millions into training their own.
Speaker 1
Bespoke AI, thinking that was the edge.
Speaker 2
Exactly. The ultimate competitive advantage. But what we actually saw happening—well, those custom models started showing brittleness and overfitting.
Speaker 1
Okay, what does that mean in practice? Like brittle.
Speaker 2
Think of it like a super specialized athlete: amazing at one thing, but struggles with anything else.
Speaker 1
Gotcha.
Speaker 2
They kind of break down when they face something new, something unexpected. And in compliance, that happens constantly.
Speaker 1
Right. Regulations change, market conditions.
Speaker 2
Precisely. So this brittleness, it often meant performance actually degraded when these models hit the real world.
Speaker 1
Okay, let's dig into that a bit more. The old way, you'd build your own bespoke model, feed it thousands, maybe millions of your documents, fine-tune it.
Speaker 2
Sounds logical on the surface.
Speaker 1
Yeah. More data, more specific training should mean better learning, you'd think. But what were the, like, the real world snags?
Speaker 2
Well, that whole approach hit some pretty significant walls. First off, the high operational overhead: the sheer time and resources needed for training, and then for retraining constantly, especially as rules kept changing.
Speaker 1
Yeah, that sounds exhausting.
Speaker 2
And that burden—yeah, it often landed smack on compliance teams who were already, you know, stretched.
Speaker 1
Thin, asking them to suddenly become AI experts overnight.
Speaker 2
Pretty much. Then there was poor generalization. A model trained perfectly on yesterday's data.
Speaker 1
Might completely fail tomorrow.
Speaker 2
Exactly. On those unforeseen edge cases.
Speaker 1
Yeah.
Speaker 2
And critically, you often got stuck with vendor lock-in and opacity.
Speaker 1
Ah, the black box problem.
Speaker 2
Right. These custom solutions.
Speaker 1
Yeah.
Speaker 2
You couldn't easily figure out why the model flagged something specific. Try explaining that to an auditor.
Speaker 1
Okay, so this is where it gets really interesting then. If everyone's starting to use the same powerful foundational models, LLMs, right, the large language models like OpenAI's GPT-4, Google's Gemini, Anthropic's Claude—these big engines—then the difference isn't the engine itself anymore, it's everything else. Everything you build around it.
Speaker 2
Precisely. And that is the core idea of agentic AI.
Speaker 1
So you're not trying to reshape the core brain, the LLM.
Speaker 2
No, you use that LLM as this incredibly powerful general-purpose reasoning engine.
Speaker 1
Yeah.
Speaker 2
And around it you build these structured intelligent layers. We call them agents. Think of them like specialist detectives. Each one is designed for a very specific task.
Speaker 1
Okay, like what?
Speaker 2
Well, maybe one agent is an expert at parsing complex financial disclosures, another one is checking the tone in marketing copy, and maybe a third is just flagging high-risk terms. But based on your firm's specific rules and context.
Speaker 1
What I think is super fascinating here is how these agents, how they actually interact with the LLM.
Speaker 2
Right.
Speaker 1
They communicate with the LLM on your behalf using things like structured prompts, metadata, decision trees.
Speaker 2
Exactly. It's all about speaking the language of the LLM fluently, not just throwing data at it.
Speaker 1
It's less brute force, more finesse. Like giving a master craftsman really precise instructions instead of teaching him how to build the tools himself.
Speaker 2
That's a great analogy. It really does replicate how a seasoned human compliance reviewer operates.
Speaker 1
How so?
Speaker 2
They don't just scan for keywords. They bring context, policy knowledge. They have that kind of structured mental checklist.
Speaker 1
Okay.
Speaker 2
Agentic AI tries to mimic that structured thinking through better communication frameworks. And the beauty is you improve the system over time by refining the agents.
Speaker 1
Tweaking the prompts without needing to retrain the model from scratch.
Speaker 2
Exactly. Which makes it incredibly agile, much faster.
Speaker 1
To adapt, and the real world payoff seems pretty significant. Firms using this agentic approach, they're seeing real benefits. Yeah?
Speaker 2
Oh, absolutely. Tangible stuff. We're hearing about a major reduction in review cycles. Some firms report, like, a 70% drop in the manual oversight needed, which obviously leads to faster time to market for campaigns and products.
Speaker 1
That's huge.
Speaker 2
Translates to lower risk exposure too.
Speaker 1
Yeah.
Speaker 2
And maybe the biggest piece for many—audit-ready transparency.
Speaker 1
Ah. Because you can see the why precisely.
Speaker 2
Every agent's decision, the specific policy rule it looked at, the prompt it used, it's all logged. Automatically creates this clear audit trail. Completely different from those old opaque models.
Speaker 1
And this matters so much right now because let's face it, the regulatory environment isn't getting any simpler.
Speaker 2
Not at all. Compliance teams need solutions that are flexible, transparent, and, frankly, more human-aligned.
Speaker 1
Yeah, it feels like it's about avoiding the traps of the past. Maybe where AI was sometimes overpromised.
Speaker 2
Right. Through approaches that were maybe fundamentally flawed. This is about delivering real measurable value now.
Speaker 1
So when you boil it down, the key insight seems to be the future of AI in compliance. It's not really about which model you pick anymore. It's about how you are using it. It's about your team's, your firm's fluency—how well you talk to the machine.
Speaker 2
That's exactly right. The AI arms race, you know, the one focused purely on the model—that's over now. It's all about strategy.
Speaker 1
Strategy and how you use it.
Speaker 2
Yeah. The firms that win are going to be the ones who master building these agentic systems. Systems that understand business context, apply specific—
Speaker 1
Company policies, and deliver that contextual intelligence.
Speaker 2
Through clear, purposeful, nuanced communication with the AI. So the real question for you listening is: what will your firm do to become fluent in this new language of agents?
The model wars are over. The next wave of AI innovation in compliance isn't about who has the best-trained model; it's about who knows how to communicate with it.
For years, AI innovation in compliance has focused on a single idea: the smarter the model, the better the results. Custom-trained NLP engines were touted as the key to transforming advertising and marketing reviews. Vendors competed over who had trained more models, on more data, with better algorithms.
But here’s the thing: that conversation no longer reflects where the technology is or where it’s going.
Modern large language models (LLMs) like OpenAI's GPT-4, Google's Gemini, and Anthropic's Claude have changed the game. These models are massively capable right out of the box. They've been trained on such vast, diverse, and semantically rich data that incremental differences in performance between custom-trained versions are often negligible. In our experience, custom training can even make things worse: we've found it often introduces brittleness and overfitting that degrade performance in real-world applications.
The real question is no longer, “Which model are you using?”
It’s “How are you using it?”
And perhaps more importantly: “Do you know how to communicate with it?”
This shift from model obsession to architectural intelligence is already reshaping how the smartest firms think about AI in compliance. And it's setting the stage for a new class of systems, where the power of the LLM is fully realized not through brute-force training, but through the intelligence and clarity of the framework around it.
Perigon VP of Product Zach Bartholomew recently weighed in with a simple but powerful framing:
If we go back a couple of years ago and revisit the trending discourse around AI and Compliance at the time, the focus was squarely on training. Feed the machine thousands of documents, then fine-tune it to understand your firm's voice, rules, and priorities. The idea was logical at the time: more data equals better learning. Unfortunately, after much experimenting, testing, and learning, we found that the "build your own bespoke model" approach hits walls quickly in practice:
• High operational overhead: Training a model takes time. Retraining it when policies or priorities change takes even more. And in a regulatory environment that evolves constantly, lag time equals risk.
• Burden shifts to compliance teams: Speaking of high operational overhead, custom-trained models require input, oversight, and correction from compliance professionals. This asks already over-extended compliance teams to take on the role of AI trainers in addition to their core responsibilities.
• Poor generalization: A model trained on yesterday’s data often fails to handle tomorrow’s edge case. Worse, it may reinforce patterns that are no longer relevant or even compliant.
• Vendor lock-in and opacity: Many custom-trained models are black boxes. Clients can’t easily interrogate why the model flagged something (or didn’t) and have little recourse when outputs fail to meet expectations.
To be clear, we're not saying that NLP and model tuning have no place; our position is simply that they're no longer the defining edge. If everyone is using roughly the same class of foundational models (and they are), then the differentiator becomes everything else.
This is where agentic AI architecture comes in.
Instead of trying to mold the model itself, agentic systems use the model as a powerful general-purpose engine and then build structured, intelligent layers around it to guide its behavior. These agents don’t just process data; they’re experts acting with intention.
Think of it like building a team of specialists. Each agent is designed to handle a particular task: parsing disclosures, checking tone, flagging high-risk terms, assessing suitability to internal policies. They’re aligned with your workflows, configured with your rules, and perhaps most importantly – they communicate with the LLM on your behalf using structured prompts, metadata, and decision trees.
In other words, agentic systems work because they speak the language of the LLM fluently. They don’t try to brute-force intelligence into the model. They communicate clearly with it and get better results because of it.
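To make the shape of this concrete, here is a minimal sketch in Python. It is illustrative only: the `ReviewAgent` class and the `call_llm` stand-in are our own placeholders for whatever vendor SDK a firm actually uses, not any particular product's API.

```python
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Stand-in for your LLM client of choice (OpenAI, Gemini, Claude, ...)."""
    raise NotImplementedError("wire up your vendor SDK here")

@dataclass
class ReviewAgent:
    """One specialist: a single task, guided by firm-specific rules."""
    name: str
    task: str
    firm_rules: list[str] = field(default_factory=list)

    def build_prompt(self, document: str) -> str:
        # Structured prompt: role, task, firm policy context, then the document.
        rules = "\n".join(f"- {rule}" for rule in self.firm_rules)
        return (
            "You are a compliance review specialist.\n"
            f"Task: {self.task}\n"
            f"Apply only these firm policies:\n{rules}\n"
            "Answer FLAG or PASS, citing the policy you relied on.\n"
            f"--- DOCUMENT ---\n{document}"
        )

    def review(self, document: str) -> str:
        return call_llm(self.build_prompt(document))

# A small team of specialists, all sharing one general-purpose model.
agents = [
    ReviewAgent("disclosure-parser", "Verify required financial disclosures",
                ["Performance claims must carry the standard risk disclosure"]),
    ReviewAgent("tone-checker", "Check marketing tone",
                ["No guarantees of returns", "No unsubstantiated superlatives"]),
    ReviewAgent("term-flagger", "Flag high-risk terms",
                ["Terms such as 'risk-free' or 'guaranteed' are prohibited"]),
]
```

Note that nothing here touches the model's weights: the firm's voice, rules, and priorities live entirely in the layer around the LLM.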
If there’s one idea that defines where AI is heading, it’s this: the future belongs to those who know how to communicate with the machine.
We’ve entered the era of AI fluency. Models are incredibly powerful, but they are only as good as the prompts and instructions they receive. This is what makes agentic design so powerful. It formalizes how we guide the model. We lead with clarity, structure, and intent instead of relying on a fuzzy assumption that a “trained” model will magically know what we want.
Think about how a seasoned compliance reviewer works. They don’t just skim a document for keywords. They assess tone, intent, clarity, positioning. They bring policy context and a structured mental checklist. Agentic AI attempts to replicate that kind of interaction not through more training data, but through better communication frameworks.
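As a hedged sketch of that checklist idea, reusing the illustrative `ReviewAgent` placeholder from the example above: each specialist runs in order, and any flag escalates to a human reviewer rather than auto-approving.

```python
# Illustrative decision tree: mirror a reviewer's mental checklist.
def route(document: str, agents: list[ReviewAgent]) -> str:
    for agent in agents:                      # each checklist item, in order
        verdict = agent.review(document)
        if verdict.startswith("FLAG"):
            # Anything flagged goes to a human, with the reason attached.
            return f"escalate-to-human ({agent.name}: {verdict})"
    return "auto-approve"                     # every specialist passed it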
And just like human reviewers, the system can improve over time. Agents can be refined. Prompts can be tested and versioned. Rules can be adjusted without needing to “retrain the model” from scratch.
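As a purely hypothetical illustration of that versioning idea, prompts and rules can live in plain configuration that a compliance team edits directly, with no retraining step; the policy wording below is invented for the example.

```python
# Hypothetical prompt registry: each revision is data, not model weights.
PROMPT_VERSIONS = {
    "tone-checker": {
        "v1": "Flag promissory language in the text below.",
        "v2": ("Flag promissory language in the text below. "
               "Per updated firm policy (illustrative), 'aims to' is "
               "acceptable; 'will deliver' is not."),
    },
}
ACTIVE_VERSION = {"tone-checker": "v2"}  # roll forward (or back) instantly

def get_prompt(agent_name: str) -> str:
    return PROMPT_VERSIONS[agent_name][ACTIVE_VERSION[agent_name]]
```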
This is how you build real, scalable intelligence – not by making the model smarter, but by making the system around it more fluent and more structured.
Let’s ground this in outcomes. Firms that adopt agentic AI architectures are seeing measurable improvements:
And importantly, these benefits aren’t theoretical. They don’t rely on endless training cycles or proprietary black-box models. They’re the result of smart, transparent architecture, not smarter models.
The regulatory environment isn’t getting any easier. Between evolving SEC and FINRA expectations, heightened scrutiny on marketing practices, and growing pressure to modernize infrastructure, compliance teams are being asked to do more, faster, with less and it will only
continue to intensify as regulators themselves become more concerned with, and knowledgeable of AI.
AI can (and should) help. But only if it’s implemented thoughtfully.
We’ve seen too many firms fall into the trap of overpromising with AI, only to be disappointed when a custom-trained model couldn’t keep up with the real-world complexity of compliance work. The problem wasn’t ever the technology, though. It was the approach.
Agentic architecture offers a better path forward. One that’s more flexible, more transparent, and ultimately more human-aligned.
There was a time when choosing the “right” model was the most strategic decision a firm could make around AI. That time is over.
Today, everyone has access to the same class of models. The firms that win will be the ones who know how to use them, who build systems that can interpret business context, apply firm-specific policies, and deliver contextual intelligence, not just a computational response.
Agentic architecture is how we get there. Not by making models better. But by building systems that know how to speak clearly, purposefully, and with the kind of nuance that compliance work demands.