Summary
This article argues that the focus of AI innovation in compliance, particularly for ad review, is shifting away from developing and training customized large language models (LLMs). Rather than striving for incrementally "smarter" models, it emphasizes that modern, readily available LLMs are already highly capable thanks to their extensive training. The core argument is that "agentic AI" architecture, which builds intelligent layers and structured communication frameworks around these powerful foundation models, offers a more effective and adaptable approach. This method prioritizes clear communication and strategic prompting to guide the LLM's behavior, yielding better results, greater efficiency, and lower operational overhead than the outdated custom-model-training paradigm.
The Death Of The Custom Model: Why Agentic AI Is The Future Of Ad Review
The model wars are over. The next wave of AI innovation in compliance isn't about who has the best-trained model; it's about who knows how to communicate with it.
For years, AI innovation in compliance has focused on a single idea: the smarter the model, the better the results. Custom-trained NLP engines were touted as the key to transforming advertising and marketing reviews. Vendors competed over who had trained more models, on more data, with better algorithms.
But here’s the thing: that conversation no longer reflects where the technology is or where it’s going.
Modern large language models (LLMs) like OpenAI's GPT-4, Google's Gemini, and Anthropic's Claude have changed the game. These models are massively capable right out of the box. They've been trained on such vast, diverse, and semantically rich data that incremental differences in performance between custom-trained versions are often negligible. In our experience, custom training can even make things worse: fine-tuned models often introduce brittleness and overfitting that degrade performance in real-world applications.
The real question is no longer, “Which model are you using?”
It’s “How are you using it?”
And perhaps more importantly: “Do you know how to communicate with it?”
This shift from model obsession to architectural intelligence is already reshaping how the smartest firms think about AI in compliance. And it’s setting the stage for a new class of systems, where the power of the LLM is fully realized not through brute-force training, but through the intelligence and clarity of the framework around it.
Custom Model Training: A Relic of the Past
Perigon VP of Product Zach Bartholomew recently weighed in with a simple but powerful framing:
If we go back a couple of years and revisit the trending discourse around AI and compliance, the focus was squarely on training: feed the machine thousands of documents, then fine-tune it to understand your firm's voice, rules, and priorities. The idea seemed logical at the time: more data equals better learning. Unfortunately, after much experimenting, testing, and learning, we found that the "build your own bespoke model" approach quickly hits walls in practice:
- High operational overhead: Training a model takes time. Retraining it when policies or priorities change takes even more. And in a regulatory environment that evolves constantly, lag time equals risk.
- Burden shifts to compliance teams: Speaking of high operational overhead, custom-trained models require input, oversight, and correction from compliance professionals. This asks already over-extended compliance teams to take on the role of AI trainers in addition to their core responsibilities.
- Poor generalization: A model trained on yesterday's data often fails to handle tomorrow's edge case. Worse, it may reinforce patterns that are no longer relevant, or even no longer compliant.
- Vendor lock-in and opacity: Many custom-trained models are black boxes. Clients can’t easily interrogate why the model flagged something (or didn’t) and have little recourse when outputs fail to meet expectations.
To be clear, we're not saying that NLP and model tuning have no place. Our position is that they're no longer the defining edge. If everyone is using roughly the same class of foundation models (and they are), then the differentiator becomes everything else.
Agentic AI: A Smarter Way to Work with the Model
This is where agentic AI architecture comes in.
Instead of trying to mold the model itself, agentic systems use the model as a powerful general-purpose engine and then build structured, intelligent layers around it to guide its behavior. These agents don’t just process data; they’re experts acting with intention.
Think of it like building a team of specialists. Each agent is designed to handle a particular task: parsing disclosures, checking tone, flagging high-risk terms, assessing suitability against internal policies. They're aligned with your workflows, configured with your rules, and, perhaps most importantly, they communicate with the LLM on your behalf using structured prompts, metadata, and decision trees.
In other words, agentic systems work because they speak the language of the LLM fluently. They don’t try to brute-force intelligence into the model. They communicate clearly with it and get better results because of it.
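To make that concrete, here's a minimal sketch of what one such specialist might look like. The agent name, the disclosure checklist, and the call_llm stand-in are all illustrative assumptions, not our actual implementation; the point is that the intelligence lives in a clear, structured prompt rather than in custom model weights.

```python
# A minimal sketch of one specialist agent. call_llm is a generic
# stand-in for whatever LLM client a firm uses; the agent, checklist,
# and prompt wording are illustrative assumptions.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your provider's chat-completion call here.
    return "PRESENT - disclosure appears verbatim in the final sentence."

@dataclass
class DisclosureAgent:
    """Checks marketing copy for required disclosures via a structured prompt."""
    required_disclosures: list = field(default_factory=lambda: [
        "Past performance is not indicative of future results.",
    ])

    def build_prompt(self, copy: str) -> str:
        # The "intelligence" is a checklist-style instruction,
        # not a fine-tuned model.
        checklist = "\n".join(f"- {d}" for d in self.required_disclosures)
        return (
            "You are reviewing marketing copy for a financial firm.\n"
            "For each required disclosure below, answer PRESENT or MISSING,\n"
            "then give a one-line justification.\n\n"
            f"Required disclosures:\n{checklist}\n\n"
            f"Copy under review:\n{copy}"
        )

    def review(self, copy: str) -> str:
        return call_llm(self.build_prompt(copy))

agent = DisclosureAgent()
print(agent.review(
    "Returns of 12% last year! "
    "Past performance is not indicative of future results."
))
```

The design choice worth noticing: everything firm-specific sits in plain, inspectable text that compliance can read and change, while the foundation model stays untouched.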
The New AI Literacy: It’s Not the Model, It’s the Messaging
If there’s one idea that defines where AI is heading, it’s this: the future belongs to those who know how to communicate with the machine.
We’ve entered the era of AI fluency. Models are incredibly powerful, but they are only as good as the prompts and instructions they receive. This is what makes agentic design so powerful. It formalizes how we guide the model. We lead with clarity, structure, and intent instead of relying on a fuzzy assumption that a “trained” model will magically know what we want.
Think about how a seasoned compliance reviewer works. They don’t just skim a document for keywords. They assess tone, intent, clarity, positioning. They bring policy context and a structured mental checklist. Agentic AI attempts to replicate that kind of interaction not through more training data, but through better communication frameworks.
And just like human reviewers, the system can improve over time. Agents can be refined. Prompts can be tested and versioned. Rules can be adjusted without needing to “retrain the model” from scratch.
This is how you build real, scalable intelligence – not by making the model smarter, but by making the system around it more fluent and more structured.
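As a rough illustration of that "refine without retraining" idea, here is how prompt versioning might look in practice. The registry layout, version ids, and tone_check rule wording below are hypothetical; the takeaway is that tightening a policy means editing text in configuration and flipping a version, not kicking off a training run.

```python
# A sketch of versioned prompts kept in plain configuration.
# Version ids and rule wording are hypothetical examples.
PROMPT_REGISTRY = {
    "tone_check/v1": "Flag any copy that sounds promissory or guarantees returns.",
    "tone_check/v2": (
        "Flag any copy that sounds promissory, guarantees returns, "
        "or implies endorsement by a regulator."  # policy tightened in v2
    ),
}

ACTIVE_VERSIONS = {"tone_check": "v2"}  # roll forward (or back) per agent

def get_prompt(agent_name: str) -> str:
    """Resolve the currently active prompt version for an agent."""
    version = ACTIVE_VERSIONS[agent_name]
    return PROMPT_REGISTRY[f"{agent_name}/{version}"]

print(get_prompt("tone_check"))
```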
Real Results, Real Speed
Let’s ground this in outcomes. Firms that adopt agentic AI architectures are seeing measurable improvements:
- Reduction in review cycles: With better pre-submission feedback and cleaner first drafts, compliance reviewers spend less time sending materials back for revisions.
- Faster time to market: Marketing teams can get campaigns out the door faster, without sacrificing compliance rigor.
- Lower risk exposure: Intelligent agents ensure policies are consistently applied, edge cases are flagged early, and high-risk content doesn't get lost in the shuffle.
- Audit-ready transparency: Every interaction is documented. Every decision is traceable. Compliance can show not just what they did, but how and why.
And importantly, these benefits aren’t theoretical. They don’t rely on endless training cycles or proprietary black-box models. They’re the result of smart, transparent architecture, not smarter models.
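On the audit-trail point in particular, here is a minimal sketch of what "every interaction is documented" can mean in code. The JSONL format and field names are assumptions for illustration, not a prescribed schema.

```python
# A sketch of per-interaction audit logging: one traceable record
# per agent/LLM exchange. Field names and file format are illustrative.
import hashlib
import json
from datetime import datetime, timezone

def log_review(agent: str, prompt: str, response: str, decision: str,
               path: str = "audit_log.jsonl") -> None:
    """Append one traceable record per agent/LLM interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        # Hash gives a tamper-evident fingerprint of the exact prompt used.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt": prompt,
        "response": response,
        "decision": decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_review(
    agent="disclosure_agent",
    prompt="Check required disclosures in the attached copy...",
    response="MISSING - no past-performance disclosure found.",
    decision="returned_for_revision",
)
```

With records like these, compliance can reconstruct not just what was decided, but exactly which instructions and responses led there.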
Why It Matters Now
The regulatory environment isn't getting any easier. Between evolving SEC and FINRA expectations, heightened scrutiny on marketing practices, and growing pressure to modernize infrastructure, compliance teams are being asked to do more, faster, with less. And the pressure will only intensify as regulators themselves become more concerned with, and more knowledgeable about, AI.
AI can (and should) help. But only if it’s implemented thoughtfully.
We’ve seen too many firms fall into the trap of overpromising with AI, only to be disappointed when a custom-trained model couldn’t keep up with the real-world complexity of compliance work. The problem wasn’t ever the technology, though. It was the approach.
Agentic architecture offers a better path forward. One that’s more flexible, more transparent, and ultimately more human-aligned.
Final Thought: The AI Arms Race Is Over. Now It’s About Strategy.
There was a time when choosing the “right” model was the most strategic decision a firm could make around AI. That time is over.
Today, everyone has access to the same class of models. The firms that win will be the ones who know how to use them: who build systems that can interpret business context, apply firm-specific policies, and deliver contextual intelligence, not just a computational response.
Agentic architecture is how we get there. Not by making models better. But by building systems that know how to speak clearly, purposefully, and with the kind of nuance that compliance work demands.


