AI that Answers to Compliance

Overview

Watch

As regulators increase their focus on AI governance in compliance workflows, firms are being forced to take a closer look at how their AI tools actually operate. In this Red Oak Fireside Chat, CTO Rick Grashel explains how Red Oak’s AI was built to meet the level of precision, transparency, and oversight regulators are now calling for.

Critical Questions Powered by Red Oak

Compliance-Grade AI is AI that is designed to operate within regulatory requirements, not learn compliance over time through opaque model training. 

Unlike many AI-native tools, Compliance-Grade AI ensures that every interaction is auditable, reproducible, and tied directly to the compliance record. It captures what was asked, what the AI returned, and how that output was used—so firms can explain and defend decisions to regulators with confidence. 

In regulated environments, AI isn’t valuable unless it can meet books and records obligations, support audit trails, and align with firm policies. Compliance-Grade AI is built to do exactly that. 
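
To make that concrete, here is a minimal sketch, in Python, of the kind of record such a system might persist for each AI interaction. The class and field names are hypothetical illustrations, not Red Oak's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIInteractionRecord:
    """One auditable AI interaction, tied to its compliance record."""
    review_id: str       # the compliance record this interaction belongs to
    prompt: str          # exactly what was asked of the AI
    response: str        # exactly what the AI returned
    model_version: str   # pinned so the output can be explained later
    disposition: str     # how the output was used, e.g. "advisory-only"
    asked_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Illustrative usage: the record is created at the moment of the interaction.
record = AIInteractionRecord(
    review_id="REV-1042",
    prompt="Does this piece contain promissory language?",
    response="Paragraph 2 may contain promissory language.",
    model_version="model-2025-01",
    disposition="advisory-only",
)
```

Capturing the record contemporaneously, at the moment of the interaction, is what makes the decision defensible later.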

AI can be safe in financial services compliance, but only when it’s used deliberately and governed properly. 

The risk isn’t AI itself; it’s uncontrolled AI. Tools that rely on probabilistic outputs, lack auditability, or introduce model drift can create regulatory exposure if they’re used in decision-critical parts of the compliance process. 

Safe AI use in compliance requires clear governance, defined workflows, human validation where necessary, and full transparency into how AI outputs are generated and stored. Without those controls, AI adoption can actually increase risk rather than reduce it. 

AI works best in compliance when it’s applied to supportive, non-deterministic tasks, such as early-stage document analysis, pattern identification, or surfacing potential issues for human review. 

However, AI should not be relied on for final approval decisions, regulatory recordkeeping, or any function that requires absolute precision and repeatability. These areas demand deterministic outcomes, not probabilistic ones. 

The most effective compliance platforms use AI selectively, placing it where approximation is acceptable and backing it with governance and controls wherever regulatory risk exists. 

Transcript

Speaker 1: 00:05
Rick, first of all, good to see you. Thank you for joining me.

Speaker 2: 00:08
Absolutely.

Speaker 1: 00:10
I've been trying to carve some time out for us to do this for a while. And this is a little bit of an early sneak peek for some of the people who are going to be listening to us. For those who haven't heard, we're releasing a white paper soon, and we have a really good webinar coming up shortly after the new year. So we just wanted to give everybody a little sneak peek at the kinds of things we're thinking about, specifically when it comes to AI. I'm super excited. Thank you for the time.

Speaker 2: 00:35
Sure, absolutely. Yeah.

Speaker 1: 00:37
So one of the things I've really been wanting to ask you about, and I feel would provide a lot of value for people in our sphere, people who are thinking about AI, and the compliance industry generally. I think you're particularly qualified to answer because you've been building compliance systems for a long time, and we've been building compliance systems as a company since way before AI. We all understand that AI is kind of a buzzword, right? It's the hot topic. Everybody wants to talk about AI. Can we use AI for this? Can we use AI for that? So I first want to ask you: when did you first get the idea that AI could be effective or useful for compliance professionals? And when you came to that realization, did anything give you a little bit of hesitancy or pause? Does anything about AI in compliance make you nervous?

Speaker 2: 01:36
That's a great question, because the reality is that AI has actually been around for a while, especially when we talk about machine learning and things of that nature. But your question gets to the point of when I realized it could be applied in a compliance context, because that introduces a different level of scrutiny and a different level of precision. And the interesting thing is, yes, I've been building enterprise-grade compliance systems for decades. But the first time I realized AI could have applicability specifically in the financial services compliance arena came out of healthcare, believe it or not.

Speaker 1: 02:31
Interesting.

Speaker 2: 02:32
So before Red Oak, I was involved in creating enterprise compliance data management systems for the healthcare industry. And in that kind of industry, we're talking human lives. We're talking about things like patient fatality rates, patient identification, and clinical decisions made on the spot in ER scenarios, life-critical situations. Those require a level of precision such that you're not making the wrong diagnostic decision in a critical moment, especially in a critical care scenario. When I first saw that they were applying artificial intelligence, AI models and machine learning, to identify patients and to read things like radiographs and X-rays, to try to see things a radiographer might miss, something so small and so slight, and I saw that physicians and doctors responsible for the care of critical patients were trusting it, I knew at that point this was something serious, something being taken seriously, that could certainly be applied to financial services, which has a different kind of criticality to it, for sure. The second it was trusted by healthcare in certain scenarios, we had to start looking at it in our industry.

Speaker 1: 04:17
Yeah, and I think that's particularly salient. And you hammered on a word that I want to dig in on just a little bit: you've used the word precision a couple of times. Now, to cut to the chase, we all know there are a lot of pressures inside the financial services industry generally, and on compliance teams themselves, to start implementing and adopting AI solutions, whether for ad review or any other task that compliance people deal with on a daily basis. But we know compliance is different. This is not the car industry, this is not quick-serve restaurants, this is not mortgage or real estate. Compliance is particularly different, and we know that compliance demands precision, not prediction, not guesswork. You must be as precise as possible in the work that compliance professionals do day to day. And in that context, we've seen the arrival of a lot of AI-native solutions. Is AI native the right way to be thinking about this, particularly for such a regulated environment?

Speaker 2: 05:33
It's a really hard term, because if I asked anybody what AI native means, I think you'd get any number of different answers depending on who you ask. Everybody is under pressure now, from the top down in every organization: you must use AI, you must use AI, does it have AI? Everything is AI. So that's a really important question, because what does the term AI native really mean? If I were to ask anybody watching this what AI native means, they probably couldn't say with any level of specificity beyond "it uses AI," right? So then it becomes a question of, well, how does it use AI? I have a lot of pressure on me from my executives, from my chief compliance officer: we need to use AI, we need to use AI.

Speaker 1: 06:30
Can I jump in here for just a quick second, Rick? I think you really hit the nail on the head. We interact with a lot of people in this space, and to cut to the chase, here's the deal: we know that oftentimes compliance teams aren't making the decisions on these kinds of tools for themselves. A lot of these decisions are made at the highest levels and dictated down through the rest of the organization, including compliance. So I think this is something people are really curious about, and there's a lot that could be said on that point alone.

Speaker 2: 07:07
Absolutely. And what I can say about Red Oak is that for 15 years we have been a trusted vendor in the marketing compliance review space, and we are outcome focused. We want to make sure the outcomes of the compliance function are compliant, auditable, satisfy books and records, and are precise to the level of risk acceptable to the firms we serve. And what that means is that there are scenarios where AI is actually perfect in terms of approximation. There might be upfront tasks, or tasks along the way in a workflow, where approximation is completely acceptable: a question you ask about a document, a question about what a particular filing should contain, or whether it should be filed with FINRA. There might be approximations that are completely acceptable in a document review. However, there are other decisions where you are required to have a level of precision where you can't afford hallucinations. You can't afford a situation where, if you ask the same question again about the same document, you get a different answer, or no answer at all. We've all heard about this. We've all experienced it ourselves. And we need to get real about how we're actually utilizing AI. Where do we use it? Are we using it in the right places in our compliance process? Because, not unlike healthcare, we can use it, we can really use it in effective places, but there are places where it should not be used. And that's where critical decisions are made, where precision is absolutely imperative.

Speaker 1: 09:11
Well, and I think that's one of the ways we've earned this trust, to your point. Red Oak has been serving these clients for 15 years in a way that shows we really understand their work and how they do it. And while a lot of people may not be able to explain exactly what AI native means, because frankly, put on the spot here, I'm not sure I can either, one thing I do know is that AI native seems to imply that you start with AI and then overlay some compliance on top of it. In reality, that's the exact opposite of the approach you, the rest of the tech team, and everybody else involved took in designing our system, because we have 15 years of compliance-first, outcome-oriented work behind it. At Red Oak, we start with compliance. We've established that foundation of solid compliance, and we're figuring out how to tactically and properly use AI where it makes the most sense and where it's not going to introduce unnecessary risk. I think that's really the key to it. So maybe I can't explain what AI native is, but I can tell you what it's not.

Speaker 2: 10:33
Yeah, a hundred percent. And you used a term that I've heard throughout the industry, and I think it's dangerous: AI first. The presumption of AI first, I'm sorry, to me that's actually a disservice to whatever kind of business process you're implementing. It's not AI first, it's compliance first in our business. We always have to think compliance first. And compliance is policy, so it's policy first. If my policy and my business process allow me to utilize AI in a particular area of a review process, and it doesn't introduce risk to my compliance process, and it actually accelerates my time to market and reduces my risk, that's fantastic, and that's where AI is going to go. It's not necessarily first, it's not necessarily last, but only in the places where it makes sense. In the places where it doesn't make sense, we're going to use other technologies that do, such as transcription, such as OCR. There are other, more appropriate, more precise technologies for certain tasks at hand. So we can't just say AI first. We can say that we are AI enabled, but we only use it where it makes sense in the entirety of the compliance process.

Speaker 1: 12:07
And I think that's exactly it. And I'm frustrated that this is only a little bit of a teaser, because obviously we can't get ahead of ourselves. We have this webinar coming up very soon, and, as you know, we have this AI white paper coming out right at the beginning of the year, where we're going to introduce, in detail, this idea of compliance-grade AI: what it means and what it actually looks like. But again, for purposes of a sneak preview, what does compliance-grade AI mean in practical terms for all of these firms trying to modernize ad review or supervision or any of these other processes they deal with on a daily basis?

Speaker 2: 12:52
Right. So compliance-grade solutions, period, demand that you always keep certain things at the forefront. It's always in a regulatory context, and it's always the basics. Am I 17a-4 compliant? Are my books and records satisfied? Am I able to produce an audit trail that goes from the beginning to the end of any approval cycle for anything that needs to be compliant? Is it auditable? Is it producible? What are my data storage practices? All of those things have to be satisfied before we even talk about compliance-grade AI. You add AI into that, and guess what? We're talking about all of that stuff too. So not only am I asking the AI a question, maybe where it makes sense inside of a review process, I'm also making sure that everything I ask the AI is stored contemporaneously, at the time I asked it. I'm storing the exact response I got back from the AI. I'm keeping that along with the entirety of the record of that review, so I can produce it to regulators if necessary under audit. That is what compliance-grade AI means. It's not just asking a question like, tell me if I have promissory language inside a piece of marketing material. It's yes, tell me that, but also store all of the information around the interaction with the AI in that process, along with the entire record, and be able to produce it. That's real governance, that's real understanding, that's real compliance-grade AI.

Speaker 1: 14:44
So speaking of governance, you and I were having a conversation a while ago, thinking of different analogies for this governance piece alone, because it's so important. And you brought up the idea of an airplane. You said something to the effect of: you wouldn't jump in an airplane and fly it if it didn't have built-in redundancy, backup systems, a black box, control mechanisms generally, so that if something mechanically goes wrong with the plane, there's a backup and the plane doesn't go down, and you have a chance. And let's say that backup fails and you actually have to jump out of the airplane with a parachute. What if that parachute fails? Guess what: there's redundancy, there's governance. You have a backup chute in case the first one fails. And I believe what you're getting at is that our perspective at Red Oak is the same idea, the same approach. So why would you go into a highly regulated, very demanding situation that requires, once again, to use your word, precision, without any kind of backup?

Speaker 2: 15:55
Oh, absolutely. You would never want to do that. So what you want is an understanding that where I introduce AI, I'm not going to get exact precision. I'm going to get approximation. I may ask the AI the same question about the same thing and get different answers; we've all experienced this with ChatGPT. And that approximation may be totally fine. But the idea is, in the event of hallucination and failure, which we've all experienced, what happens when that hallucination occurs in that approximation during the approval process? Is your platform configurable enough to handle the scenario where you now need a check, where someone can actually validate that approximation and make sure it's valid? They can validate it, invalidate it, or augment it. You have to have a configurable workflow, a configurable business process that protects you in the event the AI "fails" and you need that backup system. That's exactly what a complete compliance-grade AI platform is, not just a platform where we ask it a question and it gives us answers without those necessary systems and capabilities in place. Those are exactly the kinds of capabilities our platform has.

Speaker 1: 17:19
And by the way, it's not just configurable and compatible with policies and workflows generally. If it's really compliance grade, it should be compatible with your firm's existing policies and your existing workflows. AI should not force you to change the way you do business. It should enhance your ability to do business more effectively and a little faster, sure. But I believe, and I think all of us here at Red Oak believe, that if you fundamentally have to change the way you work just because of a mandate to use a particular kind of tool, that's not really compliance grade, and that's not really what we're trying to accomplish.

Speaker 2: 17:57
Exactly. And I think it's important to reiterate: in the end, we're about compliance-grade outcomes.

Speaker 1: 18:04
Yes.

Speaker 2: 18:05
That is what we've always been about at Red Oak. It's about outcomes, and where we can improve those outcomes, make them more efficient, reduce risk, or improve performance by using AI in the right places in those business processes, we absolutely will do that where it makes sense. But the thing we can guarantee is that we're not going to do it in the places where it doesn't make sense, the places that require a level of precision AI can't provide. And our platform is configurable enough to allow the firms we serve to choose the level of risk, where they would like to introduce AI, and how they would like to do it, as it makes sense according to their policies. That's what Red Oak is about. We've always been about those outcomes, and AI is not going to change that mandate at all.

Speaker 1: 19:02
And I think that's so spot on. You're getting me all fired up, as you always do. We're going to call it here for today, so thank you so much for making some time and joining me. Again, to everybody listening, this is just a little bit of a preview. We have a white paper coming that will explain what compliance-grade AI means and give everybody much more insight into the Red Oak philosophy on AI overall, as well as a really cool webinar focused on AI and how to use it the proper way for compliance, coming up shortly after the start of the year. So, Rick, I can't thank you enough for the time.

Speaker 2: 19:38
Thank you so much. Yep, appreciate it.

Read the Blog Post

The Problem With “AI-First” Thinking in Compliance 

AI isn’t new. Machine learning, automation, and pattern recognition have powered regulated systems for years. What is new is the expectation that AI should now be embedded everywhere—often without a clear understanding of what that means in a regulated environment. 

That’s where things get dangerous. 

Compliance isn’t about prediction. It’s not about probability or approximation. It’s about precision, repeatability, and auditability. If you ask the same question tomorrow and get a different answer, or no answer at all, that’s not innovation. That’s exposure. 

Many so-called AI-native solutions start with the model and attempt to layer compliance on top. In highly regulated environments, that approach gets the order of operations exactly backward. 

AI-first thinking is a disservice to compliance. The correct framing, and the only defensible one, is compliance first. 

Where AI Helps in Compliance (and Where It Absolutely Doesn’t) 

To be clear: AI can play a meaningful role in compliance workflows. 

There are areas where approximation is not only acceptable but useful. Early-stage document analysis. Identifying potential disclosures. Surfacing patterns or anomalies for human review. 

But there are also points in every compliance process where approximation is completely unacceptable. 

  • Final approval decisions 
  • Regulatory recordkeeping 
  • Books and records obligations 
  • End-to-end audit trails 

These require determinism, not probability. 

Hallucinations, model drift, or inconsistent outputs aren’t just technical nuisances—they’re regulatory liabilities. 

The problem isn’t AI itself. The problem is using AI in the wrong places without the right controls. 

What “Compliance-Grade AI” Actually Means 

This is where Red Oak’s philosophy fundamentally diverges from much of the market. 

In our upcoming white paper, we introduce the concept of Compliance-Grade AI—AI designed to perform compliance, not “learn” it over time through opaque training processes. 

In practical terms, Compliance-Grade AI means: 

  • Every AI interaction is captured, stored, and tied directly to the compliance record 
  • Every output is auditable, reproducible, and defensible 
  • Every workflow includes governance, controls, and human validation where required 
  • Every deployment aligns with your firm’s existing policies—not the other way around 

If regulators ask how a decision was made, firms must be able to show what was asked, what was returned, and how that output was used, not just point to a final approval. 

Anything less than that is incomplete governance. 
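
As a rough illustration of the first two points, consider a wrapper that refuses to return an AI answer until the full interaction has been stored against the compliance record. This is a hedged sketch: `ai_client.complete` and `audit_store.append` are hypothetical stand-ins, not references to any actual product API:

```python
import json
from datetime import datetime, timezone

def ask_with_audit(ai_client, audit_store, review_id: str, prompt: str) -> str:
    """Ask the AI a question, but persist the full interaction with the
    compliance record before the output is ever used downstream."""
    response = ai_client.complete(prompt)          # hypothetical model call
    audit_store.append(review_id, json.dumps({     # hypothetical WORM-style store
        "asked_at": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,                          # what was asked
        "response": response,                      # what came back
        "used_as": "advisory-only",                # how the output may be used
    }))
    return response
```

Because the storage call sits between the model and the caller, there is no code path where an AI output influences a review without leaving an auditable trace.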

AI Governance Isn’t Optional. It’s the Safety Net 

During Red Oak's Fireside Chat, CTO Rick Grashel offered a simple analogy: you wouldn’t fly an airplane without redundant systems, backup controls, and a black box. And you certainly wouldn’t jump out with a single parachute and no backup. 

Yet many AI tools entering compliance workflows operate without equivalent safeguards. 

What happens when the model fails? 
When it produces conflicting outputs? 
When policies change? 

Without configurable workflows, validation steps, and fallback mechanisms, AI doesn’t reduce risk; it quietly compounds it. 

Compliance-grade platforms assume failure is possible and are designed accordingly. 
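
The parachute analogy maps naturally onto code: an AI-assisted step should never be the only path to completion. Below is a minimal sketch, assuming hypothetical callables rather than any real workflow engine, in which human validation backs up the model and manual review backs up both:

```python
def run_ai_assisted_step(get_ai_suggestion, human_validate, manual_review):
    """One configurable workflow step. The AI is advisory; human validation
    is the backup chute, and full manual review is the reserve."""
    try:
        suggestion = get_ai_suggestion()
    except Exception:
        return manual_review()            # model unavailable: go fully manual
    verdict = human_validate(suggestion)  # validate, invalidate, or augment
    if verdict is None:                   # reviewer rejected the suggestion
        return manual_review()
    return verdict                        # the human decision is the record

# Example wiring with stand-in callables:
result = run_ai_assisted_step(
    get_ai_suggestion=lambda: "possible promissory language in paragraph 2",
    human_validate=lambda s: f"{s} (confirmed by reviewer)",
    manual_review=lambda: "manual review completed",
)
```

Whatever the model does, the step completes, and what it completes with is a human-owned decision.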

The Real Risk Facing Compliance Teams Today 

The biggest risk right now isn’t that firms won’t adopt AI. 

It’s that they’ll adopt it too quickly, under pressure, and without fully understanding how it affects their regulatory obligations. 

AI should make compliance teams faster and more effective, not force them to reengineer proven processes or accept new forms of risk just to keep pace with a trend. 

For more than 15 years, Red Oak has focused on one thing: compliance-grade outcomes. AI doesn’t change that mandate. It simply becomes another tool, used deliberately, governed rigorously, and deployed only where it truly makes sense. 

What’s Next 

Register for our upcoming live webinar on AI in Compliance, where we’ll break down what Compliance-Grade AI actually looks like in practice, and how to use AI in regulated workflows without introducing unnecessary risk. 

If AI is part of your compliance roadmap, and let’s be honest, it probably is, the most important question isn’t whether to use it. 

It’s whether you can explain it, defend it, and govern it when it matters most. 

Contributor

Rick Grashel is the Chief Technology Officer and Co-Founder of Red Oak. Connect with Rick on LinkedIn.

Fletcher Stubblefield is a Senior Product Marketing Manager at Red Oak. Connect with Fletcher on LinkedIn.