Red Oak Insights | AI that Answers to Compliance Webinar

Overview

AI is everywhere — but in compliance, prediction isn’t enough. Precision is required.

In this AI That Answers to Compliance webinar, CTO Rick Grashel and Senior Product Marketing Manager Fletcher Stubblefield break down what truly makes AI Compliance-Grade. From deterministic guardrails and agentic architecture to auditability under SEC 17a-4, they explore why AI in financial services must execute rules — not guess at outcomes — and how Compliance-Grade-AI™ reduces risk while improving operational efficiency.

Critical Questions Powered by Red Oak

Compliance-Grade AI™ is an AI architecture designed specifically for regulated industries where precision, auditability, and explainability are mandatory. Unlike traditional generative AI models that predict likely outputs based on historical data, Compliance-Grade AI™ executes predefined compliance rules within structured guardrails.

In financial services, this means AI systems must:

  • Operate with deterministic policy enforcement
  • Maintain full audit trails under SEC Rule 17a-4
  • Provide explainable, reconstructable decision paths
  • Support supervisory procedures and governance frameworks

If an AI-assisted decision cannot be fully reconstructed years later during a regulatory audit, it is not compliance-grade.

Predictive AI models are built to generate statistically probable outcomes. In marketing review, supervision, or trade surveillance, that can introduce unnecessary risk.

Compliance does not operate on probability — it operates on policy. If a regulation changes, a disclosure is updated, or internal supervisory procedures evolve, predictive models trained on historical data may continue making outdated assumptions unless retrained. That retraining process is often slow, IT-heavy, and operationally burdensome.

A compliance-first AI architecture combines predictive capability with deterministic guardrails, ensuring rules are enforced — not guessed.
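To make the "rules are enforced, not guessed" idea concrete, here is a minimal, hypothetical sketch in Python of a deterministic guardrail layer sitting on top of a predictive model's output. The rule IDs, phrases, and the `enforce_policy` helper are invented for illustration and are not Red Oak's actual rule set:

```python
from dataclasses import dataclass

@dataclass
class RuleResult:
    rule_id: str   # deterministic rule identifier (invented for this sketch)
    passed: bool
    detail: str

def enforce_policy(content: str, model_flags: list[str]) -> list[RuleResult]:
    """Apply deterministic rules after (not instead of) a predictive pass."""
    text = content.lower()
    results = []
    # Rule DISC-001: a required disclosure must appear verbatim; no model judgment.
    required = "past performance is not indicative of future results."
    results.append(RuleResult(
        "DISC-001",
        required in text,
        "required disclosure present" if required in text else "required disclosure missing",
    ))
    # Rule LANG-002: prohibited absolute claims, matched deterministically.
    banned = ["guaranteed return", "risk-free"]
    hits = [b for b in banned if b in text]
    results.append(RuleResult("LANG-002", not hits, f"prohibited phrases: {hits or 'none'}"))
    # The predictive layer's flags are recorded as advisories but never override a rule.
    for flag in model_flags:
        results.append(RuleResult("MODEL-ADVISORY", True, f"advisory: {flag}"))
    return results
```

Whatever the predictive layer suggests, the pass/fail determinations come only from the explicitly coded rules, so the same input always produces the same outcome.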

Auditability means every AI-assisted action can be reconstructed, documented, and produced under regulatory review. Explainability means the organization can clearly articulate why a decision was made, what rules were applied, and how the outcome was determined.

Regulators are increasingly signaling they will ask:

  • Which AI systems are involved in workflows?
  • What role did they play in decision-making?
  • Why did the system reach a specific outcome?
  • Can the full decision chain be produced?

Compliance-Grade AI™ embeds explainability into its architecture through agentic execution, documented decision paths, and structured audit trails — reducing regulatory exposure while maintaining operational efficiency.
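One way to picture a reconstructable decision path is a hash-chained, append-only audit record: each entry names the actor, the rule applied, and the outcome, and links to the entry before it. The sketch below is purely illustrative (field names and helpers are invented, and this is not legal guidance on SEC Rule 17a-4):

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(actor: str, action: str, rule_id: str, outcome: str, prev_hash: str) -> dict:
    """One link in an append-only decision chain; field names are invented."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),  # when it happened
        "actor": actor,      # human reviewer or named AI agent
        "action": action,    # what was done
        "rule_id": rule_id,  # which rule or policy was applied
        "outcome": outcome,  # the determination and why
        "prev": prev_hash,   # link to the previous entry
    }
    # Hash-chaining each entry to its predecessor makes later tampering detectable.
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash and check linkage: the 'can it be reproduced?' test."""
    for i, e in enumerate(chain):
        body = {k: v for k, v in e.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        if i > 0 and e["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

If any entry is altered after the fact, `verify_chain` fails, which is exactly the property a reconstructable decision path needs years later.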

Transcript

Speaker 1: 00:05
And we are live. Awesome. Welcome everybody. A little bit of housekeeping here as everybody comes in. We're going to go ahead and launch a quick poll before we do introductions and everything. And if you wouldn't mind, go ahead and fill out the poll question for us. We are going to circle back to this at the end of our webinar here. And hopefully use this to lead us into a pretty cool discussion about confidence in AI and how we're using AI in our roles and all sorts of other goodness like that. So, first of all, thank you everybody for coming. Really looking forward to jumping in. Give everybody a couple of minutes here just to continue filtering in. Just a quick reminder about the poll. Okay, and I think that's probably about good for everybody. And feel free to continue to answer as long as it's shown. Uh I want to go ahead and jump into a couple of introductions. Uh my name is Fletcher Stubblefield. I'm a senior product marketing manager here at Red Oak, and I am joined by Rick Grashel, our CTO.

Speaker: 01:40
And I'm Rick Grashel. I'm the CTO. I'm also uh one of the founders of Red Oak. So uh been doing this for 16 years now. Uh a lot of you know me, and uh yeah, I'm excited to do this.

Speaker 1: 01:55
And Rick, super excited to talk about AI with you once again. Um, I feel like we always get to nerd out together, but it's important stuff because we know that a lot of people are talking about AI. There's a lot of interest in it, and we have a very clear point of view when it comes to AI here at Red Oak. Um, and we're gonna talk about that. So we have launched our white paper recently, Compliance-Grade-AI™. And this is really a format for us to introduce everybody to Compliance-Grade-AI™. What does that mean? What is Red Oak's point of view? How are we thinking about AI? And more importantly, because of our philosophical approach to it, how are we building our solutions moving forward? Because at the end of the day, our goal is to build solutions that provide real value to all of our clients. And we believe that that is through Compliance-Grade-AI™. So we should be good with the poll now. Uh, so let's go ahead and remove that. Perfect. And displaying a couple of the results. So it looks like the majority of the room is at about a three, which is great. Thank you so much for this insight. And then, are you using AI in your professional role today? Most everybody in this room is already using AI. So that's wonderful. That gives us a great platform to start from. And before we dive in, a couple of things I wanted to mention, because I said that we have a very clear point of view when it comes to what Compliance-Grade-AI™ needs to look like. And there's a couple of things, right? It has to be compliance-first engineering. So when we're talking about Compliance-Grade-AI™, it's compliance first. It has to be an agentic architecture. We need to use a team of agents rather than just generative AI to generate and produce outputs. It needs to be model agnostic, and we're going to get more into that and why that may be more important than you may think on the surface.
As well as it needs to be built and designed by career compliance professionals, chief compliance officers, people who have experienced the risk and know what it means to be responsible for keeping your firm and your clients safe. So, with that, uh Rick, let's go ahead and get started. And I think that a good place for us to start, right off the top, when we think about our philosophical approach to AI and how we look at it, is what is Compliance-Grade-AI™? It's really a conversation about precision versus prediction, deterministic versus probabilistic. So if you wouldn't mind, would you just kind of share your thoughts on what does that mean, precision or prediction?

Speaker: 04:24
Yeah, it's interesting because precision and prediction, uh, this is super relevant when we talk about AI because models are prediction engines. That's what they are. AI is about prediction. And um, I gave a talk, not even a week ago, I would say, where I gave this example: it's not necessarily important for everyone to understand exactly how models work underneath, you know, behind the scenes, kind of under the covers, but to think of them almost as autocomplete. Imagine your text messaging app on your phone; they're like autocomplete, um, you know, devices on steroids, basically. I mean, based on the entire corpus and body of information that they know in the world, based on the prompt that they're given, at that point they're going to be able to more and more predict what's the next word, what should it be? And once you have a larger prompt, then ideally it should be able to predict a very large output just based on all of the history that it's been trained on, that you've interacted with it, that it's interacted with other people. So that's the prediction part, and that's super important because a lot of times, um, you actually want things to be predictive. You want them to be dependable, you want convention, you want your responses and the way that you behave and the way that things behave around you to be something that you can depend on. However, precision is necessary. And precision is that sometimes things go outside of the guardrails. Sometimes you need to apply a very specific prompt, let's say, in a very specific situation. Sometimes you need a response to ask a question in a very specific way, in a very specific situation. And that is the difference. It requires precision.
It requires the ability to not only depend on the predictability of the model and how it behaves and how it responds to you and how you interact with it, but it requires the precision of its response so that you can make an accurate decision, not just based on predictive behaviors and predictive outcomes, but based on the precision of the inputs, which are a lot of times largely deterministic.

Speaker 1: 07:00
Yeah, and that's great. And I think that, you know, and correct me if I'm wrong here, but in simpler terms, if we just want to give a high-level overview of how AI generally works, specifically just your run-of-the-mill generative AI, it's just predicting what comes next, whether that's a word or a pixel. So it analyzes a bunch of data, and based upon that data, what does it think is likely to come next? And, you know, this may be jumping ahead a little bit. We just talked about this idea of model agnosticism, and we're going to get into some model training versus maybe some other alternative approaches coming up. But if you have a model, and let's say this is a model that is advertised as a custom trained model, based on whether that be your company's proprietary data, whether that be trained on industry data, if a regulation changes and that model is just making predictions off of the data set that it has, what are some of the implications that could pop up with that? Because if it's just predicting what word might come next, I think there could be some problems there.

Speaker: 08:04
Yeah, there could be huge problems. I mean, and that's where you cannot depend on purely the predictive aspect of the model. You have to have other inputs. And heck, we don't even have to just talk about external regulations. What if your internal policies change? What if your internal behaviors change? What if your own written supervisory procedures change for how you supervise something or how you review a piece of material or this kind of stuff? If you're only depending on your history in order to determine, you know, that the predictive outcome is this, you're gonna be retraining so often that it becomes almost unmanageable. It's a constant burden. Um, so the idea is that you want to be able to apply a level of deterministic precision to the inputs and then have the model use its predictive behavior combined with that metadata, that information as it changes, in order to get the most accurate kind of response. The two are critical to work together. One doesn't replace the other, but training, in terms of using that as a way to achieve precision and accuracy, it's outdated these days. I mean, 2022 called; they want their approach back.
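A rough illustration of the approach Rick describes here: applying the current policy text as a deterministic input at call time rather than retraining the model on it. The function name, policy format, and prompt wording below are hypothetical, not Red Oak's actual implementation:

```python
def build_review_prompt(content: str, policies: dict[str, str]) -> str:
    """Assemble the model input from the *current* policy text at call time.

    When a policy changes, only this metadata changes; there is no retraining
    cycle. Policy IDs and wording are invented for this sketch.
    """
    policy_block = "\n".join(f"- [{pid}] {text}" for pid, text in sorted(policies.items()))
    return (
        "Review the material below against these firm policies, "
        "citing the policy id for every finding.\n"
        f"POLICIES:\n{policy_block}\n"
        f"MATERIAL:\n{content}\n"
    )
```

Updating the `policies` dictionary is the entire "change management" step: the next review call immediately reflects the new rule, which is the responsiveness being contrasted with slow retraining.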

Speaker 1: 09:29
Yeah, so it sounds to me like the conversation that we're really moving into is fundamentally one about: is your system executing predefined compliance rules, or is your system making guesses at what it thinks is likely to be compliant? And I think that one introduces unnecessary risk, and the other, the Compliance-Grade-AI™ approach, is really about being as deterministic as possible, because for compliance, we need to enforce rules and policies, not interpret them.

Speaker: 10:00
Yes, a hundred percent.

Speaker 1: 10:02
So, real quick here, we're gonna pause one more time. We're gonna launch one more poll question for everybody, and we thought that this would be a really good way to get all of you involved. So, we're reserving some time for the last topic that we cover, the last five, six minutes or so of this session, for what everybody in our audience would like to learn more about. Do you want to learn more about AI governance? Are you more interested in books and records and auditability and how AI is involved here? False positives? What is FINRA and the SEC's approach to AI? Pre-review for marketing, even where is AI going, the future of AI? So if everybody wouldn't mind, just take a second or two and vote on what you are most interested in, and then we will use the end of this presentation to do just a little bit deeper dive into what you all want to hear. Because at the end of the day, this webinar is for all of you. Um, we want to make sure that we get all of you the information that you want and need. So give everybody just a second or two here. Great. And I think we can probably go ahead and shut down that poll. I'm excited to see where this goes. Me too. And do we have results yet? Pre-review for marketing. Awesome. Can't wait. Myself, as a marketer, I'm a little biased. So we will definitely take some time to touch on pre-review after we get through some other topics here. Great. Thank you so much, everybody, for your participation. And we got out ahead of ourselves a little bit, but talking about precision and prediction, we understand that we need to be deterministic. We understand that AI systems, if they're going to be truly Compliance-Grade-AI™ systems, need to execute on rules and not guess at what might be compliant. That leads us into the next sort of meaty portion of this presentation, when we talk about what it means to be Compliance-Grade-AI™: auditability and explainability.
We say all the time that if you can't audit it, if it's not documented, if it can't be recreated, then it's not Compliance-Grade-AI™. Rick, would you mind just speaking for a minute or two on what auditability and explainability are when it comes to Compliance-Grade-AI™? What does that mean to Red Oak and Compliance-Grade-AI™?

Speaker: 12:24
Yeah, a hundred percent. And we have an entire group of people that do nothing but think about what our approach is. I mean, at Red Oak, we're thinking constantly about what our underlying goal is when we're offering any new feature with anything. And it's always, as you mentioned at the start, it is always going to be compliance first. We're always thinking immediately, okay, here's the feature, here's the need, right? Here's what our clients are asking us. We definitely want to do this. We find that there's huge value in offering this capability. And immediately we're jumping to, how do we make sure that this is compliant? That is question number two, because our roots and our entire connectivity platform, it's based in compliance. It has to be. So when we talk about Compliance-Grade-AI™, it's more than just, hey, we're going to have AI and we're going to actually be interacting with a model and we're going to be helping you out, you know, somewhat. It can't just be that, if it's Compliance-Grade-AI™, because being in the compliance industry, there's a lot of questions outside of just your little task that have to be answered sometimes. And these things happen under audit. And it comes down to the very basis of SEC 17a-4 compliance. What is an audit trail? An audit trail, honestly, is not just a listing of stuff that occurred; it's understanding that in the course of the inception of a compliance approval all the way through, let's say, the distribution process, at each step, what happened? Who was involved? What did they do? Were there other pieces of information that were added along the way? When was that added? And all of that combines to create the entire record. It's about books and records compliance, because a regulator's gonna come in. A customer complaint comes along, and before you know it, the regulator comes in and says, I want to see everything that John Smith did last year; show it all to me.
And you can't just throw it together. Now, obviously, we always want to only give exactly what's asked for, right? We always want to do that, but at the same time, you can't just throw a mess of things together and have it make sense. It needs to not only be auditable, it needs to be explainable. The regulator's obviously gonna ask, well, why this? Why this decision? Why did this happen? Right, and it actually gets to the point. We actually released a post on this, I think last week, but Greg Rupert from FINRA actually came out and said, they're not directly asking the question right now, but they're gonna ask the question: if your AI and your agentic autonomous AI agents are involved in your workflow, are involved in your reviews, are involved in your trade surveillance, are involved in your supervision processes, we're gonna start asking you about those. And not only are we gonna ask you about what happened, we're gonna ask you about why it happened. What did this agent decide to do? Why did it decide to do that? If it incorporated another autonomous agent, why was that decision made? What was the chain of events in that autonomous agent chain, and those decisions, in order to arrive at some sort of a compliance outcome? As AI gets involved more and more in our business processes, from compliance to connectivity all the way through to distribution, that question has to be answered. It has to be answered crisply. Otherwise, you are not going to be books and records compliant. So, Compliance-Grade-AI™ is that it has to be agentic, like you called out up front. It needs to be autonomous. We need to be able to make some decisions.
The AI has to be able to make some decisions, but we need to audit all the way along that chain: what happens, what exactly was involved, why did this sort of branch take place from one agent to another, and put all of that together inside of a record so that, again, under audit or under subpoena, if you have to produce that record, you can produce it in a human-readable and consumable way. And that's the takeaway of what Compliance-Grade-AI™ is. That's just the 17a-4 and production perspective of it. But compliance grade is also, how does the round trip work in terms of how do I curate my ongoing prompts? What's my continual improvement process, right? That has to be a part of every written supervisory procedure. It has to be. FINRA asks these questions during their audit: what is your continual improvement process? Um, so you have to have something inside of your application that supports that notion too, of that round-tripping of iterative improvement and continuous improvement as well. So these are things that we build into our AI solutions, and that we will continue in every aspect of every feature where we add in AI. And again, this is one of those things where a lot of vendors will hand wave and will not talk about these things in great detail. Totally. But I encourage everybody on this call, no matter what: we're all using AI all over our lives when it comes to our work. And when we are in a compliance industry, a regulated industry, we need to scratch the surface just a little bit more and ask that question of our vendor: when I do have to produce for books and records, what does that process look like? Does it cost me additional money? How fast can that be done? These are questions we're ready, willing, and able to answer, and do. Um, so that's what auditability and explainability are. Sorry.
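As a toy illustration of auditing an agentic chain as Rick describes (agent names and behaviors here are invented, not Red Oak's implementation), each hand-off and the rationale behind it can be captured at the moment it happens rather than reconstructed afterwards:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    agent: str      # which agent acted
    rationale: str  # why it did what it did
    output: str     # what it handed to the next agent

# Each agent takes the current document and returns (output, rationale).
Agent = Callable[[str], tuple[str, str]]

def run_pipeline(document: str, agents: list[tuple[str, Agent]]) -> list[Step]:
    """Run named agents in sequence, recording every hand-off as it happens."""
    trail: list[Step] = []
    current = document
    for name, fn in agents:
        output, rationale = fn(current)
        trail.append(Step(agent=name, rationale=rationale, output=output))
        current = output
    return trail
```

Because the trail is built inside the orchestrator, producing "what did each agent decide, and why" under audit is a matter of serializing `trail`, not piecing the story together two years later.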

Speaker 1: 18:56
Totally. And if I can add to that, I mean, this is something for our audience that we spend a lot of time obsessing about, and to just put it as simply as possible: with Compliance-Grade-AI™, AI that is truly compliance grade, every AI-assisted decision has to be able to be fully reconstructed after the fact. Absolutely. And I think that that is just the simplest way to put it. And Rick, again, you and I have had endless conversations about this, and a little bit of sausage making for the audience here: one of the ways we think about it here at Red Oak, and think about, okay, how do we make AI truly compliance grade, is we kind of frame it around this question. If a regulator were to ask for an explanation two, three years later, two, three years after the fact, would that answer be available? And even if the answer is available, does it still make sense? And I think that that is a really good way to think about explainability and auditability being those sort of core tenets, those core components of Compliance-Grade-AI™.

Speaker: 20:00
Exactly, exactly. And that auditability, again, auditability under 17a-4 is so much more than just storage. Yes, you know, a lot of the time it's treated as just records, it's WORM storage. But it's not just storage. You have to be able to actually produce it in a meaningful way. You cannot be the Roach Motel of data, where basically it's really difficult to get data out, it's gonna take months, and then after it takes months, it's gonna take an entire team of individuals to reconstruct that story that you're talking about that happened two years ago. You need a system that's built with that capability already. The underpinnings are already there to reduce that burden and that workload. Because I'm here to tell you right now, the regulators are coming asking that question. They're going to expect those answers the way they always do.

Speaker 1: 20:58
Well, and they're openly saying as much now, 100%. You just mentioned something that I think is a really good point for us to kind of hone in on a little bit, when you talked about reducing workload and reducing some of the administrative burden. Let's jump into our section about agentic execution versus model training. Because I teased this a little bit in the beginning, and we talk about having an agentic architecture as one of those core tenets. You just mentioned this a second ago about truly Compliance-Grade-AI™. And we have thought about all sorts of ways to kind of explain this, because I think when thinking about it in passing, most people get it, right? Most people understand that we have generative AI, and you can ask ChatGPT a question and it will provide you an answer. Most people at this point understand an agentic system where you have agents that have roles, but um, I think it's a little bit deeper than that. And yesterday we were kind of comparing this to a business, to a sports team, to even a military organization. Would you mind doing a little bit of a deeper dive into that? So maybe we can explain a little bit of the nuance that we have in our approach to our agentic architecture.

Speaker: 22:13
Yeah. So our agentic architecture, it's a combination of course of creating what we call classifications and definitions. And there's a certain configuration, which is really just an embodiment of, in an approval process, what are the things that we're interested in? What do we want to look for? And that varies. That's not even just, let's say, marketing review. I mean, we could just limit it to FINRA 2210, but even those regulations, right? If anybody were to, you know, read FINRA 2210, which is just a wonderful evening read by the firelight, you know, if somebody wants to do that, they're generally vague. I mean, they're fairly vague. They do have some higher level guidelines, but each firm at, you know, its lower level usually has very specific kinds of policies and procedures to go through this. And this is where you and I have talked about it's not unlike, I mean, it's definitely not unlike a sports team, right? I mean, obviously the Super Bowl was this past weekend, and you have a lot of players on the team. All of these players are agents, if you will, and they all have very specific jobs that they have to do. And some of those plays, the way they're designed, they are not really mutable. This is the way that we execute these plays. And now, within that play, there might need to be small adjustments that have to be made, right? Which is an individual agent's decision, or that player's decision. They might decide that they might have to stutter step a little bit or do something just real quick here, right? But they're still within the guardrails of the play. I think we were saying this before, right? You're still within your policy. And all things, you know, all things being equal, as long as the overarching guardrails are adhered to, that's going to increase the likelihood that the outcome is gonna be successful, right?
And that the outcome is gonna be preferred. And of course, you're ex-military, obviously, Fletcher. So, you know, but not even just in a military sense: if something goes wrong, you're probably going to have a debrief, or there's gonna be a post-mortem of some sort, right? Where you're gonna look in detail at the audit trail, look at the record, see exactly who was engaged when, what they did, why they did it. I mean, you were talking even down to counting rounds of ammunition, understanding why everyone did what they did so that you could improve. That's where this comes from. So when we talk about agentic execution, that's true agentic execution. And model training, again, is much more important from a predictive perspective, which is to say that based on all the public availability of knowledge that we have about whatever the domain is that we're talking about, whether it's marketing review, um, you know, it could be anything in any regulated industry, the model has a giant framework and a corpus of information to access from a predictive perspective. But you have to apply agentic, autonomous execution, yes, and you also have to have the guardrails that the agent understands: these are the guardrails that I operate within. And you have to be able to tell the agent what those guardrails are. Otherwise, like Greg Rupert from FINRA said, you could end up with agents that, if they don't have guardrails, really are just going to be making freeform decisions on their own. And like you're saying, if you're on the battlefield and your fellow soldier is not behaving in a predictable way that, you know, operates within a certain guardrail, it puts everybody at risk.

Speaker 1: 26:22
Well, a hundred percent. And I think too, it's not just about having guardrails in place, but it's redundancy of guardrails too. Because if we go back to our sports analogy, it's not just that these plays are within the guardrails or the confines of the play. There's also the guardrails of the offensive coordinator, who falls underneath the guardrails of the head coach, who falls underneath the guardrails of the general manager. So it's redundancy upon redundancy of security, is the way that I like to look at it. And, you know, because we like to keep building upon these themes: earlier, just in the previous section, we were talking about being able to recreate decisions after the fact. It's the same reason that every professional sports team after the game goes and has intensive film study. They watch the game. The military has after action reports, and they want to be able to recreate every single military act exactly as it happened so they can learn and improve and increase security. And businesses operate the same way. None of these organizations would just allow all of their agents to go out and act autonomously and just sort of do whatever they want. Everyone has to operate with precision within the framework, the safety net of the guardrails of their roles. And that's exactly what we're talking about when we build our agentic system for AI. Safety, guardrails, redundancy.

Speaker: 27:47
Yeah, and I actually want to follow on to say also, because we've heard concerns from multiple clients and prospects who say, uh, man, yes, we hear you, right? We hear that we need to have these debriefs, and like you're saying, every soldier would have to file a report on exactly what happened, why it happened, right? You have these coordinators; there's a level of institutional, almost bureaucracy and process that happens, right? So you can improve. Um, you know, but we're all dealing with AI now. It's here, it's not going anywhere. It's in our daily lives. I mean, we just saw, you know, the results of the survey. It's not going anywhere. We're all in it now. And so we all have to learn how to incorporate it into our processes. But the idea, right, is that, yes, we're getting the questions: man, I have a day job. You're telling me I have to now curate this? Who does this? Can't it just automatically do this stuff? And the answer is no. The answer is no. The idea is that with the gains that you get from AI and autonomous agentic AI actually helping you in this process, those kinds of performance gains that you're going to get, you're going to have a force multiplier there, and then be able to spend, yes, you have to spend some time curating and doing that continuous improvement. And how was this outcome? Did this work correctly? Do we need to make an iterative adjustment, right? Do we need to move the goalposts? Do we need to move the guardrails a little bit? Um, you know, that kind of thing. But still, the benefit that you get is going to be so much more than the investment. And I know that many times that's not the case. Many times, solutions and vendors, they'll solve two problems and they'll give you four more to solve. And that's horrible. And that is not the case here.
Uh, we're really seeing force multipliers being returned from multiple clients for much less investment of time.

Speaker 1: 29:56
And I think that's a perfect transition, because at the end of the day, all of this is designed for every one of you here in this room right now; it's designed to make your role easier and less stressful. And I think that that is a perfect transition into what we like to think about as the operational burden problem. We understand that there is an operational burden that takes place. And at the same time, we're not naive. We have been in this industry for a long time. We've talked to countless chief compliance officers. Many people here know Kathy and some of the other founders here, yourself as well. Like, we come from this world, and so we are laser focused on what it is that the AI is inherently trying to do. If we all just take a deep breath and sit back for a second, think about, okay, AI this and AI that, does it handle disclosures? Does it, you know, do this? We're trying to reduce the administrative burden that you live through on a day-to-day basis. Whereas, and I'm going to circle all the way back to this model training question, one of the big problems that we've seen in the industry is a lot of people like to tout their custom trained models. Our model is performant because it is trained on all of your proprietary business data. It knows your policies, it knows your procedures. And I would love for you to take a deeper dive here, because one of the things you and I have spent a ton of time talking about is, unfortunately, in reality, when these systems are implemented and the AI goes live, we kind of see the opposite effect happening. All of a sudden, compliance teams are starting to be expected to help tune this model, to feed more data to this model, to maintain this model. And now, all of a sudden, as if you don't already have enough on your plate trying to handle ad reviews and provide supervision and all of these other tasks that you're trying to complete, now let's add to that.
Hey, can you determine why this model is making a weird decision? Hey, can you upload this, whatever it is, into the system? Can we speak to our audience just a little bit more about this particular problem and how Red Oak solves it?

Speaker: 32:04
Yes. So this has less to do with what I would call Compliance-Grade-AI™ and more to do with what I would just call Red Oak-grade product, and expectations of Red Oak features. One of our core value propositions, which has been here since the beginning and will always be here, is that we believe strongly in putting the ability to change in the hands of the business owners. Because, again, like you've said, we've spoken to untold numbers of compliance professionals over the years, and one of the biggest complaints is that when a change needs to be made, it's too slow, it's not responsive enough, it costs too much. Why? Because of what generally has to happen when one of my compliance systems, or one of my systems, period, needs a change. Do I need to go to IT and request a project? And I'm not criticizing IT; they're already way understaffed and way overburdened, right? Who here has requested a project from IT? Of all these people watching, we all know what that's like. You're lucky if you get it in weeks; it could be months, even for a small change. We've seen clients with small changes, something so painful that feels like it should be so simple, "Can't we just do this?", and it still takes weeks, because you're in the pipe, you're on the plan. It takes a while. The other alternative is: do we have to contact our vendor? What does our support model with our vendor look like? And again, these are questions we really want people on this webinar to ask of your vendors, and to ask of Red Oak. We will tell you. What does it look like when you have to make a change to your AI implementation? It doesn't even have to be in this domain; it could be anywhere. If one of our policies changed, can we just call you up and you make a change? What does that look like?
How much does that cost? How long does that take? Bringing that back, though: the core Red Oak principle is always putting the configurability of when something should be applied in the hands of the business owners, whether that be compliance directors, compliance owners, marketing owners, or a branch manager in the field; it doesn't matter. We're going to put as much as possible in that person's hands so they can change at the speed of business, without having to involve a bunch of external factors and costs like this. That's the core of what we do. So when we talk about model training and things of that nature: model training is an IT project. There is no person on this webinar right now who is just going to go out and retrain their model. No, it's going to take an entire IT project, allocation of resources, weeks or months of effort, and contributions from everybody on the team, because everybody has to go through the same thing again. That is the difference in our approach: we're going to use the extremely capable, more than powerful base models that we have access to, and then apply on top of that the ability for, again, the business owners to determine, schedule, and create their own destinies. What guardrail am I going to put into place today? What am I going to do tomorrow? What does the change management of that look like? What does that entire supervisory process look like? That's the Red Oak difference.

Speaker 1: 36:13
100%. And, you know, I think this is a good time for us to pivot to our poll question, where we asked our audience what everybody's curious about. And the timing is perfect, because you actually mentioned some of the marketing stuff that we're going to get into. A little bit of a teaser for everybody: at Red Oak, we're working hard on compliance connectivity and evolving into a truly compliance connectivity platform. But let's talk about marketing for a second. There's obviously a lot of interest in this room about pre-review for marketing and what AI can do for marketing. And I think this is so emblematic of the way Red Oak approaches not just compliance, but solving problems and pain points for financial services professionals generally, because this is an issue we're attacking from both angles, right? We're trying to facilitate a better experience for ad review and for compliance work in general, but at the same time, we can also help from the marketing side with tools like pre-review. Would you mind talking about the Red Oak approach to pre-review, and how enabling and empowering the marketers is enabling and empowering compliance, the same way that enabling and empowering compliance is enabling and empowering the marketing team?

Speaker: 37:31
I'm super excited about this part of the overarching solution, because effectively we are the compliance connectivity platform. That's where we really see the huge value in what we bring: from the point of content creation, however that happens, all the way through distribution to wholesalers and field advisors, and ultimately into clients' hands, right? We want to be able to connect, and are connecting, that entire process. All of our behaviors at Red Oak right now are strongly focused, yes, on AI, but really on the connectivity of all of those items together, and then applying AI across them. So, coming back to the question: pre-review for marketing is something that has been strongly asked for.

Speaker 2: 38:32
Yes.

Speaker: 38:33
Now, I will say we are currently in discussions with multiple popular content creation vendors. If anybody on this call were to name the top three they use for content creation, I guarantee you would hit all of them. We've definitely been asked: if I'm using a content creation system, let's say a Workfront or Seismic or whatever, pick your content creation tool, once a piece gets done enough, wouldn't it be great to be able to send it over? We don't want to create a record yet, but we would really love to just bounce it off of the Red Oak AI agent to see how we're doing. Are we within the guardrails for this type of material? Is there anything we should improve? Are the proper disclosures and disclaimers actually present? Which is probably 50% of the problems we see on material. At least. And that is part of the connectivity story for us, because with our powerful APIs, which enable connectivity, we can do things like pulling the actual annotations made by our AI agent into those content creation tools so they can be viewed by the marketer. That is where we are, and that's the kind of connectivity we're enabling; that's the future. Now, I do want to say, I also realize there are certain firms and clients on this call who have a level of discomfort with that. That's fine. You don't have to consume that particular feature if you don't want it, but just know that's where we're going in terms of pre-review of material.
We want to enable the connectivity of those content creation systems, so they can use Red Oak, again, as the Compliance-Grade-AI™ agent, to look at material and be the sole source of truth and guardrails for determining whether a piece is compliant or not, even in a pre-review phase.

Speaker 1: 41:12
And something I'd like to really emphasize when thinking about marketing pre-review is that this is still Compliance-Grade-AI™. This is still a Red Oak-grade product. So all of those redundant systems, all of the safety nets, all of those frameworks that we talk about with every other component of Compliance-Grade-AI™: they're still there. And this is really powerful stuff, because when you start arming the marketing side of the business as well as the compliance side, now what we're talking about is meaningful results in your every single day. We're talking about shorter review cycles. We're talking about fewer review cycles. And when you start combining the power of marketing pre-review with the rest of the Compliance-Grade-AI™ features and functionality, in the context of a compliance connectivity platform, we're talking about really, really powerful stuff here, Rick.

Speaker: 42:06
A hundred percent. And just to make it really concrete: imagine, as a marketer, when you've got something as ready as you think it is, right? Maybe you based it on some past material, you might have massaged it; material changes, maybe immaterial, it doesn't matter. I know for a fact, having talked to multiple marketers and content creators, that the second they hit the Red Oak button, they just know what that first turn, or turnaround, is going to look like. And it's annoying, right? Because again, it's a delay. It's: oh my goodness, if I could have just known that my disclosures were outdated. If I could have just known that whenever I use an acronym, I have to have something in the glossary explaining what that acronym actually means. If I had just known that up front, I could have taken care of it in five minutes and eliminated an entire turn. That's what we're talking about. Our goal at Red Oak, to put it very concretely: if, in 90% of compliance approval processes, we could reduce one turn. I don't want to say eliminate the first turn; there might still be a turn, of course. But to go from four turns to three, or three turns to two, you're talking days in some cases. Literally, if something is submitted on a Friday, you're talking about the possibility of getting it back even that same day, as opposed to having to wait over the weekend. So that's the concrete level that we're talking about with pre-review.
So certainly, if we have clients and interested parties out there who are maybe feeling a little uneasy about that, we really encourage you to at least have a discussion with us. Again, it's your decision whether you'd be interested in something like that. But just know we are talking to the vendors you're using about how to do this responsibly, in a way that makes sense to your marketers.

Speaker 1: 44:30
As well as, you know, as I mentioned at the beginning of this program, we just released our brand new white paper, a deep dive on what it means to have truly Compliance-Grade-AI™. We are going to provide that to everybody on today's call. So please, please, please, we encourage all of you to read the white paper. We go into way more detail on all of this, as well as many other topics that we unfortunately don't quite have time to get to today. Because, Rick, as you know, I could sit here and talk about this with you for three or four hours. As much as you all are thinking about it, we are thinking about it at least as much. So rest assured, everybody, this paper covers absolutely everything. But we're at that time where I'd like to make sure we have ample time to answer everybody's questions as they come in. So let's go ahead and open up Q&A.

Speaker: 45:32
So, do I get to see the questions? I don't know. Just let me know what they are, I guess.

Speaker 1: 45:38
Yeah, I believe you should, but if not, I'll go ahead and read them to you. So the first question we have coming in is: is there an AI platform you would recommend that meets compliance-grade requirements? Rick, I'll let you take that one. Let me ask it one more time: is there an AI platform you would recommend that meets compliance-grade requirements?

Speaker: 46:05
I would recommend the Red Oak platform as one that meets compliance-grade requirements. And I'll go further to say that, obviously, at Red Oak, we don't create our own model. We didn't really mention this, and it's probably important to say, because it gets into overall security and data governance. When we talk about a compliance-grade vendor, it's also about their due diligence: if we're going to use a vendor to engage AI or engage a model, what are their terms of service? Are they going to use the data they receive to train other models and provide value to other people? Is there a possibility of leaking PII or other confidential information? Again, as a compliance connectivity platform provider, this is the second question we ask, right after "what does the client need?" So I'm going to sit here and absolutely say that Red Oak, to me, is a compliance-grade vendor. And it's not only because, for 16 years now, we've been the vendor of choice for the kinds of solutions we provide; it's also because of the due diligence on the vendors we use and the ones we will work with. We are guaranteed by those vendors, which are large, that they have enterprise-level terms of service that respect data governance and security: they're not going to train using your client data and this kind of stuff. We make sure everything is air-gapped, all data is secure, and nothing is stored by any of those vendors. That's super important. I don't know if the question was trying to get at whether there are other individual AI vendors; I'm not sure, but that's my answer for now.

Speaker 1: 48:14
Sure. Well, here's the benefit, and we were already thinking about this before the call. So, Katie, who asked the question: I've got great news for you. One of the pieces of content we've put together is an ebook that is an in-depth guide on how to vet Compliance-Grade-AI™ technology. This will arm you with the questions to ask and the pillars they apply to, and give you a real inside perspective, down to a technical level if needed, on how to make sure that the AI solutions you are investigating are truly Compliance-Grade-AI™. Everybody here will also receive a copy of this ebook as well. We spent a lot of time making sure it could provide you with all of the information you could possibly need, in addition to some things you may not even be thinking about.

Speaker: 49:05
And to that point, I was just going to say, I think that's a great thing to bring up. Because, yes, there are certain vendors that we probably think are certainly appropriate, but the more long-term and cogent answer is what you just said: let's give you, as part of the compliance checklist that's already in your vendor due diligence, the top three or four questions you need to ask to make sure the vendor you're considering uses Compliance-Grade-AI™.

Speaker 1: 49:40
Perfect. Let's move to the next question. Uh, Rick, you got some love for "Roach Motel of Data"; the audience found that funny. Awesome. Next question, from Pam: what's your timeline for implementation for pre-review?

Speaker: 50:04
So, on the timeline for implementation of pre-review: I will tell you that we already have a couple of clients who do this with their integrations today.

Speaker 1: 50:15
Yes.

Speaker: 50:16
So, for example, clients who use Workfront or Salesforce, let's say, as a source system are already actively in production pushing pieces; and I think probably approximately 70%, if not more, of the people on this call use Salesforce, right? Those submissions get created over their existing integration, and then they're able to pull back... well, actually, the question was about pre-review.

Speaker 1: 50:58
I should say: the timeline for implementation of pre-review.

Speaker: 51:01
Yeah, so pre-review is on the roadmap for this year. I don't have the specific timeline, but I can tell you it's on the roadmap for this year. Most importantly, though, there's kind of a workaround; I didn't want to say workaround, it just is what it is, as I was mentioning before. We do have a few clients right now who will go ahead and create the submission in draft status. If you know Red Oak: the submission stays in draft status, it's scanned for disclosures, the AI scan runs, and then the resulting PDF with all of those annotations is returned to that source content creation system, so it can be rendered and the piece adjusted. But it does require the creation of the submission in order to make that happen. And I know that's bothersome to some marketing teams; they don't want it to be on the record yet, and we get that. So that's why we're really looking toward more of a non-submission-based pre-review beforehand. That's coming this year, but just know there is something of a workaround that existing clients are using right now, if you're amenable to that approach.

Speaker 1: 52:17
Another question here, which I think is a good one for everybody to hear. Chris says: we have private AI embedded within our cloud storage solution. We also have a private license with one of the providers to keep employee activities private, as the license does not allow the provider to train other LLMs. We have an affirmation that requires employees to certify that they only use internal AI applications. Are there any additional steps one can take to prevent potential ghost AI usage?

Speaker: 52:53
I understand this question. This really gets to a point of trust: eventually, you just have to trust the terms of service. First of all, in your own due diligence, you're going to ask the questions that you know to ask. And again, with Fletcher's one-pager that he talked about, we're going to be able to give you really good questions to ask. I'm going to tell you, though, about a little bugaboo that hits a lot of compliance teams, and it's annoying as hell. It really is. It's when your vendor changes terms of service and they don't tell you. Yes. So, in my opinion, and if it's not in there, Fletcher, let's put it there: there has got to be a process where, especially for AI vendors, when someone's talking about their privately trained models and things like this, you ask those vendors: when you change your terms of service, what is your notification process? Do I have the ability to opt in or opt out? What are my options? Because the reality is, imagine you've built your whole compliance process on a dependency on this vendor. You did your due diligence, everything was great. And then 12 months later, they changed their terms of service and said: we're allowed to use your data for XYZ. Maybe they're not giving you a choice. What do you do? Now you've got a hair-on-fire situation that you have to address. So, to me, terms of service changing is the most critical thing. But at some point, you do have to just know: I asked the questions I was supposed to ask, I operated within my supervisory procedures, and I have to trust this vendor at that point. It's the changing of the terms of service that is the most risky to me.

Speaker 1: 55:06
Exactly. Well, we have way more questions than we have time to answer in our remaining time. So I just want to assure everybody: we will answer the rest of the questions. We will put those together and make sure to send them out as part of our follow-up. Thank you so much, everybody, for attending. A couple of quick housekeeping items to end the presentation. We're going to launch one more poll for everybody, so as I close this out, if you wouldn't mind, please take a second or two to answer the poll question. This really helps us learn and get better, and helps us continue providing better content and better solutions for you. And if you find this topic interesting, especially AI, I want to let everybody know about Accelerate, our upcoming conference. Go to our website and check out all the information about Accelerate. If you're super interested in AI, which everybody here is, you're going to learn not just more about Compliance-Grade-AI™ and what that means, but, as we evolve into this compliance connectivity platform, more about how AI is actually used as a connective layer, the connective tissue, the nervous system that starts to connect marketing with compliance and supervision, all the way through distribution. So if you're on the compliance side, of course you need to be there. Marketers, you need to be there. National accounts teams and the distribution side of the business: everybody's got to be there. We're going to talk about AI and so, so much more. Really looking forward to seeing everybody there. And as one final thank-you for attending: we love doing these and love hearing from all of you. Thank you, everybody.

Read the Blog Post

AI Compliance Solutions for Financial Services: Why Precision Beats Prediction

Here’s something you already know: AI in financial services isn’t coming — it’s here. Financial institutions are using artificial intelligence for everything from marketing content to supervisory workflows.

But there’s a critical gap most firms overlook.

Traditional AI systems predict. Financial compliance requires precision. That’s not a minor distinction — it’s the difference between efficiency and exposure.

Why Most AI Falls Short for Compliance

Let’s start with how generative AI actually works.

Most AI systems are prediction engines. They analyze training data and generate statistically probable outputs. That works exceptionally well for drafting emails, summarizing documents, or generating marketing copy.

But compliance doesn’t operate on probability. It operates on policy.

When regulations change (and they will), when supervisory procedures evolve, or when disclosure language updates, prediction-based AI trained on historical patterns can produce outdated or non-compliant outputs. Retraining models every time policies shift isn’t just inefficient — it’s operationally unsustainable.

Financial compliance demands deterministic execution within clearly defined guardrails. Anything less introduces unnecessary risk.

Agentic AI Architecture: A Better Approach for Regulated Industries

Compliance-Grade-AI™ takes a fundamentally different approach.

Rather than relying solely on model training, it uses agentic architecture — structured AI agents with defined roles operating within supervisory guardrails.

Think of it like a professional sports team:

  • Each player has a specific role.
  • Every play operates within strict boundaries.
  • Coaches provide oversight and adjustment.
  • After-action reviews drive continuous improvement.

AI compliance workflows should function the same way. Agents operate within predefined rules, escalate appropriately, and produce documented, explainable outcomes.

Why does this matter? Because in financial services, you may need to reconstruct any decision years after it was made.
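To make the agent-and-guardrail idea concrete, here is a minimal Python sketch. It is purely illustrative, not Red Oak's actual implementation: each agent has one defined role, and any failed check escalates to a human supervisor rather than auto-approving. The agent name and disclosure text are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    agent: str
    passed: bool
    reasons: list = field(default_factory=list)
    escalated: bool = False

class DisclosureAgent:
    """Illustrative agent with one role: verify required disclosures appear."""
    name = "disclosure-agent"  # hypothetical agent name

    def __init__(self, required_disclosures):
        self.required = required_disclosures

    def review(self, text):
        # Deterministic check: a disclosure is present or it is not.
        missing = [d for d in self.required if d.lower() not in text.lower()]
        return ReviewResult(self.name, passed=not missing,
                            reasons=[f"missing disclosure: {d}" for d in missing])

def run_workflow(agents, text):
    """Run each agent in its lane; failures escalate, never auto-approve."""
    results = [a.review(text) for a in agents]
    for r in results:
        if not r.passed:
            r.escalated = True  # route to a human supervisor for review
    return results
```

In this sketch, the "coach" role from the analogy is the escalation path: the agent never resolves a failed check on its own.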


SEC Compliance and AI Auditability Requirements

SEC Rule 17a-4 isn’t simply about storing data — it’s about reconstruction.

When regulators request the full history of a marketing review — who reviewed it, what changed, which AI agents were involved, and why specific decisions were made — you must be able to produce a clear, defensible audit trail.

Compliance-Grade AI™ for financial services must deliver four core capabilities:

  • Complete audit trails that document every action
  • Reconstructable decisions that can be explained years later
  • Documented supervisory oversight throughout the workflow
  • Explainable decision chains connecting inputs to outputs

If your AI system can’t answer “why did this happen?” two years later, it isn’t compliance-grade. Full stop.
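As an illustration of what a reconstructable audit entry can capture, here is a hedged Python sketch. The field names are hypothetical and this is not a statement of what Rule 17a-4 mandates: each record stores who acted, on what inputs, what was decided and why, and chains a hash to the previous record so a later audit can detect whether the trail was altered.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(actor, action, inputs, decision, rationale, prev_hash=""):
    """Build one immutable audit entry: who, what, on which inputs, and why.

    Hash-chaining each entry to the previous one lets an auditor verify,
    years later, that the recorded decision path was not modified.
    """
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "inputs": inputs,
        "decision": decision,
        "rationale": rationale,
        "prev_hash": prev_hash,
    }
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {**body, "hash": digest}
```

Reconstructing a decision then means replaying the chain of records, with each entry's `rationale` explaining why that step happened.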

Avoiding the Operational Burden of Custom AI Models

One of the biggest misconceptions about AI in compliance is that custom-trained models automatically reduce workload.

In practice, they often create more.

When AI systems require constant retraining, IT involvement, or manual data feeding to remain accurate, compliance teams inherit additional administrative responsibility. What begins as an efficiency initiative becomes another operational dependency.

Compliance-Grade-AI™ avoids this trap by leveraging powerful foundation models combined with configurable guardrails that business owners control directly. When policies change, compliance adjusts rules — not entire models.

That distinction isn’t theoretical. It’s operational.
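A minimal sketch of the rules-over-retraining idea, with hypothetical rule IDs and phrases: guardrails live in editable configuration, and evaluating them is fully deterministic, so when a policy changes the fix is editing a rule, not retraining a model.

```python
# Guardrails as data: a business owner edits these entries directly.
# Rule IDs and phrases are invented for illustration.
RULES = [
    {"id": "DISC-001", "type": "must_contain",
     "phrase": "Past performance is not indicative of future results."},
    {"id": "TERM-002", "type": "must_not_contain",
     "phrase": "guaranteed returns"},
]

def apply_rules(text, rules):
    """Deterministic evaluation: same text and rules always yield the
    same list of violated rule IDs, unlike a probabilistic model."""
    violations = []
    lowered = text.lower()
    for rule in rules:
        present = rule["phrase"].lower() in lowered
        if rule["type"] == "must_contain" and not present:
            violations.append(rule["id"])
        if rule["type"] == "must_not_contain" and present:
            violations.append(rule["id"])
    return violations
```

Updating a disclosure here is a one-line config edit that takes effect immediately, with no IT project or retraining cycle.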

AI-Powered Marketing Compliance Pre-Review

Marketing workflows offer a practical example of how AI compliance solutions create measurable impact.

Marketing teams often submit materials knowing certain disclosure or formatting issues will trigger revision cycles. Each additional turn extends timelines and increases friction between marketing and compliance.

By integrating AI compliance solutions into pre-review workflows, firms can proactively identify:

  • Missing or outdated regulatory disclosures
  • Inconsistent terminology
  • Undefined acronyms or technical terms
  • Required policy elements specific to the firm

The result: fewer review cycles, faster approvals, and improved collaboration between departments.

AI becomes connective tissue across the organization — not just another automation layer.
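One of the checks above, flagging undefined acronyms, lends itself to a simple deterministic scan. Here is a hedged sketch (the glossary contents and the all-caps heuristic are illustrative assumptions, not a description of any specific product's logic):

```python
import re

def find_undefined_acronyms(text, glossary):
    """Flag all-caps tokens of 3+ letters that have no glossary entry.

    A real pre-review pass would be more nuanced; this shows how the
    check can be deterministic rather than model-predicted.
    """
    acronyms = set(re.findall(r"\b[A-Z]{3,}\b", text))
    return sorted(a for a in acronyms if a not in glossary)
```

Surfacing results like this to the marketer before submission is what removes a revision turn from the cycle.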

AI Governance Frameworks for Financial Institutions

Regulators are paying attention. The question is no longer whether firms are using AI — it’s whether they can demonstrate responsible governance.

Responsible AI adoption in regulated industries requires:

  • Clear governance frameworks defining roles, responsibilities, and escalation paths
  • Rigorous vendor due diligence to ensure third-party AI meets compliance standards
  • Structured change-management processes documenting system updates
  • Continuous improvement mechanisms aligned to regulatory expectations

Red Oak’s white paper, AI for Financial Compliance — Built for Precision, Not Prediction, outlines the architectural and governance principles required to implement AI responsibly in financial services.

The Future of AI in Financial Compliance

AI in compliance isn’t about replacing human judgment. It’s about empowering compliance professionals to work more effectively — and more defensibly.

When AI answers to compliance requirements instead of asking compliance to adapt to AI’s limitations, financial institutions gain both operational efficiency and regulatory confidence.

Prediction powers AI. Precision protects your firm. And in financial services, that distinction makes all the difference.

Contributor

Rick Grashel is the Chief Technology Officer and Co-Founder of Red Oak. Connect with Rick on LinkedIn.

Fletcher Stubblefield is a Senior Product Marketing Manager at Red Oak. Connect with Fletcher on LinkedIn.