FINRA’s AI Landscape and What it Means for Member Firms

Overview


FINRA’s new FINRA Forward initiative shows how deeply AI is reshaping compliance and market oversight. Hear Red Oak's thoughts in this quick chat.

Critical Questions Powered by Red Oak

While FINRA has leveraged AI for years in its market surveillance operations, generative AI introduces a more interactive, analytical capability. With tools like FILLIP, FINRA staff can now analyze complex documents, summarize findings, and conduct risk reviews conversationally, turning what used to be a manual process into a dynamic, AI-assisted workflow. 

Firms are expected to uphold the same supervisory standards under Rule 3110, but now through a lens that includes AI governance. This means establishing clear policies around data integrity, model risk management, and privacy, ensuring GenAI tools enhance, not replace, sound human judgment in compliance processes. 

The key lies in balance: adopting GenAI solutions purpose-built for compliance, not just generic AI tools. FINRA’s example demonstrates that efficiency and responsibility can coexist, and that firms equipped with integrated, compliance-focused platforms (like Red Oak’s) are better positioned to thrive as AI becomes a regulatory necessity, not a luxury.

Transcript

Speaker 1: 

Welcome back to the Deep Dive. The only place where we take these complex, really industry shaping topics, especially in finance and just distill them into the knowledge you need fast. Today we are strapping in for a major digital transformation, and it's happening at, well, the highest regulatory level 

Speaker 2: 

It is, 

Speaker 1: 

The subject is FINRA, the Financial Industry Regulatory Authority, and they are accelerating their deep, comprehensive embrace of artificial intelligence. 

Speaker 2: 

And specifically generative AI. 

Speaker 1: 

Gen AI. 

Speaker 2: 

Exactly, and we aren't just talking about a simple tech update. We're looking at what this massive shift means for FINRA's own efficiency and maybe more importantly, what it mandates for the compliance systems and every single member firm out there. 

Speaker 1: 

Right. Our mission today is to cut through all the vendor hype and just extract the core knowledge here. We're tracking their evolution. 

Speaker 2: 

From what to what. 

Speaker 1: 

From using, let's call it traditional AI for pure market surveillance, which they've been doing for years, to now implementing these large language models, LLMs, for their own internal operations. 

Speaker 2: 

And this isn't just an internal change for them, not at all. It immediately raises the bar for how you need to be thinking about your firm's supervisory systems, your internal governance. 

Speaker 1: 

And the source material we've got today. It's pulled directly from recent FINRA announcements, industry analysis, and it makes one thing incredibly clear right from the jump. 

Speaker 2: 

What's that? 

Speaker 1: 

This shift, it wasn't a luxury, it wasn't something they chose to buy. This huge technological lift was undertaken, and this is the quote, "by necessity." 

Speaker 2: 

That phrase, by necessity, that is truly the key to understanding this entire landscape. To get why GenAI is now indispensable, you first have to appreciate just the sheer scale of the data problem FINRA was drowning in, right? 

Speaker 1: 

Even before this latest leap. 

Speaker 2: 

Exactly. 

Speaker 1: 

I think most people get that FINRA is a regulator, but they might not grasp the just overwhelming volume of information they have to process every single day. 

Speaker 2: 

The scope is staggering. I mean, FINRA oversees the vast majority of trading in the US equity markets. 

Speaker 1: 

A vast majority. 

Speaker 2: 

They're performing cross-market surveillance for 26 different self-regulatory organizations, SROs, which, well, collectively operate 35 equities and options exchanges. 

Speaker 1: 

So just think about that. The volume of orders, quotes, trades flying across all of those different venues every second of the day. 

Speaker 2: 

FINRA CEO Robert Cook, he addressed this challenge head on. He noted that with this level of complexity, it just became, and I'm quoting, "impossible to rely on traditional spreadsheets or similar tools to analyze the relevant data." 

Speaker 1: 

You can't just put it in a spreadsheet. 

Speaker 2: 

No, you just can't hire enough analysts or build enough pivot tables to keep up with that flood of information. It's just not humanly possible. 

Speaker 1: 

It sounds like the absolute definition of information overload, which is why the sources use such a powerful analogy for it. 

Speaker 2: 

The Sisyphean task, 

Speaker 1: 

The Sisyphean task. Before this kind of sophisticated AI, regulatory compliance on this scale was like forever pushing a boulder uphill, only to have the data volume roll it right back down on you. 

Speaker 2: 

Precisely. And if we look at what that first wave of AI integration achieved, it's nothing short of Herculean. That AI program didn't just automate a few tasks, 

Speaker 1: 

It changed the game completely. 

Speaker 2: 

It fundamentally changed what's possible in real time. We are talking about the system reviewing, and get this, hundreds of billions of market events generated each day in search of fraud, manipulation, or other misconduct that can harm investors and markets. 

Speaker 1: 

Hundreds of billions. That specific detail is the critical context here, isn't it? 

Speaker 2: 

It is. 

Speaker 1: 

It establishes the baseline. A hyper-automated, incredibly powerful surveillance machine already exists, and it's running 24/7. So the move to GenAI isn't about starting to use AI? 

Speaker 2: 

Not at all. It's the next massive escalation of an already existing foundational infrastructure. 

Speaker 1: 

And that leads us perfectly into the 2024 initiatives, 

Speaker 2: 

Right? Because FINRA realized that while that traditional AI was great at spotting known patterns of manipulation, the classic spoofing or layering 

Speaker 1: 

The stuff they already knew to look for 

Speaker 2: 

Exactly, they needed a way to combat efficiency bottlenecks internally and to analyze new and evolving risks much more quickly. 

Speaker 1: 

Which is where FINRA Forward comes in. This is their big strategic initiative. It's designed to improve overall effectiveness by modernizing rules, combating new risks (cyber risks are a big one), and, crucially, integrating GenAI tools across the board. 

Speaker 2: 

And the most visible piece of tech to come out of this shift is a brand new internal tool they introduced this year. It's called FILLIP. 

Speaker 1: 

FILLIP. It sounds approachable. I guess that's the point of these new interfaces. 

Speaker 2: 

It is. FILLIP is a large language model, or LLM, based chat feature, and you have to remember, LLMs are the engine behind GenAI. They use deep learning to identify, summarize, predict, and of course generate new text-based content. They're just fundamentally powerful accelerators for any task that involves language or analyzing documents. 

Speaker 1: 

The adoption rate really tells you everything you need to know about how useful it is. This isn't some little sandbox experiment. 

Speaker 2: 

No, it's in production. 

Speaker 1: 

The sources are saying that nearly 40% of FINRA staff are using FILLIP every single week. 

Speaker 2: 

That high adoption rate shows you the immediate efficiency gains, but it also raises a question we have to pause on for a second. Okay. If FINRA staff are using GenAI to, and the sources say this, write and edit drafts, doesn't that inherently introduce the risk of AI hallucinations or inaccuracies into regulatory documents? 

Speaker 1: 

That is a phenomenal point. 

Speaker 2: 

Yeah. 

Speaker 1: 

Are the efficiency gains just shifting the error burden from a human drafter to an AI generator? 

Speaker 2: 

It's a real challenge because the reputation of the regulator is paramount. I mean, if you're generating a deficiency letter or a risk review based on an LLM, the quality control has to be just perfect. 

Speaker 1: 

So how are they handling that? 

Speaker 2: 

Well, this is where the governance really comes into focus. The efficiency gain isn't necessarily in drafting a perfect final document from scratch. It's in accelerating that initial analytical phase. The use cases are actually very specific. So for instance, GenAI is fantastic at summarizing and analyzing complex regulatory info. If FINRA gets a mountain of comment letters on a new rule, which 

Speaker 1: 

They always do 

Speaker 2: 

Right? FILLIP can distill those hundreds of documents into the core arguments and the main concerns in minutes. That lets the human staff spend their time responding to the substance, not just reading the entire stack. 

Speaker 1: 

It's about speeding up the front-end research, the summarizing, not creating the final authoritative document itself. 

Speaker 2: 

Exactly. And another critical use is document comparison. Think about comparing an updated mutual fund prospectus against the last version to quickly flag what's changed. 

Speaker 1: 

That used to be a very manual, very tedious process, 

Speaker 2: 

Highly manual. Now, FILLIP assists in conducting member firm risk reviews and analyzing data on funds to facilitate exams. It's accelerating that critical frontline regulatory work. 

Speaker 1: 

And when you add up all those minutes saved on summarizing, comparing, pre-drafting, the cumulative impact must be huge. 

Speaker 2: 

It is. FINRA estimates they'll achieve many thousands of hours in annual efficiency gains from staff using the GenAI tools that are already deployed or in development. 

Speaker 1: 

Thousands of hours. 

Speaker 2: 

This is the core goal of FINRA Forward: maximum organizational output with higher efficiency. It lets those highly trained human analysts focus on the highest-risk issues where human judgment is absolutely non-negotiable. 

Speaker 1: 

Okay, so that's FINRA's side of it. They've mastered the hundreds of billions of data points with their AI surveillance, and now they're using gen AI internally to save thousands of hours. Now we have to shift focus. This is the crucial part for you, the listener. If the primary financial regulator is adopting this tech at this scale, what does it mean for your firm's compliance obligations? 

Speaker 2: 

This is the "so what" if you're a member firm. FINRA is setting a new standard. They're setting expectations for the future of what effective supervision looks like, and while FINRA is helpful (they provide resources like threat intelligence products, they hold roundtables), the core regulatory mandate hasn't changed. 

Speaker 1: 

But the risks have. 

Speaker 2: 

The risks have absolutely changed. The fundamental message is that your supervisory duties under Rule 3110 are tech-agnostic. 

Speaker 1: 

Tech-agnostic. What does that actually mean in practice? 

Speaker 2: 

It means FINRA doesn't care if you use paper and pencils or the most sophisticated gen AI system on the planet. A firm must always have a reasonably designed supervisory system that is tailored to its specific business. 

Speaker 1: 

Okay. Let's unpack this because saying a rule is tech agnostic feels a little incomplete when the tech itself introduces these fundamental brand new risks that didn't exist five years ago. 

Speaker 2: 

That's a great point. 

Speaker 1: 

So when a firm does choose to adopt GenAI, say for reviewing emails or flagging things in internal chats, that decision immediately triggers specific, non-negotiable governance requirements. 

Speaker 2: 

Precisely. You choose the new tool, you assume the new governance burden, and FINRA has been very clear that if gen AI is part of your supervisory system, your policies and procedures have to explicitly address three key areas. 

Speaker 1: 

Okay, what are they? 

Speaker 2: 

Let's start with the first one. It's arguably the most complex: technology governance, which has to include comprehensive model risk management. 

Speaker 1: 

Model risk management. That's the term that sends a shiver down a compliance officer's spine. Why is this so paramount, just because we introduced an LLM? 

Speaker 2: 

Because LLMs are inherently black boxes. 

Traditional surveillance AI had clear rules. If A happens, then flag B. 

Speaker 1: 

Simple logic, 

Speaker 2: 

Right? But gen AI works differently. It's probabilistic. It doesn't tell you why it flags something. It just gives you an output based on probabilities and its training data. So if your system fails to catch something and you have to defend that failure to FINRA, 

Speaker 1: 

You better be able to explain how the model works. 

Speaker 2: 

You must be able to explain the assumptions, the limitations, the potential biases that were baked into that model. If you can't manage the model risk, you can't reasonably rely on its output. 

Speaker 1: 

That makes perfect sense. If you rely on the black box, you are ultimately on the hook for what the black box produces or fails to produce. 

Speaker 2: 

Moving on to the second requirement, data privacy and integrity. Gen AI is incredibly data hungry, 

Speaker 1: 

And the risk here seems twofold. One, are you accidentally feeding sensitive client data into a third party LLM that could then leak it? 

Speaker 2: 

A huge risk. 

Speaker 1: 

And two, is the data you're feeding it even accurate to begin with? Garbage in, garbage out. 

Speaker 2: 

That's absolutely right. The privacy part is about protecting client and firm data. The integrity piece is just as critical. If your AI is learning on flawed or biased data, then your entire supervisory system is compromised from the get-go. You have to maintain rigorous data hygiene specifically for the AI. 

Speaker 1: 

Which brings us to the third non-negotiable requirement, and this one connects right back to that hallucination risk we talked about earlier, 

Speaker 2: 

Reliability and accuracy of the AI model. 

Speaker 1: 

So you have to prove that it actually works. 

Speaker 2: 

This is the practical check on the system. You can't just deploy it and trust the output. You have to have procedures in place to continuously test and monitor that the AI is in fact accurate and reliable for its intended purpose. 

Speaker 1: 

Can you give an example? 

Speaker 2: 

Sure. Think about it this way. If your gen AI tool is supposed to summarize client complaints to spot patterns of sales practice abuse, and it hallucinates a key detail or misses one, 

Speaker 1: 

That could lead to a catastrophic failure to supervise 

Speaker 2: 

Exactly. So you must have a robust testing protocol that's specifically designed to catch those kinds of generative errors before they translate into real world compliance failures. 

Speaker 1: 

So if FINRA is upgrading its own systems and saving thousands of hours, the mandate for member firms is clear. You have to modernize to keep pace, but the second you adopt GenAI, your governance maturity has to jump exponentially to manage all this new risk. 

Speaker 2: 

That is the perfect summary. AI was a necessity for scale. GenAI is now the pursuit of efficiency, and the resulting governance obligation for member firms is non-negotiable, and it's heavily weighted toward managing those new risks: model risk, data integrity, and proving accuracy. 

Speaker 1: 

It creates a really challenging environment, though. FINRA is building its own sophisticated, custom designed AI landscape, but the sources note that member firms can sometimes feel adrift in a sea of options, some more questionable than others. 

Speaker 2: 

There's a fragmentation problem out there, big time. 

Speaker 1: 

The market need that's been identified is for a tool that's built from the ground up with compliance in mind, something that uses the power of gen AI responsibly, but is designed to connect all the separate parts of a firm's compliance program, 

Speaker 2: 

Supervisory review, document storage, communications... 

Speaker 1: 

All of it into one cohesive whole. The sources actually call this concept a compliance connectivity platform. The fragmentation of tools is clearly a vulnerability that FINRA's own integrated approach highlights. 

Speaker 2: 

The irony is rich, isn't it? The regulator has solved its scale and efficiency problem internally, but by doing so, it's put immense pressure on firms to solve their own fragmentation and model governance problems and to do it simultaneously. 

Speaker 1: 

Which leads to the final provocative thought we want to leave you with today. 

Speaker 2: 

Okay. So if we know that FINRA's AI is reviewing hundreds of billions of market events daily, and we know that your firm's responsibility for reliability and accuracy is non-negotiable under Rule 3110, then 

Speaker 1: 

You have to consider this. 

Speaker 2: 

If your GenAI system fails to detect the next complex market manipulation scheme, will your governance documentation be mature enough to defend that model's reliability to a FINRA examination team? 

Speaker 1: 

Because that is the standard you have to operate under now. 

Speaker 2: 

That's the bar. 

Speaker 1: 

A question worth deep diving into immediately. Thank you for joining us today as we navigated FINRA's critical digital transformation and what it means for the future of financial compliance. Until next time, keep exploring. 

Read the Blog Post

Earlier this year, FINRA announced the start of FINRA Forward, a series of initiatives designed to help improve its effectiveness and efficiency in light of constantly evolving regulatory considerations and technological developments. These initiatives included modernizing FINRA rules and empowering member firm compliance, while combating cybersecurity and fraud risks.  As part of this effort, FINRA’s president and CEO, Robert Cook, recently announced their endeavor to leverage their long-standing experience, infrastructure, and capabilities to broadly integrate generative artificial intelligence (GenAI) enabled tools into their regulatory program.   

FINRA has previous experience using artificial intelligence in their market surveillance programs. FINRA has an obligation to oversee trading across dozens of exchanges, alternative trading systems, and other venues. In addition, they perform cross-market surveillance for 26 SROs operating 35 equities and options exchanges.  

Given this complexity and scale, Cook said that “…it became impossible to rely on traditional spreadsheets or similar tools to analyze the relevant data. Thus, for years FINRA has by necessity been developing and employing AI to support its market oversight functions”. This incorporation of artificial intelligence took what was previously a Sisyphean task and transformed it into a program that reviewed “hundreds of billions of market events generated each day in search of fraud, manipulation, or other misconduct that can harm investors and markets”.  

In 2024, FINRA introduced FILLIP, a large language model (LLM) based chat feature.  LLMs are a type of GenAI that use deep learning techniques on large amounts of data to identify, summarize, predict, and generate new text-based content. FILLIP is used by nearly 40% of FINRA staff on a weekly basis for summarizing and analyzing information, comparing documents for material changes, and writing and editing drafts. It can also conduct member firm risk reviews and analyze data on mutual funds and ETFs to facilitate examinations of related sales activities. While those users are still crucial to the process, FINRA estimates “many thousands of hours in annual efficiency gains from staff using the GenAI tools that are already deployed or in development, with more to come”, including through the use of third-party GenAI tools.  

FINRA has also provided helpful resources for firms that use GenAI and seek to find new use cases for the rapidly advancing technology, including Threat Intelligence Products (TIPs) that are delivered directly to member firm personnel, outreach, discussions, roundtables, and conference sessions. They continue to state that, as with any technology or practice, your supervisory duties under Rule 3110 are tech-agnostic. Accordingly, “a member firm must have a reasonably designed supervisory system tailored to its business. If a firm is using Gen AI tools as part of its supervisory system—for the review of electronic correspondence, for instance—its policies and procedures should address technology governance, including model risk management, data privacy and integrity, reliability and accuracy of the AI model.” 

In a world of ever-increasing complexity, FINRA has seen the need to incorporate new tools into its compliance landscape, not only out of a desire for increased efficiency, but “by necessity”. While FINRA builds their own AI landscape, member firms sometimes feel adrift in a sea of options, some more questionable than others. What you need is something built from the ground up with compliance in mind, something that uses GenAI responsibly yet with an eye toward the future. This tool would have the capability to connect disparate elements of your compliance program into a cohesive whole. What you need is a compliance connectivity platform. What you need is Red Oak.  

Sources: Advancing FINRA’s Mission With AI | FINRA.org, FINRA Announces New “FINRA Forward” Initiatives to Support Members, Markets and Investors | FINRA.org and Regulatory Notice 24-09 | FINRA.org 

Contributor

Cathy Vasilev is the Chief Compliance Officer and Co-Founder of Red Oak. Connect with Cathy on LinkedIn