On December 9, 2025, FINRA released its 2026 Annual Regulatory Oversight Report. For the first time, the report contains a standalone section dedicated to Generative AI. This is a significant escalation from 2025, when GenAI was folded into other topics.
This article breaks down what changed, what it signals about examination priorities, and what broker-dealers should do now.
What's Actually New
From 3 Use Cases to 15
The 2025 report identified three GenAI use cases firms had implemented: summarization, analysis across data sets, and policy retrieval. The 2026 report identifies fifteen.
The new additions include:
- Conversational AI and question answering
- Sentiment analysis
- Translation
- Content generation and drafting
- Classification and categorization
- Workflow automation and process intelligence
- Coding tools
- Query applications (natural language to database)
- Synthetic data generation
- Personalization and recommendation
- Data transformation
- Modeling and simulation
FINRA notes that most current implementations remain internal and efficiency-focused. Summarization and information extraction remain the most common use cases. But the expansion from three to fifteen use cases in twelve months tells you where this is heading.
AI Agents Get Their Own Section
This is the most forward-looking part of the report. FINRA defines AI agents as "systems or programs that are capable of autonomously performing and completing tasks on behalf of a user." They explicitly call out that agents present a different risk profile than standard GenAI tools.
The report identifies specific agent-related risks:
Autonomy and Scope Creep: Agents may act without human validation and exceed their intended authority. An agent designed to draft client communications might start sending them.
Auditability and Transparency: Multi-step reasoning chains are difficult to reconstruct. When an agent makes a decision through 47 intermediate steps, documenting the logic for examiners becomes challenging.
Sensitive Data Handling: Agents with broad system access may unintentionally retain or disclose proprietary information across contexts.
Insufficient Domain Expertise: General-purpose agents lack the specialized knowledge required for securities compliance. They may not understand why a particular recommendation requires suitability analysis.
Misaligned Incentives: Reward functions designed for efficiency may conflict with investor protection. An agent optimized to maximize trade completion might deprioritize best execution.
The report specifies that firms should:
- Monitor agent system access and data handling
- Determine where human-in-the-loop oversight is necessary
- Track agent actions and decisions
- Establish guardrails to constrain agent behavior (a minimal sketch follows this list)
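These controls do not require exotic tooling. As a minimal sketch, assuming a Python environment and hypothetical action names (none of this code comes from the report), a guardrail layer can sit between an agent and the systems it touches: it checks each proposed action against a defined scope, routes consequential actions to a human reviewer, logs every decision, and supports a hard stop.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class ActionRecord:
    timestamp: str
    action: str
    payload: dict
    outcome: str  # "executed", "blocked", or "pending_review"

@dataclass
class AgentGuardrail:
    allowed_actions: set          # scope definition: actions the agent may take at all
    human_review_actions: set     # actions that always require human sign-off
    execute: Callable             # callback that performs approved work
    audit_log: list = field(default_factory=list)
    killed: bool = False          # kill switch

    def request(self, action: str, payload: dict) -> str:
        """Gate a proposed agent action and record the decision."""
        now = datetime.now(timezone.utc).isoformat()
        if self.killed:
            outcome = "blocked"          # kill switch engaged
        elif action not in self.allowed_actions:
            outcome = "blocked"          # outside defined scope
        elif action in self.human_review_actions:
            outcome = "pending_review"   # human-in-the-loop required
        else:
            self.execute(action, payload)
            outcome = "executed"
        self.audit_log.append(ActionRecord(now, action, payload, outcome))
        return outcome

# Hypothetical example: the agent may draft client emails, but a human must release them
guardrail = AgentGuardrail(
    allowed_actions={"draft_client_email", "summarize_account"},
    human_review_actions={"draft_client_email"},
    execute=lambda action, payload: print(f"running {action}"),
)
print(guardrail.request("draft_client_email", {"client_id": "hypothetical-123"}))  # pending_review
print(guardrail.request("send_client_email", {}))                                  # blocked
```

The audit log is what makes the "track agent actions" expectation testable: you can show an examiner every action an agent requested and why it was executed or blocked.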
Prompt and Output Logging Expectations
FINRA now explicitly expects firms to maintain "prompt and output logs for accountability and troubleshooting" as part of ongoing monitoring. This isn't phrased as a new rule—FINRA's position is that existing supervisory requirements already demand this.
The report also specifies tracking "which model version was used and when" during deployment. This matters because models change. A model that passed your validation in March may behave differently after a September update. Knowing which version generated which output is essential for investigations and audits.
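As a minimal sketch of what that might look like, assuming a simple append-only JSON Lines file (the field names and file path are illustrative, not prescribed by FINRA), each prompt and output pair is stored with the user, the tool, the exact model version, and a timestamp, so any output can later be traced to the model build that produced it.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("genai_prompt_log.jsonl")  # append-only JSON Lines log

def log_interaction(user_id: str, tool: str, model_version: str,
                    prompt: str, output: str) -> None:
    """Append one prompt/output pair with the model version that produced it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        "model_version": model_version,   # e.g. vendor build number or API model ID
        "prompt": prompt,
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record a summarization request made through a hypothetical vendor tool
log_interaction(
    user_id="rep-0042",
    tool="summarizer",
    model_version="vendor-model-2026-03",
    prompt="Summarize the attached account activity.",
    output="The account shows three wire transfers in March...",
)
```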
What This Means for Examinations
The report uses encouraging language ("FINRA encourages firms to consider...") rather than mandatory language. This is consistent with Regulatory Notice 24-09, which explicitly stated it created no new requirements.
But here's what experienced compliance officers understand: when FINRA publishes detailed guidance about what firms "should consider," those considerations become de facto examination benchmarks. Examiners will ask whether you considered them. They will ask to see documentation of your consideration. Expect questions along these lines:
- How do you supervise GenAI use at your firm?
- What GenAI tools are employees using, and for what purposes?
- Show me your prompt and output logs for AI-assisted supervision
- How do you track model versions and changes?
- What controls exist for AI agents that take autonomous actions?
- How do you validate that AI-generated communications comply with Rule 2210?
The report mentions that FINRA is monitoring GenAI-enabled fraud, including AI-generated voices, deepfake identity documents, and synthetic media used in account takeovers. This means firms should expect questions about how they detect AI-generated fraudulent inputs, not just how they govern their own AI use.
Regulatory Notice 24-09 Context
The 2026 report builds on Regulatory Notice 24-09, issued June 27, 2024. That notice established the foundational principle: FINRA's rules are technology-neutral. They apply to AI the same way they apply to any other technology.
Key rules the notice highlighted:
Rule 3110 (Supervision): Firms must have reasonably designed supervisory systems tailored to their business. If you use GenAI for supervision or surveillance, your policies must address technology governance, model risk management, data privacy, and accuracy.
Rule 2210 (Communications): Content standards apply equally to AI-generated communications. A chatbot's output to a client is a retail communication subject to the same rules as a human-written email.
The notice explicitly stated that firms must evaluate GenAI tools before implementation "to ensure that the member firm can continue to comply with existing FINRA rules applicable to the business use of those tools."
What Broker-Dealers Should Do Now
Immediate Actions (Q1 2026)
Inventory Current AI Use
You cannot govern what you cannot see. Conduct a comprehensive inventory of the following (a minimal record format is sketched after this list):
- Approved GenAI tools and their use cases
- Shadow AI (unauthorized tools employees are using anyway)
- Third-party vendors using AI on your behalf
- AI components embedded in existing software
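The exact format matters less than having one queryable record per tool. A minimal sketch of such a record, assuming Python (the fields and categories below are assumptions, not a FINRA template):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AIInventoryEntry:
    tool_name: str
    use_case: str              # e.g. "summarization", "coding assistant"
    status: str                # "approved", "shadow", or "under_review"
    source: str                # "in_house", "third_party_vendor", or "embedded_in_software"
    vendor: Optional[str]      # vendor name when the tool is not built in-house
    business_owner: str        # who is accountable for the tool
    handles_client_data: bool

inventory = [
    AIInventoryEntry("DocSummarizer", "summarization", "approved",
                     "third_party_vendor", "ExampleVendor", "Compliance Ops", True),
    AIInventoryEntry("Personal chatbot use", "drafting", "shadow",
                     "third_party_vendor", None, "unassigned", False),
]

# The shadow-AI slice is the part examiners are most likely to ask about
shadow_tools = [asdict(e) for e in inventory if e.status == "shadow"]
print(shadow_tools)
```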
Implement Logging
If you're running GenAI tools without prompt and output logging, that's your highest priority gap. FINRA expects these logs for accountability and troubleshooting. Build the infrastructure now.
Document Model Versions
Establish a system to track which model version was used and when. When a vendor updates their model, you should know about it and have records linking outputs to specific versions.
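One way to make this concrete, as a minimal sketch (the registry structure and tool names are assumptions): keep a per-tool record of which model versions the firm has validated, and flag anything produced by a version that has not been reviewed.

```python
from datetime import date

# Per-tool registry of model versions the firm has validated, with the review date
validated_versions = {
    "summarizer": {"vendor-model-2026-03": date(2026, 3, 15)},
}

def needs_revalidation(tool: str, model_version: str) -> bool:
    """True when an output came from a model version the firm has not validated."""
    return model_version not in validated_versions.get(tool, {})

# A vendor pushes an update; outputs from the new build should be flagged for review
print(needs_revalidation("summarizer", "vendor-model-2026-03"))  # False
print(needs_revalidation("summarizer", "vendor-model-2026-09"))  # True
```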
Near-Term Actions (Q2-Q3 2026)
Develop Agent-Specific Controls
If you're using or planning to use AI agents—systems that take autonomous actions—you need controls beyond what you have for chat-based tools. This includes:
- Scope definitions that limit what agents can do
- Human-in-the-loop requirements for consequential actions
- Action logging with reconstruction capability
- Kill switches
Update Supervisory Procedures
Review your written supervisory procedures. Do they address GenAI? Do they specify how AI-generated content is reviewed before client distribution? Do they address AI-assisted supervision and the unique risks of relying on AI for compliance functions?
Train Examinable Staff
Registered representatives and compliance staff should understand the firm's AI policies. They should be able to explain to an examiner what tools they use, how they use them, and what controls apply.
Ongoing
Monitor Vendor Models
When third-party AI vendors update their models, your validation may no longer apply. Establish a process to be notified of model updates and re-validate as appropriate.
Test for Bias and Accuracy
The report calls out bias and hallucinations as ongoing risks. Periodic testing—not just at deployment, but continuously—is an expectation.
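One way to operationalize continuous testing, as a minimal sketch (the golden-set prompts and the `generate` callable are stand-ins for whatever tool and benchmark the firm actually uses): re-run a fixed set of prompts with known correct facts on a schedule and alert when the pass rate drops.

```python
from typing import Callable

# Each case pairs a prompt with facts the output must contain to count as accurate
GOLDEN_SET = [
    ("Which FINRA rule governs communications with the public?", ["2210"]),
    ("Which FINRA rule covers supervision requirements?", ["3110"]),
]

def accuracy_rate(generate: Callable[[str], str]) -> float:
    """Fraction of golden-set prompts whose output mentions every required fact."""
    passed = sum(
        all(fact in generate(prompt) for fact in facts)
        for prompt, facts in GOLDEN_SET
    )
    return passed / len(GOLDEN_SET)

# Stand-in generator; in practice this would call the firm's GenAI tool
stub = lambda prompt: "FINRA Rule 2210 and Rule 3110 apply here."
print(accuracy_rate(stub))  # 1.0
```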
Timeline Summary
| Timeframe | Action |
|---|---|
| Now | Complete AI inventory (approved and shadow) |
| Q1 2026 | Implement prompt/output logging; document model versions |
| Q2 2026 | Develop agent-specific controls; update WSPs |
| Q3 2026 | Training for examinable staff |
| Ongoing | Vendor model monitoring; bias/accuracy testing |
The Bottom Line
The standalone GenAI section in the 2026 report is not a coincidence. FINRA is telling firms: we are paying attention to this, and you should be too.
The regulatory approach remains technology-neutral—existing rules apply, no new rules created. But the level of specificity in this report (prompt logging, model version tracking, agent-specific controls) provides a clear roadmap for what examiners will expect.
Firms that treat this as a compliance checkbox will struggle. The firms that will fare best are those treating AI governance as an operational capability, not a policy document. When an examiner asks to see your prompt logs, you should be able to produce them. When they ask how you validated your AI supervision tool, you should have documentation. When they ask what controls apply to autonomous agents, you should have an answer beyond "we're working on it."
The 15 use cases in this report will be 30 next year. The time to build governance infrastructure is now.
