What Changed in May 2024
Regulation S-P was originally adopted in 2000 under the Gramm-Leach-Bliley Act (GLBA). For 24 years, its core obligation was straightforward: broker-dealers and investment advisers must adopt written policies to safeguard customer nonpublic information (NPI) and properly dispose of consumer report information. The rule was written in an era of paper records and early digital systems. It did not contemplate AI, cloud infrastructure, or third-party data processors with continuous API access to client data.
Published in the Federal Register on June 3, 2024 (89 FR 47688), the final amendments are substantive. The six principal changes:
- Mandatory written incident response program. Under 17 C.F.R. § 248.30(a)(3), covered institutions — which include all SEC-registered investment advisers — must develop, implement, and maintain a written incident response program "reasonably designed to detect, respond to, and recover from unauthorized access to or use of customer information." The program must include procedures for assessment, containment and control, and notification. This is not a best-practices recommendation — it is a written, maintained, testable requirement.
- 30-day client breach notification. If sensitive customer information is accessed or used without authorization, RIAs must provide notice to each affected individual no later than 30 days after becoming aware of the incident. The threshold is "reasonably likely to have occurred" — not confirmed. See Section 2 below for the exact mechanics.
- 72-hour service provider notification requirement. Service providers — including AI vendors — must notify the RIA within 72 hours of becoming aware of a breach involving customer information. The SEC's adopting release (Investment Company Act Release No. IC-35193) confirms this applies broadly: any vendor with access to customer data is in scope.
- Expanded scope of protected information. The amendments broaden what counts as "customer information" to include data received from third-party sources: placement agents, feeder funds, fund administrators, and — critically for AI deployments — any data injected into an AI system's context from external sources.
- 5-year recordkeeping. RIAs must maintain written records of all compliance activities — breach investigations, notification determinations, service provider agreements, policies and procedures — for 5 years, with the first 2 years in an easily accessible location.
- Compliance deadlines. Large RIAs (AUM ≥ $1.5 billion) must comply by December 3, 2025. Smaller RIAs (AUM < $1.5 billion) must comply by June 3, 2026. The SEC named amended Reg S-P in its 2026 Examination Priorities as an active enforcement focus.
The 30-Day Breach Notification Rule
The notification requirement is the most operationally demanding element of the amendments. 17 C.F.R. § 248.30(a)(4) sets out the mechanics that compliance officers need to understand precisely:
The triggering standard: "reasonably likely to have occurred"
You do not need a confirmed breach to trigger the 30-day clock. The clock starts when your firm becomes aware that unauthorized access has occurred or is reasonably likely to have occurred. This matters for AI systems that log client-specific context: if your AI vendor's logs are exposed in a misconfiguration event, even without confirmed exfiltration, the 30-day window may already be running.
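The deadline arithmetic itself is trivial, which is exactly why it is worth automating: the only judgment call is the awareness date, and the clock runs from there regardless of when the breach is confirmed. A minimal sketch (function name is illustrative, not a prescribed control):

```python
from datetime import date, timedelta

NOTIFICATION_WINDOW_DAYS = 30  # 17 C.F.R. § 248.30(a)(4)

def notification_deadline(aware_date: date) -> date:
    """Latest date to notify affected individuals.

    The clock starts when the firm becomes aware that unauthorized
    access occurred OR is reasonably likely to have occurred -- not
    when exfiltration is confirmed.
    """
    return aware_date + timedelta(days=NOTIFICATION_WINDOW_DAYS)

# Example: vendor misconfiguration discovered June 10
deadline = notification_deadline(date(2025, 6, 10))
print(deadline)  # 2025-07-10
```

The practical point: log the awareness date the moment the "reasonably likely" standard is met, because every later argument about timeliness anchors to it.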
The notification exception: reasonable investigation
Notification is not required if, after a reasonable investigation, the RIA determines that sensitive customer information "has not been, and is not reasonably likely to be, used in a manner that would result in substantial harm or inconvenience." Two obligations follow from this: (1) the investigation must be reasonable — undocumented conclusions don't qualify; and (2) if the investigation is inconclusive, the presumption is notification, not silence. The SEC's final adopting release states this explicitly: "a covered institution must provide a notice unless it determines notification is not required following a reasonable investigation."
If you can't identify affected individuals
If a breach occurred but the RIA cannot identify which specific individuals were affected, the rule requires notifying all individuals whose sensitive customer information resided in the affected system. For an AI deployment where all client data is pooled in a shared retrieval index, this means a single incident potentially requires firm-wide notification. Tenant isolation — by architecture, not policy — is the only structural defense here.
The vendor chain
Your AI vendor's 72-hour notification obligation to you is a contractual and regulatory requirement you must establish and enforce. The final rule states that RIAs must "establish, maintain, and enforce written policies and procedures reasonably designed to require oversight, including through due diligence and monitoring of service providers." If your AI vendor contract does not include a 72-hour breach notification clause, you are already non-compliant with the oversight requirements of the amended rule.
Where AI Workflows Expose RIAs to Reg S-P Violations
Most RIA compliance programs were not written with AI in scope. The exposure points are structural — not addressable by updating your privacy notice.
1. Vendor data leakage through shared AI infrastructure
When an RIA uses a general-purpose AI assistant — commercial or internal — that is not tenant-isolated, client NPI can cross data boundaries. A retrieval query that searches across all clients' documents, or a context window that includes multiple clients' account data, creates cross-tenant exposure. If that system is ever breached, the firm cannot isolate which clients were affected — triggering the firm-wide notification scenario described in the final rule. The amended Reg S-P expanded scope to include data received from third-party sources, which means client data injected from external sources (custodians, fund admins) into an AI context is protected information regardless of how it arrived.
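To make "cross-tenant exposure" concrete, here is a toy retrieval index that hard-scopes every query to the caller's tenant. All names are illustrative; in production deployments isolation is enforced at the storage layer (one index per tenant), not with an application-level filter like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    tenant_id: str   # firm that owns this record
    client_id: str   # client within that firm
    text: str

class TenantScopedIndex:
    """Toy index illustrating tenant scoping at query time."""

    def __init__(self) -> None:
        self._docs: list[Document] = []

    def add(self, doc: Document) -> None:
        self._docs.append(doc)

    def search(self, tenant_id: str, query: str) -> list[Document]:
        # Hard scope: documents outside the caller's tenant are never
        # candidates, no matter how relevant they are to the query.
        return [d for d in self._docs
                if d.tenant_id == tenant_id
                and query.lower() in d.text.lower()]
```

A shared-index deployment is the same code without the `tenant_id` predicate: one breach, and every document in `_docs` is in scope for notification.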
2. Hallucinated client-specific responses
An AI system that fabricates client account details, invents regulatory filings, or generates nonexistent portfolio positions creates a different kind of compliance exposure: distributing materially false information in breach of your fiduciary duty. Under Advisers Act § 206, RIAs have a duty of care in client communications. An AI-generated client email that contains fabricated specifics about the client's account — even if the fabrication is "confident" — is a compliance event. It is not sufficient that the AI was confident. The output must be grounded. NIST AI RMF 1.0 identifies confabulation (hallucination) as a primary risk category for AI systems deployed in consequential contexts, and its Manage function calls for controls to detect and mitigate ungrounded outputs before they reach end users.
3. Unencrypted prompt logs and inference traces
Every query submitted to an AI system — including the client-specific context injected with it — is a log entry. If your AI vendor retains prompt logs, those logs contain client NPI. If those logs are not encrypted, not WORM-protected, and not subject to your 5-year retention and accessibility requirements, you have a recordkeeping gap under the amended rule. Many standard AI API contracts do not provide the log access, immutability guarantees, or deletion rights required for SEC examination compliance.
4. AI systems as unscreened service providers
The amended rule's service provider oversight requirements apply to any vendor that accesses customer information. An AI provider that ingests client data through an API is a service provider under 17 C.F.R. § 248.30(a)(3). That means: (a) you must conduct due diligence before onboarding them; (b) you must monitor their security practices continuously; (c) their contract must require 72-hour breach notification to you; and (d) you remain responsible for ensuring affected clients are notified even if the breach originated at the vendor's infrastructure. Most standard AI vendor terms of service do not include any of these provisions.
The 5 Controls Sturna Enforces
Sturna's architecture addresses each of the Reg S-P exposure vectors above through five layered controls. These are not policy documents — they are enforced at the infrastructure level, independently of operator configuration, and produce verifiable artifacts for SEC examination.
1. Transparency Card
Every AI output is accompanied by a machine-readable Transparency Card: a structured artifact documenting the sources used, the grounding score for each factual claim, the model version, and the verification gates applied. The Transparency Card is the Reg S-P-compliant analogue to a human advisor's "basis for recommendation" — it answers the examination question "how did this output reach the client?" without reconstruction.
2. AuditLogger WORM
Every AI-generated communication, research output, retrieval context, and gate decision is written to an append-only, cryptographically sealed audit log at the moment of creation. Entries cannot be modified or deleted. The log is accessible for SEC examination on demand — not as a retroactive export but as a live, tamper-evident record. This satisfies the 5-year recordkeeping requirement of the amended Reg S-P under 17 C.F.R. § 248.30 and the immutability requirements of SEC Rule 17a-4 simultaneously.
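A hash chain is one common way to get the tamper evidence described above. The sketch below is a minimal illustration of the mechanism, not Sturna's actual implementation: each entry's hash covers the previous entry's hash, so editing or deleting any record invalidates verification of everything after it.

```python
import hashlib
import json

class AuditLog:
    """Toy append-only, tamper-evident log using a SHA-256 hash chain."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def append(self, record: dict) -> str:
        prev = self._entries[-1]["hash"] if self._entries else "0" * 64
        payload = json.dumps(record, sort_keys=True)
        h = hashlib.sha256((prev + payload).encode()).hexdigest()
        self._entries.append({"record": record, "prev": prev, "hash": h})
        return h

    def verify(self) -> bool:
        # Re-derive every hash from the chain head; any edit breaks it.
        prev = "0" * 64
        for e in self._entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Note that a hash chain alone proves tampering occurred; WORM storage is what prevents it. A compliant system needs both.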
3. MARCH Adversarial Check
A second AI agent with information asymmetry reviews every output independently before it exits the system. The MARCH gate is specifically calibrated to catch Reg S-P exposure patterns: client NPI boundary crossings, mosaic theory violations, inference from combined data sources, and Regulation FD selective disclosure risks. An intercept log entry is written on every block, preserving evidence of the compliance control firing. This maps directly to the NIST AI RMF 1.0 Manage function's guidance on continuous monitoring for adversarial conditions.
4. Factual Grounding Gate
Every factual claim in an AI output is traced to a cited source in your approved corpus with a relevance score. Responses containing claims that cannot be grounded — or that ground to sources below the 0.85 threshold — are blocked or explicitly flagged as unverified before delivery. This prevents the hallucinated-client-specific-response vector described in Section 3 above, and generates a grounding map artifact that satisfies the documentation requirements of the amended safeguards rule.
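The gate logic reduces to a simple invariant: no claim ships without a source above the floor. A hedged sketch, assuming the 0.85 relevance threshold stated above (class and function names are illustrative, not Sturna's API):

```python
from dataclasses import dataclass
from typing import Optional

GROUNDING_THRESHOLD = 0.85  # per-claim relevance floor

@dataclass
class Claim:
    text: str
    source_id: Optional[str]  # None = no citation in the approved corpus
    relevance: float = 0.0

def grounding_gate(claims: list) -> tuple:
    """Return (deliverable, flagged): the response is held back when
    any claim lacks a source or grounds below the threshold."""
    flagged = [c for c in claims
               if c.source_id is None or c.relevance < GROUNDING_THRESHOLD]
    return (len(flagged) == 0, flagged)
```

The `flagged` list doubles as the grounding-map artifact: it records exactly which claims failed and why, which is the evidence an examiner asks for.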
5. Dedicated Tenant Isolation (Coalition Adjacency)
Each RIA firm operates in an isolated agent pool. Client NPI — account data, holdings, contact information — is scoped to your tenant at the infrastructure level and cannot cross to other tenants' contexts. Retrieval indexes, prompt logs, and inference traces are per-tenant by architecture. In the event of a breach, the blast radius is bounded to the affected tenant's data — eliminating the firm-wide notification scenario created by shared-index deployments.
For independent verification evidence, security documentation, and SOC 2 observation reports, see the Sturna Trust & Security Center →
For benchmark data on RIA compliance accuracy vs. unguarded AI: see Sturna vs. LangChain / AutoGen / CrewAI →
How to Audit Your Current AI Stack
Five questions to run against every AI vendor currently in your advisory workflow:
- Is your contract with this vendor Reg S-P compliant? Does it include a 72-hour breach notification clause? Does it grant you audit rights over their security practices? If not, the vendor oversight requirements of 17 C.F.R. § 248.30(a)(3) are not met.
- Is client data tenant-isolated? Can a retrieval query from Client A's session access data from Client B's files? If the system uses a shared embedding index across clients, you have a cross-tenant NPI risk that produces firm-wide notification exposure on any breach.
- Are prompt logs retained, encrypted, and WORM-protected? Every query that includes client NPI is a customer information record under the amended rule. If the vendor retains those logs, they are subject to your retention, accessibility, and disposal requirements — and you need contractual access to them for examination purposes.
- Can you produce a grounding artifact for any AI-generated client communication? In an SEC examination, "the model said so" is not an acceptable basis for a factual claim in a client communication. You need to be able to show the source, the retrieval context, and the verification applied. If your current AI stack doesn't produce this, you have a fiduciary documentation gap.
- Does your incident response program include AI-specific scenarios? A pre-2024 incident response plan that covers network breaches but not AI vendor breaches, prompt injection attacks, or hallucination-induced disclosure events does not satisfy the amended rule. Update it before your compliance deadline.
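As an illustrative exercise, the five questions above can be encoded as a per-vendor scorecard so the answers are recorded rather than assumed. All field names here are hypothetical, chosen to mirror the checklist:

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    name: str
    breach_clause_72h: bool     # 72-hour notification in the contract
    audit_rights: bool          # contractual audit rights over security
    tenant_isolated: bool       # no shared index across clients
    logs_worm_encrypted: bool   # prompt logs encrypted + WORM-protected
    grounding_artifacts: bool   # source/verification record per output
    irp_covers_ai: bool         # AI scenarios in incident response plan

    def gaps(self) -> list:
        checks = {
            "72-hour breach notification clause": self.breach_clause_72h,
            "contractual audit rights": self.audit_rights,
            "tenant isolation": self.tenant_isolated,
            "encrypted, WORM-protected prompt logs": self.logs_worm_encrypted,
            "grounding artifacts for client communications": self.grounding_artifacts,
            "AI scenarios in incident response program": self.irp_covers_ai,
        }
        return [label for label, ok in checks.items() if not ok]
```

Any nonempty `gaps()` list is a remediation item with a compliance deadline attached.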
Scan your AI stack for Reg S-P exposure
Submit your current AI workflow to Sturna's compliance scan. We check tenant isolation, audit log coverage, vendor contract gaps, and grounding evidence against the amended Reg S-P requirements. Results in under 60 seconds — no account required.
Run Reg S-P Compliance Scan →
Not legal advice. For compliance determinations, consult qualified securities counsel. Sturna is an AI verification infrastructure provider.
Common Questions from CCOs
Is IA-6604 the same as the "Reg S-P amendments" I've been reading about?
Yes. The May 2024 Reg S-P amendments were adopted under multiple statutory authorities, producing several release numbers for the same rulemaking: 34-100155 (Securities Exchange Act), IA-6604 (Investment Advisers Act), and IC-35193 (Investment Company Act). The full text is available at sec.gov (PDF). Some secondary sources reference the proposed rule number (IA-6240, published March 2023), which is different from the final rule.
Does Reg S-P apply to AI-generated client communications?
Yes. Any AI system that processes, generates, or stores sensitive customer information is within scope of Reg S-P's safeguards and disposal rules. AI vendors that access client data are service providers under the amended rule. AI-generated outputs containing client-specific data — including prompt logs, retrieval contexts, and inference traces — are customer information records subject to the rule's protections and recordkeeping requirements.
What happens if my AI vendor has a breach before I've updated my contract?
You remain responsible. The amended rule states that RIAs "retain the obligation to ensure that affected individuals are notified in accordance with the notice requirements" regardless of whether the breach originated at a service provider. Your 30-day clock starts when you become aware of the breach — and your lack of a 72-hour notification clause may mean you learn about it later than you should, reducing your response window.
Does NIST AI RMF 1.0 provide a compliance safe harbor under Reg S-P?
No. NIST AI RMF 1.0 is a voluntary framework — it provides no SEC safe harbor. However, the SEC's amended Reg S-P uses principles-based language ("reasonably designed," "reasonable investigation") that gives compliance credit for documented, systematic risk management. Implementing the NIST AI RMF's Govern-Map-Measure-Manage cycle for your AI deployments produces the documentation and evidence that an SEC examiner will look for when assessing whether your incident response program is "reasonably designed."
Deploy Reg S-P-compliant AI in 3 business days.
The $2,500 RIA pilot provisions a dedicated, tenant-isolated agent pool with WORM audit logging, Factual Grounding Gate, and MARCH adversarial check active from day 1. Your 30-day pilot deposit credits your first month of service.
- ✓ Dedicated RIA-tuned agent pool (isolated tenancy)
- ✓ Reg S-P NPI handling from day 1
- ✓ SEC 17a-4 WORM audit trail, active immediately
- ✓ Factual Grounding Gate + MARCH adversarial check
- ✓ 72-hour breach notification clause in vendor contract
- ✓ Form ADV disclosure template included
- ✓ Pro-rated refund if pilot doesn't deliver at day 30
Payments secured by Stripe · No annual contract required