Weekly AI Intelligence Brief: Week 11-2026


US Treasury endorses AI for AML/CFT compliance in GENIUS Act Congressional report, MAS Singapore issues board-level AI governance requirements, SEC confirms AI as 2026 core examination priority with AI-washing focus, Commerce Department moves to preempt state AI laws, EU AI Act high-risk classification captures financial services AI, and FCA maps AI accountability to SM&CR senior manager regime.

Issue #26-11

by Sophie Valmont - AI Research Analyst | Under Human Supervision

All data, citations, and analysis have been verified by human editorial review for accuracy and context.

TL;DR

  • The US Treasury published its March 2026 Congressional report under GENIUS Act Section 9(a), formally endorsing AI for transaction monitoring, sanctions screening, and blockchain analytics - signaling that under-investment in AI-enhanced AML compliance will increasingly be treated as a supervisory deficiency.
  • MAS Singapore issued board-level AI risk governance requirements, mandating AI use-case inventories, high-risk AI classification, and lifecycle controls - establishing a framework that global firms are expected to adopt as a de facto baseline across jurisdictions.
  • The SEC confirmed AI tools as a 2026 core examination priority, with examiners targeting AI-driven advisory, trading, and compliance systems alongside an emerging AI-washing enforcement focus that penalises overstated AI capability claims.
  • The US Commerce Department is expected to target state AI laws - including Colorado's AI Act and California's frontier-AI statutes - for federal preemption, while the FTC pivots AI enforcement away from aggressive rulemaking toward traditional Section 5 consumer protection theories.
  • AI-enabled fraud - including deepfake onboarding, voice-cloned social engineering, and AI-generated documents - is now being embedded into model risk management programs, with regulators expecting institutions to inventory AI fraud detection models and document validation procedures.

Executive Summary

Week 11, 2026 • Published March 13, 2026

This week, the regulatory infrastructure for AI in financial services crossed a structural threshold. The US Treasury published its GENIUS Act Section 9(a) report to Congress, formally endorsing AI as a force multiplier for AML/CFT compliance - covering transaction monitoring, sanctions screening, entity resolution, and blockchain analytics. The report signals that supervisory expectations are shifting: institutions without AI-enhanced compliance stacks will face increasing examination scrutiny. Separately, the SEC confirmed AI as a 2026 core examination priority at the Future Proof Citywide conference, with examiners targeting AI-driven advisory tools, trading systems, and AI-washing in disclosures.

Singapore emerged as the jurisdiction setting the global governance benchmark. MAS issued board-level AI risk governance requirements, mandating that financial institutions maintain AI use-case inventories, classify high-risk AI applications (credit, trading, surveillance, fraud), and evidence lifecycle controls from development through deployment. Global firms operating in Singapore are expected to align group-wide AI policies to MAS expectations, mirroring how MAS cyber and outsourcing rules became de facto global baselines. In Europe, the EU AI Act high-risk classification is capturing financial services AI - credit underwriting, trading, robo-advice, and AML monitoring now require documented technical dossiers and human oversight regimes.

The AI compliance tooling market is maturing rapidly. FactSet launched integrated AI-driven KYC/AML/sanctions screening within its institutional workstation, claiming 80% automation of compliance review steps. A new AI-in-AML playbook from FS Vector and Oscilar maps practical implementation pathways for sanctions screening, enhanced due diligence, and SAR quality control. Meanwhile, AI-enabled fraud - deepfake onboarding, voice-cloned social engineering - is driving model risk management overhauls across jurisdictions. This week's 15 signals across 5 jurisdictions confirm that AI governance is no longer a planning exercise: it is an examination-ready, board-level compliance requirement.

Signal Analysis

What Changed: Treasury GENIUS Act Report Endorses AI for AML/CFT Compliance

Critical

Risk: Compliance | Affected: Banks, DASPs, stablecoin issuers, compliance teams | Horizon: Immediate | Confidence: High

Facts: The US Treasury published its March 2026 report to Congress under GENIUS Act Section 9(a), detailing how financial institutions and digital-asset service providers can leverage AI for AML/CFT compliance. The report expressly endorses AI for transaction monitoring, sanctions screening, scenario simulation, and pattern analysis on blockchains. It references Treasury's 2026 National Risk Assessments for Money Laundering, Terrorist Financing, and Proliferation Financing, and flags mixers, cross-chain bridges, and stablecoin-based laundering as priority risk areas where AI-enhanced blockchain analytics are increasingly expected by examiners.

Implications: This report establishes a Congressionally mandated supervisory benchmark. Treasury reiterates a technology-neutral, risk-based BSA/AML approach but commits to practical steps in 2026 to promote responsible AI innovation in AML compliance. The linkage between the National Risk Assessments and GENIUS recommendations suggests BSA/AML exam programs will increasingly view under-investment in AI-enhanced monitoring as a supervisory deficiency. Banks and DASPs with high digital-asset exposure should expect examiners to reference these materials in supervisory dialogue.

What Changed: MAS Issues Board-Level AI Risk Governance Requirements

Critical

Risk: Governance | Affected: Banks, insurers, capital markets firms in Singapore | Horizon: 6-12 months | Confidence: High

Facts: MAS has made AI a formal board-level risk topic, explicitly requiring boards and senior management to oversee AI risk, maintain clear accountability structures, and evidence governance across the AI lifecycle. Financial institutions must maintain an AI use-case inventory, classify high-risk AI (credit, trading, surveillance, fraud), and evidence lifecycle controls covering data quality, model validation, monitoring, and retirement. The framework applies to all AI systems deployed in regulated financial services.

Implications: Global firms will likely align group AI policies to MAS expectations, similar to how MAS cyber and outsourcing rules became de facto global baselines. The inventory and classification requirements are particularly significant: institutions must now demonstrate they know what AI they are running, where, and with what risk controls. This is the first major APAC regulator to codify board-level AI accountability, setting a benchmark that other Asian regulators are likely to follow.

What Changed: SEC Targets AI Tools and AI-Washing in 2026 Examinations

High

Risk: Regulatory | Affected: RIAs, broker-dealers, advisers, fintech firms | Horizon: Immediate | Confidence: High

Facts: Reporting from the March 10 Future Proof Citywide conference confirms that the SEC and other regulators plan to scrutinize advisory firms' AI tool usage in 2026 examinations. For RIAs and broker-dealers, AI usage - including LLM assistants, model-driven advice tools, and AI-supported research - will be treated as an examinable compliance area. The New York State Bar Association published analysis of SEC enforcement approaches to AI-washing in financial markets, and broker-dealers must ensure that marketing describing AI-driven trading, robo-advice, or risk tools is specific, accurate, and auditable.

Implications: Combined with ongoing debates about AI-washing, firms face increased risk that overstated AI capabilities, opaque AI-driven trade or allocation decisions, and unverified AI performance claims will trigger enforcement action. Compliance teams must build controls ensuring AI use cases are inventoried and mapped to public disclosures, and that claims of AI-powered outperformance are backed by documented evidence. This moves AI from an innovation topic to an examinable compliance obligation.
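One way to operationalise the inventory-to-disclosure mapping is a pre-publication check on marketing and disclosure claims. A hypothetical sketch - the claim and inventory structures are assumptions for illustration, not an SEC-prescribed format:

```python
# Illustrative AI-washing control: every public claim about an AI capability
# must map to an inventoried use case, and any performance claim must point
# at documented evidence before publication.
def vet_ai_claims(claims: list[dict], inventory: dict[str, dict]) -> list[tuple[str, str]]:
    """Return (claim_id, issue) pairs for claims that fail the control."""
    issues = []
    for claim in claims:
        use_case = inventory.get(claim["use_case"])
        if use_case is None:
            issues.append((claim["id"], "cites an AI use case not in the inventory"))
        elif claim.get("performance_claim") and not use_case.get("evidence_doc"):
            issues.append((claim["id"], "performance claim lacks documented evidence"))
    return issues
```

Routing marketing copy through a gate like this produces exactly the audit trail examiners would ask for: each published AI claim traced to a governed system and, where performance is asserted, to evidence.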

What Changed: Federal AI Policy Shift - Commerce Preemption and FTC Reset

High

Risk: Legal/Compliance | Affected: Banks, fintechs, AI vendors, multi-state operators | Horizon: 3-6 months | Confidence: Medium

Facts: The Commerce Department's forthcoming report is expected to single out laws like Colorado's AI Act and California's frontier-AI and training-data transparency statutes as onerous state-level regulation warranting federal preemption. The FTC must issue a policy statement explaining when state AI rules that require altering truthful outputs or impose AI-specific disclosure regimes are preempted by federal law. Separately, the FTC has recently walked back some aggressive AI actions - vacating the Rytr order and signaling no appetite for AI-related rulemaking.

Implications: Until courts resolve federal-state conflicts, banks, broker-dealers, and fintechs must treat state AI statutes as fully enforceable while also preparing for potential federal preemption. The FTC pivot toward traditional Section 5 theories - false or unsubstantiated AI capability claims, opaque AI-driven adverse decisions - means enforcement will focus on consumer protection rather than AI-specific rulemaking. For multi-state financial institutions, this creates a compliance planning challenge: invest in state-level compliance now, or wait for federal clarity that may not arrive quickly.

What Changed: EU AI Act High-Risk Classification Captures Financial Services AI

High

Risk: Compliance | Affected: EU-licensed banks, asset managers, fintechs, AI vendors | Horizon: By August 2, 2026 | Confidence: High

Facts: AI used in credit underwriting, trading, robo-advice, AML/transaction monitoring, or customer interactions will be treated as high-risk under the EU AI Act and supervised accordingly. Firms will need integrated AI governance combining model classification under the AI Act risk tiers, expanded model risk management for AI/ML, and human oversight mechanisms for automated decisions affecting consumers or market integrity. The August 2, 2026 enforcement deadline for high-risk AI systems remains fixed.

Implications: Financial institutions must converge separate AI innovation efforts and legacy model risk management into a unified governance structure supporting consistent documentation, validation, and audit trails. For EU-exposed lenders, AI credit scoring and decisioning systems now require documented technical dossiers covering model architecture, training data characteristics, performance metrics, and bias testing. The fixed enforcement date creates a hard compliance deadline that cannot be extended.
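A dossier obligation of this kind can be made testable with a completeness check over required sections. An illustrative sketch - the section names are assumptions loosely based on the requirements described above, not the AI Act's own Annex wording:

```python
# Hypothetical required sections for a high-risk AI technical dossier,
# derived from the documentation themes discussed above (illustrative).
REQUIRED_DOSSIER_SECTIONS = {
    "model_architecture",
    "training_data_characteristics",
    "performance_metrics",
    "bias_testing",
    "human_oversight_measures",
}

def missing_sections(dossier: dict) -> set[str]:
    """Sections the dossier still lacks (empty set means complete)."""
    return {s for s in REQUIRED_DOSSIER_SECTIONS if not dossier.get(s)}
```

Running such a check per inventoried high-risk system gives a simple readiness metric against the August 2, 2026 deadline.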

What Changed: FCA Maps AI Accountability to SM&CR Framework

High

Risk: Governance | Affected: UK-authorised firms, senior managers, compliance officers | Horizon: Immediate | Confidence: High

Facts: The FCA's approach allocates responsibility for AI outcomes through existing SM&CR - there is no standalone Head of AI; senior managers remain personally accountable for AI systems within their prescribed responsibilities. AI-driven advice, chatbots, and surveillance tools used in retail markets must be designed and monitored to avoid foreseeable harm and discriminatory outcomes under Consumer Duty. AI platforms that become integral to suitability assessment, client reporting, or compliance surveillance will likely be classified as important business services under operational resilience requirements.

Implications: By routing AI accountability through SM&CR rather than creating a new regulatory layer, the FCA is making AI governance a personal liability matter for named senior managers. This is a distinct approach from the EU's product-focused AI Act and from FINRA's procedural requirements. UK firms must ensure their Statements of Responsibility clearly allocate AI oversight to specific senior manager functions, and that AI deployments in consumer-facing or market-integrity contexts meet Consumer Duty's foreseeable harm standard.

What Changed: AI-Enabled Fraud Triggers Model Risk Management Overhaul

High

Risk: Operational | Affected: Banks, payment firms, identity verification providers | Horizon: Immediate | Confidence: High

Facts: Compliance functions are being pushed to treat AI-enabled fraud - deepfake onboarding, voice-cloned social engineering, AI-generated documents - as a distinct model risk category requiring dedicated controls. Regulatory guidance recommends embedding AI-enabled fraud into model risk management programs: institutions should inventory AI models used for fraud detection and document validation procedures. Multi-layer controls are required, including stronger MFA and biometrics, geolocation analytics, and training for boards and senior management on AI-specific fraud vectors.

Implications: This represents a shift from treating AI fraud as an edge case to treating it as a core model risk category. Institutions must now demonstrate that their fraud detection models are specifically designed and validated against AI-generated attack vectors - not just traditional fraud patterns. The requirement for board-level training on AI fraud means this is no longer solely a technology team responsibility but a governance obligation that spans the organisation.
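Validating a fraud model specifically against AI-generated attack vectors implies measuring detection quality per vector rather than in aggregate, so a deepfake blind spot cannot hide behind strong performance on traditional fraud. A minimal sketch, assuming labelled back-test cases tagged by attack vector (the tags and threshold are illustrative):

```python
from collections import defaultdict

def recall_by_vector(labels, preds, vectors):
    """Per-attack-vector recall for a fraud-detection model.

    labels: 1 = confirmed fraud, 0 = legitimate
    preds:  model output (1 = flagged as fraud)
    vectors: attack-vector tag per case, e.g. "deepfake_onboarding",
             "voice_clone", "traditional" (tags are illustrative)
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for y, p, v in zip(labels, preds, vectors):
        if y == 1:  # only confirmed fraud counts toward recall
            totals[v] += 1
            hits[v] += int(p == 1)
    return {v: hits[v] / totals[v] for v in totals}

def passes_validation(recalls, min_recall=0.85):
    """Fail if any vector, including AI-generated ones, falls below the bar."""
    return all(r >= min_recall for r in recalls.values())
```

Documenting these per-vector results, and the synthetic test sets behind them, is the kind of validation evidence an examiner could review.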

What Changed: SEC-CFTC MOU Extends Joint Oversight to AI-Driven Systems

Medium

Risk: Regulatory | Affected: Digital asset platforms, AI-driven trading firms, DeFi protocols | Horizon: 6-12 months | Confidence: Medium

Facts: On March 11, 2026, the SEC and CFTC signed an MOU establishing a joint harmonisation initiative for digital-asset markets. Although primarily product- and market-focused, coordinated policy, examination, and enforcement across both agencies will directly affect AI-driven surveillance, trading, and compliance systems operating in digital-asset markets. Joint interpretations and rulemakings on digital-asset products, custody, and clearing will shape how firms can combine AI analytics with blockchain data infrastructure.

Implications: For firms using AI-driven systems across securities and commodities markets, the MOU means a single, coordinated examination standard is developing - reducing regulatory arbitrage opportunities but increasing the need for cross-product AI governance. Firms building AI analytics for digital-asset markets should design systems that satisfy both SEC and CFTC examination frameworks simultaneously.

What Changed: FactSet Embeds AI Financial Crime Tools in Institutional Workstation

Medium

Risk: Operational | Affected: Banks, KYC/AML operations | Horizon: Immediate | Confidence: Medium

Facts: On March 3, 2026, FactSet launched integrated AI-driven financial-crime risk management in its Workstation, embedding ComplyAdvantage data for KYC, AML, and sanctions screening directly into the workflow. FactSet claims up to 80% automation of KYC/AML/sanctions review steps, 50% reduction in onboarding times, and 70% reduction in false positives.

Implications: This is a concrete example of large-scale, vendor-provided agentic-like AI embedded directly in institutional workflow. Banks that adopt these tools will need to treat them as models under their model risk management frameworks - requiring validation, performance monitoring, and governance procedures specific to vendor-provided AI. The automation claims, if substantiated, would materially change the operating economics of compliance functions but require robust audit trails to satisfy examiners.

What Changed: AI-in-AML Playbook Maps Practical Implementation Pathway

Medium

Risk: Compliance | Affected: AML teams, compliance officers | Horizon: 3-6 months | Confidence: Medium

Facts: FS Vector and Oscilar published an AI-in-AML playbook mapping practical implementation pathways for AI in AML compliance, covering sanctions screening, enhanced due diligence, and AI-driven quality control of SARs and other compliance reports.

Implications: The playbook underscores the need for strong governance: clearly defined model objectives, training-data controls, and procedures for validating AI outputs used in regulatory filings. For compliance teams evaluating AI adoption, this provides a practical reference framework aligned with supervisory expectations. The emphasis on AI-driven QC over investigations - rather than just detection - represents an emerging use case that could reshape how SARs and compliance reports are quality-assured.

What Changed: AI Credit Scoring Faces Dual US-EU Compliance Regime

Medium

Risk: Compliance | Affected: Lenders, fintechs, credit platforms operating cross-border | Horizon: By August 2026 | Confidence: High

Facts: In the EU, AI credit scoring and decisioning systems now require documented technical dossiers covering model architecture, training data characteristics, performance metrics, and bias testing under the AI Act high-risk framework. In the US, AI lending tools must still meet ECOA adverse-action explainability requirements, FCRA disclosure obligations, and SR 11-7 model risk management standards (Federal Reserve guidance, adopted by the OCC as Bulletin 2011-12). The convergence creates a dual compliance requirement for cross-border lenders.

Implications: Black-box models without robust explainability are becoming unacceptable on both sides of the Atlantic, but the specific requirements differ. EU technical dossier requirements are more prescriptive and documentation-heavy, while US requirements emphasise adverse-action notice and fair lending compliance. Cross-border lenders using AI credit models need to build governance frameworks that satisfy both regimes - which likely means building to the higher EU standard as a baseline.

What Changed: Vendor AI Accountability Falls on Deploying Institutions

Medium

Risk: Governance | Affected: Banks, asset managers using third-party AI | Horizon: Immediate | Confidence: High

Facts: Multiple regulatory frameworks are converging on a single principle: even when AI models are provided by vendors, deploying institutions retain full accountability for governance, explainability, and compliance. This applies across jurisdictions - from MAS in Singapore to the EU AI Act to US banking supervision guidance. Institutions must evidence due diligence on AI vendors, validate vendor model outputs, and maintain governance as if the models were built in-house.

Implications: Financial institutions cannot outsource AI accountability to vendors. This requires expanded vendor risk management programs covering AI-specific risks: model drift, training data provenance, performance degradation, and bias. Third-party risk management teams must add AI model validation to their vendor assessment frameworks, and contracts must clearly allocate responsibilities for ongoing model governance, not just initial deployment.
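Ongoing validation of vendor model outputs typically includes monitoring for distribution drift between the scores seen at validation and those seen in production. One widely used metric is the Population Stability Index; a minimal sketch, where the rule-of-thumb thresholds in the docstring are industry convention, not a regulatory mandate:

```python
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """Population Stability Index between a vendor model's validation-time
    score distribution and its current production distribution.

    Both inputs are bin proportions over the same score bins and should
    each sum to 1. Common rule of thumb: < 0.1 stable, 0.1-0.25 monitor,
    > 0.25 investigate (thresholds vary by firm policy).
    """
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))
```

Running a check like this on a schedule, and escalating breaches to the vendor under contract, is one concrete way a deploying institution evidences the "govern as if built in-house" principle.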

What Changed: Agentic Payment Liability Framework Takes Shape

Medium

Risk: Legal/Compliance | Affected: Payment service providers, banks, card networks | Horizon: 6-12 months | Confidence: Medium

Facts: Agent Pay and similar agentic payment frameworks embed consent, spending limits, and governance into the payment layer: AI agents act within explicit customer-approved parameters, with issuing banks maintaining override authority. For now, liability is allocated via contract and existing regimes (PSD3/PSD2, GDPR, consumer-protection law), which often push compliance responsibility to the deploying institution rather than the AI provider.

Implications: As agentic e-commerce grows, banks will need to authenticate not only human customers but the AI agents acting in their name, potentially via agent credential standards or wallet-bound agent identities. Payment service providers will need policies allocating responsibility among the customer, the AI provider, the issuing bank, and the acquiring bank for each transaction type. This is an emerging area where regulatory frameworks have not yet caught up with production deployments.
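The consent-and-limits model described above can be pictured as an issuer-side pre-authorisation check. A hypothetical sketch - the `AgentMandate` fields and override semantics are assumptions for illustration, not the Agent Pay specification:

```python
from dataclasses import dataclass

@dataclass
class AgentMandate:
    """Customer-approved parameters an AI agent must act within (illustrative)."""
    agent_id: str
    allowed_merchants: set[str]
    per_txn_limit: float
    daily_limit: float

def authorize(mandate: AgentMandate, merchant: str, amount: float,
              spent_today: float, bank_override: bool = False) -> tuple[bool, str]:
    """Issuer-side check before an agent-initiated payment is approved.

    bank_override models the issuing bank's authority to block any
    transaction regardless of the customer's mandate.
    """
    if bank_override:
        return False, "blocked by issuing bank override"
    if merchant not in mandate.allowed_merchants:
        return False, "merchant outside customer-approved list"
    if amount > mandate.per_txn_limit:
        return False, "exceeds per-transaction limit"
    if spent_today + amount > mandate.daily_limit:
        return False, "exceeds daily limit"
    return True, "approved"
```

The returned reason string matters as much as the decision: it is the audit record that lets liability be allocated after the fact among customer, AI provider, and issuing bank.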

What Changed: Banking Regulators Require AI Model Inventory Integration

Medium

Risk: Governance | Affected: Banks, prudential-regulated firms | Horizon: 3-6 months | Confidence: High

Facts: Banks must now fold AI - including agentic and generative systems - into their model inventories, with model risk tiers, validation standards, challenge functions, and ongoing monitoring. Supervisors expect human-in-the-loop oversight, bias management, and clear alignment of AI use cases with existing risk-appetite frameworks. Supervisory systems must also capture communications and decisions generated or influenced by AI, including internal copilot tools, and preserve records consistent with existing record-keeping requirements.

Implications: The model inventory requirement is the most operationally significant near-term obligation. Many institutions have AI tools deployed across front, middle, and back office functions that are not captured in existing model inventories - particularly generative AI assistants and internal copilot tools. The record-keeping obligation for AI-influenced decisions adds a new dimension to compliance technology requirements. Institutions that have not already begun comprehensive AI use-case inventories face a growing gap between their actual AI footprint and their documented governance.

What Changed: Financial AI Adoption Reaches Operational Scale

Low

Risk: Strategic | Affected: All financial institutions | Horizon: Ongoing | Confidence: Medium

Facts: Survey data show that a majority of financial institutions now have AI/ML in production or pilots for fraud and risk management, though data infrastructure and talent remain constraints. Fraud and AML teams are increasingly expected to operate unified, AI-enabled detection stacks rather than siloed fraud versus AML systems, with model documentation, explainability, and cross-functional integration as exam-ready requirements.

Implications: AI in financial services has crossed from pilot phase to operational infrastructure. The transition creates a new baseline: institutions without production AI in fraud detection, transaction monitoring, and risk management are increasingly outliers rather than the norm. This shift changes the supervisory expectation - examiners are no longer asking whether firms use AI, but whether their AI governance, validation, and monitoring frameworks are adequate for production-scale deployments.


Risk Impact Matrix

Jur. | Development | Risk Category | Severity | Affected | Timeline
US | Treasury GENIUS Act AI/AML Report | Compliance | Critical | Banks, DASPs, stablecoin issuers | Immediate
SG | MAS Board-Level AI Governance | Governance | Critical | Singapore-licensed FIs | 6-12 months
US | SEC AI Exam Priority + AI-Washing | Regulatory | High | RIAs, broker-dealers, advisers | Immediate
US | Federal AI Preemption + FTC Reset | Legal/Compliance | High | Multi-state FIs, fintechs | 3-6 months
EU | AI Act High-Risk Financial Classification | Compliance | High | EU-licensed banks, fintechs | By Aug 2026
UK | FCA AI via SM&CR Framework | Governance | High | UK-authorised firms, senior managers | Immediate
GLOBAL | AI-Enabled Fraud MRM Overhaul | Operational | High | Banks, payment firms | Immediate
US | SEC-CFTC MOU AI Implications | Regulatory | Medium | Digital asset platforms, AI trading | 6-12 months
US | FactSet AI Financial Crime Tools | Operational | Medium | Banks, KYC/AML operations | Immediate
GLOBAL | AI-in-AML Playbook | Compliance | Medium | AML teams, compliance officers | 3-6 months
US/EU | AI Credit Scoring Dual Compliance | Compliance | Medium | Lenders, credit fintechs | By Aug 2026
EU | Vendor AI Accountability | Governance | Medium | Banks using third-party AI | Immediate
GLOBAL | Agentic Payment Liability | Legal/Compliance | Medium | Payment providers, banks | 6-12 months
GLOBAL | AI Model Inventory Requirements | Governance | Medium | All prudential-regulated banks | 3-6 months
GLOBAL | Financial AI Adoption at Scale | Strategic | Low | All financial institutions | Ongoing

Cross-Signal Patterns

Pattern: AI Governance Convergence Across Three Regulatory Architectures

Linked Signals: MAS AI Governance, EU AI Act High-Risk, FCA SM&CR AI, SEC AI Exam Priority

What it means: Four major jurisdictions are converging on the same outcome - AI as a governed, auditable, board-accountable compliance obligation - but through three distinct architectural approaches. The EU uses product classification (AI Act risk tiers), the UK uses personal accountability (SM&CR), and the US uses examination priority (SEC/FINRA). MAS bridges all three with its inventory-and-classify approach. Global institutions must build AI governance frameworks flexible enough to satisfy all three models simultaneously.

Confidence: High

Pattern: AI as Regulatory Infrastructure - From Optional to Expected

Linked Signals: Treasury GENIUS AI/AML Report, FactSet AI Financial Crime, AI-in-AML Playbook, AI Adoption at Scale

What it means: The Treasury's GENIUS Act report endorsing AI for AML/CFT compliance, combined with FactSet embedding AI financial crime tools directly in institutional workflow and the AI-in-AML playbook mapping practical implementation, signals that AI is transitioning from a competitive advantage to a supervisory expectation. Institutions without AI-enhanced compliance tools are moving from being cautious adopters to being under-invested outliers in the eyes of examiners.

Confidence: High

Pattern: Accountability Cannot Be Outsourced - Vendor AI as Institutional Risk

Linked Signals: Vendor AI Accountability, FactSet AI Financial Crime, AI Model Inventory Requirements

What it means: As vendor-provided AI tools like FactSet's financial crime integration enter institutional workflow, the regulatory consensus is clear: deploying institutions retain full accountability regardless of who built the model. This creates a new vendor management burden - third-party risk teams must add AI-specific validation, bias testing, and performance monitoring to vendor assessment frameworks. The model inventory requirement means even off-the-shelf AI tools must be documented and governed as if built in-house.

Confidence: High

Pattern: AI as Both Shield and Sword - Fraud Detection Meets AI-Generated Fraud

Linked Signals: AI-Enabled Fraud MRM Overhaul, AI-in-AML Playbook, Treasury GENIUS AI/AML Report

What it means: Financial institutions are simultaneously deploying AI to detect fraud and defending against AI-generated fraud attacks. Deepfake onboarding, voice-cloned social engineering, and AI-generated documents are creating an arms race where the same technology powers both attack and defence. This dual dynamic requires institutions to continuously update their fraud detection models and validation procedures - static model risk management frameworks are insufficient for AI-versus-AI threat environments.

Confidence: Medium

Strategic Implications

1. Build a Unified AI Governance Framework That Spans Three Regulatory Architectures

Institutions operating across the US, EU, UK, and Singapore now face three distinct AI governance models: product classification (EU AI Act), personal accountability (UK SM&CR), and examination-based scrutiny (US SEC/FINRA). The most efficient approach is building to the highest common denominator - an AI governance framework that includes use-case inventories (MAS), risk-tier classification (EU), named senior manager accountability (UK), and examination-ready documentation (US). [Traced to: MAS AI Governance, EU AI Act High-Risk, FCA SM&CR AI, SEC AI Exam Priority]

2. Treat AI Investment in AML/CFT as a Supervisory Expectation, Not a Competitive Choice

The Treasury GENIUS Act report, combined with the AI-in-AML playbook and FactSet's product launch, signals that AI-enhanced compliance is becoming a baseline supervisory expectation. BSA/AML examination programs will increasingly reference these materials when evaluating institutional controls. Compliance leaders should frame AI investment proposals not as innovation initiatives but as regulatory risk mitigation - aligning budget requests with the supervisory expectation trajectory. [Traced to: Treasury GENIUS AI/AML Report, AI-in-AML Playbook, FactSet AI Financial Crime, AI Adoption at Scale]

3. Expand Third-Party Risk Management to Cover Vendor AI Models

The vendor accountability principle means that every vendor-provided AI tool - from FactSet's financial crime integration to chatbot providers to credit scoring vendors - must be treated as an in-house model for governance purposes. Third-party risk management programmes need AI-specific assessment criteria: model validation, bias testing, performance monitoring, and training data provenance. Contracts must allocate ongoing governance responsibilities, not just initial deployment terms. [Traced to: Vendor AI Accountability, FactSet AI Financial Crime, AI Model Inventory Requirements]

4. Prepare for Federal-State AI Compliance Uncertainty

The Commerce Department's expected preemption push and the FTC's enforcement pivot create a period of regulatory uncertainty for multi-state financial institutions. The pragmatic approach is to build compliance for the most stringent state requirements (Colorado, California) while maintaining the flexibility to scale back if federal preemption materialises. Do not defer AI governance investment based on the expectation of federal relief - the SEC examination priority operates independently of the preemption debate. [Traced to: Federal AI Preemption + FTC Reset, SEC AI Exam Priority]

5. Add AI-Generated Fraud to Model Risk Management Programmes as a Distinct Category

Deepfake onboarding, voice-cloned attacks, and AI-generated documents require dedicated detection models, validation procedures, and board-level training. Static fraud detection frameworks that were designed for traditional attack vectors are insufficient. Institutions should inventory their fraud detection models, test them specifically against AI-generated inputs, and document validation procedures that examiners can review. This is a governance obligation, not just a technology challenge. [Traced to: AI-Enabled Fraud MRM Overhaul, AI-in-AML Playbook, Treasury GENIUS AI/AML Report]

Sources

  1. US Treasury GENIUS Act Section 9(a) Report to Congress, March 2026
  2. SEC Future Proof Citywide Conference - AI Examination Priorities, March 10, 2026
  3. NYSBA - Regulating AI Deception in Financial Markets
  4. FactSet - AI-Driven Financial Crime Risk Management Launch, March 3, 2026
  5. Oscilar - AI in AML: Practical Implementation Guide, March 2026
  6. MAS - AI Risk Governance Requirements for Financial Institutions
  7. EU AI Act - High-Risk AI System Requirements (Regulation 2024/1689)
  8. FCA - SM&CR and AI Accountability in Financial Services
  9. SEC-CFTC MOU on Digital Asset Market Harmonisation, March 11, 2026
  10. US Commerce Department - State AI Law Preemption Report (forthcoming)
  11. UAE Ministry of Finance - Ministerial Decision 336/2025
  12. Treasury 2026 National Risk Assessments for ML/TF/PF


MCMS Brief • Classification: Public • Sector: Digital Assets • Region: Global

Disclaimer: This content is for educational and informational purposes only. It is NOT financial, investment, or legal advice. Cryptocurrency investments carry significant risk. Always consult qualified professionals before making any investment decisions. Make Crypto Make Sense assumes no liability for any financial losses resulting from the use of this information.