The governance layer
What sits between your AI and your customers — the eleven capabilities Bedrock layers on top of the immutable ledger.
The ledger and certificate system tell you what happened. The governance layer is the set of capabilities that make sure the right things keep happening — that AI models are still calibrated, that vulnerable customers are routed to the right reviewers, that incidents get triaged, that bias doesn't creep in unnoticed, that required checklists are completed. It's where Bedrock crosses from “system of record” into “system of control.”
The eleven capabilities
- Model registry — every AI model your firm uses, registered, versioned, and named
- Drift detection — automatic alerts when a model starts behaving differently
- Bias monitoring — protected-characteristic outcome comparisons
- Vulnerability routing — Consumer Duty triggers send the right cases to the right humans
- Incident response — structured handling for things that go wrong
- Impact assessments — Consumer Duty outcome assessments, signed off before any AI use case goes live
- Explainability — capture the rationale, not just the output
- Chain integrity — continuous proof that the ledger hasn't been tampered with
- Checklists — reviewer decisions gated on checklist completion
- SLA enforcement — turnaround time guarantees with breach events
- Certificates — externally verifiable proof of every decision
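The chain-integrity capability above rests on a standard technique: each ledger entry stores a hash of the entry before it, so any retroactive edit breaks every later link. Bedrock's actual ledger format isn't shown here; the following is a minimal sketch of the idea, with illustrative field names:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Hash the canonical JSON form of an entry (sorted keys for determinism).
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def verify_chain(entries: list[dict]) -> bool:
    # Each entry must reference the hash of the one before it; editing a past
    # entry changes its hash and invalidates every subsequent link.
    for prev, curr in zip(entries, entries[1:]):
        if curr["prev_hash"] != entry_hash(prev):
            return False
    return True

# A tiny two-entry chain: intact at first, broken after tampering.
e1 = {"event": "MODEL_DRIFT_DETECTED", "prev_hash": None}
e2 = {"event": "SLA_BREACHED", "prev_hash": entry_hash(e1)}
assert verify_chain([e1, e2])

e1["event"] = "SOMETHING_ELSE"   # tamper with history
assert not verify_chain([e1, e2])
```

Continuous integrity checking is just `verify_chain` run on a schedule over the whole ledger, so tampering surfaces as a detectable break rather than a silent rewrite.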
How they fit together
Each capability writes evidence to the ledger. Drift detection writes a MODEL_DRIFT_DETECTED event. Vulnerability routing writes a VULNERABILITY_FLAGGED event. SLA breaches write SLA_BREACHED. Every capability produces something a regulator or principal can ask about and receive a verifiable answer.
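The event type names above come from the text; the record shape below is an illustrative assumption, not Bedrock's actual schema. A sketch of what a typed ledger event might carry:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LedgerEvent:
    # event_type values such as MODEL_DRIFT_DETECTED are from the docs;
    # all field names here are hypothetical.
    event_type: str
    subject: str           # the model, case, or SLA the event concerns
    detail: dict = field(default_factory=dict)
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: a drift alert for a hypothetical model name.
drift = LedgerEvent(
    event_type="MODEL_DRIFT_DETECTED",
    subject="affordability-model-v3",
    detail={"metric": "PSI", "value": 0.31, "threshold": 0.25},
)
```

The point of a structured record like this is the last sentence above: because every event is typed and attributed to a subject, a query such as "show me every drift alert for this model" has a precise, verifiable answer.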
Mapping to FCA rules
Each capability maps to a specific section of the FCA Handbook — Principles 6 and 12, Consumer Duty (PRIN 2A), SYSC 8, SYSC 22, COBS, and others. See the full mapping in FCA Handbook mapping.