Features
Bias monitoring
Outcome comparisons across protected characteristics, so you can prove your AI is treating customers fairly — or catch it when it isn't.
Bias monitoring is the capability that compares outcomes across protected characteristics — age band, sex, ethnicity (where collected), disability status, vulnerability flag — and flags any group that is being approved, modified or rejected at a materially different rate from the firm baseline. It is the practical mechanism for meeting Consumer Duty's obligation to monitor for differential outcomes.
How it works
Each review job may carry an optional customerSegment object in its metadata. Bedrock aggregates outcomes by segment over a rolling 90-day window and compares each segment's rates against the firm baseline. Any segment whose rejection or modification rate diverges by more than 5 percentage points (warning) or 10 percentage points (alert) raises a BIAS_SIGNAL_DETECTED ledger event.
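The comparison above can be sketched as follows. This is an illustrative reimplementation, not Bedrock's actual code: the job dict shape, the field names (`outcome`, `customerSegment`), and the `bias_signals` function are assumptions; only the 5/10 percentage-point thresholds and the segment-vs-baseline comparison come from the description.

```python
from collections import Counter

WARNING_PP = 5.0   # divergence above this raises a warning
ALERT_PP = 10.0    # divergence above this raises an alert

def rate(counts, outcome):
    """Share of jobs (as a percentage) with the given outcome."""
    total = sum(counts.values())
    return 100.0 * counts.get(outcome, 0) / total if total else 0.0

def bias_signals(jobs, outcome="rejected"):
    """Compare each segment's outcome rate against the firm baseline.

    `jobs` is the rolling 90-day window: a list of dicts, each with an
    'outcome' and a 'customerSegment' label (illustrative schema).
    Returns (segment, level, divergence) tuples; each would correspond
    to a BIAS_SIGNAL_DETECTED ledger event.
    """
    baseline = Counter(j["outcome"] for j in jobs)
    signals = []
    for segment in sorted({j["customerSegment"] for j in jobs}):
        seg = Counter(j["outcome"] for j in jobs
                      if j["customerSegment"] == segment)
        divergence = abs(rate(seg, outcome) - rate(baseline, outcome))
        if divergence > ALERT_PP:
            signals.append((segment, "alert", divergence))
        elif divergence > WARNING_PP:
            signals.append((segment, "warning", divergence))
    return signals
```

For example, if segment A's rejection rate over the window is 30% and segment B's is 10%, the firm baseline is 20%, so both segments diverge by exactly 10 percentage points and raise warnings rather than alerts.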
Evidence produced
- Rolling outcome rates per segment
- BIAS_SIGNAL_DETECTED ledger events
- Quarterly bias report (PDF) for board packs
FCA mapping
- PRIN 2A.4 — Consumer Duty “price and value” outcome
- PRIN 2A.6 — Consumer Duty “consumer support” outcome
- Equality Act 2010 (s.20 reasonable adjustments)