Features
Impact assessments
Pre-deployment Consumer Duty evidence for every AI use case — signed off by a named senior and anchored to the immutable ledger.
Before a new AI use case starts producing customer-facing advice, the firm files an impact assessment against Bedrock's Consumer Duty template. The assessment walks each of the four PRIN 2A outcomes — products & services, price & value, consumer understanding, and consumer support — and records a risk rating and a narrative per outcome. A named senior (lead reviewer or firm admin) then approves or rejects the draft. The approval snapshot is hashed and anchored to the immutable ledger as the IMPACT_ASSESSMENT_APPROVED event, so the firm can evidence to the FCA that the use case was scrutinised before it went live.
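To make the "hashed snapshot" step concrete, here is a minimal Python sketch of hashing an approval snapshot over its canonical JSON. It assumes canonicalisation means sorted keys with compact separators and SHA-256 as the digest; neither detail is pinned down here, so treat both as assumptions rather than Bedrock's actual scheme.

```python
import hashlib
import json


def snapshot_hash(snapshot: dict) -> str:
    """Hash an approval snapshot over an assumed canonical JSON form:
    sorted keys, compact separators, SHA-256 digest."""
    canonical = json.dumps(snapshot, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


# Key order does not affect the digest, so the anchored hash is stable
# regardless of how the snapshot dict was assembled.
a = snapshot_hash({"useCase": "GPT-4 fact-find summariser", "riskRating": "LOW"})
b = snapshot_hash({"riskRating": "LOW", "useCase": "GPT-4 fact-find summariser"})
assert a == b
```

The point of canonicalisation is exactly this property: two semantically identical snapshots always produce the same digest, so the ledger anchor can be re-verified later from the stored record.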
What an impact assessment contains
- Use case — a short title (e.g. "GPT-4 fact-find summariser").
- Description — where in the workflow the AI runs and what it does.
- Model provider & version — optional, but required if you want the firm's enforcement gate (see below) to tie jobs back to this assessment.
- Four outcome responses, one per PRIN 2A outcome:
  - `products_services` — target market fit, product governance alignment, guardrails against mis-alignment.
  - `price_value` — whether the AI element changes the cost structure or the fair-value assessment.
  - `consumer_understanding` — whether clients are told about the AI, and how its role is explained.
  - `consumer_support` — escalation paths, additional support for vulnerable customers, post-deployment monitoring.
- Risk rating — `LOW`, `MEDIUM`, or `HIGH` (recorded per outcome).
- Sign-off — named approver (must be `LEAD_REVIEWER` or `FIRM_ADMIN`), role, and timestamp. The approval event is signed with the firm's ECDSA P-256 ledger key.
Lifecycle
Assessments move through an explicit status graph:
- `DRAFT` — author drafts the content, can edit freely.
- `PENDING_SIGNOFF` — submitted for senior review. Still editable.
- `APPROVED` — senior has signed off. Anchored to the ledger. Immutable from this point — further edits require a new draft.
- `REJECTED` — sent back to draft.
- `SUPERSEDED` — explicitly retired by a newer assessment covering the same use case.
Invalid transitions (e.g. `APPROVED` → `DRAFT`) return `409 Conflict`. Senior sign-off is always performed by an authenticated user — API keys cannot approve assessments.
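The status graph can be sketched as a small transition table. This is illustrative only: the state names and the two rules called out above (approval is terminal except for supersession, and rejection returns the draft for editing) come from the docs, but the exact set of legal edges is an assumption.

```python
# Assumed transition edges for the assessment lifecycle; state names
# match the docs, the precise edge set is inferred from the prose.
ALLOWED = {
    "DRAFT": {"PENDING_SIGNOFF"},
    "PENDING_SIGNOFF": {"APPROVED", "REJECTED"},
    "REJECTED": {"DRAFT"},            # sent back to draft for editing
    "APPROVED": {"SUPERSEDED"},       # immutable; only a newer assessment retires it
    "SUPERSEDED": set(),              # terminal
}


def transition(current: str, target: str) -> int:
    """Return the HTTP status the API would answer: 200 for a legal
    move, 409 Conflict for an invalid one."""
    return 200 if target in ALLOWED.get(current, set()) else 409


assert transition("PENDING_SIGNOFF", "APPROVED") == 200
assert transition("APPROVED", "DRAFT") == 409  # approved content is immutable
```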
Enforcement on job submission
Firms have a boolean setting `enforceImpactAssessments` (defaults to `true`). When it's on, any job submitted to `POST /v1/principal/jobs` that declares a `modelProvider` and `modelVersion` must have at least one `APPROVED` impact assessment with the same provider/version on file. If there isn't one, the submission returns:
409 Conflict
{
"error": "IMPACT_ASSESSMENT_REQUIRED",
"message": "No approved impact assessment for openai gpt-4o-2024-08-06. …",
"details": {
"modelProvider": "openai",
"modelVersion": "gpt-4o-2024-08-06"
}
}

Jobs that don't declare a model are allowed through regardless — there's nothing to match against. Firms that want to disable the gate (e.g. during an initial backfill window) can:
PATCH /v1/firm/me/settings
{ "enforceImpactAssessments": false }

Toggling the gate is itself a compliance-relevant change, so the PATCH emits a FIRM_SETTINGS_UPDATED ledger event carrying the field-level diff and the authenticated actor (user or API key). An auditor reviewing jobs submitted during a gate-off window can follow this trail to see exactly when enforcement was disabled, by whom, and when it came back on.
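The enforcement check described above reduces to a simple match. Here is a hedged Python sketch: the field names mirror the request and error payloads shown earlier, but the record shapes and the `check_job` helper itself are illustrative, not Bedrock's implementation.

```python
# Illustrative gate check. Assessments are assumed to be dicts with
# "status", "modelProvider", and "modelVersion" fields (assumed shape).
def check_job(job: dict, assessments: list[dict], enforce: bool = True):
    """Return None if the job may proceed, else the 409 error body."""
    provider = job.get("modelProvider")
    version = job.get("modelVersion")
    # Gate off, or no model declared: nothing to match against.
    if not enforce or provider is None or version is None:
        return None
    for a in assessments:
        if (a["status"] == "APPROVED"
                and a["modelProvider"] == provider
                and a["modelVersion"] == version):
            return None
    return {
        "error": "IMPACT_ASSESSMENT_REQUIRED",
        "details": {"modelProvider": provider, "modelVersion": version},
    }
```

Note the two pass-through paths: a job with no declared model always clears the gate, and flipping `enforceImpactAssessments` off clears everything, which is why that toggle is itself ledgered.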
Evidence produced
- `IMPACT_ASSESSMENT_APPROVED` ledger event on senior sign-off, containing a hashed snapshot of the approved outcomes, template version, model provider/version, and signer identity.
- `FIRM_SETTINGS_UPDATED` ledger event whenever `enforceImpactAssessments` is toggled, carrying the field-level diff and the actor.
- Both events are signed with the firm's ECDSA P-256 ledger key over the canonical JSON of the snapshot — verifiable via `POST /v1/internal/verify-signature`.
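A field-level diff of the kind carried by the settings event can be computed along these lines. This is a sketch under assumptions: the `{"from": ..., "to": ...}` shape is hypothetical, chosen only to illustrate what "field-level" means.

```python
def settings_diff(before: dict, after: dict) -> dict:
    """Map each changed field to its old and new value.
    The {"from": ..., "to": ...} shape is assumed, not Bedrock's schema."""
    return {
        key: {"from": before.get(key), "to": after.get(key)}
        for key in set(before) | set(after)
        if before.get(key) != after.get(key)
    }


# Toggling the gate yields a one-field diff; unchanged fields are omitted.
diff = settings_diff(
    {"enforceImpactAssessments": True},
    {"enforceImpactAssessments": False},
)
assert diff == {"enforceImpactAssessments": {"from": True, "to": False}}
```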
FCA mapping
- PRIN 2A — the four Consumer Duty outcomes the template explicitly covers.
- SYSC 7.1 — risk control and governance over automated decisioning.