Features

AI context

Track how inputs affect outputs. Every piece of AI-assisted advice submitted to Bedrock can carry a structured aiContext payload that captures the full lineage — which model, what it saw, what it recommended, why, and what guardrails fired.

Why it matters

The FCA expects firms to explain why a particular piece of advice was given, not just what was advised. When a regulator asks “how did the AI reach this recommendation?”, the answer needs to be specific, reproducible, and tamper-evident.

aiContext captures that answer at the point of advice generation and hashes it into the immutable ledger record alongside the document. The input-to-output mapping becomes independently verifiable — the same chain integrity that protects the ledger also protects the AI lineage.

Shape

Pass aiContext as an optional field on job submission. Only model (with provider and version) is required. Everything else is optional — include what your pipeline has available.

{
  "aiContext": {
    "model": {
      "provider": "openai",
      "version": "gpt-4o-2024-08-06"
    },
    "inputs": {
      "riskProfile": "Balanced",
      "investmentHorizon": "10y",
      "existingHoldings": ["VWRL", "VGOV"]
    },
    "outputs": {
      "recommendation": "Rebalance to 60/40 equity/bond split",
      "allocations": { "equities": 0.6, "bonds": 0.4 }
    },
    "factors": [
      { "input": "riskProfile", "influence": 0.42, "direction": "positive" },
      { "input": "investmentHorizon", "influence": 0.31, "direction": "positive" },
      { "input": "existingHoldings", "influence": 0.27, "direction": "neutral" }
    ],
    "confidence": 0.87,
    "guardrails": [
      { "rule": "MAX_EQUITY_ALLOCATION", "triggered": false },
      { "rule": "VULNERABLE_CLIENT_CHECK", "triggered": true, "action": "flagged_for_review" }
    ]
  }
}
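A submission carrying this payload can be sketched in Python. The endpoint path comes from the flow described later in this page; the validation rule (only model.provider and model.version are required) comes from the shape above. The document reference, base URL, and auth headers are placeholders for whatever your integration uses, not part of Bedrock's documented API.

```python
import json

def build_job_submission(document_ref, ai_context):
    """Assemble a job submission body carrying aiContext.

    Only aiContext.model (with provider and version) is required;
    every other aiContext field is optional.
    """
    model = ai_context.get("model")
    if not model or not model.get("provider") or not model.get("version"):
        raise ValueError("aiContext.model needs provider and version")
    return {
        "document": document_ref,  # placeholder for your document payload
        "aiContext": ai_context,
    }

payload = build_job_submission(
    "doc-123",
    {
        "model": {"provider": "openai", "version": "gpt-4o-2024-08-06"},
        "confidence": 0.87,
    },
)
body = json.dumps(payload)
# POST the body to /v1/principal/jobs with your firm's credentials, e.g.:
# requests.post(f"{BASE_URL}/v1/principal/jobs", data=body, headers=auth_headers)
```

The network call is left as a comment so the shape of the body stays the focus; any HTTP client works.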

Fields

model (required)

Identifies the model that produced the advice. provider is the vendor (e.g. openai, anthropic, internal). version is the specific model version — pin the exact version rather than a moving alias so drift detection is meaningful.
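Because pinned versions are what make drift detection meaningful, a pipeline might refuse moving aliases before submission. A minimal sketch of such a pre-flight check — the alias list here is illustrative, not exhaustive, and this helper is hypothetical rather than part of Bedrock:

```python
# Aliases that resolve to different snapshots over time (illustrative examples).
MOVING_ALIASES = {"gpt-4o", "gpt-4o-latest", "claude-sonnet-latest"}

def assert_pinned(model):
    """Reject model versions that are moving aliases rather than pinned snapshots."""
    if model["version"] in MOVING_ALIASES:
        raise ValueError(
            f"{model['version']} is a moving alias; pin an exact version"
        )
    return model

assert_pinned({"provider": "openai", "version": "gpt-4o-2024-08-06"})  # passes
```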

inputs

Free-form object capturing the key inputs the model received. Structure it however fits your pipeline — Bedrock stores it verbatim. Common fields: risk profile, investment horizon, financial goals, existing holdings, market conditions.

outputs

Free-form object capturing the model's recommendation. Common fields: allocation percentages, recommended actions, risk warnings, generated text.

factors

An array of entries describing which inputs most affected the output. Each entry has an input name, a relative (not normalised) influence weight, and an optional direction (positive, negative, or neutral). Bedrock aggregates these across submissions for the bias monitoring and drift detection dashboards.
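Bedrock's exact aggregation isn't specified here, but one simple reading — the mean influence weight per input across submissions — can be sketched as follows. The aggregation rule is an assumption for illustration only:

```python
from collections import defaultdict

def mean_influence(submissions):
    """Average each input's relative influence weight across submissions."""
    totals, counts = defaultdict(float), defaultdict(int)
    for factors in submissions:
        for f in factors:
            totals[f["input"]] += f["influence"]
            counts[f["input"]] += 1
    return {name: totals[name] / counts[name] for name in totals}

submissions = [
    [{"input": "riskProfile", "influence": 0.42, "direction": "positive"},
     {"input": "investmentHorizon", "influence": 0.31, "direction": "positive"}],
    [{"input": "riskProfile", "influence": 0.38, "direction": "positive"}],
]
averages = mean_influence(submissions)  # riskProfile averages around 0.40
```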

confidence

A number between 0 and 1 representing the model's self-reported confidence in the recommendation. Bedrock tracks confidence trends over time — a sustained decline may indicate model degradation.
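One way such a decline could be surfaced — a sketch, not Bedrock's actual algorithm — is to compare a recent rolling mean of confidence against an earlier baseline window; the window size and threshold here are arbitrary:

```python
def confidence_declining(history, window=5, drop=0.05):
    """Flag a sustained decline: recent mean more than `drop` below the earlier mean."""
    if len(history) < 2 * window:
        return False  # not enough data to compare two windows
    earlier = sum(history[:window]) / window
    recent = sum(history[-window:]) / window
    return earlier - recent > drop

history = [0.90, 0.89, 0.91, 0.88, 0.90, 0.82, 0.80, 0.81, 0.79, 0.80]
```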

guardrails

An array of regulatory or firm-specific rules evaluated during advice generation. Each entry has a rule name, a triggered boolean, and an optional action describing what happened when the guardrail fired (e.g. flagged_for_review, blocked, adjusted_allocation).
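A pipeline might also extract the guardrails that fired so they can be highlighted to the reviewer. A hypothetical helper over the structure above:

```python
def triggered_rules(guardrails):
    """Return (rule, action) pairs for guardrails that fired."""
    return [(g["rule"], g.get("action")) for g in guardrails if g["triggered"]]

guardrails = [
    {"rule": "MAX_EQUITY_ALLOCATION", "triggered": False},
    {"rule": "VULNERABLE_CLIENT_CHECK", "triggered": True,
     "action": "flagged_for_review"},
]
fired = triggered_rules(guardrails)
```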

How it flows through the system

  1. Firm submits a job via POST /v1/principal/jobs with aiContext in the request body.
  2. The model identity is checked against the impact assessment gate (if enabled).
  3. aiContext is stored as JSON on the review job and included in the documentMetadata of the DOCUMENT_SUBMITTED ledger record, where it is canonicalised, hashed, and signed.
  4. The reviewer sees the AI context in the fact-find panel alongside the document, including which inputs drove the recommendation and which guardrails fired.
  5. Governance dashboards aggregate factors and confidence across submissions for drift detection and bias monitoring.
  6. On completion, the ledger record and certificate contain the full AI lineage — immutable, independently verifiable, and ready for regulatory review.
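The canonicalise-and-hash part of step 3 can be sketched in miniature. Bedrock's actual canonicalisation scheme isn't documented on this page, so sorted-key compact JSON with SHA-256 stands in as an illustrative choice, and signing is omitted:

```python
import hashlib
import json

def canonical_hash(ai_context):
    """Hash a canonical (sorted-key, compact) JSON rendering of aiContext.

    An auditor can later re-derive the same bytes from the stored payload
    and compare the digest against the ledger record.
    """
    canonical = json.dumps(ai_context, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Key order in the source payload does not change the digest:
a = canonical_hash({"model": {"provider": "openai", "version": "gpt-4o-2024-08-06"}})
b = canonical_hash({"model": {"version": "gpt-4o-2024-08-06", "provider": "openai"}})
```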

Backwards compatibility

aiContext is optional. Jobs submitted without it work exactly as before — the field is null on the review job and omitted from the ledger record metadata. Firms can adopt it incrementally as their AI pipelines mature.
