1. Executive stance
Board approved
RiskMetrica will not deploy AI that is unsafe, unlawful, or likely to undermine customer, employee, or societal welfare. AI augments decision making; it does not remove accountability. We embed controls that ensure lawful basis, clear purpose, proportional data use, measurable fairness, robust security, and auditable outcomes across our platform, including RAC, Balance, Scenario and Wargaming, Culture, Delegations and Guardrails, and Reporting.
2. Principles
Design standard
- Legality and purpose limitation with documented use cases.
- Human accountability with decision rights and audit logs.
- Safety and security by design with threat modelling and hardening.
- Privacy by design with data minimisation and retention controls.
- Fairness with measurable disparity limits and bias mitigation.
- Transparency with documentation, model cards, and user disclosures.
- Explainability proportionate to risk and context.
- Robustness with validation, stress, and red teaming.
- Controllability with rollback, kill switch, and change control.
- Monitoring with metrics, alerts, and thresholded responses.
- Proportionality and stewardship, including cultural impacts.
- Continuous improvement with post incident reviews.
3. Governance model
Lifecycle controls
Roles and gates
- Product Owner sets lawful purpose, scope, and success criteria.
- Data Steward signs data protection impact assessment where required.
- Model Owner maintains model card, versioning, and change log.
- Independent Validator performs testing and red teaming.
- Risk and Compliance grant deployment approval at a formal gate.
Documentation set
- Use case register and risk classification.
- Data sheet and privacy assessment with retention schedule.
- Model card, training manifest, and evaluation report.
- Security hardening checklist and threat model summary.
- Monitoring plan with metrics and thresholds.
4. Model risk controls
High impact focus
- Independent validation and challenge.
- Adversarial testing and prompt red teaming for LLMs.
- Stress and scenario testing aligned to Balance Module.
- Kill switch and controlled rollback with audit.
- Monitoring with drift, bias, and toxicity indicators.
- Guardrails, rate limits, and output filtering.
- Change control and versioned deployments.
- Periodic revalidation and expiry dates.
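As an illustrative sketch only, the kill switch, controlled rollback, and versioned deployment controls above might be combined in a small registry like the following. The `DeploymentControl` class and its method names are hypothetical, not part of the RiskMetrica platform.

```python
class DeploymentControl:
    """Hypothetical sketch of versioned deployment with kill switch
    and controlled rollback; names are illustrative only."""

    def __init__(self):
        self.history = []     # versioned deployments, kept as an audit trail
        self.enabled = True   # flipped off by the kill switch

    def deploy(self, version):
        """Record a versioned deployment (change control gate assumed)."""
        self.history.append(version)

    def current(self):
        """Version currently serving, or None if disabled/empty."""
        return self.history[-1] if self.enabled and self.history else None

    def kill(self):
        """Kill switch: stop serving output immediately."""
        self.enabled = False

    def rollback(self):
        """Controlled rollback to the previous deployed version."""
        if len(self.history) > 1:
            self.history.pop()
```

In this sketch the full deployment history is retained even after rollback candidates are popped from service, so the audit question "what was live when" stays answerable from logs rather than from this in-memory list alone.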
5. Data and privacy
Privacy by design
Data is collected and processed on a lawful basis, minimised for purpose, protected in transit and at rest, and retained only as necessary.
Lawful basis
Documented purpose, DPIA where required, privacy notices for users, and contractual controls with processors.
Minimisation
Prefer synthetic or anonymised data, strict role based access, and retention schedules with deletion workflows.
Security
ISO 27001 and SOC 2 aligned controls, encryption, logging, secret management, and third party risk reviews.
6. Fairness and bias
Measurable
We set disparity limits and monitor fairness metrics that are proportionate to context.
- Pre deployment fairness tests with representative data.
- Operational thresholds on disparity with alerts and response playbooks.
- Bias remediation strategies with monitored outcomes.
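A minimal sketch of one such operational threshold, assuming demographic parity difference as the fairness metric. The 0.10 disparity limit and the function names are illustrative values chosen for the example, not RiskMetrica standards.

```python
from collections import defaultdict

# Hypothetical example threshold, not a policy value.
DISPARITY_LIMIT = 0.10

def selection_rates(outcomes):
    """outcomes: iterable of (group, approved: bool) pairs.
    Returns the approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparity_alert(outcomes):
    """Return (max_gap, breached): the largest gap in selection
    rates between any two groups, and whether it exceeds the limit."""
    rates = selection_rates(outcomes)
    gap = max(rates.values()) - min(rates.values())
    return gap, gap > DISPARITY_LIMIT
```

A breach would feed the alert-and-response playbooks above; which fairness metric is appropriate (parity, equalised odds, calibration) depends on the use case and is a governance decision, not a coding one.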
7. Transparency and explainability
Proportionate
- Model cards with purpose, data, metrics, and limitations.
- Appropriate explainability for user context and risk level.
- Traceable lineage from data to decision with audit logs.
8. Human oversight
Accountability
Human decision makers remain accountable. Controls vary by impact level.
- Review and approval steps for high impact uses.
- Override capability and fallbacks to manual processing.
- Training for users on limitations and proper use.
9. Monitoring and assurance
Continuous
- Production metrics with drift and performance thresholds.
- Security monitoring and vulnerability management.
- Assurance plan with internal audit participation.
- Incident response with containment, notification, and PIR.
- Rollback to last known good artefacts where needed.
- Red team exercises, including adversarial content for LLMs.
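One way the drift thresholds above could be expressed, assuming a population stability index (PSI) over feature values already bucketed into the same bins for the baseline and production windows. The 0.2 alert threshold is a widely quoted rule of thumb, not a mandated value.

```python
import math

def psi(baseline_counts, live_counts, eps=1e-6):
    """Population stability index between a baseline and a live
    distribution, given per-bin counts in matching bin order."""
    b_total, l_total = sum(baseline_counts), sum(live_counts)
    score = 0.0
    for b, l in zip(baseline_counts, live_counts):
        b_pct = max(b / b_total, eps)  # clamp to avoid log(0)
        l_pct = max(l / l_total, eps)
        score += (l_pct - b_pct) * math.log(l_pct / b_pct)
    return score

def drift_alert(baseline_counts, live_counts, threshold=0.2):
    """True when distribution shift exceeds the illustrative threshold."""
    return psi(baseline_counts, live_counts) > threshold
```

Identical distributions score 0; the further live traffic departs from the baseline, the higher the index, which makes it a natural input to thresholded alerting.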
10. Prohibited uses
Boundaries
RiskMetrica will not support uses in the following categories.
- Unlawful discrimination or profiling without safeguards.
- Surveillance that violates privacy or human rights laws.
- Autonomous decision making for high risk outcomes without human review.
- Manipulative or deceptive content presented as factual analysis.
- Safety critical control without certified safeguards.
11. Regulatory mapping
Alignment
| Framework | What we map | Where in platform |
|---|---|---|
| EU AI Act | Risk classification, documentation, monitoring, incident reporting, human oversight. | Use case register, model cards, monitoring plan, oversight controls. |
| UK guidance | Safety, security, transparency, fairness, accountability, contestability. | Principles, assurance plan, audit trails, user disclosures. |
| ISO 27001, SOC 2 | Security, availability, integrity, confidentiality, privacy controls. | Security hardening, logging, access control, supplier risk. |
| DORA, CPS 230 | Operational resilience, incident reporting, testing, continuity. | Scenario and Wargaming, continuity, incident response. |
12. Customer commitments
Contractual
- Disclosure of AI use where material to outcomes.
- Security and privacy assurances proportionate to risk.
- Access to logs and model cards for audit on request.
- Timely notification of incidents and remedial actions.
- Right to human review for high impact decisions.
- Clear support channels for concerns or appeals.
13. Raise a concern or request
Contact
If you have a concern about any AI feature or need additional disclosures, contact the Risk Office. Provide the use case, context, and any identifiers shown in the interface.