Enterprise AI Sutras (Beta)
ISO 42001 · 360° AI Lifecycle Compliance

Compliance that speaks every language

A structured framework mapping AI compliance obligations across every lifecycle stage — with role-specific guidance for legal, engineering, compliance, and leadership.

7 Lifecycle Stages
100+ Documents
6 Roles Covered
Stage 01
Inception
12 docs · Pending
Stage 02
Data Acquisition & Preparation
Pending
Stage 03
Model Development
Pending
Stage 04
Verification & Validation
Pending
Stage 05
Deployment & Integration
Pending
Stage 06
Operation & Monitoring
Pending
Stage 07
Decommissioning
Pending
Choose your lens

What's your role in this?

Every page in this framework is written for a specific role. Pick yours and go straight to what matters to you, without wading through content meant for someone else.

⚙️
Engineer
Codebase obligations, technical safeguards, auditability requirements, and what to build before deployment.
View Engineer docs
⚖️
Legal Counsel
Regulatory obligations, liability exposure, documentation standards, and cross-jurisdictional considerations.
View Legal docs
Compliance Officer
Control frameworks, audit trails, evidence requirements, and continuous monitoring obligations.
View Compliance docs
CXO / Executive
Strategic risk landscape, board-level obligations, accountability structures, and go/no-go checkpoints.
View CXO docs
Architect
System design constraints, data flow compliance, infrastructure requirements, and design-time obligations.
View Architect docs
Security
Threat modelling requirements, access control obligations, adversarial robustness, and incident response.
View Security docs
In practice

What a document looks like

Each document is a focused, standalone guide. One role. One substage. Everything you need to check — nothing you don't.

🔒aigov.framework / stage-1 / inception / risk-assessment / engineer
Stage 1 / Inception / Risk Assessment / Engineer
⚙️ Engineer

Risk Assessment at Inception

What engineers need to evaluate before a line of model code is written. This substage determines what you can and can't build — and what you're required to document from day one.

EU AI Act Art. 9 · ISO 22989 · NIST AI RMF · GDPR Art. 25 · ISO 42001
0 / 5 checked
What to check
Classify the AI system risk level under the EU AI Act. Determine if this system falls under prohibited, high-risk, limited-risk, or minimal-risk categories before any architecture decisions are made.
Identify personal data flows at the design stage. Document what personal data the system will process, infer, or store — this triggers GDPR Art. 25 privacy-by-design obligations.
Define technical documentation scope. For high-risk systems, EU AI Act Art. 11 requires technical documentation to begin at inception. Establish your documentation system now, not at deployment.
Assess training data availability and lineage. Can you demonstrate the provenance of every dataset you intend to use? Undocumented training data is a compliance liability from day one.
Flag human oversight requirements. Identify at inception which decisions require human review loops. Retrofitting oversight mechanisms post-deployment is significantly harder.
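The five checks above can be kept as a lightweight record in version control so the assessment is auditable from day one. Below is a minimal Python sketch of such a record; the names (`InceptionRiskAssessment`, `DatasetLineage`, `open_items`) are hypothetical illustrations, not part of any real framework or API.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    # Risk categories from the EU AI Act classification scheme
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class DatasetLineage:
    """Provenance record for one training dataset (hypothetical schema)."""
    name: str
    source: str                  # where the data came from
    licence: str                 # terms under which it may be used
    contains_personal_data: bool


@dataclass
class InceptionRiskAssessment:
    """Captures the five inception checks for a single AI system."""
    system_name: str
    risk_level: RiskLevel
    personal_data_flows: list[str] = field(default_factory=list)   # GDPR Art. 25
    datasets: list[DatasetLineage] = field(default_factory=list)
    human_oversight_points: list[str] = field(default_factory=list)

    def open_items(self) -> list[str]:
        """Return the checks still unresolved before development starts."""
        items = []
        if self.risk_level is RiskLevel.PROHIBITED:
            items.append("System falls in a prohibited category: do not build")
        if not self.datasets:
            items.append("No dataset lineage recorded")
        if any(d.contains_personal_data for d in self.datasets) and not self.personal_data_flows:
            items.append("Personal data present but flows undocumented")
        if self.risk_level is RiskLevel.HIGH and not self.human_oversight_points:
            items.append("High-risk system with no human oversight points")
        return items
```

A freshly created record for a high-risk system with no lineage or oversight entries would report those gaps from `open_items()`, giving the engineer a concrete go/no-go signal before any model code is written.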
Handoff to
→ Legal: Risk classification review
→ Architect: Privacy-by-design brief
→ Compliance: Documentation framework