⚙️ Engineer
Risk Assessment at Inception
What engineers need to evaluate before a line of model code is written. This substage determines what you can and can't build — and what you're required to document from day one.
EU AI Act Art. 9 · ISO 22989 · NIST AI RMF · GDPR Art. 25 · ISO 42001
What to check
✓
Classify the AI system risk level under the EU AI Act. Determine if this system falls under prohibited, high-risk, limited-risk, or minimal-risk categories before any architecture decisions are made.
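The four-tier decision above can be captured as a machine-readable intake record so the classification travels with the project. A minimal sketch — the tier names follow the EU AI Act's structure, but the intake questions and `classify` logic here are illustrative placeholders; real classification requires legal review against Art. 5 prohibited practices and the Annex III high-risk use cases.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers (tier names per the Act's structure)."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

def classify(prohibited_practice: bool,
             annex_iii_use_case: bool,
             interacts_with_humans: bool) -> RiskTier:
    # Hypothetical intake rule: check the most restrictive tier first.
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_iii_use_case:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Recording the answers (not just the resulting tier) matters: it is the evidence that the classification was done before architecture work began.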
✓
Identify personal data flows at the design stage. Document what personal data the system will process, infer, or store — this triggers GDPR Art. 25 privacy-by-design obligations.
✓
Define technical documentation scope. For high-risk systems, EU AI Act Art. 11 requires technical documentation to begin at inception. Establish your documentation system now, not at deployment.
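One way to "establish your documentation system now" is to track the technical file as a checklist of required sections and compute the gap on every review. The section names below loosely paraphrase Annex IV headings and are not exhaustive; verify the full list against the Act.

```python
# Illustrative subset of Annex IV technical-documentation headings.
REQUIRED_SECTIONS = {
    "general_description",
    "development_process",
    "risk_management",
    "post_market_monitoring",
}

def doc_gaps(written_sections: set[str]) -> set[str]:
    """Return the sections still missing from the technical file."""
    return REQUIRED_SECTIONS - written_sections
```

Running this in CI against the repository's docs directory turns "document from day one" from a policy statement into a failing build.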
✓
Assess training data availability and lineage. Can you demonstrate the provenance of every dataset you intend to use? Undocumented training data is a compliance liability from day one.
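Provenance is easiest to demonstrate if every dataset gets a lineage record at the moment it is acquired. A minimal sketch, assuming a flat JSON manifest; the field names are illustrative, and a content hash ties the record to the exact bytes you trained on.

```python
import hashlib
import json

def lineage_entry(name: str, source_url: str,
                  license_id: str, raw_bytes: bytes) -> dict:
    """Provenance record for one dataset (field names illustrative)."""
    return {
        "name": name,
        "source": source_url,
        "license": license_id,
        # Hash of the exact acquired bytes, so later audits can
        # verify the training copy matches the recorded one.
        "sha256": hashlib.sha256(raw_bytes).hexdigest(),
    }

manifest = [
    lineage_entry("toy-corpus", "https://example.com/data",
                  "CC-BY-4.0", b"sample"),
]
manifest_json = json.dumps(manifest, indent=2)
```

A dataset with no entry in the manifest is, by construction, a dataset you cannot use.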
✓
Flag human oversight requirements. Identify at inception which decisions require human review loops. Retrofitting oversight mechanisms post-deployment is significantly harder.
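Flagging oversight at inception can be as simple as enumerating decision points and applying a conservative rule. The rule below is a design-stage heuristic, not a legal determination: any fully automated decision that affects individuals gets a human review loop.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    """A model-driven decision point, flagged at design time (illustrative)."""
    name: str
    affects_individuals: bool
    automated_only: bool

def needs_human_review(d: Decision) -> bool:
    # Conservative heuristic: individual impact + full automation => review loop.
    return d.affects_individuals and d.automated_only

decisions = [
    Decision("loan_prefilter", affects_individuals=True, automated_only=True),
    Decision("log_anomaly_alert", affects_individuals=False, automated_only=True),
]
review_queue = [d.name for d in decisions if needs_human_review(d)]
```

Because the flag lives in the design artifact, the review loop becomes a requirement the architecture must satisfy rather than a feature to retrofit.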
Handoff to
→ Legal: Risk classification review
→ Architect: Privacy-by-design brief
→ Compliance: Documentation framework