ETSI EN 304 223: Securing Artificial Intelligence (SAI) - Baseline Cyber Security Requirements for AI Models and Systems

ETSI EN 304 223 is a European Standard (EN) establishing baseline cybersecurity requirements for AI models and systems intended for real-world deployment. Published in December 2025, it represents the first globally applicable European Standard specifically focused on securing artificial intelligence.

The standard recognizes that AI systems introduce security risks not found in traditional software, including data poisoning, model obfuscation, indirect prompt injection, and vulnerabilities arising from complex training and deployment practices. Its scope covers AI systems that incorporate deep neural networks, including generative AI, and that are intended for real-world deployment.

The standard references Regulation (EU) 2024/1689 (the AI Act) and provides a cybersecurity baseline complementary to the EU AI Act requirements.

Quality Attributes Required or Emphasized

The standard defines requirements that directly impact AI system design and implementation:

  • Security: Core principle requiring comprehensive protection of AI systems throughout their lifecycle against AI-specific threats.
  • Confidentiality: Protection of training data, model weights, and system configurations against unauthorized access and extraction attacks.
  • Integrity: Safeguarding AI assets against data poisoning, model manipulation, and unauthorized modifications to ensure trustworthy outputs.
  • Availability: Ensuring AI systems remain operational and resilient against denial-of-service and resource-exhaustion attacks.
  • Robustness: Resistance to adversarial inputs, prompt injection, and edge cases that could compromise AI system behavior.
  • Traceability: Documented audit trails for models, datasets, prompts, and system decisions, enabling forensic review and accountability.
  • Accountability: Clear assignment of responsibilities across stakeholder roles (Developers, System Operators, Data Custodians) with verifiable controls.
  • Auditability: Comprehensive logging of system and user actions, enabling compliance verification and incident investigation.
  • Maintainability: Support for timely security updates, patches, and ongoing monitoring throughout the AI system lifecycle.
  • Recoverability: Disaster-recovery procedures addressing AI-specific attack scenarios and system-restoration capabilities.
  • Resilience: Ability to withstand and recover from security incidents while maintaining essential AI system functions.
  • Transparency: Clear communication to end users about data use, access, storage, system limitations, and failure modes.
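The traceability and auditability attributes above can be sketched as a minimal append-only audit log. The record fields and helper below are illustrative assumptions, not terminology mandated by the standard:

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    """One traceability entry for an AI asset (model, dataset, or prompt)."""
    timestamp: float
    actor: str          # stakeholder role, e.g. "Developer" or "Data Custodian"
    action: str         # e.g. "train", "deploy", "update"
    asset_name: str
    asset_sha256: str   # content hash ties the record to an exact artifact

def record_action(log_path, actor, action, asset_name, asset_bytes):
    """Append an audit entry as one JSON line, hashing the asset contents."""
    rec = AuditRecord(
        timestamp=time.time(),
        actor=actor,
        action=action,
        asset_name=asset_name,
        asset_sha256=hashlib.sha256(asset_bytes).hexdigest(),
    )
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(rec)) + "\n")
    return rec
```

Hashing the artifact content (rather than logging only its name) is what lets a forensic reviewer later prove which exact model or dataset a recorded action referred to.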

Five Lifecycle Phases

ETSI EN 304 223 adopts a whole-lifecycle approach, organizing requirements across five phases:

1. Secure Design (Principles 1-4)

  • P1: Security Training - Role-based AI security training for personnel involved in AI development and deployment.
  • P2: Security-by-Design - Integration of protective measures into functionality from inception, not as afterthoughts.
  • P3: Audit Trails - Documented audit trails for models, datasets, and prompts, enabling accountability and forensic review.
  • P4: Threat Modeling - AI-specific threat modeling covering poisoning, inversion, and membership-inference attacks, with human oversight capabilities.
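Principle 4's AI-specific threat modeling can be illustrated with a toy coverage check. The asset classes and threat names below are an illustrative mapping drawn from the threats the standard calls out, not a normative taxonomy:

```python
# Map each AI asset class to the AI-specific attack techniques that
# apply to it, so a design review can verify that every (asset, threat)
# pair has at least one documented mitigation.
THREATS = {
    "training_data": ["data poisoning", "membership inference"],
    "model_weights": ["model inversion", "model obfuscation"],
    "runtime_inputs": ["indirect prompt injection", "adversarial examples"],
}

def unmitigated(mitigations):
    """Return (asset, threat) pairs that have no recorded mitigation."""
    gaps = []
    for asset, threats in THREATS.items():
        for threat in threats:
            if not mitigations.get((asset, threat)):
                gaps.append((asset, threat))
    return gaps
```

A review would then drive the gap list to empty, e.g. `unmitigated({("training_data", "data poisoning"): "provenance checks on ingested data"})` still reports the remaining uncovered pairs.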

2. Secure Development (Principles 5-9)

  • P5: Asset Inventory - Catalogues of AI components, including interdependencies between model artifacts and system dependencies.
  • P6: Versioning & Authentication - Cryptographic authentication and version control for models, datasets, and pipeline artifacts.
  • P7: Disaster Recovery - Recovery procedures explicitly addressing AI-specific attack scenarios beyond traditional system failures.
  • P8: Data & Input Protection - Sanitization, validation checks, and safeguards for confidential training data, model weights, and parameters.
  • P9: Supply Chain Security - Secure software supply-chain practices, including vulnerability disclosure and incident-response plans.
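Principle 6's cryptographic authentication of pipeline artifacts might be sketched roughly as follows. An HMAC-signed digest manifest keeps the example self-contained; a real pipeline would more likely use asymmetric signatures and an artifact registry:

```python
import hashlib
import hmac

def sign_manifest(manifest: dict, key: bytes) -> str:
    """HMAC-SHA256 over a canonical rendering of name -> digest pairs.
    (Illustrative: a shared key avoids external dependencies here.)"""
    canonical = "\n".join(f"{n}:{d}" for n, d in sorted(manifest.items()))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify_artifact(name: str, data: bytes, manifest: dict,
                    key: bytes, tag: str) -> bool:
    """Accept an artifact only if the manifest signature checks out AND
    the artifact's own digest matches its manifest entry."""
    if not hmac.compare_digest(sign_manifest(manifest, key), tag):
        return False  # manifest itself was tampered with
    return manifest.get(name) == hashlib.sha256(data).hexdigest()
```

Verifying the manifest signature before the per-artifact digest matters: without it, an attacker who swaps a model file could simply rewrite its manifest entry as well.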

3. Secure Deployment (Principle 10)

  • P10: End-User Communication - Transparent guidance on data use, access, storage, limitations, and failure modes, plus proactive security-update notices.
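One minimal, hypothetical shape for the end-user disclosures Principle 10 asks for; the field names and sample values are assumptions for illustration, not a format mandated by the standard:

```python
import json

# Illustrative transparency notice covering the topics P10 names:
# data use, access, storage, limitations, failure modes, and updates.
notice = {
    "data_use": "Prompts are processed to generate responses.",
    "data_storage": "Inputs retained for 30 days for abuse monitoring.",
    "data_access": "Accessible only to the operator's security team.",
    "limitations": ["May produce incorrect or outdated answers."],
    "failure_modes": ["Falls back to a static error page if the model is unavailable."],
    "security_updates": "Users are notified in-product before disruptive security updates.",
}

print(json.dumps(notice, indent=2))
```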

4. Secure Maintenance (Principles 11-12)

  • P11: Timely Updates - Prompt deployment of patches and security fixes, with contingencies for when updates cannot be applied immediately.
  • P12: Operational Monitoring - Logging of system and user actions, anomaly and drift detection, and internal-state monitoring for threat response.
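Principle 12's drift detection can be sketched as a simple statistical check on one feature of incoming inputs against a recorded baseline; production monitoring would use richer multivariate tests, but the idea is the same:

```python
import math

def drift_score(baseline, window):
    """z-score of the recent window's mean against the baseline
    distribution; a large value flags input drift worth investigating."""
    n = len(baseline)
    mean_b = sum(baseline) / n
    var_b = sum((x - mean_b) ** 2 for x in baseline) / (n - 1)
    mean_w = sum(window) / len(window)
    se = math.sqrt(var_b / len(window))  # standard error of the window mean
    return abs(mean_w - mean_b) / se if se > 0 else float("inf")
```

An operator would alert when the score exceeds a chosen threshold (e.g. 3), then investigate whether the shift reflects benign usage change or an attack such as coordinated poisoning of feedback data.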

5. Secure End-of-Life (Principle 13)

  • P13: Controlled Decommissioning - Controlled transfer and disposal of training data and models, with secure deletion of data and configurations.
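Principle 13's controlled disposal might be sketched as below. Note that on real storage, secure deletion typically also requires storage-level measures such as crypto-shredding of the keys that encrypted the artifacts; `os.remove` alone does not guarantee the bytes are unrecoverable:

```python
import os

def decommission(paths, log):
    """Remove model/data artifacts and record each disposal for audit.
    (Illustrative only: true secure deletion needs storage-level
    measures beyond filesystem removal.)"""
    for p in paths:
        if os.path.exists(p):
            os.remove(p)
            log.append({"asset": p, "action": "deleted"})
    return log
```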

AI-Specific Threats Addressed

The standard explicitly addresses threats unique to AI systems:

  • Data Poisoning: Manipulation of training data to corrupt model behavior
  • Model Inversion: Extraction of sensitive training data from model outputs
  • Membership Inference: Determining whether specific data was used in training
  • Indirect Prompt Injection: Manipulation of AI behavior through crafted inputs
  • Model Obfuscation: Hiding malicious functionality within AI models
  • Adversarial Examples: Inputs designed to cause model misclassification

Related Standards and Regulations

  • EU AI Act (Regulation (EU) 2024/1689): Harmonised rules on artificial intelligence
  • ETSI TR 104 159: Domain-specific application to generative AI (forthcoming)
  • ISO/IEC 42001: AI management systems
  • ISO/IEC 22989: AI concepts and terminology
  • ISO/IEC 27001: Information security management

References

Official Sources