
The relationship between the AI Act and ISO/IEC 42001 AI Management System

Published by: boost, 5 February 2026

Many today assume that complying with ISO/IEC 42001, the first international standard for artificial intelligence management systems, will automatically ensure compliance with legal frameworks such as the EU AI Act.

However, while ISO 42001 does provide a strong organisational foundation for responsible, transparent, and controlled AI governance, it does not replace the legal obligations established by EU regulation.

The EU AI Act is a legally binding regulation that sets out specific requirements for particular types of AI systems, including prohibitions, strict obligations for high‑risk systems, and detailed technical, data‑related, and transparency requirements explicitly defined in the legislative text.

Although ISO/IEC 42001 and the EU AI Act complement one another, and the standard can certainly support an organisation’s efforts to achieve legal compliance, there are important differences between them. Organisations must understand these distinctions to avoid mistakenly assuming that a certificate alone constitutes full compliance with the law.

 

Comparison of the AI Act and ISO/IEC 42001

Purpose

The AI Act is a legally binding EU regulation that governs the development, placing on the market, and use of AI systems within the European Union.

Its objective is to ensure safety, protect fundamental rights, enhance transparency, strengthen risk management, and prohibit certain harmful practices.

ISO/IEC 42001 is the first international standard for an AI Management System (AIMS). It provides a structured framework for managing AI‑related risks, ensuring transparency and accountability, and integrating ethical principles into organisational processes.

Its purpose is to help organisations manage AI systems responsibly and sustainably through established policies, processes, and controls.

 

Scope

The AI Act regulates AI systems and their operators, and it applies to all organisations that place AI systems on the EU market or use them within the EU.

It is not fully technology‑neutral, as its obligations vary depending on the type of AI system and associated risk level.

ISO/IEC 42001 regulates organisational processes for managing AI, rather than the AI systems themselves.

It is a voluntary standard that can be adopted by any organisation seeking to implement responsible AI practices, and it is fully technology‑neutral, meaning it can be applied regardless of the type of AI technology being used.

 

Focus

The EU AI Act focuses on regulating AI systems according to their risk level:

  • Prohibited practices (e.g., manipulative AI, social scoring)
  • High‑risk AI systems, subject to the strictest requirements (e.g., AI in healthcare, employment, critical infrastructure)
  • Special rules for general‑purpose AI models (GPAI), including generative AI

ISO/IEC 42001, on the other hand, focuses on the organisation, not on individual AI systems:

  • Risk management
  • Clear assignment of responsibilities
  • Policies and procedures
  • Data quality
  • Transparency and ethical principles
  • Continuous improvement of the AI Management System (AIMS)
  • Governance as an ongoing process, without prescribing the technical design of AI systems

 

Controls

For high‑risk AI systems, the EU AI Act requires:

  • A risk management system (Art. 9)
  • High quality data and data governance (Art. 10)
  • Technical documentation (Art. 11)
  • Record‑keeping (Art. 12)
  • Transparency towards users (Art. 13)
  • Human oversight (Art. 14)
  • Safety, robustness, and cybersecurity (Art. 15)

ISO/IEC 42001 requires the implementation of an AI Management System (AIMS) that includes:

  • Understanding the context of the organisation
  • Leadership and defined responsibilities
  • AI risk assessment and impact assessment
  • AI related policies and procedures
  • Governance of the AI development and usage lifecycle
  • Continuous monitoring and improvement of the management system
  • Integration with other ISO standards (e.g., ISO 9001, ISO/IEC 27001, etc.)

 

Similarities

Both the AI Act and ISO/IEC 42001 address:

  • Risk management
  • Transparency
  • Accountability
  • Data governance
  • Human oversight
  • Safety and robustness
  • The promotion of trustworthy AI and ethical principles

ISO/IEC 42001 helps organisations more easily meet the requirements of the AI Act, although it does not replace legal compliance.

The two frameworks are complementary.
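One way to picture this complementarity in practice is a simple gap‑analysis checklist that pairs each AI Act high‑risk requirement (Articles 9–15, listed above) with an AIMS element that can support it. The pairing below is a hypothetical illustration for internal self‑assessment, not an official or exhaustive mapping:

```python
# Illustrative sketch only: pairs each AI Act high-risk requirement with
# one AIMS element that can support it. The pairing is an assumption for
# illustration, not an official crosswalk between the two frameworks.

AI_ACT_TO_AIMS = {
    "Art. 9 Risk management system": "AI risk assessment and impact assessment",
    "Art. 10 Data and data governance": "AI-related policies and procedures",
    "Art. 11 Technical documentation": "Governance of the AI lifecycle",
    "Art. 12 Record-keeping": "Continuous monitoring and improvement",
    "Art. 13 Transparency towards users": "AI-related policies and procedures",
    "Art. 14 Human oversight": "Leadership and defined responsibilities",
    "Art. 15 Safety, robustness, cybersecurity": "Continuous monitoring and improvement",
}

def gap_report(implemented_aims_elements: set) -> list:
    """Return the AI Act requirements whose supporting AIMS element
    is not yet implemented in the organisation."""
    return [
        requirement
        for requirement, aims_element in AI_ACT_TO_AIMS.items()
        if aims_element not in implemented_aims_elements
    ]

# Example: an organisation that has only set up leadership and risk assessment
open_gaps = gap_report({
    "Leadership and defined responsibilities",
    "AI risk assessment and impact assessment",
})
```

Even such a rough checklist makes the division of labour visible: the AI Act supplies the requirements, while the AIMS supplies the organisational structures that evidence them.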

 

Key differences

  • Nature: the EU AI Act is a law; ISO/IEC 42001 is a standard
  • Level of regulation: the AI Act regulates individual AI systems; ISO/IEC 42001 regulates the organisational AI management system
  • Obligation: the AI Act is mandatory within the EU; ISO/IEC 42001 is voluntary and globally applicable
  • Risk approach: the AI Act categorises AI systems by risk level; ISO/IEC 42001 requires organisation‑wide risk assessment
  • Primary objective: the AI Act protects EU citizens and the EU market; ISO/IEC 42001 strengthens organisational capability to manage AI
  • Penalties: yes under the AI Act (fines and corrective measures); none under ISO/IEC 42001

 

Conclusion

Although the EU AI Act and ISO/IEC 42001 are often mentioned in the same context, their roles in AI governance are fundamentally different.

The EU AI Act is a binding legal framework that regulates AI systems based on risk levels and sets clear requirements, restrictions, and penalties for non‑compliance. Its purpose is to protect users, society, and the market through strictly defined rules.

ISO/IEC 42001, on the other hand, provides an organisational and managerial framework intended for companies that want to establish a responsible, safe, and transparent approach to developing and using AI. The standard does not impose legal obligations; instead, it offers structured guidelines and processes that help organisations systematically manage risks and the quality of their AI systems.

Together, the two frameworks form a strong foundation: the EU AI Act defines what must be achieved, while ISO/IEC 42001 provides guidance on how to achieve it through systematic governance, documentation, and controls.

Organisations that treat them as complementary can meet regulatory requirements while building a sustainable and trustworthy AI management system.