Global AI Compliance Begins With ISO 42001 — Here’s What to Know
In December 2023, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) published ISO/IEC 42001, the world’s first international standard specifically designed for artificial intelligence management systems. This landmark standard arrives at a critical juncture, as organizations across industries grapple with implementing AI technologies responsibly, ethically, and effectively. Whether your organization is just beginning to explore AI capabilities or already has sophisticated AI systems in production, understanding ISO 42001 has become essential in today’s rapidly evolving technology landscape.
ISO 42001 is structured into 10 main clauses, covering areas such as organizational context, leadership, planning, support, operation, performance evaluation, and improvement. In addition to these clauses, the standard includes Annex A, which lists 38 AI-specific controls. In this blog post, we will focus on the recommended controls provided in Annex A.
Annex A of ISO 42001, the standard for an Artificial Intelligence Management System (AIMS), provides nine control objectives supported by 38 individual controls that organizations may implement to address risks and opportunities associated with their AI systems.
While implementing all of the controls is not mandatory, they serve as valuable references and provide a structured approach to managing various aspects of the AI lifecycle. Organizations are required to document a Statement of Applicability (SoA) justifying the inclusion or exclusion of each Annex A control.
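As a concrete illustration of what an SoA register might look like in practice, here is a minimal sketch. The field names, control titles, and justifications are illustrative assumptions, not a template prescribed by ISO 42001.

```python
# Minimal sketch of a Statement of Applicability (SoA) register.
# Schema and entries are hypothetical examples, not mandated by ISO/IEC 42001.

soa = [
    {"control": "A.2.2", "title": "AI policy",
     "applicable": True, "justification": "Organization develops and deploys AI systems."},
    {"control": "A.7.3", "title": "Acquisition of data",
     "applicable": True, "justification": "Training data is sourced from third parties."},
    {"control": "A.10.4", "title": "Customers",
     "applicable": False, "justification": "No external customers for the AI system."},
]

def excluded(entries):
    """Return controls marked not applicable, each with its documented justification."""
    return [(e["control"], e["justification"]) for e in entries if not e["applicable"]]

exclusions = excluded(soa)  # every exclusion must carry a justification
```

The key point the sketch encodes is that exclusions are never bare: each one travels with its written justification, which is what an auditor will ask for.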
Let’s explore the control objectives and provide a high-level overview of the controls under each, using a practical example — an AI-powered diagnosis support tool — where relevant.
A.2 Policies related to AI
- A.2.2 AI policy
- A.2.3 Alignment with other organizational policies
- A.2.4 Review of AI policy
A.2.2 relates to clause 5.2 (AI policy). While clause 5.2 focuses on a high-level AI policy set by top management, this control pertains to defining and documenting specific AI policies across the AI system lifecycle. Example policies include an AI Development and Use Policy, Acceptable Use of AI Policy, AI Security Policy, AI Bias Detection and Mitigation Policy, and AI Change Management Policy. Policies should align with other organizational policies (A.2.3) and be reviewed and updated periodically (A.2.4).
A.3 Internal organization
- A.3.2 AI roles and responsibilities
- A.3.3 Reporting of concerns
A.3.2 aims to establish accountability for the responsible implementation and use of AI systems. It requires defining and documenting roles and responsibilities across the AI system lifecycle. Example areas include AI safety, AI security, AI development, an AI governance committee, and AI data quality management.
A3.3 emphasizes the need to define the process for individuals to report AI-related concerns. It could involve setting up a dedicated reporting channel (e.g., email address or anonymous reporting hotline) and a process for submission, handling, and documentation of concerns. Existing ethical reporting mechanisms can be adapted and updated. Lastly, it is crucial to communicate this reporting mechanism to employees and contractors so they are aware of how to report concerns.
A.4 Resources for AI systems
- A.4.2 Resource documentation
- A.4.3 Data resources
- A.4.4 Tooling resources
- A.4.5 System and computing resources
- A.4.6 Human resources
This objective focuses on identifying and managing resources (tangible and intangible) essential for AI systems, including those from the organization, third parties, and customers.
To implement this control effectively, consider maintaining an AI System and Tooling Inventory, a data inventory, and clear human resources role definitions and skill requirements. Implementation best practices include establishing an inventory management process with defined ownership, version control, change logs, data quality controls, and regular audits.
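To make the inventory idea concrete, here is a small sketch of what one record in an AI system and tooling inventory might look like, with ownership, versioning, and a change log. The class and field names are illustrative assumptions, not a structure defined by the standard.

```python
from dataclasses import dataclass, field

# Illustrative sketch of one AI resource inventory record (A.4.2).
# Fields reflect common inventory practice, not a schema from ISO/IEC 42001.

@dataclass
class AIResourceRecord:
    name: str
    resource_type: str           # e.g. "model", "dataset", "tool", "compute"
    owner: str                   # accountable role, not necessarily an individual
    version: str
    change_log: list = field(default_factory=list)

    def record_change(self, version: str, note: str) -> None:
        """Append a versioned change entry so audits can trace the resource's history."""
        self.change_log.append((version, note))
        self.version = version

model = AIResourceRecord(
    name="diagnosis-support-model",
    resource_type="model",
    owner="AI Governance Lead",
    version="1.0",
)
model.record_change("1.1", "Retrained on Q3 data after drift review")
```

The change log here is what supports the "version control, change logs, and regular audits" practice described above: every version bump leaves a traceable note.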
A.5 Assessing impacts of AI systems
- A.5.2 AI system impact assessment process
- A.5.3 Documentation of AI system impact assessments
- A.5.4 Assessing AI system impact on individuals or groups of individuals
- A.5.5 Assessing societal impacts of AI systems
This set of controls ensures AI systems are evaluated for their impact on individuals and society.
This includes having a clear, structured process (A.5.2) to evaluate the impact of the AI system throughout its lifecycle, considering the system’s purpose, complexity, and the sensitivity of its data. A key part of this assessment is the AI system’s impact on individuals or groups of individuals (A.5.4), e.g., fairness, privacy, safety, and security. Beyond that, there is a broader responsibility to consider societal impacts (A.5.5), such as shifts in employment, public trust, social dynamics, the environment, the economy, and government. The results of the assessment must be properly documented (A.5.3).
Below is an example of a system impact assessment for the AI-powered diagnostic support tool.
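One way to capture such an assessment in structured form is a simple record like the sketch below. The schema, categories, and risk entries are hypothetical illustrations for the diagnosis support tool, not a template from the standard.

```python
# Hypothetical impact assessment record for the diagnosis support tool.
# Categories loosely follow A.5.4 (individuals) and A.5.5 (society);
# the schema and values are illustrative assumptions.

impact_assessment = {
    "system": "AI-powered diagnosis support tool",
    "purpose": "Assist clinicians; final diagnosis remains with a human",
    "individual_impacts": {
        "fairness": "Risk of lower accuracy for under-represented patient groups",
        "privacy": "Processes sensitive health data; data minimization required",
        "safety": "False negatives could delay treatment",
    },
    "societal_impacts": {
        "public_trust": "Errors could reduce trust in clinical AI",
        "employment": "Changes clinician workflow rather than headcount",
    },
    "risk_rating": "high",  # sensitive data plus health outcomes
    "review_due": "annually or on significant model change",
}

# A high rating should trigger mandatory human oversight and periodic review.
needs_oversight = impact_assessment["risk_rating"] == "high"
```

Keeping the record machine-readable makes it easy to feed A.5.3 documentation requirements and to gate deployment decisions on the recorded risk rating.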
A.6 AI system lifecycle
- A.6.1 Management guidance for AI system development
- A.6.2 AI system lifecycle
A.6.1 calls for management to provide guidance for responsible AI development, aligned with objectives such as fairness, security, and transparency (see ISO/IEC 23894 on AI risk management).
A.6.2 addresses the documentation requirements for the full AI system lifecycle, similar to any system development lifecycle but with AI-specific considerations.
- Requirements: business requirements should answer questions such as why the AI system is being developed, how the model will be trained, and what the data requirements are.
- Design and development: documentation of AI system architectural choices, the machine learning approach (e.g., supervised or unsupervised), model training and refinement (fine-tuning, retraining), data quality, infrastructure, and security.
- Verification and validation: documentation of evaluation criteria and a plan based on the system requirements and the system impact assessment.
- Maintenance: ongoing performance monitoring, covering false positives/negatives, data drift, retraining needs, AI-specific security threats, etc. ISO/IEC 25059 (Quality model for AI systems) can provide guidance in identifying and defining performance criteria.
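The maintenance metrics above can be sketched in a few lines of code. This is an illustrative example, not a method from the standard: error rates are derived from a confusion matrix, and drift is flagged with a deliberately naive mean-shift check whose threshold is an assumed value.

```python
# Sketch of A.6.2 maintenance monitoring: error rates from a confusion
# matrix and a naive input-drift check. The 20% threshold is an
# illustrative assumption, not a value from ISO/IEC 42001 or ISO/IEC 25059.

def error_rates(tp: int, fp: int, tn: int, fn: int):
    """Return (false positive rate, false negative rate) from confusion-matrix counts."""
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return fpr, fnr

def mean_shift_drift(baseline, current, threshold=0.2):
    """Flag drift when a feature's mean shifts beyond a relative threshold."""
    base_mean = sum(baseline) / len(baseline)
    cur_mean = sum(current) / len(current)
    return abs(cur_mean - base_mean) / abs(base_mean) > threshold

# Hypothetical monthly review for the diagnosis support tool:
fpr, fnr = error_rates(tp=90, fp=5, tn=95, fn=10)
drifted = mean_shift_drift(baseline=[0.50, 0.52, 0.48], current=[0.70, 0.72, 0.68])
```

In a real deployment these checks would run on production telemetry and feed retraining decisions; production-grade drift detection would use a statistical test (e.g., population stability index or Kolmogorov–Smirnov) rather than a bare mean comparison.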
A.7 Data for AI systems
- A.7.2 Data for development and enhancement of AI systems
- A.7.3 Acquisition of data
- A.7.4 Quality of data for AI systems
- A.7.5 Data provenance
- A.7.6 Data preparation
This control objective focuses on the role and impact of data across the AI system lifecycle. It requires maintaining documentation on:
- how data is acquired, including the source of the data, data subject characteristics, potential biases, data rights, and provenance information (creation, updates, abstraction, sharing, etc.);
- data quality, including the impact of bias on data quality;
- data preparation and transformation criteria and methods (e.g., normalization, labeling, encoding).
Below is an example of this control’s implementation for the Medical Diagnosis AI System.
A.8 Information for interested parties
- A.8.2 System documentation and information for users
- A.8.3 External reporting
- A.8.4 Communication of incidents
- A.8.5 Information for interested parties
A.8.2 covers how users of the AI system should be informed about its usage and potential impacts, both benefits and risks. A.8.3 requires a documented process through which any interested party can report adverse impacts, similar to ISO 27001 Annex A.16 (Information security incident management) but focused on AI-related incidents. A.8.4 requires a plan for communicating incidents to users and other interested parties when they occur. A.8.5 focuses on documenting obligations for reporting information to interested parties under legal and regulatory requirements.
As an example of this control, OpenAI’s page on safety, security, and privacy details its approach. Another example is the disclaimer shown on Google’s AI-generated responses, indicating the need for human oversight.
A.9 Use of AI systems
- A.9.2 Processes for use of AI systems
- A.9.3 Objectives for use of AI systems
- A.9.4 Intended use of the AI system
While some of the previous controls addressed AI system development, this control objective covers the use of any AI system, regardless of its origin (in-house or third-party). It requires documenting policies and procedures for using AI systems, such as approvals for sourcing, along with objectives like security, safety, accessibility, accountability, fairness, reliability, and human oversight. Lastly, organizations should confirm that AI systems are used as intended, which can be supported by implementing control A.6.2 (AI system lifecycle).
A.10 Third-party and customer relationships
- A.10.2 Allocating responsibilities
- A.10.3 Suppliers
- A.10.4 Customers
This final control objective focuses on the responsibilities of, and risks associated with, third parties and customers.
Control A.10.2 requires documenting all stakeholders in the AI system lifecycle along with their roles and responsibilities. For example, if an organization acquires data and a model from a third party, it should document each party’s roles and responsibilities with regard to that data and model. Conversely, if the organization supplies an AI system to a third party, it should document its own role and responsibilities per A.6 and A.8.
Control A.10.3 focuses on the supplier risk assessment, evaluation, selection, and monitoring process, and overlaps substantially with ISO 27001 Annex A.15 (Supplier relationships). Control A.10.4 requires the organization to identify its customers and their requirements for the AI system, and to manage the customer relationship; this may overlap with ISO/IEC 20000-1:2018 (Service management).
From a practical standpoint, implementing this management standard can begin with a gap assessment against your existing management systems and frameworks, building on that foundation. I hope this post gave you a clear overview of the standard and useful insights into establishing and maintaining a robust Artificial Intelligence Management System.
Disclaimer
The content provided in this blog series is for informational purposes only and does not constitute legal, regulatory, or professional advice. While every effort has been made to ensure accuracy, including the use of AI tools for structure, formatting, and grammar, readers are encouraged to consult the official ISO 42001 standard and relevant regulatory or industry experts for specific guidance. The views expressed are those of the author and do not necessarily reflect the opinions of any affiliated organizations.