Proposed ISO Standard on Risk Management of AI: What Businesses Should Know


Earlier this year the International Organization for Standardization (ISO) began its process for developing a new standard for AI risk management, ISO/IEC 23894 (the “AI Risk Standard”). The AI Risk Standard will introduce a common framework for the implementation and use of AI systems, a topic with which many countries, including the UK, are actively engaged.

What is the ISO and what are ISO Standards?

The ISO is an independent, non-governmental international organisation with a membership of 167 national standards bodies. Through its members, it brings together experts to share knowledge and develop voluntary, consensus-based, market-relevant international standards that support innovation and provide solutions to global challenges. All standards are voluntary, and it is left to individual businesses to decide whether to implement them. Those that do follow ISO standards benefit from the customer confidence, reduced costs and increased market access that internationally accepted standards provide. ISO standards also provide a strong basis for the development of national and international regulations.

How are ISO standards created?

The ISO follows a six-stage process for developing its standards:

  • Proposal: A work team formulates and submits a proposal to the relevant ISO committee for a vote on whether the standard is needed.
  • Preparatory: A working group of experts and industry stakeholders is set up by the relevant committee (the “Working Group”) to prepare a working draft of the standard. Once the draft is deemed satisfactory, the committee decides whether to proceed to the Committee stage.
  • Committee: This is an optional stage where committee members review and comment on the draft. The committee must reach a consensus on the technical content of the draft before moving to the next stage.
  • Enquiry: The draft standard is circulated to all ISO members, who then have 12 weeks to vote and provide comments. A draft standard is approved if two-thirds of the members are in favour. If it is approved with no technical changes, the ISO will publish the draft as a standard. There is flexibility in the process to repeat the Enquiry stage if the committee decides the draft is not ready for the Approval stage.
  • Approval: In the event that there are technical changes from the Enquiry stage, the draft standard is submitted as a final draft to ISO members. The members have eight weeks to vote on whether the standard should be approved.
  • Publication: At this stage the final draft is submitted for publication and is then published as an official International Standard.
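
For readers who find a concrete model helpful, the short Python sketch below encodes the six stages and the two-thirds Enquiry threshold described above. It is purely illustrative: the names and the simplified voting test are ours, not part of any ISO document, and the real ballot rules are more nuanced than a single threshold.

    from enum import Enum, auto

    class Stage(Enum):
        # The six ISO development stages described above.
        PROPOSAL = auto()
        PREPARATORY = auto()
        COMMITTEE = auto()    # optional stage
        ENQUIRY = auto()
        APPROVAL = auto()     # skipped if the Enquiry draft passes with no technical changes
        PUBLICATION = auto()

    def enquiry_passes(votes_in_favour: int, votes_cast: int) -> bool:
        # Simplified test: the draft passes the Enquiry ballot if at least
        # two-thirds of the votes cast are in favour (see above).
        return 3 * votes_in_favour >= 2 * votes_cast

    # Example: 30 of 40 votes in favour clears the two-thirds bar.
    assert enquiry_passes(30, 40)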

The draft AI Risk Standard

The draft AI Risk Standard incorporates the pre-existing standard ISO 31000:2018, which provides general guidance on risk management (the “General Risk Management Standard”). The General Risk Management Standard describes (i) the underlying principles of risk management (risk management that is integrated, inclusive, structured and comprehensive, and continually improving), (ii) how risk management frameworks should be integrated into the significant activities and functions of an organisation, and (iii) how risk assessment processes and practices help to identify risk and ways to manage it (as more fully discussed below). The draft AI Risk Standard emphasises the importance of considering the context of AI in an organisation.

The draft AI Risk Standard uses ISO 31000:2018 as its base but goes further, suggesting that organisations that develop, deploy or use AI products, systems and services need to manage risks specific to this technology. It is not, however, intended for the specific risk management of products and services using AI for objectives such as safety and security.

Specific Risk Management Principles applied to AI

The draft AI Risk Standard aims to assist organisations in integrating risk management principles into their AI-related activities and functions. It notes that AI systems can introduce new or emergent risks for an organisation, with positive or negative consequences for its objectives, or can change the likelihood of existing risks; these risks can require specific consideration by the organisation. The following principles are addressed:

Inclusivity of stakeholders: As the use of AI systems can result in engagement with multiple stakeholders, organisations should seek dialogue with diverse internal and external groups, both to communicate harms and benefits and to incorporate feedback and awareness into the risk management process. Input from stakeholders will be beneficial for machine learning use cases and, more generally, for automated decision-making processes and for ensuring the overall transparency and explainability of AI systems.

Dynamic risk management: AI systems are dynamic, requiring continuous learning, refinement and validation, and the legal and regulatory requirements related to AI are frequently updated. Organisations should therefore seek to understand how AI will be integrated with their management systems, and how it will affect their environmental footprint, health and safety obligations, and legal or corporate responsibilities.

Best available information: As AI affects the way individuals interact with and react to technology, it is advisable for organisations to retain information regarding the ongoing use of an AI system throughout its entire lifetime.

Human and cultural factors: Human behaviour and culture significantly influence all aspects of risk management at every level and stage. Organisations engaged in the design, development or deployment of AI systems, or any combination of these, should monitor their evolving cultural landscape. They should focus particularly on the effects of AI systems or their components on privacy, freedom of expression, fairness, safety, security, employment, the environment and, more generally, human rights. Without human interpretation, biases in decision-making can easily be overlooked.

Continual improvement: The identification of previously unknown risks related to the use of AI systems should be considered in the continual improvement process. Organisations engaged in the design, development or deployment of AI systems or system components, or any combination of these, should monitor the AI ecosystem for performance successes, shortcomings and lessons learned, and maintain awareness of new AI research findings and techniques.
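
As a purely illustrative aid, the Python sketch below shows one way an organisation might reflect these principles in an internal AI risk register. Every field name and the sample entry are our own hypothetical choices; nothing here is drawn from the draft standard itself.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIRiskEntry:
        # Hypothetical risk-register entry reflecting the principles above.
        description: str                 # the identified risk
        source: str                      # e.g. bias in training data
        stakeholders: list[str]          # inclusivity: internal and external groups affected
        human_rights_impact: str         # human and cultural factors (privacy, fairness, ...)
        last_reviewed: date              # dynamic risk management: reviewed continually
        lessons_learned: list[str] = field(default_factory=list)  # continual improvement

    register = [
        AIRiskEntry(
            description="Scoring model disadvantages a protected group",
            source="historical bias in training data",
            stakeholders=["end users", "compliance team", "regulator"],
            human_rights_impact="fairness and non-discrimination",
            last_reviewed=date(2022, 5, 1),
        ),
    ]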

Risk management framework to understand the context of AI in an organisation

Having a risk management framework assists organisations in integrating risk management into significant functions and activities. As risk management involves assembling the information an organisation needs to make decisions and address risk, organisations are expected to decide on processes for identifying, assessing and treating risk within the organisation. The General Risk Management Standard specifically provides guidance on how leadership and commitment, the integration of frameworks and the design of frameworks assist in risk management. In addition to this guidance, the draft AI Risk Standard proposes that organisations consider the following elements when assessing the internal and external context of the organisation (a short illustrative sketch follows the list):

  • Guidelines of ethical use and design of AI issued by government-related groups, standardisation bodies and industry associations.
  • Technology trends and advancements in the various areas of AI.
  • Societal and political implications of the deployment of AI systems, including guidance from social sciences.
  • Stakeholder perceptions of AI systems, including perceptions of bias in AI systems.
  • Use of AI, especially AI systems using continuous learning, and how it can affect the ability of the organisation to meet contractual obligations and guarantees.
  • Use of AI increasing the complexity of networks and dependencies.
  • The effect that an AI system can have on an organisation’s culture by shifting and introducing new responsibilities, roles and tasks.
  • Any additional international, regional, national and local AI-specific standards and guidelines that apply to the use of the AI systems.
  • Use of AI systems leading to changes to the number of resources needed and the deskilling or loss of expertise.
  • Use of AI improving the quality of data handling.
  • The increased need for specialised training to operate AI systems.
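
Purely as an illustration of how these context elements might be tracked in practice, the short Python sketch below renders them as an internal checklist. The item wordings are our paraphrases of the bullets above, not text from the draft standard.

    # Hypothetical context-assessment checklist paraphrasing the bullets above.
    CONTEXT_CHECKLIST = {
        "ethical AI guidelines reviewed (government, standards bodies, industry)": True,
        "relevant AI technology trends tracked": True,
        "societal and political implications assessed": False,
        "stakeholder perceptions of AI systems surveyed": False,
        "impact on contractual obligations and guarantees evaluated": False,
        "added network complexity and dependencies mapped": False,
        "cultural effects (new roles, responsibilities, tasks) considered": False,
        "applicable AI-specific standards and guidelines identified": False,
        "resourcing, deskilling and specialised-training needs assessed": False,
    }

    outstanding = [item for item, done in CONTEXT_CHECKLIST.items() if not done]
    print(f"{len(outstanding)} context items still to assess")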

Risk management processes

As noted above, the draft AI Risk Standard incorporates guidance from the General Risk Management Standard, which recognises that risk management processes should be integrated into the structure, operations and processes of an organisation, and should be customised to suit the external and internal context in which they are applied. The draft AI Risk Standard suggests that risk assessment processes should take special care to identify where AI systems are being developed or used in an organisation, and it provides an example of a mapping between the risk management process and an AI system life cycle.
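
The draft's actual mapping is not reproduced here, but the Python sketch below illustrates the general idea. The life cycle stage names and the pairings are our assumptions for illustration only, not quotations from the draft.

    # Illustrative only: the stage names and pairings below are our assumptions,
    # not the draft AI Risk Standard's own mapping.
    AI_LIFE_CYCLE = ["design", "development", "verification",
                     "deployment", "operation and monitoring", "retirement"]

    RISK_PROCESS_MAPPING = {
        # risk management activity -> life cycle stages where it chiefly applies
        "risk identification":   ["design", "development", "deployment"],
        "risk analysis":         ["development", "verification"],
        "risk evaluation":       ["verification", "deployment"],
        "risk treatment":        ["deployment", "operation and monitoring"],
        "monitoring and review": AI_LIFE_CYCLE,  # continuous across the whole life cycle
    }

    for activity, stages in RISK_PROCESS_MAPPING.items():
        print(f"{activity}: {', '.join(stages)}")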

Additionally, as part of the risk management process for AI systems, organisations are encouraged to consider the following:

  • Environment of an organisation’s stakeholders – are they part of the organisation or are they customers, suppliers, end users or regulators?
  • Inherent uncertainty in various parts of the AI system including software, mathematical models and human-in-the-loop aspects.
  • Consistent evaluation of effectiveness and appropriateness of measurement methods.
  • Consistent approach to determining risk levels and potential impact of AI systems, for example, on the organisation or individuals whose data is used to train AI systems.
  • Organisation’s AI capacity, knowledge level and ability to mitigate apparent AI risks.
  • Identification of risk and its sources.

AI Standards Hub in the UK

The development of a separate ISO standard on AI risk management appears to be consistent with the UK’s AI strategy, which places a great deal of emphasis on the development of global technical standards. For example, in January this year the UK government announced the creation of a new AI Standards Hub, which we discussed in a previous article. The Hub will be used to create practical tools for businesses, bringing the UK’s AI community together through a new online platform and developing educational materials to help organisations benefit from AI. Although the Hub was not specifically set up to address risk assessment, one of its key aims (educating, training and developing professionals) is closely aligned with the draft AI Risk Standard. The draft AI Risk Standard emphasises that stakeholders should understand the impact of AI systems on their organisations, and it also calls for transparency of risk assessment processes.

Next Steps

At the time of writing, the Enquiry stage has been completed and comments have been received on the draft AI Risk Standard. We understand that the ISO Editor has proposed responses to those comments. The Working Group responsible for the development of the draft AI Risk Standard has been invited to submit reconsideration requests in respect of any of the Editor’s proposed responses with which it does not agree. We understand that the deadline for reconsideration requests is 23 May 2022. Once all reconsideration requests have been processed, the Editor will provide updated responses to the comments, and an updated draft will be provided to the Working Group for final editorial improvements and approval. At that stage, the Working Group will decide whether the draft AI Risk Standard can progress to the Approval stage or whether a second Enquiry stage is needed.

The authors would like to thank Abigail Atoyebi, Associate at CMS, for her assistance in writing this article.