Managing AI risks effectively: a North American perspective


The National Institute of Standards and Technology (“NIST”), a U.S. government agency, has recently published the second draft of its AI Risk Management Framework (the “Framework”), accompanied by the draft NIST AI Risk Management Framework (AI RMF) Playbook (the “Playbook”). The Framework and the Playbook offer practical guidance for companies on how to address risks in the design, development, use and evaluation of AI products, services, and systems. The Framework is law and regulation agnostic and outcome focused. As AI policy discussions are live and evolving, it may prove to be useful guidance for businesses in the USA and beyond (including the UK and EU).

What is NIST?

NIST’s remit is to promote US innovation and industrial competitiveness “by advancing measurement science, standards and technology in ways that enhance economic security and improve quality of life”. NIST’s outputs include research, standards, and the evaluation of data required to advance the use of, and trust in, AI. NIST’s role and function are similar to those of the UK’s national standards body, the British Standards Institution.

The Framework follows the report on managing AI bias published by NIST earlier in the year, which we have summarised here.

The purpose and structure of the Framework

The Framework is intended to address challenges unique to AI systems and to encourage and equip different AI stakeholders to manage AI risks proactively and purposefully throughout the AI lifecycle. The Framework defines AI as: “an engineered or machine-based system that can, for a given set of human-defined objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy” (adapted from the OECD Recommendation on AI, 2019; ISO/IEC 22989:2022). It is NIST’s intention that the Framework will be used by AI Actors, defined by the Organisation for Economic Co-operation and Development (the “OECD”) as “those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI”.

The Framework and the Playbook are intended for voluntary use, and it is expected that both resources will evolve over time. The Framework is not specific to any territory, nor is it intended to supersede the laws and regulations of any one jurisdiction. Instead, it serves as a tool to better govern, map, measure and manage the risks of AI, supporting organisations in operating under their own national laws and regulations. The Framework describes a process for managing AI risks across a wide spectrum of types, applications, and levels of maturity – regardless of sector, size, or familiarity with a specific type of technology.

The Framework is split into two parts: Part 1 explains the motivation for developing and using the Framework, its audience, and the framing of AI risk and trustworthiness; and Part 2 focuses on the proposed core elements of the Framework, i.e., Govern, Map, Measure and Manage to maximise the benefits and minimise the risks of AI. The Playbook, a companion resource, offers sample practices to be considered in carrying out the guidance, before, during, and after AI products are developed and deployed.

AI presents challenges in managing risk

In line with other publications, including in the UK, the Framework acknowledges that, in addition to the risks to which most technologies are exposed, such as cybersecurity, privacy and safety risks, AI carries additional risks. AI systems can amplify, perpetuate, or exacerbate inequitable outcomes. AI systems may exhibit emergent properties or lead to unintended consequences for individuals and communities. These risks can stem from a variety of sources: the data used to train the AI; the AI system itself; the way the AI system is used; and the way humans interact with it.

Responsible use and practice of AI systems is a counterpart to AI system trustworthiness. AI systems are not inherently bad or risky, and it is often the contextual environment that determines whether negative impact will occur.

The Framework aims to help companies to address a variety of risk issues. It emphasises four main challenges when it comes to managing AI risk in pursuit of AI trustworthiness:

  1. Measuring risk: AI risks and impacts that are not well defined or adequately understood are difficult to measure quantitatively or qualitatively. For example, the metrics or methodologies used by the organisation developing an AI system may not align with those used by the organisation deploying or operating the system (or may not be transparent or documented). Further, AI risks evolve during the lifecycle of the product.
  2. Risk tolerance: The Framework does not prescribe risk tolerance, as stakeholders’ risk tolerances differ. It supports organisations in determining and managing reasonable risk, and in documenting that risk.
  3. Risk perspectives: Eliminating all AI risk is impossible, and expenditure on attempting to do so is wasted, but the Framework does equip stakeholders to distinguish between risks. A risk mitigation culture can help companies recognise that not all AI risks are the same, so they can allocate resources appropriately.
  4. Organisational integration of risk: The Framework is neither a checklist nor a compliance mechanism to be used in isolation. It should be integrated within the company developing and using AI technologies and incorporated into its broader risk management strategy and processes.

AI Trustworthiness presents an opportunity

The Framework defines Trustworthy AI as “valid and reliable, safe, fair and bias is managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy enhanced”.

The Framework assumes that trustworthy AI reduces risk. According to the Framework, trustworthy AI systems should achieve a high degree of control over risk while retaining a high level of performance quality. Achieving this goal requires a comprehensive approach to risk management, including trade-offs among the trustworthiness characteristics. The Framework is not focused solely on minimising the negative effects of AI systems; it also aims to identify opportunities to maximise the positive effects of AI.

Continuous application of the Framework


For the Framework to be most effective, its recommendations should be applied throughout the entire lifecycle of an AI system. As AI systems develop and expand, context, stakeholder expectations and knowledge will change and require review. As shown in Figure 1 of the Framework, NIST has modified the OECD’s framework for classifying AI actors and their AI lifecycle activities to emphasise the importance of test, evaluation, verification, and validation (TEVV) throughout the AI lifecycle.

The Functions of AI Management

The Framework provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks. The four core functions developed by NIST are listed below:

  • Govern: This function is focused on cultivating and implementing a culture of risk management within organisations developing, deploying and acquiring AI systems. Governance provides a structure through which AI risk management can align with organisational policies and strategic priorities, whether or not these are specific to AI systems. Govern is a cross-cutting function that is infused throughout AI risk management and informs the other functions of the process. Senior leadership sets the tone for risk management within an organisation, and with it, organisational culture.
  • Map: This function establishes the context to frame risks related to an AI system. The information gathered while carrying out this function enables risk prevention and informs decisions about processes such as model management, as well as the initial decision on the appropriateness of, or need for, an AI solution. Implementing this function requires a broad set of perspectives from a diverse internal team and engagement with external stakeholders.
  • Measure: This function is focused on employing quantitative, qualitative, or mixed-methods tools, techniques, and methodologies to analyse, assess, benchmark, and monitor AI risks and related impacts. It uses knowledge relevant to AI risks identified in the Map function and informs the Manage function. Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI configurations (a simple illustrative sketch of how such metrics might be recorded follows this list). Processes developed or adopted in the Measure function should include rigorous software testing and performance assessment methodologies, comparisons to performance benchmarks, and formalised reporting and documentation of results. AI systems should be tested before deployment and regularly while in operation.
  • Manage: This function is focused on allocating risk management resources to mapped and measured risks on a regular basis, as defined by the Govern function. After completing the Manage function, plans for prioritising risk, and for continuous monitoring and improvement, will be in place.
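
The Framework does not prescribe any particular tooling, but to make the interplay of the functions concrete, below is a minimal, hypothetical sketch (in Python) of how an organisation might record mapped risks, the tolerances agreed under Govern, the metric values produced by Measure, and the escalation that feeds Manage. All names and thresholds here (AIRisk, RiskRegister, tolerance, and so on) are our own illustration, not NIST’s.

```python
# A purely illustrative sketch, not taken from NIST: one way an organisation
# might record the outputs of the Govern, Map, Measure and Manage functions.
# All names here (AIRisk, RiskRegister, tolerance, ...) are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AIRisk:
    description: str               # Map: the risk, framed in its context of use
    characteristic: str            # trustworthiness characteristic, e.g. "fairness"
    tolerance: float               # Govern: maximum acceptable score set by leadership
    measured_score: Optional[float] = None  # Measure: latest assessed value


@dataclass
class RiskRegister:
    risks: List[AIRisk] = field(default_factory=list)

    def needs_action(self) -> List[AIRisk]:
        """Manage: return mapped risks whose measured score exceeds tolerance."""
        return [r for r in self.risks
                if r.measured_score is not None and r.measured_score > r.tolerance]


# Example: a fairness metric measured above its agreed tolerance is escalated.
register = RiskRegister(risks=[
    AIRisk("Disparate error rates across demographic groups",
           "fairness", tolerance=0.05, measured_score=0.12),
])
for risk in register.needs_action():
    print(f"Escalate for management: {risk.description} ({risk.characteristic})")
```

The point of the sketch is simply that tolerances are set once under Govern, while measured scores are refreshed throughout the lifecycle, reflecting the Framework’s emphasis on continuous, documented review rather than a one-off assessment.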

Figure 5 in the Framework visually demonstrates how these functions operate during the lifecycle of an AI product.

These four high-level functions are broken down into categories and subcategories, which are further divided into outcomes and actions. After adopting the outcomes in Govern, most users of the Framework would start with the Map function and continue to Measure or Manage. According to the Framework, users may apply these functions as best suits their needs for managing AI risks. Some organisations may choose to select from among the categories and subcategories; others will want, and have the capacity, to apply all of them. Assuming a governance structure is in place, the functions may be performed in any order across the AI lifecycle. The Framework reiterates that the core functions should be carried out in a way that reflects diverse and multidisciplinary perspectives, potentially including the views of stakeholders from outside the company.

How the Framework and the Playbook complement one another

The Playbook (mentioned at the beginning of this article) is a digital, interactive tool that helps organisations navigate the Framework and achieve its outcomes through suggested tactical actions. The Playbook is a work in progress: at present, actions, references and documentation guidance are available only for the Govern and Map functions, with the other two functions, Measure and Manage, expected to be added in the future. The Playbook complements the Framework by providing additional resources, including guidance, proposed actions, transparency documentation and references, to better inform the user as to how AI risk can be managed.

Application of the Framework and the Playbook in the UK

NIST’s recommendations align with the UK’s strategy, which aims to establish the UK as a world leader in the AI economy (see further details here). They also echo the approach already taken in the UK, for example in the guidance produced by the Alan Turing Institute on the deployment of AI in the public sector (which we understand will be reworked for use by the private sector).

What’s Next?

The Framework’s purpose is to address challenges inherent in AI systems rather than technology more broadly. It is not intended to be a central resource for managing all AI risks; rather, it provides practical guidance on how to manage AI risk. The Framework and the Playbook should therefore not be used in isolation, but recognised as useful resources for evaluating the impact of trustworthy and responsible AI.

The Framework and the Playbook are in draft form; NIST invited feedback until 29 September 2022, and a further workshop took place on 18 and 19 October 2022 (the recording and slides of that workshop and of earlier workshops can be accessed here).

The authors would like to thank Grainne Duffy, associate, for her assistance in writing this article.


Our experts will be closely monitoring these developments and predictions during the course of the year, providing regular updates and analysis through Law-Now, the CMS subscription service. Sign up today and ensure that you never miss an important update again.