The National Institute of Standards and Technology has released risk management advice for organizations developing or using AI systems. The federal government agency says it aims to cultivate trust in AI technologies and promote AI innovation while mitigating risk.
The guidance, called the AI Risk Management Framework, is voluntary and is intended to help organizations that design, develop, deploy or use AI systems manage the technology's risks.
"AI is going faster than people realize," said Sumeet Chabria, founder and CEO of ThoughtLinks, a consulting firm that advises banks. Before founding ThoughtLinks, Chabria was chief operating officer, global technology and operations at Bank of America, where he was a
Congress ordered NIST to create this framework in the National AI Initiative Act of 2020.
"This work is critical in the AI space to ensure public trust of these rapidly developing and evolving technologies," said Laurie Locascio, director of NIST, as she announced the framework at an event in Washington on Thursday.
While noting the positive change AI could bring to society, commerce, health, transportation and cybersecurity, she emphasized the need to consider AI's impact on people and the need to make sure bias doesn't seep into AI models.
"If we're not careful, and sometimes even when we are, AI can exacerbate biases and inequalities that already exist in our society," Locascio said. "The good news is that understanding and managing the risks of AI systems will help to enhance their trustworthiness. And this in turn, will cultivate public trust in AI to drive innovation while preserving civil liberties and rights."
The framework document itself notes that the risks of AI systems differ from the risks of traditional software.
"AI systems, for example, may be trained on data that can change over time, sometimes significantly and unexpectedly, affecting system functionality and trustworthiness in ways that are hard to understand," it states. "AI systems and the contexts in which they are deployed are frequently complex, making it difficult to detect and respond to failures when they occur. AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks — and benefits — can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.
What's in the document
The framework's recommendations are grouped under four categories: govern, map, measure and manage. The govern category covers items such as cultivating a culture of risk management within organizations and trying to anticipate, identify and manage the risks a system can pose, including to users and others across society. Under the map category, the framework lists items like, "Intended purposes, potentially beneficial uses, context specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented." The measure category includes measuring privacy risk, fairness and bias. The manage category includes monitoring and continuous improvement of AI models.
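As an illustration only, an organization might track its work under the four functions in a simple register like the sketch below. The category names follow the framework; the specific entries paraphrase the examples above, and the data structure itself is a hypothetical choice, not something NIST prescribes.

```python
# Hypothetical sketch: organizing AI risk activities under the framework's
# four functions (govern, map, measure, manage).
from dataclasses import dataclass, field


@dataclass
class RiskRegister:
    govern: list[str] = field(default_factory=list)   # policies, culture, accountability
    map: list[str] = field(default_factory=list)      # context, intended use, affected parties
    measure: list[str] = field(default_factory=list)  # metrics for privacy, fairness, bias
    manage: list[str] = field(default_factory=list)   # monitoring, response, improvement


register = RiskRegister(
    govern=["Cultivate a risk-management culture", "Assign accountability for AI risks"],
    map=["Document intended purposes and deployment settings", "Identify affected users"],
    measure=["Measure privacy risk", "Test for fairness and bias"],
    manage=["Monitor deployed models", "Schedule continuous improvement reviews"],
)

for function, items in vars(register).items():
    print(f"{function.upper()}: {', '.join(items)}")
```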
The AI risk management framework and the more practical playbooks that will follow are meant to be guides for how to manage societal and ethical risks in the development and deployment of AI systems, said Jacob Metcalf, director of the AI on the Ground Initiative at Data & Society, a nonprofit research organization.
The framework is not technically part of government policy, Metcalf noted, since policymaking is not NIST's job.
"Instead, NIST is offering a new standard that can be written into private contracts between vendors and clients, can be used internally by companies seeking to manage risk, or can be referenced in future policy-making by state and federal agencies as one type of assessment that satisfies regulatory requirements," he said.
The framework is different from the Blueprint for an AI Bill of Rights released by the White House.
Chabria sees the two documents as complementary. The AI Bill of Rights "sets out principles and speaks to the outcomes we desire and why they are important, like safety, privacy and algorithmic discrimination," he said. "NIST aims to put it into action with an implementable risk management framework."
Strengths and limitations
Several observers praised the government's effort.
"The NIST framework is comprehensive and balanced — very high quality," said Brad Fisher, CEO at Lumenova AI, which offers responsible AI software. "While other frameworks exist, the NIST framework will likely become the de facto standard followed in the U.S. and, to a large extent, globally."
"Having an enterprise risk framework refreshed with the right metrics and controls for AI is super important," Chabria said. "Without it, companies using AI will not achieve the right business and customer outcomes."
However, the new framework will be challenging to implement, Fisher noted, because many of the objectives it sets out are not easily met and will require thorough evaluation by business leaders, AI leaders and AI practitioners who recognize the implications for customers and employees.
"This won't be quick and it won't be easy," Fisher said.
A limitation of the NIST framework is that it is voluntary.
"As in many things that are voluntary, early adopters that use this guidance are likely the ones who are better able to comply with its provisions, whereas those who choose not to adopt it may be those with more problematic situations — in other words, those that really need it," Fisher noted.
Because the framework came out of an act of Congress, follow-on congressional action would be needed to require all relevant companies to comply with it.
Chabria said there were a few things he wished the NIST framework included.
"I'd like to see more in the framework about assessing and documenting the business purpose of the AI system and intended customer outcomes," he said. "That is a critical component of setting the context and the right governance: Why is AI the right solution for the business opportunity or problem?"
He would also like to see more guidance on making sure the risk management framework is applied to non-AI systems as well as artificial intelligence software.
"An AI system seldom operates in a vacuum," he said.
He would also like to see more attention to AI resiliency, risks and recovery.
"What happens if a critical, machine learning AI system has a software failure and misfires?" he said.
All that said, Chabria sees this as a useful document for banks and fintechs.
"Any organization starting to just use AI will get a lot of value using this as a starting point and reference," Chabria said. "Mature organizations can get value using it as a reference and comparing it to their current frameworks. Fintechs using AI should review this seriously."