On Monday, the National Institute of Standards and Technology (NIST) announced efforts to help the public improve the safety, security and trustworthiness of artificial intelligence. The efforts include guidance on detecting, authenticating and labeling synthetic content and a competition to create tools that help do just that.
NIST's efforts were accompanied by the U.S. Patent and Trademark Office publishing
Two NIST documents released Monday address AI risks. While banks are not specifically required to maintain AI risk management frameworks, regulators generally expect them to maintain cybersecurity risk frameworks, which can cover the risks posed by AI. For banks large enough to develop their own software and AI systems,
The efforts directly respond to President Joe Biden's
"The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time," Raimondo said.
As part of the announcement, NIST issued four guidance documents on Monday. The first two will help manage the risks of generative AI — the technology that enables chatbots and text-based image and video creation tools — and serve as companions to NIST's
An AI framework released by the National Institute of Standards and Technology this week offers the government's advice on coping with the technology's pitfalls and making AI software trustworthy.
The first document provides
The
While watermarks in particular have shortcomings, such strategies could improve banks' ability to identify doctored images of photo IDs and checks. The document surveys the current state of the art in detecting AI-generated images, which can be helpful to banks seeking to detect
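To illustrate the core idea behind watermark-based detection, the toy sketch below hides a known bit signature in the least-significant bits of image data and later checks for it. This is a hypothetical illustration only, not NIST's or any vendor's method; real provenance schemes use far more robust techniques (cryptographic watermarks, signed metadata) precisely because a trivial mark like this one can be stripped.

```python
# Toy invisible watermark: embed a known signature in the least-significant
# bits (LSBs) of raw pixel bytes, then test for its presence later.
# Illustrative sketch only; production watermarking is far more robust.

SIGNATURE = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary 8-bit mark (hypothetical)

def embed(pixels: bytes) -> bytearray:
    """Return a copy with the LSB of the first 8 bytes set to the mark."""
    marked = bytearray(pixels)
    for i, bit in enumerate(SIGNATURE):
        marked[i] = (marked[i] & 0xFE) | bit  # clear LSB, write mark bit
    return marked

def is_marked(pixels: bytes) -> bool:
    """Report whether the signature appears in the LSBs."""
    return [b & 1 for b in pixels[:len(SIGNATURE)]] == SIGNATURE

raw = bytes(range(16))              # stand-in for image pixel data
print(is_marked(raw))               # False: unmarked data
print(is_marked(embed(raw)))        # True: mark detected after embedding
```

The shortcoming the article alludes to is visible here: re-encoding, resizing, or simply flipping those low-order bits erases the mark, which is why watermarking is treated as one signal among several rather than a complete answer.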
The
NIST also opened registration for
NIST promises a similar competition "coming soon" for text-to-image systems, which create images based on a prompt. That competition will also focus on determining whether an image was created by a human (such as a photograph or drawing) or generated synthetically by AI, and could give the public a better understanding of how companies can distinguish fake images from real ones.
AI presents risks that are "significantly different from those we see with traditional software," according to NIST director Laurie E. Locascio, necessitating separate but parallel efforts on secure software development and cybersecurity as a whole.
"These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation," Locascio said.