An investigation by the Federal Trade Commission into practices at ChatGPT maker OpenAI highlights some of the primary risks of AI on which regulators are focusing, including many risks that concern banks. One special area of focus is protections for users' personal data.
The Washington Post
OpenAI founder and CEO Sam Altman said in
Banks have
The FTC's investigation touches on multiple concerns that lawmakers have
The bulk of the FTC's request revolves around "false, misleading or disparaging statements" that OpenAI's models could make or have made about individuals. For banks, perhaps the most relevant requests concern the company's practices with respect to protecting consumer data and about the security of the model itself.
During a Tuesday hearing, lawmakers talked through how to properly regulate the wide-ranging uses of AI. Some voiced support for forming a new AI agency.
Protecting consumer data
The FTC requested details from OpenAI about
After that breach, OpenAI published technical details on how the breach happened. In summary, a change the company made to a server caused the server to, in certain cases, share cached data with users even if the data belonged to a different user.
The FTC also asked OpenAI about its practices for handling users' personal information, something that
Regulators and lawmakers have also expressed concerns about the ends to which companies have used large language models. In the May hearing before the Senate Judiciary Subcommittee on Privacy, Technology and the Law, part of the Senate Judiciary Committee, Senator Josh Hawley asked Altman about training AI models on data about the kinds of content that gain and keep users' attention on social media and the "manipulation" that could come of that amid what he called a "war for clicks" on social media.
"We should be concerned about that," Altman said, but he added that OpenAI does not do that kind of work. "I think other companies are already — and certainly will in the future — use AI models to create very good ad predictions of what a user will like."
Hacking the models
The FTC also asked OpenAI to share any information the company has gathered about what it called "prompt injection" attacks that can cause the model to output information or generate statements that OpenAI has trained the model not to provide.
For example, users have documented cases of getting the model to
This method has worked for other, contrived role-playing scenarios, as well.
Banks that have launched AI chatbots have been careful not to give the products any capabilities that go beyond what the bank needs them to do, according to Doug Wilbert, managing director in the risk and compliance division at consulting firm Protiviti. For example, AI chatbots like Capital One's Eno cannot answer even some seemingly basic questions, like whether the chatbot is a large language model.
"It's not going to answer everything. It's going to have a focus on particular areas," Wilbert said. "Part of the problem is giving bad information to a client is bad, so you want to ring-fence what it's going to say and what it's going to do, because especially on the customer service side, regulators are looking at wait times, chat times, responses — things like that."