Companies that attempt "AI washing" — claiming their technology has AI capabilities it doesn't actually have — are on notice: the Securities and Exchange Commission is watching.
"If an issuer is raising money from the public or doing regular quarterly filings, you're supposed to be truthful in those filings," SEC Chair Gary Gensler said in a fireside chat on Wednesday afternoon with Robert Weissman, president of Public Citizen, a Washington, D.C., consumer advocacy group. "This is the basic tenet of our capital markets: full, fair and truthful disclosure. And we have found over the decades when new technologies come along, sometimes there's investor interest, one might say buoyant interest at times. So the issuers really have to be careful to be truthful about what they're saying about their claims if they're using artificial intelligence and what they're saying their use of it is, but also truthful about the risk and how they're managing the risk."
"Simply put, don't AI wash the same way that you wouldn't have in other eras — in the 1990s, one might have called it internet wash," Gensler said.
Gensler also spoke of the dangers of AI in fraud and crime, the risk of racial bias in AI models, and the macro risk of a monoculture in which everybody uses the same models or the same data sources to train and feed them.
One particular warning he gave was about deepfakes.

"If you're going to use artificial intelligence to fake out the public, using deepfakes or otherwise, that's still against our laws to mislead the public," he said.
Gensler also expressed concern about how investment advice may be delivered or informed by AI, and where there might be conflicts of interest. He worries that models may be optimized to benefit an investment advisor or a broker-dealer, rather than the customer.
"There might be a conflict if you're communicating to investors, whether that be recommendation advice or prompts or behavioral nudges that are guiding them," Gensler said. "The fiduciary duty still applies, but to ensure that in essence, you're putting the investor's interest ahead of the robo-advisor or broker-dealer's interest in those investor interactions, that's fundamental." The SEC has proposed a rule intended to help ensure this.
Another concern for Gensler is the potential for harm in using AI for narrowcasting, in which a company sends targeted messages about products or pricing to individuals based on personal data drawn from smart devices, such as a Fitbit, a car (via telematics), an air conditioner or a refrigerator.
The potential for racial bias in models is also a worry for the SEC.
"I think it's a real challenge across our society, not just in finance, in using artificial intelligence to select which resumes to read, which people get interviews, when AI is used to determine whether somebody gets medical treatment or not," he said.
In finance, he is concerned about the use of AI in determining who gets credit cards, student loans or mortgages.
"The data itself reflects the biases that still exist," Gensler said. "I wish I could say they don't, but they still exist in our society writ large. We're better than when I was a kid, but we're not there yet."
In addition to all these risks, Gensler pointed out that there are overarching, macro risks to the use of AI in financial services.
He noted that the number of cloud providers in the U.S. can be counted on the fingers of one hand, and the number of major search providers is smaller than that. The number of major providers of artificial intelligence technology may well shake out to be very small, too, and there are only a handful of major data aggregators providing massive amounts of information to large language models.
"That centralization and that reliance on two or three base models or data aggregators, what does it ultimately mean?" Gensler said. "It means we're likely to have what you would call in science and biology a monoculture that without even knowing it, there will be hundreds if not thousands of financial actors relying on some central data or central base model."
These models may not have explainability, either, he noted. This could all heighten the risk of a financial crisis.
"If those nodes have it wrong, the monoculture goes one way, well, then there's a risk in society and in the financial sector at large," Gensler said.
Congress is also paying attention: a group led by Congressmen French Hill and Stephen F. Lynch will explore how artificial intelligence is influencing the development of new products and services, fraud prevention and other areas across the financial services and housing industries.
Despite all these warnings, Gensler characterized artificial intelligence as a net positive for society and for efficiency and access in the financial markets.
"It's every bit as transformative, as big as other transformational times like the 1920s with electrification of the factory and the automobile and the refrigerator and things like that," Gensler said. "I think it's more transformative than the internet. And that's saying a lot. In terms of finance, I think it can drive to greater efficiency."
It will mean some changes in job functions, he added.
"But we're already using this in a lot of places in finance for what some people might call the back office compliance and check processing and compliance and claims processing and insurance companies and the like," Gensler said. "So I think it will drive potentially to greater efficiency, lower cost. The question is whether investors will get the benefit of those lower costs. The producers of the services will have lower cost for sure. And then I also think that there will be greater access, as well."