WASHINGTON — The House Financial Services Committee's task force on artificial intelligence questioned federal bank regulators, nonbank financial firms and mortgage companies on their use of AI and on the extent to which Congress should consider legislation on the topic, according to a report released by the panel on Thursday.
The report is the first public action of the working group, a bipartisan effort.
Some agencies said that they did not need federal legislation to address AI concerns, while others said it would be helpful.
"Certain agencies indicated legislative gaps could appear as AI becomes more widely adopted and sophisticated," the lawmakers said in the report.
Although the report was light on details about what kinds of legislation the panel could consider, it did say that the House Financial Services Committee should "explore the potential benefits of a chief AI officer at each financial regulator."
Panelists, who included a wide range of regulators and market participants, said that the use of AI doesn't exempt banks, mortgage lenders or other financial institutions from obligations like nondiscrimination in lending.
"Regulators must leverage their oversight and enforcement authorities to ensure those obligations are met as well as examine alternative compliance processes, where appropriate," the lawmakers said. "Congress and regulators must work to identify any legislative or regulatory gaps or limitations in light of the application of AI in the financial services and housing industries."
Some lawmakers, in roundtables with regulators including the Office of the Comptroller of the Currency, Federal Deposit Insurance Corp. and Federal Reserve, raised concerns about "lack of definitional clarity" about what AI use in financial services actually means.
The Fed, in particular, emphasized the potential financial stability implications of generative AI, a type of machine learning that uses large language models to create text, videos and images, as well as other AI tools.
Specifically, the Fed was concerned about a "monoculture of models" where multiple financial institutions use the same third-party providers, according to the report.
"One panelist warned that the widespread adoption of certain AI models may encourage herd-like behavior in capital markets," the lawmakers said in the report. "Firms reported trying to mitigate this risk by subjecting models to rigorous testing before deployment and actively reviewing the models' outputs for partial or skewed results."
Some panelists said that many smaller institutions, including banks, might not have the resources needed to adopt AI at all.
"While some firms use third parties to implement AI into their operations, others, like single-branch community banks, do not have the financial, technological, or personnel resources to do so," the report said.