On Wednesday, the New York State Department of Financial Services, or NYDFS, issued guidance in the form of an industry letter outlining the cybersecurity risks that artificial intelligence poses to financial institutions and the measures those institutions should take to mitigate them.
The four risks the department highlighted fall into two categories: how threat actors can use AI against firms (AI-enabled social engineering and AI-enhanced cybersecurity attacks) and the threats posed by firms' own use of and reliance on AI (exposure or theft of nonpublic information and increased vulnerabilities stemming from supply chain dependencies).
The six examples of mitigations the department highlighted in its industry letter will be familiar to many cybersecurity and risk professionals — among them, risk-based programs and policies, vendor management, access controls and cybersecurity training. However, their mention in the guidance is notable because the department directly tied these practices to requirements set forth in its regulations.
Adrienne A. Harris, the department's superintendent, acknowledged that even as the guidance focused on the risks of AI, the technology also presented opportunities for financial institutions to improve their cybersecurity.
"AI has improved the ability for businesses to enhance threat detection and incident response strategies, while concurrently creating new opportunities for cybercriminals to commit crimes at greater scale and speed," Harris said in
How threat actors use AI against banks
Social engineering, which relies on manipulating people to break into a system rather than exploiting more technical vulnerabilities, has long been a concern in the cybersecurity space. Many firms, including KnowBe4, Fortinet and SANS Institute, offer security awareness training programs that focus on mitigating the threat of social engineering by teaching employees to recognize the signs that they are being targeted by such a campaign.
One factor that sets the more dangerous social engineering campaigns apart is how realistic they are, and interactivity is a key part of that realism. AI has enhanced threat actors' ability to present a convincing front through deepfakes, according to the NYDFS guidance.
One example cited in the guidance took place in February, when a clerk working for the Hong Kong branch of a multinational company transferred $25 million to fraudsters after being tricked into joining a video conference in which every other participant was an AI-generated deepfake, including one impersonating the firm's chief financial officer. The clerk made 15 transactions to five local bank accounts as a result, according to news reports at the time.
AI can also enhance the technical abilities of threat actors, according to the NYDFS guidance, enabling less technically skilled actors to launch attacks on their own and improving the efficiency of those who are more technically adept, such as by accelerating malware development. In other words, AI can help threat actors at nearly every stage of an attack, including in the middle of an intrusion.
"Once inside an organization's information systems, AI can be used to conduct reconnaissance to determine, among other things, how best to deploy malware and access and exfiltrate non-public information," the guidance states.
How banks' reliance on AI can pose threats
A threat actor doesn't need to infiltrate a bank's IT systems to steal its data; they can also steal it from third parties the bank has entrusted with that data. Indeed, targeting third parties has been a growing tactic for threat actors hoping to steal consumer data, independent of the rise of AI.
So-called third-party risks and supply chain vulnerabilities are a common concern among banks and regulators, and AI magnifies these concerns.
"AI-powered tools and applications depend heavily on the collection and maintenance of vast amounts of data," reads the NYDFS guidance. "The process of gathering that data frequently involves working with vendors and third-party service providers. Each link in this supply chain introduces potential security vulnerabilities that can be exploited by threat actors."
Because banks and third parties must collect vast amounts of data to enable and improve their AI models, NYDFS also pointed to the exposure or theft of these troves as a risk of relying on AI.
"Maintaining non-public information in large quantities poses additional risks for covered entities that develop or deploy AI because they need to protect substantially more data, and threat actors have a greater incentive to target these entities in an attempt to extract non-public information for financial gain or other malicious purposes," the guidance reads.
Six strategies for risk mitigation
The guidance from NYDFS emphasized the need for financial services companies to practice the principle of defense in depth: layering multiple security controls with overlapping protections so that if one control fails, others are in place to prevent or mitigate the impact of an attack.
From a compliance perspective, the first and most important measure banks that operate in New York can implement is the cybersecurity risk assessment. This is one of the most critical aspects of the NYDFS Cybersecurity Regulation, also known as Part 500, which the department last amended in November 2023.
The Cybersecurity Regulation requires banks to maintain programs, policies and procedures that are based on these risk assessments, which the guidance said "must take into account cybersecurity risks faced by the covered entity, including deepfakes and other threats posed by AI, to determine which defensive measures they should implement."
The Cybersecurity Regulation also requires banks that operate in the state to "establish, maintain, and test plans that contain proactive measures to investigate and mitigate cybersecurity events," such as data breaches or ransomware attacks. Again, the NYDFS guidance indicated that AI-related risks must be accounted for in these plans.
Second, NYDFS "strongly recommends" that each bank consider, among other factors, the threats that its third-party service providers face from the use of AI and how these threats could be exploited against the bank itself. Efforts to mitigate these threats might include imposing requirements on third parties to take advantage of available enhanced privacy, security and confidentiality options, according to the guidance.
Third, banks need to implement multifactor authentication, which the Cybersecurity Regulation requires all banks to use by November 2025. The department has
Fourth, the department reminded banks of the need to provide "cybersecurity training for all personnel" on at least an annual basis, and this training must include social engineering — another requirement set forth by the Cybersecurity Regulation. This ensures that bank personnel are familiar with how threat actors can use AI to enhance their campaigns.
"For example, trainings should address the need to verify a requester's identity and the legitimacy of the request if an employee receives an unexpected money transfer request by telephone, video, or email," reads the guidance.
Fifth, covered entities "must have a monitoring process in place" that can identify new security vulnerabilities promptly so they can remediate them quickly. The guidance reminded banks that the Cybersecurity Regulation requires them to monitor the activity of users (primarily employees), including email and web traffic, to block malicious content and protect against the installation of malicious code.
"Covered Entities that use AI-enabled products or services, or allow personnel to use AI applications such as ChatGPT, should also consider monitoring for unusual query behaviors that might indicate an attempt to extract NPI and blocking queries from personnel that might expose NPI to a public AI product or system," the guidance reads.
Sixth and finally, the guidance recommended banks use effective data management practices. One important example is disposing of unused data when it is no longer necessary for business operations. This practice is required by the department's regulations, and starting in November 2025, banks will also need to maintain and update data inventories. This "should" include identifying all information systems that rely on or use AI, according to the guidance.
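The guidance does not dictate a format for these inventories, but the record-keeping it describes could be sketched along the lines below. The fields and entries are hypothetical illustrations, not a schema from the regulation: each record notes whether a system relies on AI, the kinds of nonpublic information it holds, and when its unused data is due for disposal.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SystemRecord:
    """One entry in a hypothetical data inventory."""
    name: str
    uses_ai: bool               # relies on or uses AI, per the guidance
    npi_categories: list[str]   # kinds of nonpublic information held
    dispose_after: date         # when unused data should be disposed of

inventory = [
    SystemRecord("loan-underwriting-model", True, ["credit history", "income"], date(2026, 1, 1)),
    SystemRecord("branch-scheduling", False, [], date(2024, 6, 1)),
]

# Surface the AI-reliant systems and any data past its disposal date for review.
today = date.today()
ai_systems = [record.name for record in inventory if record.uses_ai]
overdue = [record.name for record in inventory if record.dispose_after < today]
print("Systems that rely on or use AI:", ai_systems)
print("Data due for disposal review:", overdue)
```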