Rise of 'shadow AI' presents dilemmas for tech leaders

DeepSeek, ChatGPT and Gemini app icons on a smartphone

As generative AI becomes more accessible to the public, some employees are using unsanctioned tools without telling their managers. The ripple effects of this "shadow AI" include higher productivity and possibly more favorable performance reviews for employees, dilemmas for their managers and security headaches for tech staff.

Technology leaders at banks and other companies must decide whether to close their eyes to this rogue use of artificial intelligence, police it with lockdowns or forced disclosure, or develop other approaches.

Adding to the difficulty of the decision is that generative AI is bringing productivity benefits to the workplace. A study published in February by the Federal Reserve Bank of St. Louis found that across industries, 28% of workers use generative AI at work. Among those who had used it in the past week, all reported that the technology saved them some time: 21% saved four hours or more that week, 20% saved three hours, 27% saved two hours, and 33% saved an hour or less. People who used it more frequently reported greater time savings.

A separate study found managers evaluated content produced with the assistance of ChatGPT more favorably than work done without it, as long as they didn't know generative AI was involved. The study, which is expected to be released soon by the Paris business school HEC, interviewed 130 mid-level managers at a consulting firm and found that when employees admitted they had used ChatGPT, managers often undervalued their work. When team members kept their use of ChatGPT concealed, managers gave more positive evaluations. 

Managers generally couldn't tell when tools were used unless they were explicitly told, the study found. About 44% of managers suspected employees had used ChatGPT when they hadn't. 

Saving mental power

Like everyone else, banks struggle to manage stealth AI.

"I think it is very real, and I think it's happening, even across those very large-scale banks that have locked down their IT," Gilles Ubaghs, a strategic advisor for commercial banking and payments at Datos Insights, told American Banker. "I've been hearing quite a bit from bankers that staff use these technologies all the time."

If a bank blocks the use of publicly available generative AI tools on its network, employees can go to their own phones and laptops and use them, Ubaghs said. Common uses include finding a document, drafting emails and summarizing customer calls. Usage is especially common in banks' marketing and product teams, he said. 

"This is a challenge I think that all banks are going to face," Ubaghs said. "These tools are handy. You can ask a quick question and they give you a quick summary or analysis. It's not just helping you get your job done — it can also make you better at the job you're doing."

For instance, everyone should have the skills to write an email, he said. "But writing a good one can take time. It takes a lot of mental power to word it just right, it soaks up a lot of productive time and energy. That blank-sheet-of-paper challenge is a big one."

Younger workers, especially, are used to generative AI.

"The workforce that's coming in is very comfortable using these tools and they will keep using those tools," Ubaghs said.

The fact that employees are keeping their use of gen AI a secret from their managers and HR is not necessarily bad, he said. 

"If anything, I think it's a sign that these organizations need to come to grips with this technology faster, because especially if they're seeing improvements in how their staff are behaving, that suggests they should be using it more widely," Ubaghs said. 

Banks' wary approach

When OpenAI launched ChatGPT in November 2022, some banks banned it outright and put up firewalls blocking access to it from their networks. Last year, an American Banker survey found that 30% of banks were restricting use of generative AI tools, at least for some users. 

Banks including JPMorgan Chase, Morgan Stanley, Zions Bank and Goldman Sachs have set up portals that let employees use approved generative AI tools. This portal approach is emerging as a best practice.

Zions Bank in Salt Lake City is a case in point. The bank provides gated access to Microsoft's M365 Copilot chat, "which effectively meets many of our employees' needs while ensuring data protection," Jennifer Smith, chief technology and operations officer, told American Banker.

Security controls prevent sensitive information from being copied and pasted into external AI tools, even when accessed on personal devices. Other controls prevent Zions employees from using tools like ChatGPT, Claude and Perplexity on corporate devices. Any new gen AI model the bank might deploy is assessed according to the NIST AI risk management framework. 

Zions also provides sandboxes within which employees can safely experiment with AI models.

The bank has updated employee handbooks and governance documents and provided company-wide training to ensure a responsible and informed AI strategy, Smith said. It also holds hackathons to encourage people to come up with ideas for AI use cases.

"I believe the benefits of generative AI outweigh the risks when managed effectively," Smith said. "A thoughtful approach — rooted in governance, risk assessment, and ethical implementation — allows companies to harness AI's advantages while minimizing potential downsides."

Advice for banks

Zions and other banks have reason to worry about data security as employees use gen AI, Ubaghs said.

"Most banks, very rightly, are hyper nervous, and would absolutely not allow sensitive data to be uploaded into large language models," Ubaghs said. "Picture a staffer who, stressed and under pressure, uploads a spreadsheet that they probably shouldn't, just to get some results. Nobody knows about it. Maybe it's being absorbed by a Chinese AI company somewhere, but you'll probably never hear about it again, so you probably get away with it." From the employee's perspective, the reward is worth the risk.

"I think banks need to accept that these things are going to happen to some degree," Ubaghs said. "They should be teaching people, drilling into them best practices such as don't use unauthorized tools and don't upload sensitive data or proprietary information. There needs to be an understanding that people are going to use these tools, and I think that's only going to rise."

Overall, banks would be wise to support staff use of gen AI and provide them with guidelines and space to use the tools effectively, rather than try to crack down on shadow AI, Ubaghs said.

"There are always going to be workarounds," he said. "People are busy. If there are tools they can find in their day-to-day life to make these things easier and more automated, helping them be more productive, maybe they should be rolled out to more people, to get better outcomes for the bank as a whole."

HEC Professor David Restrepo Amariles, who led the study mentioned above, recommends that firms establish clear policies regarding AI use and make disclosure of AI use mandatory. He also suggests companies should create incentive systems to ensure that employees' efforts are fairly recognized while encouraging transparency in AI adoption.

"Successful integration of AI tools like ChatGPT requires not only transparency, but also fair recognition of human effort and well-balanced incentives," he said.
