New AI tools aim to save Wall Street research analysts time

Wall Street firms have lately been taking a harder look at the cost and return of generative AI.

A new crop of generative artificial intelligence models aims to save buy-side and sell-side analysts time in their research of companies, markets and trends. The technologies, from companies like Bloomberg, FactSet and Brightwave, can summarize earnings call transcripts and comb through SEC filings to find relevant data.

They could save firms, including banks, a great deal of time. But they are arriving as many on Wall Street question the expense and value of generative AI, and as some doubt the accuracy and usefulness of technology known to hallucinate and produce false "facts."

The providers of these tools have answers to these objections. Their tools are worth a closer look, they say, if only to be aware of technology that competitors may use to their advantage.

What the new generative AI tools do

These tools mainly summarize and retrieve information. They use natural language understanding to create a chatbot experience in which users can ask questions in plain English.

In November, FactSet released a generative-AI-based chatbot called FactSet Mercury that helps users find content on the company's platform. Beta customers are using it to ask questions in plain English. It can build charts and tables and summarize documents such as earnings call transcripts.

FactSet is working on generative AI for workflow automation. 

"If you are an investment banker, if you're a wealth manager, if you're a portfolio manager, you have specific workflows that you've been relying on FactSet to help you with for the last 40 years," said Kristina Karnovsky, chief product officer at FactSet. "If you're a banker, it could be building a pitch book. If you are a wealth manager, it could be generating a proposal to a potential new client. If you're a portfolio manager, it could be around asset allocation, or reporting on how that portfolio did this period." 

In March, FactSet made an AI-based transcript assistant generally available.  

"Clients can open up a transcript from a recent earnings call and be able to ask any question about the content in that transcript," Karnovsky said. "They could ask about themes about what the analyst asked the management team that quarter, anything that could be answered based on that single transcript."

In April, the company announced generative AI-based software that can generate source-linked portfolio commentary. FactSet executives say they expect it to be able to reduce the amount of time asset managers, asset owners and wealth managers spend writing portfolio commentary by a factor of eight. 

The company is working on a generative AI model that can create pitch books. 

It's also developing the ability to embed FactSet's gen AI chatbot or workflow automation into an existing application using APIs. 

"It is all about being able to use the AI tools that we have developed, like the conversational experience where you can ask a question and get answers from FactSet or like automating certain workflows, and bring those into your own environment," Karnovsky said. 

Bloomberg announced the availability of AI-powered earnings call summaries on its Bloomberg Terminal in January. 

According to the company, Bloomberg Intelligence analysts trained the tool to understand the nuances of financial language and anticipate what's most important to investors. It provides summary points with context links so analysts can discover related information elsewhere in the Bloomberg system, like company financials, dividend forecasts and supply chain analyses. 

"Most of my team's job is reading and synthesizing trends across companies, so the quality and accuracy of the summarization tool gives us a big edge," said Joyce Meng, managing director and partner at Fact Capital, in the announcement. The tool "makes it easy to read coverage across ancillary and adjacent companies," she said. "It also distills the contentious points so we know where in the material to look for insights on the important debates."

Brightwave, a startup that launched in 2023, has developed an AI research assistant that's intended to generate financial analysis on any subject. It is based on several large language models and proprietary AI models, although the company declined to say which ones.

Users of the software come up with a thesis about a company or a market. They enter it into Brightwave, which searches through quarterly reports, earnings call transcripts, analyst notes, real-time news feeds and other documents to find relevant text and data. It then weaves those data points together into a summary report.
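
Brightwave has not described its implementation beyond saying it combines several large language models with proprietary models, but the workflow above resembles a standard retrieve-then-summarize pipeline: score documents against the user's thesis, keep the most relevant passages and weave them into a report. The sketch below is a generic, toy illustration of that pattern; the `Passage`, `score` and `research` names are invented for the example, and the relevance scoring is deliberately trivial where a real system would use embeddings and an LLM.

```python
# Generic retrieve-then-summarize sketch of the workflow described above.
# Real systems use embedding models and a language model; here the scoring and
# "report" are trivial stand-ins so the pipeline shape is visible and runnable.
from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # e.g. a filing or transcript ID, kept so claims stay attributable
    text: str

def score(thesis: str, passage: Passage) -> int:
    """Toy relevance score: count thesis words that appear in the passage."""
    words = {w.lower() for w in thesis.split()}
    return sum(1 for w in passage.text.lower().split() if w in words)

def research(thesis: str, corpus: list[Passage], top_k: int = 3) -> str:
    """Rank passages against the thesis and weave the best ones into a short report."""
    ranked = sorted(corpus, key=lambda p: score(thesis, p), reverse=True)[:top_k]
    bullets = [f"- {p.text} (source: {p.source})" for p in ranked]
    return f"Thesis: {thesis}\n" + "\n".join(bullets)

corpus = [
    Passage("ACME 10-Q Q2 2024", "ACME revenue grew 12% on data center demand"),
    Passage("ACME earnings call Q2 2024", "Management flagged rising component costs"),
    Passage("Sector note", "Peers are guiding to slower data center spending"),
]
print(research("data center demand is driving ACME growth", corpus))
```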

Brightwave does not claim that its software can do the work of a research analyst.

"You talk to the sharpest analysts, and they're extremely sophisticated people," said Mike Conover, CEO and co-founder of Brightwave. "I think it would be foolish to say that you're going to replace the sharpest, hardest working, most channel checking investment research teams in the world."

What the software can do, he said, is expand their ability to connect the dots. 

"We're building a system that is a partner in thought and you direct its attention and you say, run that down for me and report it back," Conover said. The analyst could then ask Brightwave AI to track down answers to follow-up questions.

Brightwave's clients include buy-side firms such as hedge funds, as well as nonfinancial companies. The company does not share pricing information.

In June, Brightwave announced a $6 million seed funding round led by Decibel Partners, with backing from Point72 Ventures, Moonfire Ventures and angel investors including executives from OpenAI, Databricks, Uber and LinkedIn. 

Demand on Wall Street

A recent Celent survey showed that only about 9% of capital markets firms have generative AI in live use. About 17% have a generative AI proof of concept underway, 18% have it on their 2024/2025 road map, 27% are exploring use cases, 15% have no plans to use gen AI in the next two years and 13% said generative AI is "not of interest to us."

However, "research is a great place to start because the first use cases are about summarization," Monica Summerville, who heads Celent's Capital Markets practice, said. "Analysts have to constantly synthesize information. They have to get a view on sentiment. They have to not just look at the companies they follow, but then they want to see the adjacent companies and competitors." 

Generative AI could increase analysts' productivity and efficiency, Summerville said. 

"Asset managers are very concerned about cost and interested in keeping costs down," Summerville said. "So if it makes their analysts more efficient, it could give them a competitive edge in that their analysts can cover wider areas of the market."

But there are several hurdles generative AI needs to clear to be practical in this setting.

What firms need to look for in generative AI tools

One thing needed to make generative AI useful to analysts is integration with their existing workflows, according to Summerville. 

"People already use a lot of tools to access data," she said. "They've built workflows around those tools already. There are compliance and risk systems that these need to feed into, because if you make a recommendation, you have to check position limits and you have to check if the company's legit." 

Companies keep track of how effective their analysts are by monitoring the recommendations they make and the degree to which they are acted on. 

"Did we act on them in a timely way? And if we did act on them, did that actually increase return on the portfolio? Was that an alpha generating idea?" Summerville said. "There are ways that AI can help with a lot of those things. But if the tools sit outside the current workflows, they're not as useful."

Another critical element is the ability to understand the subject matter and jargon that pervade financial reports and news.

"If you just ask ChatGPT questions about capital markets, often it just gives you a completely wrong answer because it doesn't really understand what you're asking," Summerville pointed out. "It's great if you're asking it for general things, but when you start getting into an area that has more jargon and a lot of secrecy, it's harder to get a good answer."

Context is also important. Whether a new stat or piece of news is good or bad depends on who's asking, she noted. 

"It could be a disaster in some part of the world, but that could be great for my company because we don't have a supply chain issue in that part of the world," Summerville said.

A third concern is hallucinations, which occur when large language models generate false information in the course of predicting the next word. Sometimes those predictions are far off the mark.

"Everyone's worried about this," Summerville said. 

Some firms deal with the risk of hallucination by dialing down the creativity of the system. "But if you dial it down to zero, there's no creativity, then you're really not generating anything," Summerville said.  
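
In practice, "dialing down the creativity" usually means lowering the sampling temperature, which makes the model favor its most probable next token. None of the vendors here has disclosed how it tunes its systems; the sketch below simply illustrates the setting using the OpenAI Python SDK, with the model choice and prompts as assumptions.

```python
# Illustration of "dialing down creativity": temperature near 0 makes the model
# pick its most likely next token, trading variety for repeatable, conservative output.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption for this example
    temperature=0,        # 0 = most deterministic; higher values add randomness
    messages=[
        {"role": "system", "content": "Summarize earnings-call excerpts factually. Do not speculate."},
        {"role": "user", "content": "Summarize: 'Revenue rose 12%, driven by data center demand.'"},
    ],
)
print(response.choices[0].message.content)
```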

Another way vendors address this is by providing attribution — every answer links to the original document, transcript or meeting recording from which it was drawn. Brightwave, Bloomberg and FactSet all provide such links.

"You can look at a point and immediately click through and see where they got this, there's transparency around how they've come up with these," Summerville said. This is important from a regulatory and compliance point of view, because companies have to capture evidence of where they found information, for instance to prove it wasn't derived from insider trading.

Proper data sources are also important. The many lawsuits against OpenAI and Microsoft over ChatGPT's hoovering up of copyrighted content, including New York Times archives, have shone a spotlight on the seemingly random and limitless data that foundation model providers use to train their models.

"Companies sometimes don't like to be transparent about that because they feel that's part of their secret sauce," Summerville said. "But I just don't see how companies can use this if they don't understand the data that went into the training of the large language models." There's also the related issue that the data sources used in model training have inherent bias that will influence the workings of the model.

Bloomberg and FactSet say the generative AI models they offer are trained only on the companies' own data. 

Expense has become more of a factor recently: Wall Street firms are concerned about the cost of artificial intelligence technology and about where the return will come from to justify it.

"Tech giants and beyond are set to spend over $1 trillion on AI [capital expense] in coming years, with so far little to show for it," stated a recent report from Goldman Sachs, entitled "Gen AI: Too much spend, too little benefit?" 

"Companies have to look carefully at the cost, because it's a lot more expensive than you think," Summerville said.
