The transformative potential and rapid adoption of generative artificial intelligence within investment companies are pushing AI up the worry list for fund boards and their governance advisors.
The use of Gen AI applications has exploded among financial services companies, which are struggling to keep up with the technology and the growing list of risks to the security of data exposed to AI applications. The list of potential threats changes as quickly as the technology, but already includes the potential for fake “hallucinated” content in legitimate documents and the risk of increased negative attention from the SEC, which is on the lookout for fraud and conflicts of interest involving the new tech.
“You have what is a really rapidly changing environment with the generative AI,” said Carolyn McPhillips, president of the Mutual Fund Directors Forum. “No one was talking about this a year ago, and now you can’t have a conversation with directors where it does not come up.”
The sudden popularity of Gen AI has forced fund boards to grapple with the risks of advisers’ use of the technology before they have a full view of the concerns and opportunities it may bring, said Hassell McClellan, an independent director and chair of the John Hancock Group of Funds board.
When dealing with a technology whose risk profile changes as quickly as its list of capabilities, he said, “you may not be able to completely see around the corner whether it’s a shadow or whether there are growling noises.”
SEC worries
Some of the concerns were laid out by SEC Chairman Gary Gensler in a February speech: that AI decisions are often unexplainable, that AI can make biased decisions based on biased data, and that it can hallucinate and make inaccurate predictions.
Gensler also noted that financial AI applications will probably rely on only a handful of base models, which could lead to herd decision-making and create systemic risk for the whole investment industry.
The SEC announced in May that it would withdraw and rewrite its controversial proposal on the use of predictive data analytics by broker-dealers and investment advisers, to address widespread objections, filed as public comments, that the rule as proposed could apply to non-AI technologies as well yet did not sufficiently address the range of ways AI could be used.
Gensler made clear at the time, and in public comments since, however, that the agency would continue to pursue instances in which advisers seemed to be using the new technology without sufficient assurance that it was designed to avoid conflicts between the adviser’s own interests and its fiduciary duty to the fund’s investors.
Misuse at that strategic level is not the only risk AI applications pose, however.
When individuals use AI informally and without authorization, as is often the case in financial services, experts warn, auditing is even more difficult because the Gen AI apps are owned by Big Tech companies that don’t let customers peek under the hood or offer zero-conflict guarantees for individual accounts.
Whether the SEC comes back with a new version of the rule or not, fund boards must understand as much as they can about AI and its potential implications for advisers from both an investing and operational perspective, McClellan said. “It’s not debatable whether or not it’s coming, and it’s not debatable whether it will have an impact.”
Ask the adviser
Fund boards need to ask about their investment advisers’ use of AI, and about the data they are using to train the AI models, said Daniel Michael, partner at Skadden Arps and former chief of the SEC enforcement division’s complex financial instruments unit.
Currently, Michael said, the bulk of AI use among investment advisers is focused on saving time and resources by automating labor-intensive customer interactions, such as screening emails to identify which can be dealt with automatically and which need to be interpreted by humans.
AI is also being used to automate the screening of large volumes of communications records for signs of possible insider trading on which humans should follow up, and for large-volume data analysis supporting risk management and trading strategies, Michael said.
All those functions are valid uses for AI, but leaving any of them entirely in the hands of AI, especially those involving investment recommendations or decisions, could cause problems with regulators, he said.
“There are obviously sensitive tasks that should have human review and analysis. People are trying to identify the areas that are on the relatively safe end of the spectrum,” he said.
Boards should be especially careful to scrutinize how fund advisers are protecting nonpublic information in their use of AI, Michael said. For example, if the AI were to gain access to client information and develop trades based on that information, it could result in frontrunning trades. Or a model may simply disclose the nonpublic information, which would violate securities laws, he said.
Ask the service providers
That scrutiny also needs to extend to the investment adviser’s vendors, said Joyce Li, CEO of Averanda Partners, which advises boards on the risks and benefits of AI.
Investor data could be exposed, for example, if a third-party service provider used an AI note-taking or transcription tool during a meeting in which sensitive information was discussed, she said. Many free or low-cost AI tools are cloud-based and include no special security functions an adviser could use to make sure any sensitive information discussed would not be included in the AI provider’s database of content used to train other AI models, Li said.
Fund boards need to keep an eye out for AI models likely to take sensitive data out of the control of the investment adviser. They also need to keep watching even AI applications they know to be secure, because AI models are designed to evaluate and improve their performance over time, which could alter the application’s targeted objectives with no indication to the investment adviser that its goals have changed, Michael said.
“It continually seeks to improve and refine the results that it’s able to deliver based on its ongoing work, which is a real benefit. But there needs to be some visibility into how it’s doing that,” Michael said.
That lack of visibility will continue to be a problem for investment advisers, who are required by law to put their clients’ best interests ahead of their own, but have difficulty showing clients or regulators that they are living up to their fiduciary duty if they don’t have enough insight into the AI decision-making to understand the results it produces, Michael said. “The use of AI certainly doesn’t absolve anyone of those duties.”
The responsibility of fund board members isn’t necessarily to understand the inner logic of generative AI, McPhillips said. But they do have to understand how it is being used in their funds and know what questions to ask about the problems that could crop up because of limited visibility, data acquisition practices or other known proclivities of AI.
“Understand what the technology does, what data it relies on, what the outcomes are supposed to be and who is testing to make sure that those outcomes are actually occurring,” she said. “The board’s job is to oversee all of that, make sure someone is attuned to what the technology is doing, what the potential risks are, and what that means for the fund complex as a whole.”
AI-evaluation framework: HORSE
McClellan, who grew up in “horse country” in Kentucky, said he thinks of generative AI as like an unbroken horse: “You see it has tremendous potential; you’re not sure how to ride it but you’ve got to get your hands around it. And once you do, you have something of great benefit.”
That analogy led him to develop what he calls his HORSE framework, an acronym for the questions fund boards can ask to understand how generative AI is used and can be used on behalf of their funds:
- How: How are companies using AI, and how will they use it, not just in the fund industry?
- Objectives: What are the objectives of using it? Speedy access to information? Cutting costs? Increased productivity? Reduced headcount?
- Risk: What are the risks? For example, are there biases built into the algorithms? Does it rely on black-box models where the inner logic is unknown?
- Safeguards: What are users doing to manage and understand the risk?
- Expenditures: How much are they spending on it? The level of spending by an AI user indicates its level of commitment to the technology.
The oversight of generative AI is a full-board issue, he said: Every board member needs to learn more about AI from expert presentations at board meetings and conferences, and they need to work closely with the board’s independent counsel to stay up to date on AI issues.
Boards should be asking advisers, subadvisers and vendors the HORSE questions, and including them in their 15(c) questionnaires, McClellan said. As a tool, boards could potentially put AI to work during their 15(c) processes to better analyze and understand data, such as performance data, and enhance their ability to ask the right questions and push back on the adviser’s claims, he said.
To evaluate the fund investment adviser’s oversight of a vendor’s AI risk, boards should seek independent audit results, similar to those of a cybersecurity audit: questionnaires, policy reviews, vulnerability discussions and a third-party assessment, Li said. Boards may also need to assess how the adviser selects its vendors, manages their risks and monitors their performance, as well as its contingency plans for vendor disruptions.
The Mutual Fund Directors Forum recommends another risk guide – the National Institute of Standards and Technology’s “AI Risk Management Framework,” issued in 2023 – which provides criteria for testing AI’s reliability, security and transparency.
Regulator focus
Meanwhile, AI remains a priority of Gensler and the SEC. The SEC’s Division of Examinations reported that AI services would remain one of its areas of focus in 2024, and late last year the Wall Street Journal reported that the SEC had sent out so-called “sweep” letters seeking information from investment advisers about their use of AI.
In February, the SEC announced a settlement agreement with a purported hedge fund manager whom it accused of fraudulently claiming to have AI-guided investment strategies.
As for the SEC’s proposal to regulate the use of AI by broker-dealers and investment advisers, industry attorneys characterized the level of opposition as unusually high, even compared to other unpopular SEC proposals. Gensler has acknowledged that the proposed rule was too broad and probably needs to be scaled back, Michael said.
A key problem of the proposal was how it defined “predictive data analytics” (PDA), a term discussed at SEC meetings as referring to generative AI.
But the actual definition in the proposed rule is so broad that it could cover much less sophisticated technologies, like an Excel spreadsheet or a calculator, said Mitra Surrell, associate general counsel of the Investment Company Institute.
“They didn’t clearly articulate the problem or the risks – the potential harms that they were trying to solve for,” she said.
“The way it’s currently defined, it covers really any tool whatsoever that’s electronic in nature,” said Sanjay Lamba, associate general counsel of the Investment Adviser Association. He expects that the SEC will modify its definition of PDA in a re-proposal, which could occur as early as October.
There’s no question that generative AI is developing quickly and needs risk governance, said Gail Bernstein, general counsel of the Investment Adviser Association. “We would all agree that there are potential risks that could be massive, and you have to figure out how to risk manage that.”
But the SEC’s proposal was aimed at conflicts of interest, which is a different subject, she said, and existing regulations already cover adviser interactions with investors, including how to manage conflicts of interest, disclosure requirements and fiduciary responsibilities.
At its core, the investment business is all about risk and defining acceptable risk parameters, McClellan said, and fund boards are still trying to understand those parameters for generative AI.
It’s an issue of concern for directors, but not yet worrisome, McClellan said: “These are things that I won’t say they keep me awake at night. But I do think about them before I go to sleep.”