As new chairman Paul Atkins took the reins at the SEC, he publicly advocated embracing innovation, including emerging technology, in what he predicted would be a “new day” at the SEC.
Artificial intelligence is high on the agenda of any discussion about technology-based market innovation, though it’s unclear how the SEC will address the use of AI by asset managers, following the withdrawal of the SEC’s first attempt to regulate it.
On June 12, Atkins’ SEC withdrew 14 Gary Gensler-era rule proposals, including the agency’s first-ever effort to regulate the use of AI by asset managers, leaving the industry without a concrete framework on how to implement artificial intelligence in their operations and investment strategies without running afoul of the SEC.
Not that there have been many complaints from asset managers, who argued that the original proposal’s focus on predictive analytics and specific computing setups was too broadly defined to be of much help as a general regulatory framework.
Industry pushback on proposed AI rule
The original proposal was “underbaked” and would have imposed a very complicated compliance infrastructure, because the SEC’s broad definition of covered technology could sweep in an extensive range of technologically assisted decision-making, according to Karl Paulson-Egbert, global co-chair of the investment funds practice at Baker & McKenzie LLP.
The rule proposal would have explicitly forbidden the use of AI systems in investor interactions unless specific safeguards were baked in to prevent conflicts of interest between advisers and investors.
“Everybody started talking about AI and the SEC felt like they had to do something, and they put together a rule that was too loose in its definition of what AI is,” Paulson-Egbert said.
Comments posted by the Investment Company Institute and Investment Advisers Association also warned that the language in the AI proposal was so imprecise it could accidentally put Excel-based analytical tools in the same category as the most sophisticated AI-powered analytics.
The decision to withdraw the AI rule was no surprise to asset managers, many of whom were critical of its “extremely broad” definitions of predictive analytics and other AI-related technologies, according to Aaron Pinnick, senior manager of thought leadership at ACA Group.
“I think for a lot of us who are watching it, we weren’t really surprised when that one got pulled back,” Pinnick said.
“At the most basic, the concern is about hallucination risk where people are trusting the model and not verifying the outcome of that model, and the model is just wrong,” said Pinnick.
Fiduciary responsibility applies no matter what technology is involved, though advisers need to keep an extra-sharp eye out for conflicts in use cases such as robo-investing or AI-powered strategies, where investment models can drift and create unforeseen conflicts, Pinnick said.
“The SEC is perfectly capable of bringing enforcement actions for improper disclosure of the use of AI or for misleading statements about the use of AI,” Paulson-Egbert said. “I think it is better to think holistically about conflicts rather than singling out any one piece of technology.”
In May 2024, the SEC announced the adoption of amendments to Regulation S-P to govern investors’ nonpublic personal information and require firms to address the expanded use of technology and the associated risks; the amendments will take effect for large advisers on December 3.
“Reg S-P governs the unauthorized use and access of sensitive customer data,” said Pinnick. “In the US at least, it’s not a huge deal at the moment, but it’s about to become a much bigger deal.”
What is on fund boards’ radar?
Fund boards still need to keep an eye out for AI-related problems and ask questions to uncover any potential conflicts of interest or other problems that could be created by AI-generated advice or trading, Paulson-Egbert said.
The goal is to make sure any new technology, however novel or futuristic, is used in ways that serve the best interests of investors.
“[Fund boards] can look at the proposed rule that was withdrawn, certainly for inspiration. But they can think about how to adapt it the best way that makes sense for their firms to use AI,” Paulson-Egbert said.
AI-related evaluations will likely be integrated into annual Section 15(c) evaluations, simply by adding potential AI conflicts to the usual questions about performance, fee levels and the language used to market innovative investment products, he said.
AI is already in heavy use at most asset managers, mostly to automate internal operations, but relatively few firms use generative AI or AI-powered tools to interact with clients, according to Pinnick.
“From the conversations I’ve had, there is cautious optimism about the tools. There’s still, in general in our industry, a real hesitancy to risk having a client interaction go wrong because of AI,” said Pinnick, who suspects many firms will be fast followers but are not willing to stand on the leading edge of implementing AI tools in customer interactions.
He also warns that fund boards and advisers should ensure they have a formal process to verify and validate AI-generated results, to protect investors from technology-based risks.
Future of AI regulations unclear
It is unclear whether the SEC under Atkins will revisit new regulations covering AI or machine-learning software; there have been no explicit suggestions that AI-related rule proposals are in the works.
Jephte Lanthia, co-founder of Basswood Counsel, who most recently served as a special counsel in the office of the advocate for small business capital formation at the SEC, suggested the agency has shown an appetite and willingness to understand AI technology while weighing its three-part mission differently under a new chairman.
“Particularly when you’re looking at the chairmanship in light of the Trump administration memo on AI, the administration tells the SEC and certain regulatory bodies to not be so tough on innovations on AI,” Lanthia said. “That’s why I was not surprised those rules, particularly that rule was also withdrawn.”
The industry may see a surge of state-level AI regulation designed to protect investors, however, according to Paulson-Egbert.
“I’d expect that the SEC at some point in time, maybe not under Atkins, but at some point, will establish greater clarity and guidance and rulemaking around AI,” Pinnick said. “They are trying to go back to their core mission of investor protection.”