Avoiding SEC ‘AI washing’ penalties

Say what you do, do what you say, and document it, or risk penalties like the $400,000 the SEC levied on two firms that exaggerated their AI capabilities

The SEC has announced enforcement actions that penalized two companies a total of $400,000 for misleading investors about the performance and capabilities of services falsely described as being powered by artificial intelligence (AI).

On March 18 the SEC announced a penalty of $225,000 for false claims of AI use in a robo-adviser owned by Toronto-based Delphia (USA) Inc. It also announced a fine of $175,000 for San Francisco-based Global Predictions Inc., which it accused of overstating the performance of its advisory services and falsely claiming AI capabilities in its online trading platform.

The charges are the SEC’s first enforcement of rules banning the misuse of AI by investment services, whether through undisclosed conflicts of interest, through overstating the capabilities of AI-driven analytic systems to lure in new clients, or through “AI washing” – falsely claiming to use AI in systems with no AI or machine-learning capabilities.

SEC chair Gary Gensler has spoken often of the potential dangers of AI in financial services, including the risk that retail investors would be misled by AI-washing claims in adviser marketing materials, which Gensler referred to as truth-in-advertising problems for the industry. In July of 2023 the SEC proposed a rule regulating the use of AI in financial services, but the proposal focused primarily on the misuse of AI in predictive analytics and the potential for conflicts of interest for advisers who might be tempted to turn AI-powered recommendations to favor the adviser’s bottom line more than the client’s.

AI “has potential benefits of greater inclusion, efficiency, and user experience,” Gensler said in a video released on the same day the enforcement actions were announced. “But, let’s face it, when new technologies come along, we’ve also seen time and again false claims to investors by those purporting to use those new technologies.”

The March 18 release also quotes Div. of Enforcement director Gurbir Grewal articulating the enforcement division’s commitment to fighting AI washing by investment companies.

“If you claim to use AI in your investment process,” Grewal said in the release, “you need to ensure that your representations are not false or misleading.”

The SEC’s Office of Investor Education and Advocacy backed up Gensler’s warnings to the industry with a Jan. 15 investor alert warning about the potential for fraud by advisers and investment platforms making unrealistic promises by falsely claiming to use sophisticated AI or machine-learning systems to benefit investors.

“At the SEC, we want to make sure that these folks are telling the truth,” Gensler said in his video.  Advisers and broker-dealers “should not mislead the public by saying they are using an AI model when they’re not, nor say that they’re using an AI model in a particular way, but not do so.”

Overstatements, underperformance

In the actions announced March 18, both companies failed to document or follow their own procedures and misled investors about the capabilities of the services they were offering, according to the SEC’s announcement.

The SEC accused Delphia of making misleading statements in marketing and in regulatory filings that claimed Delphia was using AI and machine-learning techniques to tailor investment advice to retail clients based on analysis of their spending habits and social-media data. Delphia, in fact, used no such data in its investment process, according to the SEC’s cease-and-desist order, which accused the company of falsely claiming to have done so between August of 2019 and August of 2023.

The company’s goal was to offer retail clients a unique service presenting robo-advice based on information provided by the clients and third-party data made up of social media, banking, credit-card records, online purchases and other information. While the company did collect some client data during the four years addressed in the SEC settlement, the company never succeeded in effectively using those data streams as input for its investing algorithms or in the recommendations supposedly based on that data.

Delphia claimed in 2019 to be “the first investment adviser to [use machine learning to] convert personal data into a renewable source of investable capital . . . that will allow consumers to invest in the stock market using their personal data.”

Beginning in November of 2020 the company claimed that it “put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else.”

Delphia admitted to the Div. of Examinations in July 2021 that it had not used any of its client data in investment recommendations and had not created an algorithm to process such data. After the examination it did cut back on its claims and correct some of its false statements, but it continued to make other misleading statements about its use of AI and machine learning through August 2023, according to the SEC. In a November 2022 press release, for example, Delphia claimed that its “proprietary algorithms combine the data invested by its members with commercially available data, to make predictions across thousands of publicly traded companies up to two years into the future.”

The company also failed to adopt and implement policies and procedures to prevent further misleading statements, failed to lay out a clear process to help staff understand their role in correcting the problems, and failed to fully explain the disconnect between its claims and reality in either client communications or regulatory filings.

Delphia filed a Form ADV-W on Jan. 26, 2024 notifying the SEC that it had ended its advisory services as of the end of 2023 and moved its five pooled investment vehicles to a newly formed adviser.

Before that notice, the SEC estimated that Delphia held approximately $187 million in assets under management, including about $7 million handled through robo-advisory services supporting approximately 29,000 individual retail accounts.

Global Predictions claimed on its website and in social-media postings to be the “first regulated AI financial advisor” and claimed its platform provided “expert AI-driven forecasts” but failed to substantiate its performance claims when the SEC asked for proof.

Global also violated the Marketing Rule by falsely claiming to offer tax-loss harvesting services and by including a liability hedge clause in its advisory contract stating that clients waived non-waivable causes of action against the company.

Global offered its services through an interactive online platform called PortfolioPilot that made recommendations the company said were driven by AI, though they were not, and claimed its models were “outperforming IMF forecasts by 34%” without saying how the comparison was made or what metric the 34% referred to, according to the SEC’s cease-and-desist order against the company. Global also failed to disclose conflicts of interest between the company and influencers offering testimonials about its service and failed to provide advance notice of changes to its advisory contracts in violation of its fiduciary duty as a financial adviser, the settlement letter said.

“AI washing” – new wrapper, old problem

AI washing may be a problem, but it is one being addressed through the enforcement of existing disclosure rules, not through the far more problematic rule the SEC proposed to address the risks specific to the use of AI in financial services, according to Eric Pan, president and CEO of the Investment Company Institute (ICI).

Pan and other leaders of industry associations have been harshly critical of the SEC’s proposed AI rule.  In its comment letters, ICI has asked the SEC to modify language in the rule to eliminate what Pan said is a broad implication of wrongdoing that is more likely to prompt investment companies to avoid tech-driven innovation for fear of unwarranted prosecution than it is to encourage them to make tech-related decisions with care.  

The rule’s language is so imprecise, and its unintended consequences are so broad, that the Investment Adviser Association (IAA) demanded that the SEC withdraw the AI proposal and try again with an approach that would cause less damage, according to IAA president Karen Barr.

The SEC has conducted very thorough exams asking detailed questions about how the industry is starting to use AI and what potential problems AI could present in data privacy, security, transparency, explainability, the use of copyrighted source material, and user trust. Barr said a replacement rule proposal could address any one of these issues more effectively than the current version.

Gensler defended the SEC’s approach during an onstage back-and-forth with Barr during a March 8 IAA conference. He declined to address issues beyond the potential for predictive-analytic conflicts of interest and the chance that investors would be misled by investment companies overstating their results from the use – or non-use – of AI in platforms aimed at consumers.

The SEC is seeing a lot of problems involving digital assets or AI, but most are “old things that we would consider problematic that are just under a shiny wrapper,” according to Vanessa Horton, associate regional director of the SEC’s Div. of Examinations, who spoke during a panel session March 20 at the ICI’s Investment Management Conference.  Rather than seeing a lot of innovative new problems, SEC examiners are seeing AI being used to mask “things we’ve always had concerns with, like disclosures, statements that may be inaccurate or inadequate in how a firm is using digital assets or AI, those are things we would have been concerned about no matter what the statement is or what the product is,” she said.

The prevalence of AI washing is similar to the burst of greenwashing in the industry when advisers tried to take advantage of the spike in investor interest in ESG products in their marketing, according to Andrew Dean, co-chief of the SEC Div. of Enforcement’s Asset Management Unit, who spoke on the same panel as Horton.

Avoiding trouble through documentation

“With AI, a lot of advisers who are trying to market things may be going too fast. We see some disclosures that are not consistent with what’s actually happening,” he said.

“But if you talk about priorities – it’s not always a good look to have the chair and the director of Enforcement with quotes in the same press release, and it’s not often they both come out with videos on artificial intelligence on the same day,” he said. “So, yeah, I’d say it’s something that’s being looked at a lot now, but there is this common overlay of ensuring that your disclosures match your investment process. ‘Do what you say, say what you do’ – as we hear a lot internally and as the chair said in his video – is probably a good way to go.”

That’s good advice, but it misses one critical step, according to Amy Doberman, a partner at Wilmer Cutler Pickering Hale and Dorr LLP who was also on the panel.

“You’ve got to document that you are doing what you’re saying you’re doing,” she said. “Any kind of inconsistency will surface in the context of a lack of documentation. If you have a policy or a procedure, you’ve got to document how you’re complying with it.”