SEC plans regulation of AI in financial services

Banning robo-adviser conflicts of interest and bias in data sets is easy compared with "explainability" – making famously opaque AI decision-making clear to consumers

The Securities and Exchange Commission (SEC) is developing rules that would change the way financial companies can use artificial intelligence in robo-advisers, brokerage apps and other software.

SEC Chair Gary Gensler has asked the agency’s staff to develop and pitch to commissioners a set of regulations to reduce the potential for conflicts of interest in AI-driven financial applications, especially those designed to make it easier for consumers to handle their own investments without the help of human advisers.

“I think [AI’s] potential for transformation is every bit as big as the Internet, and maybe bigger in terms of predictive analytics,” Gensler said during the AI Policy Forum Summit, a Sept. 28 webcast sponsored by the Massachusetts Institute of Technology that focused on the use of AI in financial-services applications.

“This is an emerging risk,” Gensler said. “On a highway, you don’t want the other driver to crash. In finance you have situations in which one person wants to win and someone else loses. So you have a different game theory embedded in capital markets.”

The SEC did address that risk in 2017 by offering guidance on the use of robo-advisers.

In September 2021, five months after Gensler took over as SEC chair, the SEC also put out a formal request for comment asking the financial services industry for feedback on how to approach regulation of robo-advisers, gamified brokerage apps and other “digital engagement practices.”

Gensler said that feedback will form the basis for proposals he asked the SEC staff to develop.

“We [already] see it in the core of the capital markets; many large asset managers are now using predictive data analytics,” Gensler said.

The huge volumes of data considered routine at highly regulated financial-services firms forced most of them to adopt AI/ML applications faster and more aggressively than other industries, according to Jo Ann Barefoot, CEO of the Alliance for Innovative Regulation and author of a May report from the Brookings Institution on the regulation and use of AI/ML in financial firms.

Software based on AI/machine-learning techniques powers increasingly sophisticated market analyses and customer-behavior models that allow robo-advisers to make recommendations based on the risk/return criteria of individual consumers.
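For a rough sense of what that recommendation logic can look like in its simplest form, the sketch below maps a hypothetical client’s risk tolerance and investment horizon to a stock/bond split; the weights, thresholds and asset classes are invented for illustration and are not drawn from any actual robo-adviser.

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    risk_tolerance: float   # 0.0 (very conservative) to 1.0 (very aggressive)
    years_to_goal: int      # investment horizon in years

def recommend_allocation(profile: ClientProfile) -> dict:
    """Toy risk/return rule: blend risk tolerance with time horizon
    to produce a stock/bond split. Purely illustrative."""
    # Longer horizons can absorb more volatility, so weight them in.
    horizon_factor = min(profile.years_to_goal / 30.0, 1.0)
    equity_share = 0.2 + 0.6 * (0.5 * profile.risk_tolerance + 0.5 * horizon_factor)
    equity_share = max(0.0, min(equity_share, 0.95))
    return {"stocks": round(equity_share, 2),
            "bonds": round(1.0 - equity_share, 2)}

if __name__ == "__main__":
    client = ClientProfile(risk_tolerance=0.7, years_to_goal=25)
    print(recommend_allocation(client))   # {'stocks': 0.66, 'bonds': 0.34}
```

A production robo-adviser would, of course, rely on far richer models of markets and client behavior, but the core task – turning an individual’s risk/return profile into a portfolio recommendation – is the same.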

AI/ML applications also drive much of the machinery of financial services. AI-driven natural-language processing powers voice-response systems, enables automated processing of credit and insurance claims, automates fraud-detection and anti-money-laundering systems, and does the grunt work of document scanning and data management.

There are problems, however, Barefoot writes.

The most obvious target for regulation is the potential for conflicts of interest that prioritize the profits of a broker or adviser over the best interests of consumers, Gensler said.

There are also serious issues with the privacy and security of AI/ML apps that concentrate too large a volume of sensitive information, often with too little security, Barefoot wrote.

More seriously, the models underlying those applications can preserve or even magnify systemic biases present in the large datasets on which some are trained, Barefoot wrote.

Identifying problems – even obvious ones probably caused by minor bugs – can also be difficult, because the data models on which AI/ML applications are based are famously so opaque that even the programmers can struggle to figure out why an application made a particular decision.

That opacity is one reason self-driving cars designed to avoid obstacles will sometimes appear to be trying to crash into them on purpose.

Regulations would have to protect the stability of the system by requiring clarity even from advanced analytics, while also applying to that technology the same policies and rules that have governed simpler computers and human beings for decades, especially when the decisions affect individuals directly, Gensler said.

“We have issues of what I would call ‘explainability,’” Gensler said. “We’ve embedded these ideas in laws for 50 years. When you deny someone credit you have to explain why they’ve been denied credit. That’s a really important thing.”

Explainability is difficult, but it comes in more than one flavor depending on the complexity of the decisions being made, according to Kevin Hassett, a Hoover Institution economist who served as chairman of the White House Council of Economic Advisers from 2017 to 2019.

It may always be impossible to explain the logic path of an AI model doing the kind of sophisticated predictive market analysis Gensler described, Hassett said during a panel discussion on the use of AI in financial services that followed Gensler’s appearance.

Specific decisions about a specific person – whether to grant a loan or recommend a particular investment, for example – involve a small enough number of variables that, in most cases, it shouldn’t be too difficult to explain the result well enough to satisfy both the client and regulatory requirements, Hassett said.
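One minimal way to produce that kind of explanation, at least for a simple scoring model, is to report which inputs pulled the decision down the most. The sketch below uses an invented linear credit-scoring model, with hypothetical weights and threshold, to generate plain-language ‘reason codes’ for a denied applicant; it illustrates the idea, not any regulator’s prescribed method.

```python
# Minimal sketch of "reason codes" for a denied credit application,
# assuming a simple linear scoring model with invented weights.

WEIGHTS = {                     # hypothetical model coefficients
    "credit_utilization": -2.0, # higher utilization lowers the score
    "payment_history":     1.5, # better history raises the score
    "income_to_debt":      1.0,
    "account_age_years":   0.5,
}
THRESHOLD = 1.0                 # score needed for approval (invented)

def score(applicant: dict) -> float:
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def reason_codes(applicant: dict, top_n: int = 2) -> list:
    """Return the features that pulled the score down the most."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    worst = [k for k in sorted(contributions, key=contributions.get)
             if contributions[k] < 0][:top_n]
    return [f"{k} lowered the score by {abs(contributions[k]):.2f}" for k in worst]

applicant = {"credit_utilization": 0.9, "payment_history": 0.4,
             "income_to_debt": 0.3, "account_age_years": 0.2}
if score(applicant) < THRESHOLD:
    print("Denied. Main factors:", reason_codes(applicant))
```

A deep-learning credit model would need heavier machinery, such as post-hoc feature-attribution tools, to produce comparable explanations, which is exactly where Hassett’s distinction between simple and complex decisions bites.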

A much bigger risk is the potential that AI/ML models would extend or even magnify patterns of economic or social imbalance they find in economic data, as some have done in the past. The ability to rely on decisions of those models depends on having some way to identify and correct for those biases, especially if the process they use to reach decisions is relatively opaque, he said.
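One common first step in catching that kind of imbalance is to compare outcome rates across groups, which works even when the model itself is a black box. The sketch below computes approval-rate ratios between two hypothetical groups of applicants; the 0.8 cutoff echoes the informal ‘four-fifths rule’ used in fair-lending and employment analysis, and all of the data and group labels are invented.

```python
# Sketch of a black-box bias check: compare approval rates across groups
# without inspecting the model's internals. Data and group labels are invented.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates, reference_group):
    """Ratio of each group's approval rate to the reference group's rate."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 55 + [("B", False)] * 45
rates = approval_rates(decisions)
ratios = disparate_impact_ratio(rates, reference_group="A")
for group, ratio in ratios.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"   # informal four-fifths rule
    print(f"group {group}: approval {rates[group]:.0%}, ratio {ratio:.2f} ({flag})")
```

A check like this only flags a disparity; deciding whether it reflects bias in the training data, the model, or the underlying economics still requires the kind of human review regulators are likely to demand.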

Not knowing how robo-advisers and high-powered predictive analytics make market projections or other decisions on which other systems depend would put the stability of the whole system at risk, Gensler said.
