AI has existed for decades, but the advent of generative AI has accelerated its adoption in the investment industry and drawn attention to the significant impact of AI applications that do more than generate text or images on demand.
AI’s rapid development and rising popularity have extended not only to the automation of back-office processes but also to the enhancement of customer interactions through interactive investment sites and robo-advisers. With generative AI now added to the mix, AI sits high on the agenda of fund boards, which see a need to develop oversight strategies as soon as possible.
In the registered fund space, AI has been used extensively by fund complexes and service providers in a wide variety of ways to create operational efficiencies and, increasingly, to help craft or execute fund investment strategies.
Developing an effective oversight methodology is a significant undertaking for fund directors, who are seeking to recognize the strengths and weaknesses of AI itself, understand how it is being used within their fund complexes, and assess how those applications compare with AI’s use not only at other fund complexes but also across other areas of the investment industry.
AI offers substantial potential benefits, but the risks associated with its use across the investment management industry are exceptionally broad.
The SEC has recently focused on the disclosure risk of ‘AI-washing’ – a term it uses for registration statement disclosures that inaccurately describe a firm’s use of AI. The term, and its application, parallels ‘greenwashing’ in the ESG context, which refers to misleading or exaggerated statements a firm might make to attract investors to ESG-related funds.
However, disclosure risk is just the tip of the iceberg from an AI risk oversight perspective. The potential risks associated with AI use are extensive, and range from model and data risk to bias, deception, fraud, market manipulation, privacy concerns and potential conflicts of interest, among others. In addition, if a few AI providers produce the models upon which numerous tech providers rely, this could present systemic as well as enterprise risk.
To understand these risks, boards may seek to confer with fund compliance and risk teams about the status of AI compliance policies and procedures, in addition to asking about the functions of the AI applications the firm is already using and the risk levels and mitigation controls available to ensure their output matches the goals the firm has set for them.
From a fund-director perspective, developing a list of questions to ask management, fund compliance and fund service providers can provide a valuable foundation for effective oversight.
The following are some questions directors might start with as they work to understand, and provide effective oversight of, AI use by or on behalf of the funds they oversee.
Fund Management
Does the fund’s principal investment strategy involve the use of AI in decision-making? If so:
- Can the adviser explain to the board the data used by the AI model, its objective and how it is monitored and tested, including by portfolio management personnel?
- Has the AI been audited to confirm that it serves investors’ best interests?
- Does the fund’s registration statement include appropriate explanations of the AI’s role in developing or applying an investment strategy and the potential risks of its use?
- Does the adviser use AI in other ways that materially affect the services provided to the fund?
- How are these uses monitored and tested on an ongoing basis?
- What are the goals of each AI application or model?
- How are the applications tested and how are their results verified?
- What type of data underlies each model?
- How often is model output compared to anticipated results? (A minimal example of such a comparison is sketched after this list.)
- Does the adviser have policies and procedures to address potential conflicts of interest associated with the use of AI?
- Has the adviser updated relevant policies and procedures to reflect the incorporation of AI processes?
- Does the adviser’s risk assessment address the risk of using AI and the degree of mitigation delivered by the controls being used?
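For boards probing how model output is compared to anticipated results, it can help to see how simple such a control can be at its core. The sketch below, in Python, assumes a hypothetical check_model_output helper, an arbitrary tolerance, and a bare-bones escalation flag; it illustrates the concept rather than prescribing any adviser's actual control.

```python
# A minimal sketch of comparing model output to anticipated results.
# The tolerance and escalation rule are illustrative assumptions.
import numpy as np

def check_model_output(predicted: np.ndarray,
                       realized: np.ndarray,
                       tolerance: float = 0.02) -> dict:
    """Compare model predictions with realized results and flag breaches."""
    error = np.abs(predicted - realized)
    breaches = int(np.sum(error > tolerance))
    return {
        "mean_abs_error": float(error.mean()),
        "max_abs_error": float(error.max()),
        "breaches": breaches,      # observations outside tolerance
        "escalate": breaches > 0,  # hand off for compliance review
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    predicted = rng.normal(0.0, 0.01, 250)              # e.g., daily return forecasts
    realized = predicted + rng.normal(0.0, 0.005, 250)  # realized outcomes
    print(check_model_output(predicted, realized))
```

In practice, the useful board questions concern the parameters rather than the code: who sets the tolerance, how often the check runs, and who reviews the escalations.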
Fund Compliance and Service Providers
- Does the provider have written AI policies in place and procedures to enforce them?
- Are AI policies monitored and tested on an ongoing basis?
- Is the use of proprietary data or customer data for AI applications reviewed regularly for compliance with applicable privacy laws?
- Have AI and security experts examined the AI and its controls to assess potential risks to the funds?
- Has the provider assessed AI’s potential as a tool to enhance the management of risks unrelated to AI?
- Do risk assessments address AI risk levels and mitigating controls?
AI Data Oversight Inquiries
- What questions are AI developers asking the AI model to answer and how are those translated into the machine-learning formulas that create working models during the AI training process?
- What data are AI models built on? How is that data procured and how frequently is it tested and validated?
- Is there any inherent bias in the data or in the ways it is used to produce working AI models? How would bias be identified or mitigated?
- How would errors or bias in data be identified or corrected? What is the process for ongoing oversight and monitoring of new or updated data? (One possible building block for such monitoring is sketched after this list.)
- What are the security protocols that are in place to protect proprietary or sensitive data?
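Directors need not read code to oversee these processes, but a concrete example can make the inquiries above less abstract. The sketch below shows one common data-monitoring technique, the population stability index (PSI), which flags drift between the data a model was trained on and the data it now receives; the bin count, threshold, and sample data are illustrative assumptions, not any provider's actual method.

```python
# A minimal population stability index (PSI) check for data drift.
# Bin count, threshold, and sample data are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)  # bins from training data
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero in sparsely populated bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_feature = rng.normal(0.0, 1.0, 10_000)  # data the model was built on
    live_feature = rng.normal(0.3, 1.2, 10_000)      # data now flowing in
    score = psi(training_feature, live_feature)
    # A common rule of thumb treats PSI above 0.25 as material drift;
    # the threshold itself is a governance choice.
    print(f"PSI = {score:.3f} -> {'review' if score > 0.25 else 'stable'}")
```

A drift score alone identifies neither the cause of a change nor whether it is benign; the governance questions above, about who reviews flagged results and how corrections are made, remain the board’s focus.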
The eventual incorporation of AI across all aspects of the investment management industry is virtually a foregone conclusion, and fund directors are eager to stay one step ahead of the rising tide.
Fund directors should remain vigilant about the use of AI by fund advisers and service providers on an ongoing basis, particularly given the rapid evolution of AI and its various applications in the investment management industry. Fund compliance and cybersecurity vendors may serve a vital role in keeping fund directors apprised of AI developments, uses, and attendant risks.
Boards will put themselves in the best position to achieve effective oversight through consistent and ongoing inquiry into the use of AI in the funds they oversee.
Dianne M. Descoteaux, senior counsel for the Mutual Fund Directors Forum, has more than 20 years of experience as an attorney in the investment management industry and in private law practice advising clients on issues related to the Investment Company Act and Investment Advisers Act.