Congress tasked the Bureau, as part of its consumer protection mission, with ensuring that markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation. One area of innovation we are monitoring is artificial intelligence (AI), particularly a subset of AI, machine learning (ML). For example, in 2017, the Bureau issued a Request for Information Regarding Use of Alternative Data and Modeling Techniques in the Credit Process (RFI). We also issued a No-Action Letter to Upstart Network, Inc., a company that uses ML in making credit decisions, and later shared key highlights from information provided by Upstart.
Financial institutions are starting to deploy AI across a range of functions, including as virtual assistants that can fulfill customer requests, in models to detect fraud or other potential illegal activity, or as compliance monitoring tools. One additional area where AI may have a profound impact is in credit underwriting.
In 2015, the Bureau released a Data Point titled “Credit Invisibles.” The Data Point reported that 26 million consumers—about one in 10 adults in America—could be considered credit invisible because they do not have any credit record at the nationwide credit bureaus. Another 19 million consumers have too little information to be evaluated by a widely used credit scoring model.
AI has the potential to expand credit access by enabling lenders to evaluate the creditworthiness of some of the millions of consumers who are unscorable using traditional underwriting techniques. These technologies typically involve the use of models that allow lenders to evaluate more information about credit applicants. Consideration of such information may lead to more efficient credit decisions and potentially lower the cost of credit. On the other hand, AI may create or amplify risks, including risks of unlawful discrimination, lack of transparency, and privacy concerns. Bias in the source data or model construction can also lead to inaccurate predictions. In considering AI or other technologies, the Bureau is committed to helping spur innovation consistent with consumer protections.
Use of AI/ML and Explainability
Despite AI’s potential, industry uncertainty about how AI fits into the existing regulatory framework may be slowing its adoption, especially for credit underwriting. One important issue is how creditors using complex AI models can comply with the adverse action notice requirements of the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA). ECOA requires creditors to provide consumers with the main reasons for a denial of credit or other adverse action. FCRA also includes adverse action notice requirements. For example, when adverse action is based in whole or in part on a credit score obtained from a consumer reporting agency (CRA), creditors must disclose the key factors that adversely affected the score, the name and contact information of the CRA, and additional content. These notice provisions serve important anti-discrimination, educational, and accuracy purposes. Questions may arise about how institutions can comply with these requirements when the reasons driving an AI decision rest on complex interrelationships among many variables.
The existing regulatory framework has built-in flexibility that can be compatible with AI algorithms. For example, although a creditor must provide the specific reasons for an adverse action, the Official Interpretation to Regulation B, which implements ECOA, provides that a creditor need not describe how or why a disclosed factor adversely affected an application, 12 CFR pt. 1002, comment 9(b)(2)-3, or, for credit scoring systems, how the factor relates to creditworthiness. Id. at 9(b)(2)-4. Thus, the Official Interpretation offers an example in which a creditor may disclose a reason for a denial even though the relationship of that factor to predicting creditworthiness may be unclear to the applicant. This flexibility may be useful to creditors when issuing adverse action notices based on AI models where the variables and key reasons are known, but which may rely upon non-intuitive relationships.
Another example of flexibility is that neither ECOA nor Regulation B mandates the use of any particular list of reasons. Indeed, the regulation provides that a creditor must accurately describe the factors it actually considered and scored, even if those reasons are not reflected on the current sample forms. 12 CFR pt. 1002, comment 9(b)(2)-2 and App. C, ¶ 4. This latitude may be useful to creditors when providing reasons that reflect alternative data sources and more complex models.
Industry continues to develop tools to accurately explain complex AI decisions, and we expect more methods will emerge. These developments hold great promise to enhance the explainability of AI and facilitate use of AI for credit underwriting compatible with adverse action notice requirements.
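To give a flavor of one broad class of such methods, the sketch below computes per-feature attributions for a black-box scoring function by replacing each input with a neutral reference value and measuring the resulting change in score. Everything here is a hypothetical assumption for illustration: the scoring function, the feature names, and the reference values are invented, and single-feature occlusion against a reference point is only one simple example of the explainability techniques industry is developing, not any particular vendor's method.

```python
# Hypothetical sketch: model-agnostic attribution by single-feature
# occlusion. The scoring function, feature names, and reference values
# below are illustrative assumptions, not a real underwriting model.

def score(applicant):
    # Stand-in for an opaque ML model; the interaction term makes the
    # score depend on a non-intuitive relationship between inputs.
    return (0.5 * applicant["income"]
            + 0.3 * applicant["history"]
            + 0.2 * applicant["income"] * applicant["history"])

# A "neutral" reference applicant against which changes are measured.
REFERENCE = {"income": 1.0, "history": 1.0}

def attributions(applicant):
    """For each feature, measure how the score moves when that feature
    alone is reset to its reference value. A negative attribution means
    the applicant's value for that feature pulled the score down."""
    base = score(applicant)
    result = {}
    for feature in applicant:
        perturbed = dict(applicant, **{feature: REFERENCE[feature]})
        result[feature] = base - score(perturbed)
    return result

applicant = {"income": 0.4, "history": 0.9}
print(attributions(applicant))  # income's negative pull dominates
```

Features with the largest negative attributions would be candidates for the principal reasons on a notice. More sophisticated approaches account for feature interactions and correlations, which is precisely where accuracy questions arise for deep learning and ensemble models.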
Reducing Regulatory Uncertainty in the Use of AI/ML
Despite this flexibility, there may still be some regulatory uncertainty about how certain aspects of ECOA or other consumer financial services laws apply in the context of AI.
To address this type of uncertainty, the Bureau has various tools to promote innovation and facilitate compliance. In September 2019, the Bureau’s Office of Innovation launched three new policies to facilitate innovation and reduce regulatory uncertainty: (1) a revised Policy to Encourage Trial Disclosure Programs (TDP Policy), (2) a revised No-Action Letter Policy (NAL Policy), and (3) the Compliance Assistance Sandbox Policy (CAS Policy). The TDP Policy and CAS Policy, in particular, provide for a legal safe harbor that could reduce regulatory uncertainty in the area of AI and adverse action notices. The TDP Policy also specifically identifies adverse action notices as a type of Federal disclosure requirement that would be covered by the policy.
We hope stakeholders will use these and other Bureau tools to address areas of regulatory uncertainty, including in the area of AI and adverse action notices.
We are particularly interested in exploring at least three areas.
- The methodologies for determining the principal reasons for an adverse action. The example methods currently provided in the Official Interpretation were issued in 1982, and there may be uncertainty in the application of these examples to current AI models and explainability methods. See 12 CFR pt. 1002, comment 9(b)(2)-5.
- The accuracy of explainability methods, particularly as applied to deep learning and other complex ensemble models.
- How to convey the principal reasons in a manner that accurately reflects the factors used in the model and is understandable to consumers, including how to describe varied and alternative data sources, or their interrelationships, in an adverse action reason.
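On the first of these areas, the sketch below illustrates, under invented assumptions, one long-standing approach to selecting principal reasons from a credit scoring system: ranking factors by how many points the applicant lost relative to the maximum attainable for each factor, in the spirit of the example methods discussed in comment 9(b)(2)-5. The scorecard structure, feature names, and point values are all hypothetical.

```python
# Hypothetical sketch of a "points below maximum" method for selecting
# principal reasons from a scorecard-style model. The features, bins,
# and point values are invented for illustration.

SCORECARD = {
    "payment_history":   {"good": 40, "fair": 20, "poor": 0},
    "utilization":       {"low": 30, "medium": 15, "high": 0},
    "length_of_history": {"long": 20, "medium": 10, "short": 0},
}

def principal_reasons(applicant, top_n=2):
    """Rank factors by the gap between the points the applicant earned
    and the maximum points attainable for that factor; the largest gaps
    become the principal reasons."""
    shortfalls = []
    for feature, bins in SCORECARD.items():
        gap = max(bins.values()) - bins[applicant[feature]]
        shortfalls.append((gap, feature))
    shortfalls.sort(reverse=True)
    return [feature for gap, feature in shortfalls[:top_n] if gap > 0]

applicant = {"payment_history": "fair", "utilization": "high",
             "length_of_history": "long"}
print(principal_reasons(applicant))  # → ['utilization', 'payment_history']
```

This method is straightforward for additive scorecards; the open questions noted above concern how to extend or replace it when the model's factors interact in ways a simple per-factor gap cannot capture.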
We also hope financial institutions and other stakeholders will think creatively about how to take advantage of AI’s potential benefits, including by exploring novel ways to engage with consumers. For example, this could include providing consumers with interactive features or educational components, or sharing more information with consumers on how underwriting decisions are made and what factors or data are used. We also hope industry will consider using the TDP Policy to test disclosures that may improve upon existing adverse action disclosures, including in ways that might go beyond the four corners of the regulations without causing consumer harm. We encourage stakeholders to reach out and discuss these possibilities with the Bureau to help the Bureau better understand the market and the impact of its regulations.
The Bureau intends to leverage experiences gained through the innovation policies to inform policy. For example, applications granted under the innovation policies, as well as other stakeholder engagement with the Bureau, may ultimately be used to help support an amendment to a regulation or its Official Interpretation.
By working together, we hope to facilitate the use of this promising technology to expand access to credit and benefit consumers.