CFPB Should Encourage Lenders to Look for Less Discriminatory Models


March 11, 2022

The Honorable Rohit Chopra
Consumer Financial Protection Bureau
1700 G Street NW
Washington, DC 20552

Via email 

RE:     CFPB Tools for Ensuring That Creditors Do Not Rely on Discriminatory Models

Dear Director Chopra:

On behalf of the National Community Reinvestment Coalition, Upturn, and Zest AI, we write to urge the Consumer Financial Protection Bureau (“CFPB” or “Bureau”) to take steps to ensure that creditors engage in effective fair lending testing of their automated models.


Effective fair lending testing requires entities to search for and adopt less discriminatory alternative (“LDA”) models.[1] As leading consumer and civil rights advocacy organizations have urged, creditors should proactively explore and “adopt alternative models to reduce adverse impact.”[2] The signatories to this letter have seen that meaningful and effective investment in searches for LDA models can produce significant reductions in protected class disparities. However, to date, industry practice in this area remains uneven.

Accordingly, the CFPB should take steps to ensure consistent and effective testing through guidance, paired with appropriate supervisory and enforcement actions and capabilities.[3] In short, the CFPB should leverage the tools at its disposal to make clear that it expects entities to guard against protected class disparities caused by their models.

Consistent with those goals, we ask the CFPB to take action to ensure that creditors are routinely conducting effective fair lending searches for LDAs to their existing and soon-to-be-in-production automated models. Such action could include:

  1. Robust supervisory efforts related to model testing;
  2. Public guidance regarding best practices, how the CFPB approaches methodological questions in its own work, and resolution of interpretive ambiguities; and
  3. Public enforcement actions, in the event of legal violations.

One specific tool is effective public signaling of how the CFPB intends to exercise its supervision and enforcement discretion. The CFPB should continue to make clear that it will monitor the use of algorithmic decision tools and that institutions will face consequences for illegal discrimination. The CFPB should complement those warnings by making clear that it expects entities to meaningfully, effectively, and routinely search for and adopt LDA models, and that it may favorably consider such efforts in assessing supervisory and enforcement activities.[4] The sophistication of such efforts should, at a minimum, be commensurate with the sophistication brought to other model assessments, and token efforts will not be considered meaningful or effective.

These actions would encourage consistent and effective fair lending testing. They would also align with CFPB’s emphasis on a culture of proactive compliance and legal doctrines designed to encourage actors to guard against potential harms without fear that making improvements will be used against them. In the fair lending context, this guidance would go a long way towards ensuring that lenders implement new models designed to help consumers and further the CFPB’s mandate.


To ensure compliance with the Equal Credit Opportunity Act (“ECOA”) and Regulation B, creditors should conduct fair lending analysis on their credit models to verify that those models do not unlawfully discriminate against applicants on the basis of protected class status, such as race, national origin, sex, and age. See ECOA, 15 U.S.C. § 1691 et seq.; Regulation B, 12 C.F.R. part 1002.[5]

The methodologies that institutions use to conduct fair lending testing of their models vary, but as a general matter such testing often includes: (1) ensuring that models do not include protected class status, or close proxies for protected class status, as attributes; and (2) assessing whether facially neutral automated models are likely to disproportionately lead to negative outcomes for a protected class, and, if such negative impacts exist, ensuring that the models serve legitimate business needs and evaluating whether changes to the models would result in less of a disparate impact while maintaining model performance.[6]
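The general flow described above (measure a model's disparity across groups, then weigh a candidate model's disparity reduction against any performance cost) can be sketched in simplified form. The following is purely an illustration of the concept: the metrics, thresholds, field names, and data are hypothetical assumptions, not a methodology this letter endorses or that any regulator has prescribed.

```python
# Illustrative sketch only. The adverse impact ratio, the performance-retention
# threshold, and all sample data below are hypothetical assumptions.

def approval_rate(decisions):
    """Share of applicants approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected_decisions, control_decisions):
    """Ratio of the protected group's approval rate to the control group's.
    Values well below 1.0 flag a potential disparate impact worth investigating."""
    return approval_rate(protected_decisions) / approval_rate(control_decisions)

def prefer_lda(baseline, candidate, min_performance_share=0.99):
    """Prefer the candidate (LDA) model when it reduces disparity while
    retaining nearly all of the baseline's predictive performance.
    Each model is a dict with hypothetical 'air' and 'auc' fields."""
    return (candidate["air"] > baseline["air"]
            and candidate["auc"] >= baseline["auc"] * min_performance_share)

# Hypothetical outcomes for a protected and a control group under a
# baseline model and a less discriminatory alternative (LDA).
baseline = {"auc": 0.720, "air": adverse_impact_ratio([1, 0, 0, 0], [1, 1, 1, 0])}
lda      = {"auc": 0.715, "air": adverse_impact_ratio([1, 1, 0, 0], [1, 1, 1, 0])}

# The LDA narrows the approval-rate disparity at near-equal performance.
print(prefer_lda(baseline, lda))  # → True
```

In practice, institutions apply far more sophisticated search techniques and weigh many more criteria (see note 10), but the basic trade-off logic is of this shape.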

This final step—adopting LDA models—is key.[7] Many companies routinely test their models to identify LDAs, and those institutions adopt less discriminatory models if they exist. Ideally, a responsible and meaningful compliance program involves this type of testing prior to implementation, as well as fair lending monitoring and testing of models once deployed.[8] The signatories to this letter have seen significant disparity improvements when entities take meaningful steps to adopt LDAs. However, testing models in this way has not been universal.[9] The CFPB should take action to make clear that it expects entities to effectively and routinely engage in this work to prevent their models from discriminating against consumers.

Specific Proposals

1. Encourage Entities to Replace Existing Models with LDAs

Some institutions may have models in use for which no LDA was identified prior to deployment, in part because certain rigorous methodologies were not available when the initial searches were conducted. The CFPB should state that it expects creditors to conduct rigorous testing now, or to update testing conducted using older methodologies, and to replace existing models with LDAs, and that such efforts may be viewed favorably in assessing supervisory and enforcement activities.

Today, more institutions are interested in conducting or updating these tests but are reluctant to do so on models currently being used to assess applicants because of perceived regulatory and legal uncertainty. This reluctance may cause continuing and irreversible harm to consumers by keeping an unnecessarily discriminatory model in production. For example, if this testing was not conducted before a model was implemented, an entity may be concerned that adopting an LDA could be used as evidence that the entity was not in compliance with the law. This worry has the perverse effect of discouraging entities from undertaking this important type of fair lending testing and of perpetuating the use of discriminatory models.[10]

Similarly, even lenders that conducted LDA searches prior to putting models in production may discover more advanced LDA search techniques after the fact. Lenders may be discouraged from using these techniques on existing models for fear that they will be faulted for not having done so earlier. Accordingly, we believe that the foregoing statement regarding the CFPB’s exercise of its supervisory and enforcement discretion would encourage more entities to adopt LDAs, which would advance the goals of the CFPB’s recently announced fair lending initiatives.

2. Encourage Entities to Make Reasonable Trade-offs Between Model Performance and Fairness

Some lenders are willing to deploy an underwriting model that sacrifices performance or accuracy (which could translate into profit) relative to a benchmark in favor of an alternative model that causes less disparate impact.[11] For prudential reasons these lenders may want to do so on a trial basis with one or two underwriting models before applying the policy company-wide.

However, some creditors have expressed concern that deciding to trade performance or profitability for fairness with respect to one model may be used as evidence in a supervisory or enforcement proceeding that they should have made the same choice with respect to another model. Even for models in development, this uncertainty may encourage lenders to stick with traditional but less effective LDA search methods for fear that using a more advanced LDA search method on the new model may be used against them with respect to models that are already being used.

These concerns have the perverse effect of keeping lenders from testing and, ultimately, implementing a policy that will benefit consumers and the public at large.

Accordingly, the CFPB should state that in assessing supervisory or enforcement activities, it may favorably view a creditor’s decision to deploy an underwriting model that is less accurate or less profitable than a model optimized for performance in favor of a model that causes less disparate impact, and that it may favorably view a creditor’s decision to explore and use the most effective LDA search methods available.

Monitoring and Public Reporting

Any guidance regarding how the CFPB intends to exercise its supervisory and enforcement discretion should include a statement that the Bureau will monitor, through its supervisory and other authorities, entity practices related to adopting LDAs, including replacing existing models with LDAs. The Bureau should publicly report on its observations and findings in Supervisory Highlights or otherwise. Such reporting could identify best practices and highlight compliance gaps and activities that would violate or risk violating fair lending laws. The Bureau could also report on trends, including whether it has observed increased adoption of LDAs and how such LDAs likely contributed to reduced disparities or increased access to credit.


National Community Reinvestment Coalition (“NCRC”) is a membership association representing more than 800 community reinvestment organizations, community development financial institutions, minority- and women-owned business associations, and social service providers. NCRC and many of its members provide loans, small business technical assistance, financial counseling, homeownership assistance, and other forms of investment to applicants of color and applicants living in or conducting business in communities of color. NCRC also negotiates with banks to obtain commitments to support the credit needs of people and communities of color and has a mission of supporting financial inclusion.

Upturn is a nonprofit research and advocacy organization that advances justice in the design, governance, and use of technology. In the context of consumer credit, Upturn confronts predatory practices and works to ensure that credit and other financial services are fair, affordable, and nondiscriminatory. Upturn’s work suggests that LDAs have significant untapped potential to more equitably serve marginalized communities, including people of color.

Zest AI develops technology that assists lenders in building models for credit underwriting. Zest is particularly interested in these issues because it has invented patented technology that creditors can use to search for LDAs. The software gives creditors more alternatives than are available using legacy fair lending analytic techniques. Zest is aware, however, that concerns regarding this issue contribute to the reluctance of entities to search for or adopt LDA models, even though those models would be more fair to historically underserved groups, such as women and people of color.


Effective fair lending testing requires entities to search for and adopt LDA models, but industry practices in this regard are uneven. The CFPB should leverage the tools at its disposal to make clear that it expects entities to rigorously guard against protected class disparities caused by their models. One tool is to publicly signal that the CFPB will exercise its supervisory and enforcement discretion in ways that encourage creditors to engage in proactive attempts to identify and adopt LDA models, thereby facilitating more uniform practices and making models and the credit system more fair for historically underserved consumers. This guidance should be deployed to complement other effective actions to ensure models do not illegally discriminate against consumers.

Thank you for your attention to these matters. We would welcome an opportunity to meet and discuss further.

This letter was prepared with assistance from Relman Colfax PLLC. For any questions or further discussion, please contact Stephen Hayes, Relman Colfax, at shayes@relmanlaw.com or (215) 888-7784.


[1] See, e.g., Joint Civil Rights, Consumer, and Technology Advocates Response to Joint Agency Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, including Machine Learning (July 1, 2021), https://nationalfairhousing.org/wp-content/uploads/2021/07/Federal-Banking-Regulator-RFI-re-AI_Advocate-Letter_FINAL_2021-07-01.pdf; NCRC Coalition’s Innovation Council for Financial Inclusion, Statement on Request for Guidance on Implementation of Disparate Impact Rules Under ECOA (June 29, 2021), https://ncrc.org/statement-on-request-for-guidance-on-implementation-of-disparate-impact-rules-under-ecoa/; Zest AI Response to CFPB Request for Information on the Equal Credit Opportunity Act and Regulation B (Dec. 1, 2020), https://www.regulations.gov/document/CFPB-2020-0026-0134; NFHA Response to CFPB Request for Information on the Equal Credit Opportunity Act and Regulation B (Dec. 10, 2020), https://www.regulations.gov/comment/CFPB-2020-0026-0133; Michael Akinwumi, et al., “An AI fair lending policy agenda for the federal financial regulators,” Brookings Center on Regulation and Markets (Dec. 2, 2021), https://www.brookings.edu/research/an-ai-fair-lending-policy-agenda-for-the-federal-financial-regulators/.

[2] See ACLU, Center for Democracy & Technology, Center on Privacy & Technology at Georgetown Law, Lawyers’ Committee for Civil Rights Under Law, National Consumer Law Center, National Fair Housing Alliance, Upturn, Letter to Regulatory Agencies re: Addressing Technology’s Role in Financial Services Discrimination at 3-4 (July 13, 2021), https://www.aclu.org/sites/default/files/field_document/2021-07-13_coalition_memo_on_technology_and_financial_services_discrimination.pdf.

[3] Id. at 3-4; see also Jo Ann Barefoot and Theodore Flo, “Regulators must root out bias in AI lending,” American Banker (Dec. 15, 2021), https://www.americanbanker.com/opinion/banking-regulators-must-root-out-bias-in-ai-based-lending.

[4] This guidance should include circumstances where lenders develop their own models as well as where lenders rely on third parties to develop or assist in developing credit models. This guidance, of course, should not suggest that LDA models be immune from fair lending and other regulatory requirements. If an LDA model caused unlawful disparate impact or subjected consumers to disparate treatment, the lender who implemented the model may be subject to supervisory or enforcement action. Moreover, this guidance should only apply if the creditor searched for the LDA or replaced an existing model with an LDA prior to commencement of any fair-lending focused exam or enforcement action.

[5] ECOA and Regulation B prohibit disparate treatment and disparate impact, including “prohibit[ing] a creditor practice that is discriminatory in effect because it has a disproportionately negative impact on a prohibited basis, even though the creditor has no intent to discriminate and the practice appears neutral on its face, unless the creditor practice meets a legitimate business need that cannot reasonably be achieved as well by means that are less disparate in their impact.” 12 C.F.R. part 1002, Supp. I ¶ 1002.6(a)-2.

[6] See Initial Report of the Independent Monitor, Fair Lending Monitorship of Upstart Network’s Lending Model at 7 (April 14, 2021) (“Initial Upstart Report”), https://www.relmanlaw.com/media/cases/1086_Upstart%20Initial%20Report%20-%20Final.pdf; David Skanderson & Dubravka Ritter, Federal Reserve Bank of Philadelphia, “Fair Lending Analysis of Credit Cards” 38–40 (2014).

[7] See, e.g., supra notes 1 and 2.

[8] See, e.g., Elisa Jillson, “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI,” FTC BUS. BLOG (Apr. 19, 2021) (“It’s essential to test your algorithm – both before you use it and periodically after that – to make sure it doesn’t discriminate on the basis of race, gender, or other protected class.”), https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairnessequity-your-companys-use-ai; Letter from Chairwoman Waters and Congressman Foster, United States House Committee on Financial Services, to the heads of the Federal Financial Regulatory Agencies (Nov. 29, 2021) (“[R]outine reviews of AI/ML models could work to promote inclusiveness and help overcome historic disparities experienced by protected classes.”), https://financialservices.house.gov/uploadedfiles/11.29_ai_ffiec_ltr_cmw_foster.pdf.

[9] Testimony of Stephen F. Hayes, Relman Colfax, Before the Task Force on Artificial Intelligence, United States House Committee on Financial Services (May 7, 2021), https://financialservices.house.gov/uploadedfiles/hhrg-117-ba00-wstate-hayess-20210507.pdf.

[10] Lenders use a range of criteria for deciding whether to identify or choose an alternative model, taking into account considerations such as the trade-offs between performance and disparity changes, disparity rates for different protected classes, and compliance with the institution’s model risk management and other modeling criteria. The guidance suggested here would not include approval for a creditor’s decisions regarding what methodologies to use to identify potential alternative models or which potential alternative models, if any, to adopt. This guidance would not require the Bureau to announce a standard as to what constitutes an LDA or to forego fair lending claims that may exist.

[11] This is not meant to suggest that the implementation of LDA models always requires trading profitability for fairness or that such trade-offs are always significant. A growing body of academic research suggests that such trade-offs are continually being reduced, often to the point of being negligible. See Rodolfa, K.T., Lamba, H. & Ghani, R. Empirical observation of negligible fairness–accuracy trade-offs in machine learning for public policy. Nat Mach Intell 3, 896–904 (2021). https://doi.org/10.1038/s42256-021-00396-x.
