NCRC and Fintechs Urge Regulators to Use AI to Detect and Eliminate Lending Discrimination

Government should act to harness AI’s power to detect and eliminate discrimination in lending, leading fintechs and economic equity advocates argued in a letter to regulators.

“One of AI/[machine learning]’s beneficial applications is to make it possible, even using traditional credit history data, to score previously excluded or unscorable consumers,” the letter states. “In some cases, AI models are enabling access and inclusivity.”

AI systems have advanced more rapidly than the government’s ability to effectively regulate them. Agencies have struggled to implement an approach that can be applied uniformly to the financial services industry.

That lag increases the risk that machine-powered corporate decisions may introduce new forms of discrimination, inequity and bias into life-altering decisions such as mortgages, business loans, employment and startup investment for small businesses, especially for low- and moderate-income (LMI) people.

Quick, shrewd regulatory action can nonetheless mitigate that risk and help foster machine-powered underwriting that not only avoids bias but actively outperforms other techniques.

That is why the National Community Reinvestment Coalition (NCRC) and a group of financial technology firms submitted a joint letter urging regulators to give lenders clear guidance on how new AI fair lending tools can better evaluate disparities in lending. The letter to the Consumer Financial Protection Bureau (CFPB) and Federal Housing Finance Agency (FHFA), signed by NCRC, Zest AI, Upstart, Stratyfy and FairPlay, was issued in response to the White House’s October 2023 Executive Order on AI.

Some lenders remain reluctant to adopt these newer underwriting tools because they believe their existing models already keep them compliant with fair lending laws, despite evidence that older scoring models continue to perpetuate systemic discrimination. Newer fair lending tools let lenders search for underwriting models that perform as well as the older ones while mitigating the risk of discrimination against LMI credit applicants.
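To make the idea concrete, here is a minimal sketch of what such a "less discriminatory alternative" search might look like: among candidate models whose accuracy is comparable, pick the one whose approval rates are least disparate across groups. Every name, threshold and data point below is a hypothetical illustration for this article, not the signatories' actual methodology.

```python
# Hypothetical sketch of a Less Discriminatory Alternative (LDA) search.
# All model names, score cutoffs and applicant data are illustrative.

def accuracy(model, applicants):
    """Share of applicants whose repayment outcome the model predicts correctly."""
    return sum(model(a) == a["repaid"] for a in applicants) / len(applicants)

def approval_rate(model, applicants, group):
    members = [a for a in applicants if a["group"] == group]
    return sum(model(a) for a in members) / len(members)

def disparity_ratio(model, applicants):
    # Adverse-impact ratio: protected-group approval rate relative to the
    # reference group (1.0 = parity; lower = more disparate).
    return (approval_rate(model, applicants, "protected")
            / approval_rate(model, applicants, "reference"))

def lda_search(models, applicants, tolerance=0.02):
    # Keep every model whose accuracy is within `tolerance` of the best,
    # then return the candidate with the least approval-rate disparity.
    best = max(accuracy(m, applicants) for m in models)
    candidates = [m for m in models if accuracy(m, applicants) >= best - tolerance]
    return max(candidates, key=lambda m: disparity_ratio(m, applicants))

# Toy data: each applicant has a credit score, a demographic group and a
# known repayment outcome (1 = repaid).
applicants = [
    {"score": 720, "group": "reference", "repaid": 1},
    {"score": 710, "group": "reference", "repaid": 1},
    {"score": 660, "group": "reference", "repaid": 0},
    {"score": 690, "group": "protected", "repaid": 1},
    {"score": 655, "group": "protected", "repaid": 0},
    {"score": 640, "group": "protected", "repaid": 0},
]

def strict_model(a):     # legacy-style cutoff: approves no protected applicants
    return int(a["score"] >= 700)

def inclusive_model(a):  # slightly less accurate here, but far less disparate
    return int(a["score"] >= 650)

chosen = lda_search([strict_model, inclusive_model], applicants, tolerance=0.2)
print(chosen.__name__)  # inclusive_model
```

In this toy run the stricter cutoff scores slightly higher on accuracy but approves zero protected-group applicants; the search instead selects the comparably accurate alternative with far better approval parity, which is the trade-off the letter describes.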

In spite of regulatory challenges, AI holds great promise as an instrument to identify and eliminate discrimination that may otherwise stay hidden. Regulators can help by making clear to lenders that modern AI tools can help companies comply with fair lending laws.

From the companies’ perspective, the power of new AI tools lies not only in their ability to help lenders comply with regulations, but also in their ability to expand credit access to applicants who have traditionally been underserved or considered too risky by legacy underwriting models.

The key recommendations offered within the joint letter include:

  1. Don’t wait for perfect information to act:
    AI will continue to evolve rapidly. Regulators should use supervisory highlights to surface best practices within the industry.
  2. Provide written guidance on activity that triggers fair lending oversight:
    The CFPB should provide clearer guidelines on the conditions that would require a lender to conduct a Less Discriminatory Alternative (LDA) search, as well as how often such searches must be conducted.
  3. Clarify that fair lending applies not only to how applicants are treated, but also how they are selected:
    Evaluating the creditworthiness of applicants can begin at the earliest stages of the lending process, including during marketing campaign planning. AI tools that assess an applicant’s risk more comprehensively should be adopted at these earlier stages and favored over older models and tools.
  4. The FHFA should continue to build upon its 2022 AI Advisory Opinions:
    The prior advisory opinions offered AI-specific guidance to the government-sponsored enterprises (GSEs) based on select use cases with the potential to improve housing finance for consumers.
  5. The CFPB should make fair lending compliance as high a priority as every other part of the lending process:
    For companies using AI in credit decisioning, the CFPB should make clear that relying on outdated tools is not sufficient to remain compliant with fair lending laws.
  6. Supervisory examination and training should address routine review of financial institutions’ model testing protocols and results:
    Fair lending examinations should also include reviews of the models used, testing protocols and the results of LDA searches. Data on the efficacy of tools and practices should be shared in a forum with regulators and policymakers.

Bakari Levy is a Government Affairs Associate at NCRC.

Photo by BoliviaInteligente on Unsplash
