A quick way to identify what changes are needed in Community Reinvestment Act (CRA) regulations is to look at recent CRA exams, which show how the most up-to-date examination techniques shape CRA outcomes. An informal survey of exams conducted in the last year reveals some areas in glaring need of reform – and offers hints at what effective change might look like.
Overall conclusions and ratings need more clarity and transparency.
CRA exams discuss overall conclusions for the subtests in their first few pages. For the layperson, the descriptions can seem vague. For example, one large bank exam stated that for home loans by geography (percentage of home loans in low- and moderate-income (LMI) tracts), performance was “adequate in Tennessee, Kentucky, New Jersey, Pennsylvania, and the Columbus multistate MSA. In all other rated areas, the bank’s performance was good.”
To translate these adjectives, a layperson would have to pull up an obscure part of the CRA regulations entitled Appendix A. “Adequate” actually means “low satisfactory” and “good” means “high satisfactory.” It would be more helpful for the reader if the adjectives were replaced with terms that directly describe the subtest ratings, such as “low satisfactory” or “high satisfactory.”
Clearer verbal labels would help, but cannot on their own provide full clarity. Examiners could also produce a summary table showing the rating for each criterion, for each subtest, and for each geographical area. This would allow the reader to better understand how the examiner arrived at the overall rating and to pinpoint geographical areas where performance was weak or strong.
More precision needed for conclusions on lending test performance measures.
A recent Federal Deposit Insurance Corporation (FDIC) examination of an intermediate small bank headquartered in College Station, Texas, seemed to stretch conclusions in order to pass the bank, particularly on the lending test. In the bank’s hometown of College Station, the examiner stated:
“The bank originated 63.9% of the sampled small business loans by number to businesses with Gross Annual Revenues of $1 million or less. While the bank’s performance falls below Dun and Bradstreet (D&B) demographic data [on the percent of businesses with revenues of $1 million or less] by 18.3 percentage points, the bank’s overall level reflects reasonable performance.”
[…]
“The bank’s level of lending to moderate-income borrowers falls below aggregate data (all lenders) by 4.0 percentage points, also reflective of reasonable performance.”
In these cases, the bank’s percentage of loans fell below either a demographic benchmark (the percent of businesses with revenues of $1 million or less) or an industry benchmark (the percent of loans issued by all lenders), in the first case by more than 18 percentage points. Nevertheless, the exam judged the performance to be reasonable, which generally corresponds to a Satisfactory rating.
In a recent position paper, NCRC urged the agencies to develop guidelines for examiners that would include ratings of a bank’s lending performance relative to demographic and industry benchmarks. In NCRC’s experience, lenders tend to cluster around an average or median percentage of loans to a given group of borrowers, suggesting that a result four percentage points below an industry benchmark is closer to low satisfactory than to satisfactory performance. Likewise, a percentage of loans almost 20 percentage points below a demographic benchmark should not be regarded as reasonable or satisfactory.
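That recommendation can be made concrete with a small Python sketch. The gap thresholds and rating labels below are hypothetical illustrations of how a benchmark-based guideline might work; they are not figures NCRC or the agencies have proposed.

```python
def benchmark_gap_rating(bank_pct, benchmark_pct):
    """Translate the gap between a bank's share of loans and a benchmark
    into an illustrative rating band (thresholds here are hypothetical)."""
    gap = bank_pct - benchmark_pct  # negative when the bank trails the benchmark
    if gap >= 0:
        return "High Satisfactory or better"
    if gap >= -3:
        return "High Satisfactory"
    if gap >= -8:
        return "Low Satisfactory"
    return "Needs to Improve"

# Small business lending in the College Station exam: 63.9% of sampled loans went to
# businesses with revenues of $1 million or less, versus a D&B benchmark of 82.2%.
print(benchmark_gap_rating(63.9, 82.2))   # -> Needs to Improve

# Moderate-income home lending: 4.0 points below the aggregate of all lenders
# (the exam reports only the gap, so the underlying shares here are placeholders).
print(benchmark_gap_rating(16.0, 20.0))   # -> Low Satisfactory
```

Published guidelines along these lines would let banks and the public see in advance how a given shortfall against a benchmark translates into a rating.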
A good start on community development financing benchmarks, but more progress is needed.
Community development (CD) financing refers to loans and investments for affordable housing, economic development or community facilities. This is vital financing that helps a community revitalize economically so that it can attract and retain residents. Exams tend to compare a bank’s level of CD financing to its capacity as measured by assets or capital. The exam will also indicate if CD financing has increased or decreased since the last exam, which should also factor into the rating.
For example, the large bank exam discussed above stated:
“During this evaluation period, the bank originated 1,773 community development loans totaling $6.3 billion. This performance represents 2.8% of the bank’s average total assets ($225.0 billion).”
[…]
“Since the prior evaluation, the number and dollar volume of community development loans increased by 53.5% and 39.2% respectively.”
These measures are useful but not sufficient. They do not compare the bank to its peers, for example through a ratio of CD loans to assets for all large banks. Another exam, of a large bank in the Northwest, hints at such a comparison. It stated:
“The bank’s community development lending levels surpassed all of its similarly situated institutions. Specifically, the bank extended 1,253 community development loans totaling $3.1 billion throughout its assessment areas.”
The first sentence hints at a comparison to similarly situated banks but does not offer any comparison ratios. Comparison to peers is an important part of determining ratings and scores for the subtests. While the number and dollar volume of CD lending appear impressive, the full context for judging performance is not presented – a frustrating failure.
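The missing context is straightforward to supply. The sketch below computes the ratio from the first large bank exam quoted above and compares it to a peer median; the peer median is hypothetical, since exams do not currently publish one.

```python
def cd_loans_to_assets(cd_loans_bil, avg_assets_bil):
    """Community development loans as a percent of average total assets."""
    return 100 * cd_loans_bil / avg_assets_bil

# Figures from the first large bank exam quoted above (in billions of dollars).
bank_ratio = cd_loans_to_assets(6.3, 225.0)   # 2.8%

# Hypothetical peer median for all large banks; exams do not report this today.
peer_median = 2.0

print(f"Bank CD loans / average assets: {bank_ratio:.1f}%")
print(f"Versus hypothetical peer median: {bank_ratio - peer_median:+.1f} percentage points")
```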
Qualitative analysis as well as quantitative analysis needs to improve.
Many exams do not carefully analyze how responsive the community development financing is to community needs. One exception to this unfortunate pattern demonstrates how valuable a careful needs inquiry would be across all CRA exams. In the first exam discussed above, the examiner noted that rural North Carolina is experiencing economic restructuring that necessitates CD financing focused on job creation and workforce training, and that the bank provided this type of CD financing.
The exam also stated:
“In order to maintain economic vitality in rural areas, there is a need to create leadership training, capacity building, and technical resources for community leaders to help them manage the demands of economic transition.”
After identifying this need, the exam documented grant funding the bank devoted to training local leadership. This detailed assessment of how banks do or don’t tailor CD activity to community needs is rare on CRA exams and should occur more frequently.
Service test needs more data on alternative service delivery.
The service test of the large bank exam focuses on the distribution of branches by income level of census tracts. As banks have accelerated branch closures in recent years, exams have also paid increased attention to banks’ track records of opening and closing branches. In particular, the first large bank exam discussed above calculated the percentages of branch openings and closings by income level of census tracts to assess whether closings were disproportionately affecting LMI tracts.
Given the trend of accelerating branch closures, exams also need to do a better job of assessing whether banks have compensated by finding other ways to provide deposit accounts and services to LMI borrowers and census tracts. This requires more data on deposit accounts, as the Federal Reserve acknowledged in its Advance Notice of Proposed Rulemaking.
Large banks have the capacity to generate this data, but examiners are not presenting it to the public. Another exam, of a very large bank, documented the income levels of borrowers served by alternative delivery systems (ADS), which, as the name implies, involve non-branch delivery of services and products.
The exam concluded:
“Digital banking platform usage by LMI households and customers in LMI geographies was comparable to usage by [middle- and upper-income (MUI)] households and customers in MUI geographies. Based on customer usage data for ATMs and other digital banking platforms during the evaluation period, ADS had a positive effect on the overall service delivery systems conclusion.”
The examiner’s assertion suggests that customers are equally comfortable and content with non-branch services regardless of their income level – but the exam provides no data to back that assertion. Statements like these need to be supported by data tables and benchmarked against demographic and industry-wide figures.
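As an illustration of what that support could look like, the sketch below benchmarks digital banking adoption by income group. Every number is invented for illustration, since the exam discloses no ADS usage data.

```python
# All figures below are invented for illustration; the exam publishes no ADS data.
# "Adoption" = share of the bank's customers in each group actively using
# the digital banking platform during the evaluation period.
bank_adoption = {
    "LMI households": 0.58,
    "MUI households": 0.71,
}

# A demographic or industry-wide reference point an exam could cite, such as
# survey data on online banking use by income (again, hypothetical numbers).
benchmark_adoption = {
    "LMI households": 0.62,
    "MUI households": 0.74,
}

for group, rate in bank_adoption.items():
    gap = (rate - benchmark_adoption[group]) * 100
    print(f"{group}: {rate:.0%} adoption, {gap:+.1f} points versus benchmark")
```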
This review of CRA exams suggests that improvements such as more consistent use of data and stronger qualitative analysis are both possible and desirable. We look forward to making these suggestions in the CRA reform effort slated to start shortly.
Josh Silver is a Senior Policy Advisor at NCRC.