In anticipation of the reform proposals to the Community Reinvestment Act (CRA) expected this week, I am continuing the review of performance measures on CRA exams. The most recent blog looked at performance measures on the lending test. This one will scrutinize performance measures on the investment and service tests.
While one of the agencies, the Office of the Comptroller of the Currency (OCC), is poised to propose “transformational” or wide-ranging change to CRA exams, a more prudent approach is refining and sharpening the existing performance measures. The comments on the OCC’s Advance Notice of Proposed Rulemaking (ANPR) largely rejected a transformational “one ratio” approach and instead pleaded for more clarity and consistency on CRA exams. This review of performance measures illustrates how increasing clarity can address legitimate concerns without unnecessarily turning CRA exams upside down.
For starters, the National Community Reinvestment Coalition (NCRC) has advocated replacing the investment test with a community development (CD) test. The CD test would consider both community development lending and investing. CD financing refers to lending and investing that supports affordable housing, economic development and community facilities. It differs from retail lending in that retail lending finances individual homebuyers, homeowners or small businesses, whereas community development financing has a larger community impact, such as financing the creation or rehabilitation of a commercial corridor serving a low- and moderate-income (LMI) neighborhood.
The rationale for combining CD lending and investment is that both types of financing support community development. Banks have complained that keeping CD lending and investment in separate tests forces them to maintain minimal levels of both in order to pass each test, rather than considering which form of financing (lending or investment) is optimal or most efficient for the particular project in question. Community advocates have worried that combining the tests may further shortchange investment; NCRC has found that current CD investment levels are significantly lower than CD lending. A response to this concern is to require examiners to separately evaluate CD lending and investment totals on a CD test and to opine on whether the mix responds adequately to local needs in each geographical area (assessment area or AA).
A CD test, like the current lending and investment tests, would contain both quantitative and qualitative criteria. The quantitative criteria include examining the dollar amount of CD financing and comparing it to bank capacity.
Here are some examples from current CRA exams:
CD lending has a significantly positive impact on lending performance in the Chicago AA. The bank demonstrated excellent responsiveness to CD lending needs and opportunities. The bank made 48 CD loans totaling $273.8 million during the evaluation period. By dollar volume, 60 percent of these loans provide affordable housing to LMI persons (651 units created or rehabilitated), 35 percent support community services for LMI persons, 4 percent fund community revitalization projects, and the remainder promote economic development. The dollar volume of CD lending represents 43 percent of allocated Tier 1 Capital for the Chicago AA. CD loans include:
- Term financing for a 227-unit apartment building, with 152 units allocated to tenants earning 60 percent or less of the area median income.
- Term financing for a 145-unit housing complex for seniors, with all units allocated to tenants earning 65 percent or less of the area median income.
- A working capital line of credit to a hospital whose charity health care for LMI patients constitutes a majority of its service costs.
Here is another example:
The bank is a leader in making community development loans. During the evaluation period, the bank extended 1,155 community development loans for $4.5 billion. Community development loans account for 3.3 percent of total loans and 2.1 percent of total assets as of December 31, 2016.
Since the last evaluation, the number and dollar volume of community development loans increased by 30 and 95 percent, respectively.
These examples, though brief, contain a series of performance measures that need to be unpacked. First, let us consider a quantitative performance measure:
Dollar Amount of CD Financing Compared to Bank Capacity
In the first example, the examiner compares CD lending to Tier 1 capital. OCC exams tend to use Tier 1 capital and allocate Tier 1 capital to various AAs based on the pro-rata percent of a bank’s deposits in the AA. The finding that CD lending consumes almost half of Tier 1 capital in the Chicago AA would suggest that the bank is performing at a high level on this quantitative measure. However, without any peer comparisons, deciding whether this performance is “Outstanding” or merits a lower rating is guesswork. NCRC has argued for the creation of a community development financing database, similar to the Home Mortgage Disclosure Act (HMDA) data, on a county or census tract level that would facilitate peer comparisons.
In the second example, the exam indicates that the level of CD lending equals 3.3% of total loans and 2.1% of total assets. While this may appear to be low, it could be a reasonable level of CD financing compared to peers. Again, the exams need to use regularly collected data to make peer comparisons in order to designate a rating or score for this performance measure.
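To make these quantitative measures concrete, here is a minimal Python sketch of the pro-rata Tier 1 capital allocation described in the first example. All dollar figures below are hypothetical, chosen only to produce a ratio in the ballpark of the exam excerpt; they are not drawn from any actual exam data.

```python
def allocated_tier1_capital(tier1_capital: float,
                            aa_deposits: float,
                            total_deposits: float) -> float:
    """Allocate a bank's Tier 1 capital to an assessment area (AA)
    pro rata by the AA's share of the bank's total deposits."""
    return tier1_capital * (aa_deposits / total_deposits)

def cd_lending_ratio(cd_lending: float, allocated_capital: float) -> float:
    """CD lending as a share of the capital allocated to the AA."""
    return cd_lending / allocated_capital

# Hypothetical figures: $2B Tier 1 capital, $800M of $2.5B deposits in the AA.
allocated = allocated_tier1_capital(2_000_000_000, 800_000_000, 2_500_000_000)
print(f"{cd_lending_ratio(273_800_000, allocated):.0%}")  # prints "43%"
```

The calculation itself is trivial; the point of the text is that without peer data, there is no benchmark against which to judge the resulting percentage.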
If a bank is significantly above the peer ratio, it could merit an Outstanding rating; if it is modestly above, High Satisfactory; if it is roughly equal to the peer ratio, Satisfactory; and the degree to which it falls below the peer ratio would determine whether Needs-to-Improve or Substantial Noncompliance is assigned. These ratings are similar to those used now on component tests. As suggested in the previous blog on retail lending, NCRC recommends using the ratings to describe performance on individual criteria and establishing guidelines, such as comparisons to peer banks, that would direct the designation of these ratings. The guidelines would state explicitly that the extent to which a bank is above or below peer performance would help determine the ratings. Currently, no such guidelines exist.
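The guideline structure described here is essentially a decision rule mapping a bank-to-peer comparison onto the five component ratings. A minimal sketch follows; the numeric thresholds (for example, 25% above peers for Outstanding) are hypothetical placeholders that actual guidelines would have to set through rulemaking.

```python
def rating_from_peer_comparison(bank_ratio: float, peer_ratio: float) -> str:
    """Map a bank's performance ratio, relative to the peer ratio,
    onto the five component ratings. All thresholds are hypothetical."""
    relative = bank_ratio / peer_ratio
    if relative >= 1.25:   # significantly above peers
        return "Outstanding"
    if relative > 1.0:     # modestly above peers
        return "High Satisfactory"
    if relative >= 0.9:    # at or near the peer ratio
        return "Satisfactory"
    if relative >= 0.6:    # below peers
        return "Needs to Improve"
    return "Substantial Noncompliance"   # far below peers

print(rating_from_peer_comparison(0.43, 0.30))  # prints "Outstanding"
```

The value of publishing such a rule is not the particular cutoffs but that banks and community groups could verify how a rating was reached.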
Qualitative Criteria: Responsiveness
In addition to the quantitative criteria, exams consider qualitative criteria like responsiveness. The application of qualitative criteria could use significant improvement. In the first example above, the allocation of CD lending is discussed: 60% is for affordable housing and 35% for community services. The remaining 5% is for economic development and neighborhood revitalization. Exams have become better at displaying the allocation of community development financing, but that is where they stop. They tend not to comment on whether just 5% for economic development and neighborhood revitalization is responsive to needs in Chicago, for example.
Suppose unemployment in Chicago was relatively high, higher than in most other areas of the state. Then the examiner would expect the bank to allocate more community development financing to supporting economic development, job creation and small business growth. NCRC has advocated for better performance context analysis that involves the utilization of economic and social metrics such as unemployment rates and housing affordability to compare AAs against each other and to determine priority needs in each AA. If a bank’s allocation of CD financing is skewed away from priority needs, then it should score poorly, Low Satisfactory or less, on the qualitative criterion of responsiveness. Also, if a great majority of financing is for one need, such as affordable housing, and that need is not necessarily a priority, then a lower rating could be justified. Again, no regulatory guidelines along these lines exist to judge qualitative aspects of performance.
The affordable housing for low-income and older adult tenants in the first example is commendable; however, housing exclusively for lower-income people raises the question of whether the bank is engaged in any activities that promote integration. For example, is it supporting affordable housing in more affluent suburban or urban neighborhoods? Does the local community have a campaign promoting integration or services for recent immigrants that the bank is supporting? These needs are rarely discussed on CRA exams, although we are learning as a country that segregation contributes significantly to increasing inequality.
Another important means of determining scores on the responsiveness criterion is to consider community group comments on CRA exams. Exams now record public comments in a generic manner and do not come close to utilizing the full potential of public input. For instance, suppose several stakeholders provided written and verbal comments that banks are not utilizing local public sector downpayment homeownership programs or not providing lines of credit needed by nonprofit housing developers. If a particular bank was the exception and helping to finance affordable homeownership programs and/or nonprofit developers, this would help boost its performance on the responsiveness criterion.
The qualitative criterion of responsiveness would have guidelines indicating, for each AA, the extent to which banks were responding to priority needs identified via data analysis and/or public comments. Also, while some needs may merit more financing than others during the exam cycle, it is important to have balance and address a variety of needs; spending a great majority of dollars on one need would probably weigh negatively on the bank’s rating. While guidelines for qualitative criteria are complex and may lead to inevitable charges of subjectivity, establishing at least some guidelines would improve the rating process. Currently, the lack of guidelines on qualitative criteria has led to frustration with inconsistent exams. Moreover, establishing guidelines enables stakeholders to see the extent to which they are applied on exams and to better frame their public comments on bank performance.
Summing Performance on the Community Development Test
This blog did not review all the criteria currently on the CD part of the lending test or the investment test. However, using the quantitative ratio and the qualitative measure of responsiveness discussed here as examples, the scores on both criteria can be summed for each AA. Scores of 1 to 5 could correspond to each rating from Substantial Noncompliance to Outstanding. The quantitative and qualitative criteria could also be weighted to reflect judgments of their relative importance; for example, the quantitative criteria could be weighted at 60% and the qualitative criteria at 40%. A shortcoming of the OCC’s one ratio concept is that it is difficult to see how it would contain any qualitative criteria. As a result, banks could favor quantity, or large dollar financing volumes, over quality, or responsiveness to needs. This could mean financing large infrastructure or other deals while neglecting important community needs such as small-dollar business or consumer lending. Exam criteria, therefore, must judiciously balance quantitative and qualitative criteria.
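The summing and weighting described above amount to a simple weighted average of the 1-to-5 criterion scores for each AA. A minimal sketch, assuming the illustrative 60/40 split mentioned in the text:

```python
# Illustrative weights from the text: 60% quantitative, 40% qualitative.
QUANT_WEIGHT, QUAL_WEIGHT = 0.6, 0.4

def aa_cd_score(quantitative: int, qualitative: int) -> float:
    """Combine 1-5 criterion scores (1 = Substantial Noncompliance,
    5 = Outstanding) into one weighted CD-test score for an AA."""
    return QUANT_WEIGHT * quantitative + QUAL_WEIGHT * qualitative

# A bank that is Outstanding on the quantitative ratio (5) but only
# Satisfactory on responsiveness (3) would score 0.6*5 + 0.4*3 = 4.2.
score = aa_cd_score(5, 3)
```

The weighting is a policy choice, not a technical one; the point is that any one-ratio scheme has no slot for the qualitative term at all.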
Like the investment test, the service test has a number of criteria, both quantitative and qualitative, for measuring performance. Some are better developed than others, but with refinement rather than abandonment, CRA reform can improve them. Here are highlights:
Distribution of Branches
CRA exams compare the percentage of branches, and sometimes ATMs, in LMI tracts to the percentage of the population that resides in LMI tracts. The distribution of branches is weighed more heavily, as it should be, since branches provide a full range of services including lending as well as deposit-related activities. Guidelines could ensure more consistency. For example, peer comparisons involving the percentage of branches of all other banks in the area are not often made; they should be, since such comparisons are common on other performance measures. If a bank’s percentage of branches in LMI tracts is greater than both the percentage of the population and that of peer banks, it could receive an Outstanding rating. If it is close to the demographic measure and higher than the peer measure, it could receive High Satisfactory. If it is lower than the demographic measure, and equal to or modestly lower than the peer measure, it could receive Low Satisfactory. The two lowest ratings would be reserved for banks below both measures.
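The branch-distribution logic described in this section can also be written as an explicit decision rule. In the sketch below, the five-percentage-point “close to” tolerance is an assumption for illustration; actual guidelines would have to define it.

```python
def branch_distribution_rating(bank_pct: float,
                               demo_pct: float,
                               peer_pct: float,
                               tolerance: float = 0.05) -> str:
    """Rate the share of a bank's branches in LMI tracts (bank_pct)
    against the LMI population share (demo_pct) and the LMI branch
    share of peer banks (peer_pct). The tolerance is hypothetical."""
    if bank_pct > demo_pct and bank_pct > peer_pct:
        return "Outstanding"
    if abs(bank_pct - demo_pct) <= tolerance and bank_pct > peer_pct:
        return "High Satisfactory"
    if bank_pct <= demo_pct and bank_pct >= peer_pct - tolerance:
        return "Low Satisfactory"
    return "Needs to Improve or Substantial Noncompliance"

# E.g., 30% of branches in LMI tracts vs. a 25% LMI population share
# and a 22% peer branch share would merit Outstanding.
print(branch_distribution_rating(0.30, 0.25, 0.22))  # prints "Outstanding"
```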
One inconsistency is how branches in nearby tracts that are not LMI are considered. I have seen some OCC exams that count them if they are “across the street” or “within blocks,” and others that count branches as far as one-half mile away. A half-mile is too far, especially for the elderly and other populations with limited mobility. Other exams consider branches “within blocks” to be within one-quarter of a mile; this could be reasonable. The agencies should develop common guidelines, open to public comment, for how nearby branches are considered.
Record of Opening and Closing Branches
Currently, exams make sure that the bank’s record of opening and closing branches does not disproportionately harm LMI tracts. This is an important criterion because recent Federal Reserve research has demonstrated that after considering economic factors (such as profitability and employment levels), CRA has helped prevent branch closures in LMI tracts. Additional Federal Reserve research has found that LMI customers, particularly small businesses, rely on branches for deposit transactions and lending.
Range of Services Including Alternative Service Delivery Systems
The range-of-services and delivery criteria are not consistently applied on CRA exams. The range of services includes hours of operation. This is an important indicator of service because the Congressional hearings leading to the passage of CRA documented that branch hours of operation were much shorter in inner-city neighborhoods than in suburban, white and affluent ones. Checking that hours of operation are equitably distributed across branches is an important check against redlining.
In addition, alternative service delivery such as mobile banking has received considerable attention in the last few years. The agencies updated the Interagency Question and Answer (Q&A) document in 2016 to include a Q&A on alternative service delivery. It stated that factors such as ease of use and rate of use would be considered. As a result, readers of CRA exams will periodically see descriptions such as this one:
The growth rate of new accounts for customers residing in LMI geographies is significantly higher than the growth rate of new accounts for customers residing in MUI geographies. The bank’s internal data also shows an increase in the usage of Alternative Delivery Systems (ADS) by customers residing in LMI geographies, and ADS usage by customers residing in LMI geographies exceeds the ADS usage by customers residing in MUI geographies. The proportion of the bank’s LMI customers using ADS is significantly greater than the conservatively estimated population of fully banked LMI consumers.
This description is a step in the right direction but falls short of a rigorous evaluation of bank service provision. The examiner actually compared the percentage of fully banked customers in the area (using the FDIC survey on unbanked and underbanked populations) to the percentage of bank customers using ADS. The examiner concluded that the percentage of customers who were LMI and using the bank’s ADS was greater than the percentage of fully banked LMI customers.
While this is encouraging, the data in this CRA exam narrative is confusing and hidden. The exam does not present actual numbers and percentages of accounts, although the examiner clearly had this data. Also, it is not clear when the exam is referring to accounts in general and when to accounts generated via ADS. It would be useful to have a table that breaks this down. NCRC has advocated that data on the number and percentage of accounts for customers who are LMI and/or by income of census tract be provided on exams. This example suggests it is possible, but it has not been implemented due to interest group pressures (some banks and trade associations resist this). If the agencies implemented regular data collection and dissemination similar to HMDA data, more consistent guidelines such as those discussed above for lending and branching could be developed for this important measure.
Another aspect of service that is not discussed on CRA exams is cost. The interagency Q&A encourages the provision of low-cost deposit accounts and also states that the cost of alternative service delivery should be compared against the cost of the bank’s other delivery systems. Further developing an analysis of the cost of services is desirable, since services that gouge the consumer, particularly LMI consumers, do not truly serve community needs. For this round of CRA reform, including cost considerations in the qualitative criteria, with guidelines calling for a comparison of pricing within and across banks for LMI and non-LMI customers, would be an advance.
Community Development Services
Under the service test, community development (CD) services refer to activities that are related to the provision of financial services or have community development as their primary purpose. In this section of the exam, it is common to see a discussion of financial education for consumers, homeowners, homebuyers or small businesses. Also, bank service on the board of directors of nonprofit organizations involved in community development is reviewed.
A shortcoming is that it is hard for banks or community groups to determine how much is enough. Some exams measure the provision of CD services in units and others use hours. A unit is a confusing measure because it is hard to know to what a unit refers. Is it one hour or some other time frame? Hours would seem to be more intuitive. If the agencies created a database of CD hours from CRA exams, they would at least be able to describe ranges of annual hours for banks of various asset sizes and would be able to develop guidelines regarding median time periods. Banks that are far above or below the medians would receive Outstanding or failing ratings on this measure.
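If the agencies built such a database of CD service hours, the resulting guideline could be as simple as comparing a bank’s annual hours to the median for similarly sized peers. In this sketch, the 2x and 0.5x cutoffs and all the hour figures are hypothetical.

```python
from statistics import median

def cd_services_rating(annual_hours: float, peer_hours: list[float]) -> str:
    """Compare a bank's annual CD service hours to the median reported
    by similarly sized peer banks. Cutoffs here are hypothetical."""
    peer_median = median(peer_hours)
    if annual_hours >= 2 * peer_median:    # far above the median
        return "Outstanding"
    if annual_hours >= peer_median:
        return "Satisfactory or better"
    if annual_hours >= 0.5 * peer_median:
        return "Needs to Improve"
    return "Substantial Noncompliance"     # far below the median

# Peer median here is 1,000 hours, so 2,500 hours is far above it.
print(cd_services_rating(2_500, [800, 1_000, 1_200]))  # prints "Outstanding"
```

Measuring in hours rather than undefined “units” is what makes this kind of comparison possible in the first place.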
The measure should also have a qualitative component that constitutes 20% to 30% of the overall rating on this criterion. Banks should be asked to document impact. For example, if a staff person delivered 12 one-hour lectures to a financial education class held by a nonprofit over the course of a year, how many of the clients increased their savings or improved their credit scores? The more data on impact and on how the services responded to local needs, the higher the score on the qualitative component of CD services.
When I talk to CRA bank staff, I encounter frustration at exam inconsistencies but not an overwhelming desire to completely revamp CRA exams. Nor have I encountered CRA bank staff who are enthusiastic about a one ratio approach. Based on these conversations, I would surmise that the incremental changes and clearer guidelines proposed above would carry more currency with bank CRA staff than a topsy-turvy “transformation” of CRA. Even if bank staff did not agree with a particular suggestion, I would bet that the overall concept of incremental change would resonate better with them. Reforms will only be enduring if there is sufficient support or acceptance from all constituencies. I think NCRC’s suggestions are more conducive to enduring reform.
Josh Silver is a Senior Advisor for NCRC.