Evaluation of Rating Differences Between the
FDIC and Other Primary Federal Regulators


February 8, 2002
Evaluation Report No. 02-001

FDIC
Federal Deposit Insurance Corporation
Office of Audits
Office of Inspector General
Washington, D.C. 20434

DATE: February 8, 2002

TO: Michael J. Zamorski, Director, Division of Supervision

FROM: Russell A. Rau [Electronically produced version; original signed by Russell A. Rau], Assistant Inspector General for Audits

SUBJECT: Evaluation of Rating Differences Between the FDIC and Other Primary Federal Regulators (Evaluation Report No. 02-001)

The Office of Inspector General (OIG) initiated this evaluation in response to issues related to the failure of Superior Bank, FSB, Hinsdale, Illinois, which was placed into receivership on July 27, 2001. Superior Bank was a federally chartered savings association supervised by the Office of Thrift Supervision (OTS). In 1999, the Federal Deposit Insurance Corporation (FDIC) internally reduced the CAMELS composite rating assigned to Superior Bank by the OTS based on the results of the 1999 OTS examination. (Note: The CAMELS rating for an institution is part of the Uniform Financial Institutions Rating System which is used to evaluate the soundness of institutions on a uniform basis and to identify institutions requiring special attention. The CAMELS acronym represents each of the factors that are rated: Capital, Asset Quality, Management, Earnings, Liquidity, and Sensitivity to Market Risk. Appendix II provides an overview of the Uniform Financial Institutions Rating System.) Specifically, the OTS assigned Superior Bank a composite CAMELS rating of "2," and the FDIC assigned a composite CAMELS rating of "3." The FDIC and OTS subsequently agreed on the assigned composite CAMELS rating during the next examination. In light of the 1999 rating difference between the FDIC and OTS reported in the chronicles of the Superior Bank case, the OIG anticipated that there might be congressional interest in knowing how often the FDIC disagreed with the composite CAMELS rating assigned by the primary federal regulator. (Note: The primary federal regulators include the FDIC, the OTS, the Office of the Comptroller of the Currency (OCC), and the Board of Governors of the Federal Reserve System.) Thus, the objectives of this evaluation were to identify the extent to which there are rating differences between the FDIC and the primary federal regulator and to evaluate the process for resolving those differences.

We identified few rating differences during the period covered by our review. In addition, case managers told us that rating differences were rare. Rating differences generally result when the FDIC case manager’s evaluation of the condition of the institution differs from that of the primary federal regulator based on the case manager’s review of the primary federal regulator’s report of examination and other information routinely obtained, including data from the FDIC’s off-site monitoring systems. (Note: Existing off-site monitoring systems include the Growth Monitoring System (GMS), the Large Insured Depository Institution (LIDI) Program, and the Statistical CAMELS Off-site Rating (SCOR).) The process for resolving rating differences centers on communication between the FDIC and the primary federal regulator.

Based on the cases we reviewed, we concluded that the FDIC was working with the primary federal regulators to evaluate the issues underlying these rating differences and, more generally, the condition of the institutions. Additionally, the majority of case managers characterized communication and their working relationships with their counterparts at the federal banking regulatory agencies as good or very good. (Note: Examples of counterparts at the federal banking agencies include regional reserve bank team leaders at the Federal Reserve, OCC examiners-in-charge, and OTS review examiners.) We found this to be especially significant because in all of the cases with rating differences that we reviewed, the FDIC had assigned the institutions CAMELS ratings that indicated some degree of supervisory concern. Nonetheless, a few case managers raised general concerns about the FDIC’s special examination authority. (Note: Section 10(b)(3) of the Federal Deposit Insurance Act provides FDIC examiners with the power to make special examinations of any insured depository institution whenever the Board of Directors determines a special examination of any such depository institution is necessary to determine the condition of such depository institution for insurance purposes.) For example, some case managers stated that cooperation could be improved among the regulators when the FDIC participates in examinations along with the primary federal regulator or requests additional information. The Office of Audits conducted a separate follow-up review related to the issue of the FDIC’s use of special examination authority and DOS’s efforts to monitor large bank insurance risks.

Appendix I describes our scope and methodology in detail. In brief, the Division of Supervision (DOS) and the Division of Insurance (DOI) provided us with reports dated June 27 and July 1, 2001, respectively, that we used to identify those instances where there were rating differences. We met with selected DOS and DOI officials in Washington, D.C.; Atlanta, Georgia; Boston, Massachusetts; Chicago, Illinois; Dallas, Texas; New York City, New York; and San Francisco, California. We conducted our review from August to November 2001 in accordance with the President’s Council on Integrity and Efficiency’s Quality Standards for Inspections.

BACKGROUND

The FDIC shares supervisory and regulatory responsibility for approximately 9,796 banks and savings institutions with other regulatory agencies including the Board of Governors of the Federal Reserve System, OCC, the OTS, and state authorities. (Note: The number of banks and savings institutions is based on data obtained from DOS’s Case Managers’ Work Load Summary dated July 12, 2001.) The FDIC is the primary federal regulator for 5,579 federally insured state-chartered commercial banks that are not members of the Federal Reserve System, that is, state nonmember banks, including state-licensed branches of foreign banks and state-chartered mutual savings banks.

As the insuring agency, the FDIC strives to keep abreast of developments that occur in all insured depository institutions to determine their potential risks to the deposit funds. The FDIC’s Regional Case Manager Program was implemented in 1997 to significantly enhance risk assessment and supervision activities by assigning responsibility and accountability for a caseload of institutions or companies to one individual, regardless of charter or location, and by encouraging a more proactive, but non-intrusive, coordinated supervisory approach. An equally important goal of the program was to promote better communication and coordination between the FDIC, other state and federal regulators, and the banking industry. The FDIC monitors insured institutions’ efforts to appropriately manage risks through on-site examinations and off-site reviews.

For both the FDIC supervised and the non-FDIC supervised institutions, case managers rely on reports of examination to determine the financial condition and risks to the deposit insurance funds. Case managers review these reports to determine whether problems and risks have been identified and appropriate corrective actions are being taken. As part of the review process, the FDIC Case Managers Procedures Manual states that case managers should also review other relevant information, such as

  • the previous examination report,
  • any correspondence received since the previous examination,
  • the Uniform Bank Performance Report (UBPR) (Note: UBPR is an analytical tool created for bank supervisory, examination, and management purposes. The performance and composition data contained in the report can be used as an aid in evaluating the adequacy of earnings, liquidity, capital, asset and liability management, and growth management.),
  • off-site monitoring systems, and
  • all memoranda and documentation submitted with the report of examination.

For OTS examination reports, case managers can also review the latest financial data on the thrift from the Uniform Thrift Performance Report and Thrift Financial Report. For OCC reports, FDIC case managers should also review information in OCC’s system, Examiner View, which contains expanded examination data, as well as other supervisory and financial issues related to a specific institution.

Based on their evaluation of this information, case managers are responsible for ensuring that the assigned CAMELS ratings are appropriate. Case managers are also responsible for reviewing the supervisory subgroup assignments (insurance rating) as part of the semiannual insurance assessment process under the FDIC’s Risk Related Premium System (RRPS). (Note: An overview of RRPS is provided in Appendix II.) Supervisory subgroup assignments tie into the CAMELS examination ratings system. This insurance rating, coupled with the capital group assignments, is used by the FDIC to assess premiums on individual institutions. Thus, rating differences can also occur during the semiannual assessment process if a case manager determines that there is a basis for overriding the supervisory subgroup rating indicated by the primary federal regulator in the RRPS. Essentially, these differences are determined based on the case manager’s evaluation of information similar to that used in evaluating a primary federal regulator’s report of examination, that is, off-site monitoring reports, information available from the other regulators’ information systems, targeted examination or visitation reports, and other correspondence from state and federal regulators. The FDIC makes the final determination for insurance ratings.

As described in greater detail in Appendix II, the composite CAMELS rating is the primary driver of the supervisory strategy for the FDIC insured institutions and a factor in determining the appropriate supervisory subgroup assignment (insurance rating). A "1" indicates the highest rating, strongest performance and risk management practices, and least degree of supervisory concern, while a "5" indicates the lowest rating, weakest performance, inadequate risk management practices, and thus, the highest degree of supervisory concern. There is a different degree of supervisory concern between a "2" rated and a "3" rated institution. For example, a "2" rating indicates there are no material supervisory concerns and, as a result, the supervisory response is informal and limited. A "3" rated institution requires more than normal supervision, which may include formal or informal enforcement actions. Similarly, if the FDIC overrides the primary federal regulator’s "2" rating and assigns an institution a "3" rating as part of the semiannual insurance assessment process, the supervisory subgroup assignment would be affected. In this case, the supervisory subgroup assignment would change from Subgroup A to Subgroup B. Thus, the resolution of rating differences is important to ensure that an institution receives the appropriate level of supervision or insurance assessment.
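
To make the relationship between the composite rating and the supervisory subgroup concrete, the following brief Python sketch (illustrative only, not an FDIC system) applies the mapping described in Appendix II and shows how an FDIC override from a "2" to a "3" moves an institution from Subgroup A to Subgroup B:

    # Illustrative sketch only; the mapping follows Appendix II
    # (Subgroup A = composite "1" or "2", B = "3", C = "4" or "5").
    def supervisory_subgroup(composite_rating: int) -> str:
        if composite_rating in (1, 2):
            return "A"
        if composite_rating == 3:
            return "B"
        if composite_rating in (4, 5):
            return "C"
        raise ValueError("composite CAMELS ratings range from 1 to 5")

    # Example: the primary federal regulator assigns a "2" and the FDIC
    # overrides to a "3"; the subgroup changes from A to B.
    pfr_rating, fdic_rating = 2, 3
    print(supervisory_subgroup(pfr_rating), supervisory_subgroup(fdic_rating))  # prints: A B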

Whether there is a supervisory or an insurance rating difference, case managers are responsible, after consulting with FDIC regional management, for contacting their counterparts at the other regulatory agencies to discuss the difference as described in the FDIC Case Managers Procedures Manual, Section 2.4, Part IV, Procedures for Disagreement with Primary Regulator Rating. To facilitate the appropriate level of communication regarding the resolution of rating differences, this section outlines a hierarchy for consultation and decision with the other federal bank regulators. In brief, case managers, after consulting with regional management, contact their counterpart, and if an agreement cannot be reached, the discussion should be elevated to increasingly higher levels of management in both agencies until the matter can be resolved.
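
To illustrate the shape of that consultation hierarchy, a minimal sketch follows; the specific levels named are simplified placeholders rather than titles taken from the manual:

    # Illustrative escalation sketch; the level names are placeholders.
    ESCALATION_LEVELS = [
        "case manager and regulatory counterpart",
        "regional management of both agencies",
        "Washington, D.C. offices of both agencies",
    ]

    def resolve_rating_difference(agree_at_level) -> str:
        # Walk up the consultation hierarchy until the agencies agree.
        for level in ESCALATION_LEVELS:
            if agree_at_level(level):
                return "resolved at: " + level
        # If all attempts at reconciliation are exhausted, the difference becomes "final".
        return "final rating difference"

    # Example: agreement is reached at the regional level.
    print(resolve_rating_difference(lambda level: "regional" in level))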

In some cases, the discussions about rating differences may lead to the FDIC’s participation in the next examination or special review. As the insurer of bank and savings association deposits, the FDIC, under the Federal Deposit Insurance Act, has special examination authority for all insured depository institutions. Should the FDIC identify significant emerging risks or have serious concerns relative to any of these non-FDIC supervised depository institutions, the FDIC and the institution’s primary federal regulator work in conjunction to resolve them. These cooperative efforts may include the FDIC’s performing or participating in the safety and soundness examination of an institution with the concurrence of the institution’s primary federal regulator or the FDIC Board of Directors.

A rating difference is reported as a "preliminary" difference until such time as all attempts by the FDIC and the primary federal regulator to reconcile the rating difference have been exhausted. After that, it is considered a "final" rating difference. The FDIC and the primary federal regulator may agree to leave a rating difference in place until the next examination. Because final rating differences can impact the insurance rating, final rating differences are reviewed and approved by DOS officials in Washington, D.C. Rating differences identified by case managers under the RRPS are also initially discussed at the regional level. In addition, DOI officials discuss rating differences with regulatory counterparts in Washington, D.C. DOS prepares a periodic report to the Chairman describing the status of preliminary and final rating differences.

In the event the primary federal regulator does not agree with the FDIC’s rating change, the case manager must prepare a letter notifying the primary federal regulator of the rating difference and the basis for the FDIC’s position. In addition, the case manager must prepare a letter notifying the institution’s board of directors of the composite rating change and the reason for the change if the rating assigned by the FDIC affects the risk related premium assessments. The FDIC uses a risk-based premium system that assesses higher rates on those institutions that pose greater risks to the insurance funds. Thus, institutions with numerically higher (that is, weaker) composite CAMELS ratings would be assessed at higher rates than those with lower ratings.

RESULTS OF EVALUATION

We identified few rating differences between the FDIC and the primary federal regulators during our review period. Specifically, we identified 7 institutions for which there were final or insurance rating differences as of July 1, 2001, and 3 additional institutions with preliminary rating differences based on discussions held with DOS officials in San Francisco between August and October 2001. Moreover, case managers generally opined that rating differences were not that common. Consistent with the FDIC’s Procedures for Disagreement with Primary Regulator Rating, case managers told us that good communication and coordination with the primary federal regulator were the underlying keys to resolving rating differences and, more broadly, monitoring the condition of institutions not supervised by the FDIC.

The cases we reviewed indicated that the FDIC was working with the primary federal regulators to evaluate the issues underlying these rating differences and, more generally, the condition of the institutions. This was especially significant because in all of the cases with rating differences that we reviewed, the FDIC had assigned the institutions CAMELS ratings that indicated some degree of supervisory concern. Nevertheless, some case managers raised general concerns related to the FDIC’s special examination authority. For example, some case managers stated that cooperation could be improved among the regulators when the FDIC participates in examinations or requests additional information. These concerns are being addressed as part of our follow-up audit of the FDIC’s use of special examination authority and DOS’s efforts to monitor large bank insurance risks. We did not identify any specific issues related to the process for resolving rating differences and, thus, did not make any recommendations in this report.

NUMBER OF RATING DIFFERENCES

The numbers reported by DOS and DOI suggested that the FDIC did not routinely disagree with the rating assigned by the primary federal regulator. Specifically, using the reports we obtained from DOS and DOI, we reviewed seven cases where there were rating differences. We also reviewed three preliminary rating differences discussed by officials in the FDIC San Francisco region.

In June 2001, DOS reported five final rating differences and no preliminary rating differences. In July 2001, DOI reported six risk-related premium assessment differences, of which four were included on the list of DOS rating differences. Considering that the FDIC does not regulate 4,217 of the 9,796 (or 43 percent) insured institutions, the number of rating differences reported suggested that differences between the FDIC and the other federal regulators are not that common. Additionally, results from our discussions with case managers indicated that rating differences among the federal regulators were not that common. A DOI official in Washington, D.C. also stated that disagreements about insurance ratings are rare. Table 1 provides an overview of the cases we reviewed.

Table 1: Overview of Final and Insurance Rating Differences Reviewed

Case No. | FDIC Region | Primary Federal Regulator (PFR) | Source of Rating Difference (PFR vs FDIC) | FDIC Participation | Status
1 | Atlanta | OTS | Insurance (A vs. B) | Yes | Rating difference resolved.
2 | Chicago | OTS | Insurance and Supervisory (A vs. B) (2 vs. 3) | Yes | Resolution expected upon completion of ongoing examination. Report of examination expected 1st quarter 2002.
3 | Chicago | OCC | Insurance and Supervisory (A vs. B) (2 vs. 3) | Yes | Resolution expected upon completion of ongoing examination. Report of examination expected 1st quarter 2002.
4 | Dallas | OTS | Insurance and Supervisory (B vs. C) (3 vs. 4) | No | Unresolved. However, the institution is no longer engaged in banking function and rating difference is considered a moot issue because of DOS’s plans to terminate insurance.
5 | New York | OTS | Insurance and Supervisory (B vs. C) (3 vs. 4) | Yes | Rating difference resolved based on more recent examination.
6 | San Francisco | OTS | Supervisory (3 vs. 4) | Yes | Rating difference resolved based on more recent examination.
7 | San Francisco | OCC | Insurance (B vs. C) | Yes | Rating difference resolved through discussion with officials in Washington. New examination underway.

Source: OIG Analysis of information provided by DOS and DOI officials and documents as of November 2, 2001.

Note: For table 1, regarding the column header "Source of Rating Difference (PFR vs FDIC)," CAMELS ratings and insurance ratings are defined in Appendix II.

During our review, officials in San Francisco also told us about three other cases in that region where there were preliminary rating differences. All three cases were resolved at the regional level, but did involve senior regional management. Several case managers indicated that rating differences are typically resolved through discussions with their counterparts at the regional level. Table 2 provides an overview of the San Francisco cases.

Table 2: Overview of Preliminary Rating Differences Identified by San Francisco Officials

Case No. | FDIC Region | Primary Federal Regulator (PFR) | Source of Rating Difference (PFR vs FDIC) | FDIC Participation | Status
8 | San Francisco | OTS | Supervisory (2 vs. 3) | No | Rating difference resolved after OTS completed a visitation. OTS concurred with FDIC’s rating.
9 | San Francisco | OTS | Supervisory (2 vs. 3) | Yes | FDIC had participated in on-site examination. However, OTS and FDIC disagreed initially on composite rating. Rating difference resolved through discussion. OTS concurred with FDIC. Joint examination planned.
10 | San Francisco | OTS | Supervisory (4 vs. 5) | Yes | FDIC had participated in on-site examination. However, OTS and FDIC disagreed initially on composite rating. Rating difference resolved through discussion at regional level.

Source: OIG analysis of discussions with FDIC San Francisco officials.

Note: For table 2, regarding the column header "Source of Rating Difference (PFR vs FDIC)," CAMELS ratings and insurance ratings are defined in Appendix II.

PROCESS FOR RESOLVING RATING DIFFERENCES

As designed, the policy Procedures for Disagreement with Primary Regulator Rating makes the resolution of rating differences dependent on communication and effective working relationships between the FDIC and its regulatory counterparts. More broadly, the FDIC’s ability to monitor institutions it does not supervise is also dependent upon the relationships that case managers establish with their counterparts at the other federal banking regulatory agencies. The results of our review indicated that the process for resolving rating differences worked as intended in the cases we reviewed. This was particularly significant given that the CAMELS ratings assigned by the FDIC to these institutions indicated some degree of supervisory concern. Specifically, the FDIC assigned the institutions in our sample CAMELS ratings of "3", "4", and, in one case, "5". Nevertheless, some case managers raised general concerns about the communication flow with the other regulators and the FDIC’s special examination authority.

The process for resolving rating differences requires that case managers contact their counterparts to discuss the matter. In the cases we reviewed, case managers had communicated with their respective counterparts and did not express any concerns about the process for resolving rating differences. More specifically, as the previous tables illustrate:

  • Not only had case managers communicated with their counterparts, but the FDIC was participating with the other primary federal regulator in either on-site examinations or special reviews in all but two cases. In one of those two cases, the FDIC had determined it was not necessary to participate with the OTS because the institution posed no risk to the insurance funds. In the other case, the OTS had agreed to do a targeted visitation to address the FDIC’s concerns.

  • The FDIC has subsequently resolved four of the seven final and insurance rating differences with the other primary federal regulator. Of the three remaining cases, one case involved litigation and the institution was no longer involved in banking activity. The case manager told us this was not a typical example of a rating difference. In the remaining two cases, the FDIC is currently participating in ongoing examinations and anticipates that the agencies will agree on the next rating.
  • With respect to general relationships with their counterparts, the majority (21 of 26) of case managers characterized their relationship with their counterparts as good or very good. Nonetheless, nearly one-fifth of the case managers (5 of 26) stated that cooperation with the other regulators could be improved. For example, several case managers stated that the primary regulator could be more forthcoming with information, rather than waiting until the case manager specifically asks for the information. To illustrate this point, one case manager stated that the primary federal regulator had changed the rating based on its off-site monitoring efforts, but did not inform the case manager even though the FDIC had participated in the last on-site examination. The case manager became aware of the rating change during the RRPS process because the CAMELS rating in the FDIC’s database differed from the CAMELS rating transmitted by the primary federal regulator during the semiannual insurance assessment process. The case manager had to initiate discussions with the primary federal regulator to evaluate whether the FDIC agreed with the rating change.

    In addition, a few case managers discussed their concerns about the process for exercising the FDIC’s special examination authority. For example, one case manager stated that the FDIC examiners who were participating in an examination related to a case in our sample were not allowed by the primary federal regulator’s on-site examination team to ask direct questions to the institution’s management. The case manager viewed this situation as somewhat limiting to the FDIC. The views expressed to us were consistent with the results of previous audits which are discussed more fully in Appendix III. The Office of Audits recently completed a review related to the FDIC’s special examination authority.

    Suggestions and recommendations made in the previous reports were intended to enhance relationships with other regulators and extend DOS’s capability to monitor risks to the insurance funds. DOS has taken action to respond to our previous recommendations. In addition, during October and November 2001, the FDIC Chairman directed FDIC officials to work with the other federal regulators in an effort to develop an agreement that would improve the FDIC’s access to banks for purposes of performing special examinations and to provide DOS with more timely data on large banks. On January 29, 2002, the FDIC Board of Directors approved an interagency agreement that enhances the process for determining when the FDIC will use its authority to examine any insured institution. Given that our review found no indication of specific issues related to the process for resolving rating differences, we did not make additional recommendations in this report.

    CONCLUSION

    The number of rating differences reported during our review period between the FDIC and the other primary federal regulators did not suggest that this was a widespread concern. The FDIC’s policy promotes communication among the regulators as the key to resolving rating differences when they occur. The results of our review indicated that the FDIC was working with the primary federal regulators to resolve the underlying issues related to the rating differences. In many of the cases we reviewed, the FDIC was participating in the on-site reviews or examinations. Given that rating differences we reviewed occurred in institutions where the FDIC had assigned a composite CAMELS rating of "3", "4", or "5", it was significant to see that the regulators were working cooperatively to evaluate the merits of the underlying issues and minimize the risk to the deposit insurance funds.

    CORPORATION COMMENTS AND OIG EVALUATION

    We provided DOS with a draft report on January 17, 2002. The Director, DOS, provided a written response dated February 1, 2002. Although the report did not contain recommendations, in its response DOS stated that it agreed with our conclusion that rating differences between the FDIC and primary federal regulators are not that common, and that the process for resolving those rating differences centers on communication with the primary federal regulators. Further, DOS stated that its policy for resolving rating differences promotes debate and discussion between the FDIC and the primary federal regulator that enhances each agency’s understanding of the relevant issues and allows for more effective regulation of the financial institution.


    APPENDIX I

    OBJECTIVES, SCOPE, AND METHODOLOGY

    Our objectives were to identify the extent to which there are rating differences between the FDIC and other federal regulators and to evaluate the process for addressing those differences. Appendix III describes related reviews recently completed that more broadly addressed how DOS monitors risks at institutions not supervised by the FDIC.

    To identify those institutions where the FDIC’s supervisory and risk-related insurance assessment rating differed from the primary federal regulator, we relied on information provided to us by DOS and DOI. DOS officials in Washington, D.C. told us that DOS did not formally track the number of preliminary and final supervisory rating differences until December 2000, which limited our analysis of rating differences with each primary federal regulator over time to 1 year. Given that the number of supervisory and insurance rating differences ranged between 4 and 10 during this period, we decided that focusing on the most recent cases at the time of our request in August 2001 would provide us with a representative sample of differences.

    More specifically, our scope included reviewing the rating differences included in:

  • DOS’s Memorandum on Rating Differences with the Primary Federal Regulator from DOS’s Acting Director to the Chairman, dated June 27, 2001, which identified five institutions with final rating differences and no institutions with preliminary rating differences and
  • DOI’s spreadsheet Rating Differences for July 1, 2001 Assessment Period which identified six institutions with supervisory subgroup rating differences, four of which were included in DOS’s June 27, 2001 memorandum.

    To understand the nature of these cases, we reviewed memoranda and correspondence and discussed these cases with the respective case managers in Atlanta, Chicago, Dallas, New York, and San Francisco. Our review was limited to discussions with FDIC officials. We did not hold discussions or solicit the opinions of FRB, OCC, or OTS officials regarding any of the matters addressed in this report, nor did we collect or review documents from these organizations. The scope of our review did not include evaluating the merits of the rating differences. Officials in the FDIC’s San Francisco region identified three additional preliminary rating differences in that region that we discussed with the respective case managers.

    We also met with other selected case managers in Boston, Dallas, and San Francisco to more broadly discuss how case managers monitor risk in the non-FDIC supervised institutions, their relationships with primary federal regulators, and their general thoughts about the process for resolving rating differences. In total, we interviewed 26 case managers in 6 of the 8 DOS regions. Table 3 provides additional information about the case managers we interviewed. In addition, we reviewed relevant policies and procedures, including the FDIC Case Managers Procedures Manual and various DOS memoranda, and met with DOS and DOI officials in Washington, D.C.

    Table 3: Summary of Case Managers Interviewed

    Region | No. of Case Managers Interviewed | No. of Institutions in Workload | No. of Non-FDIC Supervised Institutions | Percent of Workload Non-FDIC Supervised
    Atlanta | 1 | 40 | 14 | 35%
    Boston | 5 | 140 | 35 | 25%
    Chicago | 1 | 48 | 20 | 42%
    Dallas | 8 | 498 | 263 | 53%
    New York | 1 | N/A | N/A | N/A
    San Francisco | 10 | 224 | 109 | 49%
    TOTALS | 26 | 950 | 441 | 46%
    All Case Managers | 178 | 9,796 | 4,217 | 43%

    Source: OIG analysis of Case Load Summary Report as of July 12, 2001.

    Note: For table 3, for the New York region, we interviewed a Senior Examination Specialist with knowledge of the case because the case manager was unavailable. However, we included this interview in the total number of case managers interviewed.

    The scope of our review included gaining an understanding of the case manager’s management control responsibilities with respect to reviewing the primary federal regulators’ reports of examination. We conducted our review from August to November 2001 in accordance with the President’s Council on Integrity and Efficiency’s Quality Standards for Inspections.


    APPENDIX II

    OVERVIEW OF CAMELS RATINGS AND SUPERVISORY SUBGROUP RATINGS

    CAMELS RATINGS

    The Uniform Financial Institutions Rating System (UFIRS) was adopted by the Federal Financial Institutions Examination Council (FFIEC) in 1979 and was updated in 1996. According to the DOS Manual of Examination Policies dated March 2000, over the years, the UFIRS has proven to be an effective internal supervisory tool for evaluating the soundness of financial institutions on a uniform basis and for identifying those institutions requiring special attention or concern. Under this system, the supervisory agencies endeavor to ensure that all financial institutions are evaluated in a comprehensive and uniform manner and that supervisory attention is appropriately focused on the financial institutions exhibiting financial and operational weaknesses or adverse trends.

    Under the UFIRS, each financial institution is assigned a composite rating based on an evaluation and rating of six essential components of an institution’s financial condition and operations. The composite rating is commonly referred to as the CAMELS rating. These component factors address the:

  • adequacy of capital (C),
  • quality of assets (A),
  • capability of management (M),
  • quality and level of earnings (E),
  • adequacy of liquidity (L), and
  • sensitivity to market risk (S).

    As a result of an onsite examination, composite and component ratings are assigned based on a 1 to 5 numerical scale. A "1" indicates the highest rating, strongest performance and risk management practices, and least degree of supervisory concern, while a "5" indicates the lowest rating, weakest performance, inadequate risk management practices, and thus, the highest degree of supervisory concern. The composite rating generally bears a close relationship to the component ratings assigned. However, the composite rating is not determined by computing an arithmetic average of the component ratings. Each component rating is based on a qualitative analysis of the factors comprising that component and its interrelationship with other components. When assigning a composite rating, some components may be given more weight than others depending on the situation at the institution. According to FFIEC guidance, composite ratings should be based on a careful evaluation of an institution’s managerial, operational, financial, and compliance performance. The following provides a brief summary of each composite rating:

    Composite 1 – Financial institutions in this group are sound in every respect and generally have components rated "1" or "2".
    Composite 2 – Financial institutions in this group are fundamentally sound.
    Composite 3 – Financial institutions in this group exhibit some degree of supervisory concern in one or more of the component areas.
    Composite 4 – Financial institutions in this group generally exhibit unsafe and unsound practices and conditions.
    Composite 5 – Financial institutions in this group exhibit extremely unsafe and unsound practices or conditions; exhibit a critically deficient performance; often contain inadequate risk management practices relative to the institution’s size, complexity, and risk profile; and are of the greatest supervisory concern.

    Risk Related Insurance Premium System Assessment Process

    The FDIC uses a risk-based premium system that assesses higher rates on those institutions that pose greater risks to the insurance funds. Under the Division of Insurance’s (DOI) Risk Related Premium System (RRPS), each insured institution is assigned to one of three capital groups and to one of three supervisory subgroups for purposes of assigning an insurance fund assessment risk classification. Supervisory subgroup assignments for the insurance funds are made in accordance with section 327.4(a)(2) of the FDIC’s Rules and Regulations. The three supervisory subgroups tie into the CAMELS examination rating system as follows:

    Subgroup A – Composite Rating "1" or "2",
    Subgroup B – Composite Rating "3", and
    Subgroup C – Composite Rating "4" or "5".

    A well capitalized institution in Subgroup A would have the lowest risk rating and pay no assessment. An undercapitalized institution in Subgroup C would have the highest risk and would pay the highest assessment.
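
    As a simplified illustration of how the two dimensions combine, the sketch below pairs a capital group with a supervisory subgroup and orders the combinations from lowest to highest risk; the capital group labels and the ranking logic are simplified assumptions, and the actual RRPS assessment schedule is not reproduced here:

        # Illustrative only; the real assessment rates are set by the FDIC.
        CAPITAL_GROUPS = ["well capitalized", "adequately capitalized", "undercapitalized"]
        SUBGROUPS = ["A", "B", "C"]

        def risk_classification(capital_group, subgroup):
            # Lower tuples indicate lower risk to the insurance funds.
            return (CAPITAL_GROUPS.index(capital_group), SUBGROUPS.index(subgroup))

        cells = [(cg, sg) for cg in CAPITAL_GROUPS for sg in SUBGROUPS]
        cells.sort(key=lambda cell: risk_classification(*cell))
        print(cells[0])   # ('well capitalized', 'A') -- lowest risk, pays no assessment
        print(cells[-1])  # ('undercapitalized', 'C') -- highest risk, highest assessment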

    All institutions are notified of their assessment risk classification in June and December of each year, and each has the right to request a review of its assessment risk classification. The review process is primarily designed for instances in which an institution’s supervisory subgroup assignment differs from what would be likely considering the final composite rating most recently assigned, based on a safety and soundness review, and communicated to the institution in writing prior to the supervisory subgroup cut-off date.

    Semiannually DOI provides case managers with the RRPS Reconciliation List. An institution will appear on the reconciliation list if the rating provided via tape by the other primary federal regulator does not agree with the current examination information stored on the FDIC database. The case manager must evaluate whether he/she agrees with the assigned supervisory subgroup.
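
    A minimal sketch of that reconciliation check follows; the institution names, ratings, and data structures are hypothetical, since this report does not describe the underlying systems:

        # Illustrative sketch only; institution names and ratings are hypothetical.
        fdic_database = {"Bank X": 2, "Bank Y": 3}    # current examination ratings on the FDIC database
        pfr_transmitted = {"Bank X": 2, "Bank Y": 2}  # ratings transmitted by the primary federal regulator

        # An institution appears on the reconciliation list when the transmitted
        # rating does not agree with the information on the FDIC database.
        reconciliation_list = [
            name for name, rating in pfr_transmitted.items()
            if fdic_database.get(name) != rating
        ]
        print(reconciliation_list)  # ['Bank Y'] -- the case manager evaluates this difference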

    In addition to the Reconciliation List, DOI provides supplementary review lists to the regions. Supplementary screens based on Call Report data are developed by DOI on an ongoing basis to identify "outlier" institutions in the best rated supervisory subgroup ("1" and "2" rated institutions) with atypically high risk profiles. Reviews of institutions on the supplementary review list focus on whether there are unresolved supervisory concerns regarding risk management practices. After the case managers assign the supervisory subgroup rating, the final rating differences are discussed with the other primary federal regulators at the Washington, D.C. level. Supervisory overrides occur when an institution’s final supervisory subgroup assignment, as determined by the FDIC, differs from the supervisory subgroup assignment indicated by the primary federal regulator on or before the supervisory subgroup cut-off date. The FDIC makes the final determination of the supervisory subgroup rating.
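
    The following sketch conveys the general idea of such an outlier screen; the risk measure and cutoff are hypothetical, since the actual Call Report screens are not described in this report:

        # Illustrative sketch only; the risk measure and cutoff are hypothetical
        # stand-ins for the Call Report screens DOI develops.
        institutions = [
            {"name": "Bank A", "composite": 1, "risk_measure": 0.4},
            {"name": "Bank B", "composite": 2, "risk_measure": 2.7},
            {"name": "Bank C", "composite": 4, "risk_measure": 3.1},
        ]
        HIGH_RISK_CUTOFF = 2.0

        # Only the best rated institutions ("1" and "2" composites) are screened
        # for atypically high risk profiles.
        supplementary_review_list = [
            inst["name"] for inst in institutions
            if inst["composite"] in (1, 2) and inst["risk_measure"] > HIGH_RISK_CUTOFF
        ]
        print(supplementary_review_list)  # ['Bank B']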


    APPENDIX III

    DESCRIPTION OF PRIOR AUDIT WORK

    Results of OIG Survey of DOS’s Process for Monitoring the Insurance Risks Associated with Small and Medium Sized Banks for Which the FDIC is Not the Primary Federal Regulator (Audit Memorandum No. 00-002 dated November 30, 2000)

    Objective

    Determine whether (1) the information available to case managers was sufficient for them to effectively monitor the safety and soundness of banks that are supervised by OCC, FRB, OTS and (2) the other regulators are providing DOS with copies of their examination reports in a timely manner.

    Suggestions

    The Director, DOS, should

    1. Request the other primary federal regulators to provide all DOS regional offices with information detailing planned dates for starting safety and soundness examinations.
    2. Through the use of existing information databases, develop a report that will provide feedback on the time between exams conducted by the other primary federal regulators and alert case managers when statutory timeframes are exceeded.
    3. Work with OTS officials to address the concern that OTS’s approach for measuring exam frequency may result in extended examination cycles for problem institutions.
    4. Continue working through FFIEC with FRB and OTS to improve access to their automated information systems, providing FDIC’s case managers with sufficient and timely information on the supervisory status of insured state member banks and insured thrifts.
    5. Ensure that all case managers are aware that, in areas of the country where OCC is transitioning between systems, they will continue to have access to financial data throughout the transition period.

    Results of OIG Review of the Backup Examination Process and DOS’s Efforts to Monitor Megabank Insurance Risks (Audit Memorandum dated October 19, 1999)

    Objective

    Focused on the backup examination process for insured thrifts, national banks and state member banks, and DOS’s efforts to monitor the risks associated with the nation’s largest and most complex financial institutions, often referred to as the "megabanks."

    Suggestions

    The Chairman, FDIC, should

    1. Request delegated authority from the FDIC Board of Directors to the Chairman to initiate special examinations of insured institutions that pose significant safety and soundness concerns, without having to secure the concurrence of the primary federal regulator or the approval of the Board; or, seek a legislative change to vest this authority in the Chairman.
    2. Identify the specific information that DOS needs to monitor the insurance risk presented by megabanks and other insured institutions.
    3. Work to develop agreements with the other bank regulatory agencies that allow for the provision of a consistent, minimum level of information/access for all FDIC case managers.
    4. Establish well-defined criteria for case managers to use in evaluating the insurance fund risks posed by the megabanks and other insured institutions, and clearly articulate DOS’s monitoring goals and objectives.

    Division of Supervision Case Manager Program – Views of Those Who Are Implementing It (EVAL Report No. 99-003 dated March 31, 1999)

    Objective

    To learn how the Case Manager Program was working. The ultimate objective was to identify issues that warranted further review or management’s attention.

    Recommendations

    The Director, DOS, should

    1. Study what constitutes a manageable workload for a case manager. Specifically, DOS should consider studying the impact of potential increases to workload and methods for mitigating the risk that case managers would be unable to fully carry out all their responsibilities should events occur that would cause those increases.
    2. Evaluate regional office best practices for managing the fluctuating applications workload.
    3. Study whether the effort to prepare the Quarterly Large Insured Depository Institutions Reports (LIDI reports), in their current form, was worth the value the reports provided, or whether actions could be taken to increase the value of the reports.

    APPENDIX IV

    CORPORATION COMMENTS

    FDIC Federal Deposit Insurance Corporation

    550 17th Street NW, Washington, DC 20429
    Division of Supervision

    February 1, 2002

    TO: Stephen M. Beard, Deputy Inspector General, Office of Inspector General

    FROM: Michael J. Zamorski, Director [Electronically produced version; original signed by Michael J. Zamorski], Division of Supervision

    SUBJECT: Draft Report Entitled Evaluation of Rating Differences Between the FDIC and Other Primary Federal Regulators

    The Division of Supervision (DOS) appreciates the opportunity to respond to this draft report. The report does not contain any recommendations, but its conclusions deserve brief comments. The objectives of this OIG evaluation were 1) to identify the extent to which there are rating differences between the FDIC and the primary federal regulators (either the Office of the Comptroller of the Currency (OCC), the Office of Thrift Supervision (OTS), or the Federal Reserve Board (FRB)), and 2) to evaluate the process for resolving those differences. You conclude that rating differences between the FDIC and primary federal regulators are not common, and that the process for resolving those differences centers on communication with the other primary federal regulator. We agree with these findings.

    The limited number of rating differences is at least partly attributable to the FDIC’s policies regarding the resolution of rating differences. We feel that resolving a rating difference before it becomes permanent allows for more effective regulation of the financial institution, and our policies promote meaningful debate and discussion with our OCC, OTS, and FRB counterparts at the Regional, and if necessary, Washington Office levels before a difference becomes final. This debate and discussion enhances each agency’s understanding of the relevant issues and allows each agency to present alternative viewpoints regarding component ratings, the composite rating, or overall institution performance. Even if the difference is not resolved, the communication serves to enhance regulatory awareness of the issues and increase regulatory effectiveness.

    Please contact Assistant Director Miller at 898-8523 or Manager Calvin Riddick at 898-6758 if you have any questions.
