Automated Collection System Customer Satisfaction Survey Results Should Be Qualified if Used for the GPRA

May 2000

Reference Number: 2000-10-078

 

This report has cleared the Treasury Inspector General for Tax Administration disclosure review process, and information determined to be restricted from public release has been redacted from this document.

May 17, 2000

 

MEMORANDUM FOR COMMISSIONER ROSSOTTI

 

FROM: Pamela J. Gardiner /s/ Pamela J. Gardiner

Deputy Inspector General for Audit

SUBJECT: Final Audit Report – Automated Collection System Customer Satisfaction Survey Results Should Be Qualified if Used for the GPRA

This report presents the results of our review of the Automated Collection System (ACS) Customer Satisfaction Survey Results as they relate to the Government Performance and Results Act of 1993 (GPRA). The overall objective of this review was to evaluate the reliability of the information used to measure customer satisfaction with ACS services. The ACS is a computerized inventory of customers who have received notices informing them they have not paid their taxes or filed a return. The notices provide the ACS telephone number should the taxpayer wish to respond to the delinquency by telephone.

In summary, we found that the ACS Customer Satisfaction Survey results are not statistically valid and should not be used to satisfy requirements of the GPRA unless they are appropriately qualified. We recommend that the Director, Strategic Planning and Budget consider establishing a process to oversee the ACS Customer Satisfaction Survey. This will help ensure compliance with survey selection criteria and procedures. We also recommend that the survey selection criteria be revised to ensure that all ACS customers are given an equal chance of being selected for the survey.

Management’s response was due on May 8, 2000. As of May 11, 2000, management had not responded to this draft report.

Copies of this report are also being sent to the Internal Revenue Service managers who are affected by the report recommendations. Please contact me at (202) 622-6510 if you have questions, or your staff may call Maurice S. Moody, Associate Inspector General for Audit (Headquarters Operations and Exempt Organizations Programs), at (202) 622-8500.

Table of Contents

Executive Summary

Objective and Scope

Background

Results

Automated Collection System Customer Satisfaction Survey Results Should Be Qualified if Used to Report Performance Measures for the Government Performance and Results Act of 1993

Conclusion

Appendix I – Detailed Objective, Scope, and Methodology

Appendix II – Major Contributors to This Report

Appendix III – Report Distribution List

Appendix IV – Outcome Measures

Appendix V – Table: Automated Collection System Call Site Hours of Operation and Survey Selection Times

Executive Summary

This audit was performed as part of the Treasury Inspector General for Tax Administration’s overall strategy to assess the reliability of the Internal Revenue Service’s (IRS) customer (taxpayer) satisfaction performance measures as they relate to the Government Performance and Results Act of 1993 (GPRA). The GPRA is intended to improve agency performance and provide objective information to congressional and executive branch decision-makers to assist them in appropriating and allocating federal funds. The law requires executive agencies to prepare multi-year strategic plans, annual performance plans, and performance reports on prior year accomplishments.

The IRS established balanced performance measures to support achievement of its strategic goals: provide quality service to each taxpayer, serve all taxpayers, and be productive through a quality work environment. Achievement of these goals is measured through customer satisfaction, employee satisfaction, and business results. Customer satisfaction surveys are used to develop performance measures that balance customer needs with IRS operational needs. Taxpayers who receive specific kinds of services from the IRS might be asked to rate the service. A vendor summarizes these survey results, which the IRS uses to evaluate overall satisfaction with its services.

The IRS originally designed the surveys as part of its balanced performance management system and did not anticipate using the surveys for GPRA reporting purposes. Subsequently, the IRS used the survey results to fulfill the GPRA reporting requirements. The intent of the GPRA is that the Congress will use the performance measurement results to help evaluate IRS budget appropriations. Therefore, it is essential that the IRS accurately measure its success in meeting the performance goals.

This review evaluated the accuracy, validity, and reliability of the information used by the IRS to measure customer satisfaction with the Automated Collection System (ACS). The ACS is a computerized inventory of customers who owe taxes or have not filed returns, and represents a significant part of IRS activities for providing customer service. Customer representatives make telephone calls to customers on the ACS inventory, as well as answer telephone calls from these customers. Each day, Customer Service employees ask a number of ACS customers to take the survey, which measures customer satisfaction with the service provided by ACS employees.

Results

The present results of the ACS Customer Satisfaction Survey should not be used to satisfy all requirements of the GPRA, unless appropriately qualified. The original purpose of the ACS Customer Satisfaction Survey was not for GPRA reporting, but to measure the overall trend of taxpayer satisfaction with ACS service. While the survey may be an effective marketing tool to gauge taxpayers’ satisfaction with the services provided by the ACS offices, a higher standard is required when these results are used for GPRA reporting purposes. Internal controls should be in place to ensure the survey results are reliable and that the data used for GPRA reporting purposes has been verified and validated.

IRS executives have not established a management process to ensure the survey is conducted appropriately to measure the level of satisfaction all customers receive from interactions with ACS employees. There is little accountability over the survey, and few management controls are in place to ensure the survey results are reliable and can be verified and validated. Without reliable information, the IRS cannot provide a basis for comparing program results with the established performance goals and will not achieve the benefits of the GPRA intended by the Congress.

We conducted testing in the National Office and three ACS call sites. We discussed GPRA measurements and requirements with National Office managers. We also interviewed Customer Service managers and employees, reviewed telephone system documentation to determine how ACS personnel select customers for the survey, and observed Customer Service employee selection and monitoring of 29 telephone calls for the survey.

In addition to our testing, we consulted with an expert statistician to obtain an independent opinion regarding the results of the survey. We asked the statistician to focus on the procedures and practices used to select survey participants and to determine whether they produced a random sample suitable for projecting an overall customer satisfaction rate for GPRA purposes.

Automated Collection System Customer Satisfaction Survey Results Should Be Qualified if Used to Report Performance Measures for the Government Performance and Results Act of 1993

The current management control process is not adequate to ensure the ACS Customer Satisfaction Survey is administered properly. Specifically, no office is clearly accountable for the reliability of the survey results, some groups of ACS customers are excluded from the survey, and the sample selection methods do not give every customer an equal chance of being selected.

We believe that without reliable survey results, the IRS cannot meet the GPRA requirements of establishing measurable performance goals and reporting of accurate results.

Summary of Recommendations

Additional actions are needed to ensure that the IRS accurately measures the level of satisfaction customers receive from their interactions with the ACS. We recommend that the Director, Strategic Planning and Budget, consider establishing a process to oversee the ACS Customer Satisfaction Survey. This will help ensure compliance with survey selection criteria and procedures. We also recommend that the survey selection criteria be revised to ensure that all ACS customers are given an equal chance of being selected for the survey.

Management’s Response: Management’s response was due on May 8, 2000. As of May 11, 2000, management had not responded to this draft report.

Objective and Scope

This audit is part of the Treasury Inspector General for Tax Administration’s (TIGTA) overall strategy to assess the reliability of the Internal Revenue Service’s (IRS) customer satisfaction performance measures as they relate to the Government Performance and Results Act of 1993 (GPRA). The IRS is implementing a balanced performance measurement system to balance customer (taxpayer) satisfaction, employee satisfaction, and business results. These quantitative measures will support and reinforce the IRS’ achievement of its overall strategic goals. The TIGTA is conducting several reviews to address separate elements of the customer satisfaction measurement system.

The overall objective of this audit was to evaluate the accuracy, validity, and reliability of the information used to measure customer satisfaction with the Automated Collection System (ACS). We performed this audit from October 1999 to February 2000 in accordance with Government Auditing Standards.

We performed testing in the National Office and three ACS call sites: Buffalo, New York; Jacksonville, Florida; and Philadelphia, Pennsylvania. We judgmentally selected these call sites after considering the historical volumes of ACS incoming telephone calls and current IRS pilot programs.

During our audit, we discussed GPRA measurements and requirements with National Office managers, interviewed Customer Service managers and employees, reviewed telephone system documentation to determine how ACS personnel select customers for the survey, and observed Customer Service employees select and monitor 29 telephone calls for the survey.

Our scope of work for the current audit was limited to evaluating the process the IRS uses to gather the information for measuring customer satisfaction with ACS services. Details of our audit objective, scope, and methodology are presented in Appendix I. Major contributors to this report are listed in Appendix II.

Background

The GPRA requires federal agencies to establish standards for measuring their performance and effectiveness. Executive agencies are required to prepare multi-year strategic plans, annual performance plans, and performance reports on prior year accomplishments.

The overall goal of the GPRA is to improve agency performance and to provide objective information to congressional and executive branch decision-makers to assist them in appropriating and allocating federal funds. Therefore, it is essential that the data used for the performance measures be reliable and that the results be valid and verifiable, so that proper conclusions are drawn by both the Congress and the IRS.

The IRS prepared a multi-year strategic plan and annually prepares a performance plan. It also established three strategic goals: provide quality service to each taxpayer, serve all taxpayers, and be productive through a quality work environment. Providing quality service to each taxpayer is a key component of customer satisfaction.

The IRS measures its success in achieving this goal through customer satisfaction surveys of its programs. The IRS contracted with a vendor to conduct 11 customer satisfaction surveys. The vendor designed and prepared the surveys with IRS executives and staff. Customers who receive specific kinds of services might be asked to complete a survey to rate the service they received. The vendor summarizes the responses to measure the overall trend in satisfaction with IRS service.

A District Office of Research and Analysis (DORA) group conducted the only IRS verification of the survey reports. The DORA reported it successfully replicated most of the vendor’s results and recommended no further validations be done unless the vendor changed its methodology. The DORA focused on the methods used by the vendor to analyze the data provided by the IRS, not on the process used to obtain the data.

One of the 11 programs using surveys to measure customer satisfaction is the ACS Program. When a customer telephones the ACS Program, the call is routed to the proper call site and group, and then to the first available customer representative. Customer representatives research customer accounts, attempt to resolve their issues, and update the ACS with the information from the customer contact. ACS representatives also make outgoing telephone calls to customers.

The ACS is a significant IRS activity for providing customer service. Between April 1998 and March 1999, the ACS Program received 3.8 million telephone calls. During that time, 18,809 customers were asked to participate in the survey, and 13,082 of them (70 percent) completed it.

In addition, the IRS is developing a new planning process that will provide support for its efforts to comply with the GPRA. Until this process is completed, the Commissioner designated the Director, Strategic Planning and Budget, as responsible for overseeing and coordinating the implementation of all the GPRA-related activity for the IRS. However, other offices are responsible for actually conducting the ACS Customer Satisfaction Survey.

Results

The IRS should clearly disclose the limitations of the survey’s design and the reliability of its data if the results are used to report the level of satisfaction customers have with ACS services. In the three call sites visited, surveyors and customer representatives took steps to administer specific aspects of the survey properly.

However, the results of the ACS Customer Satisfaction Survey should not be used to satisfy all requirements of the GPRA unless they are appropriately qualified.

The original purpose of the ACS Customer Satisfaction Survey was not for GPRA reporting, but to measure the overall trend of taxpayer satisfaction with ACS service. However, the survey results are being reported to the Congress for GPRA purposes, requiring that the data obtained from the surveys be reliable, verified, and validated.

IRS executives have not established an adequate management process to ensure the survey is conducted appropriately to measure the level of satisfaction all customers receive from interactions with the ACS or its employees. The expert statistician we consulted believes the survey design was reasonable, but that the implementation of the survey subjected the results to potential bias. Specifically, the survey is not controlled to ensure consistency in sample selection, it excludes specific customers, and the sampling techniques are a source of potential bias. The statistician went on to say the survey should not be regarded as a valid sample of all customers interacting with ACS employees.

As a result, we believe the IRS cannot validate or verify the sample selection methodology or survey results to ensure they are reliable. Without reliable information, the IRS cannot provide a basis for comparing program results with the established performance goals as intended by the Congress when it established the GPRA.

Automated Collection System Customer Satisfaction Survey Results Should Be Qualified if Used to Report Performance Measures for the Government Performance and Results Act of 1993

The current management control process is not adequate to ensure the ACS Customer Satisfaction Survey is administered properly. Without a reliable survey process and system of internal controls, there is a significant risk that the survey results do not accurately and reliably represent all customers who interact with the ACS and that some customers have not had the opportunity to participate in the survey.

Without improvements in the above conditions, the ACS Customer Satisfaction Survey results should be qualified if used to meet the requirements of the GPRA. We believe the IRS cannot validate or verify the sample selection methodology or survey results to ensure they are reliable for all customer interactions. Without reliable information, the IRS cannot provide a basis for comparing program results with the established performance goals and will not fully achieve the benefits of the GPRA intended by the Congress.

More accountability is needed for the ACS Customer Satisfaction Survey results

Three IRS offices currently provide oversight for the ACS Customer Satisfaction Survey.

Guidance and instructions were provided for ACS employees to conduct the survey. However, none of the three offices appears to be accountable for the reliability of the survey results. There were no on-site reviews to ensure that surveyors were properly selecting customers to participate in the survey or that customer representatives and surveyors were properly administering the survey.

OMB Circular No. A-123, Management Accountability and Control, states that management accountability is the expectation that managers are responsible for program performance. As managers develop and execute strategies for implementing agency programs, they should design management structures to help ensure accountability for results.

More management controls are needed to ensure the integrity and reliability of the survey results

The IRS reported to the Congress in its Fiscal Year (FY) 2000 Congressional Justification (submitted in FY 1999) that it would determine the level of satisfaction ACS customers receive from interactions with ACS services. However, close to half of the customers interacting with the ACS did not have the opportunity to participate in the ACS Customer Satisfaction Survey. We believe the reliability of the results decreased when the IRS introduced potential bias into the survey by excluding specific groups of customers.

Including outgoing ACS telephone calls will increase the reliability of the results.

Only incoming telephone calls from customers to the ACS Program are included in the survey. However, approximately 46 percent of customer interactions with ACS employees are made when ACS customer representatives make outgoing telephone calls to customers. We reviewed ACS management reports for 15 of the 18 ACS call sites for the quarter January through March 1999. Of the 957,521 telephone calls reported by the ACS during that time period, 440,835 (approximately 46 percent) were outgoing telephone calls that were not included in the survey process.
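As a simple arithmetic check, the approximately 46 percent figure follows directly from the two call counts cited above. The short illustrative snippet below uses only those reported figures:

```python
# Outgoing-call share of ACS contacts, using the figures reported above
# for 15 call sites, January through March 1999.
total_calls = 957_521
outgoing_calls = 440_835
print(f"{outgoing_calls / total_calls:.1%}")  # prints 46.0%
```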

When developing the ACS Customer Satisfaction Survey, the IRS decided to follow the methodology of another program (the Toll-Free Customer Satisfaction Survey) because it uses the same telephone system as the ACS. However, the Toll-Free program receives only incoming telephone calls.

Customers who are telephoned by an ACS customer representative might have a different view of the ACS’ customer service than those who initiate the contact themselves. The IRS is considering adding these customers to the survey process.

Including customers who telephone the ACS after the surveyors’ regular work hours will increase the reliability of the results.

Only 4 of 17 call sites survey after 5:00 p.m., even though all ACS call sites operate until 8:00 p.m., Monday through Friday. In 13 of the 17 call sites, surveyors were not normally scheduled to work past 5:00 p.m., with most surveyors scheduled to finish their day between 3:00 p.m. and 4:30 p.m. (See Appendix V for each call site’s actual hours of operation and survey schedule.)

The IRS tracks the number of telephone calls received each hour and has determined that the number received in the evening is not significant. However, excluding evening callers from the survey could distort the sample results because the types of customers and issues are likely to differ between daytime and evening.

Including non-English speaking customers will increase the reliability of the results.

IRS management went to great lengths to ensure non-English speaking customers could participate in the survey. For example, the survey was recorded in Spanish, and guidelines were established for Spanish-speaking customers.

However, IRS surveyors did not always speak a language other than English. Therefore, surveyors could not monitor telephone calls to determine issues, resolutions, etc., or communicate with non-English speaking customers to ask them to participate in the survey. Because of this, surveyors did not select telephone calls from non-English speaking customers.

At all three sites visited, surveyors selected only English-speaking customers to participate in the survey. We could not determine how many non-English speaking callers did not have the opportunity to participate. However, the IRS believes non-English speaking customers make up a small percentage of all ACS customers.

More controls are needed to provide all ACS customers with the opportunity to be included in the survey

The basic criterion in selecting a random sample is that every item (e.g., every ACS customer) must be given an equal opportunity of being selected. Otherwise, bias is introduced, and there will be no way of assessing how accurately the sample reflects the characteristics of the population. We reviewed the sampling techniques at the three selected call sites. Our results follow.

Instructions in the Internal Revenue Manual (IRM) do not produce a random sample.

Customer representatives are assigned unique identification (ID) numbers in a non-sequential manner. Under the selection method illustrated in the IRM, customer representatives whose ID numbers are widely separated from the adjacent ID numbers are more likely to be included in the sample. For the selection of customer representatives to be a completely random process, every customer representative must have an equal chance of being selected.

Each working day, five customer representatives are randomly selected at three random times throughout the day. The customers with whom the selected representatives are speaking at the time of the selections are asked to participate in the survey.

To identify the first customer representative to monitor for each assigned time, surveyors should select a random number within the range of all customer representative identification numbers, as well as a direction indicator of either higher (+) or lower (-). For example, assume the random number selected is 3335 with a direction indicator of higher (+). According to the IRM, the surveyor should identify the first eligible customer representative whose ID number is 3335 or higher and monitor that customer representative’s call.

Using the list below as an example and assuming all customer representatives are answering telephone calls at the assigned time, the surveyor would select the customer representative whose ID number is 3339.

Customer Representative Identification Numbers:
3329, 3331, 3332, 3334, 3339, 3344, 3346, 3349, 3350, 3351, 3352, 3353
This customer representative (ID number 3339) has at least nine chances of being selected as the first customer representative at any assigned time, no matter who is answering telephone calls: any random number from 3335 through 3343, paired with the appropriate direction indicator, leads to this customer representative.

In contrast, the customer representative whose ID number is 3351 has only one chance of being selected when the customer representatives whose ID numbers are 3350 and 3352 are also available to answer telephone calls.
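To make the unevenness concrete, the following is a minimal simulation sketch of the selection rule described above, applied to the example ID list. Details the report does not specify, such as exactly how the random number and the direction indicator are drawn, and the assumption that every listed representative is on a call at the assigned time, are assumptions made only for this illustration; this is not the IRM procedure itself.

```python
import random
from collections import Counter

# Customer representative IDs from the example above (non-sequential).
ids = [3329, 3331, 3332, 3334, 3339, 3344, 3346, 3349, 3350, 3351, 3352, 3353]

def select_rep(rng):
    """Apply the selection rule described above: draw a random number within
    the range of ID numbers plus a direction indicator, then take the first
    representative at or beyond that number in that direction."""
    n = rng.randint(min(ids), max(ids))   # random number within the ID range (assumption)
    if rng.random() < 0.5:                # direction indicator: higher (+)
        return min(i for i in ids if i >= n)
    else:                                 # direction indicator: lower (-)
        return max(i for i in ids if i <= n)

rng = random.Random(1)
trials = 100_000
counts = Counter(select_rep(rng) for _ in range(trials))
for rep in ids:
    print(f"{rep}: {counts[rep] / trials:6.1%}  (uniform would be {1 / len(ids):.1%})")
# Under these assumptions, representative 3339 is selected about 20% of the
# time, versus the roughly 1-in-12 (about 8%) share an equal-chance draw
# would give, while representative 3351 is selected only about 4% of the time.
```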

The surveyors do not always follow sample selection procedures, and selection methods vary between call sites.

Surveyors do not always understand the IRM procedures for sample selection, which results in improper use of the random number tables to select customer representatives. We identified examples of such improper selections in one or more of the three call sites we visited.

Use of sampling methods that are not random in nature increases the risk that the results will not be representative of all ACS customers. Weighting of sample results is one technique that can be used to compensate for some sampling deficiencies. However, we did not identify this or any other technique being used to correct for the deficiencies we observed.
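As a purely hypothetical sketch of what weighting sample results could look like, the example below re-weights each surveyed stratum to its share of the overall call population. The stratum shares and satisfaction scores are invented for illustration, since the report provides no stratum-level data. The sketch also shows the technique’s limit: groups that were never surveyed, such as outgoing calls, cannot be recovered by weighting.

```python
# Hypothetical post-stratification weighting sketch; all numbers are invented.
# stratum -> share of the call population, and mean satisfaction among surveyed calls
population_share = {"daytime calls": 0.85, "evening calls": 0.15}
mean_score       = {"daytime calls": 6.2,  "evening calls": 5.4}

# Suppose 98% of completed surveys came from daytime calls (oversampled).
sample_share = {"daytime calls": 0.98, "evening calls": 0.02}

unweighted = sum(sample_share[s] * mean_score[s] for s in mean_score)
weighted   = sum(population_share[s] * mean_score[s] for s in mean_score)

print(f"unweighted estimate: {unweighted:.2f}")  # ~6.18, tilted toward daytime callers
print(f"weighted estimate:   {weighted:.2f}")    # ~6.08, matches the population mix
# Weighting only adjusts strata that were sampled at all; customers excluded
# from the survey entirely (e.g., outgoing calls) cannot be weighted back in.
```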

Recommendations

We recommend that the Director, Strategic Planning and Budget, improve the process for overseeing the ACS Customer Satisfaction Survey to ensure the survey is properly administered and that results are accurate, valid, and reliable. This will help ensure the IRS meets GPRA requirements for measuring customer satisfaction with ACS services.

The Director should:

  1. Establish an oversight process to ensure IRS surveyors and customer representatives comply with survey selection criteria and procedures.
  2. Revise survey selection criteria to ensure all ACS customers are given an equal opportunity of being selected for the survey. For example, the sample should include outgoing calls, non-English speaking customers, and calls from all hours of operation.

Management’s Response: Management’s response was due on May 8, 2000. As of May 11, 2000, management had not responded to this draft report.

Conclusion

Presently, the ACS Customer Satisfaction Survey results are not valid for establishing a statistical basis for comparing program results with established performance goals as required by the GPRA. While the ACS Customer Satisfaction Survey may be an effective marketing tool to gauge taxpayers’ satisfaction with the services provided by the ACS offices, the survey results are not statistically valid and should be appropriately qualified if they are used to satisfy the GPRA requirements.

Appendix I

Detailed Objective, Scope, and Methodology

The overall objective of this review was to evaluate the accuracy, validity, and reliability of the information used to measure customer satisfaction with the Automated Collection System (ACS). We performed the following work:

  1. Conducted meetings with appropriate Internal Revenue Service (IRS) and ACS management officials and:
     A. Determined what actions were taken to implement the Government Performance and Results Act of 1993 (GPRA) laws and regulations for the IRS and the ACS.
     B. Identified potential measures to be used.
     C. Identified improvements or decisions made based on vendor results.
  2. Reviewed the process used to perform the survey in the ACS to determine if results are accurate, valid, and reliable.
     A. Identified the primary purpose and objectives of the survey to determine if the results will meet the needs of reporting GPRA performance measures to the Congress. This included reviewing quarterly IRS Customer Satisfaction Survey reports for the ACS, as well as interviewing appropriate IRS executives and managers.
     B. Reviewed the vendor contract and interviewed appropriate IRS executives to determine who established the sample sizes, confidence levels, error rates, precision levels, etc.; how they were set; and whether the sampling components ensure valid, reliable results. In addition, consulted with a statistician to determine if the sampling components, data collection instruments, and vendor results provided valid and reliable results.
     C. Determined why the IRS selected only incoming ACS telephone calls to survey (and not outgoing calls) and if the results will provide an accurate, valid, and reliable satisfaction rate for all ACS customers. In addition, determined the feasibility of using the ACS telephone system to monitor outgoing calls as part of the survey. Compared guidelines/procedures for the sample selection and results documentation developed by the vendor and the IRS to determine if there were inconsistencies.
     D. Determined what population and sample data/information are captured and forwarded to the vendor, and identified the person responsible for documenting and forwarding the information (e.g., the daily production/assignment sheets used to document the call attempt outcomes and the incoming telephone call population volumes).
     E. Determined if site operational reviews or visitations are conducted to ensure the ACS sites are following procedures.
     F. Reviewed the Los Angeles District Office Research and Analysis (LA DORA) validation process of the vendor results to determine if it validated information provided to the vendor.
     G. Determined who sends ACS telephone call volume data to the vendor and how often the data are sent.
  3. Determined how taxpayers are selected to be included in the ACS surveys.
     A. Interviewed appropriate site executives to determine if the IRS/vendor performs any site operational reviews or visitations to ensure the ACS sites are following procedures.
     B. Determined how the telephone system is used to select incoming telephone calls and how incoming and outgoing call volumes are captured. Also, determined if the telephone call volumes can be validated and if the IRS does so.
     C. Performed a walk-through of the survey process in the selected sites to determine if the process used for randomly selecting incoming telephone calls complies with vendor criteria.
     D. Determined if the telephone calls selected meet the vendor’s selection criteria and, if not, what method is used to randomly select calls.
     E. Determined whether copies of the Daily Production/Assignment Sheets are maintained at the site and by whom, and whether the documents are reviewed to identify potential control breakdowns and to verify that the correct number of telephone calls was monitored.
  4. At each field office visited, evaluated and tested controls over the sample selection process to ensure accurate, valid, and reliable data/information was provided to the vendor.
     A. Obtained a listing of all ACS customer representative identification numbers for the site.
     B. Interviewed ACS monitors to identify procedures they use to select telephone calls.
     C. Observed how surveyors identified the first telephone call to be monitored and how subsequent calls are selected. Determined whether the process used complies with the vendor’s criteria and whether the selection process is random.
     D. For each call attempt that was monitored, documented the details of the telephone call.
     E. Determined whether all incoming telephone calls are randomly selected using sample selection documents and whether specific types of calls are excluded from the sample (for example, calls received at certain times of the day or calls from Spanish-speaking customers).
     F. Obtained the Daily Production/Assignment Sheets for the period October 1998 through March 1999 and analyzed the log sheets for indications of potential control breakdowns.
     G. Identified procedures for reporting customer representatives who do not follow procedures.
     H. Evaluated the adequacy of telephone call volume information (populations) sent to the vendor by the IRS.
  5. Determined whether survey results will be transportable to the new organizational structure.
     A. Interviewed Customer Service executives to determine the future mission of the ACS (Collection or Customer Service) and organizational plans for how the inventory will be assigned to the ACS call sites.
     B. Determined how the ACS call sites will be assigned to the new organizational structure of the Office of Customer Service Operations Center.
     C. Determined which sites will be assigned to the Wage and Investment Division, the Small Business Division, the Middle and Large Business Division, and the Exempt Organization Division.
     D. Determined the status of the development of a Business Operating Division code that will be assigned to Masterfile accounts.
  6. Determined how many potential taxpayers will have a reduced burden as a result of improved customer service.

Appendix II

Major Contributors to This Report

Maurice S. Moody, Associate Inspector General for Audit (Headquarters Operations and Exempt Organizations Programs)

Stanley C. Rinehart, Director

Augusta R. Cook, Audit Manager

Gerald T. Hawkins, Audit Manager

Kenneth L. Carlson, Jr., Senior Auditor

Lynn Faulkner, Senior Auditor

Catherine Cloudt, Auditor

David Lowe, Auditor

Lynn Ross, Auditor

Appendix III

Report Distribution List

Deputy Commissioner Operations C:DO

Chief Management and Finance M

Chief Operations Officer OP

Assistant Commissioner (Customer Service) OP:C

Director, Strategic Planning and Budget M:SPB

Director, Office of Program Evaluation and Risk Analysis M:O

Executive Officer for Service Center Operations OP:SC

Chief, Customer Service Field Operations OP:C:CS

Controller/National Director for Financial Management M:CFO:F

National Director for Legislative Affairs CL:LA

Office of Management Controls M:CFO:A:M

Office of the Chief Counsel CC

Office of the National Taxpayer Advocate C:TA

Organizational Performance Management Executive C:DO:OPME

Director, Atlanta Customer Service Center

Director, Brookhaven Customer Service Center

Director, Philadelphia Customer Service Center

Audit Liaisons:

Chief Management and Finance M

Assistant Commissioner (Customer Service) OP:C

Appendix IV

Outcome Measures

This appendix presents detailed information on the measurable impact that our recommended corrective actions will have on tax administration. These benefits will be incorporated into our Semiannual Report to the Congress.

Finding and recommendation:

The Automated Collection System (ACS) Customer Satisfaction Survey results should be qualified if used to report performance measures for the Government Performance and Results Act of 1993 (GPRA).

We recommend that the Director, Strategic Planning and Budget, improve the process for overseeing the ACS Customer Satisfaction Survey to ensure the survey is properly administered and that results are accurate, valid, and reliable. This will help ensure the Internal Revenue Service (IRS) meets GPRA requirements for measuring customer satisfaction with ACS services. The Director should:

  1. Establish an oversight process to ensure IRS surveyors and customer representatives comply with survey selection criteria and procedures.
  2. Revise survey selection criteria to ensure all ACS customers are given an equal opportunity of being selected for the survey. For example, the sample should include outgoing calls, non-English speaking customers, and calls from all hours of operation.

Type of Outcome Measure:

Protection of Assets/Reliability of Information (potential)

Value of the Benefit:

The IRS annually provides ACS service at 18 call sites. Approximately 3.8 million customers telephone these call sites in a given year. The IRS uses the ACS Customer Satisfaction Survey to measure taxpayers’ level of satisfaction with the services they receive from the ACS and its employees. An outside vendor summarizes the survey results, which the IRS uses to evaluate overall satisfaction with its services.

The IRS originally designed the surveys as part of its balanced performance management system and did not anticipate using the surveys for GPRA reporting purposes. Subsequently, the IRS used the survey results to fulfill the GPRA reporting requirements. The intent of the law is that the Congress will use the GPRA measurement results to help evaluate IRS budget appropriations. Therefore, it is essential that the IRS accurately measure its success in meeting the performance goals.

Methodology Used to Measure the Reported Benefit:

To measure taxpayer burden, we used the number of incoming telephone calls received by the ACS to quantify the organizational impact of reported issues and recommended corrective actions. We obtained reports showing incoming call volumes for the first year that the IRS conducted the ACS Customer Satisfaction Survey (April 1998 through March 1999).

The outcome measure, Protection of Assets/Reliability of Information (potential), cannot be quantified. We cannot determine the extent of the survey results’ unreliability. In addition, while this information is critical for GPRA purposes, it is not possible to quantify the outcome measure by a dollar amount or number.

Appendix V

Automated Collection System Call Site Hours of Operation and Survey Selection Times

ACS Call Site | ACS Hours of Operation (Before October 4, 1999*) | Survey Times
Atlanta, GA | 7:30 a.m. to 8:15 p.m. | 7:30 a.m. to 9:00 p.m.
Austin, TX | 8:00 a.m. to 8:00 p.m. | 7:30 a.m. to 4:30 p.m.
Buffalo, NY | 8:00 a.m. to 8:00 p.m. | 7:30 a.m. to 4:00 p.m.
Cleveland, OH | 8:00 a.m. to 8:00 p.m. | 8:00 a.m. to 4:00 p.m.
Dallas, TX | 8:00 a.m. to 8:00 p.m. | 7:30 a.m. to 4:30 p.m.
Denver, CO | 8:00 a.m. to 8:00 p.m. | 8:00 a.m. to 5:00 p.m.
Detroit, MI | 8:00 a.m. to 8:00 p.m. | 8:00 a.m. to 4:00 p.m.
Fresno, CA | 8:00 a.m. to 8:00 p.m. | 7:30 a.m. to 7:00 p.m.
Holtsville, NY (Brookhaven Service Center) | 8:00 a.m. to 8:00 p.m. | 8:00 a.m. to 8:00 p.m.
Indianapolis, IN | 7:00 a.m. to 8:00 p.m. | 8:00 a.m. to 4:00 p.m.
Jacksonville, FL | 7:30 a.m. to 9:00 p.m. | 7:30 a.m. to 3:00 p.m.
Kansas City, MO | 8:00 a.m. to 12:00 p.m. | 8:00 a.m. to 3:30 p.m.
Nashville, TN | 7:00 a.m. to 8:15 p.m. | 7:30 a.m. to 5:00 p.m.
Oakland, CA | 7:30 a.m. to 8:00 p.m. | 8:00 a.m. to 4:00 p.m.
Philadelphia, PA** | 6:00 a.m. to 10:00 p.m. | 7:30 a.m. to 8:00 p.m.
Seattle, WA | 7:30 a.m. to 8:00 p.m. | 7:30 a.m. to 4:00 p.m.
St. Louis, MO | 7:00 a.m. to 7:30 p.m. | 8:00 a.m. to 4:00 p.m.

* Beginning October 4, 1999, all ACS call site hours of operation are from 8:00 a.m. to 8:00 p.m. Before October 4, 1999, each call site determined its hours of operation.

** The IRS merged the former Pennsylvania call site into the Philadelphia call site on January 1, 1999.