
New and Modified Content on the 2008 ACS Questionnaire: Results of Testing Prior to Implementation

Limitations

Limitations unique to the 2006 ACS Content Test

The 2006 ACS Content Test maintained the same general data collection methods as the ACS, but deviated from them in some respects to meet research objectives and resource constraints. In general, these deviations did not affect the validity of the results, and in many cases they increased the effectiveness of the testing. They should nonetheless be considered when evaluating the data. The differences between the ACS and the 2006 ACS Content Test are described below.

The 2006 ACS Content Test did not provide a toll-free number on the printed questionnaires for respondents to call with questions, as the ACS does. The decision to exclude this service primarily reflected the resources that would have been needed to train staff and implement the operation for a one-time test. However, excluding telephone assistance also allowed us to collect data that reflect the respondent's own interpretation and response, without the aid of a trained Census Bureau interviewer.

The ACS Computer Assisted Telephone Interview (CATI) follow-up operation was excluded from the 2006 ACS Content Test to meet field data collection constraints. As a result, questions that are administered differently over the phone were not tested in a full CATI operation, although some Computer Assisted Personal Interviews (CAPI) do in fact occur by phone. However, because only ten percent of ACS data are collected by CATI, and because CATI interviewers are trained to help respondents understand question intent and response categories, overall ACS data quality should not suffer when the questions are implemented in CATI.

Limitations unique to the 2007 ACS Grid-Sequential Test

The main objective of the 2007 Grid-Sequential Test was to determine the effects of changing the layout of the basic demographic items on the ACS paper questionnaire from a grid to a sequential layout. To meet this objective, we used a study design that differed from the production ACS. The grid-sequential test was strictly a mail (respondent-completed) test; the other modes of data collection used in the production ACS for mail nonresponse follow-up, CATI and CAPI, were not used. Therefore, characteristics of CATI and CAPI respondents that may influence the estimates or distributions were not incorporated.

The data collection process for the grid-sequential test, and for the production cases used in the analysis, used a modified Key From Paper (KFP) system rather than the Key From Image (KFI) system implemented for the 2008 ACS. One difference between the standard KFP system and the new KFI system is that the KFI system contains edits that clean the data. As a result, we modified the KFP system to include imaging of the grid-sequential test questionnaires so that we could achieve results similar to the KFI system, having the ability to verify household or person records if needed.
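
To illustrate the kind of edit involved, here is a minimal Python sketch (the rule and range are hypothetical; the actual KFI edit specifications are not described in this report) of how an out-of-range keyed value might be blanked so that the record can be verified against the questionnaire image:

    # Minimal sketch of a keying edit (hypothetical rule, not the actual
    # KFI specification): a keyed value outside a plausible range is
    # blanked so the record can be verified against the scanned image.

    def apply_keying_edit(value, valid_range):
        low, high = valid_range
        if low <= value <= high:
            return value    # plausible value, keep as keyed
        return None         # implausible value, flag for verification

    print(apply_keying_edit(34, (0, 115)))   # 34 (plausible age, kept)
    print(apply_keying_edit(340, (0, 115)))  # None (likely keying error)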

Limitations Shared by the 2006 ACS Content Test and the 2007 ACS Grid-Sequential Test

The universe for both tests was restricted to housing units (i.e., Group Quarters were excluded) and also excluded Alaska, Hawaii, and Puerto Rico.

Both tests excluded the Failed Edit Follow-Up operation so that we could fully assess the impact of the proposed 2008 ACS content changes on data quality. As a result, households that provided incomplete information on their forms, or that reported more than five people living in the household, were not contacted to collect the remaining information through a CATI interview. A sketch of the trigger this operation uses appears below.
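
As a rough Python sketch of that trigger, based only on the two criteria named above (field names are hypothetical, and the production edit rules are more extensive), a return fails edit when key items are blank or more than five people are reported:

    # Sketch of a failed-edit trigger, using only the criteria named in
    # this report: incomplete items or more than five people reported.
    # Field names are hypothetical.

    def fails_edit(household):
        too_many_people = household["person_count"] > 5
        incomplete = any(v is None for v in household["items"].values())
        return too_many_people or incomplete

    hh = {"person_count": 6, "items": {"tenure": "owned", "age": 61}}
    # True: in production this household would be recontacted by CATI;
    # in these tests it was not.
    print(fails_edit(hh))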

To allow for the study of the impact of the proposed 2008 ACS content changes on data quality, neither test applied the ACS editing and imputation rules, which correct inconsistent responses and fill in missing responses. For these tests, only item nonresponse rates were calculated. Note that a change in an item's nonresponse rate can, in turn, change that item's allocation rate.
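
To make that measure concrete, here is a minimal Python sketch (hypothetical records and item names, not production code) of an item nonresponse rate, the only measure calculated for these tests:

    # Sketch of an item nonresponse rate: the share of eligible records
    # with a blank value for a given item. Hypothetical data and names.

    def item_nonresponse_rate(records, item):
        eligible = [r for r in records if item in r]   # item applies
        missing = sum(1 for r in eligible if r[item] is None)
        return missing / len(eligible) if eligible else 0.0

    records = [
        {"tenure": "owned"},
        {"tenure": None},      # question left blank
        {"tenure": "rented"},
    ]
    print(item_nonresponse_rate(records, "tenure"))  # 0.333...

In production, a blank item like the one above would be allocated (imputed), which is why a change in the item nonresponse rate flows through to the item allocation rate.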

The ACS weighting methodology was not replicated in either test. All tabulations were derived using only the sampling weights; the weights were not adjusted for nonresponse or controlled to the independent housing and population estimates, as they are in the full ACS.
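
For illustration, a short Python sketch (hypothetical weights and values; not the Census Bureau's weighting system) of a tabulation that uses only base sampling weights, omitting the nonresponse and population-control adjustments applied in the production ACS:

    # Sketch of a sampling-weight-only tabulation: estimate a total as the
    # sum of base weight times reported value. The production ACS would
    # further adjust these weights for nonresponse and control them to
    # independent housing and population estimates; neither is done here.

    def weighted_total(records, weight_key, value_key):
        return sum(r[weight_key] * r[value_key] for r in records)

    sample = [
        {"base_weight": 40.0, "renter": 1},
        {"base_weight": 40.0, "renter": 0},
        {"base_weight": 40.0, "renter": 1},
    ]
    # Estimated number of renter households from base weights alone:
    print(weighted_total(sample, "base_weight", "renter"))  # 80.0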



Source: U.S. Census Bureau | American Community Survey Office | Last Revised: September 20, 2012