
Computer Assisted Interviewing for SAMHSA's National Household Survey on Drug Abuse

9. Operation of the 1997 Field Experiment

In this chapter, we discuss the following aspects of the operation of the 1997 field experiment:

  1. the design and performance of the computer-assisted interviewing (CAI) programs, the hardware, and the supporting systems;

  2. recruiting, hiring, and training of field staff; and

  3. conduct of the field experiment.

Recall that the comparison group consisted of the Quarter 4 1997 NHSDA respondents in the primary sampling units (PSUs) where the field experiment was conducted. No special procedures were needed to get NHSDA substantive data from this comparison group. However, a subsample of these Quarter 4 NHSDA respondents was selected to receive an audio computer-assisted self-interviewing (ACASI) debriefing interview, and special interviewing procedures were needed for this subsample. Thus, this chapter discusses both the operations for the large field experiment and the comparison group debriefing subsample.

9.1 Design and Performance of the CAI Software, Hardware, and Supporting Systems

The 1997 field experiment interviewers used handheld computers for screening and laptop computers for case management, communications, and collection of interview data. Data were transmitted from the handheld to the laptop computer through a cable and from the laptop to the Research Triangle Institute (RTI) through a client-server transmission system using telephone connections. Processing took place at RTI in a networked PC environment, and operational data were made available to field supervisors (FSs) and survey managers through a protected website on the World Wide Web (WWW). Together, these computer systems allowed data to flow from the field to RTI electronically. We discussed the development and testing of the electronic screener in Chapter 8. In this chapter, we focus on the development and operation of the other software components.

The 1997 field experiment demonstrated that electronic data collection could be implemented successfully, but that care needs to be given to details to ensure good quality data. Overall, the systems worked well, although many needed improvements were identified and implemented for the 1999 NHSDA.

Software applications controlled or facilitated data movement at each stage of data collection. Programs for the CAI instrument, laptop case management system (CMS), transmission system, electronic screener, control system, and website were developed by RTI staff using commercial software packages.

The hardware for the 1997 field experiment consisted of a Toshiba Satellite 105CS laptop computer with a PCMCIA sound card and a PCMCIA modem, a set of headphones, and an Apple Newton MessagePad 2000. The Toshibas had 8 MB of RAM and a 75 MHz Pentium processor.

9.1.1 CAI Interview Software

The CAI interview for the field experiment had four components: a CAPI interview, an ACASI interview, an ACASI debriefing interview for the respondent, and a self-administered debriefing for the interviewing staff. The two debriefing components were modified for use in the comparison group subsample.

Field experiment interview. The experimental design of the 1997 field experiment incorporated three factors with two levels each. As a result, eight distinct questionnaires had to be developed. Specifications were prepared for each questionnaire and were used by the programming staff to develop the CAI instrument. Once all question wordings were finalized, we began work on the associated audio files to be used in the ACASI portion of the interview. The eight treatments differed only for the questions included in the first eight sections of the ACASI portion of the instrument (tobacco through inhalants). Separate hard-copy specifications were prepared for these sections and included all question wordings, routing instructions, "fill" text, and definitions of new variables created as part of the logical flow of the instrument. Programmers used the electronic versions of these specifications to prepare the treatment portion of the CAI instrument. The remainder of the instrument was prepared by revising code used in the 1996 feasibility experiment.
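
As a rough illustration of how a design with three two-level factors yields eight questionnaire versions, the following Python sketch enumerates the treatment combinations. The factor names are placeholders, not the study's actual labels.

    from itertools import product

    # Three two-level experimental factors; names are hypothetical.
    FACTORS = ["factor_a", "factor_b", "factor_c"]

    # Enumerate all 2 x 2 x 2 = 8 treatment combinations.
    treatments = [
        {"treatment": n, **dict(zip(FACTORS, levels))}
        for n, levels in enumerate(product((0, 1), repeat=len(FACTORS)), start=1)
    ]

    for t in treatments:
        print(t)  # eight distinct questionnaire specifications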

The instrument was programmed using Blaise 2.5, developed by the Netherlands Central Bureau of Statistics. Blaise 2.5 ran under the MS-DOS operating system, and due to questionnaire size limitations, the entire instrument could not fit into a single module. Instead, eight separate modules were developed:

  1. FT2INTRO (CAPI): introduction, front-end demographics, and tutorial sections (ACASI);

  2. TREAT (ACASI): experimental treatment module containing tobacco, alcohol, marijuana, cocaine, crack, heroin, hallucinogens, and inhalants sections;

  3. NTREAT1 (ACASI): analgesics, tranquilizers, stimulants, and sedatives sections;

  4. NTREAT2 (ACASI): special drugs, risk, general drug, special topics, and treatment sections;

  5. NTREAT3 (ACASI): workplace issues, drug experiences, youth experiences, and mental health sections;

  6. FT2DEMOG (CAPI): back-end demographics;

  7. FT2DEBRF (ACASI): respondent debriefing; and

  8. FIDEBRF: field interviewer (FI) debriefing (self-administered by the interviewer).

Once the FI began an interview, the CMS would then automatically write an MS-DOS command file for that case and execute it. This command file would run each of the modules in order and pass the case identification number as a parameter to each of the CAI modules.
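
A minimal sketch of the kind of command file the CMS generated is shown below; the .EXE launch syntax and file path are assumptions for illustration, but the module names and their order are those listed above.

    # Hypothetical sketch of the CMS step that wrote an MS-DOS command
    # file running the eight CAI modules in order for a given case.
    MODULES = ["FT2INTRO", "TREAT", "NTREAT1", "NTREAT2",
               "NTREAT3", "FT2DEMOG", "FT2DEBRF", "FIDEBRF"]

    def write_command_file(case_id, path="RUNCASE.BAT"):
        # newline="" keeps the DOS line endings exactly as written.
        with open(path, "w", newline="") as bat:
            for module in MODULES:
                # Pass the case identification number as a parameter.
                bat.write(f"{module}.EXE {case_id}\r\n")

    write_command_file("123456789")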

Rather than have eight different versions of the experimental sections (TREAT) containing redundant question definitions, we programmed a single version with conditional branching based upon the experimental treatments. This approach made it easier to update the software and maintain comparability among treatments, and it also meant that one data output file was generated rather than eight.

All screens and their associated logic were tested before the audio files were incorporated. The hard-copy specifications were used during all testing. The testers followed each path through the instrument, ensuring that every response category for a particular question (including the "don't know" and "refusal" options) was being routed to the appropriate place. Programming staff made corrections, and the tester rechecked the program to verify that the corrections were made.

Within the Blaise definition of each ACASI question, the names of the audio files corresponding to the question text were identified. In some cases, such as with dates, ages, and responses to other questions, the audio file name was variable. The CAI instrument would determine the names of the audio files to be played based on previous question responses.

Once the text of every ACASI question and every response category was finalized, we identified the associated audio files. These audio files are formally designated as waveform audio files (or simply WAV files). In the simplest case, a question will require two associated WAV files: one for the question text and one for the response categories. However, in many cases, the text of a single question was split into multiple WAV files. It was necessary to split questions that include "fill" text, such as the 30-day reference date, into multiple WAV files to allow the "fill" text to change separately from the remainder of the question text. As an example, consider the following question:

The audio for this question was split into five separate WAV files as indicated by the brackets. Similarly, a series of questions that all begin with a common stem were split into two pieces: one for the common stem and one for the text specific to that question. Splitting the text of a question into multiple WAV files reduced the total number of WAV files, thereby reducing the size of the ACASI program and enabling it to process more rapidly. There is also a downside to using multiple WAV files for a single question; namely, a pause may occur between the end of one file and the beginning of another, which can cause the question to sound choppy.
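
To make the fill mechanism concrete, here is a hedged sketch of how a question containing a 30-day reference date might be mapped to a sequence of WAV files; the file names and the date-based naming scheme are hypothetical.

    from datetime import date, timedelta

    def playlist_for_30day_question(interview_date):
        """Return the WAV files, in playback order, for a question whose
        'fill' is the date 30 days before the interview."""
        reference = interview_date - timedelta(days=30)
        return [
            "Q_STEM.WAV",                  # fixed opening text
            f"DATE_{reference:%m%d}.WAV",  # variable fill: the reference date
            "Q_TAIL.WAV",                  # remainder of the question text
            "RESP_CATS.WAV",               # response categories
        ]

    print(playlist_for_30day_question(date(1997, 11, 15)))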

We contracted with an outside vendor for both the recording and the WAV file creation. RTI staff read the audio text from a script prepared by the programming staff. A member of the NHSDA project staff attended all recording sessions to catch any mistakes or poorly phrased readings so that a re-recording could be made on the spot.

We checked each WAV file for accuracy. The WAV files were loaded into a database along with the screen text associated with the file. Staff listened to each WAV file while comparing it to the on-screen text. Mistakes and omissions were corrected in a second round of recording, and this proved to be a valuable procedure for reducing the need to return to the recording studio. Next, we verified that the correct audio files were being played at the correct time in the interview. Testers followed every path through the questionnaire and listened to every question and every response category to make sure the audio matched what was on the screen. This detected any problems that could arise due to associating the wrong WAV file with a question. Inconsistencies were corrected in a last round of revisions.

Quarter 4 NHSDA debriefing interview. The debriefing interview for the respondents using the paper-and-pencil interview (PAPI)/self-administered questionnaire (SAQ) was designed to gather information about their experience in using the answer sheets and their opinions about using computers to answer such questions. The instrument contained

  1. INTERVIEW ADMINISTRATION: seven items for the interviewer to record the screener roster number, indicate whether or not the respondent was willing to complete the interview, and read the closing statements at the end of the interview;

  2. RESPONDENT TUTORIAL (ACASI): a tutorial that was nearly identical to the tutorial used in the field experiment sample; and

  3. RESPONDENT DEBRIEFING (ACASI): questions that paralleled those used in the field experiment sample.

The questionnaire was also programmed using Blaise 2.5. After the interview was fully tested, the audio files were recorded and tested.

Performance of the CAI software. Although the CAI software performed reasonably well, some problems were encountered. Many of these involved situations where the laptop computer was turned off or lost power. In these cases, the CMS would sometimes lose track of where the respondent was when the power loss occurred. When the FI attempted to resume the interview, a "key already exists" error message would appear, indicating that the CAI module was being executed as if a new interview were being launched rather than a breakoff interview being resumed. When a low battery power message appeared, FIs would sometimes turn off their computers, and the CMS would lose track of the case.
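
The fix implied by the "key already exists" error is to check for an existing partial case before launching a module as new. The following sketch shows the idea; the data store and names are illustrative, not the actual Delphi/Access implementation.

    def launch_module(case_id, module, store):
        """Resume a breakoff if a record for this case/module exists;
        otherwise start a fresh interview."""
        key = (case_id, module)
        if key in store:  # partial data already recorded: resume, don't relaunch
            return f"resume {module} for case {case_id} at {store[key]}"
        store[key] = "first item"
        return f"start {module} for case {case_id}"

    state = {}
    print(launch_module("123456789", "TREAT", state))  # start
    print(launch_module("123456789", "TREAT", state))  # resume, no key error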

Problems occurred if inconsistent respondent age and birthdate data were entered during the initial demographics section. If the FI did not correctly resolve this inconsistency, the software would use the birthdate to compute the respondent's age. In some cases, sample members' ages were incorrectly computed as less than 12 years, and the CAI program was terminated inappropriately.
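
A consistency check of the kind involved compares the reported age against the age computed from the birthdate and forces the FI to resolve any mismatch before the computed value can end the case. This sketch is illustrative, not the instrument's actual code.

    from datetime import date

    def computed_age(birthdate, today):
        years = today.year - birthdate.year
        # Subtract one year if the birthday has not yet occurred this year.
        if (today.month, today.day) < (birthdate.month, birthdate.day):
            years -= 1
        return years

    def check_age(reported_age, birthdate, today):
        if computed_age(birthdate, today) != reported_age:
            return "INCONSISTENT: FI must reconcile age and birthdate"
        return "OK"

    # Computed age is 11, so an unresolved entry could wrongly end the case.
    print(check_age(12, date(1985, 12, 1), date(1997, 11, 15)))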

9.1.2 Case Management System

Design of the case management system. The CMS was developed in conjunction with the development of the Newton screening application (see Chapter 8) and the CAI questionnaire instrument. The CMS handled the following major functions:

  1. case tracking,

  2. launching and management of the CAI, and

  3. interface with the Newton/FormLogic application and the RTI field communications system.

In the field experiment, the screening data that were collected on the Newton were transferred to the Toshiba laptops, and all data were transmitted to RTI by the CMS. The CMS was based on a set of rules and specifications for the recording of dates and events in the system. The system largely replaced the paper-and-pencil system that interviewers had been using for tracking their activities associated with locating, screening, and interviewing sample housing units. Final status codes indicating a successful screening were generated by the screening application, and the CAI application generated the code and event for a final completed interview. Many other actions, such as visits to the housing unit, appointments for interviews, pending refusals, and so on, were entered and coded by the interviewers. A set of final events and codes also was entered by the interviewer to reflect the last action taken at a sample dwelling unit, such as deciding that the dwelling unit was vacant, that there was a language barrier, or that the occupants refused to be screened or interviewed.
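
The record of contacts can be pictured as an event log in which some entries are written by the applications and others are keyed by the interviewer. A minimal sketch, with placeholder codes rather than the study's actual code frame:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CaseEvent:
        when: datetime
        code: str    # e.g., "VISIT", "APPT", "PENDING_REFUSAL", "COMPLETE"
        source: str  # "screener", "cai", or "fi"
        final: bool = False

    record_of_contacts = [
        CaseEvent(datetime(1997, 10, 6, 18, 30), "VISIT", "fi"),
        # Final screening status, generated by the screening application:
        CaseEvent(datetime(1997, 10, 8, 19, 0), "SCREENED", "screener", final=True),
        # Final interview status, generated by the CAI application:
        CaseEvent(datetime(1997, 10, 11, 10, 15), "COMPLETE", "cai", final=True),
    ]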

The user interface for entering events was a single-screen design for case management that allowed the FI to move around without using the laptop's mouse. This design approach resulted in a dense and complex display that was somewhat hard to use. In addition, the FIs had to use the Alt-[letter] method of selecting actions for a very large proportion of the allowed actions. The remainder were triggered in pop-up dialog boxes and required the interviewer to use push buttons to execute the actions.

The CMS was programmed in the Delphi 3.0 Windows development system. The CMS database system was MS Access 97. Programming took place in parallel with the CAI application development and the Newton screening application development. During the development phase, prototype versions were quickly created for fast-turnaround evaluation by survey managers and methodologists. This allowed the system to evolve and become more suitable to the task. Functional testing took place on an ongoing basis by the programmer and project staff. First-stage usability testing also was conducted on a flow basis. Problems were fixed and suggested improvements made quickly, and the revisions were returned to the tester for evaluation. Field testing of the system was confined to nine cases. Because of the dynamic nature of the development of the Newton application and the CAI application, programming of the CMS was rushed, causing some problems to emerge during the field experiment.

The CMS for the comparison group subsample allowed the interviewers to administer the interviews and then transmit the data back to RTI. This system was written in Visual Basic 5.0 and contained facilities for calling the debriefing interview, tracking the status of all cases initiated on that laptop, and transmitting data to RTI. All cases selected for the 1997 Quarter 4 NHSDA debriefing subsample were loaded onto the laptops so that the interviewer could initiate a case by entering the case identification number, which was verified by the program before the debriefing interview could begin.

Problems encountered with the case management system. Five problems were encountered with the CMS, which guided the interview process:

  1. For the first few weeks of the 1997 field experiment, the CMS was using the same experimental treatment, specifically treatment 2, for the first CAI launched on each laptop. The initial treatment should have been selected randomly. A revised version of the CMS, which corrected this problem, was sent to the interviewers during the third week of data collection.

  2. The CMS was very susceptible to corruption if the laptop was turned off without a normal shutdown during the Newton/Toshiba link and transmission to RTI. A total of 31 out of 195 FIs had to ship their Toshibas to RTI to have the databases repaired. Of the 232 Toshibas that were used on the field experiment, 27 were returned for repair (11.6%); this number includes mechanical failures and CMS problems. One particularly common problem related to the design of the Toshiba laptop and the CMS came to be known as the "power off" problem. The power button on the Toshiba happened to be located on the left side of the computer, precisely where one would put his or her hand to turn the machine away from the FI and toward the respondent for the ACASI portion of the interview. If this power button was accidentally depressed while the ACASI interview was running, the CMS would irrevocably shut down. That is, restoring power would not rectify the situation. The entire system would be locked up and had to be returned to RTI for repair. This frequent problem eventually was corrected via an update to the CMS.

  3. Sometimes the records of contacts with the dwelling units were not in the correct order because FIs could enter a pending event in the Newton with a date and time later than a final event. After a Newton-to-Toshiba link, the final event would appear in the record of contacts, but because it was dated before the pending event, the case would remain in a pending status (see the status-resolution sketch following this list).

  4. The FIs could circumvent some of the guidelines for entering information by using the mouse. They were instructed to use keystroke combinations but did not always do so, resulting in some difficulties determining the status of cases.

  5. Some data and file problems were encountered during in-house data processing that arose from problems in the CMS, and modifications had to be made to both the in-house system and the CMS to adjust for these. The changes to the CMS were sent to the interviewers during the third week of data collection.
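
As referenced in problem 3 above, the ordering issue suggests resolving case status by event type rather than by timestamp: any final event should close the case even if a pending event carries a later date. A small illustrative sketch, not the actual CMS logic:

    from collections import namedtuple

    Event = namedtuple("Event", "when code final")

    def case_status(events):
        finals = [e for e in events if e.final]
        if finals:
            # A final event wins even if a pending event is dated later.
            return max(finals, key=lambda e: e.when).code
        pending = [e for e in events if not e.final]
        return max(pending, key=lambda e: e.when).code if pending else "NO EVENTS"

    # A final event entered before a later-dated pending event still closes the case.
    events = [Event("1997-11-02 19:00", "PENDING_APPT", False),
              Event("1997-11-01 14:00", "COMPLETE", True)]
    print(case_status(events))  # COMPLETE, so the case no longer stays pending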

On the whole, we were able to recover from these difficulties; however, they clearly pointed out the necessity of making sure that there is sufficient time to program and test the CMS.

9.1.3 Hardware

The hardware for the 1997 field experiment, as noted earlier, consisted of a Toshiba Satellite 105CS laptop computer with a PCMCIA sound card and a PCMCIA modem, a set of headphones, and an Apple Newton MessagePad 2000. Because the Toshibas only had 8 MB of RAM and 75 MHz Pentium processors, many of the interviewers reported the laptop systems to be slow in processing. Some of the hardware problems encountered during the 1997 field experiment included the following:

  1. Toshiba Laptops: Problems with loose keyboard ribbons, batteries not holding charges, and ease of inadvertently turning off the power.

  2. Newton Handhelds: Three Newtons had to be replaced. Two were dropped, and one shorted out after being plugged into a faulty electrical outlet. More details concerning the operation of the handheld computers are given in Chapter 8.

9.1.4 Data Transmission and Capture Systems

To transmit data to RTI, an FI activated an automated data transfer process on his or her laptop. After entry of appropriate security information by the FI, data files were transmitted from the laptop to RTI, or vice versa, under the control of a client-server transmission system. Microsoft Visual Basic and SQL Server software, running under Windows NT or Windows 95, supported both transmission and capture. The transmission software system consisted of a central RTI database, which listed files to transfer and maintained activity logs, plus a laptop component called by the CMS. The capture process archived, expanded, and distributed files for use by other processes.

Transmission issues accounted for the second largest number of problems reported by interviewers; these included (a) difficulties calling from a hotel room, (b) uncertainties about the successful transfer of cases from interviewer to interviewer, and (c) interrupted transmissions. Because of the number of transmission problems encountered, every time a laptop and Newton were returned to RTI, a transmission was done to try to capture any data that may not have been transmitted. Difficulties were encountered on approximately 1% of transmission attempts.

Again, these problems demonstrated to us the importance of having very robust computerized field management systems.

RTI provided telephone support for the FIs during the course of the 1997 field experiment. Almost all of the FIs had a reason to contact RTI's computer support staff on at least one occasion. There were 150 FIs at work for 91 days and, on the average, problems were encountered on 5.4% of the total workdays. Interviewers were able to contact an RTI technical support person by using RTI's toll-free number during normal business hours (477 calls), calling an automated response system after hours (32 calls), or calling the "after-hours" emergency pager number (232 calls). Exhibit 9.1.1 summarizes the calls that were received.

9.1.5 Monitoring and In-House Processing of Data

Electronic data collection shifts some aspects of data processing from the central site to the field. First, data entry is done by the interviewer or respondent and the manual pre-edit step at RTI is eliminated. Second, the cycle time is reduced in terms of receipt of data from the field, which means that central processes must respond quickly to the presence of new data to make the information available. Thus, there are new sources of error in the paperless environment, and new consistency checks must be defined so that inconsistencies and missing components can be noted. Additional errors may arise from equipment misuse or failure, and problems may be at any or all of several levels: the data items themselves, the files in which they are stored, or the software that manipulates them.
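
One concrete consistency check of the kind described is completeness of the module outputs: each transmitted case should arrive with data from every CAI module. A minimal sketch, under an assumed one-output-per-module convention:

    # Hypothetical completeness check for incoming case data.
    EXPECTED_MODULES = {"FT2INTRO", "TREAT", "NTREAT1", "NTREAT2",
                        "NTREAT3", "FT2DEMOG", "FT2DEBRF", "FIDEBRF"}

    def missing_components(received_modules):
        """Return the module outputs not yet received for a case."""
        return EXPECTED_MODULES - set(received_modules)

    print(missing_components({"FT2INTRO", "TREAT", "FT2DEMOG"}))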

A PC-based control system, which used Microsoft Access and Visual Basic, was developed to be compatible with the CMS data files received from the field systems. Due to time constraints in implementation and the need to define new methods of handling data and trapping errors, this system offered limited functionality as compared with the NHSDA national control system. However, it did provide valuable experience that was subsequently used to design the procedures for the 1999 NHSDA.

One gratifying result of electronic data transmission was the rapid availability of data. For example, this allowed us to detect the problem with the random assignment of respondents to treatments within the first 2 weeks of data collection. However, with expedited transmission of data from the field came a need for rapid processing. Control system operations were developed and scheduled to run periodically throughout the day and night. Thus, incoming data were added to the central database as rapidly as possible. Once in the database, the information was made available through a restricted-access site on the web. Because of this, reports and website database updates were available every morning instead of weekly. FSs and project staff members appreciated the ease of use and accessibility of the website.

Reports were viewable by authorized NHSDA staff through the use of standard browser software, either within RTI or from the field, on a 24-hour basis. FSs and RTI staff expressed frustration with the limitations of both sets of reports, and these were improved and supplemented for future use.

Based on the results, the electronic control systems were changed to include four features:

  1. increased level of error checking in field systems and at the data-capture stage,

  2. expanded functionality for the FS website management system,

  3. expanded and improved reporting from the control system, and

  4. increased data-review and correction systems in-house.

9.1.6 Problems with the 1997 Quarter 4 NHSDA Debriefing Systems

Several similar problems were noted for this component of the survey; however, because the component was much smaller and simpler than the field experiment application, the scale of problems was greatly reduced. As noted in the next section, the interviewers who participated in the debriefing did not attend training but learned how to use the laptop through home study. We were interested in how well they performed in the household and asked the respondents about laptop problems using the question shown in Exhibit 9.1.2. The exhibit shows that about 98% of the interviews were conducted without much trouble.


Exhibit 9.1.1 Technical Support Provided to the FIs During the 1997 Field Experiment
Type of Problem        Number of Problems Reported    Number of FIs Reporting Problem
CMS                            242 (33%)                       99 (26%)
Transmission                   154 (21%)                       74 (19%)
Screener/Newton                127 (17%)                       77 (20%)
User Error                     113 (15%)                       61 (16%)
ACASI/Interview                 33 (4%)                        20 (5%)
Newton/Toshiba Link             30 (4%)                        23 (6%)
Hardware                        24 (3%)                        16 (4%)
Miscellaneous                   18 (2%)                        17 (4%)
Total                          741 (100%)                     387 (100%)

Source: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment.

Exhibit 9.1.2 Question Used to Evaluate the FI's Use of the Equipment
Question: How much trouble did the interviewer seem to have setting up the computer to begin this portion of the interview?
Response (n = 593)     Frequency   Percent
No trouble at all            522      87.9
Some trouble                  59       9.9
A lot of trouble              12       2.0
Don't know                     1       0.2

Source: National Household Survey on Drug Abuse: Development of Computer-Assisted Interviewing Procedures; 1997 Field Experiment.

9.2 Recruiting, Hiring, and Training of Field Staff

To collect the data, RTI employed 144 FIs reporting to 10 FSs. The effort was managed by two in-house regional supervisors (RSs). To maintain consistent leadership with the 1997 NHSDA main study, the 1997 field experiment used many of the same in-house staff. Recruitment and hiring of FSs and FIs began in April 1997 and extended through August 1997.

All 10 of the FSs hired for the field experiment were recruited from RTI's database of field personnel. Each FS had served in at least one field supervisory role with RTI in the past; four had prior experience as an FS on the NHSDA. The territorial assignments were made based on PSU size, geography, and historical territorial assignments, and the assignments were divided relatively equally among the 10 FSs. The number of PSUs assigned to each FS ranged from 5 to 14, depending upon the specific characteristics of the assigned PSUs. The number of FIs working under each FS ranged from 12 to 17, again depending upon the specific makeup of the territory.

The FSs attended two training sessions to prepare them for their supervisory roles: one in late June and the second in mid-September. These sessions were conducted by RSs from both the 1997 field experiment and the 1997 NHSDA main study. The first training session lasted 6 days and covered recruiting procedures, an introduction to the NHSDA project (including general background information), locating and contacting the sample dwelling units, and a review of the screening procedures. The final segment of FS training consisted of 2 days of training on the use of the laptop computers. A portion of the training overlapped days 1 through 3 of the 1997 NHSDA main study's FI training session in order to save costs.

The second FS training session was held for 2 days in mid-September 1997 and focused on management and monitoring.

Following the FS introductory training session, the FSs began recruiting. Their efforts focused on FI candidates who had computer experience or who were deemed likely to succeed when working in a CAI environment. Sources for FI candidates included RTI's National Interviewer File, contacts with other survey organizations, and local government employment agencies. For sample areas in which FSs had difficulty recruiting qualified candidates, advertisements were placed in local newspapers.

9.2.1 Field Interviewer Training for the 1997 Field Experiment

Training included a "Training of the Trainers" session and an FI session. A 3-day session to instruct those in-house and field management staff who would serve as trainers for the FI sessions was held on the RTI campus in September 1997. The attendees included three lead trainers, 30 staff who would subsequently serve as FI trainers, technical support staff, administrative support staff, other NHSDA project staff, and a representative from the Substance Abuse and Mental Health Services Administration (SAMHSA).

Prior to data collection activities, two simultaneous FI training sessions were held to train those interviewers retained for the 1997 field experiment on the procedures for conducting the study. One session was held in Research Triangle Park (RTP), North Carolina; the other was in Los Angeles, California. Each prospective FI hired for the 1997 field experiment was required to complete a homestudy exercise package prior to attending the in-person training session. The homestudy package included a cover memo that instructed the recipients to read the field experiment manual, answer the homestudy review questions, watch the videotape, and listen to the audiotape. Those trainees who were new to field interviewing also were instructed to read the manual on fundamentals. The completed homestudy review questions were to be turned in upon registration at the assigned training site.

Due to the complexity of the 1997 field experiment's training program, a training team consisting of a number of positions was assigned to each training site. Each site had a site leader, an administrative support person, a lead technical support person, 4 technical assistants, 5 lead trainers, and 10 assistant trainers. Each training team consisted of a lead trainer, 2 assistant trainers, and a technical support person, and trained from 12 to 17 field interviewers.

During the training, interviewers were divided into groups based on their experience. Special sessions were held for those who had never done personal visit interviews; for some sessions, we formed groups with different degrees of computer experience. As a supplement to the daily training sessions, a 2-hour study session was scheduled for each evening. These sessions allowed trainees to receive further instruction.

There were some difficulties. The Newton did not have video-display capabilities that allowed active screens to be projected onto a larger screen for viewing by the group. To deal with this issue, RTI's photographers prepared images of each screen of the prescripted Newton training exercises. These were loaded into a PowerPoint presentation that resided on the trainer's Toshiba and were projected using a device designed for computer presentations. This, however, did not allow the trainer to spontaneously manipulate Newton screens or exercises to address specific questions raised by the class. For the 1999 training, projectors that could display the Newton screen were used.

There were also some problems due to the fragile state of the CMS. Some procedures were awkward, and some software bugs were discovered. Sheets summarizing the commands were developed and distributed to all trainees and were well received.

A key requirement of the 1997 field experiment was that interviewers transmit data to RTI every night. Practicing this at training was important, but the telephone lines at each training hotel did not initially meet the transmission requirements. Site leaders and technical support staff worked closely with the hotel technicians to resolve this problem.

Trainers carefully monitored all attendees on overall performance and comprehension. Weak performers were given extra attention, primarily during the evening study sessions. Through these efforts, 71 out of 72 interviewers "graduated" from the Los Angeles session; 70 out of 72 interviewers graduated from the RTP session. Overall, 10 of those graduating were classified as needing extra attention from their supervisors once they began their field assignments. By the end of October, a total of 14 FIs had left or had been released from the project. Thus, a supplemental FI training session was held from November 6 through November 11, 1997, to boost the number of FIs working on the 1997 field experiment. A total of 23 FIs were trained at this session, which was held on campus at RTI. Ten of the trainees were brought in as replacement FIs; the remaining 13 trainees were designated as traveling FIs who would travel as needed to boost production in particular areas. A third and final supplemental training session was held in late November 1997. During this session, four experienced NHSDA interviewers were trained to serve as additional traveling FIs. Due to their experience with the study, the training session was reduced from 6 to 3½ days and focused on the Newton and Toshiba administration components of the 1997 field experiment.

9.2.2 Field Interviewer Training for the 1997 Quarter 4 NHSDA Debriefing

A subset of the 1997 NHSDA FIs was trained to administer the ACASI debriefing interview to the subsample of PAPI respondents. A total of 71 interviewers who were working on the 1997 NHSDA main study were selected based upon their performance on the NHSDA in general, their perceived ability to conduct the ACASI debriefing portion of the interview, and their location relative to the segments selected for the debriefing sample. Field supervision of the debriefing sample was subsumed under the regular supervisory duties of the assigned FS for the 1997 NHSDA; most of the 12 supervisors had one or more interviewers working on the debriefing component. The target number of debriefing interviews to be completed was 750.

All interviewers selected for the debriefing component of the 1997 field experiment were already trained on the NHSDA PAPI administration procedures. Because the ACASI debriefing program was simple, an in-person training session for the debriefing interviewers was not necessary. Instead, they were trained via a comprehensive homestudy package containing a booklet of study materials that covered homestudy procedures, an overview of the debriefing study, the hardware and software, a step-by-step guide through the actual debriefing interview, and a technical support information sheet. In addition, they were sent a videotape to be used in conjunction with the homestudy booklet. The video showed detailed, step-by-step instructions on using the Toshiba laptop computer and administering the ACASI debriefing interview. It also provided instructions on transmitting completed debriefing data to RTI. Upon completing the self-study portion of the homestudy package, the interviewer was required to successfully complete a certification and transmission exercise. Receipt of the debriefing assignment was contingent upon successful completion of the certification exercise.

A laptop carrying case containing a Toshiba laptop computer and all necessary ancillary materials was also shipped to the interviewers, along with packing materials and a Federal Express label for return shipment.

9.3 Data Collection

As noted above, the 1997 field experiment used many of the same in-house management staff as the 1997 NHSDA main study. The overall management approach was based on the procedures already in place for the main study; however, modifications were necessary to accommodate the differences primarily related to electronic data collection.

9.3.1 Management of Data Collection Operations

Data collection for the field experiment began on October 1, 1997, and continued through January 31, 1998. The final response rates did not reach the targeted level: only 1,982 interviews were completed, 11.9% fewer than the expected 2,250. In this section, we discuss how data were transmitted from the field, communications between field staff, the use of traveling FIs, and other field conditions. We highlight the situations that contributed to the shortfall in the number of interviews.
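
As a quick check of the shortfall arithmetic, using the figures above:

    expected, completed = 2250, 1982
    print(f"{(expected - completed) / expected:.1%}")  # 11.9%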

During training, interviewers were instructed to update all events related to their fieldwork each day and to transmit their data every day. These daily transmissions provided the FSs and project management with frequent and consistent reports of progress made in the field. Each day, reports were generated and posted to the password-protected NHSDA website, so more timely feedback could be provided to all staff. This procedure was a significant improvement in timeliness over the weekly reporting schedule used on the NHSDA main study. However, system problems early in the field period resulted in some inefficiencies in compiling the information necessary to manage the field staff efficiently and, in some cases, effectively. The system problems were continually addressed throughout the early weeks of data collection in an effort to stabilize and improve the automated reporting system.

FSs held a mandatory weekly conference call with each of their assigned FIs. Each call lasted a minimum of 1 hour and allowed the supervisor to work with each interviewer to monitor and improve the quality of work, rate of production, and costs associated with his or her field efforts. In addition, any other relevant topics, questions, concerns, or problems were discussed during this time. Supervisors also used the conference call to relay important information from the field management staff to the interviewers. For example, known computer problems and their corresponding solutions (if available) were relayed to the FIs during the weekly call.

Initially, we intended to use the daily and weekly production reports that were to be posted to the website to review work status with the interviewers. However, due to uncertainties in the system during the early weeks of the 1997 field experiment, a backup monitoring and management plan was put in place, and during the weekly call, the FIs reported their updated status codes for each case in their assignment. The FS used this information to discuss difficult cases with the interviewer, and it also provided a check on the information that was received during the transmission. At the beginning of the study, the FSs' hand counts most often were more accurate than the web-based reports.

The in-house RSs called their FSs each week to obtain their counts of completed and pending cases, and these counts became a benchmark that was used to monitor the revision of the web-based reporting system. The RS/FS calls also included discussions about FI workload and any other field production problems. In addition, the RSs conducted periodic group teleconferences with the supervisory staff to share information on how to improve respondent participation rates.

Once data collection started, FI attrition was higher than had been anticipated. At the same time, a higher-than-expected yield from the field experiment sample resulted in more cases being eligible for the study. As a result, very early in the field period it became evident that the interviewer staffing level was insufficient to complete the work as planned. To help combat this problem, a select group of specially skilled and trained FIs were identified and secured to serve as traveling FIs, or TFIs. The TFIs did not have a home-based assignment; rather, they were available for field deployment where needed. Thus, additional interviewers were trained as described above.

The 1997 field experiment was the first major NHSDA data collection effort for which all data--screening and interviewing--were collected and transmitted via computer. Prior to the start of data collection, considerable effort was expended to test the screening and ACASI components. However, a somewhat compressed development period for the CMS resulted in a product that needed more complete and thorough testing. We identified some problems very early in the field period that required immediate corrective actions. After about 3 weeks of data collection, all FIs were instructed to stop work until they received a new version of the CMS, which was shipped out on diskette. The need to upgrade to a new version of the CMS resulted in several days of lost field time. Although the new version of the CMS was much improved, problems still existed with the updating and posting of case-level status codes, and the CMS continued to receive a great deal of attention from the RTI technical staff. Additional problems were identified and, when possible, corrected electronically when the FIs transmitted data to RTI. For those problems that could not be readily resolved via software corrections, FIs were notified by their FSs of the known problems and of steps they could take should a problem occur. Again, as noted earlier, backup hard-copy reporting systems also were implemented to provide a cross-reference on the status of work in the field.

The FS CMS was a modified version of the FI system; thus, because the FI system experienced problems early on, so too did the FS CMS. In fact, completion of the FS CMS was delayed until the initial problems identified in the FI version were corrected. By mid-November 1997, a preliminary version of the FS CMS was operating as a web-based reporting system.

Essentially all of the problems experienced with the CMS were a result of a lack of sufficient time to thoroughly test and revise the systems prior to actual data collection. The data collection period became the testing ground, which is a less-than-satisfactory situation. The consequences included delays in field progress and resultant frustration among the field, management, and technical staff. After the field test was completed, the CMS was redesigned, enhanced, and rigorously tested to develop the version that would serve as the foundation for systems to be used in the 1999 NHSDA. In April 1998, a 2-week field test of the prototype web-based CMS was conducted; the system performed extremely well, and some additional refinement and testing was done prior to the beginning of the 1999 survey.

Because the 1997 field experiment was the first time in which screening data were collected via computer, it was unknown what impact, if any, that method of data capture would have on the screening results. Thus, the sample was designed with the intent to accommodate potential shortfalls due to failure of the screening hardware or software. However, these possible problems were not realized, and the yield was 14% higher than expected. The size of the FI staff was insufficient to handle this increased workload. To resolve this problem, at mid-quarter, 1,837 cases were removed from the sample. This required a redistribution of assignments for the interviewers and caused some inefficiencies in the data collection.

For the 1997 field experiment, the sampling rates were set so that 50% of the respondents were expected to be 12 to 17 years old. Because we used sampling algorithms similar to those used in the national NHSDA, this resulted in a higher than usual number of households in which more than one person was selected. Each interview took about an hour, and response rates were depressed in two-interview households because of the increased burden.

The electronic screening mechanism had many advantages, particularly related to the accuracy of implementing the respondent selection routine. However, this initial screening application was relatively rigid; once an error was made by an FI, it was difficult or impossible to correct that error, particularly when standing on the respondent's doorstep. As these types of errors became known, all staff were cautioned to be particularly careful, and errors were reduced but not eliminated.

Another issue that arose was the interviewers' perception of the respondents' attitude toward the overall interview length. Although the total lengths of the CAI and the PAPI version used in the national study are nearly the same, many interviewers seemed to feel that respondents thought the process was excessively long. This perception apparently arose from the time needed to set up and then break down the electronic equipment. Some FIs may have been less inclined to push for the interview because they felt it was excessively burdensome for the respondents. As the field period progressed, the FIs became more proficient at setting up and breaking down the equipment, and this concern subsided.

Finally, the use of the TFIs was not as effective as anticipated. Some high-quality, veteran NHSDA interviewers were selected for this assignment; however, they had not worked as traveling interviewers before and did not deliver the level of effort on the 1997 field experiment to which they had committed.

As was noted earlier, the time frame to prepare for the 1997 field experiment was relatively compressed. The final decision to introduce electronic screening for the field experiment did not occur until June 1997. Again, this limited our ability to adequately test the complete electronic system, especially as it involved interactions with the CMS.

The data collection period spanned the winter months of November and December 1997 and January 1998. This is traditionally the most difficult time for fieldwork on the NHSDA. In spite of efforts to counteract the effects, uncontrollable factors, such as bad weather, holidays, and fewer daylight hours, had a negative impact on response rates.

In spite of the problems that were experienced, much of the overall reaction to the computerized approach was, in fact, favorable. The tremendous potential for a fully automated NHSDA data collection effort was obvious, as is evidenced in the sections that follow.

9.3.2 Field Staff Reactions to Computer-Assisted Interviewing Procedures

At the conclusion of the field experiment, we obtained input from the field staff regarding their perception of the process as a whole. Four conference calls were held in January and February 1998 to debrief the field staff. Two calls were held with the FSs, and two were held with selected FIs. RTI project management staff moderated and listened in on the calls, and SAMHSA staff participated. Prior to each call, a memo containing general instructions and a list of specific issues was distributed. Each call lasted approximately 2 hours.

Field supervisor reactions. In the discussion of training, several FSs noted that they felt inadequately prepared at the FI training session for dealing with the computers and the computer problems that occurred. They suggested that detailed explanations of potential computer problems prior to training would have been helpful (although in reality such details were not known prior to the training session). They also noted that training might be more effective if FIs with little to no computer experience were trained separately from those who had experience. They noted that the amount of equipment/hardware involved with the field experiment was difficult for some FIs to carry.

The FSs also noted that they would have preferred to have more software and hardware capabilities available at the onset of data collection, including (a) the ability to electronically assign and transfer cases between FIs, (b) the ability to communicate via e-mail with the FIs, (c) prior training on handling the more common computer/technical problems, and (d) a more functional system of automated and accurate field status reports.

A functional electronic system of gathering and reporting data relieves the FSs of many of the mundane paperwork tasks they typically complete. However, the field experiment FSs regretted having no way to review the quality of their FIs' work. That is, with the paper NHSDA, the initial completed cases from the FIs are sent to the FS for a quality review; in the field experiment, there was no comparable ability to monitor the actual data being entered by the FI.

The FSs reported that they were more comfortable with hiring FIs who had computer experience. They also noted that it generally was not difficult to find candidates meeting this requirement. Some did feel that it was sometimes difficult to recruit people for the NHSDA due to the subject of the survey and the lack of respondent incentives.

Overall, FSs were generally positive. They understood that this was a test of procedures, and thus problems were to be expected. The FSs made a point to emphasize that the technical support available to the field staff was excellent, especially after the initial rush of calls during the first few weeks of data collection. Moreover, they were very positive about using the Newton for screening.

Field interviewer reactions. For the FI debriefing calls, the RSs chose a cross-section of staff from their region. Overall, the FIs also were positive. They felt that potential respondents were very receptive to the Newton and to completing the interview on the computer. The Newton generally performed well, even in adverse weather conditions. However, at times it was difficult to see the screen in direct sunlight. Some FIs complained that the Newton ran slower or the screen fogged up in cold or rainy conditions.

Problems with the CMS constituted the largest concern expressed by the FIs. They requested simpler and more consistent procedures for the software access, management, and reporting tasks. The FIs noted that the detailed "cheat sheet" pertaining to the CMS, provided at the conclusion of training, was a helpful source for easy reference. Several complained about the limited battery life they experienced, especially in the Newton. Other difficulties were mentioned:

  1. holding all of the required materials and equipment while at the door conducting screening;

  2. a perception that the interview took much longer than 1 hour to complete;

  3. the lack of an incentive for respondents;

  4. difficulty in completing the second interview in a two-interview household due to the length of the first interview;

  5. repetitive questions in the interview (this varied by questionnaire version); and

  6. the ACASI tutorial, which many respondents saw as a waste of time.

The FIs were asked what they did to occupy themselves while the respondent completed the ACASI portion of the interview. They reported that they completed paperwork, prepared materials, socialized with other members of the household, read personal materials, read their FI manual, or did some combination of these activities. Some FIs reported that most of their respondents used the headphones during the interview; others reported that they did not. This action seemed to depend on whether the FI encouraged the use of the headphones.

When asked about training, the FIs generally were positive about their experience. Many said they would have liked to have had more information on troubleshooting potential problems. However, they acknowledged that such information was not readily available until after problems had occurred in the field. Others suggested that there should be more time at training to practice with the computers. Similar to what was expressed by the FSs, many FIs also felt that they should have been separated by their degree of computer experience. Also similar to the FSs' response, most felt that technical support during training and while in the field was excellent. A few FIs complained about isolated problems with reaching technical support staff and with receiving a prompt response to a specific issue.

9.3.3 Management of Debriefing Sample Data Collection Operations

By design, the debriefing sample cases were assigned to field staff working on the 1997 NHSDA main study. Consequently, the FIs, FSs, and RSs taking on this additional assignment were all veteran NHSDA staff. The same 1997 NHSDA management team at RTI was in place to oversee the debriefing sample.

In those segments selected to be part of the debriefing sample, one or two people in the sample dwelling units may have been selected for the NHSDA interview. A hard-copy screening form indicated whether zero, one, or two debriefing interviews were to be conducted. At the conclusion of each day on which they had completed one or more ACASI debriefing interviews, the FIs were to transmit the debriefing data directly to RTI.

From the FSs' standpoint, management of the debriefing sample was somewhat difficult. Because the relevant information was resident only on the hard-copy screening forms, the FSs were not able to determine with certainty whether a debriefing interview was to be conducted at a given dwelling unit until they received the form from the FI. Thus, during their weekly conference calls, for every completed interview reported, the FS would ask if an ACASI debriefing interview had been conducted. Because the FSs had no records with which to cross-check the information, they had to rely on the FI to properly relay the debriefing information to them. Upon receiving the hard-copy information, the FS

  1. reviewed the screening and selection procedures,

  2. confirmed the debriefing interview assignment status for each eligible case, and

  3. checked the debriefing sample transmission report found on the FS website and confirmed that a transmission had occurred.

Depending on the outcome of the review, one of two steps was taken:

  1. If all information was correct and the ACASI interview had been transmitted, the case was considered to be complete and the FS noted this in the tracking system.

  2. If a transmission did not occur, the FS contacted the FI to verify an ACASI debriefing interview had been completed. If "yes," the FI was instructed to do an immediate follow-up transmission. If "no," information regarding the lack of a debriefing interview was entered into the tracking system.

Several types of problems were encountered, including missing interviews; unsuccessful transmission of cases; cases coded as complete although the respondent had actually refused, so that no interview should have been sent; and extra interviews that appeared because an ACASI interview was inadvertently completed at a dwelling unit where none should have been.

In spite of the effort to make the homestudy as complete and thorough as possible, the preparations proved to be insufficient for some of the FIs. A higher-than-average number of errors involved FIs failing to administer debriefing interviews to eligible respondents or erroneously administering the interview to ineligible respondents; there were also excessive technical difficulties related to operating the hardware and transmitting completed data.

In addition, the sampling rates embedded in the screeners did not yield the expected 938 cases, of which 750 were to be completed. In fact, only 738 cases were yielded, about 20% fewer than expected. A total of 596 ACASI debriefing interviews were completed, 154 short of the projected target of 750. This shortfall can be explained in part by the aforementioned difficulties; however, we also felt that the substantial resources required to simultaneously manage the field experiment, the main NHSDA, and the debriefing component contributed to less than adequate attention to this latter component.
