[This Transcript is Unedited]

Department of Health and Human Services

National Committee on Vital and Health Statistics

Subcommittee on Populations

June 21, 2000

Hubert H. Humphrey Building
Room 705A
200 Independence Avenue, SW
Washington, DC 20201

Proceedings By:
CASET Associates, Ltd.
10201 Lee Highway Suite 160
Fairfax, VA 22030
(703) 352-0091

PARTICIPANTS

Subcommittee:

Staff:


P R O C E E D I N G S 1:30 PM

DR. IEZZONI: We were to get a list of the types of people whom we expect to see at the July meeting.

Susan, could you tell us who you have lined up for July?

DR. QUEEN: I have contacted the trade associations you had mentioned, APTA and AOTA, and each one has promised someone who will speak for their organization on the ICIDH.

We have also lined up one, possibly two, international speakers to give their perspective on the ICIDH. Thanks to Paul's suggestion we have a couple of people whom we are referring to as disability advocates, Nora Wells for children's issues and somebody else who was recommended by AARP who will speak to issues of the elderly.

There will be some presentations on the roll-out of the ICIDH that has occurred, and we have a few other speakers lined up as well. For today we have three people coming from the Census Bureau who were feeding their meter a while ago, Theresa Demaio, Elizabeth Martin and Charles Nelson, and they were just going to go upstairs I think and get a cup of coffee.

What I don't have, if you wanted somebody from the AMA, is a confirmed speaker from the physicians' group.

DR. IEZZONI: There was a representative of the AMA here at this meeting. Actually there were two of them this morning. I think that they might keep a list of who attends the meetings, and you should just maybe call them and find out who would be the right person.

DR. EDINGER: I think the other one went downstairs to the privacy meeting.

DR. IEZZONI: All right, that sounds good. Kathleen Fyffe kind of raised her eyebrows about payers. We haven't had any payers who have been able to come. We had talked earlier, for I think it was the January meeting, about having payers come, but we had trouble finding them. That was back when Carolyn Rinds was the lead staff for the subcommittee, and I think that has just kind of fallen by the wayside a little bit.

Maybe what you might do, Susan, is contact Kathleen Fyffe, who is one of the members of the subcommittee. She is at the Health Insurance Association of America. See whether she might have somebody who might be interested in speaking or could represent her organization. What is the issue?

DR. STARFIELD: Representation of payers.

DR. IEZZONI: Right, what their views would be of having ICIDH as a code set that would be required on administrative transactions, what they might use the information for, what their concerns would be about implementation feasibility, those kinds of things.

One of the people we were supposed to hear from the day that we had the snowstorm was actually Gretchen Swanson, who had used the ICIDH at the request of California Blue Cross Blue Shield for specific projects that they viewed as important for doing their business. Gretchen, who was flown in from California for that particular meeting, has not been flown in from California by us again, obviously, and so we haven't heard that perspective, but that would be --

PARTICIPANT: And what is Kathleen's affiliation again?

DR. IEZZONI: She is at the Health Insurance Association of America. She is a member of the full Committee and so you will be able to get her complete information from the full Committee.

DR. STARFIELD: I can imagine that they would have a terrible time dealing with the issue unless we can really specify it better. I mean we don't even have any idea what code sets are there. You know, given how limited our time is, and I am not denying that we should have their point of view, I don't think this is the right time to have it.

DR. IEZZONI: Because Kathleen reacted the way that she did during my presentation a few minutes ago let us just contact her and see if there is anybody she might be interested in having us hear from. Remember that she is on the Privacy Committee and the Privacy Committee as we speak right now is meeting to talk about this topic.

I think given that we do have a few minutes while we are waiting for our speakers to come, Paul, do you want to raise the issue? All right, we have finally gotten started. My name is Lisa Iezzoni. I am chairing the Committee and so let us see if we can find places around the table. I apologize to you. The Committee apparently was having quorum problems and so the decision was made to try to break early which meant that we cut into lunch. I see people having various things. I, myself, had a Baby Ruth bar. Paul, are you having ice cream, I hope?

DR. NEWACHECK: Oh, yes.

DR. IEZZONI: I am glad to hear that. It makes me feel a little less guilty about this.

We really thank you for coming because we have been hearing all sorts of rumors about the census and about the census's effort to collect information on disability and we are hoping that you all can tell us the truth about what actually did happen, is happening I guess. There are continuing efforts to finalize the counts and to get complete information.

We have Ms. Demaio, Ms. Martin and Mr. Nelson. Do you want to go in that order or do you want to change your order?

MS. MARTIN: I think Chuck Nelson is going to start and then Terry Demaio is going to speak and then I am going to talk.

DR. IEZZONI: Great.

MR. NELSON: I have a handout. Aaron had a point of order. We haven't introduced ourselves to you guys. I guess the question is would you remember anything about us if we told you, but you have got a list of the Committee members and you can see our names. So, maybe we should just go around and state our affiliation really quickly.

DR. EDINGER: Edinger, Agency for Health Care Research and Quality, the Quality Subgroup.

MR. HANDLER: Aaron Handler, Indian Health Service. I used to be with the Population Division, Census Bureau.

DR. IEZZONI: I am Lisa Iezzoni. I am from Harvard Medical School in Boston.

DR. QUEEN: Susan Queen from Health Resources and Services Administration, staff to the Subcommittee.

MR. HITCHCOCK: I am Dale Hitchcock, ASPE, here in the Humphrey Building, staff for the Committee.

DR. FRIEDMAN: I am Dan Friedman with the Massachusetts Department of Public Health.

MS. COLTIN: I am Kathy Coltin with Harvard Pilgrim Health Care in Wellesley, Massachusetts.

DR. GREENBERG: I am Marjorie Greenberg from the National Center for Health Statistics and Executive Secretary to the full Committee.

DR. STARFIELD: Barbara Starfield from Johns Hopkins University, member of the Committee.

MS. WARD: Elizabeth Ward from the Foundation for Health Care Quality.

DR. NEWACHECK: Paul Newacheck from the University of California, San Francisco.

MS. MARTIN: I am Bessie Martin. I am a senior survey methodologist at the Census Bureau.

MS. DEMAIO: I am Terry Demaio, and I am a researcher at the Census Bureau.

DR. IEZZONI: Now, we are taping this and are we on the Internet? We are and so when people are speaking we are going to have to have you be miked.

So, Mr. Nelson, we are going to need to have you be wired and Ms. Martin I think when you speak you are going to have to go closer to the microphone so they can hear.

MR. NELSON: Okay, I think by way of background, I am the Assistant Division Chief for Income, Poverty and Health Statistics in the Census Bureau's Housing and Household Economic Statistics Division, and as such I actually manage the development of a lot of Census 2000 content, really five areas: income, labor force status, work experience, industry and occupation, and disability.

Four out of five of those content types went very well for Census 2000. Disability was the only one that had a sort of rough spot, but nobody ever asked me to come and talk about income or labor force, but I get a lot of requests to come and answer questions about disability.

So, just to start, I should probably start with the 1990 disability question, and some of you have probably seen this before, and others may not have but in 1990, the Census Bureau asked four questions. Two had to do with work disability, whether or not you were limited in the kind of work you could do, whether you were prevented from work, and two had to do with the daily living activities, the IADL question about going outside of the home alone and the ADL question about taking care of your own personal needs, and these were traditional disability questions back in that time.

Over the nineties it was clear that we were going to have to make major changes to the disability questions. The ADA came about, and even in 1990 there was displeasure with the questions. So, we knew the work disability questions were going to need work, and there were problems with Question 19B, taking care of your personal needs.

That is a question that did particularly badly in our data quality evaluation for the 1990 census, and so we knew that a lot of work had to be done on that question. So, really three out of the four questions we knew some research had to be done on, and we planned on fairly major changes to that.

Now, the Census 2000 content development program was a little bit different than previous censuses. Going up to 2000 there was a lot of pressure on the Census Bureau to cut back on the long form, and we really spent a lot of our time in the nineties justifying questions and justifying content. In fact a lot of your agencies probably were involved in this process of justifying whether or not we really needed disability questions in 2000, and that was what the interagency focus was on. We didn't spend as much time on the questionnaire development, although there was certainly a question development process, and certainly in the mid-nineties there was quite an extensive questionnaire development effort, but it wasn't as much of an interagency action as perhaps previous censuses were, because the interagency focus was really on justification, and the feeling was that if we knew the content, then we could develop the questions that fit that content, that fit the needs of the users.

So, in the mid-1990s there was a national content test where various questions and various combinations were tested, and there was a data quality evaluation done of those. Coming out of that evaluation, in early 1997, the Census Bureau came out with a recommendation to ask this set of questions in the dress rehearsal, the 1998 census dress rehearsal, and then in the 2000 census. This is that one-page handout; the 1990 questionnaire is Chart 1, on Page 23 of that paper. So, if you compare the two you can see they are quite a bit different. They are very much different.

I think the biggest difference was that we wanted to focus on activities, and indeed we ended up listing a lot of activities: walking, mental tasks, going outside the home alone, which is actually the only one that is probably similar to the 1990 question, blindness, deafness, and hearing and seeing, and then a question about work, not about prevention but about a condition that made it difficult to remain employed or find a job. The other major difference was that we were going to ask these questions not only of adults but of children, for I guess four out of the six questions, which was a big change from 1990.

So, the content development was done in concert with, oh, a sort of broad disability group. There was an interagency committee on disability research and a subcommittee on disability statistics, and the plans for testing and also the results and the proposed questions were vetted through this sort of broad interagency group. Then in the spring of 1997 these questions, along with the other questions, were sent out to all the agencies that had a stake in the census, and there was a major meeting at OMB to talk about them and to really discuss whether or not all the questions fit the needs of all the stakeholder agencies. At this meeting, and even before the meeting, we got the clear message that the disability questions that were proposed for 2000 didn't fit the needs of the user community, the federal census disability stakeholder community. So, shortly thereafter a group was set up under OMB. Nancy Kirkendall of OMB was the Chair, and it was an interagency group that had members from the Census Bureau, HHS, the Department of Education, the Department of Justice, SSA and probably some other agencies as well, to see if that set of questions could be improved or if a set of questions could be developed that fit the agency needs better than the one proposed by the Census Bureau. The problem was that there was a very short period of time in which to do that, because the Census Bureau said that only questions that were asked in the dress rehearsal would make it to the 2000 census, and this was the year before the dress rehearsal, and anyone who has worked on surveys knows that there is a lot of lead time involved. It was a very tough time, and actually Terry is going to talk about the testing that was done to end up with the set of questions we actually ended up asking in the 1998 dress rehearsal and the 2000 census, which is this set of questions.

So, I guess the major problems that were cited with the original census set of questions were, one, that the Census Bureau was proposing to drop the question on taking care of personal needs, you know, the ADL question. That was a specific decision that the agencies were particularly not pleased with, and I think there was another specific issue, which was that there was dissatisfaction with the work question.

There was a request to get some kind of work accommodation question, which was something that we had never asked before, and there was very little experience with it, but that was something that this group was at least interested in testing, and maybe Terry is going to talk about this stuff, so I won't talk too much about it. Basically the story, as sort of an overall picture, is that we went from the 1990 set of questions, which was fairly limited and had problems, to a proposed set of 2000 questions that were I think certainly in the right direction but still, according to the user community, had problems, to this set of questions, which fit the bureau requirements for testing and certainly fit the needs of this group that was set up by OMB to see if a set of questions could be developed. So, I think in the end everybody was happy, or as happy as they could be given the fact that this happened in a very short period of time.

We went from the other set of questions to this set of questions in about 2 months or something like that. So, it was a very tough time, and I guess Terry is going to talk a little bit about the testing of this thing and then maybe I will come back and talk about the results of that so far.

DR. IEZZONI: Can we just ask clarifying questions about these? Questions 16A, B, and 17A, B are asked of children?

MR. NELSON: Yes.

DR. NEWACHECK: Children of all ages or 6 and above?

MR. NELSON: Five and above.

DR. NEWACHECK: Did you give consideration to going all the way down to zero?

MR. NELSON: There was some testing done in the national content test of questions that went down to zero, but the results were not what we wanted.

DR. NEWACHECK: It is difficult to get good questions there, but it would be nice to have a full US population estimate instead of capturing 96 percent of the population or 92 or whatever you are getting.

MR. NELSON: Right. I think part of it was a feeling that you might have to ask somewhat different questions, and then I don't know. Part of the problem with the census, and I should have talked about this at the beginning, is that it really is a general purpose instrument. I mean it is trying to fit a lot of needs in a small amount of space. The fact that the Bureau was able to justify getting a little more space for disability questions was sort of a major victory, and so I think there is always more you wish you could do at the census, but it is just a constant trade-off. Perhaps, you know, maybe your question for all ages could have been developed at the expense of one of the other questions, and there would have been a trade-off, but given the fact that what we tried didn't test very well and there was a sort of desire to mention a lot of activities, it was what do you give up. There is always a trade-off.

In fact, that was one of the, and I am sure Terry is going to talk about this, but that was one of the requirements that we said, you know, you can come up with other questions, but they had to take the same amount of space because there was only this much space allocated for disability questions on the census, and that was it. That was a non-negotiable item.

DR. IEZZONI: Are there other specific questions on the questions, because I suspect some of the more policy and testing issues will be talked about by our other speakers?

Barbara?

DR. STARFIELD: Just a clarifying question. You have my sympathy, by the way. You mentioned that OMB wouldn't let you use questions that hadn't been in the 1998 pretest but actually what you came up with is very different. Many of those hadn't been used before. How did you come up with this particular set?

MR. NELSON: There is a pretesting protocol that the Census Bureau has, and these questions, because of the testing that Terry and her staff did, fit that protocol. It wasn't OMB's protocol. It was really the Census Bureau's pretesting protocol, and this set of questions fit it.

DR. EDINGER: You mentioned that the last question is just for people 16 years or older, about looking for a job. Could you have just added, for the younger population, something about attending school, which would get at sort of the equivalent problem? It wouldn't work for every disability, but certainly if you couldn't attend school that would indicate there is a problem.

DR. STARFIELD: We cannot do that because of the tremendous amount of home schooling.

MR. NELSON: That is true. I don't know if that occurred to us at the time or not. I don't know if that was brought up or not. It has been a while, but things happened so quickly. These questions did go through major -- Terry can probably tell you.

DR. IEZZONI: Okay, we will take you off the hook for a second.

MR. NELSON: And I will come back afterwards and talk something about the results.

DR. IEZZONI: Okay. You might need to relinquish your mike.

MS. DEMAIO: As a result of the institution of this working group on disability, the OMB working group on disability that Chuck mentioned, which was charged with coming up with a set of questions that would be agreeable both to the Census Bureau and to all the various federal agencies, the group worked, as Chuck said, very quickly. I think it was 6 weeks or 8 weeks, and we had to come up with an alternative set of questions and have it be ready to go into the dress rehearsal.

So, my role was as a survey methodologist. The disability experts from the various different agencies came up with their set of questions and during the meetings there was a lot of interaction about why this version of this question would work or wouldn't work and then they kind of went back to the table several times and the upshot of it was that they came up with a set of questions that they felt met their needs and could be tested.

Meanwhile the Census Bureau made some changes to the version of the question that Chuck put up as a transparency, and so this is what it looked like. The differences had to do with our original question, which asked about walking one-quarter mile and performing mental tasks. That is up there. Basically what we did was add the ADL question back in, and this is the Census Bureau version that went into cognitive testing.

As Chuck said, it was more detailed than the 1990 question. It asked for six pieces of information rather than four. That may not seem like a lot, but this is valuable real estate. Also, instead of asking all yes and no questions, it tried to get a measure of severity. I don't think you can make it so all parts of the transparency can be read at the same time. So, instead of asking all yes and no questions we tried to get a measure of severity, to differentiate between no difficulty, some difficulty and great difficulty or unable. Then the second version, as I mentioned, was developed by the disability experts and used a different approach.

There they came up with three subquestions that focused on different types of conditions. This is Chart 2 in your handout. Six A has to do with mental disability. Then there is a question about sensory disability and a question about physical disability and then three additional questions focused on limitations of activity, ability to work, activities of daily living and instrumental activities of daily living, and then another question was added at the bottom, Question 8, and this was in response to the Americans with Disabilities Act. It was an attempt to identify persons with a need for work accommodation.

So, we had these two different versions of the disability questions, and we wanted to do some cognitive testing. We only had 6 weeks. We had to work very quickly. Cognitive testing, for those of you who aren't familiar with it, involves small-scale, one-on-one interviews between a research subject and an interviewer to see how people interpret the questions, how they use the response categories and, in a self-administered form like the census, how they navigate through the questionnaire. The technique is designed to give quick, qualitative information about problems people have with the terminology and concepts, with the reference period and with the respondent's ability to provide information, but the respondents don't constitute a sample in the strict sense of the word, and so the results are not generalizable to any larger population.

So, staff from the Census Bureau conducted 20 cognitive interviews late in the summer of 1997. We interviewed 10 people with each form. We wanted to test the questions with a wide variety of people. So, we included people who had disabilities, people who could report about disabled household members and people who didn't have any disabilities.

Some of our disabled subjects were children. Some of them were adults and we had respondents or household members of respondents with a wide variety of disabilities ranging from blindness to diabetes to multiple sclerosis to severe hearing impairment, attention deficit disorder, mental retardation. We tried to cover as much of the territory as we could with 20 interviews.

That was the process and next I want to go through what we found out about each version starting with the Census Bureau version.

Then I will talk about the recommendations we made based on the research. One of the major problems with the Census Bureau version of the disability questions was caused by the response scale on Question 6. The stem is not in the form of a question but is a statement: mark the category that best describes the person's usual ability to perform the following activities. The respondents found this confusing, since the response scales didn't exactly match up with the question. Some respondents understood the question to be asking if they could do an activity. They were thinking in terms of yes and no. They were expecting yes/no response categories, and since those weren't there they kind of had to translate their yes/no into something that matched the scale, and that proved difficult for respondents. There were many examples of answers being changed because they couldn't easily map what they were thinking into these categories. Also, when respondents were asked to interpret the meaning of these response categories, their responses suggested that they couldn't easily differentiate between them. There didn't seem to be any consistency across respondents in the meaning of the categories. So, a level of disability that might be some difficulty for some people might be no difficulty for others, or something that was great difficulty for one person somebody else might think was some difficulty.

So, there wasn't any standardization across the categories, and it seemed easier for respondents just to be able to report whether they had difficulty or didn't have difficulty in performing the activity.

The formatting of the response options in Question 7 also proved problematic. The yes boxes in Question 6, if you go down, are in a perfect line with the some difficulty boxes in Question 7. The some difficulty boxes in Question 6 are in the column with the yes categories in Question 7. So, some people were answering the yes question, and as they were talking out loud they were saying, "Some difficulty." People weren't differentiating between those two levels of response categories.

Another problem is that the dimensions of the response categories are reversed in Question 6 and Question 7. In Question 6 persons who have no difficulty report in the leftmost category. In Question 7, if you have no difficulty you report in the rightmost column, and this created problems when respondents learned the pattern of responses in Question 6. They got used to that, and then they went down to Question 7 and wanted the pattern to be the same. So, there were cases of misreporting by people who were reporting the opposite of what they actually meant.

Those were the problems we found with the Census Bureau's version of the questions. Then in terms of the alternative version of the disability questions, because these questions were new, we paid particular attention to how well respondents understood the intent of the questions. For example, in the lead-in to Question 6 we wanted to see whether respondents understood that the scope of the question included physical, mental and emotional disabilities, and so we asked people to paraphrase, to tell us in their own words what the question meant.

DR. STARFIELD: Could I just ask you, why do you need that? I mean why does it matter whether it is physical, mental or an emotional problem. You just want to know if people are disabled. Why do you have to attribute it to a cause?

MS. DEMAIO: I am only involved in testing the questions.

DR. STARFIELD: I know, but what is the rationale?

DR. IEZZONI: Probably the capacity versus performance issue, Barbara, that some people might not do these things because they don't want to. They choose not to, and so even though they might have the capacity to do it they don't do it because --

DR. STARFIELD: It still seems to me from the point of view of disability it is the effect you want not the cause.

DR. IEZZONI: This is a huge debate in the measurement --

DR. STARFIELD: But from the point of view of technique, of surveying you have basically very much complicated the question by asking these things here. You ask them to attribute cause, and you give them three choices and then they have to think whether it is 6 months or more which boggles my mind. This whole field is so complicated, and we have to start all over again.

DR. IEZZONI: The stem that is used here is virtually identical to the stem that is used in the NHIS.

DR. STARFIELD: I know. That doesn't make that right though.

DR. IEZZONI: Their ADL question which is kind of interesting.

DR. STARFIELD: That is exactly the point. It doesn't make it right just because --

MS. MARTIN: One reason is they want to eliminate temporary illnesses, so that if you have laryngitis and cannot talk you don't want people to say, "Yes," or if you have a broken ankle and cannot walk, you don't want them to say, "Yes."

DR. STARFIELD: Then you say, "Other than some acute problem." You don't put it in the stem. It is the stem that is the problem.

MS. MARTIN: It may or may not be a good design solution, but I think that is one reason it is done, and there may be others, but I am not a disability expert.

MS. DEMAIO: I am not a disability expert either.

DR. NEWACHECK: There is some recent research that would indicate that if you ask those questions first, that is, the consequences, are you limited in these things, and then you ask is it due to a physical, mental or emotional condition, and then you ask if this condition lasted 6 months or longer, you get better data, but you cannot do that.

MS. DEMAIO: That is probably true, but the real estate facet of the census makes it so that we cannot do that, and we aren't finding out whether it is physical or mental or emotional. We are just trying to make it part of the question that it is supposed to include all those things. I know that is probably not a satisfactory answer.

MR. NELSON: I think we were building on surveys because you couldn't really start from scratch with 2 months to develop questions. So, we were building on questions we already knew worked in surveys.

DR. STARFIELD: They don't work. That is the problem, and this committee has broader jurisdiction and --

MS. DEMAIO: This was the wording that the disability experts from the federal agencies came up with.

DR. IEZZONI: It is efficient. It may not be perfect. I think that one of the things, did you find in your cognitive interviewing that people reacted to the notion of a mental or emotional limitation?

MS. DEMAIO: Reacted negatively?

DR. IEZZONI: Yes.

MS. DEMAIO: No, we really didn't. We did find that people included both of those, all three of those kinds of things in what they thought the question was asking about.

DR. IEZZONI: But they didn't feel it was a stigmatized category, a mental problem causing it?

MS. DEMAIO: We didn't have anybody that said anything like that, and they didn't have to identify that it was a mental problem.

DR. IEZZONI: No, I know. It is just that I have watched older people answer questions like this, and especially given kind of historical stigmatizations that older people may feel more than younger folks sometimes when you put mental in a string that includes physical even though they might have a physical problem they don't want to admit it because they don't want anybody to think that they might have a mental condition, too. It is just really interesting to see that with some of the older people, but you only did 20 interviews total. You only did 10 of this version.

MS. DEMAIO: There were more than 10 people accounted for, however. I forget the number but I think it was more like probably twenty-something because we were asking this about everyone in the household. We got self-reports but we also got reports about any other household members. I don't remember exactly how many people it asked about but it was more than 10, but again I mean a lot of this was dictated by the time frame that we had.

DR. IEZZONI: Right and by OMB. I mean weren't you limited in how many people you could interview by OMB?

MS. DEMAIO: No. That didn't play a role in it.

MR. HANDLER: I have a point of clarification. This is a question that was included only on the long form questionnaire, which went to one out of six people, one out of seven households or one out of six, and it was asked along with about 50 other questions of the general population. Now, if a survey was conducted only of disabled people and you had 20 questions to ask on that questionnaire, you would get a lot more information, but you were limited in what you could do with a general purpose questionnaire going across the country to everyone. That was the problem you had.

MS. DEMAIO: And that was, I don't know if you want me to go on or you want to answer more questions. That was clearly the problem with Question No. 8. That question had been used in one of the NCHS surveys, I think the national survey of disability or something like that.

DR. IEZZONI: The survey disability supplement. That was 1994 and 1995.

MS. DEMAIO: Okay, and this question had been used and the disability experts that we were dealing with said that this question works because it has been used before, and it was a matter of context. Here it was being asked of everyone in the nation, and people in a wide variety of circumstances and in the previous survey the universe had been screened down to the point where it was only being asked I think of people who --

DR. IEZZONI: Yes, I am looking at this question. This was a Phase II question from the NHISD and so, yes, it would have been screened down to people who in the Phase I study had been shown to have, quote a disability.

MS. DEMAIO: So, it was supposed to ask about work arrangements, but it was not always interpreted that way. Some people thought it was asking about accommodations for school or for getting around the house.

Another problem was that because it was asked of everyone there were problems with whether and how it should be answered. Retired people without disabilities weren't sure whether they should answer this question or not. They didn't work. They hadn't been working in some time. Should they answer the question? One person of retirement age with disabilities reported that he needed special accommodations because he used a cane and a wheelchair but even if he had those accommodations he wouldn't have used them because he wouldn't have worked because he was of retirement age.

Then another factor was whether the disabled person was working or not. We interviewed a younger man who had multiple sclerosis. He mentioned several physical disabilities. He said, "No," to this question. He didn't need accommodations because he wasn't working, and apparently there are other reasons why people with disabilities don't work; partly, they don't work when they are trying to qualify for disability, and so there were other reasons, besides whether he needed accommodations and couldn't get them, that had something to do with why he wasn't working.

So, he said that he might be able to hold a job but he didn't think this question applied to him because he wasn't working at the time, and if people to whom the question is meant to be addressed don't understand the question properly then that is a big problem for a survey that is meant to go out to everybody. For good question design principles a yes response should mean that a person needs special accommodations for work and a no response should mean that a person doesn't need special accommodations for work and this question didn't work in the census application although that doesn't mean that it wouldn't work in other settings.

The other problem was Part 6B. We had a problem both with the talking and with the seeing with glasses. In terms of the seeing, people who had glasses had problems because they were confused. They weren't sure whether it meant that they had a difficulty and needed glasses or that they had difficulty even with their glasses, and so that caused problems. Then in terms of the talking there were several instances where the respondents clearly had difficulty talking. Their difficulties were apparent to the interviewers, but the people said, "No," they didn't have any problems. Then there was another case where a parent said that her child didn't have any trouble, and later on in the debriefing afterwards she reported that her son went to a speech therapist because his speech was slurred. She said that she thought the initial question was just asking could he talk, and he could talk. So, she answered that he didn't have any difficulty even though he clearly had some difficulty.

So, those were the problems that we saw with this version of the question.

Do you want me to stop now and take questions?

DR. IEZZONI: Sure.

MS. DEMAIO: Or do you want me to talk about the recommendations?

DR. IEZZONI: Yes, why don't we go on.

MS. DEMAIO: Okay. So, in terms of the recommendations, since we basically found that with the Census Bureau version the problems mainly had to do with format and response categories, which caused misreporting, and with the alternative questions the content was found to be problematic in cases, we tried to combine aspects of both versions of the questions in our recommendation.

We adopted the yes/no response categories from the alternative version. When we could, we incorporated the wording from the alternative version into the structure of the Census Bureau version, and then we eliminated the problem questions that I discussed, because it didn't seem like those questions would produce high-quality data.

There was one serious flaw in our recommendations. When we deleted the work accommodation question we neglected to replace it with another question on work disability and so in the final meeting of this group we added Question D. We added a work disability question, and we came to a consensus on the wording and the order of the other items, and this is the consensus. They were used in the 1998 dress rehearsal and in Census 2000 and Chuck can talk some about the results, what was found when it was administered in the dress rehearsal and in the census, but if you want to ask me more questions I can do that now.

DR. IEZZONI: Are there any more questions about the cognitive interview or the survey design?

No? Okay.

MR. NELSON: This will be brief. The dress rehearsal isn't really an analytic tool. It is really an operational tool, a dress rehearsal for the census, more operational than analytic, but we did edit these items on the 1998 dress rehearsal and we looked at the results for California and South Carolina, two places in California and a place in South Carolina, and Menominee(?), an Indian reservation.

So, we have noticed two things, and both of them have to do with the fact that you have these sort of embedded skips here which is something you do to save space. In a perfect world you would have a skip item, you know, skip out the people under 16 and then repeat the question and have a yes/no. I mean that is what you would do if you had space. In the census we didn't have space, and we wanted to ask all these questions and were willing to accept the trade-off.

So what we found is that the non-response rate, even for people 16 plus, for these two items is higher than the non-response rate for the other items, not tremendously higher, you know, a percentage point or something like that, but in the census that is a lot of people.
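(As a rough sense of the scale involved, the sketch below uses assumed figures, a total household count and the one-in-six long form fraction, that are not from the testimony; it only illustrates why a one percentage point gap in item non-response is "a lot of people" at census scale.)

    # Illustrative only: the totals below are assumptions, not official Census Bureau figures.
    total_households = 105_000_000      # rough U.S. household count circa 2000 (assumption)
    long_form_fraction = 1 / 6          # long form went to roughly one in six households
    nonresponse_gap = 0.01              # "a percentage point or something like that"

    long_form_households = total_households * long_form_fraction
    affected = long_form_households * nonresponse_gap
    print(f"Roughly {affected:,.0f} long-form households")    # on the order of 175,000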

The other thing we have noticed, and this is all kind of speculative, is based on questions in CIP(?) that are comparable to these questions. We have sort of compared them item by item to the CIP results, which are national, and you know there are differences between these sites, and we didn't control for the differences in population between the dress rehearsal sites and the national population, but just to get a broad idea of how these items looked individually, there is some concern about Items C and D, particularly C, which has a pretty low prevalence rate. We see what we thought were higher prevalence rates than we expected for C, not so much for D but for C, and we think that is fairly rare, and that there might be some people who go through and maybe lose the thread of the question and answer yes, this person is really 16 years old or older, or yes, this person goes outside. You know, there might be a small percentage of people that are kind of losing the thread of the question and answering the wrong question.

So, there was concern about this, and we don't know if it is going to be a problem. We really don't know from the census yet. We haven't looked thus far in the census. All we have done is look at the data to make sure the machines are reading the questionnaires correctly, but certainly from the dress rehearsal we have seen enough that it is something to watch. So, I would advise people, if they are going to use these questions and you have the space, to separate C and D from A and B, and it probably would not be a problem if this was a survey that was given by a trained interviewer, obviously. I mean mail-out questionnaires are always messy, and navigating the questionnaire is always a difficult thing. In fact, I shouldn't make a big deal about the non-response rate, because with any skip item there is always a percentage of people that miss a skip or answer a question they shouldn't have answered or skip wrongly.

In the census the way this thing works is that on the first line it says, "Answer if this person is 16 years old," and then "or over" is on the next line. So, there might be some people who see that first line and then skip, also, but anyway the other items really have compared pretty well to CIP. From other information we have from surveys they look pretty reasonable, and so I would say that overall our feeling is that it is performing okay, and that these problems we are seeing are probably a trade-off between wanting as much information as possible and a limited amount of space. That is kind of a trade-off you make when you have a mailed-out questionnaire where space is very limited and you want to get a lot of content and so on.

I think what we have learned from this experience is that we made a conscious decision in the early nineties not to form groups on every set of content items, and it was a conscious decision by the bureau, but I would say that for disability, you know, we should have made an exception, and we should have had a closer relationship with the decennial stakeholders coming into our content testing, because we knew that this set of questions was going to undergo more changes than any other set of questions on the census questionnaire. What happens in the census environment is that there is a fear that if we have to do it for one set of questions, we will have to do it for another set, and there are always lots more formal things. Things are much different on the census than on surveys, where it is a lot simpler. You can just form a group, and those are things we do all the time on CIP and CBS and other surveys we work on. The census is just more of a formal process, and so in this case it led to this great work that Terry did in coming up with something that really merged two pretty different sets of questions in a matter of about 6 weeks.

DR. IEZZONI: Shall we hear from Ms. Martin then?

MR. NELSON: Yes.

DR. STARFIELD: Can I just make a comment first, and this is more a comment to my colleagues than to you. I mean I just really think that our whole approach to disability is 50 years outdated. It probably was good 50 years ago. It is not now. This is so informed by a biomedical conceptualization of disability that it hampers our ability to talk about dysfunction in a population.

I mean I would venture to guess, and it is just a guess, that more people are afraid to go out alone because of safety than because of any biomedical reason, but that is dysfunction. Why shouldn't we know about that? I think if we are really going to do a good job of this whole issue we just have to change our concept.

DR. IEZZONI: That is an NHIS question, too. Given Ed Sondik's plea this morning maybe that is something that we should be focusing on.

You weren't at the April meeting, unfortunately, Barbara, but we had a presentation about the redesign of the functional status measures in the NHIS and they look awfully like this, not in detail, obviously. The NHIS allows people to ask a lot more questions, but the spirit of it and the medicalization of it is certainly part of it.

Okay, Ms. Martin?

MS. MARTIN: I was going to tell you something about some quite different data. During the census process we have had a partnership with a company called Intersurvey that, sponsored by several outside funding organizations, has been monitoring people's experience with the census. Especially later in the census process, when all the controversy about the long form arose, it covered the public's reaction to the long form and their evaluation of long form questions, and I thought you might be interested in some of that because we did actually ask them about their reactions to the disability questions.

I could tell you just a word or two about the survey. This is a series of weekly surveys. They are RDD surveys conducted by a private company. Intersurvey is the name of it, and it has covered a whole broad spectrum of questions about how the public sees the census, you know, their exposure to advertising, their awareness, privacy and confidentiality concerns and on and on.

The particular survey I was going to show you some results from was done in mid-May. These surveys are self-administered. They are conducted through web TV. So they sign up a panel of respondents that they recruit and then they administer the survey over web TV. It is about 2,000 people. It is not something that you would rely on, that the Census Bureau would rely on, to estimate factual items; it is really a monitoring survey.

DR. IEZZONI: Is web TV something that everybody in the country has access to? Do poor people have it? Let us be blunt about this.

MS. MARTIN: What they do is if you have a telephone they recruit people into their sample and they put the web TV in your house and they provide an Internet account and it is not something that requires great technological knowledge or expertise to use. It is not like an Internet survey where you have to be able to negotiate the Internet.

DR. IEZZONI: It is cheaper than just doing a phone survey?

MS. MARTIN: It is pretty cheap. It is actually very interesting. We were using it for other purposes as well. You can play ads to them. You can show them the form and ask them if they have seen it. It has other capabilities, but what I was going to talk about was a little bit about the public's reaction to the long form questions, what kinds of information might improve their acceptance and cooperation with them and I was going to start with just a little bit of sort of comparative information between long form and short form recipients from the survey.

One thing is that we know that short form recipients were much less likely to report that they left some items blank, for either of two reasons: one, because they didn't want to answer the items or, two, because they didn't know how to answer the items. In particular, if you notice, 3 percent of short form respondents said that they left some blank because they didn't want to answer them. Seventeen percent of long form recipients said that they left some blank because they didn't want to answer them.

So, there is indication that the long form generates more reactions and less cooperation than the short form.

DR. STARFIELD: This is actually true?

MS. MARTIN: We haven't got the data yet, but there is concern about item non-response on the census especially after commentary. You know there was a fair amount of commentary suggesting that people didn't have to answer them. People who got the long form are much more likely to say that the census is an invasion of privacy.

So, the question is which are the questions that are bothering them and what is underlying this kind of reaction to these long form questions.

So, we asked them which questions if any did you feel reluctant or find difficult to answer, and we only asked this of long form recipients because that is really who we are interested in, and not surprisingly income is far and away the most sensitive item. Over one-half of long form recipients said that they didn't like it. Disability is second although it is a pretty distant second. Close to 20 percent say that they felt reluctant or found it difficult to answer, and actually there is a whole set of them that are about the same, disability, employment, race and housing, and then 10 percent don't really like reporting the names of the people who are living there, but almost 40 percent said, "None." There were none of the questions on the long form that they minded. So, that is important to keep in mind that this is a minority, well, it is not a minority; it is a majority, but it is not everybody.

So, we asked them, "Why were you reluctant or unable to answer the questions on mental and physical disabilities?" These are our categories that we gave them, and they were allowed to check all that applied, and we asked about several of the different questions that they expressed reluctance about and the main, the most popular reason is that it is none of the government's business, and the second most popular reason is there is no purpose for the question.

So, they don't see a reason for asking it, and there is, also, some concern about confidentiality for all these questions even ones that you wouldn't think would have a lot of confidentiality concerns.

MR. HITCHCOCK: Are those questions open ended or was there like an "other" category?

MS. MARTIN: Yes, there was an other category but this was most of what -- it was not open ended. We presented various reasons why they might object. Actually only 7 percent said that it was difficult to provide the information.

We asked people what information would have helped you understand the census better and it is interesting to see this separately for long form and short form respondents.

For short form respondents the uses of the data are a more prominent thing they would like to know more about than for long form respondents, but the reasons for particular questions being asked loom very large as something that the long form respondents want to know more about: three-quarters of them, compared to 54 percent of the short form respondents.

MR. HANDLER: When I commented for my agency, the Indian Health Service, to the Census Bureau on what suggestions should be made, I said, "Accompanying the long form questionnaire should be a description of the uses made of every item on the long form." You could ask Wallace Snyder about that. There you go. Maybe next time.

MS. MARTIN: You know, our advertising campaign was a general public advertising campaign, and it didn't talk about reasons for particular questions, and in fact for the short form, which 5/6ths of the people got, it wouldn't have made as much sense to tell them the reasons why all this was being asked, but it almost suggests you need a different kind of promotional or educational effort for long form type questions. A fair minority of people say they want to know more about procedures for protecting individual data, although that is not prominent as the main thing they want to know about.

So, then this raises the question: would providing information on specific uses and purposes of the individual questions improve cooperation and perhaps improve their responses to the questions? So, we tried some little experiments, not real experiments, but we provided reasons for the questions and asked people how they would feel about answering, and we made the reasons up. So, you may not agree with what we gave as reasons, but we did it for three of the questions, the three least popular. For disability it was: if you knew that census information on disabilities is used to make sure communities can provide adequate services for people with disabilities, such as the elderly, how would you feel about answering the questions? This one actually seemed fairly persuasive for the people who had objected to the disability questions. Forty-one percent said that they would feel more like answering the questions. About half of them said that it wouldn't make any difference to them, and 2 percent said that they felt less like answering the questions.

MR. HANDLER: You should have asked if you were disabled.

MS. MARTIN: That is right, and we did the same thing for two others. We did it about income, where we asked them: if you knew that census information on income is used to allocate funds for school districts and help compute the consumer price index, and I don't know how much that means to people, how would you feel about answering the question? About one-third said that they would feel more like answering the question, for about 60 percent it wouldn't make any difference, and 6 percent would feel less like answering the questions. So, both of these things suggest that actually if you did tell them more about what you did with the data, it might help the situation.

Our last example, though: we said, if you knew that the Voting Rights Act required census information on race to be used to make sure all people have equal representation in Congress, state legislatures and local government -- this one kind of was a bust, because it made almost as many people feel less like answering the question as it made feel more like answering it.

MR. HANDLER: They used to divide that as Republican and Democrat.

MS. MARTIN: Or maybe we need to look at it by race, but anyway it really suggests in terms of public acceptance of these questions a lot of people are reporting that they skipped questions they didn't want to answer. Three-quarters of them said that they want to know the reasons behind particular questions, and at least in this sort of hypothetical instance providing reasons seemed to make questions more acceptable but not always. It really depends on the reasons for the question, how it is being used. It won't necessarily make it more acceptable to the public to provide a reason.

DR. IEZZONI: That is fascinating.

MR. HITCHCOCK: Sort of off the subject of disability I find the methodology really quite fascinating. I was wondering did you mention the response rate?

MS. MARTIN: To the special survey? It is about 50 percent, pretty low.

DR. IEZZONI: The web TV response rate was only 50 percent?

MS. MARTIN: This is, they have had previous surveys. So, we did a baseline survey where we collected all kinds of information not related to the census before we asked questions about the census, and then we went back and did interviews about the census, and then this is a re-interview. So, at each point you lose people, and in the initial recruitment they don't get everybody. So, it is not comparable to the Census Bureau.
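(A minimal sketch of why a recruited panel with a baseline survey, a census interview and a re-interview can end up near 50 percent overall; the stage-level rates below are hypothetical and only show how attrition compounds across stages.)

    # Hypothetical stage-level retention rates for a recruited web TV panel.
    stages = {
        "initial recruitment": 0.80,
        "baseline survey": 0.85,
        "census interview": 0.85,
        "re-interview": 0.85,
    }

    overall = 1.0
    for stage, rate in stages.items():
        overall *= rate
        print(f"after {stage}: {overall:.2f}")
    # The cumulative rate lands near 0.49, roughly the 50 percent mentioned above.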

DR. IEZZONI: And was it in different languages?

MS. MARTIN: No.

DR. IEZZONI: Just in English?

MS. MARTIN: Only English.

MR. HANDLER: What is the response rate for the 2000 census, the long form and the short form; is it still coming in?

MS. MARTIN: They don't know yet.

PARTICIPANT: I would like to know what the mail back rates are.

MS. MARTIN: Yes, I don't know it off the top of my head, do you?

MR. NELSON: I know there was like a 12 percentage point difference between the long and short form response rates, but we don't know what the individual item non-response rates are yet. All we know now is I guess the mail-back rates from the initial mail-out, mail-back phase. The non-response follow-up, I guess, is almost all done; it is pretty close to being done, and we were certainly disappointed by the fact that there was such a difference between the long and short form mail-back rates. I guess it was over twice the difference from the 1990 census. So that was a little discouraging for sample data, you know, for people who want income and disability data.

DR. IEZZONI: Paul, do you have a question?

DR. NEWACHECK: Yes, I had a question about the 2010 census. Have you guys thought about the process you are going to use to design or redesign these questions for 2010 or is there some process in place for that already? I know you haven't even got the 2000 data yet.

MR. NELSON: We can talk a little bit. There is the American Community Survey; you people are aware of that. The plans are not to take a long form in 2010 but in fact to start sort of a rolling census, a rolling sample, a very large sample, in 2003, and I guess the plans are you would interview around 3 million households a year, so that over the course of a 5-year period you would get data that would be comparable in size to what you get from the census long form for those real small areas that you get census data for.

You would have a 5-year average. So for individual tract data you would have to put multi-year averages together, but for states and cities you would have data every year.
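(A back-of-the-envelope comparison of the rolling ACS sample to a decennial long form sample; the household totals below are assumptions for illustration, not figures given by the speakers.)

    # Illustrative comparison of cumulative ACS sample size to a one-in-six long form.
    acs_households_per_year = 3_000_000      # "around 3 million households a year"
    years_accumulated = 5                    # 5-year rolling average
    acs_sample = acs_households_per_year * years_accumulated      # 15 million

    total_households = 105_000_000           # assumed U.S. household count (not from the testimony)
    long_form_sample = total_households / 6  # roughly one in six households

    print(f"ACS 5-year sample: {acs_sample:,}")
    print(f"Long form sample:  {long_form_sample:,.0f}")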

You would have small area data, and right now the content of the American Community Survey, or the ACS, is similar to the content of the 2000 census.

DR. NEWACHECK: So, that is already pretty much designed at this time?

MR. NELSON: No, it is not until 2003 that we go out in production and there is a process that is starting right now with OMB. There is an OMB process of content development that is geared toward the 2003 ACS and I assume there will be a disability statistics subcommittee of this OMB committee.

DR. NEWACHECK: I think on that topic this committee would be delighted to offer any advice it might have. There are a number of us who work in the disability area, and part of our responsibility is to advise NCHS about these kinds of issues as well.

MR. NELSON: I think the process is supposed to start this summer and there will be numerous content groups and they have already talked about getting agency involvement and so I can certainly be in touch. Who is a good person to be in touch with about this?

DR. IEZZONI: Marjorie, I think it is called the American Community?

MR. NELSON: American Community Survey.

DR. IEZZONI: The American Community Survey is something that maybe the full Committee should at least be aware of.

MR. HANDLER: There is a CD-ROM disk you could get. I got my copy that describes what is on that. You get it from the Census Bureau.

DR. IEZZONI: It hasn't been finally designed yet?

MR. NELSON: There has been testing going on since 1996, and there is site data available for several years now for a number of sites, New York and Florida and so on.

MR. HITCHCOCK: Do you see this as a survey that would have sort of a constant core over time with supplements added to it, or how would the content --

MR. NELSON: Yes, that is the idea. There would be a constant core, and probably at the beginning I suspect the core would have a similar kind of justification requirement as the census.

I mean they would probably keep that core pretty tight, but there would be this ability for supplements, the ability to add supplements. They have talked to individual agencies, and some of your individual agencies may have gotten this presentation but I would suggest you -- they do this all the time.

Right now it is the long form content with a couple of questions about non-cash benefits added, food stamps, things that didn't make it onto the census because there wasn't a justification but we were able to get it on ACS, food stamps, public housing, school lunches, but aside from that it is very similar.

DR. FRIEDMAN: What is going to be the relationship between that and the CPS, the large supplement in the CPS, I guess?

MR. NELSON: They will both go on.

MR. HANDLER: CPS doesn't go to the state and community level but this would. That is the difference.

MR. NELSON: It is much larger. It is tremendously larger than any survey, any current survey the Census Bureau does.

DR. FRIEDMAN: And this could replace the 2010 census?

MR. NELSON: This would replace the long form.

DR. IEZZONI: Has Congress bought off on this?

MR. NELSON: No, it would take $100 million a year or something.

DR. IEZZONI: I was going to ask if you had the budget appropriation for it.

MR. NELSON: No, I mean we are trying every day, as far as I know, to convince them, and obviously they are hoping that Congress sees the wisdom of separating this need for all this information from the need to count the population, and they are really trying hard to push it, but no, there is no guarantee that this will happen.

DR. IEZZONI: This has been really interesting. Are there other questions for our guests?

DR. NEWACHECK: I have one more question. It is off the topic of disability per se, but is the Census Bureau thinking about alternative ways of doing these surveys as we move into sort of the Internet age, and we have the experiment with the web TV thing, or is this all going to be pretty much by mail or in-person interviewing as it has traditionally been done, as you see it looking toward the future?

MS. MARTIN: In the census it was possible to respond over the Internet if you had a short form. It wasn't widely advertised because there was a lot of concern about security and hackers and fraud.

DR. NEWACHECK: It said that on the form?

MS. MARTIN: No.

MR. NELSON: If you went to the bureau web site you could somehow find it.

MS. MARTIN: And there were a couple of news conferences where it was publicized, and some people did.

DR. NEWACHECK: But right now you are not planning to put web TV in everybody's home?

MS. MARTIN: No, but I think it is a good guess that it would be more prominent in 2010 than it was this time.

DR. IEZZONI: Very interesting. Again, thank you very, very much for coming and sharing that information with us. It was very revealing. Thank you.

All right. I have been looking at the body language around the table. I think we should adjourn. Do people feel okay with that?

(There was a chorus of agreement.)

DR. IEZZONI: Okay.

(Thereupon at 2:50 p.m., the meeting was adjourned.)