Division of Science Resources Studies
Professional Societies Workshop Series

Workshop on Graduate Student Attrition


Uses and Limitations of Different Kinds of Databases

Alan Tupek, Deputy Division Director, Division of Science Resources Studies, NSF, Moderator

This is the first of three panel discussions on graduate student attrition. This panel will address the studies that have been done and the data that are available on graduate student attrition issues – the place we should be starting. We are going to reverse the order of presentation because of equipment needs, and we are going to start with Vincent Tinto, who is professor of education at Syracuse University. Professor Tinto has done research and written extensively on higher education, particularly on student retention – I like retention better than attrition myself – and the impact of learning communities on student growth and attainment.

Contextualizing the Study of Graduate Persistence

Vincent Tinto, Professor of Education, Syracuse University

Good morning. I would like to share with you the research that Beatriz Chu Clewell and I are carrying out on graduate student attrition under the auspices of a National Science Foundation grant. I want to share not the results of our research, which is still in process, but what we have learned about the sorts of research designs one needs to effectively study graduate student attrition.

Since a number of issues that I will raise this morning have already been discussed by speakers before me, I will go rather quickly through much of my presentation, reserving time for a few issues about which we feel most strongly. Let me note that, in addition to the slides you will see, there are copies of a background paper that Beatriz Chu Clewell and I developed that provides a broader context for our conversation today. It synthesizes past research on graduate student attrition and describes the sort of research design that would be appropriate to studying graduate student persistence. You can obtain additional copies of this paper by calling either Toni Clewell or me or by contacting us via e-mail. My e-mail address is vtinto@mailbox.syr.edu.

To begin, let me observe that the issue with which Toni and I are concerned is not simply whether some individuals leave or finish their doctoral degrees, but whether there are substantial differences among the experiences of groups of individuals, programs, fields, and/or institutions. We are particularly concerned with those differences as they relate to issues of equity and as they reflect program functioning. In our view, the elements of an effective study design should not only reveal details of the experiences of differing individuals in different programs, but also shed light on those differences in ways that can become objects of policy. The argument I want to make this morning is that our research on graduate student attrition must shed light on the contextual elements of the doctoral student experience. The experience of doctoral education is a very specific one, reflecting the experience of individuals in specific places with specific groups of individuals: faculty, administrators, and fellow students. Our research designs must reveal the full longitudinal sequence of that experience. They must be long-term and must follow people through the experience as it reflects the beginning, the middle, and the endpoint of the doctoral completion process.

The doctoral completion process has a number of characteristics that impact study design. First, doctoral persistence is a long-term process, which in some fields of study requires up to 10 years to complete. To understand fully that process, we must therefore commit ourselves both in effort and resources to following students over that period. Second, doctoral persistence occurs in discrete stages, each with its own characteristic set of hurdles to be overcome.

We know of at least four stages of doctoral persistence. The first is the pre-entry stage, which over time shapes where and in what form you enter doctoral study. The second is that of the beginning of doctoral study, namely the first year of study and the important and sometimes difficult transitions and adjustments it requires. Many students decide to leave during this early stage of doctoral study. The third stage of doctoral persistence covers that period of study leading to the completion of the doctoral comprehensive exams and to the point where one can formally pursue research for one's doctoral degree. The fourth and last stage of the process of doctoral persistence covers the completion of the research proposal, the carrying out of the dissertation research, and the completion of a doctoral dissertation including the doctoral defense. It has been well-documented that many students, having completed their doctoral comprehensive exams, fail to complete their doctoral dissertation research (i.e., ABD-all but dissertation). Because these stages appear to be qualitatively different, our research designs must enable us to understand each stage as it arises. We cannot design our studies as if the process is uniform in character over time.

Finally, we know that doctoral persistence is field- and institution-variable. The experience of doctoral study depends not only on where one studies but also in what field one pursues the doctoral degree. There is no general model of persistence that captures this variability, other than saying that each field has its own norms, its own structures, and its own characteristic patterns of interactions that shape the way students come to experience doctoral study. And, of course, the process of doctoral persistence necessarily reflects the experience of individuals within very specific departments or programs, often with one or two faculty members.

My point is really quite simple. Unlike the experience of undergraduate education and the process of undergraduate persistence – one that has been widely studied – the experience of graduate education, and by extension the process of persistence which marks it, is highly contextualized by very specific situations often with one or two people. To be effective, our research designs must enable us to capture that complexity.

What, then, does this imply about how one would go about studying the process of doctoral persistence? First, we need to commit ourselves to long-term panel studies, not cohort studies. If we want to capture the full, longitudinal sequence of events that shape doctoral persistence over time, then our studies must be true panel studies that follow cohorts of beginning students over time from their entry to their completion or departure. In the past, because of either resource limitations or our impatience, we invested in cross-sectional, even sequential cross-sectional studies, as Toni and I have done for the National Science Foundation. Quite honestly, these types of studies simply do not reveal the specific longitudinal sequence of events that shape individual decisions about staying or leaving. At best, they provide us with an aggregated view of the process of doctoral persistence, which is not very useful for the development of policy. If our concern is with the establishment of programs and policies to promote both individual persistence and equity in doctoral education, then our data collection must be contextualized in the sense that it captures the unique experiences of individuals in unique fields, programs, and departments.

Methodologically, it follows that we need to collect both qualitative and quantitative data. We need to follow the example of Willie Pearson Jr., Toni Clewell, and Gail Thomas, and construct studies that are multi-method in character. We need both survey designs that allow us to speak to issues of representativeness and generalizability of findings, and we need qualitative designs that enable us to capture the quality of student experience in the specific contexts within which students find themselves. Each design, whether quantitative or qualitative, must be informed by the other. As we have learned from our studies for the National Science Foundation, our survey design was enriched by insights gained from the qualitative data. By the same token, our qualitative design was made more powerful because its foci were, in part, shaped by what we learned from the results of the survey part of our studies.

Our experience in using multi-method research leads me to argue that there are a number of questions and question formats that can and should be constructed within existing survey research designs which would allow us to tap more effectively the highly contextualized experience of doctoral study. At the same time, it is my view that the current reliance on survey studies of graduate education based on simple or even stratified random samples of programs and institutions is not a viable way to proceed, at least within the current resource constraints. Given the need for contextualizing our data collection, we must invest in the strategic selection of fewer fields and institutions. This will allow us not only to capture, in a comparative sense, the variation of student experience in different fields and programs of study, but also to invest the time and energy necessary to understand those experiences within the specific contexts in which they arise. Here I am reminded of the current research involving comparative case study of secondary math and science learning that is now being carried out for the National Science Foundation by Horizon Corporation.

We must invest, as Charlotte Kuh correctly observed, in a carefully designed series of comparative, longitudinal panel studies of institutions within which we select fields and programs, and, in turn, individuals to follow. In picking those institutions, we should focus, if at all possible, on what works. Making our selections simply a statistical choice, as is typically the case in randomized survey designs, ignores the fact that some institutions are innovative and effective, while others are not. Our research designs and our sampling plans must enable us to learn from the past and to frame future policies that are based on data about successful practice.

How might these data be used? Clearly, because I am a researcher I would say that we would have to use those data to understand better the complex contextual process of doctoral persistence. But we should do so not only to enhance our understanding of that process, but also to develop policies that would allow us to address important questions of equity in doctoral education. It is not enough to know why some individuals leave or what explains differences among groups of people, fields, programs, and institutions. We need to use that knowledge to develop policies that promote doctoral education and advance equity.

In some way, we must also address – as we are now addressing at the undergraduate level – issues of institutional and program effectiveness and accountability. Graduate education has not been held to much accountability. But that day is coming. We have to prepare our data collection now so that in the future we will have access to the sort of information we need to develop a performance metric. Take, for instance, the drive to develop metrics of institutional performance and the work of Sandy Astin at UCLA, and others, to develop an institutional performance metric based on the notion of expected versus actual performance. My point is simple, namely that as we develop a national database on graduate persistence, we need to think about what data we could now collect that will allow us to develop those metrics in the future.

There are several other issues, the first being individual privacy and our ability to track individuals over time. We have had one heck of a time simply trying to get institutional permission to track individuals over time. To do so effectively, one typically needs individual Social Security numbers and other information that locating services require to track individuals who might have been out of graduate studies for five, six, or seven years. However, much of that information is held as confidential and often is not obtainable. That issue must somehow be addressed. Second, resources must be committed to long-term studies of doctoral persistence. A meaningful study of graduate attrition requires at least 10 years of effort and at least four different periods of study over those years. Such research requires a commitment of resources, either in combination or in collaboration with other agencies, that is long-term. We have too long been satisfied with piecemeal approaches to the study of doctoral persistence, with the construction of cross-sectional studies and/or very limited longitudinal studies that capture but a portion of the process of doctoral persistence. Cross-sectional studies, however appealing in the short term, simply do not provide the information we need to make sense of a process of doctoral persistence. At some point, we have to move beyond those studies to others that reveal the full extent and complexity of that process and capture its highly contextualized character.

Alan Tupek: Thank you, Professor Tinto. Are there any questions for Professor Tinto, or questions that people wanted to ask before the end of the last session but they did not get a chance to ask?

Janice Madden, University of Pennsylvania: Does the Association for Graduate Schools database meet your criteria?

Vincent Tinto: Since I am not very familiar with the Association for Graduate Schools database, I cannot answer your question. One of the criteria I spoke of is a collection of both quantitative and qualitative data that would enable us to understand how different groups of students make sense of their experiences in different programs and settings.

Participant: The database does not have the qualitative data, but it identifies the programs in the schools, so as a researcher you can easily collect those data yourself and do not have to collect the basic data.

Vincent Tinto: We are talking about at least several stages of research study. First is the identification of institutions and student cohorts that you want to study as part of the institutional panel. Second, after selecting institutions from that database, you invest in a series of case studies that follow samples of students who enter those institutions at a particular point in time.

Janice Madden, University of Pennsylvania: Why wouldn't you use all of the institutions that are in the database?

Vincent Tinto: Do you have the checkbook and could you write out a check for the cost of such a study? One does not have to study all the included institutions, but only a strategic sample of them. I do not know how many institutions you would want to study. In part, it depends on what you would want to achieve as a result of the study. For example, we do know that one of the attributes of a successful program is the construction of a supportive educational community. Some programs promote that community. Others do not. We also know that being a valued member of that community is important to student persistence, that the process of becoming a member of a supportive educational community is part and parcel of the process of doctoral persistence. We also know that faculty mentoring and advising matter – that having a mentor and/or being an apprentice to a faculty member is important to the completion of one's program of study. But knowing that in general does not tell us about which faculty, which fields, and which programs matter. In building a database for the study of doctoral persistence that is policy relevant, one would have to make strategic choices of institutions, programs, and fields of study. Clearly, the database you refer to is one place to begin.

Alan Tupek: Marilyn Baker, who was going to talk about administrative databases, could not make it, but Charlotte Kuh agreed to say a few words on Marilyn's behalf.

Administrative Databases

Charlotte Kuh, Office of Scientific and Engineering Personnel, National Research Council

Let me give you my understanding of what an administrative database is. An administrative database is a database that is not collected by the Federal Government. Frankly, one of the ways in which the Federal Government ties its hands is that because of the Privacy Act, it does not permit the melding together of administrative data and NSF or NCES data, for example. You can tell me if I am wrong on this. I hope I am. Those agencies that would like to collect data on attrition can collect data from institutions, administrative data, or they can collect data from individuals attending the institutions. Collecting data from institutions is cheaper and surer than collecting data from individuals, if the institutions already collect such data.

What I want to do is ask you, the audience, some questions. The first thing I would like to ask is how many of you are involved in the graduate school in an institution? Could you just raise your hands? We have a lot of experts here. What I would like to hear a little bit about is, first, if we wanted to know the attrition rates for your graduate programs over the past 10 years, you could even average it over 5 years, how many would be able to tell us, for entering cohorts, how many people have actually dropped out of the program? That looks to me to be about a quarter of the people who raised their hands earlier, although there is some double counting. How many people would know, for students who both completed and did not complete, the history of funding that these students received? For example, did they get research assistantships, did they have NSF fellowships, did they have teaching assistantships, did they get dissertation grants? How many would know by type of aid, what sort of aid they received? Now we are down to maybe 10 percent.

This informal survey illustrates a couple of points. From my experience with the AAU/AGS data project on whose steering committee I sit, even when schools are committed to giving the data to someone who would like to present the information in a comparable way across departments and institutions, it is very difficult for schools, given the way they collect data, to do that kind of collection. In particular, there are niggling administrative differences between having someone, such as a TA, on a payroll and having someone receive grants, which may or may not go through the institution.

I think the key policy question in doing something about attrition is knowing what resources go to students and knowing whether student completion is sensitive to provision of resources. One of the big problems is that even the people in this room who are connected with universities cannot tell you the answer to that question. Now that makes me wonder, to contextualize this a little bit more, about why they cannot tell you. Isn't this a really important question? And it seems to me that they cannot tell you in part because of the history of what graduate education has been, rather than what it is becoming. It is now true that over half of the people who get Ph.D.s in a variety of programs end up in non-academic jobs, whereas graduate education typically has been an area where people worried mostly about the people who become academically employed. If you are just worrying about graduates who are destined for academic careers and if you have an academic sector in which employment is not growing much, then you do not really care too much about what you have to give up to get the few who go to the jobs that matter – the academic jobs. Now, that is not the view of the graduate deans in this room, but it may be the view of some of your faculty. It is important when measuring attrition to think about what questions we can answer, what questions we cannot answer, and why we cannot answer them, because otherwise we are going to have a horrible time collecting these data, especially at an institutional level.

Participant: If you had to construct an institutional panel based solely on the institutions that now have the capacity to collect the data, then you would sacrifice a lot of issues that need to be addressed in terms of equity, range, and representation. We cannot just rely on those institutions that have that capacity now, but must help build that capacity through some sort of support structure for institutions.

Charlotte Kuh: The next question is for those of you who did not raise your hands. How many years would it take you to change your administrative databases? How many people could change your databases, say in the next six months? How about in the next two years? I see some shaky hands. How about the next five years? So what I am seeing by this response is that maybe these are questions we do not want to answer. Am I being a conspiracy theorist?

Participant: Let me raise another part of this. Very often, I have found that the employee database of the institution does not talk to the graduate student database. Take, for example, the question of student support. It is not unusual, let's say, for engineering graduate students to hold TA appointments in the math department, but the engineering department may not be aware of this, depending on how things have been set up in the institution. A lot of institutions do not have the capacity to do this, although I think some of them are trying to develop it. A classic example is the Bowen and Rudenstine book, whose authors were looking for institutions that had longitudinal data and had a heck of a time finding them. The other problem, as you saw from the audience response, is that developing this kind of database is not a question of doing something in six months or a year. So it comes back to a question that Vince [Tinto] raised: How do you get people to give you the resources when you are not going to be able to tell them very much for 7 or 10 years?

Claudia Mitchell-Kernan, University of California, Los Angeles: When I became graduate dean in 1989, I listed about five or six central objectives that I hoped to accomplish during the period of my deanship. One of the things I hoped to accomplish in the first two years was to build an institutional database on graduate careers. The reality is that it took five years. The delay had to do with some things that have been discussed already. One problem is that a lot of graduate student support comes in the form of stipends, whereas teaching assistantships and research assistantships come in the form of employment payments. These are handled by different sectors of the university. However, as we gained greater facility in utilizing relational databases, I think we conquered that problem. Something else that was a very, very big step for our institution was the introduction of the universal ID. This allowed us to track people through all kinds of different niches where they may have gotten some support for graduate education.
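
To make the relational-database point concrete, here is a minimal sketch, in Python with the pandas library, of the kind of linkage a universal ID makes possible: stipend records and payroll records kept by different offices are joined on a shared student identifier. The table names, column names, and figures are invented purely for illustration and do not describe any actual institutional system.

    # Hypothetical illustration: combining stipend and payroll records
    # via a universal student ID. All names and figures are invented.
    import pandas as pd

    stipends = pd.DataFrame({
        "student_id": [101, 102, 103],
        "fellowship_stipend": [12000, 0, 9000],
    })
    payroll = pd.DataFrame({
        "student_id": [101, 103, 104],
        "ta_ra_earnings": [8000, 5000, 7500],
    })

    # An outer join keeps students who appear in either system,
    # which is what makes total support per student visible.
    support = stipends.merge(payroll, on="student_id", how="outer").fillna(0)
    support["total_support"] = support["fellowship_stipend"] + support["ta_ra_earnings"]
    print(support)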

It is very difficult to get institutions to invest in creating institutional graduate student databases. Most institutional research offices are really not very big. In our own particular planning office, we had budget cuts in the early 1990s and our institutional research office has yet to recover. But the database effort was very important because it stimulated other offices that came to feel that this type of database was critical for our own planning and our own success. I don't think it should take five years. We now have some model institutions to follow and therefore do not have to go through such a trial-and-error process. Nevertheless, it does take an investment, but this investment has major payoffs for various parts of the university. Some of you may discover that the investment can be sold on that basis.

Vincent Tinto, Syracuse University: How many here know about PeopleSoft? As some of you know, and I am sure as more of you will soon find out, there is a growing tidal wave focused on integrating institutional databases. PeopleSoft is a firm that has served businesses for many years and now is engaged in helping institutions develop online coordinated databases. Syracuse University is one of a number of institutions now working with PeopleSoft to develop integrated institutional databases that seamlessly combine data from different parts of the institution. By the fall of 1999, our institutional database will be integrated and available online to a variety of users. One of the results of that integration will be the capacity to track students from entry to completion and in turn the capacity to study persistence in a way not previously possible.

Whether it is with a firm like PeopleSoft or one of a number of other firms, I think it is clear that most institutions will soon move to integrating their databases and computerizing access to them. If that is the case (and I am convinced that it will be the case) then we should plan now to put on the table the sort of data elements we need from that database to study doctoral persistence so that 10 years from now we are not having the same conversation.

Alan Tupek: Our next speaker is Gita Wilder. She is a research program director in the Division of Education Policy Research at the Educational Testing Service (ETS). Her research interests include sex differences in educational attainment, educational applications of technology, and the application of both quantitative and qualitative methods to policy research. Dr. Wilder will be discussing a longitudinal study of GRE [Graduate Record Examination] test takers.

An ETS Longitudinal Survey

Gita Wilder, Educational Testing Service

Since we are being autobiographical this morning, I should say that I did not meet anyone I sat next to in PoliSci. However, I do want to point out that I am a psychologist by training, not a labor economist and not a mathematical statistician. I do not think in light bulb terms, but I do come from a field that values both quantitative and qualitative data. I am going to talk today about a database that we have been accumulating at ETS over a period of 10 years. I should also mention the fact that much of this was sponsored by the GRE Research Committee, probably on your watch, Claudia, but on the watch of others as well, because it went on for so long. I am very grateful to the Committee for its support. The study was initiated by Michael Nettles when he was at ETS thousands of institutional associations ago.

Because I have more to say than time in which to say it, I am going to gloss over a good bit, but I think this database is a very interesting one, given the questions that were raised this morning. Higher education has not been my field. One of my goals today is to ask for your help in developing questions that you who are in attendance might pose of this database. We originally designed the study with the intention of looking at the role of financial aid in decisions about graduate education. The decisions in which we were interested were the decision to apply, having taken the GRE; the decision to enroll, once accepted; and the decision to attend, given certain levels of financial aid. We chose our sample from among GRE takers at three administrations in 1986. (I am reminded that this was my daughter's cohort and she has been out of graduate school for three years now.) The sample was chosen to provide reliable estimates for African-American and Latino test takers as well as the rest of the GRE population of mainly white examinees. (As it happens, the rest of the GRE population was a mixed bag and included representatives of many different groups, but it was mostly white.)

We had phenomenally high response rates during the first two years. You can see that in the first year the response rate was 90 percent. The original sample consisted of 2,500 people, all U.S. citizens. The initial questionnaire was distributed in the fall of 1987, and was followed in 1988, 1989, and 1990 by similar instruments. Meanwhile, I had taken a two-year leave from ETS and returned to discover that the 1990 data – I do not know why I was surprised by this – showed that very few of the study participants had completed doctoral programs during the four years of the study. If we had thought about this we might have anticipated it, but we had not. I convinced the GRE Board that maybe we should go after these sample members one more time to see if we could find them, since we had a great number of respondents even at the end of the fourth year. The Board consented and we did.

We had very high response rates every year, and in the 1995 follow-up, which was the ninth year after the initial sample was drawn, we were able to collect data from 80 percent of the original 2,136 test takers who had been our respondents in the first year.

I am going to run very quickly through the data that we collected. We collected a lot of background information, information that is traditionally collected when people take the GRE. In the questionnaires we distributed, we also collected age, marital status, information about spouse's/partner's education, people's living situations, and their number of dependents. We also collected what we call academic and career data: GRE scores and undergraduate grades, graduate and undergraduate majors. We did a very involved piece, and got fairly good response to it, on schools applied to, accepted by, and enrolled in each year. We repeated this each year because, again, we were surprised, although we should not have been, that people did not always apply to graduate school the first year after they took the GRE. In fact, our sample members have continued to apply over the nine years of the study. We collected information about degree aspirations, again every single year, and progress toward the degree. We also collected information on degrees obtained and reasons for leaving a graduate school or program.

The questionnaire was structured to start with the question about whether the subject was currently enrolled in graduate school and then branched him or her into multiple sections of the questionnaire. We asked about plans for further schooling and about what turned out to be a fairly fruitful set of variables – satisfaction with the graduate school experience and with the graduate program – which we made into scales.

As I said, the original purpose of the study was financial. We asked for a great deal of detailed financial data and it was by far the biggest failure in the study. It had to do in part – and I love to tell people this – with differences in personality between Michael [Nettles] and me. Michael remembers every penny of support he ever got for all of his years of graduate study. I had support from this source, that source, and the other source, but it came to me in the form of a tuition rebate and a check, a single check, and I could not have cared less what the source of the money was as long as the check came on time. Between the two of us lies a whole variety of people who answered this questionnaire in ways that I cannot begin to explain except to say that they played fast and loose with zeroes. The results are frightening; I will come back to that in a bit. In any event, we asked about school-related costs and expenses, and sources and amounts of financial support.

We actually pre-tested this questionnaire, massively. We thought we had done a very good job of asking for this information, but we had not. We asked for income sources beyond those related to school, total income, indebtedness, reasons for applying for and not applying for financial aid, time spent as a teaching assistant or a research assistant, and then a sort of self-reported level of financial worry. We thought that the really big issue was the extent to which people were worried about their finances, so there was a question in there about attitudes about finances. We also asked a question about how much debt respondents thought it worth taking on in order to pursue their graduate careers.

We collected some data about plans for employment and careers, because we wanted to know what people's aspirations were. Over time we know how they changed. We wanted to know the impact of the graduate school major on their career aspirations and on career choice. For those who were working, and you will see that not everyone applied to graduate school, we asked about jobs, the sector in which they were employed, salary, length of time in current job, and satisfaction with job. Overall, this was a melange of fairly quantitative data and some attempt to quantify some qualitative notions. In the last year of the survey, in the 1995 follow-up, because we had anticipated that relatively large numbers of doctoral students would have finished their programs, we asked about postdoctoral fellowships and about mentorship, because by that time mentorship had become a topic of some interest. We were a little bit late for the longitudinal piece, but we did it in that year's cross-sectional piece.

Throughout the years of the study, through all five surveys, we had asked a series of questions in the same form about attitudes towards a graduate education and what it might be worth to people, and again we combined these into a scale. That turned out to be a pretty productive variable. At the end of the questionnaire – I always like to do this because I think that steam builds up when people fill out questionnaires of this sort – we left a blank page with a request to respondents to tell us something in narrative form. Sometimes it was: tell us anything else you think is important about your graduate school career. In the last follow-up, we asked about the impact on respondents of attending or not attending graduate school, since all of them had, at one point, taken the GRE.

I am going to talk very briefly about some very general issues raised by the database and our attempts to make sense of it. Remember, first of all, that we looked at this database partly for information about behavior – including persistence – in the presence of different kinds of financial aid arrangements. I am discovering, as I start to read in anticipation of thinking more about attrition, that attrition is simply not the mirror image of persistence. You probably all knew that, but this was a new thought to me.

A second issue is how to account for and interpret the "in and out," that is, the tendency of today's students to enter graduate school and then to stop for some period of time, possibly to stop again later. We have a great deal of information about in and out. We have not – and I would love help from anyone with good ideas – been terribly clever yet about how to deal with these behaviors analytically. We have some linear tables that describe the patterns of in and out, but we also have some relational variables that would be nice to relate to the patterns, like the attitudinal scales and satisfaction with programs. We think that we are going to have to do something more complex than we have yet devised to represent the various progressions. I think it was Vincent [Tinto] who talked about following people over time and learning what you can about their reasons for being in and out.
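
One minimal way to summarize that in-and-out behavior, assuming nothing richer than a yearly enrollment flag for each respondent, is to collapse each record into alternating spells of enrollment and non-enrollment. The sketch below is purely illustrative; the respondent IDs and flags are invented and do not reflect the actual ETS file layout.

    # Illustrative sketch: turning yearly enrollment flags (True/False)
    # into enrollment and non-enrollment spells per respondent.
    # The respondent IDs and flag values are invented.
    from itertools import groupby

    responses = {
        "R001": [True, True, False, False, True],   # enrolled, stopped out, returned
        "R002": [False, True, True, True, True],    # delayed entry, then continuous
    }

    def spells(flags):
        # Each spell is (enrolled?, length in years) for a run of identical flags.
        return [(status, sum(1 for _ in run)) for status, run in groupby(flags)]

    for rid, flags in responses.items():
        print(rid, spells(flags))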

Let me go back to the first year and tell you very briefly what we found. There were 2,136 respondents in that first year. Fifty-five percent were enrolled in graduate or professional school that first year. (We had to ask the question about professional school; people take the GRE sometimes along with a law school exam or the GMAT and then choose among options during the period between the time they took the GRE and the time they enrolled wherever they enrolled. We were interested in whether graduate schools were losing candidates in substantial numbers to professional schools.) We also discovered, a little bit to our surprise, that 10 percent of the people who took the GRE in that year were already enrolled in graduate school when they took the exam, for a whole range of reasons. Two-thirds of those enrolled were attending full-time, and one-third said that their degree aspiration was a doctorate. (In response to the question raised earlier about whether self-identification is a useful way to look at doctoral aspirations, I think it probably is.) Forty-five percent were not enrolled that first year, and about half of those who were not enrolled had not applied.

In an earlier study, we had used an assortment of variables from the database to predict the likelihood that people would apply to graduate school having taken the exam, be accepted having applied, enroll having been accepted, receive financial aid offers having been accepted, and enroll having both been accepted and received financial aid. Eighty percent of the respondents to the questionnaires in that first year, including those who did not enroll in graduate school, agreed that a graduate degree would provide them with better career opportunities than those available without one. This says something to me about the process before people even get to making the decision about whether to attend graduate school. If we are worried about graduate education attracting the best and the brightest, then there needs to be a place for looking at that, although probably not in this context.

The best predictors of enrollment, we discovered in that second year study, were GRE scores (in response to your concern about what GRE scores do give us). They do predict enrollment given application. That may be a function of the selection process more than anything else. Self-reported undergraduate grades also predicted enrollment. We also found that self-reported undergraduate grades, along with GRE scores and favorable attitudes about the value of a graduate education, predicted persistence to a second year.
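
Findings of this kind typically rest on a model along the lines of the following sketch: a logistic regression of an enrollment indicator on GRE score, self-reported undergraduate grades, and an attitude scale. The data here are simulated and the variable names are hypothetical; the speaker does not specify the actual model form, so this is only an assumed illustration of the technique, not the ETS analysis itself.

    # Hypothetical sketch of the kind of model behind "predictors of enrollment":
    # a logistic regression of an enrollment indicator on GRE score,
    # self-reported undergraduate grades, and an attitude scale.
    # All data are simulated; the actual model form is not specified in the talk.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    gre = rng.normal(500, 100, n)        # combined GRE score (illustrative scale)
    gpa = rng.uniform(2.0, 4.0, n)       # self-reported undergraduate grades
    attitude = rng.normal(0, 1, n)       # attitude-toward-graduate-education scale

    # Simulated outcome: enrollment more likely with higher scores and attitudes.
    logit_p = -6 + 0.005 * gre + 0.8 * gpa + 0.5 * attitude
    enrolled = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(np.column_stack([gre, gpa, attitude]))
    model = sm.Logit(enrolled, X).fit(disp=0)
    print(model.summary())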

In the interests of time, I will skip to the fourth year, 1990. That year, there were 1,903 respondents, 34 percent of whom were enrolled in a graduate or professional school at the time, 62 percent full-time. More than half of them, 362 of the enrolled students, said their degree aspirations were doctorates. Two-thirds of the original respondents sampled were not enrolled in graduate or professional school. However, the same 80 percent agreed with the statement that a graduate degree would help them in their careers. By the fourth year, 70 percent of the original sample was known to have been enrolled at one time; 31 percent had earned a master's degree. I should point out, consistent with something someone said earlier, that by far the largest proportion of people who enrolled in the first year said that their degree objective was a master's degree. The modal category within the broad array of major fields and people consisted of women who were working toward a master's degree in education. You can imagine that, with 1,903 subjects and upwards of 40 fields of study, different degree aspirations, and part-time and full-time study, the number of people in any given cell was small. However, the modal category was this one of women pursuing master's degrees in education, more often than not, part-time.

The best predictors of graduate school persistence up to the fourth year were attitudes toward the benefits of graduate school education. Other strong predictors were satisfaction with school progress and with other students in the program (a collection of items rather than a single item, and a pretty robust finding); and, again, GRE scores and self-reported undergraduate grades.

There is a great deal of data here and I will give people copies of my slides, but let me turn to one finding that I think is particularly interesting from our 1995 data. We asked for the highest degree each respondent had received by 1995. As an aside, I should point out that our original sample had more females in it than males. In the fifth year there were 1,265 females and 863 males in the database, reflective of the original sample, since there are more women than men in the GRE-taking population. By 1995, 42 percent of the women and 38 percent of the men said that their highest earned degree was a master's degree. Ten percent of the males and 6 percent of the females had earned a doctoral degree, and an additional 4 percent of the males and 3 percent of the females claimed to have completed all but their dissertations. Some were enrolled and some were not. Between 4 percent and 5 percent of both populations had received other professional degrees, consistent with something Claudia [Mitchell-Kernan] said earlier. There were 20 percent for whom we did not have data in the final year of the study.

Let me conclude with a few observations about the database, since that was the focus of my remarks today. I think one of its major strengths is that it is longitudinal, that is, it has followed the same people over nine years. Another of its strengths is that it includes a range of data, and while I could hardly call it a qualitative study, we have open-ended responses from many people. A summer pre-doctoral student who worked on these data two years ago became very interested in people's stories. They are interesting, the things that people wrote at the back of their questionnaires. We had the prescience to photocopy all of the stories from the most recent year. I say prescience because, sadly, there was a major fire in a storage facility that we rented. The original questionnaires for the first four years went up in smoke, so the earlier stories are lost. Given the heartfeltness with which people wrote their stories throughout this study, I feel terrible about having the stories disappear.

Let me add something about the shortcomings of the database, a major one of which is that it does not tell us anything at all about institutions. Rocco Russo and I have talked at various times about trying to put our databases together, but the overlap is very minor. When you have the number of people we have scattered across the universe of graduate schools, it turns out that not a lot of them come from a single institution. But it would be wonderful for us to be able, somehow, to have our data interlock. The other serious shortcoming is that the database is minimally useful with respect to financial data. We can describe the sources of our subjects' financial aid and the relative amounts, with very gross error terms. The final limitation is that the database does not really tell stories. Here I want to return to Vincent Tinto's point about contextualizing. In our questionnaires, we offered respondents choices of reasons for the decisions they made. Why did you change schools? Why did you not return to graduate school? Why did you choose this school over that school? These questions were followed by lists of choices that had been generated from qualitative interviews that we had done. But what we discovered is that these decisions are much too complex to be represented by checking off options from a list, no matter how comprehensive. We have missed out completely on the ways in which people arrive at the decisions they make. For this reason, I think we need to be able – and I hope Toni Clewell will address that this afternoon – to fill in around the edges of the large amount of data that we have with the kinds of things that actually have an impact on people's decisions.

Alan Tupek: Thanks, Dr. Wilder. The final two speakers are from NSF. Mark Regets of the Division of Science Resources Studies has been managing and analyzing our workforce surveys for the past several years. He has a great deal of knowledge of the three surveys that NSF conducts, which we put together and call the Sestat system. With that, I will turn it over to Dr. Regets.

Using the National Surveys of Recent College Graduates (NSRCG) and College Graduates (NSCG) to Study Graduate Student Attrition

Mark Regets, Division of Science Resources Studies, NSF

Unfortunately, I never tried to use our surveys to do anything on graduate attrition until this last weekend. I am up here to say something about how NSF's various workforce surveys might be useful to the study of attrition. When we put our three surveys together, we call it the Sestat Integrated File. They are combined to provide nearly complete coverage of the universe of college-educated scientists and engineers. Note that sample questionnaires for each of the three surveys are in the blue-gray book out on the publications table.

The National Survey of College Graduates (NSCG) was initially based upon the 1990 census long form, and we have information from the long form as part of it. The NSCG follows a sample of individuals who had bachelor's degrees or higher as of the 1990 census, and like the other Sestat surveys it is collected every two years. Thus far, we have data from 1993 and 1995, and 1997 is in the field. The NSCG is actually our largest labor force survey, and it carries the responsibility of collecting data on about 85 percent of the science and engineering trained population: the pre-1990 bachelor's and master's degree holders as well as anyone whose degrees come from foreign institutions. It is available, as are at least some data from the other surveys, on the SRS Web site, and the big 1993 edition of the NSCG, with data on non-science and engineering graduates as well, is available on a CD-ROM.

The Survey of Doctorate Recipients (SDR) – which Carolyn [Shettle] will be telling you a lot about in just a little while – is a 1-in-11 sample of individuals with science and engineering Ph.D.s from U.S. schools. To protect confidentiality, it is currently available only by licensing, but I think there might be a more limited public-use version available shortly.

The survey that has the most potential for analysis of attrition is probably the National Survey of Recent College Graduates. It follows a sample of individuals who received BS or MS degrees in science or engineering since April 1990. It has a two-part sample design, first a sample of educational institutions and then a sample of students from within these institutions. As with the other Sestat surveys, there are biennial follow-ups on individuals once they get into the system.

Each of the Sestat surveys has a large common core of data. Each survey takes about a half-hour to collect. In broad terms, the major variables we collect cover educational history: detailed information on three or more degrees, including date received and region; whether or not the person ever attended a community college; and whether he or she has attended college since receiving the last degree and, if so, in what field and toward what degree.

We are currently attempting to add information about institutional characteristics to the data in the SDR and the Recent College Graduates surveys. The institutional names are already available to licensed users for any type of matching.

On labor force data, we get salary and, in the more recent survey rounds, also information on total earnings and family income. Various other labor force status items include occupation, prior occupation, and years of professional work experience. We also ask respondents about the relationship between their work and their highest degree; about their work activities – whether basic research, administration, sales, etc.; and about their sector of employment – whether they are in educational institutions or with a private for-profit employer. The demographic information is the standard age, race, sex, citizenship, and nativity, and includes some specialized items such as parents' education.

In addition, in individual survey years, we have included special modules on various topics: occupational history, i.e., what jobs a person has held over the course of a career; time out of the labor force; patents and publications; postdoc history; vocational and continuing education; and career objectives. Of course, since this is a panel data set, data collected in one cycle can sometimes be useful in interpreting and analyzing the information collected on the same individuals in different years.

In addition to the Sestat core variables, which are useful in providing a broad profile of the individual's education and labor market history, the National Survey of Recent College Graduates (NSRCG) asks several special questions that might be useful to the study of graduate student enrollment and attrition: undergraduate grade point average; undergraduate loans and the amounts still owed on these loans; the same thing for graduate loans; reasons for taking college courses since the last degree; and, if respondents were not taking college courses, the reasons they were not taking them. I have to second the previous speaker's comment that a check list of reasons for why people do things does not capture some very complicated motivations that are out there, but we do have those types of questions on the survey. The NSRCG also captures information on a range of support mechanisms, both for respondents' undergraduate degrees and for any graduate training they are undertaking. We do not have amounts, but we do have a check list of the types of support they are receiving, as opposed to just the primary mode.

In the initial interview in 1993 for the 1990-1992 graduates, we had a sample size of about 19,000 for the NSRCG. For the initial interview in 1995 for the 1993-1994 graduation years, the sample was dropped to 16,000, but also in 1995 we followed up on about 7,000 individuals who had completed 1993 interviews. Unfortunately, we did nothing to try to over-sample those who had actually made decisions to go to graduate school, but that type of thing can certainly be discussed for the future.

The National Survey of Recent College Graduates is very well-designed for many purposes. (When somebody starts talking about the virtues of a survey for something, you know there will be a lot of negatives as well.) It is very well-designed for studies measuring educational outcomes, since it has a great deal of occupational labor market information that you can relate to a very detailed educational history.

It is also a very good data set for looking at why a person enters graduate school, and I recommend this to anyone who wants a research project, because very little has actually been done on it. The NSRCG captures fairly well the fact that people enter graduate school, and we have a great deal of their personal history and their economic variables to look at. We also have the comparison sample of the people who decided not to go to graduate school.

Now for studies of graduate attrition. As we sit down and look at what is there, frankly there are holes in what we have collected in 1993 and 1995. This is the type of thing you find out only when trying to use the data for a very specific purpose. We do not really have the exact timing of attrition, and clearly that is critical for a lot of the questions that we want to ask today. We do not ask, currently, the reasons for attrition. I strongly suspect that we would have a hard time, even though we ask the question, identifying whether the person is going for a Ph.D. or for a master's degree, for reasons that other speakers discussed earlier. That is something perhaps we can do a little better by asking about ultimate career decisions.

Many of these problems are fixed or at least reduced in the 1997 survey, which will be going to the field shortly. However, in order to really get at issues of attrition, and particularly the timing of attrition, it will be at least 1999 before we have something like an attrition module in the data, after focus group tests and, I hope, the help of a lot of people in this room.

This is not to say that nothing can be done with the existing data – when the data stab you, I believe in taking a stab back – so the next slide is my stab at it. It looks at the April 1993 enrollment status of the 1990 to 1992 bachelor's degree recipients in the 1993 NSRCG. These are people who said that since their last degree they were enrolled in a program seeking either a master's degree or a Ph.D. This definition is probably somewhat restrictive given the exact format of our question, but it still leaves us with an unweighted sample of slightly more than 3,000. I divided these groups by whether or not they were enrolled in college as of April 1993. Notice that I said enrolled in college, not necessarily enrolled in a graduate program. It is probably not worth analyzing these data too much. I put the slide up to give you a feel of what was there on the data set.

There were very small differences between male and female graduate students and between underrepresented minorities and others. There are somewhat larger differences by graduate major. This is definitely preliminary, so please do not cite the work. But if further work suggests that this very early measure of attrition has any merit, it could be analyzed against financial aid and other items of interest already available on the data.

The strengths of the NSRCG as an attrition study, while it will never be perfect, are not minor. The study provides a large comparison sample of those not going into graduate school. Maybe it is the econometrician side of me, but it seems that any detailed study of why people leave graduate school needs in some way to control for who goes to graduate school in the first place. Thus, that seems like a useful item.

The study also may collect an individual's history for many years before he or she actually enrolls, since we are picking up a person at the time of the bachelor's degree. An obvious advantage is just that you are adding it to an existing large panel survey. You can just add a few questions and still pick up information on the person that had been collected in previous years, and frankly information on the labor market and educational history that is already being paid for. As I have said before, new questions have been added in 1997, and there is some reason to believe that we can do better than that with more time.

A weakness of the study is that it does not include any non-science and engineering bachelor's degree recipients who would be attending graduate school seeking a science and engineering degree. The data on people's history suggest there are at least some of those. It also does not include people whose bachelor's degrees came from a foreign institution. Notice that it will include many foreign students who did receive a U.S. bachelor's, but those coming with foreign bachelor's degrees are just outside the system. I mentioned the possibility that we would not have a detailed attrition module to allow any type of finer detail until at least 1999. The strength of it in picking up people who are not entering graduate school is also a weakness. Since we are doing the sampling at the time of the bachelor's degree, it would be much more expensive to try to increase the initial sample sizes of those actually going to graduate school, compared to some other type of survey.

Alan Tupek: Thanks, Mark. I think we probably should go on to the final speaker and if Carolyn can keep it reasonably short, we might have time for some interaction afterwards. Our final speaker is Carolyn Shettle from the Division of Science Resources Studies at NSF. She is the director of the Doctorate Data Project and has been designing, managing, and analyzing our work force surveys since she came to NSF in 1989.

Using the Surveys of Earned Doctorates (SED) and Doctoral Recipients (SDR) to Study Graduate Student Attrition

Carolyn Shettle, Division of Science Resources Studies, NSF

The Doctorate Data Project consists of the two major SRS surveys of the doctoral population, the Survey of Earned Doctorates (SED) and the Survey of Doctorate Recipients, and related analytical studies. I will talk a little bit about these surveys. Although most of you are familiar with the surveys, some people have not been at these meetings before, so I would like to go over very quickly a couple of the basic points about the surveys.

The Survey of Earned Doctorates is the one that those of you who are deans of graduate schools are hopefully intimately familiar with, because you are instrumental in collecting the data. This survey form is distributed through the graduate schools to everyone at the point – or very shortly before the point – that they get a doctoral degree. That means we get around 40,000 forms a year. The survey is conducted for us under contract. Until recently, the National Academy of Sciences conducted the survey; at this point, the National Opinion Research Center (NORC) at the University of Chicago collects the data.

The second survey that is important here is the Survey of Doctorate Recipients that Mark [Regets] has already mentioned. It is a longitudinal survey of individuals with doctoral degrees. We interview them every two years until they hit age 76. Approximately 50,000 individuals are included in the sample.

The question we are posing here is "What do these two surveys tell us about attrition?" In one sense, nothing, because these are surveys of people who have completed doctorates, so they contain no information about individuals who failed to complete their degrees. However, although they tell us nothing in and of themselves, combined with other data I think there is a lot of potential for using them to study attrition. I also think this potential has not been fully realized up to now.

As Charlotte has said, one of the reasons that the data have been underutilized is that we have had a lot of problems in allowing access to these two files. However, as Mark [Regets] mentioned, we now have data licensing agreements that permit researchers to use the data at their home institutions with proper safeguards to the confidentiality of the data. Further, it is important to note that the Survey of Earned Doctorates has always been conducted with the understanding that schools can obtain information on their own graduates.

It is important to note that we do not give researchers access to the individual identifiers needed to merge databases except for letting graduate schools have the data about their own students. However, if you have a file with individual identifying information (e.g., Social Security numbers) that you want merged, we can have NORC merge the files and then, with proper precautions, permit you to get the merged data under a licensing agreement. We have done this on a couple of occasions in the past – including for ETS.

Let me talk a bit about the two main approaches to studying attrition using SRS doctorate data that have been alluded to today. The first approach is the cohort approach. Although this approach has been disparaged by some of the speakers, I think it has some advantages. The second approach is the merged file approach, which permits us to create longitudinal data files out of two or more cross-sectional files. Let me give you a very quick example of the type of thing that we could do using a cohort approach. Since I did this very quickly, I do not guarantee the accuracy of the results; however, they do illustrate the approach.

As I am sure most of you know, the Department of Education collects data on degrees granted, by type of degree. In one of our basic reports on degree trends, we "massage" the NCES data to conform to the NSF definition of science and engineering. I used this information on the annual number of bachelor's degrees in science and engineering granted by U.S. institutions as the denominators of my estimated completion rates. For the numerators, I used SED information to estimate the annual number of individuals who received a bachelor's degree from a U.S. institution in science and engineering and also completed a doctorate within the 12 years following the BA year. The resulting percentages should provide reasonably good estimates of the annual percentages of bachelor's degree recipients who complete science and engineering doctorates, because both the numerator and the denominator refer to a very specific cohort of individuals receiving science and engineering bachelor's degrees in a given year. Looking at the data, we see what looks like a trend. Cindy Woods of the National Academy of Sciences, who has been working with me on attrition, has pointed out that the downward trend in completion started after the Vietnam era. Of course, during the Vietnam era there were some special incentives to attend and stay in graduate school.
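
To make the arithmetic of this cohort calculation concrete, here is a minimal sketch in Python. The yearly counts are invented placeholders, not actual NCES/IPEDS or SED tabulations, and the 12-year completion window follows the description above:

    # Sketch of the cohort completion-rate calculation described above.
    # All counts are hypothetical placeholders, not actual NCES/IPEDS or SED figures.

    # Denominator: S&E bachelor's degrees granted by U.S. institutions, by BA year
    # (in practice, NCES/IPEDS counts adjusted to the NSF definition of S&E).
    bachelors_by_year = {1966: 184000, 1967: 192000, 1968: 201000}

    # Numerator: SED counts of doctorate recipients whose U.S. S&E bachelor's degree
    # was earned in the given year and whose doctorate came within 12 years of it.
    doctorates_within_12_years = {1966: 9600, 1967: 9400, 1968: 9100}

    for ba_year, n_bachelors in sorted(bachelors_by_year.items()):
        n_doctorates = doctorates_within_12_years[ba_year]
        completion_rate = 100.0 * n_doctorates / n_bachelors
        print(f"BA cohort {ba_year}: {completion_rate:.1f}% completed an S&E "
              f"doctorate within 12 years")

The same ratio could be recomputed with shorter completion windows (one year, two years, and so on), which speaks to the limitation discussed next.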

The approach I have used in this graph is pretty easy to understand. However, it has limitations. For one thing, it does not provide information about individuals who received bachelor's degrees less than 12 years ago. This could be remedied by doing multiple analyses (e.g., individuals completing within one year, two years, etc.) or by using more sophisticated statistical techniques.

In thinking about cohort analysis, we do not need to start with individuals who received their bachelor's degrees at a particular point in time.

Alan Tupek: Carolyn, the IPEDS database does not have individual data in it. Aren't these different individuals in the numerator and the denominator?

Carolyn Shettle: These are cohorts of individuals. I used the IPEDS data to estimate the number of individuals who got bachelor's degrees in science and engineering in, for example, 1966 as the denominator. I used the SED to estimate the number of individuals who both received a bachelor's degree in science and engineering in 1966 and also received a Ph.D. in science and engineering between 1966 and 1978. So the critical point is that the SED gives us an estimate of a subgroup of a larger group estimated from another database. Of course, the bachelor's degree does not have to be the starting point; for example, I could have defined the cohort as those receiving a master's degree.

With the cohort approach, you have to be very careful that the numerator and the denominator do, in fact, refer to the same group of people. That is a challenge when you go across surveys, and it is part of the reason I am hesitant to say my estimates are highly accurate. The degree report does use our definitions of science and engineering, but I have not gone back to check that the two sources are as comparable as possible.

The second big limitation of the cohort approach is, as we have all heard this morning, that there is no information on what happens to those not getting research doctorates. We know, for example, that among the 95 percent of people with bachelor's degrees who do not get doctorates in science and engineering from a U.S. institution, there is a wide variety of experiences. Certainly, there are individuals who had planned to get such a doctorate but never got a doctorate. However, there are also graduates who got doctorates in other fields or who got a doctorate in science and engineering from a foreign institution. There are also people who never wanted to go to graduate school and people who always wanted to get a master's degree – and, in fact, received one. The SED data do not permit us to differentiate among such alternate experiences.

Also, the cohort approach does not provide qualitative data about attrition. Like most of the work done in the Division of Science Resources Studies, it is only a quantitative picture.

As far as future work goes, using the cohort approach, there is a lot that I did not do that could be done with the existing data. As I said earlier, we could define the cohort by the master's degree rather than bachelor's degree. We could also do subgroup analyses. For example, we could examine attrition by gender and perhaps by race/ethnicity, although I am concerned about the comparability of the databases on race/ethnicity. The cohort approach could also be examined separately for different degree fields. For example, it would be very interesting to look at the engineers and the scientists separately, since engineering is a field in which a master's degree is often an appropriate ending point.

It is also important to note that the possible use of the cohort approach is not necessarily limited to the IPEDS. Any database in which we can define a cohort that can be matched with a similarly defined cohort in the SED is a potential source of information about attrition rates. One important example is university data files. As we said earlier today, one of the limitations of university data files is that institutions do not know what happens to students who transfer. If a university data file can provide an estimate of the number of first-time graduate school enrollees each year, then the SED can be used to estimate the number of individuals who started their graduate education at that institution and went on to get a U.S. doctorate within a specified number of years after initial entrance. This would provide information on the overall completion rate for entrants, as contrasted with their completion rate at the institution at which they initially enrolled. I would think this would be of special interest to those of you who are graduate deans. Similarly, some of you in professional societies may have data files that provide opportunities for cohort analyses.

There is a second type of study that I mentioned that we can look at – the merging of two data files. The Bowen and Rudenstine book, mentioned earlier today, is one of the classics here. I have reproduced one of their analyses to look at today. In conjunction with staff at the National Academy of Sciences, they merged a file containing information about people who applied for NSF fellowships with the SED. Their analysis is shown separately for physics and the social sciences. It is clear that there are field differences in completion rates. There are also differences in completion rates in terms of the quality ranking and award status of these NSF fellows. I think that the potential for doing this type of merged study is great; it is limited only by our imagination and our ability to locate appropriate files to be merged.

In terms of the limitations of studies based on merged files, note that they are technically challenging. Probably the best variable to use for merging is Social Security Number (SSN). On our SED file, we have SSNs for most people who received their doctorates in 1964 or afterwards. However, it is helpful to have additional identifiers to verify the matches, such as names and birth dates, because people can make mistakes in reporting their SSNs or can refuse to report them.
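
As a rough sketch of what such a merge involves – the file layouts, field names, and records below are hypothetical, not the actual SED or fellowship file formats – one might match primarily on SSN and then verify candidate matches against secondary identifiers such as name and birth date:

    import pandas as pd

    # Hypothetical file layouts and records; the real SED and fellowship files differ.
    sed = pd.DataFrame({
        "ssn": ["111-22-3333", "222-33-4444"],
        "last_name": ["Garcia", "Chen"],
        "birth_date": ["1950-04-12", "1948-11-02"],
        "phd_year": [1978, 1976],
    })
    fellows = pd.DataFrame({
        "ssn": ["111-22-3333", "333-44-5555"],
        "last_name": ["Garcia", "Okafor"],
        "birth_date": ["1950-04-12", "1952-06-30"],
        "award_status": ["awardee", "honorable mention"],
    })

    # Primary match on SSN; keep every fellowship applicant so non-completers remain.
    merged = fellows.merge(sed, on="ssn", how="left", suffixes=("_fel", "_sed"))

    # Verify matches with secondary identifiers, since SSNs can be misreported or refused.
    merged["verified_match"] = (
        merged["phd_year"].notna()
        & (merged["last_name_fel"].str.lower() == merged["last_name_sed"].str.lower())
        & (merged["birth_date_fel"] == merged["birth_date_sed"])
    )

    # Rough completion indicator for the applicant cohort: a verified SED match
    # means the applicant is known to have completed a doctorate.
    merged["completed_phd"] = merged["verified_match"]
    print(merged[["ssn", "award_status", "completed_phd"]])

In practice the verification rules would have to tolerate name changes and minor reporting errors, which is part of what makes these merges technically challenging.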

The merging approach gets around some of the limitations of the cohort approach discussed earlier. By merging a file like the Educational Testing Service (ETS) file, which contains a lot of qualitative data, with the SED, we end up with a file that permits us to compare individuals who have completed doctorates in science and engineering at U.S. institutions with individuals who shared a characteristic at an earlier time (such as taking the graduate boards) but did not complete a U.S. science and engineering doctoral degree.

There are additional things that we could do with a merged NSF fellowship/SED file, such as that created for the Bowen and Rudenstine study. We could, for example, conduct subgroup analyses based on gender and fields of degree. It would also be interesting to merge the SED data with university files containing information about individuals entering a given program.

Mark [Regets] mentioned that I was going to talk about the SDR; however, I am not going to say very much about it, because I think the SED is the more appropriate file on which to focus for this particular topic. One important future use of the SDR in studying attrition is likely to be a comparison of individuals who have received a doctorate with those individuals in the National Survey of Recent College Graduates (NSRCG) and the National Survey of College Graduates (NSCG) who entered graduate school but did not complete a doctorate. Starting with the 1993 work force surveys, considerable effort was spent in ensuring comparability among the variables measured in the three SRS work force surveys, so such comparisons are feasible. Another possibility is to conduct surveys similar to the SDR based on individuals who are known to have left graduate school without completing a doctorate.

In conclusion, the doctorate surveys certainly do not tell us everything we would like to know about attrition. However, I feel there is a lot more that can be done to mine the data in these surveys to help us understand graduate school attrition.

Alan Tupek: Thank you, Carolyn. There will be a quiz on all the acronyms after lunch.

Jules LaPidus, Council of Graduate Schools: It seems to me that some years ago I saw data indicating that as the number of bachelor's degrees in science and engineering increased, particularly for some American minority groups, the number of graduate students in science education increased faster than the number of graduate students in doctoral programs in the sciences. The Survey of Earned Doctorates has information on doctorates in science education, and I just wondered whether anyone had ever tried to look at that interface.

Carolyn Shettle: The person who would most likely be able to answer your question is Susan Hill, and I see her in the back. She has a long history of working on the Survey of Earned Doctorates. Of particular interest is the work she has done on the Baccalaureate Origins report.

Jules LaPidus, Council of Graduate Schools: Yes, and I cannot cite the data source because it has been years since I saw it. The question has to do with people who get bachelor's degrees in science and engineering but then do not go on to graduate school in science and engineering disciplines; rather, they shift to science education and get either Ph.D.s or Ed.D.s in science education. Those figures – the science education doctoral degrees – are in the Survey of Earned Doctorates. The Survey of Earned Doctorates also has some information on whether the doctoral degree is in the same field as the bachelor's degree. Has there been any attempt to look at anything like that?

Carolyn Shettle: You might want to mention that copies of the baccalaureate origins report are on the table outside the room. I think that information might be apropos in this situation.

Barbara Lovitts, University of Maryland: Gita raised the issue of financial support, which is collected in most studies, and I think that is right and appropriate. But what is really important about financial support, at least based on my own research, is not the monetary value of the support but the strings that are attached to it. In my sample, students who got teaching assistantships (TAs) and research assistantships (RAs) were two to three times as likely to complete as not to complete. Students who got fellowships from the university or other outside sources were about equally likely to complete as not to complete, and students who got no financial support at all were about nine times more likely not to complete. What is going on here is not the value of the money but, as I said, the strings that are attached to it. If you have an RA or a TA, you are forced to come onto campus; you are forced to interact with faculty and other graduate students; you are forced to engage in the intellectual and professional tasks of the discipline; and you are also more likely to get a desk in a group office with other graduate students, which allows you to tap into a lot of the tacit knowledge contained in the graduate student subculture.

I have also turned those data around and played with them in other ways. In terms of students' participation in the intellectual and social activities of the department and their engagement in the professional tasks of the discipline, I found large differences between completers and non-completers. But if you look at engagement in those activities by type of support for the whole sample, without separating completers and non-completers, you find that students with RAs and TAs were much more likely to participate in those activities than students with fellowships or students with no support at all. When you control for type of support and then look at differences between completers and non-completers, most, but not all, of the differences between those two groups disappear. Basically, the type of support students receive, not its monetary value, shapes their experiences in graduate school and leads to differences in persistence outcomes. So when you look at financial support issues, I would encourage you to think in terms of the strings attached and not the dollars and cents on either side of the decimal point.
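
The logic of that last step – comparing completers and non-completers only after controlling for type of support – can be illustrated with a small sketch. The counts below are invented for illustration and are not from the study just described:

    # Hypothetical counts, invented for illustration only:
    # (support type, completed?) -> (number who participated in departmental
    # activities, total students in that cell)
    counts = {
        ("TA/RA", True): (120, 150),
        ("TA/RA", False): (38, 50),
        ("fellowship/none", True): (15, 50),
        ("fellowship/none", False): (39, 150),
    }

    def participation_rate(cell):
        participated, total = cell
        return 100.0 * participated / total

    # Unstratified comparison: completers vs. non-completers, pooling support types.
    for completed in (True, False):
        p = sum(counts[(s, completed)][0] for s in ("TA/RA", "fellowship/none"))
        n = sum(counts[(s, completed)][1] for s in ("TA/RA", "fellowship/none"))
        label = "completers" if completed else "non-completers"
        print(f"Pooled {label}: {100.0 * p / n:.0f}% participated")

    # Stratified comparison: within each support type the completer/non-completer
    # gap shrinks sharply, because type of support drives both participation and
    # completion in these invented data.
    for support in ("TA/RA", "fellowship/none"):
        for completed in (True, False):
            label = "completers" if completed else "non-completers"
            r = participation_rate(counts[(support, completed)])
            print(f"{support} {label}: {r:.0f}% participated")

With these invented numbers the pooled completer/non-completer gap in participation is large, but within each support type it nearly disappears, which is the pattern being described.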

Carolyn Shettle: Thank you. That makes me feel much better because we do have measures of the type of support in the SED.

Orlando Taylor, Howard University: There are some reports that the awarding of financial aid to underrepresented groups, particularly African Americans and Latinos, has fallen disproportionately in that second group, that is, the non-TA and non-RA forms of support. This suggests that our efforts at diversity have in effect backfired because we have been giving the wrong kind of money. Could you comment on that?

Barbara Lovitts, University of Maryland: My sample was 88 percent white, so I could not break the sample down by ethnic group. If the theory I am presenting is correct, then yes, minorities are being given the wrong type of support, and graduate students in general are being given their support backwards. They should be "TA-ing" and "RA-ing" at the beginning, to get into the discipline, and getting fellowships at the end to free them up for their dissertation work rather than being pulled away by assistantship tasks.

Deborah Stewart, North Carolina State University: I think it also makes the point that traineeships are a better Federal investment than fellowships, because with traineeships we construct the strings through the competitive process as a way of ensuring success in the national competition. Those very thoughtfully constructed strings are, in many ways, more thoughtful than anything you can expect any director of teaching to devise for TAs or any individual PI to devise for RAs. In a traineeship program you have a theory behind what the strings are, and you have some level of accountability established to ensure that those strings actually remain tied. I think that is the most powerful argument for traineeships as opposed to fellowships.

Harvey Waterman, Rutgers University: This conversation is beginning to make me nervous because it entails a lot of generalization across disciplines. It seems to me that while there is truth in the generalization, it is not necessarily true discipline by discipline. With regard to minority students, our local experience was that giving them four-year fellowships was a disaster. We switched to packages that mix fellowships and assistantships, which seems to work much better. But to conclude from that that it is appropriate in every discipline to start out with an assistantship rather than a fellowship seems to me to take a wrong turn. There is a burden on the graduate program to create a community and a culture for students, but that can be done for fellows too, and should be in many disciplines where the student needs to get through a lot of course work before doing other things. Being burdened with an assistantship has a substantial effect on time to degree if it comes early rather than late. We end up with a very complex picture.

Barbara Lovitts, University of Maryland: I am just focusing on how these strings get you into the community. If the fellowship has no strings that get you into the community, then it can be counterproductive.

Participant: A paper published by Maresi Nerad and Joseph Cerny a few years ago tracked students at the University of California, Berkeley, and one of the things they did was look at forms of support by year and relate support to time to degree. One of the things that comes out is a point very similar to what Harvey [Waterman] was saying. As I recall those data, they found that a good financial aid package begins with a fellowship in the first year, so students can get their feet on the ground – though, again, a very important point, the department has a responsibility to create some of those strings. TAs and RAs in the intervening years and a dissertation fellowship in the last year seem to be a good way to structure total support if you want to move students through the process.

Harvey Waterman, Rutgers University: It seems to me that the distinction is this: the data available, for instance through NSF, give us a view of the system of graduate study, the flow of students through that system, and attrition as a system phenomenon. This is in contrast to the conversations we are having now about attrition as an institution-, department-, and field-specific phenomenon. That is the struggle. How can NSF, for example, structure its data collection, and also its reward systems, to create incentives for programs and institutions? Often it is individual faculty who are responsible. That is the trick: how to get the system to provide incentives for institutions, for programs, and eventually for people.

Alan Tupek: Thank you very much. I think this has been a very informative session.
