Division of Science Resources Studies
Professional Societies Workshop Series

Workshop on Graduate Student Attrition


Professional Society Interest and Data Collection

Catherine Gaddy, Commission on Professionals in Science & Technology, Moderator

I have the privilege to introduce the last three speakers in the formal part of the program today. First let me note that the fields of physics, psychology, math, and chemistry have historically been the leaders in collecting data on the employment of recent doctorates. The first speaker, Michael Neuschatz, is senior research associate at the American Institute of Physics (AIP). AIP has provided us with a lot of good ideas for survey methods and questions. The second speaker, Rocco Russo, is with the Educational Testing Service (ETS), and he is going to tell us about an Association of American Universities (AAU) study being performed at ETS. They are developing a longitudinal database of doctoral students and starting to graduate some of the cohorts. The third speaker, Peter Syverson, is vice president of research and information services for the Council of Graduate Schools, and he previously worked on the Survey of Earned Doctorates and Survey of Doctoral Recipients. So first we will hear about physics, then we will hear about the AAU longitudinal study, and then Peter will provide some excellent specific examples from institutions that have done really good in-depth studies of attrition and retention.

Graduate Attrition in Physics: Some Rough Thoughts and Estimates

Michael Neuschatz, American Institute of Physics

I think this is a very timely topic – I know that in physics it is going to be extremely timely. The reason is that physics, after years of what many people considered an overproduction of degree recipients, is facing a situation in which the number of students is going to plummet. Whereas attrition may not have been a burning issue a few years ago when everyone was talking about the difficulty of getting jobs, a few years from now many deans of graduate schools and department chairs are going to be upset about a lack of graduate students. In that climate, keeping the students you have is going to be very important.

It is very hard to study attrition, mainly because you are studying a population that is no longer there. We have a very easy time reaching people by looking at their institutional membership, but this group is specifically defined as non-members. That is why they are of interest.

Another problem involves the importance of understanding the original intentions of students. We have lots of departments in physics that produce both master's degrees and Ph.D.s. It is very important not to assume that anyone who gets a master's is an example of attrition, because many students enter those departments planning to get only a master's. In fact, some of the people who were going to get a master's end up leaving without anything. At the same time, some of the students who were going to get Ph.D.s leave with only master's degrees.

I am going to try to give some estimates of what these numbers are in physics. It is tricky in a single discipline due to a degree of parochiality. A discipline considers anyone who does not graduate in that discipline to be a leaver. But, of course, there are many people who may transfer to another discipline. Similarly, an institution often considers anyone who does not graduate from that institution to be a leaver, and of course there are many people who may transfer and persist in the discipline. Discipline-based studies like ours would pick up those people, while institutions would tend to pick up people who transfer disciplines but did not transfer institutions. So there are many different definitions running around and it is important to be clear about which definition is being used.

Many different approaches to attrition research have been discussed by the previous speakers. Longitudinal studies track individual students. Those are especially good because they give what might be called a natural history of the process of leaving. They follow individuals, and they can track not only individual characteristics, but also the exact timing of the process: where in a student's graduate career difficulties occurred and the student left. These studies can also give hints about early warning signals – certain patterns that may develop among students prior to leaving. These can then be further explored among the current batch of students through "action research," to look at whether attrition among students showing these patterns can be lowered through intervention.

There are other ways of approaching attrition, too. There is the exit interview approach, although that is hard with graduate students because exactly when they exit often is not clear. One way of addressing this problem is a kind of "postpartum" study that can be done at a certain interval after exit. A certain period after they leave, when it is clear that they are not coming back, those students are followed up. The problem here is that these kinds of studies do not necessarily get good comparisons between the leavers and the persisters.

And then there is the messiest, but cheapest and easiest, approach, which is what I am going to call the "roster approach," in which you look at institutional records to see who stays and who leaves without ever actually talking to the people involved. This is a very common technique, and it gives very limited information, but it is readily accessible information. At the American Institute of Physics, we have developed over the last 20 years or more a system of getting the names of graduate students from their departments in order to survey them. Those lists have become in effect a longitudinal database of sorts that allows us to look at who stays and who leaves.

There are many problems with this approach. Among them is that we do not always get full lists from all of the partners, and if you are going to be sensitive to transfers, you really need full lists. There are ways to get over that, ways of tweaking data sources, as there are with any data set. The result is not sufficient to give a hard set of numbers, but it is sufficient to give a first idea, a rough estimate of how many people we are talking about. Without such estimates, you run the risk of having no general sense of how things stand – what the dimensions of the problem might be over long periods of time.

However, our problem was that we had never actually collected the names of the Ph.D. recipients until recently. We are not really ready to do that analysis until we have built up a few more years of degree recipients, and then we can begin to look at the names of people who have dropped off and were not successful in getting degrees. Because people take different amounts of time to get their degrees, you have to have a series of years in order to put together a real roster approach. Without that, all there is are aggregate numbers, and our aggregate numbers are not even as good as Carolyn [Shettle's] which were based on specific degree dates for the bachelor's degree. Our aggregate numbers only indicate how many people graduated from their bachelor's departments and entered graduate school and, after some arbitrary "typical" interval, the total number receiving a master's or a Ph.D.
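
As an illustration of the roster approach, the basic bookkeeping might look like the following sketch. The names, rosters, and degree lists are invented placeholders, not AIP data, and a real implementation would also need fuzzy name matching and transfer detection across institutions.

    # Hypothetical yearly rosters of enrolled graduate students, plus
    # degree-recipient lists, as in the roster approach described above.
    rosters = {
        1995: {"Adams", "Baker", "Chen", "Diaz"},
        1996: {"Adams", "Chen", "Diaz"},
        1997: {"Chen", "Diaz"},
    }
    phd_recipients = {"Chen"}          # earned a Ph.D. at some point
    masters_recipients = {"Adams"}     # left with a master's

    def outcome(name):
        """Classify a student who has stopped appearing on the rosters."""
        if name in phd_recipients:
            return "completed Ph.D."
        if name in masters_recipients:
            return "left with a master's"
        return "left with no degree"

    latest_year = max(rosters)
    everyone = set().union(*rosters.values())
    for name in sorted(everyone - rosters[latest_year]):
        print(name, "->", outcome(name))   # Adams and Baker have left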

This is a very poor approach. One thing that it does have going for it is that when you do it over and over again, you can allow for ebbs and flows in the time to degree. You can look at the stability of the numbers, and if the numbers are stable over a long period of time, then you have a reasonable guess, at least in aggregate terms, of the proportion that persist to degree.

I did this for physics and I tried to use the National Survey of College Graduates data that Mark Regets talked about, to give me some kind of corroboration, although that had its problems also. I was able, at least, to obtain some comparisons. This first figure does not show the attrition in graduate school, but rather the attrition rate from undergraduate to graduate studies. This is an important issue in physics, because we get about 60 percent of our students considering going on to graduate school in physics, and about 37 percent of them actually doing so. You can see that the numbers are slightly different for men and women, but relatively close. Not surprisingly, the longer students say they are going to delay between undergraduate and graduate studies, the less likely they are to actually go.

We do studies where we ask students about their educational aspirations. We find that about 10 percent of the physics bachelor's recipients who plan to do graduate study in physics aim to stop at the master's. Incidentally, when we ask first-year graduate students the same question, the number has risen to 17 percent. These numbers are not hard and fast, but they can give some sense of intentions at the point of transition between graduate and undergraduate school.

If we go on to the next slide, this is the meat of the figures that we are going to provide. This is our best estimate, using very rough aggregate numbers, of how many actually succeed in getting through, for U.S. students only. These numbers are very stable. We took 10 years of data on entering students, then looked at two years after entrance and computed what percentage of students were coming out with a master's degree. It was a pretty steady number, ranging from 20 to 25 percent over that entire period of time. So 23 percent is a pretty good estimate of how many students come out with master's degrees approximately two years later. (Remember that 17 percent of first-year grad students said that this was their goal, and not all of them would have succeeded; we will come back to this point in a minute.) As I said, given the 10 years of numbers, there is enough robustness here to allow for some students taking one year and some students taking three years, because these are accumulated numbers over time.
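
A sketch of that aggregate calculation, using invented counts in place of the actual AIP figures: divide the master's degrees granted in year t+2 by the students who entered in year t, and check that the ratio stays stable across the decade.

    # Placeholder counts only; the real data cover ten entering cohorts.
    entering = {year: 1000 for year in range(1985, 1995)}
    masters_granted = {year: 230 for year in range(1987, 1997)}

    for year in sorted(entering):
        grads = masters_granted.get(year + 2)   # "typical" two-year interval
        if grads is not None:
            print(year, f"{grads / entering[year]:.0%}")  # ~23% each year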

For students entering with the aim of getting a Ph.D., we are getting somewhere around 45 percent actually emerging with one six years later. You heard 50 percent before; for physics it looks like it is just under 50 percent, while over one-third of students drop out with nothing. One of the reasons I put this up is that I wanted you to see the difference in gender. Physics historically has been very much a male-dominated discipline with only a small proportion of women getting in. These data show that a noticeably smaller proportion of women than men who start actually make it through to the Ph.D.

This difference is somewhat mitigated when we factor in master's degrees, since it turns out that slightly more women than men aim to get a master's degree to start with. We do not have figures on the actual breakdown of people who are getting master's degrees: how many originally intended to and how many did not. As a very rough working assumption, just as around half the students aiming for a Ph.D. actually make it, we might imagine that somewhere around the same proportion of master's students also succeed. Using this assumption, we come out with final figures for Ph.D. students of something like 45 percent getting a Ph.D., 15 percent to 20 percent exiting with a terminal master's, and 35 percent or 40 percent leaving with nothing. I would not stake my life on those numbers, but I think that they are a pretty good first estimate of the way things are.
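
The arithmetic behind those final figures can be reconstructed roughly as follows. The only assumption beyond the numbers quoted above is the one the speaker names: that about half of the master's aimers succeed. These are back-of-the-envelope estimates, not measured values.

    entrants = 100.0
    ms_intent = 17.0                     # % of first-year students aiming only for a master's
    phd_intent = entrants - ms_intent    # the rest aim for a Ph.D.

    ms_awarded_total = 23.0              # % of entrants earning a master's ~2 years in
    ms_intent_success = 0.5 * ms_intent  # assumption: ~half of master's aimers succeed

    # Master's degrees left over must have gone to Ph.D. aspirants:
    terminal_ms = ms_awarded_total - ms_intent_success   # ~14.5% of all entrants

    phd_awarded = 0.45 * phd_intent      # ~45% of Ph.D. aspirants finish in ~6 years

    # Expressed as shares of the Ph.D.-aspirant pool:
    print(f"Ph.D. earned:      {100 * phd_awarded / phd_intent:.0f}%")   # 45%
    print(f"terminal master's: {100 * terminal_ms / phd_intent:.0f}%")   # ~17%, i.e. 15-20%
    print(f"left with nothing: {100 * (phd_intent - phd_awarded - terminal_ms) / phd_intent:.0f}%")  # ~38%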

When we have about two more years of data under our belt, we actually could put in for an NSF grant to do the number crunching on the longitudinal study, using our longitudinal database. We could choose a subset of schools for which we have information for the entire time, and look over 15 to 20 years to see how many names continue to appear on the rosters and how many finally exit with which type of degree.

My final point, relating to the earlier discussion, is that I think there needs to be a little more interest in terms of the actual factors that promote retention, for example, not only the mentoring between students and professors, but also the mentoring between students and students. For many students, survival in graduate school depends heavily on other students who transmit both formal and informal rules of survival. I think that departments that promote students interacting and collaborating with each other in ways that pass this information down would end up promoting the process of retention.


Research Topics Addressed by the AAU/AGS Project for Research on Doctoral Education

Rocco Russo, Educational Testing Service

In the few minutes that I have this afternoon, I would like to provide you with a brief overview of the Association of American Universities (AAU)/Association of Graduate Schools (AGS) Project for Research on Doctoral Education. My discussion will address two objectives: (1) to highlight the activities and the research topics addressed by the AAU/AGS Project, and (2) to identify some basic concerns related to our efforts to study graduate program attrition, retention, and completion.

The AAU/AGS Project, as this research effort has become known, was initiated in 1989 to develop a national longitudinal database intended to track the flow of students through doctoral programs in the arts, sciences, and engineering. The Project initially was located at and hosted by the University of Rochester for a period of four years. In July 1993 the Project was relocated to its present home and host, the Educational Testing Service. Charlotte Kuh played a key role in helping that transition take place. I should point out that some of my comments today will overlap with comments Charlotte presented in her talk this morning. Therefore, I presume that this also is reflective of her influence on the Project in other areas.

Basically, the AAU/AGS Project has been designed to develop a database that provides a foundation to address a particular set of policy issues and questions. Specifically, the database seeks to provide information that improves our understanding of national and institutional trends in doctoral education. In addition, the Project aims to provide data that departments can use to compare the features of their programs to features of other programs, as well as to help policymakers understand the forces affecting the flow of doctoral student talent from admission to completion or attrition – the specific focus of today's meeting.

The Project's institutional focus stems from the voluntary participation of the institutional members of the Association of Graduate Schools within the Association of American Universities. There are currently about 40 participating institutions among the set of 62 AAU institutions. I should note that when the Project started in 1989 there were 58 member AAU institutions. I also would like to note that the 40 participating institutions have not been a consistent set of participants. The Project has experienced institutions joining and then dropping out; other institutions have joined, dropped out, and then rejoined. These participation breaks do impose some data analysis concerns relative to tracking doctoral student progress.

Each year, we request computerized student-level data that are provided to the Project by institutions in five areas, which I will turn to in a minute. There is no direct contact with students in terms of a survey form that is completed and returned to the Project for processing. Rather, the Project relies on the transfer of computerized data that, hopefully, are available at the institution. These data are transferred to the Project for inclusion in its annual data sets and longitudinal database.

Since 1989, the Project has studied and developed annual data sets pertinent to five graduate fields: biochemistry, English, economics, math, and mechanical engineering. In 1992, five additional fields were added: chemical engineering, history, physics, political science, and psychology. To reiterate, our research focus includes students in doctoral programs in these fields, not master's degree students. The data that the Project has worked to gather from its participating institutions include basic demographic information, contextual background information, and academic talent measures defined as Graduate Record Examination scores only. We have considered collecting a measure of grade point average (GPA), but because the Project processes computerized records across 10 different departments it is difficult to deal with the variety of GPA definitions: overall GPA, GPA over the last two undergraduate years, GPA in the major field, etc.

The Project also focuses on gathering student status data in terms of field of study, program track, etc. Finally, the Project has attempted to collect financial aid data. We have struggled to collect information about the type and source of financial aid, resulting in a reduction in the number of data elements requested as of the 1996-97 data collection cycle. Across our set of participating institutions, the transfer of financial aid data has been very sporadic, and information that has been transferred is often incomplete. Thus, we continue to try to fine-tune this process in order to get the dollar amounts of the various types of aid that are provided to doctoral students.

Having briefly described the AAU/AGS Project, I would like to move to a discussion of some of the data concerns that relate to analyses of retention and attrition using the Project's data. First, the Project is sometimes provided a "temporary" student identifier that frequently is not updated, or at least not updated in a timely manner relative to subsequent reporting cycles. This is particularly a concern with non-U.S. students who initially are assigned a temporary ID until they acquire Social Security numbers. This poses problems in the development and maintenance of our longitudinal database as student progress records are linked from year to year. A second concern relates to the operational aspects of graduate programs regarding requirements, or lack of requirements, for continuous enrollment. As a result, the Project must deal with a set of inactive students – those students who are not enrolled full-time or part-time, but have not formally left the program. Included in this group are non-enrolled students who may be making progress toward a degree, who may be considered dropouts, or who may be transfers or leavers from these programs. The size of this group of students, and which of them to include or exclude, poses special concerns in defining and pursuing attrition and retention analyses.
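
The identifier problem can be pictured with a small sketch. The record fields and the ID crosswalk below are hypothetical; the point is that without some mapping from temporary IDs to permanent ones, a naive year-to-year join silently breaks the longitudinal record.

    # Year 1: student reported under a temporary ID; year 2: under an SSN.
    year1 = {"TMP-0042": {"name": "Li Wei", "dept": "physics"}}
    year2 = {"123-45-6789": {"name": "Li Wei", "dept": "physics"}}

    # Must be supplied or maintained by the institution for links to work.
    id_crosswalk = {"TMP-0042": "123-45-6789"}

    def canonical(student_id):
        """Map a temporary ID to its permanent replacement when one is known."""
        return id_crosswalk.get(student_id, student_id)

    linked = {}
    for year, records in ((1, year1), (2, year2)):
        for sid, rec in records.items():
            linked.setdefault(canonical(sid), {})[year] = rec

    print(linked)   # one student, two years of records under a single key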

An additional concern involves data about the student's date of first enrollment. We have found that dates of first enrollment often are modified by institutions, and thus do not necessarily reflect the actual start date of the graduate program. Again, since the Project deals with the transfer of information that is usually downloaded from a computerized information system – one updated at various cycles, or that a particular office updates under a specified set of constraints or procedures – it becomes difficult to define and select cohorts to study.

For example, in instances when a student is readmitted to a program, the readmission date may replace the initial date of first enrollment in the institution's information system. Thus, the date reported to the Project does not necessarily reflect a true start date for the doctoral program. As another example, some institutions may maintain a date of first enrollment that reflects when a student first enrolled at that institution, which may have been for an undergraduate degree or master's program rather than the doctoral program. Again, these data concerns need to be considered fully because they pose difficulties to the identification of cohorts to be included in analyses.

Another concern that the Project confronts has been talked about earlier today, that is, institutional variation in graduate program tracks depending on degree objective. Some programs identify all students as doctoral students, although some may leave with a master's degree. Other programs require students to complete a master's degree before they move on to a doctoral degree program. In these situations, the student may or may not be required to apply formally for admission to the doctoral program. Once again, these issues reflect difficulties in identifying student cohorts to be included in research studies. Added to this latter concern are complications resulting from the institution's approach to funding graduate students. For example, some institutions only fund their doctoral students; master's students are not funded or are funded at different levels. Thus, if you were a student going through one of these programs and had a choice of receiving more funding as a doctoral student or less funding as a master's student, the decision would be fairly clear. These programs would certainly have many more doctoral students, although many of those students may only be pursuing master's degrees.

Another area of concern relates to variations in definitions of candidacy status and the availability of that information. The Project's participating graduate programs, both across and within institutions, define candidacy differently. At some institutions a doctoral student is a candidate from the first day of enrollment; others have a more traditional focus that awards candidacy to students after they pass a particular set of exams or prelims. These policies, practices, and definitions are critical to studies of attrition and to determining at what stage attrition occurs.

A final area of concern results from data entry or information system problems. For example, linking records over multiple years reveals unexplained changes in demographic data. You would be surprised how often the male student suddenly becomes a female student in the fifth year of his program. In addition, we have problems with the sudden appearance of doctoral students. The Project's approach has been to ask institutions each year to identify all of their enrolled students in particular programs and to provide us with information about those students who are no longer enrolled. We have found that several institutions have submitted records of their 1992 enrollments, for example, that include continuous enrollment information about students whose dates of first enrollment were 1991. However, when these student records are linked across years, we find that the records for one or two of these students were not provided with the 1991 data submission. Why these students were present in 1992, but not in 1991, is unclear.

These concerns provide a good picture of what potential issues must be confronted in analyses of attrition, retention, and completion. Basically, my suggestions lean in the direction of the recommendations presented in the NRC's report that encourage institutions and programs to conduct studies of attrition. I think some of these issues cut across a large segment of institutions and programs. Therefore, institution- and program-based studies would help to shore up data at these levels. Having better data that can be gathered and analyzed is important because they enable us to provide meaningful comparative information across departments and across institutions. I also would strongly support the collection and sharing of information at the institutional and program levels. Furthermore, I would promote the collection and reporting of contextual information to enhance comparative studies, to understand program policies and practices better, and to track whether particular graduate programs are expanding or downsizing, as well as the funding patterns under which programs operate.

Should you have any questions, please feel free to contact me and I would be happy to share as much information as is possible. The AAU/AGS Project does prepare an annual set of reports, but I felt that time did not permit a detailed discussion of them today.


Graduate School-Based Studies of Attrition/Retention

Peter Syverson, Council of Graduate Schools

Being at the end of the day, I have been able to listen and reflect on a lot of what we have heard today, and the first thing I would like to do is thank the National Science Foundation for bringing such a diverse group of people together. We have the government people, the association people, the researchers, and the graduate administrators, and I think this is a very interesting group. Graduate administrators are a little bit different, perhaps more like the government people, in that they live and work in the world of the practical. While the researchers (and we have heard the leading researchers in this field) are working on the cutting edge of designing and conducting the perfect study, the administrators need to ask the questions, "What can I do today?" and "What can I do to improve graduate education at my place?" The operative word here, of course, is doable: What can we do? How can I get the conversation going with my departments?

What I would like to do this afternoon is to describe some of the research conducted by graduate schools and some of the tools that graduate deans use to assess attrition. How did I find out about these tools? Well, I asked a question on our Council of Graduate Schools (CGS) listserv of about 250 graduate administrators and got a surprisingly large number of answers. We received responses from all kinds of institutions from all of those wonderful Carnegie categories, from the University of South Alabama to the University of Florida, from the small to the large and from the comprehensive to the research institution.

I am going to discuss three topics. First, I want to categorize and describe some studies of attrition, retention, and completion that we learned about in this process. The second point I want to talk about is what is going on at the undergraduate level. While that may not seem completely relevant, I think it is important and it talks a little bit about the political context that we may, though I hope we do not, face down the road. Finally, I want to note some of the lessons that we have learned in this process.

The most important finding of this study is the graduate schools' focus on completion. They focus on completion rates, not on attrition, and I think that is not surprising for a number of reasons. First, of course, we would all like to focus on our successes, so that is human nature. We have the Survey of Earned Doctorates and the Survey of Doctoral Recipients, and we do tend to focus on those people who do finish. The second reason is that it answers the question that prospective students want to know, which is, "What are the chances of me completing a degree in this program?" Finally, methodologically it is certainly easier to calculate completion rates than it is to compute attrition. With attrition you have to follow and follow; with completion rates, if you can set your beginning cohort, you already have your second data point, which is completion, and that is an agreed-upon definition. If you can set your beginning cohort, you can calculate a completion rate.
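
A minimal sketch of that asymmetry, with invented records: once the beginning cohort is fixed and a degree list exists, the completion rate is a single division, whereas attrition requires a year-by-year judgment about who has truly left.

    # Hypothetical entering cohort and the subset who finished within 8 years.
    cohort_1990 = {"s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08"}
    phds_by_1998 = {"s02", "s04", "s05", "s08"}

    completion_rate = len(cohort_1990 & phds_by_1998) / len(cohort_1990)
    print(f"8-year completion rate: {completion_rate:.0%}")   # 50%

    # Attrition, by contrast, requires deciding for each remaining student
    # whether they are still working, merely unregistered, or truly gone.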

I do want to make one comment about attrition. One of the problems with measuring attrition is that students can languish in a sort of a limbo state for years. They can leave the institution, and Princeton has a very interesting term for this, and I thought you would be interested. They call them ETDCCs – Enrollment Terminated, Degree Candidacy Continues. We are going to find a better word for attrition, but I am not sure that is it.

There are three tiers of graduate school studies, and I am going to go through these three types very quickly. The first are point estimates, which are measures of completion. I do not want to underestimate the value of these, both because they really are conversation starters (they really can get discussions going with departments), and because of their simplicity. I think this simplicity is part of their value; they are something that can be done. Second, we have some flow analysis studies that I want to comment on. I guess Gerry Crawley is here and we are going to look at some data from Michigan State University that he recommended. Third, there is what I would call the comprehensive approach, which relates to taking those data, adding to them qualitative information, and then going further and doing work with departments and students and financial aid.

Let me show you something that the University of California, San Diego has on the World Wide Web. This is sort of truth in advertising. They have statistics on completion rates in selected University of California, San Diego, science and engineering departments. The first three are the important ones to look at because we can see a number of things that we have been talking about today.

The first thing is that there are tremendous differences across departments, and you are going to see this in all of these data. The second thing is that there are tremendous differences not just across departments, but in how the progression works. For example, in biology, we have 24 percent completing in six years or less; in eight years or less it is 77 percent, and then it tails off. In other words, that eighth year is the big break point. When we look at chemistry, there is not such a large difference between the six-year and eight-year figures. People complete early in chemistry; most completers are done in six years or less. In physics there is more of a step-by-step progression.

The University of Wisconsin at Madison also has this information on its Web site. They have extensive information about each department; as you click on each department, you can find out, for example, how many students started, what the sizes of the starting cohorts are, and how many students by now have received master's degrees and Ph.D.s. It is very interesting stuff and I think it is new. The Web is giving us the capability of doing this, and I think institutions are getting very serious about providing this kind of information to prospective students.

The next slide has the same information in graphical form, a little bit easier to look at. Again, if we look at the first three you can see in biology the big jump from the six-year to the eight-year range, in chemistry the smaller jump, in physics more of a stair step. You can see also the differences in total completion rates across fields.

There are also studies that show differences across institutions. I am going to let these institutions be anonymous. These are three different institutions in the three columns, and you are going to see a tremendous difference in completion rates, not just across programs, but across institutions. Some of this may be due to the fact that Institution A grants a higher proportion of master's degrees than the other two institutions, and that may again be a function of how you set your base cohorts. You can look at the tremendous variety of completion rates across these programs. Anthropology was 33, 44, and 55 percent, but sociology was 46 percent for Institution A, 38 percent for Institution B, and 52 percent for Institution C. So there is just a lot of variation, both across departments and across student cohorts. I think it is very important, if we can do this, to average these figures over three or four years, because you will see differences in cohorts in various departments. Those are some of the things you can do with completion rates.

This next graphic also can be found on the Michigan State graduate school Web site. Here is an anonymous department, though I think it is named on the Web site. This is obviously the kind of pattern that you want to see in this kind of data. The blue line represents the students still registered. If you subtract those percentages from 100 you can get the attrition rate. You can see that attrition happens very quickly: you have fast attrition early in the program, and by three or four years out, attrition is basically finished. The red line shows completions; you have a very sharp increase in completions and you get up to your maximum completion very quickly. In the end, you have a very small gap, perhaps that 2 percent difference, of those languishing students. This is sort of the ideal: you start at 100 percent and your students drop out or leave quickly for various reasons; then completions start at 0 percent and go up very fast.
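
The quantities behind a graph like this can be sketched as follows, using invented yearly percentages. The registered and completed shares come straight from institutional records; the share who left follows by subtraction (once completions begin, they must be subtracted too).

    # Placeholder percentages for years 0-8 after entry, not Michigan State data.
    registered = [100, 80, 72, 70, 68, 40, 20, 8, 4]    # % still enrolled
    completed  = [0,   0,  0,  2, 10, 35, 55, 68, 70]   # cumulative % with the Ph.D.

    for year, (reg, comp) in enumerate(zip(registered, completed)):
        left = 100 - reg - comp        # neither enrolled nor finished
        print(f"year {year}: registered {reg}%, completed {comp}%, left {left}%")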

The second slide shows the other pattern. It shows sort of a lingering attrition in the blue line, as students drop out slowly, and then a very slow rise and a continued rise in completion, up to 9 to 10 years. So this is the other side of the coin.

As I understand it, Gerry used both of these graphs in discussions with departments, and they are very effective because they are a very simple way to present this information – very effective, I think, in looking at what is going on in various departments. I think I have seen similar graphs at the University of California, Berkeley, and at Penn. So this kind of presentation can be particularly effective.

Finally, we have what I have termed the comprehensive approach. This is represented both at the University of California, Berkeley, and by a Mellon Project in the Humanities. Both are going beyond quantitative data, putting qualitative data into the mix and setting up methods for enhancing students' participation in graduate education, getting them to their degree much faster, and really tailoring financial aid to each stage of this process. The University of California, Berkeley, has a handout on the table, which is a matrix showing institution- and field-specific factors, and there are plenty of them. The Mellon Project in the Humanities has 10 institutions and 50 departments within those 10 institutions, and again is doing the same kind of thing with graduate students. It is trying to reduce time to degree, increase flow to the graduate degree, and enhance the participation of the students in the life of the department.

Let me just say something about what is going on at the baccalaureate level, because I do think that this is important and it is something that we should know about. As you know, the NCAA is not the only group that does completion rates. U.S. News & World Report publishes completion rates in their guides. The NCAA is the most visible right now. Division I institutions are required by the NCAA to provide six-year completion rates, but there is something new on the horizon; it is just starting right now, and is called Student Right to Know. That is brought to you, to all of us, by the U.S. Congress.

The U.S. Congress passed a law that said that all U.S. universities have to publish a completion rate for their typical degree levels based on 150 percent of catalogue time. So for a four-year BA, that is a six-year rate. If you have an associate degree, which is a two-year program, that would be a three-year rate. The beginning cohort started in 1996 and the first publication of these data, by law (this is not a voluntary survey done by the National Science Foundation, this is by law), will be made available to students in the year 2003. Congress was very concerned about reports of very low completion rates at undergraduate institutions, so they are having institutions publish this. The GRS (Graduation Rate Survey), which is voluntary, is being conducted by the National Center for Education Statistics, and has just been put on the Web.

If an institution goes through the process of filling this out, then it will have the statistical data necessary for Student Right to Know, for making these data available to the public. I do not think this is going to happen at the graduate level. There has not been any discussion of this. I was just speaking with people at the National Center for Education Statistics. They have not heard anything about this, but this is the kind of thing that could happen at the graduate level. I think it really means that we do need to pay attention to how important this topic is.

Let me just finish up with some lessons learned. We have talked about these all day, and what struck me as interesting was that when I wrote these up, the same issues came up again and again. First, this kind of work is very difficult to do. It is very difficult to do practically because you have to maintain a sustained effort over a long period of time. You have to have a long-term commitment to operate this kind of a project. I think that is very important at the institution level. If a graduate school is going to do this, it has to talk to its departments and to its graduate council. I do not think this is the kind of thing you can just walk in and drop in the lap of a department.

There is an old saying about the "cockroach school of management." I am sure you have all heard this: If you turn on the light of information, the cockroaches scatter. Well, those cockroaches can bite, they can come back and bite you. I heard lots of stories as I was doing this little piece of research about departments and faculty that came back and bit the researchers. Politically, this is tricky, and you have to handle it in the right way.

On the other hand, completion data can be an effective tool for initiating action. This information is extraordinarily useful at the graduate level. It is very important that graduate schools develop these kinds of data and use them in talking to their departments. They are a tremendous tool because they give departments an opportunity to take action, and I think that is the important part of these data.

Third, not all attrition is bad. We have been talking in the breaks about the need to come up with a new term for attrition. I think that there is sort of a negative connotation to the word "attrit." The Department of Defense attrits its enemies. I hope during the discussion we can think about this, because not all attrition is bad. Students do graduate to bigger and better things. They do learn skills in graduate school and they are drawn to labor markets, as we have heard. The students who are leaving early often do go on to good things, and I think that attrition, particularly at the early stages, is not a very good term. Most attrition occurs early. Finally, qualitative data are a powerful complement to statistical data. Maybe after hearing all of the discussion today I should reverse that: statistical data are a powerful complement to qualitative data. Let me leave you with that. Thank you for your attention, and I again thank the National Science Foundation for bringing me here.

Participant: Has there been any reduction in the rigor with which time limits to finish graduate school are enforced? When I went to graduate school you had to finish in seven years or you had to repeat your first-year courses. An amazing number of people finished for whom that was an incentive; in the seventh year they got on the stick. I have to guess that that is probably more relaxed now and, if so, there may be people who just go on and on and then sort of fade off. Are there any data on that?

Peter Syverson, Council of Graduate Schools: I am going to defer to a couple of the deans who are here, but I think that, again, there are tremendous differences among departments and institutions. I know that many institutions are taking this very seriously now. They are working with having annual progress reports for graduate students as a way to keep information flowing from the student to the faculty member, to the graduate school, and becoming much tougher on setting these guidelines. Debra, do you want to comment?

Debra Stewart, North Carolina State University: We instituted a continuous registration requirement three years ago, and for two years have had a real bumper crop of graduates. There are fairly simple institutional things you can do to deal with the people who are lingering. What is really important in this attrition discussion is recognizing that there is not just one cause. There are people who just need a little boost. There are people who leave for quite good reasons – all sorts of wonderful new career paths are opened up for these individuals because of their experiences in graduate school. There are people who leave because we did not provide an adequate experience. There are people who leave because they cannot "cut it." There are people who leave because they are not matched right to the program – people who would do quite well if we did a better job. Then there is a whole set of other reasons why people leave.

Somehow, at least speaking from the point of view of a public institution, we absolutely must do a better job of describing this landscape of leavers. We could be very much fulfilling our mission by enabling people to leave successfully and productively and moving into whatever it is that they are moving into, and it is nobody's failure. Somehow we need to be able to capture that piece better than we have so far.

Barbara Lovitts, University of Maryland: I think what Debra [Stewart] is saying is critically important. It is important that people who leave without completing the degree are helped to leave with their pride and dignity intact. I looked at personal consequences of attrition, especially for the interview sample, and the personal consequences for many people are profound: years of emotional distress, years of feeling like failures. These feelings influence their initial entry into the job market. Although we do not have hard data on this, when we look at the level of suicidal thinking, suicide attempts, and actual suicides among non-completers and compare it to National Institutes of Health data, the rates for non-completers are way out of proportion to the national average. I did take inspiration from a few of the non-completers in my study. Some realized that there are other ways to define success. But the personal consequences are real, and I think people need to be helped to leave feeling like they are not failures.

Sandra Terrell, University of North Texas: This is a question to anyone on the panel or anyone in the audience. For institutions that do not have this type of data-gathering process and for associations such as mine, the American Speech-Language-Hearing Association, where do people go when they want to get started in collecting this type of data? We have heard a lot of information about what can be done and so on, but I would think that there might be some organization where they could go if they wanted to get started. Where would they go? How would they start it? What kind of data? What kind of surveys do they need to get started to do it?

Participant: The Council of Graduate Schools provides an excellent service in gathering good practice models, especially if you want to do something grass roots. Peter, do you have a sense of what percentage of schools do exit interviews? It seems to me like the data alone are sort of empty given that there are so many circumstances.

Peter Syverson, Council of Graduate Schools: There are lots and lots of exit surveys out there. Many, if not most, institutions do some sort of exit survey. Many institutions do exit surveys of their doctorate recipients, but not their master's recipients. I am talking about exit surveys of successful completers; they are very common. Surveys of non-completers are much rarer.

Susan Strehle, State University of New York at Binghamton: At our institution, graduate students who are leaving have to obtain formal permission to do so; they have to withdraw formally, and it comes through me. So when I send them a letter saying, fine, you are clear, I send them a survey that asks detailed information about why they have left, and it gives several categories in the free response section. I have useful information from students at both the master's and doctoral levels as a result of that.

Carolyn Shettle, Division of Science Resources Studies, NSF: If you have some kind of administrative records that let you identify individuals at some starting point in time, such as starting their doctoral studies, then it would be feasible to think about using this merge approach, assuming that we will have some basic information, such as SSN that we can merge on. I would be happy to talk with you if you want to contact me. We have an arrangement with National Opinion Research Center, which is our survey contractor on this, to do some analysis at cost back to the university. So this is one way that you may be able to get a picture of some of the basic statistical completion rates that Peter [Syverson] is talking about, without having to wait 10 years to follow a cohort through. It will not give you as much information, but there is something there. If anyone is interested in that, just let me know.

Peter Syverson: I should also say, to plug the CGS annual meeting coming up in the first week in December, that we will be having a session called "Graduate Education Data: What Works." The person organizing that session is Sharon Brucker, who is in charge of the Mellon Project database. She is an expert and has written something for us about how you go about the process of putting together completion rates for your institution. We are going to use CGS as a forum for this kind of discussion.

Janice Madden, University of Pennsylvania: There has been a tightening of time to degree in the past several years. My institution has never had an institutional standard, because if you pick an institutional standard like seven years, it is trivial for chemistry but very hard on anthropology, East Asian studies, and other programs. I have emphasized programs having a norm that can be tested. That is why I do like the approach presented by Michigan State – to see that the program has some sort of norm. If graduation rates are all over the place, then you know that there is no norm in the field. It is difficult to go to a university norm, although a couple of institutions do that. We have also gone the annual-report route to nudge people.

We also look at students who have been working on dissertations or research for five years, requiring every department to review and make sure that those students are working full-time on their degrees. If they are not, then we change them purposely to half-time. This ensures that international students are not simply using registration to maintain their ability to stay in the United States, which is one of our concerns. I got thank you letters from graduate students telling me that they had not figured out how to cut the strings. They realized that their lives had moved in different directions and it was nice to get some closure.

Rocco Russo, Educational Testing Service: I want to respond to Sandra [Terrell] and share that, among the 40 institutions that we work with through the AAU/AGS Project, there exists a wide range of capability in reporting and providing data. It varies based on the institution's interest in asking questions, as well as the graduate dean's need to ask questions. One strategy would be to identify what your peer institutions are doing. PeopleSoft was mentioned as one vendor and there are other commercial vendors developing information systems that seem to be catching on fast. Commercial vendors are even working with some institutions that have good information systems up and running.

Orlando Taylor, Howard University: This might spill over to the next discussion, and it might be more appropriate for some organization like CGS. It appears to me that we have discussed a number of issues and possibilities for categories of data that might be helpful to us. While I recognize that each one of us, particularly speaking of my colleagues as graduate deans, will want data that meet our own institutional needs, there might be a set of critical questions that all of us need to ask, and they may be broken down around a variety of subcategories such as degree objective, discipline, demographics, etc. If we could agree that NSF might commission, or CGS or somebody would get a group of people together to propose some critical questions that everybody ought to ask, and some ways of asking them, then we would come closer to getting our arms around this thorny, complex topic that we have been talking about all day. My fear is that when we leave here today, we will all go our own ways and in a few years we will be at the same spot.

Jules LaPidus, Council of Graduate Schools: In 1991, CGS published the results of a 50-university study called "The Role and Nature of the Doctoral Dissertation." It just deals with the dissertation part of the process. But there is a lot in that publication about advisor/student relationships; about the role of the advisor; about what leads to completion; about what kind of data departments keep in terms of student completion rates, time to degree, and so on. That publication is on its way to our Web page [www.cgsnet.org] and will be available to everybody. Some people who were not around as graduate deans in 1991 may not be familiar with it, but I think you would be interested in it. It covers all disciplines in all fields, all universities.
