
Answers in the Tool Box: Academic Intensity, Attendance Patterns, and Bachelor's Degree Attainment — June 1999

Introduction:
Departing from Standard Accounts of Attainment

This study analyzes the relationship between the academic resources students bring to college, their long-term attendance patterns, and their degree completion rates.

The study is related to, yet departs from, a number of honored lines of research on the determinants of educational attainment. It diverges from previous research on attainment principally by emphasizing the details of students' high school and college curricula and academic histories that are available from transcript records. Its principal data are drawn from the new (1998) restricted edition of the High School & Beyond/Sophomore cohort files (hereafter referred to as the HS&B/So)(1). This longitudinal study followed a national sample of students from the time they were in the 10th grade in 1980 to roughly age 30 in 1993(2).

In round numbers, of the high school graduates in this cohort, 65 percent attended some form of postsecondary education and 40 percent attended a 4-year college by age 30. These are basic "access rates." Of the group attending 4-year colleges at some time, 63 percent earned a bachelor's degree.

While the 63 percent completion rate sounds impressive for a mass system of higher education, it masks an unhappy differential in degree completion rates by race/ethnicity. Furthermore, we have since reached a 75 percent access rate (Berkner and Chavez, 1997), and, in the late 1990s, our national policies have invited an even higher percentage of high school graduates into postsecondary education. Simply to maintain--let alone improve--our long-term degree completion rate will take a great deal of effort. We need guidance.

So this study asks a simple question:

What contributes most to bachelor's degree completion of students who attend 4-year colleges at any time in their undergraduate careers?

The answers to that question help us develop strategies to address anomalies, paradoxes, and disappointments in educational attainment after high school. The answers suggest what tools to put in our tool boxes and where to take them. The answers advise us how to use the tools in an environment of changing student enrollment behavior. The answers clearly instruct us as to what is important in research on this issue and what is no longer so important. The answers may not be exhaustive, but without them, there is no tool box.

The HS&B/So is the second of three national age-cohort longitudinal studies carried out under the design and sponsorship of the National Center for Education Statistics (NCES). From time to time in this monograph, the other two studies will be invoked: the National Longitudinal Study of the High School Class of 1972 (hereafter referred to as the NLS-72) and the National Education Longitudinal Study of 1988 (hereafter referred to as the NELS-88). For the reader's convenience in benchmarking, think of these three studies as following the scheduled high school graduating classes of 1972, 1982, and 1992. The last of these studies, the NELS-88, is still in progress. Occasionally, too, data from NCES's Beginning Postsecondary Students longitudinal study of 1989-1994(3) will enter the discussion.

Why Are We Asking Such an Obvious Question?

While employers increasingly use "college degree" as a screening device in hiring (National Center on the Educational Quality of the Workforce, 1998), and legislatures everywhere ask for evidence of "graduation rates," the research literature devotes infinitely more attention to access than to degree completion. How strange! And stranger still when, in spite of all we know about recurrent education and delayed entry to higher education, in spite of all the public displays of 30-year-olds returning to complete degrees they abandoned to have children or start businesses, so little research uses long-term degree completion as its time frame. In the country of the second and third chance, our legislation and our research ask us to hurry up and get it over with, and judge both individuals and institutions negatively if they fail to get it over with fast.

Yet, as we will see, the winds have changed, and both our legislation and our research have yet to acknowledge that change: going to college in the 1990s means something very different from what it meant 20 years ago. Unless we recognize these changes, the higher education enterprise will drift like a ship in the horse latitudes. One reason for asking the simple question is to help policy re-navigate to find the winds and the new currents of student attendance patterns.

Yet another reason for asking the basic question lies in contemporary policy disputes involving admissions formulas at selective institutions, principally as a by-product of a new dispensation for affirmative action. The heat of those disputes has unfortunately been raised by a dubious argument over two indicators of pre-college attainment—grades/class rank versus test scores—that make no reference whatsoever to curricular content. We owe it to students, and to minority students in particular, to assess the most profitable paths to degree completion in any institution. This obligation decrees that we explore the potential power of secondary school curriculum to set students on trajectories that will culminate in a satisfactory ending for them and for the society writ large. We talk a great deal in policy about school-college connections and collaborations. Test scores and class rank have little to do with those connections and collaborations. Curriculum has everything to do with them.

In these pages, then, the reader will see a great deal about college attendance patterns and high school curriculum.

Structure of This Monograph

Parts I-III of this monograph set up the principal touchstones for the story that seeks to answer our basic, simple question. We begin by demonstrating how to construct an index of student academic resources (Part I). The analysis then sorts through a few major variables frequently found in analyses of persistence and degree completion, discarding some as unreliable, and reconstructing others (Part II). Finally, we open up the new world of attendance patterns in higher education, and explore its significance for both research and policy (Part III). College administrators, state policy-makers, researchers, and journalists often ask "How did you get that number?" or "What do you mean by that variable?" Watching the construction and testing of a variable provides the answers. The major variables in this analysis are not buried in appendices, though to make life easier for the general reader, some of the technical material and comments on the literature have been placed in endnotes.

Part IV builds two series of statistical models to explain what made a difference in bachelor's degree completion by age 30 for the students in the HS&B/So cohort. First, a series of linear Ordinary Least Squares regressions seeks to explain how much of the variance in bachelor's degree attainment can be attributed to different background characteristics, achievement, and experiences when all other variables in an equation are held constant. Each model in the series takes up a successive stage in students' life histories, dropping those variables that do not make a difference and adding new sets of variables until we reach a plateau of explanation. Second, the same models are tested using logistic regressions in a manner suggested by Cabrera (1994) to provide a different type of portrait of the results. A sketch of this staged strategy appears below. The concluding section of the monograph explores the contemporary significance of the findings in terms of achieving a greater degree of equity in degree completion rates.
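For readers who want to see the mechanics, the following is a minimal sketch (in Python, on synthetic data) of the staged-modeling strategy just described. It is not the study's actual code, and the variable names (an academic resources index, an SES composite, a continuous-enrollment flag) are illustrative assumptions.

    # A minimal sketch of the staged-modeling strategy described above,
    # using synthetic data; this is NOT the study's actual code, and the
    # variable names are illustrative assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "acres": rng.normal(0, 1, n),               # academic resources index
        "ses": rng.normal(0, 1, n),                 # socioeconomic status composite
        "continuous_enroll": rng.integers(0, 2, n), # attendance-pattern flag
    })
    # Synthetic outcome: bachelor's degree completion by age 30 (0/1)
    xb = 0.8 * df["acres"] + 0.3 * df["ses"] + 0.9 * df["continuous_enroll"] - 0.5
    df["ba_complete"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

    # Each stage adds a block of variables from a later point in the
    # student's history, and we watch the explained variance plateau.
    stages = [
        ["ses"],                                # background only
        ["ses", "acres"],                       # + pre-college academic resources
        ["ses", "acres", "continuous_enroll"],  # + college attendance pattern
    ]
    for cols in stages:
        X = sm.add_constant(df[cols])
        ols = sm.OLS(df["ba_complete"], X).fit()
        print(cols, "adjusted R^2 =", round(ols.rsquared_adj, 3))

    # The final specification re-fit as a logistic regression, in the
    # spirit of Cabrera (1994).
    logit = sm.Logit(df["ba_complete"], sm.add_constant(df[stages[-1]])).fit(disp=False)
    print(logit.params.round(3))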

"Academic Resources" and the Dow Jones

I did not invent the term, "academic resources," as used in this paper. Credit goes principally to Karl Alexander and his various associates over nearly two decades of research on the paths from secondary to postsecondary education (e.g. Alexander and Eckland, 1973; Alexander and Eckland, 1977; Thomas, Alexander, and Eckland, 1979; Alexander and Cook, 1979; Alexander, McPartland and Cook, 1981; Alexander and Cook, 1982; Alexander, Riordan, Fennessey and Pallas, 1982; Alexander and Pallas, 1984; Alexander, Pallas, and Holupka, 1987; Alexander, Holupka, and Pallas, 1987; Pallas and Alexander, 1983), and this material will be cited frequently. Alexander and his colleagues persistently demonstrated that the power of a student's academic background overwhelms the predictive power of demographic variables (gender, race, socioeconomic status) in relation to test performance (Alexander and Pallas, 1984), college attendance (Thomas, Alexander, and Eckland, 1979) and, in one study, college completion (Alexander, Riordan, Fennessey and Pallas, 1982), yet few higher education researchers pay much attention to this body of literature. At the same time, what Alexander and his colleagues mean by "student academic background" calls out for revisiting and reconstruction, and one of the purposes of this study is to expand, deepen, and test the concept of student academic resources in light of both transcript data and long-term paths through postsecondary education to degree completion.

Indeed, while most related research focuses on access or year-to-year retention, the dependent variable in this study is completion of bachelor's degrees, the Dow Jones Industrial Average of U.S. higher education. The reasons for focusing on degree completion relate principally to equity issues in an age when 65 percent of high school graduates enter higher education directly from high school and 75 percent enter within two years of high school graduation (Berkner and Chavez, 1997). While the "college access gap" between whites and blacks, and between whites and Latinos, has closed from the 11-15 percent range to 5 percent over the past two decades, the degree completion gap remains stubbornly wide at 20 percent or higher (Smith et al., 1996, p. 25), and it behooves us to inquire into this unhappy paradox in somewhat different directions than have been followed in the past.

The reasons for focusing on attendance and curriculum patterns in the analysis are that we have become as mobile and consumeristic in higher education as we are in the rest of our lives, that we no longer stay in one place for prescribed periods of time, and that we feel free to mix various life activities in whatever order we wish--or whatever order is made necessary by other life commitments and circumstances (Blumberg, Lavin, Lerer, and Kovath, 1997). What one studies may thus be more important than the many places at which one studies it.

Our abiding interest in research on retention and completion is to discover those aspects of student and institutional behavior that actually can be changed to improve the odds of attainment, even though our definition of attainment may be different from that of some students (Tierney, 1992). We look for concrete and practical suggestions that can be assigned to particular individuals and groups to carry out, not the generalized, abstract flourishes that Orwell (1949) called "soft snow," and that we witness at the conclusion of too many research articles that expend their energy on complex statistical modeling.

Because we seek behavior that can be changed, our research must focus on conditions that are realistically subject to manipulation by people in the best positions to do so, people who can use the tool box. For example, some research has demonstrated the strong role of parents, peers, and significant others in student decisions to attend college, choose a particular college, choose a particular major, and choose to persist (Bean, 1982). More recent research has demonstrated that reputation and location are criteria that overwhelm all others (influence of parents, peers, etc. included) in choosing a postsecondary institution (Choy and Ottinger, 1998). While high school counselors and teachers can work with parents in matters of preparing students for college and encouraging application, there is very little anybody else can do to orchestrate these external players in terms of affective influences on post-matriculation student enrollment and persistence behaviors. There is even less that one can do within the expanding patterns of transfer and multi-institutional attendance that the HS&B/So 11-year history (to age 30) reveals and that will be detailed below. For those beyond the age of 30, the decision to return to complete degrees begun earlier is even more influenced by complex interactions of external and personal factors (Smart and Pascarella, 1987; Spanard, 1990). Events in life-course history such as changes from dependent to independent status, marriage and divorce, and increases in the number of children in a household lie beyond the micro-management of higher education faculty and staff(4).

The Tenor of History

There is a tenor to the approach and methodology of this study that also should be posited at the outset because it departs from reigning models. The tenor is that of exploratory historical investigation, and thus inevitably conditions what I regard as credible evidence, what meets the criteria for statistical relationships, what type of regression analysis is best suited to chronological story-telling, and what we might call "the problem of the typical."

History is a fiercely empirical discipline. The evidence it assembles is all tangible: artifacts, diaries, parish records, letters, communiqués, e-mails, texts, photographs, recordings, ruins, taped interviews, dictionaries, maps, ships' manifests and logs, etc. Truth often lies in--and can be extracted from--the details. Historians do not design or conduct surveys (their subjects are often dead, so surveys are a moot methodology). Rather, they will find surveys and treat them as texts (see, for example, Clubb, Austin, and Kirk, 1989). They are interested, foremost, in the traces of human behavior, "the marks, perceptible to the senses, which some phenomenon . . . has left behind" (Connerton, 1989, p. 13). Thus, unobtrusive evidence is of paramount value in history. While historians may speculate about the meaning and significance of that evidence, they treat it as authoritative, even when they take samples of the evidence as representing characteristics of populations (Haskins and Jeffrey, 1990, chapter 4). They may discover that the evidence was contrived, but they then will treat the fact of contrivance as equally authoritative.

What does this fierce empiricism mean for interpreting a data set that was prepared for the National Center for Education Statistics (or any other federal agency, for that matter)? The fact that we paid someone to gather and code the data does not mean the data were handed down from Mount Sinai and must never be questioned. The data set consists of historical evidence, which is "in no sense privileged" (Connerton, 1989, loc. cit.). Every case of every variable requires examination. Anomalies are subject to multiple examinations. Editorial adjustments and corrections are made only under the strictest decision rules. But these adjustments and corrections must be made or we will never tell a true story. The practice is called historical reconstruction.

While accounts of these editorial processes and their decision rules have been published elsewhere (Adelman, 1997; Adelman, 1995), an example would be helpful. Assume a student for whom we have college transcripts and a full postsecondary history beginning in the fall of 1982. But the secondary school record for this student appears strange and spotty. We do not know what the coders were looking at when they entered data from this student's high school transcript back in 1982, but there appear to be only 6 Carnegie units on the transcript, no indication that the student changed high schools (which might explain a truncated record), and no indication the student ever studied mathematics or foreign language. The college transcript records, however, show that in the fall semester of 1982, the student entered a flagship campus of a state university, earned a B+ in calculus 3 and a B in Russian conversation and composition 5. On the basis of this information alone(5), we can reasonably revise our record of the student's high school transcript to include 3 units of mathematics through pre-calculus and 2 units of foreign language. Using independent sources, we know the requirements for high school graduation in the student's home state (Medrich, Brown and Henke, 1992). The flagship campus of the state university would not accept the student unless he/she was a high school graduate, hence had met those requirements. By the time we are done, our record of the student's high school transcript is a lot more accurate than what we originally received. The secondary school records of approximately 18 percent of the HS&B/So students were subject to this type of adjustment.
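To make the decision-rule logic concrete, here is a schematic sketch (in Python) of this kind of back-fill rule. It is illustrative only, not the editing code actually used on the HS&B/So files; the record fields and thresholds are assumptions drawn from the example above.

    # A schematic sketch of the back-fill decision rule described above.
    # Illustrative only -- not the editing code actually used on the
    # HS&B/So files; field names and thresholds are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class StudentRecord:
        hs_math_units: float = 0.0
        hs_foreign_lang_units: float = 0.0
        hs_total_units: float = 0.0
        first_term_college_courses: list = field(default_factory=list)

    def reconstruct_hs_record(rec: StudentRecord) -> StudentRecord:
        """Apply conservative back-fill rules to a truncated high school record."""
        courses = [c.lower() for c in rec.first_term_college_courses]
        # Rule: credit in third-semester calculus in the first college term
        # implies at least 3 high school units of math through pre-calculus.
        if any("calculus 3" in c for c in courses) and rec.hs_math_units < 3:
            rec.hs_math_units = 3.0
        # Rule: a fifth-level foreign language course implies at least
        # 2 high school units of that language.
        if any("conversation and composition 5" in c for c in courses) \
                and rec.hs_foreign_lang_units < 2:
            rec.hs_foreign_lang_units = 2.0
        return rec

    # The student described in the text:
    student = StudentRecord(
        hs_total_units=6.0,
        first_term_college_courses=["Calculus 3",
                                    "Russian Conversation and Composition 5"],
    )
    print(reconstruct_hs_record(student))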

Editing in this manner may involve inference, but not what statisticians call imputation. It does not assign specific behaviors or attainments to masses of people on the basis of the intrinsic characteristics of those people. It does not say, "because you look like all other people of a certain configuration of characteristics and because your survey form is missing transcripts, we are going to assign you the degrees, majors, test scores, etc. of those other people." At no point in editorial work on a data set will a historian make such assumptions and impute characteristics to individuals on the basis of group models.

Explanation More Than Prediction

The second departure from reigning modes of analysis of postsecondary careers derives from one of the most fundamental lessons of history: while stories may repeat themselves, they never do so in the same way. Even when they employ quantitative methods, historians are not in the prediction business, and, with rare exceptions, do not worry about directional causality(6). Researchers have spent the past two decades attempting to squeeze every drop of predictive blood from the data on college access and persistence. They have consumed thousands of journal pages with arguments over the comparative power of different statistical models: factor analysis, structural equation modeling (e.g., LISREL), logistic regression, weighted least squares regression, probit, etc. (Dey and Astin, 1993). By the time we are done reading this library shelf, contrary predictions often arise, and the point is lost on anyone who might use the information. While employing statistical models commonly used in prediction, this study is less interested in forecasting the behavior of future students than in explaining what made a difference for past students.

To be sure, the story may provide guidelines for thinking about the experience of future cohorts, but the groups will inevitably differ. A statistical model derived from a class that entered higher education in 1968, when the majority of students were middle-class white males who enrolled full-time and directly from high school, may reveal relationships that are worth exploring with contemporary populations, but is still unique to its time and circumstance. The proper form of a sentence stating the conclusion of an equation for such a cohort might be, for example, "the socioeconomic composition of one's high school class had a greater net impact on attainment for this group than the selectivity of the first college attended." That sentence is, in fact, a re-write of a major conclusion reached by Alexander and Eckland (1977).

The story we tell about a cohort rests on the assumption that what we observe is representative, or "typical," of that population. One of the principal reasons for performing statistical tests, in fact, is to demonstrate that the story line and its components did not come about by chance, and that there is far more coherence than chaos. The task is analogous to that faced by historians attempting to determine what is typical of a particular culture or sub-culture during a particular period. The notion of "typical" may involve a range or array of behaviors, attitudes, conditions--and these are derived from the traces of artifacts, records, and the style of texts.

Searching through the details of these remains, one cannot determine what is "typical" by collapsing variables into categories at such a level of aggregation that a constructive story-line is impossible to detect. For example, if one is going to describe the geographic "region" of potential college students (St. John, 1991; St. John and Noell, 1989) under the conviction that the quality of student preparation is determined by geographic region, then four "regions" consisting of 13-14 states each is much too large an aggregation. There are nine Census divisions one can invoke, and the more promising analytic combination is that of Census division by urbanicity of high school (urban, suburban, rural), yielding 27 cells. If one wants to know the geographic origins of students taking more than two college courses at remedial levels so that one can address the comparative severity and distribution of the problem of remediation, 27 combinations offer some compelling suggestions(7). Four regions do not help us take our tool boxes to the places they are needed. As Hearn (1988) noted, "it is in the details that the most precise, and most useful, answers lie."
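As a small illustration of the aggregation point (a sketch on synthetic data with assumed category labels, not the study's analysis), crossing the nine Census divisions with three urbanicity categories yields the 27-cell map described above:

    # Illustrative sketch only: synthetic data showing how 9 Census
    # divisions crossed with 3 urbanicity categories yield 27 cells.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    divisions = ["New England", "Middle Atlantic", "East North Central",
                 "West North Central", "South Atlantic", "East South Central",
                 "West South Central", "Mountain", "Pacific"]
    urbanicity = ["urban", "suburban", "rural"]

    n = 5000
    df = pd.DataFrame({
        "division": rng.choice(divisions, n),
        "urbanicity": rng.choice(urbanicity, n),
        # Flag: took more than two remedial-level college courses (synthetic)
        "heavy_remediation": (rng.random(n) < 0.15).astype(int),
    })

    # 9 divisions x 3 urbanicity categories = 27 cells, each with its own rate
    rates = df.pivot_table(index="division", columns="urbanicity",
                           values="heavy_remediation", aggfunc="mean")
    print(rates.round(3))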

What Evidence Do We Use? The Case of Student Self-Reports.

How do we know that students were taking remedial(8) courses in college, let alone what kind of remedial courses? Do we ask the students, and, if so, how (are you enrolled this term in a remedial course? were you ever enrolled . . .?)? Do we use a cross-sectional survey of registrars (Lewis and Farris, 1997)? Or do we use college transcripts, and trace remedial problems back through high school transcripts? Let us briefly compare what we find from each of these methods.

Granted, these are three different surveys with different time periods. But the discrepancies between the unobtrusive evidence (transcripts), second-party accounts, and student testimony are simply too great for comfort(9), and it is worth further demonstration of this problem.

Two tables should drive home the virtues of unobtrusive evidence. Table 1 demonstrates the disparities between students' claims to degree attainment and the evidence of their transcript records. What do we see in this table? (1) about 7 percent of those who claim to earn a bachelor's degree or higher have earned, at best, an associate's degree; (2) some people do not understand the question about highest degree, and claim less than the evidence shows they have earned; (3) the concept of a "certificate" is very slippery, and people will try to claim at least some minimum postsecondary credential as psychological compensation for their time; and (4) because there was a 12-15 month gap between the date of the 1992 survey interview and the period of 1993 transcript receipt, it appears that some students in graduate school expressed expectations for degree completion in 1992 that were not realized by 1993.

Table 1.–Discrepancies between highest degree claimed and highest degree earned by students in the High School & Beyond/Sophomore cohort.


                         Highest Degree Earned by 1993 (Transcript Evidence)

Highest Degree
Claimed in 1992:     None   Certificate  Associate's  Bachelor's  Graduate   % of All
None                 93.0%      2.2%        1.4%          2.5%       0.4%*     36.8%
Certificate          48.9      49.2         1.0           0.7*       0.2*      14.3
Associate's          16.3      18.2        63.0           2.4*       0.1*      12.5
Bachelor's            4.7       0.8*        1.6          75.0       18.0       30.6
Graduate              2.7*      ---         0.9*          9.4       87.0        5.8

% of Earners         45.0      10.4         9.1          24.8       10.7      100.0

Is the gap between claim and reality at the bachelor's level something to worry about? For the NLS-72, a decade earlier, this gap was in the 6 percent range (Adelman, 1994). The increase is not statistically significant, but in both cohorts there are significant differences by race and SES, and under those circumstances, the transcripts must be the default.

Since the primary variables in Parts I and II of this monograph are pre-collegiate, it might also be helpful to ponder the differences between student accounts of grades and course-taking in high school and the evidence of their high school transcripts. Table 2 is extracted from Fetters, Stowe, and Owings's (1984) analysis of this issue in the HS&B/So. It is obvious that we have significant differences in reporting of both grades and course-taking by race, and in course-taking by SES. The case of mathematics course-taking should be particularly troubling to anyone who analyzes pre-college preparation on the basis of student self-reports (in the national data, that includes the annual survey of freshmen by the Cooperative Institutional Research Program and the Student Descriptive Questionnaire that accompanies administration of the SAT). Rosenbaum (1980) mapped even greater variances than these in the NLS-72. In two successive cohorts, then, students have been consistent in claiming more coursework than their records show.

Table 2.–Discrepancies between student reports of grades and amount of coursework in high school, by selected student demographic characteristics.


                                    Race/Ethnicity         SES Composite
              All    Men   Women   White  Black  Latino   Low    Med    High

GPA
  Student     2.84   2.71   2.96   2.91   2.62   2.57     2.64   2.85   3.07
  Transcript  2.62   2.51   2.73   2.71   2.31   2.39     2.44   2.63   2.84
  Bias         .22    .20    .23    .20    .31    .18      .20    .22    .23

Semesters of Mathematics, Grades 10-12
  Student     4.15   4.31   4.02   4.15   4.50   3.97     3.68   4.07   4.76
  Transcript  3.07   3.17   3.03   3.27   2.65   2.39     2.27   3.03   4.02
  Bias        1.08   1.14    .99    .88   1.85   1.58     1.41   1.04    .74

Semesters of Science, Grades 10-12
  Student     3.43   3.58   3.30   3.47   3.46   3.13     2.92   3.26   4.09
  Transcript  2.87   2.99   2.78   3.00   2.59   2.33     2.29   2.78   3.66
  Bias         .56    .59    .52    .47    .87    .80      .63    .48    .43

NOTE: "Bias" = student report minus transcript record.

And yet student self-reports continue to be the principal sources of information invoked in the mass of studies on the determinants of college retention and completion. To date, the research community has proven itself intimidated by the richness and power of the details that lie in transcript records. For example, much of the literature on college access was driven by a concern with tracking in secondary schools, and hence collapses the entire range of a student's high school academic background into the dichotomous variable "academic/non-academic curriculum"--or sometimes, the trichotomized academic/general/vocational heuristic for curriculum--thus ignoring some of the most important variations that occur under those umbrellas(10). It is no wonder that serious consideration of what people study in high school is completely absent from investigations that squeeze the rocks of pre-collegiate "determinants" of college access and persistence, and policy follows suit. People are then surprised when students on putatively "academic" (also known as "college preparatory") tracks wind up in remedial courses in college and/or do not complete degrees.

Table 3.–Selected content and intensity measures for students in high school academic/college preparatory programs, High School & Beyond/Sophomore cohort



In the mid-1980s, Alexander and his colleagues began to study curriculum effects with this more empirical flavor, realizing that there was a compelling reason to move away from the dichotomous presentation of high school curriculum, particularly in light of the background research for the National Commission on Excellence in Education (e.g., Adelman, 1983) and its subsequent recommendations for the "new basics" curriculum of A Nation at Risk. Using high school transcripts from ETS's Study of Academic Prediction and Growth that tracked students in 15 communities during the period 1961-1969, Alexander and Pallas (1984) found that even among "academic track" high school graduates, only 53 percent met the "new basics" criterion for science, 71 percent did so in mathematics, and a paltry 31 percent matched the mark in foreign languages.

The HS&B/So data allow a more contemporary--and detailed--confirmation. There is an obvious range of intensity and quality in the high school "academic" or "college preparatory" curriculum. Table 3 provides a very simple demonstration. It takes all HS&B/So students for whom an academic curriculum was indicated by the student's school, turns to the transcripts, and displays some basic disappointments about the content of that curriculum. These data clearly indicate that some disaggregation of "academic curriculum" is called for. Once again, the more precise and useful data (to guide students onto trajectories leading not merely to college but to degree completion) lie in the details.

