Evaluating Online Learning: Challenges and Strategies for Success
July 2008

Evaluating Multifaceted Online Resources

Like many traditional education programs, online learning resources sometimes offer participants a wide range of learning experiences. Their multifaceted offerings are a boon for students or teachers with diverse interests, but they can pose a dilemma for evaluators seeking uniform findings about effectiveness. In the case of an educational Web site like Washington's DLC, for example, different types of users will explore different resources; some students may take an online course while others may be researching colleges or seeking a mentor. Virtual schools that use multiple course providers present a similar conundrum, and even the same online course may offer differentiated learning experiences if, for example, students initiate more or less contact with the course instructor or receive varying degrees of face-to-face support from a parent or coach. (A similar lack of uniformity can be found in traditional settings with different instructors using varying instructional models.)

When faced with a multifaceted resource, how is an evaluator to understand and document the online learning experience, much less determine what value it adds?

Several of the evaluations featured in this guide encountered this issue, albeit in distinct ways. DLC evaluators were challenged to assess how students experienced and benefited from the Web site's broad range of resources. Evaluators of Maryland Public Television's Thinkport Web site, with its extensive teacher and student resources from many providers, similarly struggled to assess its impact on student achievement. In a very different example, the evaluators for the Arizona Virtual Academy (AZVA) faced the challenge of evaluating a hybrid course that included both online and face-to-face components and in which students' individual experiences varied considerably.

Combine Breadth and Depth to Evaluate Resource-rich Web Sites

With its wide range of services and resources for students and teachers, DLC is a sprawling, diverse project to evaluate. Through this centrally hosted Web site, students can access over 300 online courses, including all core subjects and various electives, plus Advanced Placement (AP) and English as a Second Language courses. DLC also offers students online mentors, college and career planning resources, and an extensive digital library. In addition, DLC offers other resources and tools for teachers, including online curricula, activities, and diagnostics. For schools that sign up to use DLC, the program provides training for school personnel to assist them in implementing the Web site's resources.

Initially, DLC's evaluation strategy was to collect broad information about how the Web site is used. Later, program leaders shifted their strategy to focus on fewer and narrower topics that could substantiate the program's efficacy. The evaluators focused on student achievement in the online courses and on school-level supports for educators to help them make the most of DLC's resources. Together, the series of DLC evaluations—at least five distinct efforts to date—combines breadth and depth, builds on findings from year to year, and has produced important formative and summative findings (see Glossary of Common Evaluation Terms, p. 65).

In the project's first year, Debra Friedman, a lead administrator at the University of Washington (a DLC partner organization), conducted an evaluation that sought information on whom DLC serves and what school conditions and policies best support its use. To answer these questions, the evaluator selected methods designed to elicit information directly from participants, including discussions with DLC administrators, board members, school leaders, and teachers, as well as student and teacher surveys that asked about their use of computers and the Internet and about the utility of the DLC training. The evaluator also looked at a few indicators of student achievement, such as the grades that students received for DLC online courses.

The evaluation yielded broad findings about operational issues and noted the need for DLC to prioritize among its many purposes and audiences. It also uncovered an important finding about student achievement in the online courses: The greatest percentage of students (52 percent) received Fs, but the next greatest percentage (37 percent) received As. To explain these outcomes, the evaluator pointed to the lack of uniformity in students' motivation and needs and in the type of academic support available to them. The evaluator also noted the varying quality of the vendors who provided courses, finding that "some vendors are flexible and responsive to students' needs; others are notably inflexible. Some are highly professional operations, others less so."9 This evaluation also described substantial variation in how well schools were able to support the use of DLC resources. The findings helped program administrators who, in response, stepped up their efforts to train educators about the Web portal's resources and how to use them with students. The evaluation also was a jumping-off point for future assessments that would follow up on the themes of student achievement in the online courses and supports for educators to help them take advantage of DLC's offerings.

In the project's second year, project leaders launched another evaluation. This effort consisted of student focus groups to identify students' expectations of DLC's online courses, their pre-course preparation, their overall experience, and their suggestions for improving the courses and providing better support. As a separate effort, project leaders also contracted with an independent evaluator, Cohen Research and Evaluation, to learn more about the behavior and motivations of students and other users, such as school librarians, teachers, and administrators. This aspect of the evaluation consisted of online surveys of students, teachers, and school librarians, and interviews with selected teachers, librarians, and administrators (primarily to help develop survey questions). To gain insight into how well students were performing in the classes, the evaluators analyzed grades and completion rates for students enrolled in DLC courses. The evaluation activities conducted in the second year again pointed to the need for more school-level support for using DLC resources. The evaluators found that some schools were excited about and committed to using the Web site's resources, but were underutilizing them because they lacked sufficient structures, such as training and time for teachers to learn about the site's offerings, internal communication mechanisms to track student progress, and adequate technical support.

When DLC's leaders began to contemplate a third-year evaluation, they wanted more than basic outcome data, such as student grades and completion rates. "We can count how many courses, we know the favorite subjects, and we know the grade averages and all of that," says Judy Margrath-Huge, DLC president and chief executive officer. What they needed, she explains, was to get at the "so what," meaning they wanted to understand "what difference [DLC] makes."

The evaluation team knew that if its effort were to produce reliable information about DLC's influence on student achievement, it would need to zero in on one, or just a few, of the Web site's many components. Some features—DLC's vast digital library, for example—simply were not good candidates for the kind of study they planned to conduct. As Karl Nelson, DLC's director of technology and operations, explains, "It is very difficult to evaluate the effectiveness of and to answer a 'so what' question about a library database, for example. It's just hard to point to a kid using a library database and then a test score going up." Ultimately, says Nelson, DLC's leaders chose to look primarily at the online courses, believing that this was the feature they could best evaluate.

DLC leaders hired outside evaluators, Fouts & Associates, to help them drill down into a specific aspect of student achievement—determining the role that DLC online courses play in 1) enabling students to graduate from high school and 2) helping students become eligible and fully prepared for college. In this evaluation, the researchers identified a sample of 115 graduated seniors from 17 schools who had completed DLC courses. The evaluators visited the schools to better understand online course-taking policies and graduation requirements and to identify DLC courses on the transcripts of these 115 students. At the school sites, evaluators interviewed school coordinators and examined student achievement data, student transcripts, and DLC documents.

The evaluation gave DLC's leaders what they wanted: concrete evidence of the impact of DLC online courses. This study showed that 76 percent of students who took an online course through DLC did so because the class was not available at their school and that one-third of the students in the study would not have graduated without the credits from their online course. This and other evaluation findings show that "we are meeting our mission," says Margrath-Huge. "We are accomplishing what we set out to accomplish. And it's really important for us to be able to stand and deliver those kinds of messages with that kind of data behind us." DLC has used its evaluation findings in multiple ways, including when marketing the program to outsiders, to demonstrate its range of offerings and its effect on students (see fig. 1, Excerpt from Digital Learning Commons' Meeting 21st Century Learning Challenges in Washington State, p. 24).

It would be impossible to conduct a comprehensive evaluation of everything that DLC has to offer, but certainly the evaluation strategy of combining breadth and depth has given it a great deal of useful information. DLC's leaders have used the findings from all the evaluations to improve their offerings and to demonstrate effectiveness to funders and other stakeholders.

In Maryland, Thinkport evaluators faced a similar challenge in trying to study a vast Web site that compiles educational resources for teachers and students. At first, the evaluation team from Macro International, a research, management, and information technology firm, conducted such activities as gathering satisfaction data, reviewing Web site content, and documenting how the site was used. But over time, project leaders were asked by funders to provide more concrete evidence about Thinkport's impact on student performance. The evaluation (and the project itself) had to evolve to meet this demand.

In response, the team decided to "retrofit" the evaluation in 2005, settling on a two-part evaluation that would offer both breadth and depth. First, the evaluators surveyed all registered users about site usage and satisfaction, and second, they designed a randomized controlled trial (see Glossary of Common Evaluation Terms, p. 65) to study how one of the site's most popular features—an "electronic field trip"—affected students' learning. Several field trips had been developed under this grant; the one selected was Pathways to Freedom, about slavery and the Underground Railroad. This particular product was chosen for a number of reasons: most middle school social studies curricula include the topic; the evaluators had observed the field trip in classrooms in an earlier formative study and were aware of students' high interest in its topics and activities; and Web site statistics indicated that it was a heavily viewed and utilized resource.

Figure 1. Excerpt From Digital Learning Commons' Meeting 21st Century Learning Challenges in Washington State*

Independent research demonstrates increased on-time graduation rates and college/workforce readiness.

The results are clear—DLC access to online courses increases on-time graduation rates at schools studied in Washington State.

When online courses are made available through the DLC to students who would not otherwise have had access to that course—whether for purposes of remediation, advanced placement, or college entrance—it makes a significant difference, increasing graduation rates and college/workforce readiness.

Research focused on online courses

The DLC has focused its evaluation research on the impact of online courses, as outcomes and results can be objectively gathered and tabulated. Over two years' worth of data demonstrate consistent results.

2006 Evaluation Results

In the spring of 2006, researchers from Fouts & Associates analyzed the transcripts of approximately 115 students at seventeen DLC-participating high schools across the state. Quantitative and qualitative data were gathered from transcripts, student achievement data, DLC documents, and school coordinators to identify whether access to online courses through the DLC could objectively be shown to make a difference.

Online Course Registrations

When the DLC was launched, online course enrollment was projected to reach 200 students. During the 2004-05 school year alone, however, 1,159 students from forty-two high schools took an online course. So, what courses are students taking online? Our data indicate significant growth in foreign languages over the last year. Our 2004-05 statistics on enrollment in advanced coursework are consistent with those of NCES, which reports that 14% of enrollments nationally are in AP or college-level courses.

  1. INCREASED GRADUATION RATES: Of the 115 students who graduated, approximately 33% would NOT have graduated without a course made available through the DLC.

  2. COLLEGE AND WORKFORCE READINESS: Of the fifty-nine students who were college eligible, thirty-six students—61%—took advanced classes to better prepare themselves for college.

The Digital Learning Commons | Meeting 21st Century Learning Challenges in Washington State

*The U.S. Department of Education does not mandate or prescribe particular curricula or lesson plans. The information in this figure was provided by the identified site or program and is included here as an illustration of only one of many resources that educators may find helpful and use at their option. The Department cannot ensure its accuracy. Furthermore, the inclusion of information in this figure does not reflect the relevance, timeliness, or completeness of this information; nor is it intended to endorse any views, approaches, products, or services mentioned in the figure.

The evaluators gave pre- and posttests of content knowledge about slavery and the Underground Railroad to students whose teachers used the electronic field trip and to control groups of students whose teachers used traditional instruction. They found that the electronic field trip had a substantial positive impact on student learning, particularly among students whose teachers had previously used it: Students of these experienced teachers scored 121 percent higher on the content knowledge test than students whose teachers used traditional instruction.
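
To make the arithmetic behind a result like this concrete, the sketch below shows how a "percent higher" comparison can be computed from posttest scores. All numbers are invented for illustration; they are not the Thinkport study's actual data.

    # Minimal sketch: computing a "percent higher" comparison from posttest
    # scores. All numbers are invented for illustration, not actual study data.

    treatment_scores = [22, 23, 21, 22, 22.5]  # hypothetical field-trip classrooms
    control_scores = [9, 10, 11, 10, 10]       # hypothetical traditional classrooms

    treatment_mean = sum(treatment_scores) / len(treatment_scores)  # 22.1
    control_mean = sum(control_scores) / len(control_scores)        # 10.0

    # "X percent higher" expresses the difference relative to the control mean.
    percent_higher = (treatment_mean - control_mean) / control_mean * 100
    print(f"Treatment group scored {percent_higher:.0f}% higher than control")  # 121%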

Like the DLC evaluation, the Thinkport evaluation proved useful both for formative and summative purposes. Thinkport's leaders learned that teachers who were new to the electronic field trip needed more training and experience to successfully incorporate it into their classrooms. They also learned that once teachers knew how to use the tool, their students learned the unit's content far better than their peers in traditional classes. The evaluators' two-part plan gave Thinkport's leaders what they needed: broad information about usage and satisfaction and credible evidence that a frequently used feature has a real impact on students.

Use Multiple Methods to Capture Wide-ranging Student Experiences in Online Courses

In the examples above, evaluators struggled to wrap their arms around sprawling Web resources that lacked uniformity. Sometimes a similar challenge is found at the micro level, as when students have heterogeneous experiences in the same online class. The leaders of Arizona's AZVA struggled with this problem when they set out to evaluate one of their online courses.

In 2006, AZVA school leaders began to experiment with hybrid courses—regular online classes supplemented with weekly face-to-face lessons from a classroom teacher. The in-person component was originally designed in a very structured way: Students received classroom instruction every week at a specific time and location, and they had to commit to this weekly instruction for an entire semester. In addition, students could participate in the hybrid class only if they were working either on grade level or no more than one level below grade level. These restrictions allowed the face-to-face teachers to offer the same lessons to all students during the weekly session. School leaders specifically designed this structure to bring uniformity to students' experiences and make it easier to evaluate the class. As AZVA's director, Mary Gifford, explains, "We wanted the hybrid experience to be the same for all the kids so we could actually determine whether or not it is increasing student achievement."

However, when program leaders surveyed parents at the semester break, Gifford says, "Parents offered some very specific feedback." They didn't like the semester-long, once-a-week commitment, and they argued that the structure prevented students from working at their own pace. Instead, parents wanted a drop-in model that would offer students more flexibility and tailored assistance. In response, she says, "We totally overhauled the course for the second semester and made it a different kind of a model." In the new format, teachers do not deliver prepared lessons, but, instead, work with students one-on-one or in small groups on any topic with which a student is struggling.

While the flexibility of the new model meets student needs, students naturally will have more varied experiences using it, and school leaders will not be able to isolate its effect on student achievement. In other words, observed gains could be due to any number of factors, such as how frequently the student drops in, whether a student works one-on-one with a teacher or in a small group, and what content is covered during the drop-in session. In this instance, the needs of program participants necessarily outweighed those of the evaluation.

However, because the school has a number of other data collection efforts in place, Gifford and her colleagues will still be able to gather information about whether the hybrid model is helping students. School administrators track how frequently students attend the drop-in class and chart students' academic progress through the curriculum both before and after they participate in the hybrid program. AZVA also separately examines state test scores for students who attend the hybrid program on a regular basis. In addition, AZVA frequently uses surveys of parents, students, and teachers to gather information about the effectiveness of many aspects of their program, including the hybrid class. These kinds of activities can provide important insights when controlled studies are impossible.
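
As an illustration of the kind of descriptive comparison such records allow, the sketch below splits students by how regularly they attend the drop-in session and compares their average rate of progress before and after enrolling. All field names, records, and thresholds here are hypothetical, not AZVA's actual analysis, and a comparison like this describes patterns rather than proving cause.

    # Hypothetical sketch: compare students' weekly curriculum progress before
    # and after they begin attending a drop-in hybrid session. All records and
    # field names are invented for illustration.

    records = [
        # (student_id, drop_in_visits, lessons_per_week_before, lessons_per_week_after)
        ("S01", 12, 2.0, 3.5),
        ("S02", 3, 2.5, 2.75),
        ("S03", 15, 1.5, 3.0),
        ("S04", 1, 3.0, 2.75),
    ]

    REGULAR_THRESHOLD = 10  # visits per semester; an arbitrary illustrative cutoff

    def mean_gain(rows):
        # Average change in lessons completed per week after joining the session.
        gains = [after - before for _, _, before, after in rows]
        return sum(gains) / len(gains) if gains else 0.0

    regulars = [r for r in records if r[1] >= REGULAR_THRESHOLD]
    occasionals = [r for r in records if r[1] < REGULAR_THRESHOLD]

    print(f"Regular attendees:    {mean_gain(regulars):+.2f} lessons/week")
    print(f"Occasional attendees: {mean_gain(occasionals):+.2f} lessons/week")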

Summary

Though multifaceted resources can make it difficult for evaluators to gauge effectiveness, good evaluations—especially those using multiple, complementary research methods—can identify the circumstances under which the program or resource is most likely to succeed or fail and can generate useful recommendations for strengthening weak points. Evaluators who are studying multifaceted resources should consider a strategy that combines both breadth and depth.

If studying an educational Web site that offers an array of resources, evaluators might collect broad information about site usage and then select one or two particular features to examine in more depth. Program leaders can facilitate this process by clearly articulating what each resource is intended to do, or what outcomes they would hope to see if the resource were being used effectively. From this list, program leaders and evaluators can work together to determine what to study and how. In some instances, it might be logical to design a multiyear evaluation that focuses on distinct program components in different years, or that collects broad information in the first year and narrows in focus in subsequent years.

If evaluating a particular course or resource that offers students a wide range of experiences, evaluators might consider using a mix of quantitative and qualitative methods to provide a well-rounded assessment of it. Rich, descriptive information about students' experiences with the course or resource can be useful when trying to interpret data about student outcomes.

