Evaluating Online Learning: Challenges and Strategies for Success
July 2008

APPENDIX B
Research Methodology

The research approach underlying this guide is a combination of case study methodology and benchmarking of best practices. Used by businesses worldwide as they seek to continuously improve their operations, benchmarking has more recently been applied in education to identify promising practices. Benchmarking is a structured, efficient process that targets key operations and identifies promising practices in relation to traditional practice, previous practice at the selected sites (lessons learned), and local outcome data. The methodology is further explained in a background document,18 which lays out the justification for identifying promising practices based on four sources of rigor in the approach:

  • Theory and research base

  • Expert review

  • Site evidence of effectiveness

  • Systematic field research and cross-site analysis

The steps of the research process were: defining a study scope; seeking input from experts to refine the scope and inform site selection criteria; screening potential sites; selecting sites to study; conducting site interviews, visits, or both; collecting and analyzing data to write case reports; and writing a user-friendly guide.

Site Selection Criteria and Process

In this guide, the term "online learning program" is used to describe a range of education programs and settings in the K-12 arena, including distance learning courses offered by universities, private providers, or teachers at other schools; stand-alone "virtual schools" that provide students with a full range of online courses and services; and Web portals that provide teachers, parents, and students with a variety of online tools and supplementary education materials. As a first step in the study underlying this guide, researchers compiled a list of evaluations of K-12 online programs that had been conducted by external evaluators, research organizations, foundations, and program leaders. This initial list, compiled via Web and document searches, was expanded through referrals from a six-member advisory group and other knowledgeable experts in the field. Forty organizations and programs were on the final list for consideration.

A matrix of selection criteria was drafted and revised based on feedback from advisors. The three quality criteria were:

  • The evaluation included multiple outcome measures, including student achievement.

  • The findings from the evaluation were widely communicated to key stakeholders of the program being studied.

  • Program leaders acted on evaluation results.

Researchers then rated each potential site on these three criteria, using publicly available information, review of evaluation reports, and gap-filling interviews with program leaders. All the included sites scored at least six of the possible nine points across these three criteria.
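To make the screening step concrete, the sketch below shows one way such a rubric-based screen could be applied. It is only an illustration of the logic described above: the 0-to-3 scale for each criterion, the site names, and the individual scores are assumptions, not the study's actual rating sheet; only the six-of-nine-points threshold comes from the text.

    # Illustrative sketch (Python) of the site-screening rubric described above.
    # Assumption: each of the three quality criteria is rated 0-3, for a
    # maximum of 9 points; sites scoring at least 6 points are retained.

    CRITERIA = ("multiple_outcome_measures", "findings_communicated", "results_acted_on")
    THRESHOLD = 6  # minimum total score for inclusion, per the text above

    # Hypothetical ratings; site names and scores are invented for illustration.
    candidate_sites = {
        "Site A": {"multiple_outcome_measures": 3, "findings_communicated": 2, "results_acted_on": 3},
        "Site B": {"multiple_outcome_measures": 2, "findings_communicated": 2, "results_acted_on": 1},
    }

    def total_score(ratings):
        # Sum a site's ratings across the three quality criteria.
        return sum(ratings[criterion] for criterion in CRITERIA)

    included = [name for name, ratings in candidate_sites.items()
                if total_score(ratings) >= THRESHOLD]

    print(included)  # ['Site A'] -- only sites meeting the threshold remain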

Because the goal of the publication was to showcase a variety of types of evaluations, the potential sites were also coded on additional characteristics: internal vs. external evaluator, type of evaluation design, type of online learning program, organizational unit (district or state), and stage of maturity. The final selection was made to draw as wide a range across these characteristics as possible while keeping the quality criteria high, as described above.

Data Collection

Data were collected through a combination of on-site and virtual visits. Because the program sites themselves were not brick-and-mortar, phone interviews were generally sufficient and cost-effective, but some visits were conducted face-to-face to ensure access to all available information. Semistructured interviews were conducted with program leaders, other key program staff, and evaluators. Key interviews were tape-recorded so that the case reports could include vivid descriptions and direct quotes in participants' own language. While conducting the case studies, staff also obtained copies of local documents, including evaluation reports and plans documenting use of evaluation findings.

Program leaders and evaluators were asked to:

  • Describe the rationale behind the evaluation and, if applicable, the criteria for choosing an external evaluator;

  • Explain the challenges and obstacles that were faced throughout the evaluation process, and how they were addressed;

  • Tell how the study design was affected by available resources;

  • If the evaluation was conducted externally, describe the relationship between the program and contractor;

  • Provide the framework used to design and implement the evaluation;

  • Tell how the appropriate measures or indicators were established;

  • Explain how the indicators are aligned to local, state, and/or national standards, as well as program goals;

  • Describe the data collection tools;

  • Explain the methods used for managing and securing data;

  • Describe how data were interpreted and reported; and

  • Share improvements made in program services and the evaluation process.

Analysis and Reporting

A case report was written about each program and its evaluation and reviewed by program leaders and evaluators for accuracy. Drawing from these case reports, program and evaluation documentation, and interview transcripts, the project team identified common themes about the challenges faced over the course of the evaluations and what contributed to the success of the evaluations. This cross-site analysis built on both the research literature and on emerging patterns in the data.

This descriptive research process suggests promising practices—ways to do things that others have found helpful, lessons they have learned—and offers practical how-to guidance. This is not the kind of experimental research that can yield valid causal claims about what works. Readers should judge for themselves the merits of these practices, based on their understanding of why they should work, how they fit the local context, and what happens when they actually try them. Also, readers should understand that these descriptions do not constitute an endorsement of specific practices or products.

Using the Guide

Ultimately, readers of this guide will need to select, adapt, and implement practices that meet their individual needs and contexts. Evaluators of online programs, whether internal or external, may continue to study the issues identified in this guide, using the ideas, practices, and even the challenges from these program evaluations as a springboard for further discussion and exploration. In this way, the pool of promising practices will grow, and program leaders and evaluators can work together toward increasingly effective approaches to evaluating online learning programs.

