Evaluating Online Learning: Challenges and Strategies for Success
July 2008

Translating Evaluation Findings Into Action

As the phases of data collection and analysis wind down, work of another sort begins. Evaluators present their findings and, frequently, their recommendations; then program leaders begin the task of responding to them. Several factors contribute to the ease and success of this process: the strength of the findings, the clarity and specificity of the recommendations, how they are disseminated, and to whom. The relationship between the evaluators and the program leaders is key: When evaluators (external or internal) have ongoing opportunities to discuss improvements with program staff and to work with them on implementation, there is greater support for change. Conversely, if evaluators are fairly isolated from program leaders and leave the process once they have presented their recommendations, there is less support and, perhaps, a reduced sense of accountability among program staff. Of course, while frequent and open communication is important, maintaining objectivity and a respectful distance is also critical to obtaining valid research findings. Collaborations between researchers and practitioners should not be inappropriately close.

When program leaders try to act on evaluation findings, the structure and overall health of the organization play a role as well. Some program leaders meet with substantial barriers at this stage, particularly if they are trying to change the behavior of colleagues in scattered program sites or other offices or departments. The problem is compounded if there is a general lack of buy-in or familiarity with the evaluation.

In such circumstances, how can program leaders and evaluators translate findings into program improvements? The programs featured in this guide take a variety of approaches to using evaluation findings to effect change. In the Chicago Public Schools Virtual High School (CPS/VHS) program, for example, program leaders have used evaluation findings to persuade reluctant colleagues to make needed changes and have repeatedly returned to the evaluation recommendations to guide and justify internal decisions. Meanwhile, Arizona Virtual Academy (AZVA) staff use a structured, formal process for turning evaluation recommendations into program improvements, complete with timelines, staff assignments, and regular status reports. The AZVA system, though time-consuming, has helped program administrators implement changes.

Use Evaluation Findings to Inform and Encourage Change

Chicago's CPS/VHS is managed and implemented collaboratively by three CPS offices: the Office of Technology Services eLearning, the Office of High School Programs, and the Office of Research, Evaluation, and Accountability. In 2005, Chief eLearning Officer Sharnell Jackson initiated an external evaluation to understand the cause of low course completion rates and find ways to help struggling students.

Evaluator Tom Clark of TA Consulting found great variation in students' ability to work independently, manage their time, and succeed without having an instructor physically present. The evaluation report recommended several ways to offer more support for struggling students, including having a dedicated class period in the school schedule for completing online course work and assigning on-site mentors to assist students during these periods. But when program administrators tried to implement these recommendations, they had difficulty compelling all participating schools to change. CPS is a large district with a distributed governance structure, making it difficult for the central office to force changes at the school-site level.

Facing resistance from schools, the program's administrators tried several different tacks to encourage implementation of the recommendations. First, they took every opportunity to communicate the evaluation findings to area administrators and principals of participating schools, making the case for change with credible data from an external source. Some school leaders resisted, saying they simply did not have the manpower or the funds to assign on-site mentors. Still, they could not ignore the compelling data showing that students needed help with pacing, study skills, and troubleshooting the technology; without this help, many were failing. The strength of these findings, along with financial assistance from the district to provide modest stipends, convinced school site leaders to invest in mentors. Crystal Brown, senior analyst in CPS's Office of Technology Services, reports that most CPS/VHS students now have access to an on-site mentor, "whereas before they just told a counselor, 'I want to enroll in this class,' and then they were on their own." Brown says the evaluation also has been useful for prodding principals to provide professional development for mentors and for persuading mentors to participate. Program leaders use the evaluation findings "whenever we train a mentor," she says, "and that's how we get a lot of buy-in."

CPS/VHS administrators also are careful to set a good example by using the evaluation findings and recommendations to guide their own practices at the central office. To date, they have implemented several "high priority" recommendations from the report. For example, program leaders strengthened mentor preparation by instituting quarterly trainings and establishing a shared online workspace that provides guidelines and advice for mentors. The district also has implemented Advancement Via Individual Determination* programs in many high schools to boost students' study skills and support their achievement in the online program. As CPS/VHS expands, program leaders have avoided some of the earlier problems by getting site administrators to agree up front to the practices recommended by the evaluation. Brown says, "We constantly reiterate what this study recommended whenever we have any type of orientation [for] a new school that's enrolling."

Finally, in some instances, the program administrators changed program requirements outright and forced participating schools to comply. Beginning this year, for example, all online classes must have a regularly scheduled time during the school day. (A few exceptions are made for very high-performing students.) This change ensures that students have dedicated computer time and mentor support to help them successfully complete their course work on time. In addition, participating students are now required to attend an orientation for the online courses, where they receive training on study skills.

Take a Structured Approach to Improvement

Changing behavior and policy can be difficult in a large organization and, as the above example shows, program administrators must be creative and persistent to make it happen. Sometimes a small organization, such as an online school with a small central staff, has a distinct advantage when trying to implement evaluation recommendations. With a nimble staff and a strong improvement process in place, AZVA, for example, has been especially effective in making program changes based on findings from its many evaluation efforts.

Several factors explain AZVA's success in translating evaluation findings into program improvements. First, its evaluation process generates detailed recommendations from both outsiders and insiders. That is, staff from its main content provider, K12 Inc., visit AZVA approximately every other year and conduct a quality assurance audit to identify areas for program improvement. Following the audit, K12 Inc. develops a series of recommendations and, in turn, AZVA creates a detailed plan that shows what actions will be taken to address each recommendation, including who will be responsible and the target date for completion. For example, when the 2005 audit recommended that AZVA create a formal feedback loop for teachers, the school assigned a staff member to administer monthly electronic surveys to collect information from teachers about the effectiveness of their professional development, their training and technology needs, their perceptions of parent training needs, and their suggestions for enhancing community relations. AZVA has responded in similar fashion to many other recommendations generated by K12 Inc.'s site visits, addressing a range of organizational, instructional, and operational issues (see table 4, Excerpts From AZVA's Next Steps Plan, in Response to Recommendations From the K12 Quality Assurance Audit, below).

In addition to the audit, K12 Inc. requires that AZVA complete an annual School Improvement Plan (SIP), which consists of two parts: a self-evaluation of school operations in general and a Student Achievement Improvement Plan (SAIP) that focuses specifically on student outcomes. In developing these plans, AZVA staff articulate a series of goals and specific objectives for improvement, again including strategies and timelines for meeting each objective. A strength of this process is its specificity. For example, one key SAIP goal was to improve student achievement in math, and the administrative team set the specific targets of decreasing by 5 percent the number of students who score "far below the standards" on the state standards test and increasing by 5 percent the number who meet or exceed state standards. To accomplish this, AZVA staff took action on several fronts: they aligned their curriculum to the state's testing blueprint, developed a new curriculum sequencing plan, implemented additional teacher and parent training, worked with students to encourage test preparation and participation, and created individual math learning plans for students.

Another strength of AZVA's process is that it requires program staff to review evaluation recommendations regularly and continually track the progress that has been made toward them. The SIP and SAIP are evolving plans that are regularly updated and revised by "basically everyone that has any role in instruction," such as the director of instruction, the high school director, and the special education manager, says AZVA's director, Mary Gifford. As part of this process, team members continually return to the document and track how much progress has been made in reaching their goals. There also is external accountability for making progress on SIP and SAIP goals: approximately once a quarter, AZVA staff review the plans via conference calls with K12 Inc.

Finally, AZVA's approach succeeds because it permeates the work of the entire school. As Bridget Schleifer, the K-8 principal, explains, "Evaluation is built into everybody's role and responsibility." Staff members at all levels are expected to take evaluation recommendations seriously and to help to implement changes based on them.

Table 4. Excerpts From AZVA's Next Steps Plan, in Response to Recommendations From the K12 Quality Assurance Audit

Summary

As the above examples show, whether and how evaluation findings lead to program improvements is a function not only of the quality of the evaluation but also of many other contextual and organizational factors. Program leaders can facilitate program change by working from the beginning to create an ongoing relationship between evaluators and program staff. Throughout the process, there should be opportunities for staff members to discuss the evaluation, its findings, and its implications for improving the program. Off-site staff and partners should be included as well. In short, program leaders should communicate early and often about the evaluation with anyone whose behavior might be expected to change as a result of its findings.

Once findings and recommendations are available, program leaders should consider using a structured, formal process for turning those recommendations into program improvements. One approach is to decide on a course of action for each recommendation, set a timeline, and identify who will be responsible for implementation. Whether or not they use a formal process, program leaders should revisit recommendations regularly and continually track the progress that has been made toward them.

In some instances, recommended changes to an online program or resource may be technically hard to implement. For example, it may be difficult and expensive to change the content or format of an online course. It also may be quite costly to change, repair, or provide the hardware needed to improve an online program. Insufficient funding may cause other difficulties as well. If the program has relied on external funding that has since run out, there may be pressure to dilute the program's approach; a district might, for example, keep an online course but eliminate the face-to-face mentors who support students as they proceed through it.

Online program evaluators need to keep these kinds of practical challenges in mind when formulating recommendations and should consider ranking their suggestions by both importance and feasibility. Program leaders, in turn, should develop a plan for addressing the highest-priority recommendations first. In situations where program funds are running low, evaluators can provide a much-needed external perspective, reminding stakeholders of the project's goals and helping them identify the most critical program elements to keep, even if it means serving fewer participants. More broadly, communication and persistence are essential when attempting to translate evaluation findings into action.

* Advancement Via Individual Determination (AVID) is a program that prepares 4th through 12th grade students for four-year college eligibility.

