Chapter 3. Results (continued)
Primary Objective 2
Primary Objective 2 was to measure and assess the extent to which AHRQ's children's health care
activities (i.e., its research findings, meetings, conference support,
products, and tools) improved clinical practice and health care outcomes and
influenced health care policies over the past fifteen years. To address this objective, we used Stryer's
approach to categorizing the impact of research findings, bibliometric analyses,
and qualitative case studies of high-impact activities. The tables referred to in this section can be
found in Appendix C.
Stryer analysis
We coded the
external and internal publications by the Stryer categories (Table 18). As mentioned earlier, we were only able to
code those publications with full abstracts available on-line. Similar to the findings by Stryer and
colleagues, more than two-thirds (70%) of the publications relay research
findings. Fourteen percent of the
publications focus on research on policy changes or with clear policy
implications. Fifteen percent of the
publications describe research that evaluates clinical behavior, demonstrates
changes in clinical behavior, or demonstrates the use of tools in a clinical
setting. One percent of the publications
describe research to determine which clinical or health behaviors affect health
outcomes. We also calculated the mean
(and standard deviation) Stryer score within each of the categorizations (Table 19). Appendix D provides a list of the
publications included in the Stryer analysis along with the results of the
coding effort.
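The category percentages reported above can be reproduced with a simple tally. The sketch below is purely illustrative: the category codes are hypothetical stand-ins matching the reported proportions, not the actual coded dataset from Appendix D.

```python
from collections import Counter

# Hypothetical Stryer category codes for 100 coded abstracts (illustrative only).
# 1 = research findings, 2 = policy changes/implications,
# 3 = clinical behavior/tool use, 4 = behaviors affecting health outcomes.
codes = [1] * 70 + [2] * 14 + [3] * 15 + [4] * 1

counts = Counter(codes)
total = len(codes)
percentages = {cat: 100 * n / total for cat, n in counts.items()}
print(percentages)  # {1: 70.0, 2: 14.0, 3: 15.0, 4: 1.0}
```

With real data, the same tally would be run over the coded abstracts rather than a constructed list.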
Bibliometric analysis
Citations
We examined the
number of times child health publications were cited as a measure of the impact
of AHRQ's children's health activities. Overall, the 794 external and internal publications arising from AHRQ's
children's health work were cited nearly 3,000 times.24 Table 19 shows the mean (and standard
deviation) number of citations for each categorization scheme.
Impact factors
We also examined
the impact factor of the journal in which the child health articles
appeared. Publications from AHRQ's
children's health activities tended to cluster within a few key journals. The most common journal was Pediatrics with 122 publications. The activities also led to publications in
the Archives of Pediatrics and Adolescent
Medicine (40), Health Services
Research (36), Medical Care (36)
and Ambulatory Pediatrics (31). It is notable that the vast majority of
publications appear in pediatric-focused journals. This is to be expected, but at the same time,
if a goal is to raise the profile of children's health activities, then AHRQ
should be encouraging children's health researchers to publish more in journals
such as Health Services Research and Medical Care than they currently
do. Across the 616 publications with
impact scores, the average impact score was 3.79 with the range extending from
44 for the New England Journal of
Medicine to 0.35 for Current
Therapeutic Research-Clinical and Experimental. In general, clinical journals have a higher
impact score than do health services or health policy journals. Table 20 shows the average impact score for
the publications in each Stryer category. For those publications that primarily described research findings, the
average impact score was 3.16. The
average impact score for publications focused on policies and their impact was
somewhat higher at 3.65. Publications
that described the clinical impact of interventions had an average impact score
of 4.56. Those publications that focused
on how clinical or health behaviors affect health outcomes had a considerably
higher average impact score of 7.84. Table 19 shows the mean (and standard deviation) of the impact factor
for journals in which child health articles were published within each of the
categorization schemes.
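The within-category means (and standard deviations) reported in Tables 19 and 20 are grouped summary statistics. The sketch below shows the general computation with hypothetical (category, impact factor) pairs; the category labels and values are illustrative assumptions, not the study's data.

```python
from statistics import mean, stdev

# Hypothetical (Stryer category, journal impact factor) pairs for illustration.
records = [
    ("findings", 3.2), ("findings", 2.9), ("findings", 3.4),
    ("policy", 3.5), ("policy", 3.8),
    ("clinical", 4.4), ("clinical", 4.7),
]

# Group the impact factors by category and compute mean and SD per group.
summary = {}
for cat in {c for c, _ in records}:
    vals = [v for c, v in records if c == cat]
    summary[cat] = (mean(vals), stdev(vals) if len(vals) > 1 else 0.0)

for cat, (m, sd) in sorted(summary.items()):
    print(f"{cat}: mean={m:.2f}, sd={sd:.2f}")
```

The real analysis applies the same grouping over the 616 publications with impact scores, once per categorization scheme.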
Case studies and key informant interviews
We present the results of the case studies and key informant interviews
according to the main topics addressed: a) The impact on policy, clinical
processes, or health care outcomes; b) The processes that influenced these
activities' impact; c) The ways in which AHRQ staff contributed to the impact
of these activities; and d) The ways in which structural or organizational
characteristics of AHRQ contributed to the impact of these activities.
SCHIP/CHIRI™
The first case we studied was the body of intramural and externally
funded research around SCHIP, including CHIRI™. Early intramural research at
AHRQ informed the implementation of some of the SCHIP regulations. CHIRI™, an initiative consisting of more than
$9 million of funding to 9 extramural research projects over the course of 3
years, was conceived of as a way of exploiting the natural experiment resulting
from the latitude given to States to implement their SCHIP programs. The David and Lucile Packard Foundation and
the Health Resources and Services Administration co-funded CHIRI™.
Respondents cited a wide range of impacts for AHRQ's SCHIP/CHIRI™
activities. Many respondents suggested
that determining the impact of research on policy is very difficult because
policy making is extremely complex and multi-determined, with research as only
one of several inputs into the policy decision. Nevertheless, respondents were able to point
to examples of this work being cited and used by policy makers. Respondents suggested that AHRQ's pre-SCHIP
intramural research documenting the number of potentially insurable children, as
well as AHRQ's collaboration with other agencies such as the Centers for
Medicare and Medicaid Services (CMS) were both important in informing
implementation regulations. CHIRI™
research was cited as useful at both federal and state policy levels. One respondent (a CHIRI™ PI) reported that
CHIRI™ research was cited in debates at the federal level to argue against
proposed cuts in a specific service program. Another CHIRI™ PI reported that state
legislatures relied heavily on CHIRI™ research in crafting policy for renewing
enrollment and for preventive care. Other
respondents reported that CHIRI™ results were regularly disseminated to policy
makers through the CHIRI™ Policy Advisory Committee and at national meetings of
state SCHIP directors. In sum, AHRQ's
internal activities and support of the CHIRI™ initiative can be characterized as
having a discernible impact on children's health policy.
Several key factors related to this impact were cited. A number of respondents cited key features of
the RFP that AHRQ prepared as being influential. These included requiring investigators to
partner with state officials and to collaborate across projects, and an
emphasis on dissemination. A
respondent representing another funder reported that the 'CHIRI™ model' was one
that the funder organization had subsequently employed successfully in another
arena. Other cooperative
agreements typically require some degree of collaboration across projects but,
unlike SCHIP/CHIRI™, none had required partnership with policy makers. As one
respondent stated:
"The RFP process
was one of the things that made CHIRI™ a success. It required applicants to work with policy
makers before putting in their application to make sure policy makers cared
about their research. This brought some
people who had working relationships with policy makers but were not
necessarily traditional AHRQ researchers into the application pool. The RFP also required researchers to continue
working with policy makers throughout the grant. This forced researchers to establish a relationship with policy makers
at the beginning, which facilitated impact and dissemination after the research
was done. There was also a users group
set up, which became part of the dissemination network, who commented early on
about research and its usefulness and later on about how the results could be
used and disseminated."
Respondents also cited the efficiency and determination of AHRQ Project
Officer Cindy Brach as critical. Respondents
reported that Ms. Brach was particularly effective in creating a collaborative
environment, in making sure that investigators were producing research that was
useful for policy makers, and in ensuring that policy relevant information was
disseminated to the appropriate audiences. One respondent compared experiences between
the CHIRI™ cooperative agreement and that of another, stating that in the other
cooperative agreement, one investigator had expressed concern about sharing
findings and guarding intellectual property. In that case, the project officer had not been
able to dispel those concerns in order to create a collaborative environment,
but this respondent noted that Ms. Brach was able to do this very well.
Another
factor cited was the engagement of an outside professional to create research
briefs and technical reports. Along with
that, the CHIRI™ project was successful in developing a 'brand' by using a
consistent logo and acronym and by giving advance notice to intended audiences
that there would be findings produced from this effort.
Partnering
with the Packard Foundation was advantageous in two ways. The first was that Packard Foundation project
staff were persuasive and facilitative in developing policy relevant documents
and in creating forums for these to occur. Secondly, the Packard Foundation was able to
underwrite the costs of dinners during the CHIRI™-wide meetings. This was important in creating a collaborative
group feeling. Several respondents
stated that the group of CHIRI™ investigators developed mentor/mentee
relationships and professional collaborations that have extended beyond the
scope and timing of the initial CHIRI™ initiative.
Difficulties
also existed. There was a sense from
investigators that, while the overall process was positive, it was painful and
at times frustrating to be required to produce collaborative products and to
focus on policy relevant work. One
interviewee reported that:
"The project was done as well as it could have been. There is a tension between academics who are
interested in publications for CV's and promotion and the policymakers who are
interested in getting the information out. Including the academics delays the process but increases the credibility
of the research. The CHIRI™ project
addressed this tension. While they could
have gotten publications out more quickly, both sides compromised."
At
the same time, these investigators recognized the necessity of this mandate and
grew professionally as a result. Part of
the reason that there was such a steep learning curve for investigators in this
arena may be related to an observation from one respondent that the review
panel for the CHIRI™ initiative was composed in the standard manner and given
the standard charge to focus on scientific merit. This respondent felt that better
representation of policy researchers or other interested stakeholders might
have resulted in the selection of projects or investigators with more
experience in policy.
Respondents
described another tension between AHRQ's role as a research agency and the
Agency's or researchers' desires to inform policy. Several respondents suggested that AHRQ could
do more to publicize the results of its findings, but others recognized the
tension between research and policy advocacy and that, in the current political
climate and as an agency of the federal government, there was the potential for
pressure on AHRQ and AHRQ-supported researchers to publicize only 'positive'
findings. As one respondent reported:
"There
is a constant pull between justifying their existence to Capitol Hill—and
then it has to be the 'right' policy—and the fact that they have created an
enormous community of researchers that depend on them for funding. This is a difficult balancing act and I am
not sure how to negotiate the tension. People in the academic community want no strings attached to their
research funding, but AHRQ has to show the impact of the work to people on the
Hill."
In
terms of the role of AHRQ staff, as described above, the work by Cindy Brach
was described as a key factor influencing the success of this case. Her abilities to hold investigators to their
deadline commitments, to encourage collaboration among investigators, to
balance the investigators' needs for publications with the policy makers' needs
for information, and her willingness to engage outside professionals (for
example, to create policy briefs), were often cited. Another key contribution from AHRQ staff was
that of then-Deputy Director Lisa Simpson. Respondents credited her with pushing the
CHIRI™ initiative forward after a meeting of federal agencies and other funders
failed to bring forward a clear private sponsor of the proposed initiative.
Turning
to structural and organizational characteristics, respondents noted challenges
related to AHRQ dissemination of findings. One respondent noted that:
"AHRQ
is totally backwards as to where they put their emphasis. They don't look past when the grant is ended
and that's when a lot of the impact starts occurring. As part of grant requirement, AHRQ should
require that a chunk of time and money be devoted to dissemination. For example, AHRQ could put an extra year
into the grant to fund dissemination. AHRQ is not rewarding researchers for doing dissemination and although a
large number of researchers have a personal incentive to develop publications,
it is not always the case."
Respondents
had further structural suggestions. One
suggestion was to require Cooperative Agreements to have a dedicated
dissemination coordinator. Another was
to improve the functioning of the public affairs office so that it could better
present a more comprehensive and integrated picture of AHRQ's work to various
audiences. A third was that AHRQ's
information systems ought to be focused on tracking dissemination and impact as
well as getting through the grant review and funding process, as illustrated by
the following:
"Even
AHRQ's information system is structured to get through the grant process and
not focused on impact of dissemination after grant done (e.g., there is no
systematic way of tracking publications). Although the mindset within some areas of AHRQ has shifted,
infrastructure (IT and public affairs) has not made a similar shift."
Asthma and ADHD
Our second case concerned moving evidence into practice for asthma and
ADHD. Conceptually, the process of moving evidence into practice can be seen as
a) Distilling the evidence to create guidelines; b) Informing practitioners of
the guidelines and motivating behavior change; and c) Improving outcomes. In this case study, AHRQ involvement in both
asthma and ADHD began
with evidence reviews, resulted in AAP guidelines (or, for asthma, the 2002
NAEPP EPR-2 update), and researchers subsequently responded to RFAs on implementation in
practice. A variety of AHRQ projects focused on the first two of these,
and presumably had some effect on the third.
In this case, respondents had mixed assessments of the impact of AHRQ's
funding. AHRQ's role in distilling the evidence, primarily funding
Evidence-based Practice Centers (EPCs) to perform evidence reviews was seen as
having substantial impact. One
respondent reported that 85 percent of the American Academy
of Pediatrics' (AAP) clinical guidelines were based on AHRQ evidence reviews. Respondents cited the value, in terms of rigor
and credibility, of having an agency like AHRQ arrange for and fund these
reviews and they cited the reviews of evidence regarding diagnosis and
treatment of both asthma and ADHD as extremely valuable. As one interviewee
reported:
"Because of AHRQ's
Evidence-based Practice Centers (EPCs), clinical practice guidelines have
changed. Previously, the guidelines were
done on a shoestring budget but AHRQ's involvement allowed high quality
evidence reviews on topics that are relevant, and the results are far better
than what had been done previously. The
ADHD field is controversial, so the quality of the evidence reviews behind the
guidelines is important."
However, certain
shortcomings were noted. Some noted that
most of the EPCs were unaccustomed to thinking about children's issues, and
suggested that AHRQ should implement policies that require EPCs to include
children in their reports unless there is a compelling reason not to. One interviewee noted the difficulty of
creating guidelines for preventive care given the state of the evidence and
that AHRQ is beginning to address this.
The main
disappointment in the EPC reviews was the length of time it took to complete
them. Several respondents noted that the advisory committees, as well as the
committees attempting to write the guidelines, were frustrated by reviews that
took so long to complete that the evidence was out of date. This forced these committees either to attempt
to update the reviews themselves or to ask the EPC reviewers to update them. As a result, respondents reported that they
decided on several occasions not to suggest additional topics for review for
fear of "gumming up the works." One
respondent felt that, in addition to the long time required, EPC reviewers at
times had the tendency to "make something out of nothing." By this, the respondent meant that there were
occasions where the evidence was scarce, yet instead of stating this and moving
on, the EPC reviewers pulled evidence from other areas and over-interpreted the
scanty existing literature to derive an answer this respondent felt was
unjustified.
Despite these disappointments, respondents reported that when they or
their organizations had attempted to review the evidence themselves, the
product was not nearly as robust as an AHRQ EPC review. They therefore offered several suggestions for
improvement. The first was that
the EPCs release their reviews in phases as questions are answered, rather
than all together in one report at the end. Another was for EPCs to offer a
product consisting of the evidence tables only, without the accompanying text
interpreting the tables and synthesizing their results; at a minimum, the
evidence tables could be released early so that they reached the field
sooner. All of these suggestions, it
was felt, would allow the EPC reviewers to produce more timely products without
sacrificing validity.
In terms of dissemination of results and motivating practice changes,
respondents cited AHRQ's funding of the University of North Carolina CERT
asthma toolkit and initiative as well as its funding of the AAP's national
collaborative on improving ADHD. Other respondents cited AHRQ's support of the
Practice-Based Research Networks (PBRNs) as being important in creating infrastructure
for disseminating results and motivating practice change. It should be noted that key to dissemination
is the readiness of the practice community to take up new guidelines such as
these. Nevertheless, other respondents felt that, in terms of moving evidence
into practice and improving outcomes, AHRQ activities did not have much of an
impact. These respondents reported that
AHRQ did not appear to be particularly interested in documenting the impact of
tools it had helped develop, nor in research on changing clinical practice. This divergence of viewpoints seemed to
correspond to the primary interests of the respondents. Those most interested
in research and from more traditional academic settings tended to be pleased
with AHRQ's attempts to move research into practice, while those from more
applied settings or with a primary interest in practice (as opposed to
research) tended to be less sanguine in their views.
Respondents
cited several key factors associated with impact, all having to do with creating
an infrastructure wherein evidence could be moved into practice. The informational infrastructure of
evidence-based reviews, the practice-based research infrastructure, and the
funding of tools for improvement were cited as factors in creating the potential
for change in clinical practice and outcomes.
In
terms of the role of AHRQ staff, Denise Dougherty's efforts were particularly
noted. Several respondents cited her tireless work to make sure children's
health activities were included in larger AHRQ efforts and to engage other
organizations and agencies in this work.
Turning
to organizational and structural characteristics of AHRQ, respondents reported
their perceptions that the emphasis at AHRQ was on dissemination, as opposed to
implementation of results. One
respondent suggested that AHRQ is "still in the mode of paying for good
science, getting it published, and hoping for the best." Another reported that:
"AHRQ
needs to be better at translating policy into practice. It is always done last and as more of an
afterthought. There needs to be more
emphasis on getting the word out. This
would include improving its Web site, which is very busy and hard to navigate."
Some
thought the underemphasis on implementation had to do with AHRQ being more
connected to the research community than to the practice community. For example, one respondent asked:
"Whether
AHRQ considers their target audience to be NIH-funded researchers or
practitioners. If AHRQ is trying to
reach practitioners then they need to ask whether and how they are reaching
them. At the practice level, there is
competition for time. It is easier for
practitioners to go along with the local chapter of AAP than to wonder what
AHRQ is doing."
Another
respondent was more encouraging, noting that recent AHRQ activities in
improvement research were a good start:
"Having
the courage to look at interventions that are a bit outside the box and that
could impact care is very good. AHRQ is beginning to look at how to support
these activities. They are to be commended for that—we need AHRQ's leadership
to expand the NIH biomedical paradigm."
This
respondent suggested that AHRQ could have a stronger voice in pushing
the definition of 'translational research' from clinical research to practice
change and care delivery.
Primary Objective 4
Primary Objective 4 was to measure and assess the extent to which the Agency succeeded in
involving children's health care stakeholders and/or creating partnerships to
fund and disseminate key child health activities.
We
used the case studies and key informant interviews to address Objective 4. Several successful examples of partnership
development were cited. Respondents
cited CHIRI™ as a very good example of forming partnerships: AHRQ partnered with
the Packard Foundation and HRSA to fund CHIRI™; the RFP structure forced
researchers to work closely with policy makers; and the structure of the
project, with its emphasis on collaboration and cross-project findings brought
researchers together. Another example of
partnering was AHRQ's coordination with the FDA in the 1990s regarding
regulations on including children in research on new medicines.
Another
important example of partnering was AHRQ's efforts to develop a child health
services research community. Interviewees
cited AHRQ's funding for the annual Child Health Services Research Meeting and
other meetings as beneficial to the child health community. As one interviewee
reported:
"AHRQ
has provided funding for an annual forum every year that 500-600 people attend. It is the only forum where people from a
variety of settings (public, regulatory, private, hospital) get together. These forums would not have happened without
AHRQ funding. There is also an annual
conference in quality improvement (in 5th year) that has had
continuous AHRQ support and a conference coming up on Medicaid quality and
children that AHRQ is supporting. There
has also been ongoing support from AHRQ for dissemination and Denise Dougherty
has been tireless in advocacy, coordinating and networking."
The
child health listserv was cited several times as very useful and informative,
and staff at other agencies and organizations gave kudos to AHRQ staff and
especially Denise Dougherty for finding opportunities to collaborate. AHRQ's funding of NRSA awards for child health
services research fellows was also seen as very important in building a child
health services research community. However,
many interviewees noted that having formed this community, the subsequent
restriction of funding for extramural child health services research grants
meant that this community and its accumulated expertise was in danger of
dissolving and moving on to other fields or areas of inquiry.
Nevertheless,
there were several challenges and opportunities. Several interviewees noted that cooperation
between AHRQ and other institutes/agencies has been uneven, as the following quote
illustrates:
"AHRQ
and MCHB haven't 'played well together' which has interfered with children's
research activities. It would be better
if they coordinated. The same is true
for the CDC. They need to
leverage relationships in other areas to support kids' research and
partnerships."
Another
interviewee expressed disappointment in AHRQ's lack of proactive work
with CMS:
"Theoretically, CMS is a potent lever to affect quality
and AHRQ has not figured out what policy research should be done to get CMS to
push those levers and put teeth into improving quality. There has been no strategic thinking about
leveraging the influence of CMS to help improve quality. Partnering is the right approach, but only if
you focus on the right issue. There is a
need to focus on how to improve children's health care quality overall."
Several
key stakeholders at other agencies suggested that their agencies have not been
particularly interested in partnering with AHRQ. One reason for this is the perception that
their agencies and AHRQ do not overlap in mission or mandate. Another reason cited was that AHRQ's budget
was too small to enable partnering around funding. For example, one interviewee at an NIH agency
reported that, "I am unaware of AHRQ's work on ADHD or asthma and really
have no knowledge about AHRQ's work on children's health in general, although I
think highly of the few AHRQ people with whom I have come in contact,
especially Denise Dougherty." On the
other hand, one researcher interviewee suggested that AHRQ in general and
Denise Dougherty in particular had played a critical role in keeping NIH
agencies aware of the need to translate evidence into practice.
In
terms of partnering with other organizations beyond HHS, there was a sense that
AHRQ could do more to reach out to both the policy and practice
communities. The sense from interviewees
in the improvement community and the family/consumer advocacy community was
that AHRQ could be much more responsive to their immediate needs for
improvement and could play a greater role in these areas, but that the Agency
was more concerned with academic research and more attuned to the needs of
academic researchers. As one interviewee stated:
"The
basic issue is that there is a tendency to see the world through a very narrow
lens. For an organization like AHRQ to
be effective, they have to figure out a way to broaden their support base and
be responsive to audiences who are not their natural allies. AHRQ's work will be more useful if they ask
questions about what information policy makers need to make programs work
better. Their stuff will be more useful
if they ask people on (Capitol) Hill what they need to make this program
better. AHRQ needs to reach out beyond
the true believers to those in the field (policy, practitioners) to find out
what information they need and what they want and then do that."
Similarly, respondents suggested that AHRQ should think of itself
as a problem-solving agency:
"(They) need to find problems where you see promising
research that might solve a definable problem and take that research and apply
it.... What is going into JAMA and NEJM is irrelevant, because that is to convince
the skeptics—what I want to do is go back to the people that they are working
with to implement it—what works, what didn't, how does it go to scale, how do
you implement."
Others
wanted to see AHRQ reach out to "organizations that broadly impact children's
health, such as the AAP, institutions leading change (like Cincinnati
Children's) and groups like NACHRI." As one interviewee put it:
"The
academic community is not well connected to the practice community. AHRQ needs to decide whether it is important
for them to reach the practice community. Then, they need to look at the budget and see if they have put money
into reaching practitioners. One way to
link to practitioners would be to talk to local AAP leaders or have focus
groups of providers. AHRQ should think
about what interests practitioners, what format, what content, and what kind of
support needs to accompany it. This kind
of activity would help them become part of the implementation world as opposed
to standing on the sideline."
The
thread running through all these comments was that partnering more effectively
with organizations or stakeholders that create change (either policy or
clinical) would have a two-fold effect: It would allow the Agency to better
realize its mission and it would allow the Agency to advocate for more
resources by providing specific answers to the 'Porter Question.'
24. The
number of times an article was cited was not available for all the identified
publications so it is likely that this number is an underestimation.