
Chapter 2. Methods (continued)

2.6. Data Collection

Data collection and analysis techniques varied for each method. In this section we summarize how we collected and analyzed the data.

2.6.1. SNA Data Collection and Analysis

The analysis of the CERTs network was based on data available from internal planning or management documents (e.g., meeting minutes, memos, and progress reports), publication lists, and the data collected from the discussions and site visits. This allowed us to describe the CERTs network and assess potential mechanisms through which the network might be maintained and expanded. We defined the boundaries of the study population through official documentation, the application of participation criteria, and the collation and review of CERTs publication data. The second data collection phase involved the CERTs network members, including the Coordinating Center, Steering Committee, and partners mentioned above. This discussion phase aimed to verify previously collected data, to ascertain the relationships between different actors in the network (an organization, agency, group, or individual, e.g., the Steering Committee chair), and to assess processes and practices. Through the analysis of discussion-based and publication-based data, we characterized the entire CERTs network with respect to its productivity, collaboration, cohesiveness, and organizational practices and processes. As part of the network analysis, a "collaboration network" was created.

The network centered on the CERTs Coordinating Center is the primary network of analysis within this study and encompasses the entire CERTs network structure. The ego networks (the individual organizational networks of particular CERTs) are subsets of the primary network. Data collection centered on interview questions posed to key CERTs personnel regarding the presence, nature, and type of relationship individual CERTs had with other CERTs, the Coordinating Center, other agencies (e.g., FDA, NIH), and any other entities with which the CERTs collaborate. Each relationship depicted in the SNA diagrams was validated through a triangulated data collection methodology: interviews with more than one key staff person at each CERT, content analysis of CERTs reports and documentation, and follow-up interviews with CERTs and agency personnel. Each node is labeled with the abbreviation for the entity's name or an appropriate acronym. We used several measures to depict each network, including size of the network, number of ties, average distance, density, degree centralization, closeness, betweenness, and key players within the network. These measures are described briefly in the analysis section and in more depth in Volume 2, Attachment 2. Relational and organizational data collected during discussions and the publication data were indexed in Excel files and then formatted for import into the UCINET 6 software package16 to calculate the measures described above and to draw sociograms (network graphs).
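The measures listed above can be illustrated with a minimal sketch. The evaluation itself used UCINET 6; the example below instead assumes the open-source networkx package and a small, entirely hypothetical edge list of collaboration ties, so the node labels and resulting values are illustrative only.

```python
# Sketch of the whole-network measures described above, using networkx
# rather than UCINET 6 (an assumption for illustration). The edge list
# below is hypothetical, not actual CERTs data.
import networkx as nx

edges = [
    ("CC", "CERT-A"), ("CC", "CERT-B"), ("CC", "CERT-C"),
    ("CERT-A", "CERT-B"), ("CERT-B", "FDA"), ("CERT-C", "NIH"),
]
G = nx.Graph(edges)

n = G.number_of_nodes()                            # size of the network
ties = G.number_of_edges()                         # number of ties
density = nx.density(G)                            # observed / possible ties
avg_distance = nx.average_shortest_path_length(G)  # average distance
closeness = nx.closeness_centrality(G)             # closeness per node
betweenness = nx.betweenness_centrality(G)         # betweenness per node

# Freeman degree centralization computed from normalized degree centralities.
deg = nx.degree_centrality(G)
c_max = max(deg.values())
degree_centralization = sum(c_max - c for c in deg.values()) / (n - 2)

print(n, ties, round(density, 2), round(avg_distance, 2),
      round(degree_centralization, 2))
```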

2.6.2. Site and Telephone Interview Data Collection

We developed guides to structure discussions with each stakeholder group. Individual discussions varied in content. The majority of respondents participated in only one discussion; however, a few CERT individuals were re-contacted because their research was selected as a case study. Abt staff and consultants with expertise in pharmacoepidemiology and patient safety, discussion techniques, appreciative inquiry, and qualitative research design developed the discussion guides. Data gathered through review of administrative documents (e.g. applications, progress reports) provided by AHRQ were used for background information on the CERTs, the investigators, and their research prior to the discussions. We developed sample questions to address each of the areas identified for data collection, and we tailored the questions for each group of respondents. After the discussion guide was drafted, the Project Officer provided input and the guide was finalized. Additionally, Abt conducted an initial site visit to the HMO Research Network CERT as a "live" data collection activity and as a pilot.

We addressed respondent questions and emphasized the need for candid contributions by respondents. Respondents were assured that the information they provided would be used without their name, specific job title, or any other identifier, with the exception of data provided for case studies.17 Open-ended questions phrased in objective language were often used to encourage candor and openness at the start of a discussion. The language used in questions and the sequence of topics in the discussion guides did not imply any particular viewpoint. Probes and follow-up questions were used to obtain examples and evidence behind responses that might initially be articulated as generalizations. The primary Abt staff member who conducted the discussions holds a PharmD degree, which facilitated dialogue about the research topics in therapeutics. Two Abt staff members conducted the on-site discussions: one led the discussion and the other took notes. Telephone discussions were conducted by one Abt staff member, recorded when respondents granted permission, and transcribed by another Abt staff member. The study approach and discussion guides were reviewed and approved by Abt's Institutional Review Board.

Discussion notes were coded using the NVivo software package to annotate and organize the information produced through the tasks outlined above. Each discussion was formatted as required for use with the software and coded. Coded reports list all text associated with each relevant code to facilitate the analysis, interpretation, and summary of the findings relevant to each topic. The coding reports generated from NVivo were often used as one data element that was triangulated with data from other sources. The analysis included examination of similarities and contrasts among the different stakeholder group perspectives, consideration of the context in which perspectives were offered, and review of program documents. The analysis included summaries of the findings for each objective and research question. When appropriate, findings were distinguished across and within stakeholder groups.
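The coding reports can be thought of as groupings of all excerpts tagged with a given code. The sketch below is an illustrative stand-in for that output, not the NVivo workflow itself; the codes, stakeholder labels, and excerpt text are hypothetical.

```python
# Illustrative stand-in for a coding report: group coded discussion
# excerpts by code so all text for a topic can be reviewed together.
from collections import defaultdict

coded_excerpts = [
    {"code": "collaboration", "stakeholder": "CERT investigator",
     "text": "Joint projects with other centers grew each year."},
    {"code": "collaboration", "stakeholder": "Coordinating Center",
     "text": "Steering Committee meetings spurred new partnerships."},
    {"code": "funding", "stakeholder": "CERT investigator",
     "text": "Supplemental funding shaped the research agenda."},
]

report = defaultdict(list)
for excerpt in coded_excerpts:
    report[excerpt["code"]].append(excerpt)

for code, items in sorted(report.items()):
    print(f"== {code} ({len(items)} excerpts) ==")
    for item in items:
        print(f"  [{item['stakeholder']}] {item['text']}")
```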

Once common and differing perspectives and themes were identified, quotes were selected to best illustrate a perspective or theme. Respondent quotes were selected based on cogency and appropriate illustration of the finding(s) being described, rather than as representative of all respondents' perspectives. Respondents' statements are presented in quotation marks when the statement is less than 40 words and in italics, indented, when the statement is more than 40 words. The statements contain the essential content provided by the respondent, but the language was edited to convey the point more clearly. Lastly, the respondent who made a statement, or the stakeholder group he or she represents, is referenced either in the introduction to the statement or in parentheses following it. Depending on the nature and content of the statement, the reference was masked at different levels. For example, if the content of a statement was particularly critical, the reference was to "a CERT investigator" rather than to the investigator of a named CERT, to further ensure confidentiality. We use summary terms such as "a few" and do not usually report specific numbers because the interviews generally did not include yes/no questions; each stakeholder group was asked slightly different questions, so it would be difficult to quantify such responses directly.
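The 40-word presentation rule can be expressed as a small sketch. The word threshold comes from the text above; the formatting markers and the sample statement are illustrative assumptions.

```python
# Minimal sketch of the 40-word quotation rule described above.
def format_quote(statement: str, attribution: str) -> str:
    if len(statement.split()) < 40:
        # Short statements: inline, in quotation marks.
        return f'"{statement}" ({attribution})'
    # Longer statements: indented block (rendered in italics in the report).
    indented = "\n".join("    " + line for line in statement.splitlines())
    return f"{indented}\n    ({attribution})"

print(format_quote("The network made our work possible.", "CERT investigator"))
```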

2.6.3. Data Collection and Analysis of Documents

Documents that were reviewed and used to inform data collection and analysis included those shown in Appendix 3. Prior to interviews, Abt staff carefully reviewed these documents. Where appropriate, extracted data were used to supplement the background information on the CERTs and Portfolio grants. The annual and cumulative progress reports provided quantitative data on program outcomes (e.g., number of publications, number of presentations) as well as qualitative data (e.g., organizational structure, what the researchers considered to be the most important outcomes and impacts). We extracted much of these data from the progress reports to construct (1) an updated list of Portfolio publications, presentations, and other research outputs; (2) descriptions of research findings and outcomes; (3) a compilation of funded educational trainings, courses, or curriculum development (e.g., CME, medical school courses, and patient education Web sites); and (4) a list of CERTs respondents. The resulting databases were used as one source of information about the Portfolio's productivity. To address fulfillment of the educational mission, we collected data on educational activities ranging from single trainings to Web-based modules.
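One way to picture the four extraction targets listed above is as a simple keyed structure. The field names and the sample record below are hypothetical, not the evaluation's actual database schema.

```python
# Illustrative structure for data extracted from progress reports.
extracted = {
    "publications": [],            # (1) publications, presentations, outputs
    "findings": [],                # (2) research findings and outcomes
    "educational_activities": [],  # (3) trainings, courses, curricula
    "respondents": [],             # (4) list of CERTs respondents
}

# Hypothetical example record for an educational activity.
extracted["educational_activities"].append({
    "cert": "AZ", "year": "2003-2004",
    "type": "CME course", "title": "Example course title",
})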

We also quantified publications,18 books/book chapters, and lectures/presentations and performed bibliometric analyses on reported publication information using standard measures of scientific productivity. Bibliometric analysis included basic counts of publications by CERT, by year, and by publication type (e.g., journal article, abstract, conference proceeding), and noted whether a journal was high impact,19 as indicated by its impact factor (IF). A journal's impact factor is based on two elements: "the numerator, which is the number of citations in the current year to any items published in a journal in the previous 2 years, and the denominator, which is the number of substantive articles (source items) published in the same 2 years."19
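The quoted definition reduces to a single ratio; the sketch below restates it, with made-up figures used purely for illustration.

```python
# Sketch of the journal impact factor definition quoted above: citations in
# the current year to items published in the previous 2 years, divided by
# the number of substantive (source) items published in those 2 years.
def impact_factor(citations_to_prior_two_years: int,
                  source_items_prior_two_years: int) -> float:
    return citations_to_prior_two_years / source_items_prior_two_years

# Hypothetical figures: 4,200 citations to the prior two years' articles,
# 600 source items published in those years -> IF = 7.0
print(impact_factor(4200, 600))
```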

We compiled a breakdown of funding by source and percent of total funding from the information provided by each CERT. To evaluate the utility and effectiveness of investigator progress reporting to AHRQ as a management tool for the Portfolio, we reviewed and coded these reports for content relevant to the research objectives and questions. Common formats and information across CERTs were noted, as were inconsistencies. The site visits and discussions offered data to externally validate and/or update the data extracted from the progress reports. Evidence that addressed the following research questions was extracted from the progress reports and coded: (1) What have been the research outputs? (2) What have been the educational outputs? (3) What have been the program impacts? (4) Is investigator progress reporting complete, accurate, and timely? Is it adequate to assess inputs/outputs/outcomes/impacts?
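The funding breakdown mentioned at the start of this paragraph is a percent-of-total calculation; the sketch below shows that calculation with hypothetical sources and dollar amounts.

```python
# Minimal sketch of the funding breakdown: share of total funding by source.
# Sources and amounts are hypothetical.
funding = {"AHRQ core award": 1_000_000, "Other federal": 400_000,
           "Foundation": 100_000}
total = sum(funding.values())
for source, amount in funding.items():
    print(f"{source}: {amount / total:.1%} of total")
```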

When we report findings and when statements were pulled directly from Portfolio documents (e.g., progress reports, annual reports), the reference is provided. For example, if an output of the Arizona CERT was reported in its progress report for 2003-2004, the finding is referenced as AZ PR 03-04, to indicate the Arizona progress report (PR) for the year 2003-2004. As another example, AR Y5 indicates the annual report for year 5. Excerpts taken from Portfolio documents, research abstracts, or articles are referenced in this way. Citations for respondent statements are similar but italicized and indented if longer than 40 words.
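The reference convention is mechanical enough to express as a small helper; the function name and parameters below are hypothetical and simply mirror the pattern described above.

```python
# Illustrative helper for the document-reference convention described above
# (e.g., "AZ PR 03-04" for the Arizona progress report covering 2003-2004).
def document_reference(cert_abbrev: str, doc_type: str, years: str) -> str:
    # doc_type: "PR" for progress report, "AR" for annual report, etc.
    return f"{cert_abbrev} {doc_type} {years}"

print(document_reference("AZ", "PR", "03-04"))   # -> "AZ PR 03-04"
```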

The publication list was compiled from the Coordinating Center publication data file, together with updates provided directly by each center, and was coded for type of publication and characterized by CERT and evaluation year (2002-2005). The presentations list was compiled from the CERTs Web site, http://www.certs.hhs.gov. We compiled the educational outputs from the various document sources referenced above, in addition to the individual CERTs' Web sites and the descriptions provided in the discussions and site visits.

2.6.4. Impact Case Study Data Collection and Analysis

The case studies relied primarily on data collected through the discussion and documentation review data sources described above. These data sources identified candidate case studies and supplied more in-depth information about the cases. Additional telephone discussions were conducted with the PIs to obtain further details of each case, its findings, and background on how the research was able to achieve its impact. Discussions were also conducted with members of the target audience for the case study outputs, such as policymakers (CMS, FDA, NIH), clinical directors, or partners. As appropriate, discussions were conducted via telephone and took place soon after the case studies were selected. In addition to the data collected through discussions and documentation, information was gathered from a literature and media search on each case study topic to lend support to the case studies. For each case study, an Internet search was conducted to identify the publication(s) of the case study, pick-up by Web sites, discussions conducted with the PI, and other relevant publicly available information. The next stage involved integrating the data. For each case, a timeline of events (publications, reports, and related statements from discussions) was constructed. Given the small number of cases and the diversity of topics and impacts, it was difficult to identify common mechanisms of impact, so we describe the mechanisms that arose in each of the case studies.
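The timeline-construction step amounts to merging dated events from several sources and sorting them. The sketch below illustrates that step under the assumption of a simple list of records; all events shown are hypothetical.

```python
# Minimal sketch of assembling a case-study timeline: events drawn from
# publications, reports, and discussion notes, merged and sorted by date.
from datetime import date

events = [
    {"date": date(2003, 6, 1), "source": "publication",
     "event": "Primary study published"},
    {"date": date(2004, 2, 15), "source": "discussion",
     "event": "PI describes uptake by a partner organization"},
    {"date": date(2003, 11, 10), "source": "report",
     "event": "Finding cited in an annual progress report"},
]

for item in sorted(events, key=lambda e: e["date"]):
    print(f"{item['date']:%Y-%m}  [{item['source']}]  {item['event']}")
```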

2.6.5. Appreciative Inquiry Data Collection and Analysis

Data were collected by a facilitator with a discussion guide during a CERT Steering Committee Meeting. Some of the questions for the workshop were derived from the discussion data component of the overall evaluation; those discussions were conducted with key CERTs stakeholders. In addition to the AI workshop, Abt Associates also incorporated AI questions into the discussion guide used in the discussions with key stakeholders at the CERTs. Similar to the AI workshop, these questions were designed to uncover the CERTs' greatest strengths and successes to-date. Volume 2 Attachment 12 contains the discussion guide used in the exercise.


2.7. Limitations of the Methods

2.7.1. Interviews and Site Visits

We selected a sample of Portfolio researchers for either site visits or telephone interviews. Five of the eight CERTs (including the Coordinating Center) were visited, while researchers from the other CERTs were contacted by phone. Researchers who were willing and able to participate in discussions may differ from those who were not, and information collected in person may have differed from information collected by telephone. However, we did speak with a relatively large number of researchers, chosen carefully to represent a variety of perspectives. Our use of publicly available Web sites and CERTs resources to obtain additional information about their projects helped provide information about CERTs not visited. Furthermore, we used the broader program data gained from administrative data review and previous evaluations to frame the discussions.

2.7.2. Document Review

A wide variety of documents were abstracted, and abstraction was constrained only by the availability, completeness, and accuracy of the documents. The most important methodological challenge in using administrative data such as progress reports was ensuring internal and external validity. Examples of threats to internal validity include inaccurate or incomplete citations in a publications list. Threats to external validity include missing citations or citations not truly attributable to the program. If the available documents were systematically more likely to include certain types of information (e.g., publications from earlier program years), this might have introduced bias.

2.7.3. Impact Case Studies

While we hoped to learn a great deal about the impact of the cases we chose, that understanding may be difficult to generalize and the studies selected may not always be the best examples. While we selected a variety of cases, these cases are not necessarily representative of the impacts of all CERTs products. Furthermore, the validity of the mechanisms we identified was entirely dependent on the availability of relevant data. Finally, the endpoints of the impact case studies for the purposes of this evaluation were intermediate with respect to the ultimate outcome of the therapeutic under study. Instead of measuring changes in medical practice or improvement in patient survival (the ultimate outcome), we assessed a necessary step in the process — the impact of CERTs research as a proxy for the ultimate outcome.

2.7.4. Social Network Analysis

Social network analysis can only represent the data used to create the network diagrams or measures and can tell us only a limited amount about why the network has formed as it has. In addition, the social network analysis is a snapshot in time and may not adequately address the dynamic nature of the network. Thus, changes in the collaborative nature of colleagues or centers after data collection could not be represented in this analysis. Similarly, leaves from work, the shifting workload of the academic calendar, and the yearly funding or fiscal cycle may all affect how people recall their current social relationships and as such might be reflected in the quantity/quality of the relationships reported. These effects were partly mitigated through careful construction of the discussion guide and through secondary analysis of CERTs materials to validate reported relationships.

2.7.5. Appreciative Inquiry Exercise

The most important design limitation to this exercise within the study is that the discussion facilitated by Abt was a one-time data collection event. Ideally, appreciative inquiry is conducted as a multi-stage process, but resource constraints required that it be condensed in this case.
