National Cancer Institute Research-tested Intervention Programs (RTIPs): Moving Science into Programs for People

16 Review Criteria for Research Integrity

1. Theory-Driven Measure Selection: Outcome measures for a study should be selected before data are collected and should be based on a priori theories or hypotheses.
2. Reliability: Outcome measures should have acceptable reliability to be interpretable. "Acceptable" here means reliability at a level that is conventionally accepted by experts in the field.
3. Validity: Outcome measures should have acceptable validity to be interpretable. "Acceptable" here means validity at a level that is conventionally accepted by experts in the field.
4. Intervention Fidelity: The "experimental" intervention implemented in a study should have fidelity to the intervention proposed by the applicant. Instruments for measuring fidelity that have tested, acceptable psychometric properties (e.g., inter-rater reliability, validity as shown by a positive association with outcomes) provide the highest level of evidence.
5. Comparison Fidelity: A study's comparison condition should be implemented with fidelity to the comparison condition proposed by the applicant. Instruments for measuring fidelity that have tested, acceptable psychometric properties (e.g., inter-rater reliability, validity as shown by a predicted association with outcomes) provide the highest level of evidence.
6. Nature of Comparison Condition: The quality of evidence for an intervention depends in part on the nature of the comparison condition(s), including assessments of their active components and overall effectiveness. Interventions have the potential to cause more harm than good; therefore, an active comparison intervention should be shown to be better than no treatment.
7. Assurances to Participants: Study participants should always be assured that their responses will be kept confidential and not affect their care or services. When these procedures are in place, participants are more likely to disclose valid data.
8. Participant Expectations: Participants can be biased by how an intervention is introduced to them and by an awareness of their study condition. Information used to recruit and inform study participants should be carefully crafted to equalize expectations. Masking treatment conditions during implementation of the study provides the strongest control for participant expectancies.
9. Standardized Data Collection: All outcome data should be collected in a standardized manner. Data collectors trained and monitored for adherence to standardized protocols provide the highest quality evidence of standardized data collection.
10. Data Collection Bias: Data collector bias is most strongly controlled when data collectors are not aware of the conditions to which study participants have been assigned. When data collectors are aware of specific study conditions, their expectations should be controlled for through training and/or statistical methods.
11. Selection Bias: Concealed random assignment of participants provides the strongest evidence of control for selection bias. When participants are not randomly assigned, covariates and confounding variables should be controlled as indicated by theory and research.
12. Attrition: Study results can be biased by participant attrition. Statistical methods as supported by theory and research can be employed to control for attrition that would bias results, but studies with no attrition needing adjustment provide the strongest evidence that results are not biased.
13. Missing Data: Study results can be biased by missing data. Statistical methods as supported by theory and research can be employed to control for missing data that would bias results, but studies with no missing data needing adjustment provide the strongest evidence.
14. Analysis Meets Data Assumptions: The appropriateness of statistical analyses is a function of the properties of the data being analyzed and the degree to which data meet statistical assumptions.
15. Theory-Driven Selection of Analytic Methods: Analytic methods should be selected for a study based on a priori theories or hypotheses underlying the intervention. Changes to analytic methods after initial data analysis (e.g., to "dredge" for significant results) decrease the confidence that can be placed in the findings.
16. Anomalous Findings: Findings that contradict the theories and hypotheses underlying an intervention suggest the possibility of confounding causal variables and limit the validity of study findings.
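Criterion 2's "acceptable reliability" is typically quantified with standard psychometric statistics. As an illustration only (the RTIPs criteria do not prescribe a specific statistic), Cronbach's alpha is one conventional measure of a multi-item scale's internal consistency; a common rule of thumb treats alpha of about 0.70 or higher as acceptable, though conventions vary by field.

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a multi-item scale.

    items[i][j] is respondent i's score on item j. Alpha compares the
    sum of the individual item variances with the variance of the
    respondents' total scores: alpha = k/(k-1) * (1 - sum(item vars)/total var).
    """
    k = len(items[0])  # number of items on the scale
    n = len(items)     # number of respondents

    def var(xs):       # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

For example, a two-item scale on which every respondent gives identical answers to both items yields alpha = 1.0 (perfect internal consistency).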




Intervention Impact Score Calculation

The table below describes how the final Combined Intervention Impact Score is determined for posting on the RTIPs program summary page.

Reach Score definitions:

1 = Low Reach – The study or studies excluded or probably excluded a high proportion of members of the defined target population (i.e., defined according to demographic and/or risk factor characteristics). The intervention tested was not representative of the target population.

3 = Moderate Reach – The study or studies excluded or probably excluded a small but significant proportion of members of the defined target population. The intervention tested may be only partially representative of the target population.

5 = Broad Reach – The study or studies included virtually all relevant members of the defined target population. The intervention tested was representative of the target population.

Reach Score     Effect Size Score     Combined Intervention Impact Score
1 (Low)         1 – Small             1
1 (Low)         3 – Medium            2
1 (Low)         5 – Large             3
3 (Moderate)    1 – Small             2
3 (Moderate)    3 – Medium            3
3 (Moderate)    5 – Large             4
5 (Broad)       1 – Small             3
5 (Broad)       3 – Medium            4
5 (Broad)       5 – Large             5
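Every entry in the table equals the average of the two component scores, so the table can be reproduced mechanically. A minimal sketch, assuming that averaging rule (the RTIPs page presents only the lookup table, not an explicit formula):

```python
# Combined Intervention Impact Score, reconstructed from the table above.
# Assumption: the combined score is the average of the two component
# scores, which matches all nine table entries.

REACH_SCORES = {1, 3, 5}        # 1 = Low, 3 = Moderate, 5 = Broad
EFFECT_SIZE_SCORES = {1, 3, 5}  # 1 = Small, 3 = Medium, 5 = Large

def combined_impact_score(reach: int, effect_size: int) -> int:
    """Return the Combined Intervention Impact Score (1-5)."""
    if reach not in REACH_SCORES or effect_size not in EFFECT_SIZE_SCORES:
        raise ValueError("component scores must be 1, 3, or 5")
    return (reach + effect_size) // 2  # sums are always even, so // is exact
```

For example, Moderate Reach (3) with a Large effect size (5) yields a combined score of 4, matching the table.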
