Commentary

Cost-Effectiveness Analyses: Making a Pseudoscience Legitimate

By Drummond Rennie, University of California, San Francisco.


In this essay, I shall not attempt to deal with the many points raised in Peter D. Jacobson and Matthew L. Kanna's presentation. I shall merely note that they find cost-effectiveness analyses (CEAs) inevitable, unpopular both politically and with juries, and often unacceptable in product liability cases but increasingly accepted in negligence cases and in the regulatory arena. Their future in the courts is uncertain.

This seems to suggest that the lay public has difficulty finding CEAs credible. As a medical editor who sees many such analyses, and as someone who accepts that they are indeed inevitable, I think that the man in the street is perfectly right to be highly skeptical of their worth.

Their unpopularity is scarcely surprising, if only because when beliefs conflict with evidence, beliefs tend to win. It seems to be against human nature to trust scientific evidence. In October 1998, I traveled from Milan to Palermo, in Sicily, where I was due to give a talk on evidence-based medicine, so I was brooding about the meaning of evidence. As I went through the metal detector in the airport in Milan, lights flashed and loud horns blared: my titanium hip showing up again. Nearby was a group of six carabinieri, arguing about football. No one took the slightest notice. I could have been carrying a hydrogen bomb. In other words, no action in the face of irrefutable evidence to act. During the flight, late at night, a youngish woman collapsed, everyone got into a panic, and, reluctantly, I responded to the call for a doctor. The cramped seating made it impossible to examine her, and I had no instruments. She seemed dead to me, but if she were dead, I couldn't help her, and if she were alive, I'd probably harm her. Simply to calm the mounting panic, I announced in a loud voice that she would recover completely. Immediately, calm was restored, and after ten minutes, to my astonishment, the woman suddenly sat up. Decisive action in the face of highly ambiguous evidence.

Yet, if we are to make decisions about our patients and about our health care system, there is no alternative to trying to wring as much out of the science as we legitimately can. Where treatments are concerned, given that all of us believe there should be no limit to what the system should give us in the way of the best care, to pretend that our decisions have nothing to do with cost is to live in a world of fantasy. So we are stuck with CEAs and with trying to keep them honest. And therein lies a substantial difficulty. Perhaps the principal job of an editor like myself is to weed out and prevent the publication of biased studies. As Hal Luft and I described in an essay published just after the Institute of Medicine/Agency for Healthcare Research and Quality conference on which these comments are based, CEAs tend to be riddled with bias, and it is not hard to see that money is the reason (Rennie and Luft 2000).

In at least three countries, CEA has been mandated by law for use by agencies responsible for deciding whether taxpayer money should pay for pharmaceuticals already approved for use. The stakes are high. When I arrived in London last October, the front pages of all the newspapers showed entertaining pictures of the scowling chairman of Glaxo Wellcome, losing his temper publicly after NICE (the National Institute for Clinical Excellence) had decided that his new anti-influenza drug, Relenza, would not be paid for under the National Health Service. He declared NICE to be nasty and threatened to pull all his operations, and some 40,000 employees, out of the United Kingdom. In both Ontario and Australia, heavy pressure, political and legal, is being put on such agencies by companies for which hundreds of millions of dollars are at stake (ibid.; Nasty Start 1999).

Given the importance to manufacturers of showing that their drugs are more effective for a lower cost than the competition, there is a great incentive for companies to conduct cost-effectiveness studies. It has been shown that studies funded by pharmaceutical companies rarely reach conclusions unfavorable to the drug company's product (Rennie and Luft 2000). It has also been shown that the rules for good CEAs are widely flouted (Udvarhelyi et al. 1992) and that CEAs are unstandardized. Marketers choose the often ineffective comparison drugs and the type of analysis; control the data seen by the researcher; dictate the models and assumptions; pull the researchers off studies that seem unfavorable to their drug; and block publication of such results (Hillman et al. 1991).
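To see how much the choice of comparator alone can matter, consider a purely hypothetical worked example; every figure below is invented for illustration. The standard summary statistic of a CEA is the incremental cost-effectiveness ratio (ICER): the difference in cost between the new drug and its comparator, divided by the difference in effectiveness, typically expressed in dollars per quality-adjusted life-year (QALY) gained. Suppose a sponsor's new drug costs $10,000 and yields 1.00 QALY, the best available rival costs $4,000 and yields 0.95 QALY, and an older, less effective drug costs $3,000 and yields 0.70 QALY. Measured against the strong rival, the new drug's ICER is ($10,000 - $4,000)/(1.00 - 0.95) = $120,000 per QALY gained; measured against the weak comparator, it is ($10,000 - $3,000)/(1.00 - 0.70), or roughly $23,000 per QALY. The same drug and the same data appear five times more cost-effective, and nothing in the arithmetic is dishonest: the distortion lies entirely in the choice of comparator.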

Indeed, Robert Evans (1995: 59) has called the science of such CEAs, pharmacoeconomics, a "pseudo-discipline . . . conjured into existence by the magic of money." Evans (ibid.) observes: "There are a lot of drugs, and there is a lot of money, so the 'field' is booming." The experience of the Pharmaceutical Evaluation Section of the Australian Pharmaceutical Benefits Scheme, published in my journal, the Journal of the American Medical Association, suggests that Evans does not exaggerate (Hill, Mitchell, and Henry 2000). Hill and her colleagues report that this agency, provided by law with all available information, found that 67 percent of presentations concerning drugs already approved for use raised "serious problems of interpretation."

What is to be done? Given that there is no sensible alternative to CEA, we need to improve the design, conduct, analysis, and reporting of such analyses, and of course of the trials upon which they are based. Editors and trialists have made a good start with the CONSORT rules for the reporting of trials, which are proving successful in making the conduct of trials transparent to the reader (Begg et al. 1996). Editors must also encourage naturalistic, real-world trials, in which patients and physicians both pursue their usual practices and which are more likely to result in useful CEAs.

Editors must demand of the authors of CEAs full disclosure of all commercial ties, together with statements that the authors were in no way inhibited, that their access to information was not restricted, and that their right to publish was not controlled by the funding company. But the key to good reporting comes from recognizing that CEAs are, in essence, reviews. Giving reviews a formal, systematic structure to reduce bias in selection, and making all methods and criteria for selection rigorous and transparent, has revolutionized their reporting (Mulrow 1987).

The problem with editors ordering complete transparency in reporting, however, is that such a demand is unworkable, given the limits of journal peer review. The article from Australia shows why. Hill et al. (2000) showed that it takes specially trained experts, working full time with all the information at hand and spreadsheets already in place, about two weeks to review the CEAs provided by industry. We agree with them that part-time journal peer reviewers cannot possibly identify all the many serious problems.

The answer in a curious way leads us scientists to emulate the lawyers. Journals will have to admit that their own peer review cannot be completely adequate. Editors must then insist that authors provide all the background material for their articles, to be used by the peer reviewers in the first instance, and then to be placed on the journal's Web site, so that the models, the criteria for selection of studies, the studies themselves, and the assumptions can all be examined in detail. Needless to say, those who are likely to examine the data most closely will be the payers and the manufacturers of competing products. As we have pointed out, "In essence, our proposal would be similar to the discovery process in lawsuits, whereby each side has access to the underlying data that may be presented. . . . We are saying that methodological choices determine the output, the results, and we need to be able to examine those choices by setting up a much more open process. Is it possible to publish credible cost-effectiveness analyses sponsored by drug companies? We'll see, if we can all see the data" (Rennie and Luft 2000: 2012).

References

Begg, C., M. Cho, S. Eastwood, R. Horton, D. Moher, I. Olkin, R. Pitkin, D. Rennie, K. F. Schulz, D. Simel, and D. F. Stroup. 1996. Improving the Quality of Reporting of Randomized Controlled Trials. The CONSORT Statement. Journal of the American Medical Association 276:637-639.

Evans, R. G. 1995. Manufacturing Consensus, Marketing Truth: Guidelines for Economic Evaluation. Annals of Internal Medicine 123:59-60.

Hill, S., A. Mitchell, and D. Henry. 2000. Problems with the Interpretation of Pharmacoeconomic Analyses. Journal of the American Medical Association 283:2116-2121.

Hillman, A. L., J. M. Eisenberg, M. V. Pauly, B. S. Bloom, H. Glick, B. Kinosian, and J. S. Schwartz. 1991. Avoiding Bias in the Conduct and Reporting of Cost-Effectiveness Research Sponsored by Pharmaceutical Companies. New England Journal of Medicine 324:1362-1365.

Mulrow, C. D. 1987. The Medical Review Article: State of the Science. Annals of Internal Medicine 106:485-488.

Nasty Start. 1999. A Nasty Start for NICE (editorial). Lancet 354:1313.

Rennie, D., and H. S. Luft. 2000. Pharmacoeconomic Analyses: Making Them Transparent, Making Them Credible. Journal of the American Medical Association 283:2158-2160.

Udvarhelyi, I. S., G. A. Colditz, A. Rai, and A. M. Epstein. 1992. Cost-Effectiveness and Cost-Benefit Analyses in the Medical Literature. Are the Methods Being Used Correctly? Annals of Internal Medicine 116:238-244.




