Wednesday, March 16, 2011

How Honest Reports of Research Can Still Overhype New Drugs

In recent months, several important books and articles have appeared that jointly help us understand much better how we can be misled about the value of new pharmaceuticals from reports in the medical literature. In a later post I'll try to pull all the strands together to give a big picture. Here I want to get on record a very nice article by a major expert in research analysis that contributes some of the key threads. (Hat tip to Rick Bukata and Jerry Hoffman at Primary Care Medical Abstracts for citing this paper.)

Our expert of the day is John Ioannidis from Greece, whose work on debunking the claims of the research literature has even made it into the popular press. The article in question appeared in the BMJ (subscription required).

We have focused a lot in previous posts on one way a drug company can mislead us--suppressing negative research data and spinning neutral data to make it seem positive. Ioannidis asks this question: how can we be misled even if the company is scrupulously honest in reporting the data?

There are two major ways, the authors report, and they can be illustrated by a specific case study, the research history of tumor necrosis factor blocker drugs for cancer and rheumatoid arthritis. (I'll summarize the general points here; you can read the paper if you want the details of the TNF story.)

First, drug companies commonly try out a drug on numerous conditions, hoping to expand its sales potential. Typically, for each condition, the drug is tested against an array of outcomes, as many as 10-20 per study. (For example, a cancer drug might be reported in terms of outcomes such as survival at 3, 6, 9, 12, 15, and 18 months, as well as quality of life measures, time to first metastasis, etc.) Statistician Ioannidis reminds us (the reminder really shouldn't be needed) that we can do the math and calculate how many of these outcome measures will be positive simply by chance, assuming that the drug is actually no better than placebo--or in the more common case, is a little bit better than placebo, while maybe also having some significant adverse reactions and a high cost. If you look at 20 outcomes per trial and conduct trials of the drug in 6 different medical conditions, the odds are that for each condition, at least one outcome will be statistically significant in favor of the drug. If the company plays its cards right, it can get regulatory approval to market the drug for all 6 conditions, even though the positive results may have arisen purely by chance and indicate no real benefit.
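
To see how fast those chance "wins" pile up, here is a minimal back-of-the-envelope sketch in Python. The specific numbers (20 outcomes per trial, a 0.05 significance threshold, 6 conditions) are just the illustrative figures from the paragraph above, not data from the paper, and the calculation treats the outcomes as independent, which real outcomes rarely are.

```python
# Back-of-the-envelope multiple-comparisons arithmetic (illustrative numbers only).
ALPHA = 0.05      # conventional p < 0.05 significance threshold
OUTCOMES = 20     # outcomes measured per trial (assumed, and assumed independent)
CONDITIONS = 6    # conditions the drug is tried in (assumed)

# If the drug truly does nothing, the chance that at least one of the 20 outcomes
# comes up "statistically significant" anyway:
p_false_win = 1 - (1 - ALPHA) ** OUTCOMES
print(f"Chance of >= 1 chance 'positive' outcome per condition: {p_false_win:.0%}")  # ~64%

# Expected number of the 6 conditions that will show at least one positive outcome:
print(f"Expected conditions with a 'positive' result: {CONDITIONS * p_false_win:.1f} of {CONDITIONS}")
```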

The second mechanism is one we've previously looked at: early stopping of trials. In those earlier posts (such as http://brodyhooked.blogspot.com/2009/11/no-fair-peeking-more-questions-about.html), I erred in focusing on the question of whether the company inappropriately pressured the data safety and monitoring committees to end the trials early in ways that benefited marketing. Ioannidis shrewdly reminds us that we don't have to assume any skullduggery to see how stopping trials early could exaggerate the drug's efficacy. Suppose we simply do what DSM committees are routinely told to do--for ethical reasons, so that research subjects are not put at unnecessary risk: if a treatment reaches a pre-specified level of statistical significance showing superiority, the trial is stopped, on the belief that you have the answer and that continuing the trial longer would not change things. But that's surely wrong, the authors say, because of the well-known phenomenon of regression to the mean. If, at any given stage in the research, the drug is beating the placebo by, let's say, 20%, and you quit then, you report that the drug is better than placebo by 20%. Yet if you'd continued the trial longer, the odds are excellent either that the drug would have turned out to be no better at all, or else that the true degree of superiority is 5% or 10%, not 20%. The authors cite a previous paper that analyzed 91 early-stopped trials and demonstrated these effects clearly.
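
Here is a rough simulation sketch of that effect. This is my own toy model, not the analysis from the 91-trial paper; the trial size, true benefit, interim-look schedule, and stopping threshold are all invented for illustration.

```python
# Toy simulation: how a stop-when-ahead rule inflates the reported benefit.
# All parameters below are invented for illustration.
import random

TRUE_BENEFIT = 0.10   # assume the drug truly improves the success rate by 10 points
N_PER_ARM = 1000      # planned patients per arm
LOOK_EVERY = 100      # interim look after every 100 patients per arm
STOP_AT = 0.20        # stop early if the observed advantage reaches 20 points

def run_trial(rng):
    drug = placebo = 0
    for i in range(1, N_PER_ARM + 1):
        drug += rng.random() < 0.50 + TRUE_BENEFIT
        placebo += rng.random() < 0.50
        if i % LOOK_EVERY == 0:
            observed = (drug - placebo) / i
            if observed >= STOP_AT:
                return observed, True            # stopped early on an extreme high
    return (drug - placebo) / N_PER_ARM, False   # ran to completion

rng = random.Random(0)
results = [run_trial(rng) for _ in range(5000)]
early = [eff for eff, stopped in results if stopped]
full = [eff for eff, stopped in results if not stopped]

print(f"True benefit built into the simulation: {TRUE_BENEFIT:.0%}")
if early:
    print(f"{len(early)} trials stopped early; mean reported benefit {sum(early)/len(early):.0%}")
print(f"{len(full)} trials ran to completion; mean reported benefit {sum(full)/len(full):.0%}")
```

By construction, every trial that stops early reports an advantage of at least 20 points, roughly double the true 10-point benefit, while the trials that run to completion report something much closer to the truth.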

On this topic I like to use the analogy of a horse race. We all know that the right way to run a race is to run a given distance, and the first horse across the finish line is the winner. Suppose, though, that we decide a statistically significant lead is 1-1/2 lengths, and we adopt a new rule: as soon as one horse is out in front by at least 1-1/2 lengths, you stop the race and declare that horse the winner, even if the field has only gone a quarter of the distance. How often do you think the winner under the new rule would be the same horse that would have won under the old one?
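
Out of curiosity, here is a quick toy simulation of that horse race. Everything in it is made up: equally matched horses, a random-walk stride, an arbitrary race length, and a "lead" measured in the same arbitrary units as the strides.

```python
# Toy horse race: how often does the stop-early "winner" match the full-distance winner?
# All parameters (field size, race length, lead threshold) are made up.
import random

def race(rng, horses=8, steps=500, lead_to_stop=1.5):
    positions = [0.0] * horses
    early_winner = None
    for _ in range(steps):
        for h in range(horses):
            positions[h] += rng.uniform(0.0, 2.0)   # equally matched horses
        order = sorted(range(horses), key=lambda k: positions[k], reverse=True)
        if early_winner is None and positions[order[0]] - positions[order[1]] >= lead_to_stop:
            early_winner = order[0]                  # the stop-the-race-here winner
    full_winner = max(range(horses), key=lambda k: positions[k])
    return early_winner, full_winner

rng = random.Random(0)
outcomes = [race(rng) for _ in range(1000)]
decided_early = [(e, f) for e, f in outcomes if e is not None]
if decided_early:
    same = sum(e == f for e, f in decided_early)
    print(f"Races that hit the early-lead threshold: {len(decided_early)} of {len(outcomes)}")
    print(f"Stop-early winner matched the full-distance winner: {same / len(decided_early):.0%}")
```

Not very often, is the answer a sketch like this suggests: the early leader's edge is mostly noise, so the full-distance winner is usually some other horse, even though no horse here is actually better than any other.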

Ioannidis adds a further wrinkle. If you stop a trial early, the results are also published early. If you let a trial run its normal course, the results will be published much later. A trial that's stopped early is stopped because the treatment looks very good early on. A trial that is not stopped is therefore one in which the treatment does not look nearly so good for most of its duration. That means the first data published about a new drug will almost certainly be unrepresentatively positive, and the more negative data will come rolling in much more slowly. (Or not at all, if we add the common tactic of data suppression to the company's bag of tricks.)

Bottom line: if companies were scrupulously honest in reporting data, you could still end up concluding that a new drug is much more effective than it really is. But we know, as documented here ad nauseam, that all too often this scrupulous honesty is honored in the breach rather than the observance. So if you add a little sprinkling of dishonesty or spin to the factors Ioannidis cites, then you have an even more misleading picture.

Ioannidis proceeds to explain how the authors of systematic literature reviews and meta-analyses can try to correct for these sources of bias. But for us the main lesson is to understand these sources of bias and how they operate. Later I'll try to connect the dots between these concepts and other recent analyses of sources of bias in the research literature and its interpretation.

Ioannidis J, Karassa F. The need to consider the wider agenda in systematic reviews and meta-analyses. BMJ 341:761-64, 9 October 2010.

1 comment:

Joseph P Arpaia, MD said...

That horse race analogy is AWESOME! Thanks for making these issues so clear.