Sunday, September 26, 2010

Tilting the Pinball Table, for Fun and Profit

Thanks to faithful reader Marilyn Mann once again for word of this posting on the blog run by an anonymous neuroscientist from the UK:
http://neuroskeptic.blogspot.com/2010/09/big-pharma-explain-how-to-pick-cherries.html

Background: My colleague Dan Moerman, a medical anthropologist at U Michigan-Flint, published a classic paper on the placebo effect nearly 30 years ago. He looked at about 35 published studies of the then-new-miracle drug, cimetidine (Tagamet) for healing peptic ulcers, all of which had almost identical methods--the patient was endoscoped at the start of therapy and then a month later to see whether the ulcer had healed and what size it was. (They didn't have the term back then, as I recall, but Moerman did an early meta-analysis.) He showed a number of surprising things:
  • According to his meta-analysis, cimetidine was actually no better than placebo.
  • Cimetidine, however, was quite consistent in its effects across studies. No matter where the study was done (a wide range of international sites were represented), the healing rate in the cimetidine-treated group at one month was about 70-75%.
  • If you looked at the individual studies, about half showed that cimetidine was superior to placebo, and half showed it wasn't.
  • Since the cimetidine response was so consistent, the only variable left to explain this inconsistency had to be the placebo response rate. And indeed that varied from a low of 10 percent to a high of 80 percent.
  • So whether cimetidine was shown in any individual study to be better than placebo had virtually nothing to do with the cimetidine response rate and everything to do with the placebo response rate (a quick numerical sketch follows this list).
  • The placebo response rate in these studies is not the same as the "placebo effect." Studies of this sort cannot distinguish between healing of ulcers caused by administering a placebo, vs. healing of ulcers due to other causes (primarily, spontaneous remission). Most ulcers, given time, heal. However, it would be contrary to most of what we think about peptic ulcers to imagine that the spontaneous healing rate of these ulcers differs so widely among study centers in different countries. So it is more plausible that differing rates of placebo effect among study sites primarily account for the large differences.
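To make the arithmetic concrete, here is a minimal sketch in Python; the arm size, the significance test, and the exact response rates are my own illustrative assumptions, not Moerman's data.

```python
# Illustrative only: with the drug response pinned near 72%, whether a single
# trial "shows" superiority depends almost entirely on that site's placebo rate.
from scipy.stats import fisher_exact

n_per_arm = 50                                  # assumed size of each arm
drug_rate = 0.72                                # roughly constant across sites
placebo_rates = [0.10, 0.30, 0.50, 0.70, 0.80]  # spanning the range Moerman reported

for p in placebo_rates:
    drug_healed = round(drug_rate * n_per_arm)
    placebo_healed = round(p * n_per_arm)
    table = [[drug_healed, n_per_arm - drug_healed],
             [placebo_healed, n_per_arm - placebo_healed]]
    _, pval = fisher_exact(table, alternative="greater")
    verdict = "drug looks superior" if pval < 0.05 else "no significant difference"
    print(f"placebo rate {p:.0%}: p = {pval:.3f} -> {verdict}")
```

With those numbers the drug "wins" at the low placebo rates and is statistically indistinguishable at the high ones, even though its own response never changes.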

Now, let me make two points about Moerman's (subsequently replicated) research. First, it reveals a real problem in using the typical two-armed, placebo-controlled, double-blind randomized trial to assess drug effects. It reveals that the placebo arm of the trial can be a source of "noise" that might obscure a presumably real drug effect. Second, I take Moerman's work to be real science. Moerman was not trying to sell Tagamet. (Nor, so far as I know, did he own stock in a placebo company.) Moerman was trying to understand which factors determine the outcome of placebo-controlled studies and, quantitatively, how much of the result can be attributed to each factor.

Fast-forward to the article reviewed by Neuroskeptic in his blog. It's one of a series of studies funded by drug companies either directly or indirectly, and it differs from earlier entries in the series (according to Neuroskeptic, at least) only in its brazenness. If you are trying to sell drugs, then you really want to take what Moerman observed and work it to your advantage. What that usually means is trying to manipulate the placebo arm of the trial so as to reduce, as much as possible, the response rate among subjects randomized to that arm--thereby assuring that the subjects taking your drug have the best possible chance of doing better than their placebo counterparts.

Hence, in the name of "accuracy" or, more usually, "efficiency," we get a variety of proposals that all amount to various ways of ignoring or tossing out data when the placebo effect is inconveniently high. These efforts fall (in my view) along a spectrum. At one end we have relatively innocent and well-reasoned alterations of study design that try to correct for extreme and obvious distortions that lead to underestimating the true drug effect. At the other end of the spectrum are blatant efforts to wipe out unfavorable data and replace them with good-looking data, science be damned. You can read the post about this most recent proposal from GlaxoSmithKline and you be the judge. (Neuroskeptic thinks it's an extreme case of tilting the pinball table by eliminating all study sites that have an "abnormally" high placebo response rate, thereby assuring that your drug will emerge the winner.)
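To see why this worries people, here is a toy simulation of my own (it is not the method in the GSK paper, and the 0.6 cutoff is invented): both arms share the same site-level response rates, so the drug truly does nothing, yet throwing out the sites with the highest observed placebo response makes the pooled drug arm look better on average.

```python
# Toy demonstration: selecting sites on the observed placebo response biases the
# pooled comparison in favor of the drug, even when the drug has no effect at all.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_per_arm, n_sims = 20, 30, 2000
diffs_all, diffs_trimmed = [], []

for _ in range(n_sims):
    site_rates = rng.uniform(0.3, 0.7, n_sites)    # per-site response rate, shared by both arms
    placebo = rng.binomial(n_per_arm, site_rates)  # responders per site; the drug adds nothing
    drug = rng.binomial(n_per_arm, site_rates)
    keep = placebo / n_per_arm < 0.6               # drop sites with an "abnormally high" placebo response
    diffs_all.append((drug.sum() - placebo.sum()) / (n_sites * n_per_arm))
    if keep.any():
        diffs_trimmed.append((drug[keep].sum() - placebo[keep].sum()) / (keep.sum() * n_per_arm))

print("mean drug-minus-placebo difference, all sites kept: ", round(float(np.mean(diffs_all)), 3))
print("mean difference after excluding high-placebo sites: ", round(float(np.mean(diffs_trimmed)), 3))
```

The first number hovers around zero, as it should; the second is reliably positive, which is the whole problem.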

My own view is that most efforts, at most points along the spectrum, run afoul of one basic consideration. In the real world of medical practice, the placebo effect is omnipresent. Further, while in a study setting one might have a legitimate reason to try to minimize placebo effects (in both arms of the trial equally), in the world of clinical medicine practitioners do everything they can, most of the time, to augment the placebo effect, quite appropriately, since this makes more patients get better faster. So any study that tries to get "better" data by minimizing the placebo effect is unlikely to tell us how the drug will perform in actual practice settings.

Merlo-Pich E, Alexander RC, Fava M, & Gomeni R. A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs. Clinical Pharmacology and Therapeutics. PMID: 20861834 (published online 22 Sept. 2010).

2 comments:

Neuroskeptic said...

Good post, I wasn't aware of the background that you describe.

I agree that there are good reasons to want to look into reducing placebo responses, but the risk of bias has to be taken into account and any proposed method should be tested against simulated data in which there is no effect, in order to verify that it doesn't create false positives. This paper's method wasn't.
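For instance, one quick version of that check (toy numbers, not the paper's actual algorithm) is to simulate trials in which drug and placebo are identical, apply a "drop the high-placebo-response sites" rule, and count how often a significance test then favours the drug:

```python
# If the exclusion rule were harmless, the false-positive rate should stay near 5%.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(2)
n_sites, n_per_arm, n_sims, alpha = 20, 30, 500, 0.05
false_pos, tested = 0, 0

for _ in range(n_sims):
    rates = rng.uniform(0.3, 0.7, n_sites)      # per-site response rate, shared by both arms
    placebo = rng.binomial(n_per_arm, rates)    # the drug truly does nothing
    drug = rng.binomial(n_per_arm, rates)
    keep = placebo / n_per_arm < 0.6            # the enrichment-style exclusion
    if not keep.any():
        continue
    tested += 1
    d, p = drug[keep].sum(), placebo[keep].sum()
    n = keep.sum() * n_per_arm
    if fisher_exact([[d, n - d], [p, n - p]], alternative="greater")[1] < alpha:
        false_pos += 1

print(f"false-positive rate after exclusion: {false_pos / tested:.1%} (nominal {alpha:.0%})")
```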

I've been thinking about how one might do this properly, and I think one approach would be to plot Placebo Response vs. Drug Response for each center, and test whether the correlation was what you'd expect assuming the situation was like cimetidine.

From such a scatter plot it should be fairly obvious whether high placebo responses were in fact "diluting" the effect, and there should be statistical ways of testing that.
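Roughly along these lines (the per-centre rates here are hypothetical, and a single Pearson correlation is only one reasonable choice of test):

```python
# In a cimetidine-like situation the drug response is flat across centres, so it
# should be essentially uncorrelated with the placebo response.
import numpy as np
from scipy.stats import pearsonr
import matplotlib.pyplot as plt

placebo_resp = np.array([0.12, 0.25, 0.40, 0.55, 0.68, 0.79])  # hypothetical per-centre rates
drug_resp = np.array([0.71, 0.74, 0.70, 0.73, 0.75, 0.72])     # hypothetical per-centre rates

r, pval = pearsonr(placebo_resp, drug_resp)
print(f"correlation r = {r:.2f}, p = {pval:.2f}")

plt.scatter(placebo_resp, drug_resp)
plt.xlabel("placebo response rate (per centre)")
plt.ylabel("drug response rate (per centre)")
plt.title("Is a high placebo response really diluting the drug effect?")
plt.show()
```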

What you can't do is to just assume that that's what's going on, and bin some of the data under that assumption.

Anonymous said...

Explaining unusual and/or unexpected results is commonplace in clinical trial work. However, if we let go of the bedrock of our method, namely faith in the double-blind RCT, then we will be truly adrift.
It is my belief that placebo rates do vary between centres for two reasons: (1) as a natural function of random event distribution: sometimes the clustering can appear very extreme, but that is the way it is with random events, and that is why trials need to be large enough to avoid missing true effects; and (2) centres treat patients slightly differently, but if blinding is appropriately done then both arms at a given site will vary in the same way.
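Point (1) is easy to see with a quick simulation (purely made-up numbers): with a true placebo response of 40% and only 20 patients per centre, the observed rates scatter widely across centres by chance alone.

```python
# Chance variation alone produces a wide spread of observed placebo rates.
import numpy as np

rng = np.random.default_rng(1)
observed = rng.binomial(20, 0.40, size=30) / 20   # 30 centres, 20 patients each, same true rate
print("lowest centre: ", f"{observed.min():.0%}")
print("highest centre:", f"{observed.max():.0%}")
```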
Messing with the random distribution of the placebo effect is only going to add another layer of opacity to our ability to read published clinical papers and is therefore a bad thing.
Regarding the cimetidine data from 30 years ago, I find it fascinating, but for a different reason. Trial conduct, design and governance have improved significantly over the last decade, and even now I would expect to see variation in the results from small trials. Seeing the cimetidine data so consistent over many smallish trials while the placebo rate varies so widely, I am inclined to question the conduct of the trials. I would particularly focus on reviewing the blinding of the investigators to the treatment.
In my experience, human error (trial design and/or conduct) is the source of unexpected results. Sometimes a true effect is missed, but sometimes the result is just not what we wanted to see.
Dr A