Monday, August 24, 2009

Sifting Through the Garbage to Find the Science of Antidepressants

There are two bits of text for this sermon. One is a recent meta-analysis by Stone and colleagues, published electronically ahead of print in the BMJ:

http://www.bmj.com.libux.utmb.edu/cgi/content/full/339/aug11_2/b2880?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=suicidality+stone&searchid=1&FIRSTINDEX=0&sortspec=date&resourcetype=HWCIT

The second is Roy Poses' thoughtful blog posting on the same:

http://hcrenewal.blogspot.com/2009/08/high-costs-and-poor-outcomes-lesson.html

The sermon is part of our ongoing series on the newer generation of antidepressants (primarily those known as SSRIs), and how the industry has routinely suppressed evidence of serious side effects and less-than-ideal effectiveness. The result has been many patients put on expensive and often not very helpful drugs that frequently cause unpleasant side effects and, rarely, life-threatening ones. Had the truth been known from the get-go, it's likely that far fewer of these patients would have been subjected to this trial-by-pharmacopoeia.

The true story of these drugs has slowly been making itself known--no thanks to the drug industry--through recent systematic reviews of the literature, both published and unpublished. The present meta-analysis comes from an unlikely source--the FDA. The authors explain that after an FDA advisory committee recommended a black box warning for children and adolescents, due to emerging evidence of an increased risk of suicidal thoughts in those taking SSRIs, the agency was ordered to look into the extent to which this same adverse reaction might be found in adults.

Now, here's the big news. Normally when investigators do a meta-analysis they play a grand game of "let's pretend." They find, let's say, 10 studies of a certain drug compared to placebo. And let's say that each study enrolls 100 subjects. The investigators then re-analyze the data as if they were doing a brand new study of the drug vs. placebo with 1,000 subjects enrolled. But this "as if" game is flawed by the fact that they do not, as a rule, have direct access to the raw numbers generated in each of the previous 10 studies. They have only the published results, and have to work backwards from those results to imagine what would have happened if there had been one grand study involving all 1,000 subjects instead of 10 studies with only 100 each. (That's the simplest version, ignoring the fact that each study might have had slightly different conditions and enrolled a different population of subjects, so that you really mix apples and oranges by combining the studies in the first place.)
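To make the "as if" game concrete, here is a minimal sketch (in Python, with entirely made-up numbers, not figures from the Stone paper) of the crudest form of pooling: adding up event counts from several small trials and treating them as one big trial.

```python
# Naive pooled meta-analysis: treat k small trials as if they were
# one large trial. All counts below are hypothetical, for illustration.

# Each tuple: (events_on_drug, n_on_drug, events_on_placebo, n_on_placebo)
trials = [
    (3, 100, 1, 100),
    (2, 100, 2, 100),
    (4, 100, 1, 100),
]

# Sum the raw counts across trials, as if all subjects were in one study.
drug_events = sum(t[0] for t in trials)
drug_n      = sum(t[1] for t in trials)
plac_events = sum(t[2] for t in trials)
plac_n      = sum(t[3] for t in trials)

drug_risk  = drug_events / drug_n   # risk of the event on drug
plac_risk  = plac_events / plac_n   # risk of the event on placebo
risk_ratio = drug_risk / plac_risk  # crude pooled risk ratio

print(f"pooled risk on drug:    {drug_risk:.3f}")
print(f"pooled risk on placebo: {plac_risk:.3f}")
print(f"crude risk ratio:       {risk_ratio:.2f}")
```

Real meta-analyses avoid this crude lumping (it ignores between-trial differences, the "apples and oranges" problem above) in favor of stratified methods such as Mantel-Haenszel weighting. The point is only this: without access to raw data, even the simple counts in the table above must be reverse-engineered from whatever the published articles chose to report.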

Now, if you happen to be the FDA, this is not how you have to do a meta-analysis. You can demand that the companies supply you with all their raw data. So you can get much closer to re-investigating the actual data of each study. This is what the FDA investigators did with the antidepressant studies. They were able to sum the data from 372 separate studies of 12 different SSRI-type antidepressants. Were their methods fool-proof? No. If drug company research people managed to completely hide the fact that subject #36 (let's say) had suicidal thoughts on day 8 of taking the drug, by misclassifying the reaction as "increased anxiety" rather than "suicidal thoughts," I don't think these authors could have seen through the scam. But compared to the more usual meta-analysis, this was obviously a much deeper-drilled study.

What did the authors find? Basically, the likelihood of suicidal thoughts and behavior with this group of antidepressants is highly age-related. The increased risk extends beyond adolescence and goes up to about age 25. From age 25 to 64, either the end result is a toss-up, or the drugs are very slightly protective. Above age 65, the drugs seem to significantly reduce the risk of suicidality.

One comment in the paper by Stone et al. highlights the plight of practicing physicians, poor blokes, who have had to rely on published medical journal articles to figure out when and whether to prescribe these drugs: "Some of the trials included in the analysis were the basis of articles published in peer reviewed journals but the question of suicidality was not considered in any detail by the authors or reviewers." Hmm--the raw data of the studies, when sifted through carefully by FDA gumshoes, provided evidence of some significant risks of suicidality; but you'd never get a clue that this was so by reading what was published in a peer-reviewed medical journal. So has the medical scientific literature been effectively captured and held hostage by the pharmaceutical industry? You decide.

So let's step back and take in the big picture. Does this mean that these drugs are the tool of the devil? Hardly--even in the worst-case scenario, only a handful of patients have serious adverse reactions. Does this mean that these extremely useful, even life-saving drugs have been unfairly maligned by their anti-Pharma critics ("pharmascolds")? Not that either. If the best these drugs can show for suicidal behavior in the huge mass of non-elderly patients, age 25-64, is close to a toss-up, then it hardly seems to be the case that these drugs are the greatest thing since sliced bread.

The real take-home message seems to be that we began consuming mass quantities of these drugs (as the old "Coneheads" skit on "Saturday Night Live" would have put it) now nearly two decades ago. The true scientific picture of who should be given these drugs, who should not be given them, and what will happen to each group if they take them, seems to be emerging only in the last couple of years. And it is emerging not at the instigation of, but over the dead body of the pharmaceutical industry research enterprise. This is hardly a vote of confidence for an industry that enjoys portraying its mission as the discovery of new, safer and more effective drugs, and its relationship with the medical profession as "educational."
