The Wall Street Journal: May 22, 2007
The big news yesterday that the diabetes drug Avandia may pose cardiac risks was based on something called a meta-analysis. It’s a type of research that has some significant drawbacks, but also some unique advantages.
In a meta-analysis, researchers pool results from different studies. In this case, Cleveland Clinic cardiologist Steven Nissen and statistician Kathy Wolski analyzed 42 studies. Those studies were done by many different people, and as you might expect, there was wide variation between them. Sometimes Avandia was compared with a placebo and sometimes with alternate treatments. Adverse events (namely, the heart attacks shown to occur with higher frequency among Avandia users) may not have been identified consistently across the different trials. And if they weren’t, Dr. Nissen would have no way to know, because he was looking at study summaries and not patient-level data. The limitations of this “study of studies” filled a lengthy third paragraph in an accompanying New England Journal of Medicine editorial.
So why, then, use meta-analysis at all? Because for drug dangers that are rare enough, even studies of thousands of patients might not suffice to separate a real risk from random statistical variation. Combining tens of thousands of patients who underwent the treatment separately, under different protocols and supervision, may be the only way to clear thresholds for statistical significance.
Whether a result is significant is determined by a statistic called the p-value, which depends on the magnitude of the effect, the consistency of that effect and the number of observations. Researchers can’t control the first two factors, which, in a drug trial, ought to be governed by biochemistry. They can, however, add more observations by wrapping together multiple studies, which can lower the p-value. A value below 0.05 (a commonly chosen threshold) means that, if the drug in fact had no effect, a result at least as extreme as the one observed would arise by chance less than 5% of the time.
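That definition can be made concrete with a small simulation. The numbers below are invented for illustration (two arms of 2,000 patients and a gap of 15 events versus 9 are assumptions, not figures from any actual Avandia trial): give both arms the same underlying event rate, so the drug truly does nothing, and count how often chance alone produces a gap at least as large as the one we pretend to have observed.

```python
# A toy sketch of what a p-value measures, using invented numbers.
# Both arms of 2,000 patients share the SAME true event rate (no real
# drug effect); suppose we had observed 15 events on the drug vs. 9 on
# placebo.  The simulated p-value is the fraction of chance-only worlds
# that produce a gap at least that large.
import random

random.seed(42)           # fixed seed so the sketch is reproducible
n_per_arm = 2000
true_rate = 0.006         # same rate in both arms: no real effect
observed_gap = 15 - 9

def simulate_arm():
    """Count events in one arm under the no-effect assumption."""
    return sum(random.random() < true_rate for _ in range(n_per_arm))

sims = 2000
hits = sum(simulate_arm() - simulate_arm() >= observed_gap
           for _ in range(sims))
p_value = hits / sims
print(f"simulated one-sided p-value: {p_value:.3f}")
```

With these made-up numbers the simulated p-value lands well above 0.05: a gap of six events out of roughly 24 is exactly the kind of difference that chance produces routinely.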
To see the power of that approach, consider one Avandia trial, known as Dream (Diabetes REduction Assessment with Ramipril and Rosiglitazone Medication), whose results were reported in The Lancet last year. This study alarmed Dr. Nissen because, as he wrote in a letter to The Lancet, patients taking Avandia had a 37% greater risk of adverse heart outcomes compared with a placebo, which he found “very disturbing.” And for every cardiac problem studied (such as angina, stroke and heart failure) the pattern was the same: The rate was higher among people taking Avandia. But because the events were so rare (for example, just 15 heart attacks in the Avandia group, compared with nine in the control group), the overall findings weren’t statistically significant. Indeed, Avandia maker GlaxoSmithKline asserted in its response yesterday to the NEJM study that the drug’s users “showed no increase in cardiovascular risk when compared to placebo” in the Dream trial.
Dr. Nissen’s meta-analysis, which included the Dream study, found a similar elevation of risk for heart attacks (43% higher among Avandia users) and, thanks to the addition of 41 other studies, managed to nudge p just below the 0.05 threshold, to 0.03.
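The mechanics of why pooling can succeed where individual trials fail can be sketched numerically. The event counts below are invented, not the real Avandia data, and simply adding raw counts across trials is a deliberately crude stand-in for a genuine meta-analysis, which weights studies rather than lumping patients together. Still, the arithmetic shows the effect: four hypothetical trials, each individually non-significant by Fisher’s exact test, cross the 0.05 threshold once combined.

```python
# A crude illustration of why pooling trials can reach significance when
# no single trial does.  Event counts are invented (NOT real Avandia
# data), and naive count-pooling is a simplification of how actual
# meta-analyses weight and combine studies.
from math import comb

def fisher_one_sided(a, b, n1, n2):
    """Exact one-sided p-value (Fisher's exact test) that arm 1, with a
    events out of n1 patients, has this many more events than arm 2
    (b events out of n2): the hypergeometric probability of seeing
    >= a events in arm 1 given a+b events total."""
    total = a + b
    hi = min(total, n1)
    num = sum(comb(total, k) * comb(n1 + n2 - total, n1 - k)
              for k in range(a, hi + 1))
    return num / comb(n1 + n2, n1)

# Four hypothetical trials, 2,000 patients per arm:
# (events on drug, events on placebo)
trials = [(15, 9), (12, 7), (14, 8), (13, 6)]

for drug, placebo in trials:
    p = fisher_one_sided(drug, placebo, 2000, 2000)
    print(f"{drug:>2} vs {placebo}: p = {p:.3f}")   # each above 0.05

# Naive pooling: add up events and patients across all four trials.
drug_total = sum(d for d, _ in trials)       # 54
placebo_total = sum(pl for _, pl in trials)  # 30
pooled_p = fisher_one_sided(drug_total, placebo_total, 8000, 8000)
print(f"pooled {drug_total} vs {placebo_total}: p = {pooled_p:.4f}")  # below 0.05
```

Each trial shows the same direction of effect, but with so few events none is individually distinguishable from chance; stack them up and the consistent excess of events on the drug becomes statistically significant.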
I asked Dr. Nissen why he did a meta-analysis. He replied, “If you have a question you want to ask, and no single clinical trial is large enough to answer the question, then you have no answer at all. But if you can carefully combine the results of several trials, then you can answer the question you otherwise cannot. And that was exactly the situation we faced with Avandia.” About the technique of meta-analysis, he added, “It’s not as statistically powerful as a single large trial, and should never be a substitute. But in the absence of a single large trial, it can be quite helpful.”
Another advantage of a meta-analysis is that it helps avoid giving too much weight to any one reading. Study something 20 times, and you’d expect at least one of the experiments to yield a p value below 0.05 purely by chance. (I discussed this in a column about Pfizer’s decision to halt a drug trial last December.) But combine them, and a p value below 0.05 carries more heft.
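The arithmetic behind that expectation is worth spelling out. If a drug truly does nothing, each trial still has a 5% chance of a false positive at the 0.05 threshold; run 20 independent trials and the chance of at least one such fluke is close to two in three.

```python
# With 20 independent trials of a drug that truly does nothing, each
# has a 5% chance of crossing p < 0.05 by luck alone.  The chance that
# at least one of them does is 1 minus the chance that none do:
p_any_false_positive = 1 - 0.95 ** 20
print(f"{p_any_false_positive:.2f}")   # 0.64
```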
It’s also worth noting that the Nissen study avoided a potential pitfall of meta-analysis: When studies incorporate only published results, they might miss out on meaningful experiments that were filed away for various reasons (the “file drawer effect,” as discussed in this 2001 USA Today article). Excluding such “grey literature” may compromise and bias meta-analyses, according to this 2000 article in The Lancet. But in this case, Dr. Nissen consulted Glaxo’s drug-trial registry, meaning he considered all studies, published and unpublished, for inclusion.
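The file-drawer effect itself can be sketched with a simulation (an invented setup, not the Avandia data): generate many studies of a drug whose true effect is zero, let a hypothetical journal “publish” only the impressive-looking results, and compare the average published effect with the true average.

```python
# A toy sketch of the file-drawer effect, with invented numbers.
# 200 studies test a drug whose true effect is zero, so each study's
# estimated effect (expressed as a z-score) is pure noise around 0.
# A journal "publishes" only results with z > 1.645 (one-sided p < 0.05);
# the rest go into the file drawer.
import random

random.seed(0)   # fixed seed so the sketch is reproducible
all_studies = [random.gauss(0, 1) for _ in range(200)]
published = [z for z in all_studies if z > 1.645]

mean_all = sum(all_studies) / len(all_studies)
mean_published = sum(published) / len(published)
print(f"mean effect, all studies:    {mean_all:+.2f}")        # near zero
print(f"mean effect, published only: {mean_published:+.2f}")  # clearly positive
```

A meta-analysis fed only the published subset would conclude the worthless drug works, which is why searching a full trial registry, as Dr. Nissen did, matters.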