statistics to better oversight of clinical trials, which are on the table to combat the placebo effect. Will any of them work?
The only thing for sure right now is that there is a market for them, part of the hundreds of millions of dollars drug companies spend annually to try to make data more reliable in neurology trials, says Adam Butler. He’s a consultant who advises on the development of digital tools for clinical trials and has worked on studies that used SPCD. “It’s a cottage industry,” he says. “It’s a thing.”
A Design to Beat Placebo?
If all had gone well with Alkermes’ depression drug, Maurizio Fava might be taking a victory lap right about now. In the early 2000s, Fava and biostatistician David Schoenfeld invented SPCD to do two things: One, remove placebo responders from a trial and provide a clearer picture of a drug’s benefit. And two, reduce trial size and costs.
Here’s how: In the first stage of SPCD, instead of a normal 50/50 split between those receiving drug and placebo, far more participants get placebo than drug. The people who respond to placebo are washed out of the study. The non-responders advance to a second stage and are re-randomized to either drug or placebo. Results from the two stages are pooled into one analysis. In other words, some trial participants contribute twice to the data set, and that double counting is how SPCD saves money.
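The two-stage scheme can be sketched as a toy simulation. Everything numeric here is an illustrative assumption, not a figure from the trials discussed: the 2:1 placebo-heavy allocation, the response rates, the guess that washed-out non-responders respond less in stage two, and the weight used to pool the two stages.

```python
import random

def run_spcd(n=300, placebo_frac=2/3, placebo_resp_rate=0.35,
             drug_resp_rate=0.50, weight=0.6, seed=1):
    """Toy sketch of a Sequential Parallel Comparison Design (SPCD).

    Stage 1: most participants receive placebo; placebo responders
    are washed out. Stage 2: placebo non-responders are re-randomized
    1:1 to drug or placebo. The final estimate pools both stages, so
    the non-responders contribute to the data set twice.
    """
    rng = random.Random(seed)
    n_placebo1 = int(n * placebo_frac)
    n_drug1 = n - n_placebo1

    # Stage 1 outcomes (True = responder).
    placebo1 = [rng.random() < placebo_resp_rate for _ in range(n_placebo1)]
    drug1 = [rng.random() < drug_resp_rate for _ in range(n_drug1)]
    effect1 = sum(drug1) / n_drug1 - sum(placebo1) / n_placebo1

    # Stage 2: only placebo non-responders advance, split 1:1.
    nonresponders = [p for p in placebo1 if not p]
    half = len(nonresponders) // 2
    # Assumption: a washed-out group shows a reduced placebo response.
    placebo2 = [rng.random() < placebo_resp_rate * 0.5 for _ in range(half)]
    drug2 = [rng.random() < drug_resp_rate
             for _ in range(len(nonresponders) - half)]
    effect2 = sum(drug2) / len(drug2) - sum(placebo2) / len(placebo2)

    # Pooled analysis: a weighted average of the two stage effects.
    return weight * effect1 + (1 - weight) * effect2
```

The pooling weight is the kind of choice the FDA statisticians flagged as unresolved; in a real SPCD analysis it must be pre-specified, and the estimate it produces depends on it.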
MGH owns the patents to SPCD, but the licensing rights are held by PPD, a global contract research organization. (Fava says less than a quarter of every SPCD license fee is split between him and Schoenfeld.)
There are many skeptics, including one of Fava’s longtime peers at MGH. “This might be one of those brilliant ideas that didn’t pan out,” says Gary Sachs, an MGH psychiatrist who specializes in bipolar disorder and who has known Fava since they were residents together and calls him a terrific researcher. “It’s kind of elegant, but it hasn’t been a magic bullet.”
By Fava’s count, SPCD has been used in at least 20 completed trials, with at least 15 more ongoing, but the Alkermes program was the first time the FDA had to review SPCD as part of a drug approval application. Fava argues that SPCD acquitted itself, with two of the three SPCD-designed trials “clearly positive.”
The FDA saw it differently. In documents prepared for the November 2018 advisory committee, agency staff poked holes in Alkermes’ arguments for each trial and warned, more generally, that “there are a number of unresolved statistical questions regarding the most appropriate method for analyzing the results of an SPCD study.” SPCD wasn’t the only source of the FDA’s statistical concerns, but in the staff’s view, SPCD wasn’t ready for prime time.
(Alkermes declined to answer questions for this story.)
It’s not that the agency was unfamiliar with SPCD. FDA statisticians published a paper in 2011 that studied SPCD, then relatively new, alongside other trial designs. (Making SPCD work in large drug trials would not be an easy task, they wrote.) One of these statisticians joined Fava and others at a “psychiatry academy” that MGH convened near FDA headquarters in 2016.
But the newness of SPCD, added to the worry over opioids, was too much. Fava wasn’t surprised agency reviewers raised so many questions. Despite their familiarity with the concept, he says, “they don’t have a lot of [SPCD] experience in trials.”
He is confident that the design will come up again for FDA review. In his view, it makes too much sense: it’s cost-effective, boosting effective trial size by 50 percent without actually enrolling more patients, and it reduces placebo response by up to 70 percent.
Another Phase 3 program using SPCD, for a treatment of agitation in Alzheimer’s patients, recently reported results. One study in the program used SPCD; it hit its goal with statistical significance on one of two doses. A second study, which did not use SPCD, failed altogether. The drug’s owner, Avanir Pharmaceuticals of Aliso Viejo, CA, has not indicated if it will ask the FDA for approval.
Let the Right Ones In
SPCD aims to kick placebo responders out of studies. Why not stop them from entering in the first place? Fava is in favor of this, too; in fact, he’s a co-author of a 2017 paper titled “Guarding the Gate” that described a more rigorous form of patient recruitment interview, dubbed SAFER, conducted over the phone by independent staff across nine depression studies. (A phone call is considered less prone to unconscious bias than a face-to-face interview.)
More than 15 percent of patients deemed eligible by more basic methods were …