One of the most widely read college textbooks in the 1960s and ’70s was How to Lie with Statistics by Darrell Huff. Despite the humorous title, the serious intent of the book (written by a journalist, not a statistician) was to illustrate how common errors in the use of statistics frequently lead to misleading conclusions.
Though popular, it may not have done the job it was intended to do. More than three decades later, author John Allen Paulos revisited this subject in his book Innumeracy: Mathematical Illiteracy and Its Consequences. Unfortunately, problems with the proper use of statistics still appear to be a serious and widespread concern. A recent paper by statistician Valen Johnson of Texas A&M University in College Station suggested that one out of every four published scientific studies draws false conclusions because it employs weak statistical standards.
The idea that many researchers simply aren't running and analyzing their experiments properly appeared in yet another article, this one focused on understanding why mouse models often fail to be useful predictors of drug responses in human diseases. A survey of 76 influential animal studies "found that half used five or fewer animals per group," and many failed to properly randomize mice into control and treated groups. In a similar vein, a recent study compared the data obtained from two research groups who were testing cancer cell lines for their susceptibility to anti-cancer drugs. While some of the drugs gave similar results in both studies, the majority did not.
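To see why five animals per group is a problem, consider statistical power: the chance that an experiment detects a real effect at all. The short simulation below (an illustrative sketch, not from any of the studies cited; the one-standard-deviation effect size and significance threshold are assumptions for the sake of the example) estimates how often a standard two-sample t-test finds a genuine, fairly large treatment effect with five mice per group versus twenty.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_per_group, effect_size=1.0, alpha=0.05, n_sims=5000):
    """Fraction of simulated experiments in which a two-sample t-test
    detects a true treatment effect of `effect_size` standard deviations."""
    hits = 0
    for _ in range(n_sims):
        # Control and treated groups differ by a real effect of 1 SD
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(control, treated)
        if p < alpha:
            hits += 1
    return hits / n_sims

print(f"n=5  per group: power ~ {estimated_power(5):.2f}")
print(f"n=20 per group: power ~ {estimated_power(20):.2f}")
```

Even with a sizable true effect, five animals per group detects it well under half the time, so a "negative" result says little, and the significant results that do get published will tend to overstate the effect.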
These findings dovetail with the widely reported observations by researchers at Amgen and Bayer Healthcare, who were unable to reproduce the data in most of the high-profile academic papers they tested. Their failure to replicate these experiments left them with a morass of untrustworthy data, and they decided not to move forward with plans to develop new medicines based on this information. This quagmire is territory where Big Pharma doesn't want to find itself, given its increasing reliance on academia for new drug candidates, along with the widespread downsizing of many internal research programs.
Recognizing that this failure to replicate experiments was a serious problem in science, a for-profit group known as the Science Exchange put forth a potential solution known as the Reproducibility Initiative. I've argued previously that the Reproducibility Initiative has its heart in the right place, but that it will fail for a number of reasons. These include a lack of grant funding to pay for repeating the experiments, as well as a number of other scientific and cultural issues. There is good news to report: a philanthropic organization contributed $1.3M to have 50 high-profile cancer biology articles put through the validation wringer; results are expected by the end of 2014. The average cost of $26,000 per article to repeat certain key experiments from these papers is quite high; where funding would come from …