Pharma Must Stop Its Obsession with P-Values

Statistical "significance" is one of the most abused concepts in the pharma/medical industry. The Sola (solanezumab) nonsense is a great case study.

https://eighteenthelephant.wordpress.com/2016/04/29/how-do-i-hate-p-values-let-me-count-the-ways/

Yes, statistics are awful. Let's just go with our gut on things. We don't actually need to try and prove that our results aren't a product of chance - as long as I think our drug works, we're good.

You just revolutionized the entire clinical trial process. We'll call it the Trump Method of science - "If I think it, it must be true."
 
You completely mischaracterize the argument. My intent is not to trash statistics but to point out that it is abused, p-values in particular. Perhaps, if you had actually studied the subject, you might understand. Bottom line: the Lilly Sola team allowed itself to be deceived by a p-value that was randomly "significant". A classic case of p-value hacking.

How many reps reading this have gone around spouting about some subset analysis with "statistical significance"? In Oncology, that was how we operated...well, that was years ago anyway.
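
For anyone who doubts how easily a subset analysis turns up "significance", here is a minimal sketch in Python. Everything is simulated noise; the subgroup counts and sample sizes are made-up assumptions, and nothing here comes from any real trial.

```python
# Simulate many "trials" of pure noise, scan 20 subgroups in each, and
# count how often at least one subgroup looks significant at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_subgroups, n_per_arm = 1000, 20, 50

hits = 0
for _ in range(n_trials):
    for _ in range(n_subgroups):
        drug = rng.normal(0.0, 1.0, n_per_arm)     # no true effect at all
        placebo = rng.normal(0.0, 1.0, n_per_arm)  # same distribution
        _, p = stats.ttest_ind(drug, placebo)
        if p < 0.05:
            hits += 1
            break

print(f"trials with at least one 'significant' subgroup: {hits / n_trials:.0%}")
# Expect roughly 1 - 0.95**20, i.e. about 64%, even though no drug works.
```

In a real trial the subgroups are carved out of the same patients rather than drawn independently, but the point stands: slice the data enough ways and chance alone will hand you a small p-value.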

Your argument is that a significant p-value isn't significant at all. Like it or not, you're trying to trash the entire discipline of statistics. Good luck with that argument.

And thanks for pointing out something you don't like without offering any sort of alternative. I was wrong about you being like Trump. You're much more like Ross Perot.
 
As someone who makes a living in statistics, I'd say your argument has gone from the sublime to the ridiculous.

Geez, I wonder what the American Statistical Association has to say about it?

Oh, well here it is...
http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.1154108

Read it, if you can.

Alternatives? Don't hack p-values, and don't quote them as gospel. Sir Ronald Fisher, who introduced the concept, never intended it to be used as a binary decision criterion.
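
Since you asked for alternatives, here is one hedged sketch (toy simulated numbers, not anyone's trial data) of what "not binary" looks like in practice: report the estimate with its confidence interval, and if several subgroups were examined, adjust for the multiple looks.

```python
# Toy example: report the effect estimate and its 95% CI rather than a
# bare pass/fail p-value, then Holm-adjust a set of subgroup p-values.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
drug = rng.normal(0.3, 1.0, 100)     # assume a modest true effect of 0.3
placebo = rng.normal(0.0, 1.0, 100)  # no effect in the control arm

diff = drug.mean() - placebo.mean()
se = np.sqrt(drug.var(ddof=1) / len(drug) + placebo.var(ddof=1) / len(placebo))
dof = len(drug) + len(placebo) - 2
half_width = stats.t.ppf(0.975, dof) * se
_, p = stats.ttest_ind(drug, placebo)
print(f"diff = {diff:.2f}, 95% CI [{diff - half_width:.2f}, "
      f"{diff + half_width:.2f}], p = {p:.3f}")

# If several subgroups were examined, adjust before claiming anything.
# (These subgroup p-values are made up purely for illustration.)
subgroup_pvals = [0.04, 0.20, 0.01, 0.65, 0.03]
reject, adjusted, _, _ = multipletests(subgroup_pvals, alpha=0.05, method="holm")
print("Holm-adjusted p-values:", np.round(adjusted, 3), "reject:", reject)
```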
 
> Bottom line: the Lilly Sola team allowed itself to be deceived by a p-value that was randomly "significant". A classic case of p-value hacking.

The sad thing is that the Sola team didn't deceive themselves. They knew full well that selectively reporting "the best" subgroup makes honest statistical inference exceedingly difficult. It has never been about finding a drug.
 
> Geez, I wonder what the American Statistical Association has to say about it?

The ASA Board was also stimulated by highly visible discussions over the last few years. For example, ScienceNews (Siegfried 2010) wrote: "It's science's dirtiest secret: The 'scientific method' of testing hypotheses by statistical analysis stands on a flimsy foundation." A November 2013 article in Phys.org Science News Wire (2013) cited "numerous deep flaws" in null hypothesis significance testing. A ScienceNews article (Siegfried 2014) on February 7, 2014, said "statistical techniques for testing hypotheses ... have more flaws than Facebook's privacy policies."
 
Notice that a p-value is calculated from the data, which makes it a random variable itself. Therein lies the problem! Unlike what you have been told, very few things, other than height, follow a normal distribution. Go looking long enough in your data and you will find so-called statistical significance (try it in Excel and see). Also, when is a null hypothesis of exactly "no effect" ever literally true? Perhaps a Bayesian approach would yield better results? That is a subject of much academic debate.

https://onlinecourses.science.psu.edu/statprogram/node/138
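
You don't even need Excel. Here is a minimal Python sketch, assuming nothing but a true null, that shows the p-value bouncing around like the random variable it is.

```python
# Under a true null (two samples from the SAME distribution), the p-value
# is uniform on [0, 1]: any cutoff you pick gets crossed at exactly that rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pvals = np.array([
    stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue
    for _ in range(10_000)
])

print(f"fraction with p < 0.05: {(pvals < 0.05).mean():.3f}")  # close to 0.05
print(f"fraction with p < 0.50: {(pvals < 0.50).mean():.3f}")  # close to 0.50
```

About one look in twenty clears the 0.05 bar by construction, which is exactly why hunting through enough subsets eventually "works".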