What if I were to suggest that most published research findings are probably false? I wouldn't be the first to say so: a paper published back in 2005/1 made exactly this claim. You might find such a statement hard to swallow, but likely not after you start reading "Statistics Done Wrong: The Woefully Complete Guide" by Alex Reinhart. And why might you believe Alex? Because he offers a great deal of insight into the kinds of errors that even the most intelligent among us make when analyzing data. Even peer-reviewed scientific and medical research falls prey to faulty statistical analysis. Why? Because most of us don't really know how to do statistics.
Reinhart traces the problem back to training and to the pressure placed on analysts to produce exaggerated results. He explains how poorly prepared scientists and medical professionals are to understand any kind of statistical analysis, then walks the reader through a series of topics that explain the kinds of faulty statistical thinking that cloud our judgment and why they happen.
The problems range from bad choices in experimental design to errors in determining statistical significance. Unless you choose the right type of analysis and the right data sample, avoid false positives, and understand the tests you apply, you're unlikely to arrive at conclusions that stand up to rigorous review. What's worse, many of your readers, and even your peers, won't be likely to notice.
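To see why false positives are so hard to avoid, consider what an alpha = 0.05 significance threshold actually promises. The following is a minimal sketch (not from the book) that simulates many "experiments" in which the null hypothesis is true, using a simplified two-sided z-test with known variance; even with nothing to find, roughly one experiment in twenty comes back "significant."

```python
# A minimal illustration of the false-positive rate: run many experiments
# where both groups come from the SAME distribution, test for a difference
# in means at alpha = 0.05, and count how often the test cries "significant".
import math
import random

random.seed(42)

def z_test_p_value(a, b, sigma=1.0):
    """Two-sided z-test for equal means with known sigma (a simplification)."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / (sigma * math.sqrt(2 / n))
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

experiments = 1000
false_positives = sum(
    z_test_p_value([random.gauss(0, 1) for _ in range(30)],
                   [random.gauss(0, 1) for _ in range(30)]) < 0.05
    for _ in range(experiments)
)
rate = false_positives / experiments
print(f"False positive rate: {rate:.3f}")  # should hover near 0.05
```

The point is not that 5% is large, but that a lab running many comparisons, or a field running many studies, accumulates these spurious "discoveries" by design, which is exactly the trap the book's chapters on p values and double-dipping describe.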
By the time you reach chapter 12, you might be ready to conclude that the problem is simply too big and too complex to address. And does it matter? Absolutely. The errors can be critical when the results are meant to attest to the safety or efficacy of new drugs or to practices related to public safety. Decisions that affect your safety and well-being could hinge on conclusions based on faulty thinking. That's big.
Chapter 12 offers suggestions on how we can get ourselves out of the predicament that faulty statistical analysis has put us in. Reinhart suggests:
- more exhaustive statistics training
- roping in statisticians to assist in analysis
- questioning methods and conclusions in papers that you read or review
- challenging results that don't measure up to proper statistical rigor
Chapter 1: An Introduction to Statistical Significance
Chapter 2: Statistical Power and Underpowered Statistics
Chapter 3: Pseudoreplication: Choose Your Data Wisely
Chapter 4: The p Value and the Base Rate Fallacy
Chapter 5: Bad Judges of Significance
Chapter 6: Double-Dipping in the Data
Chapter 7: Continuity Errors
Chapter 8: Model Abuse
Chapter 9: Researcher Freedom: Good Vibrations?
Chapter 10: Everybody Makes Mistakes
Chapter 11: Hiding the Data
Chapter 12: What Can Be Done?
Notes
Index
Who should read this book?
Anyone involved in analyzing data should consider reading this book, whether research is your main focus or you're part of a team. In addition, anyone who wants to understand or responsibly question scientific results, or to round out their statistical education with material you're not likely to find in a traditional statistics course, should grab a copy. I'd hope that would include doctors and policy makers, but also anyone who is deeply and personally affected by research findings.
It's sad that so few of us understand statistics /2, and worrisome that so many decisions are based on faulty research. Anyone who wants a chance at understanding research findings will find this book an invaluable guide to getting it right.
This story, "Statistics Done Wrong: The Woefully Complete Guide by Alex Reinhart" was originally published by ITworld.