One Reason So Many Scientific Studies May Be Wrong

November 25, 2019

There is a replicability crisis in science – unidentified “false positives” are pervading even our top research journals.

Statistics: if you torture the data enough, they will confess. clemsonunivlibrary/ Flickr, CC BY-NC

A false positive is a claim that an effect exists when in actuality it doesn’t. No one knows what proportion of published papers contain such incorrect or overstated results, but there are signs that the proportion is not small.

The epidemiologist John Ioannidis gave the best explanation for this phenomenon in a famous paper in 2005, provocatively titled “Why most published research findings are false”. One of the reasons Ioannidis gave for so many false results has come to be called “p hacking”, which arises from the pressure researchers feel to achieve statistical significance.

What is statistical significance?

To draw conclusions from data, researchers usually rely on significance testing. In simple terms, this means calculating the “p value”, which is the probability of obtaining results at least as extreme as ours if there really is no effect. If the p value is sufficiently small, the result is declared to be statistically significant.

Traditionally, a p value of less than .05 is the criterion for significance. If you report a p<.05, readers are likely to believe you have found a real effect. Perhaps, however, there is actually no effect and you have reported a false positive.
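As a concrete illustration (mine, not the article's), here is a minimal sketch of such a test using only Python's standard library. It uses a one-sample z test as a large-sample approximation of the usual t test; the sample values and the null-hypothesis mean of 5.0 are invented for the example.

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

def z_test_p_value(sample, null_mean):
    """Two-sided p value for a one-sample z test (large-sample approximation)."""
    n = len(sample)
    z = (mean(sample) - null_mean) / (stdev(sample) / sqrt(n))
    # Probability of a result at least this extreme if the null is true
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical measurements; the null hypothesis says the true mean is 5.0
sample = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0, 5.3, 5.9, 6.1, 5.6]
p = z_test_p_value(sample, null_mean=5.0)
print(p, "significant" if p < 0.05 else "not significant")
```

The final comparison against .05 is exactly the traditional criterion described above: one number, one cutoff, and a binary verdict.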

Many journals will only publish studies that can report one or more statistically significant effects. Graduate students quickly learn that achieving the mythical p<.05 is the key to progress, obtaining a PhD and the ultimate goal of achieving publication in a good journal.

This pressure to achieve p<.05 leads to researchers cutting corners, knowingly or unknowingly, for example by p hacking.

The lure of p hacking

To illustrate p hacking, here is a hypothetical example.

Bruce has recently completed a PhD and has landed a prestigious grant to join one of the top research teams in his field. His first experiment doesn’t work out well, but Bruce quickly refines the procedures and runs a second study. This looks more promising, but still doesn’t give a p value of less than .05.

Convinced that he is onto something, Bruce gathers more data. He decides to drop a few of the results that look clearly out of line.

He then notices that one of his measures gives a clearer picture, so he focuses on that. A few more tweaks and Bruce finally identifies a slightly surprising but really interesting effect that achieves p<.05. He carefully writes up his study and submits it to a good journal, which accepts his report for publication.

Bruce tried so hard to find the effect that he knew was lurking somewhere. He was also feeling the pressure to hit p<.05 so he could declare statistical significance, publish his finding and taste sweet success.

There is only one catch: there was actually no effect. Despite the statistically significant result, Bruce has published a false positive.

Bruce felt he was using his scientific insight to reveal the lurking effect as he took various steps after starting his study:

  • He collected further data.
  • He dropped some data that seemed aberrant.
  • He dropped some of his measures and focused on the most promising.
  • He analysed the data a little differently and made a few further tweaks.

The trouble is that all these choices were made after seeing the data. Bruce may, unconsciously, have been cherry-picking – selecting and tweaking until he obtained the elusive p<.05. Even when there is no effect, such selecting and tweaking might easily find something in the data for which p<.05.

Statisticians have a saying: if you torture the data enough, they will confess. Choices and tweaks made after seeing the data are questionable research practices. Using these, deliberately or not, to achieve the right statistical result is p hacking, which is one important reason that published, statistically significant results may be false positives.
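The inflation caused by this kind of cherry-picking is easy to demonstrate by simulation. The sketch below is my illustration, not the author's code, and all parameters are invented: it generates experiments in which every measure is pure noise, then, like Bruce, reports only the most promising measure.

```python
import random
from statistics import NormalDist, mean, stdev
from math import sqrt

def p_value(sample):
    """Two-sided z-test p value against a null mean of 0."""
    z = mean(sample) / (stdev(sample) / sqrt(len(sample)))
    return 2 * (1 - NormalDist().cdf(abs(z)))

def false_positive_rate(n_experiments=2000, n_measures=5, n=30, seed=1):
    """Fraction of no-effect experiments that reach p < .05 on at least
    one of several measures -- a simple model of cherry-picking."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_experiments):
        # Every measure is pure noise: there is no real effect anywhere.
        ps = [p_value([rng.gauss(0, 1) for _ in range(n)])
              for _ in range(n_measures)]
        if min(ps) < 0.05:   # report only the "best" measure
            hits += 1
    return hits / n_experiments

print(false_positive_rate(n_measures=1))  # honest single test: near the nominal 5%
print(false_positive_rate(n_measures=5))  # cherry-picked: well above 5%
```

With five independent noise-only measures, the chance that at least one crosses p<.05 is roughly 1 − 0.95⁵ ≈ 23%, even though the honest per-test rate stays near 5%. That gap is p hacking in miniature.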

What proportion of published results are wrong?

This is a good question, and a fiendishly tricky one. No one knows the answer, which is likely to be different in different research fields.

A large and impressive effort to answer the question for social and cognitive psychology was published in 2015. Led by Brian Nosek and his colleagues at the Center for Open Science, the Reproducibility Project: Psychology (RP:P) had 100 research groups around the world each carry out a careful replication of one of 100 published results. Overall, roughly 40 replicated fairly well, whereas in around 60 cases the replication studies obtained smaller or much smaller effects.

The 100 RP:P replication studies reported effects that were, on average, just half the size of the effects reported by the original studies. The carefully conducted replications are probably giving more accurate estimates than the possibly p hacked original studies, so we could conclude that the original studies overestimated true effects by, on average, a factor of two. That’s alarming!

How to avoid p hacking

The best way to avoid p hacking is to avoid making any selection or tweaks after seeing the data. In other words, avoid questionable research practices. In most cases, the best way to do this is to use preregistration.

Preregistration requires that you prepare in advance a detailed research plan, including the statistical analysis to be applied to the data. Then you preregister the plan, with date stamp, at the Open Science Framework or some other online registry.

Then carry out the study, analyse the data in accordance with the plan, and report the results, whatever they are. Readers can check the preregistered plan and thus be confident that the analysis was specified in advance, and not p hacked. Preregistration is a challenging new idea for many researchers, but likely to be the way of the future.

Estimation rather than p values

The temptation to p hack is one of the big disadvantages of relying on p values. Another is that the p<.05 criterion encourages black-and-white thinking: an effect is either statistically significant or it isn’t, which sounds rather like saying an effect exists or it doesn’t.

But the world is not black and white. To recognise the numerous shades of grey it’s much better to use estimation rather than p values. The aim with estimation is to estimate the size of an effect – which may be small or large, zero, or even negative. In terms of estimation, a false positive result is an estimate that’s larger or much larger than the true value of an effect.

Let’s take a hypothetical study on the impact of therapy. The study might, for example, estimate that therapy gives, on average, a 7-point decrease in anxiety. Suppose we calculate from our data a confidence interval – a range of uncertainty either side of our best estimate – of [4, 10]. This tells us that our estimate of 7 is most likely within about 3 points of the true effect – the true average benefit of the therapy – on the anxiety scale.

In other words, the confidence interval indicates how precise our estimate is. Knowing such an estimate and its confidence interval is much more informative than any p value.
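A calculation along these lines can be sketched with Python's standard library. The anxiety-decrease data below are invented for illustration, and the interval uses a normal approximation (a t-based interval would be slightly wider for small samples):

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

def confidence_interval(sample, level=0.95):
    """Normal-approximation confidence interval for the mean."""
    z = NormalDist().inv_cdf((1 + level) / 2)   # about 1.96 for a 95% interval
    se = stdev(sample) / sqrt(len(sample))      # standard error of the mean
    m = mean(sample)
    return m - z * se, m + z * se

# Hypothetical anxiety decreases (points) for 20 therapy clients
drops = [7, 4, 10, 6, 8, 9, 5, 7, 11, 6,
         8, 7, 4, 9, 6, 10, 7, 5, 8, 7]
lo, hi = confidence_interval(drops)
print(f"mean decrease = {mean(drops):.1f}, 95% CI = [{lo:.1f}, {hi:.1f}]")
```

The output reports an effect size with its range of uncertainty, rather than a single yes/no significance verdict.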

I refer to estimation as one of the “new statistics”. The techniques themselves are not new, but using them as the main way to draw conclusions from data would for many researchers be new, and a big step forward. It would also help avoid the distortions caused by p hacking.

 

Geoff Cumming, Emeritus Professor, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

