Why You Can’t Always Trust Medical Research Results

If you open the “Health” section of any online or print publication, you’re bound to find plenty of stories saying “New study suggests X causes Y,” or something similar. The hook is usually an unusual correlation that grabs your attention.

The articles usually give the impression that there’s a linear relationship between the variables being studied, and that if you just ate more of some food, or did more of some behavior, you’d see the same results.

If only it were that easy, right?

The problem is that new studies showing bizarre correlations are typically done with small sample sizes, and as more studies are done with larger samples, the observed differences shrink until the original effect turns out to be nothing but noise. That march toward randomness, however, never makes the headlines.

Alex Tabarrok gave a great example today at Marginal Revolution. In 1992 an article published in Science found that a variant of the gene for an enzyme called ACE significantly modified someone’s chances of having a heart attack. Woohoo! And it was published in Science, the journal of all journals.

It must be fact then, right?

As it turns out, not quite.

The study was done on 500 people, but as sample sizes grew larger the effect began to shrink, until eventually there was no effect at all.

This phenomenon is not unique. Look at the graph below. Genetic variants of 8 different enzymes that had a statistically significant effect on disease in small samples showed no effect when the studies were repeated with larger ones.

[Figure: funnel plot of estimated genetic variant effect sizes versus sample size]
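
To see why the funnel narrows like that, here’s a quick simulation sketch of my own (made-up data, not the numbers from the studies above): it runs many small studies of a variable that truly has no effect, and with small samples some of them show impressively large differences just by chance. The apparent effects all but vanish once the samples get big.

```python
# Sketch: why small studies of a truly null effect can show big "effects".
# Purely illustrative; the sample sizes and study counts are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def biggest_apparent_effect(n_per_group, n_studies=200):
    """Largest mean difference seen across many studies of a zero effect."""
    effects = []
    for _ in range(n_studies):
        treated = rng.normal(size=n_per_group)  # true effect is zero
        control = rng.normal(size=n_per_group)
        effects.append(abs(treated.mean() - control.mean()))
    return max(effects)

for n in (25, 250, 2500):
    print(f"n per group = {n:>4}: largest apparent effect = {biggest_apparent_effect(n):.2f}")
```

With 25 people per group, the luckiest of 200 null studies looks like a sizable effect; with 2,500 per group, even the luckiest one hugs zero.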

Now think about all the articles you’ve read stating “New study suggests pistachios might cure cancer,” or something in that vein. Huh.

This effect is even more striking when you consider that most research finds no significant results at all… but null results don’t get published as often. No news is boring.

One report even suggested that the majority of published medical research findings are false. When sample sizes are small and true effects are rare, simple Bayesian reasoning predicts that the majority of “findings” are actually false positives.
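
To make that Bayesian point concrete, here’s a back-of-the-envelope sketch (the prior, power, and significance threshold are numbers I’ve assumed for illustration, not figures from the report):

```python
# Sketch: what fraction of "significant" findings are real, under assumed numbers.
prior = 0.10   # fraction of tested hypotheses that are actually true (assumption)
power = 0.80   # chance a real effect reaches significance (assumption)
alpha = 0.05   # chance a null effect reaches significance anyway

true_positives = prior * power           # real effects flagged as significant
false_positives = (1 - prior) * alpha    # null effects flagged as significant
ppv = true_positives / (true_positives + false_positives)

print(f"Share of 'significant' findings that are real: {ppv:.0%}")
# ~64% with these numbers; drop power to 0.20 (tiny samples) and it falls
# to ~31%, i.e. most "findings" would be false positives.
```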

So the next time you read something about the latest trend, or finding that you can cure a disease with some food, take it with a grain of salt. Researchers are probably just thinking too small.

12 thoughts on “Why You Can’t Always Trust Medical Research Results”

  1. Interesting post, and very useful. I agree that reading the health news can be confusing if you take it all at face value. The concept of statistical significance has varying levels of usefulness, and a p-value is not the be-all and end-all of research validity. In fact, in epidemiological research there’s a growing consensus that p-values should be avoided entirely.

  2. Al, I remember thinking in college that p-values were a way of window-dressing otherwise flimsy research. In general, I also think it casts doubt on the quality of a lot of the media you read with regard to diet, as they usually fling X-causes-Y stories all over the place without a scintilla of awareness of the limitations of the research they’re talking about.

  3. Example: Vioxx, which really worked well for me. The problem with human testing of new meds is that the group is too small. Its harmfulness isn’t discovered until the med is released, thousands of people use it, and disastrous side effects force the manufacturer to take it off the market and start paying damages to those who were harmed. Another example: coffee used to be a big bad thing. Now it turns out it’s a powerful antioxidant. Now they’re attacking salt. And I still say plastic is the probable culprit in so many odd diseases and syndromes that have appeared in the last 20 years, like fibromyalgia, autism, etc. Just sayin’

  4. This is so interesting to me and something I’ve independently thought about before, but I haven’t seen anyone really write about it on a food/health blog. So my big question is: how do we (or I specifically) know what’s best for my body and what’s good? I like to think the milk thistle I’m taking is good for my liver, but what if the study I read about it really just shows correlation and not causation? Should I just go by trial and error with myself, and ignore all the studies out there?

    1. Tracy, when I read research on something, I usually go on the assumption that one study all by itself doesn’t say much of anything. Ideally you’d want a number of studies across the three types of research: molecular/biochemical studies, which pin down the mechanisms by which certain compounds behave in the body; clinical studies, which test results in human patients; and epidemiological studies, which look at population-wide effects of certain variables.

      Any one of these study types taken by itself has pretty serious issues. When you can combine all three of them (or get pretty close to it), then you’ve got something.

      This is why I don’t like news articles that begin with “New Study Shows That….”
