Worried About That New Medical Study? Read This First.


aum


There’s more than meets the eye — here are some tips to help avoid confusion.

 


 

In August 2019, JAMA Pediatrics, a widely respected journal, published a study with a contentious result: Pregnant women in Canada who were exposed to higher levels of fluoride (such as from drinking water) were more likely to have children with lower I.Q.s. Some media outlets ran overblown headlines claiming that fluoride exposure actually lowers I.Q. And while academics and journalists quickly pointed out the study’s many flaws — that it didn’t prove cause and effect, and that it showed a drop in I.Q. only in boys, not girls — the damage was done. People took to social media, voicing their concerns about the potential harms of fluoride exposure.

 

We place immense trust in scientific studies, as well as in the journalists who report on them. But deciding whether a study warrants changing the way we live our lives is challenging. Is that extra hour of screen time really devastating? Does feeding processed meat to children increase their risk of cancer?

 

As a physician and a medical journalist with training in biostatistics and epidemiology, I sought advice from several experts about how parents can gauge the quality of research studies they read about. Here are eight tips to remember the next time you see a story about a scientific study.

1. Wet pavement doesn’t cause rain.

Put another way, correlation does not equal causation. This is one of the most common traps that health journalists fall into with studies that have found associations between two things — like that people who drink coffee live longer lives — but which haven’t definitively shown that one thing (coffee drinking) causes another (a longer life). These types of studies are typically referred to as observational studies.

 

When designing and analyzing studies, experts must have satisfactory answers to several questions before determining cause and effect, said Elizabeth Platz, Sc.D., a professor and deputy chair of the department of epidemiology at the Johns Hopkins Bloomberg School of Public Health. In smoking and lung cancer studies, for example, researchers needed to show that the chemicals in cigarettes affected lung tissue in ways that resulted in lung cancer, and that those changes came after the exposure. They also needed to show that those results were reproducible. For many questions, cause and effect still hasn’t been established after years, or even decades, of study.

2. Mice aren’t men.

Large human clinical studies are expensive, cumbersome and potentially dangerous to participants. This is why researchers often turn first to mice or other animals whose biology resembles ours in useful ways (like flies, worms, rats, dogs and monkeys).

 

If you spot a headline that seems way overblown, like that aspirin thwarts bowel cancer in mice, the finding is potentially notable, but it could take years or even decades to confirm the same result in humans, if it’s ever confirmed at all.

3. Study quality matters.

When it comes to study design, not all are created equal. In medicine, randomized clinical trials and systematic reviews are kings. In a randomized clinical trial, researchers typically split people into at least two groups: one that receives the intervention being tested, like a new drug or daily exercise, and another that receives either the current standard of care (like a statin for high cholesterol) or a placebo. To decrease bias, ideally neither the participants nor the researchers know which group each participant is in.

 

Systematic reviews are similarly useful, in that researchers gather anywhere from five to more than 100 randomized controlled trials on a given subject and comb through them, looking for patterns and consistency among their conclusions. These types of studies are important because they help to show potential consensus in a given body of evidence.

 

Other types of studies, which aren’t as rigorous as the above, include: cohort studies (which follow large groups of people over time to look for the development of disease), case-control studies (which first identify the disease, like cancer, and then trace back in time to figure out what might have caused it) and cross-sectional studies (which are usually surveys that try to identify how a disease and exposure might have been correlated with each other, but not which caused the other).

Lowest on the quality spectrum are case reports (which describe what happened to a single patient) and case series (collections of case reports). They are the weakest forms of evidence, but they often inspire higher-quality studies.

4. Statistics can be misinterpreted.

Statistical significance is one of the concepts that most often confuse lay readers. When a study or a news story says that a finding was “statistically significant,” it means that the result was unlikely to have happened by chance.

 

But a result that is statistically significant may not be clinically significant, meaning it likely won’t change your day-to-day life. Imagine a randomized controlled trial that split 200 women with migraines into two groups of 100: one was given a pill to prevent migraines and the other a placebo. After six months, 11 women in the pill group and 12 in the placebo group had at least one migraine per week, and women in the pill group reported arm tingling as a potential side effect. Even if women in the pill group were found to be statistically less likely to have migraines than those in the placebo group, the difference might still be too small to justify recommending the pill, since it prevented weekly migraines in just one more woman per 100. Researchers would also have to weigh the potential side effects.
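To make the arithmetic concrete, here’s a minimal sketch in Python (using scipy; the numbers come from the hypothetical trial above) that asks whether 11 out of 100 versus 12 out of 100 is a statistically significant difference:

```python
# A minimal sketch using the hypothetical migraine-trial numbers above.
# Fisher's exact test asks: how likely is a difference at least this large
# if the pill and the placebo actually work equally well?
from scipy.stats import fisher_exact

#                had weekly migraines, did not
pill_group    = [11, 89]
placebo_group = [12, 88]

odds_ratio, p_value = fisher_exact([pill_group, placebo_group])
print(f"p-value: {p_value:.2f}")
# The p-value comes out far above the conventional 0.05 cutoff, so a
# difference of one woman per 100 could easily be chance -- and even if
# it weren't, it would likely be too small to matter clinically.
```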

 

The opposite is also true. If a study reports that regular exercise helped relieve chronic pain symptoms in 30 percent of its participants, that might sound like a lot. But if the study included just 10 people, that’s only three people helped. This finding may not be statistically significant, but could be clinically important, since there are limited treatment options for people with chronic pain, and might warrant a larger trial.

5. Bigger is often better.

Scientists arguably can never fully know the truth about a given topic, but they can get close. And one way of doing that is to design a study that has high power.

 

“Power is telling us what the chances are that a study will detect a signal, if that signal does exist,” John Ioannidis, M.D., a professor of medicine and health research and policy at Stanford Medical School, said via email.

 

The easiest way for researchers to increase a study’s power is to increase its size. A trial of 1,000 people typically has higher power than a trial of 500, and so on. Simply put, larger studies are more likely to help us get closer to the truth than smaller ones.
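To see why, here’s a rough simulation sketch in Python (my own illustration, not from the article): it repeatedly runs an imaginary two-arm trial in which a treatment truly lowers an event rate from 40 percent to 30 percent (made-up numbers), and counts how often a standard significance test detects that real difference.

```python
# A rough, self-contained simulation (not from the article) showing why
# bigger trials have more power. The 30% vs. 40% event rates are made-up
# numbers chosen only to illustrate the trend.
import math
import random

def two_prop_p_value(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(x1 / n1 - x2 / n2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def estimated_power(n_per_arm, p_treatment, p_control, alpha=0.05, sims=2000):
    """Fraction of simulated trials that detect the (real) difference."""
    hits = 0
    for _ in range(sims):
        x_t = sum(random.random() < p_treatment for _ in range(n_per_arm))
        x_c = sum(random.random() < p_control for _ in range(n_per_arm))
        if two_prop_p_value(x_t, n_per_arm, x_c, n_per_arm) < alpha:
            hits += 1
    return hits / sims

for n in (100, 250, 500, 1000):
    # Power climbs toward 1.0 as each arm of the trial gets bigger.
    print(f"{n:>5} per arm: power ~ {estimated_power(n, 0.30, 0.40):.2f}")
```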

6. Not all findings apply to you.

If a news article reports that a high-quality study had statistical and clinical significance, the next step might be to determine whether the findings apply to you.

 

If researchers are testing a hypothetical new drug to relieve arthritis symptoms, they may only include participants who have arthritis and no other conditions. They may eliminate those who take medications that might interfere with the drug they’re studying. Researchers may recruit participants by age, gender or ethnicity. Early studies on heart disease, for instance, were performed primarily on white men.

 

Each of us is unique, genetically and environmentally, and our lives aren’t highly controlled like a study. So take each study for what it is: information. Over time, it will become clearer whether one conclusion was important enough to change clinical recommendations. Which gets to a related idea …

7. One study is just one study.

If findings from one study were enough to change medical practices and public policies, doctors would be practicing yo-yo medicine, where recommendations would change from day to day. That doesn’t typically happen, so when you see a headline that begins or ends with, “a study found,” it’s best to remember that one study isn’t likely to shift an entire course of medical practice. If a study is done well and has been replicated, it’s certainly possible that it may change medical guidelines down the line. If the topic is relevant to you or your family, it’s worth asking your doctor whether the findings are strong enough to suggest that you make different health choices.

8. Not all journals are created equal.

Legitimate scientific journals tend to publish studies that have been rigorously and objectively peer reviewed, which is the gold standard for scientific research and publishing. A good way to spot a high-quality journal is to look for one with a high impact factor, a number that primarily reflects how often the average article from that journal has been cited by other articles in a given year. (Keep in mind, however, that lower-impact journals can still publish quality findings.) Most studies indexed in PubMed, a database of published scientific research articles and book chapters, are peer-reviewed.

 

Then there are so-called ‘predatory’ journals, which aren’t produced by legitimate publishers and which will publish almost any study, peer-reviewed or not, in exchange for a fee. (Legitimate journals may also request fees, primarily to cover their costs or to publish a study in front of a paywall, but only if the paper is accepted.) Predatory journals are attractive to some researchers who feel pressure to ‘publish or perish.’ It’s challenging, however, to distinguish them from legitimate ones, because they often sound or look similar. If an article has grammatical errors and distorted images, or if its journal lacks a clear editorial board and physical address, it might be a predatory journal. But it’s not always obvious, and even experienced researchers are occasionally fooled.

 

Reading about a study can be enlightening and engaging, but very few studies are definitive enough to justify changing your daily life on their own. When you see the next dramatic headline, read the story — and if you can find it, read the study, too (PubMed and Google Scholar are good places to start). If you have time, discuss the study with your doctor and see whether any reputable organizations, like the Centers for Disease Control and Prevention, World Health Organization, American Academy of Pediatrics, American College of Cardiology or National Cancer Institute, have commented on the matter.

 

Medicine is not an exact science, and things change every day. In a field of gray, where headlines sometimes try to force us to see things in black and white, start with these tips to guide your curiosity. And hopefully, they’ll help you decide when — and when not — to make certain health and lifestyle choices for yourself and for your family.


Amitha Kalaichandran, M.D., is a resident physician in pediatrics, an epidemiologist and a writer based in Toronto. She is currently working on a book that critically explores the scientific evidence behind traditional and nontraditional approaches to healing and illness.

 

Source

 




John Ioannidis has published interesting papers about false findings, bias, etc. It's worth checking out his work and reading his papers.

"Publish or perish"... one bad effect is that results have to be significant. Who cares about negative data?



47 minutes ago, mp68terr said:

Who cares about negative data?

In science, positive and negative results are equally significant.



2 minutes ago, aum said:

In science, positive and negative results are equally significant.

Agree, but try to get negative results published 😉


