A discussion of what to look for when reading and evaluating science news articles.
In the past year, the public has been inundated with science news coverage. Of course, the COVID-19 pandemic has dominated the spotlight, but as most of us have spent more time at home, we’ve also been consuming a barrage of reporting on global warming, mental health, nutrition, fitness—the list goes on.
The more information that is available, the more confusing science seems to become. Studies contradict one another, or new information emerges and earlier findings no longer hold. Somehow, readers are expected to sort through the slew of material and come out with an understanding of the science.
We also live in a world where misinformation is rampant, whether in the form of fake news or a misleading scientific study. Some articles are blatantly incorrect, but in most the problems are more subtle. So how can a news consumer determine what is reputable and what is questionable?
Science news essentially serves as the liaison between scientific papers and public understanding. However, most news articles offer only a snapshot of a particularly exciting finding in the scientific community. Rarely do they dig into preceding studies or contradictory findings. This can be frustrating for the reader because it seems like scientists are constantly changing their minds. In reality, the scientific process is non-linear and messy. Sensationalist journalism doesn’t often get that across to readers.
While wading through this sea of media, readers can look for a few signals to evaluate whether a news article should be trusted or whether they should seek other sources of information.
Sensational headlines
Headlines are often the reader’s first interaction with a news article. Even the most reputable news sources craft catchy headlines designed to convince readers to click on their article. It’s also the first chance to get readers interested in the science. As Mark Schapiro, environmental journalist and lecturer at UC Berkeley’s Graduate School of Journalism, puts it, “There is drama in all science” because it’s pushing the envelope of the current understanding of the world.
But headlines that are too sensational can be a red flag. For example, compare these headlines from March 2020 about hydroxychloroquine, a proposed COVID-19 cure that became a cautionary tale early in the search for treatments.
This headline from the Daily Wire doesn’t allow for any nuance:
“French Peer-Review Study: Our Treatment Cured 100% of Coronavirus Patients”
This headline from Wired is more restrained, though the reader has to get to the subhead for the important caveat (an absence of clinical trials):
“Chloroquine May Fight COVID-19—and Silicon Valley’s Into It
The old malaria drug is getting used against the coronavirus. Tech enthusiasts are abuzz. One missing step: clinical trials.”
Vox offers a reasonable and accurate headline that tells the reader what to expect from the article and acknowledges the other news floating around, without making extravagant claims about the drug:
“What you need to know about hydroxychloroquine, Trump’s new favorite treatment for COVID-19. Trump wants to use the anti-malaria drug to treat the new coronavirus, but the evidence is lacking.”
Science ended up winning out over the hype. Follow-up studies found that the drug was not effective for treating COVID-19, and the FDA revoked its emergency use authorization in June 2020.
Evaluating sources
In the main text of the news piece, note the sources used to back up the article. Well-researched science news articles should cite specific studies, and in most cases the cited study should be a peer-reviewed article from a reputable journal. It can be tricky to determine how reputable a journal is, because many journals beyond the big names like Nature and Science publish good work. But there are also many predatory journals that will accept almost any article, often with impressive-sounding names like the American Journal of Biomedical Science & Research.
The issue is further complicated by the wide adoption of pre-prints in the scientific community. Pre-print articles are posted online before they have been reviewed by a journal and by peer scientists. Pre-prints can play an important role in advancing science by spreading knowledge to other scientists sooner, but they should be read with caution, and journalists should note when they are citing one.
This article from CNBC does exactly that in presenting new evidence about the effectiveness of COVID-19 vaccines against emerging strains of the virus:
“Some early findings that were published in the preprint server bioRxiv, which have yet to be peer reviewed, indicate that the variant identified in South Africa, known as 501Y.V2, can evade the antibodies provided by some coronavirus treatments and may reduce the efficacy of the current line of available vaccines.”
The funding sources and any competing interests for a study should also be taken into consideration. A news article may not indicate this information, but it should be stated at the end of the original study.
Citing experts
Citing or quoting experts in the field adds a key perspective, and many news articles rely on them as sources. For one, interviewing those involved with the study is often essential for translating the findings out of the dense language of a scientific journal article. And if a news article makes claims about the broader importance of the results, it should cite the opinion of an expert not involved in the study, or someone who takes the opposite position on a controversial topic. For any expert quoted, it’s important to note their credentials. Do they come from a reputable institution? Are they working for a company that would benefit from the attention in the press?
The absence of certain information can also be revealing. The scientific process is inherently dynamic and uncertain. If the news article states the results definitively—or if the expert interviewed does—without introducing any potential caveats of the study, it’s best to look for further discussion of the issue from different sources. Schapiro can speak to the importance of this from a journalist’s perspective. Since the scientist is the expert, their statements are assumed to be accurate, and “introducing some level of critique, by allowing for the fact that this might not be the final word, increases their credibility,” he says.
Returning again to the hydroxychloroquine story: an article in Townhall relates a quote from Gregory Rigano, the main proponent of using the drug to treat COVID-19, from an interview with Fox News:
“ ‘As of this morning … a well-controlled peer reviewed study carried out by the most eminent infectious disease specialist in the world—Didier Raoult, MD, PhD — out of the south of France, in which he enrolled 40 patients … that showed a 100 percent cure rate against coronavirus,’ Rigano told Fox News’ Tucker Carlson.”
Rigano’s exaggerated opinion is the only one presented. And while he is vaguely introduced as an advisor to Stanford University School of Medicine, no other credentials are listed.
The confusing world of statistics
Statistics introduce another obstacle in the interpretation of science news because numbers can often be manipulated to reflect a certain viewpoint, either intentionally or simply because the framing of the numbers wasn’t carefully considered. As Schapiro states, “it’s important to know what’s being measured and be aware of what might be excluded from the statistic.” Statistics presented without context should be viewed critically.
Nutrition studies have a particularly bad reputation in the news. The available information and expert opinion seem to vacillate—one year fats are terrible for you, and the next they are a health food. On top of that, the numbers can be hard to interpret, whether that’s a percent reduction or increase in the risk of disease or the amount of a food that needs to be consumed. Often, meaningful numbers aren’t provided at all.
For example, this article from the New York Post on the benefits of red wine doesn’t even bother to use numbers, leaving it up to readers to dig into the study themselves:
“We’ve all happily heard about the health benefits of red wine. A recent study by the Annals of Internal Medicine found that a nightly glass can increase levels of good HDL cholesterol and help lower blood pressure, decrease blood-sugar levels and fight belly fat.”
This article from the BBC presents the results of a study with little context and almost no analysis. The article fails to mention that the demographics of the group or the method of data collection may have been problematic, or to explain what exactly a 23% lower risk of 10-year mortality means:
“More than 120,000 Dutch 55-to-69-year-old men and women provided dietary and lifestyle information in 1986, and then their mortality rate was looked at 10 years later.
The premature mortality risk due to cancer, diabetes, respiratory and neurodegenerative diseases was lower among the nut consumers.
There was an average 23% lower risk of 10-year mortality across all diseases…”
In this instance, the BBC puts the responsibility on the reader to realize that self-reported data is often unreliable and that, while the study may have shown lower death rates in people who eat nuts, there are also many confounding factors that could have contributed to that number.
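To see why that framing matters, a quick back-of-the-envelope calculation helps. The sketch below is purely illustrative: the 10% baseline mortality rate is an assumption chosen for demonstration, not a figure from the study. It shows how a 23% relative risk reduction can translate into a much smaller absolute difference.

```python
# Illustrative only: the baseline rate is assumed, not taken from the study.
baseline_mortality = 0.10        # assume 10% of non-nut-eaters die within 10 years
relative_risk_reduction = 0.23   # the "23% lower risk" reported in the article

nut_eater_mortality = baseline_mortality * (1 - relative_risk_reduction)
absolute_difference = baseline_mortality - nut_eater_mortality

print(f"10-year mortality among nut eaters: {nut_eater_mortality:.1%}")  # 7.7%
print(f"Absolute risk difference: {absolute_difference:.1%}")            # 2.3%
```

In other words, under this assumed baseline, eating nuts would correspond to roughly two fewer deaths per hundred people over ten years; a meaningful difference, but far less dramatic than “23% lower risk” suggests, and one still subject to the confounders mentioned above.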
How to find more accurate information
If an article has any of the warning signs listed here, then the reader should put on their journalist hat and take a few steps to check the validity of the information.
First, search for other articles covering the same topic (this works best for popular news stories) to see whether many outlets are making a similar argument or whether opinions conflict. This strategy has its own pitfalls: a particularly exciting conclusion from the study may be inflated and repeated across the press. However, some news sources will provide a more nuanced view of the same information. Helpfully, many journals now track when a publication is cited in the media, making it easier to find additional coverage.
It’s also important to evaluate the reliability and bias of each source. Media bias charts that place outlets on a sliding scale allow the consumer to compare them against one another. While no one source is infallible, some are more consistent or typically less biased than others.
If there isn’t any other coverage of the topic but the reader wants to verify the information, it’s necessary to do some research. Look up the scientists who are quoted, if any, and review their work. Check the researchers’ institutions and the number of times their publications have been cited, one indicator of a study’s impact.
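For readers comfortable with a little scripting, citation counts can even be checked programmatically. Below is a minimal sketch of one way to do this, using the public Semantic Scholar Graph API; the choice of service is an assumption of this example, not a tool named above, and the DOI shown is a placeholder to be replaced with that of the study in question.

```python
# A minimal sketch: look up a paper's citation count via the public
# Semantic Scholar Graph API. The DOI below is a placeholder; substitute
# the DOI of the study cited in the news article.
import requests

doi = "10.1000/example-doi"  # placeholder, not a real paper

url = f"https://api.semanticscholar.org/graph/v1/paper/DOI:{doi}"
resp = requests.get(url, params={"fields": "title,year,citationCount"})
resp.raise_for_status()

paper = resp.json()
print(f"{paper['title']} ({paper['year']}): cited {paper['citationCount']} times")
```

A raw citation count is only a rough signal, of course: new papers have had little time to accumulate citations, and a paper can be cited heavily because it is being disputed.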
Or go a step further and read the journal article being discussed in the news piece (assuming it isn’t behind a paywall). Digesting a full scientific manuscript can be a challenge for someone who isn’t a scientist, or even for a scientist in a different field. Start by reading the introduction of the paper, since it should provide some background for the study and its main findings. If the paper is a pre-print, read the comments from other scientists: are they excited about the work or highly critical? Going beyond the cited study, look for other recent publications on the same topic or for review articles (summaries of work in the field); both can help provide a better picture of the current research.
Becoming literate in, and critical of, the news is a skill that takes practice. But always read with a healthy dose of skepticism, just as a good scientist would.