New study provides insight into how science journalists evaluate psychology research

A recent study sheds light on how science journalists assess psychology research and provides insights into the factors that matter most to them when determining trustworthiness and newsworthiness. The study, published in Advances in Methods and Practices in Psychological Science, found that one factor — sample size — significantly outweighs others in influencing how journalists evaluate research.

Science journalists play a pivotal role in translating complex scientific discoveries for the general public. But how do they decide which studies to report on and which ones to ignore? To understand this process better, researchers conducted a study of science journalists, exploring the factors that influence their judgments of trustworthiness and newsworthiness in scientific research findings.

“I’m a metascientist, and during my PhD, I wanted to explore different parts of how science is done and communicated, including looking at stakeholders in this process that don’t always get a lot of attention,” explained study author Julia Bottesini, an independent researcher. “Science journalists play a really important role in the ecosystem of science communication, and I wanted to better understand what’s important to them and how they make decisions about what scientific findings to report on.”

Bottesini and her colleagues recruited a diverse group of 181 science journalists, primarily women (76.8%), working across a variety of outlets, including online, print, audio, and video. Their educational backgrounds varied, with some holding journalism degrees and others having studied natural or social sciences at the undergraduate or graduate level.

To investigate the factors that influence science journalists’ evaluations of research, the study asked participants to read and rate eight fictitious psychology research vignettes. The vignettes were designed to manipulate four key variables: sample size, sample representativeness, p-value (a statistic used to gauge whether a result is unlikely to have occurred by chance), and university prestige.

Sample size could be either small (50-89 participants) or large (500-1,000 participants). The sample could be either a convenience sample (e.g., local volunteers) or a more representative U.S. sample (e.g., people recruited nationwide). The p-value could be either relatively high (between .03 and .05) or low (between .0001 and .005). University prestige could be either higher (e.g., “Yale University”) or lower (e.g., “East Carolina University”).
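
To make the design concrete, here is a brief Python sketch of one way the four two-level factors described above could be combined into a set of vignette conditions. The factor levels are taken from the article, but the sampling scheme and all names used below are illustrative assumptions, not the authors’ actual materials.

import itertools
import random

# Two levels for each manipulated factor, as described in the article.
# (Illustrative reconstruction only, not the study's actual stimuli.)
FACTORS = {
    "sample_size": ["small (50-89 participants)", "large (500-1,000 participants)"],
    "sample_type": ["convenience sample", "representative U.S. sample"],
    "p_value": ["high (.03-.05)", "low (.0001-.005)"],
    "university_prestige": ["lower prestige", "higher prestige"],
}

def draw_vignette_conditions(n_vignettes=8, seed=0):
    """Pick a distinct combination of factor levels for each of n vignettes."""
    rng = random.Random(seed)
    # All 16 possible combinations of the four two-level factors.
    all_cells = [dict(zip(FACTORS, combo))
                 for combo in itertools.product(*FACTORS.values())]
    # One set of eight vignettes, drawn without replacement.
    return rng.sample(all_cells, n_vignettes)

for i, condition in enumerate(draw_vignette_conditions(), start=1):
    print(f"Vignette {i}: {condition}")

Running the sketch prints eight condition profiles, one per vignette, mirroring the eight fictitious research summaries participants evaluated.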

Participants were also asked three open-ended questions related to their typical evaluation of research findings, their evaluation of the presented findings, and whether they had any guesses about the manipulated variables.

Among the four manipulated variables, sample size had the largest impact on journalists’ evaluations. Studies with larger samples were consistently perceived as more trustworthy and newsworthy. This aligns with statistical reasoning: larger samples generally yield more precise estimates and therefore more reliable evidence.

Surprisingly, Bottesini and her colleagues found that the representativeness of the sample had minimal influence on science journalists’ judgments. Whether a study used a representative or convenience sample did not significantly sway their perceptions of trustworthiness or newsworthiness.

The exact p-value of a study’s findings also had limited impact on journalists’ evaluations. Results with p-values near the commonly accepted significance threshold of 0.05 were perceived similarly to results with highly significant p-values. However, in their open-ended responses, many journalists cited the presence of statistical significance as an important factor in judging a study’s trustworthiness.

Contrary to expectations, the prestige of the institution where the research was conducted did not significantly affect science journalists’ perceptions of trustworthiness or newsworthiness. This finding challenges the assumption that prestigious institutions automatically garner greater attention from journalists.

“I definitely went in thinking that findings coming out of prestigious universities would have at least some impact on how newsworthy and/or trustworthy they were perceived to be, and that was not the case at all in this study, which is good news,” Bottesini told PsyPost. “The qualitative answers suggest that other prestige factors might play a role, though, like the prestige of the journal the finding was published in.”

For example, when evaluating the trustworthiness of a scientific finding, one science journalist responded that a key factor was “the journal itself where the findings were published and its impact factor.”

Participants’ open-ended responses also pointed to a range of other factors that shape journalists’ assessments of research findings, including the plausibility of the results, overclaiming or exaggeration of the findings, conflicts of interest, and the opinions of outside experts. Many journalists also viewed experimental studies as more trustworthy than correlational ones.

“Science journalists (at least the ones in our study) are already using a wide range of strategies to vet the scientific information they transmit to the public, which is great,” Bottesini said. “If anything, I hope this study can serve as a starting point for others to create training materials and tools to help science journalists be even more effective at their jobs.”

While this study provides valuable insights into how science journalists evaluate research, it has its limitations. For instance, the findings are specific to a particular group of science journalists and may not generalize to all professionals in the field. Additionally, the study focused on psychology research, and the results might differ for other scientific domains.

Future research could delve deeper into the factors that influence science journalists’ decisions, including the impact of the researchers’ reputation, the journal’s standing, or the topic’s societal relevance. Furthermore, expanding the study to include a more diverse group of science journalists and exploring how their evaluations align with public perceptions of research could offer a more comprehensive understanding of science communication.

“I’d say this study is full of caveats, which reflect how complex this topic is,” Bottesini said. “One study can only scratch the surface in terms of understanding it, and that’s what I feel like our study did. But there are a lot of questions that still need to be addressed, and I hope this study can serve as a starting point for other researchers to investigate this topic.”

The study, “How Do Science Journalists Evaluate Psychology Research?”, was authored by Julia G. Bottesini, Christie Aschwanden, Mijke Rhemtulla, and Simine Vazire.
