Can’t You All Just Get Along? Effects of Scientific Disagreement and Incivility on Attention to and Trust in Science



Abstract

Disagreement and incivility are increasingly common in science communication. While previous work has explored effects on issue attitudes, it has not examined how disagreement and incivility in news coverage influence attention to and trust in science. In this study, we investigate how civil and uncivil disagreement about non-politicized issues affects attention to science news, evaluations of research, and scientific trust. Results reveal that disagreement and incivility can lead not only to less attention to and acceptance of particular science issues, but also to broader mistrust of scientists and scientific methods.
 
Disagreement and incivility are increasingly public features of science communication (Dudo, 2015). Economic pressures have led media editors to prioritize sensational coverage that attracts audience attention (Bennett et al., 2007), leading to an increase in coverage of scientific conflicts in news (Chinn et al., 2020; Hart et al., 2020). This intersects with trends toward increasing incivility in media overall (Sobieraj & Berry, 2011), but it is particularly important for science news. While disagreement is intrinsic to the production of scientific knowledge, the public tends to be skeptical of uncertain information (Chinn et al., 2018), with some believing that lack of scientific agreement is due to incompetence or is motivated by personal interests (Dieckmann et al., 2015).
 
Existing work on scientific disagreement and incivility in science communication has focused on issue attitudes (Anderson et al., 2014; Chinn et al., 2018; Malka et al., 2009; Yuan et al., 2018). However, previous research has not examined how disagreement and incivility affect attention to science news and trust in science, which influence public perceptions of expert positions, policy recommendations, and research funding (Besley et al., 2016). This study investigates how news stories containing civil scientific disagreement, uncivil scientific disagreement, or scientific agreement affect (a) attention to scientific issues, (b) evaluations of scientific research, and (c) trust in science. In doing so, we address gaps in extant work on how disagreement and incivility affect attitudes known to influence support for evidence-supported policies.

Background

This section reviews work informing hypotheses about the effects that civil and uncivil scientific disagreement will have on (a) attention to scientific topics, (b) evaluations of scientific research, and (c) trust in science. Scientific disagreement refers to information about debate within the scientific community, inconsistent findings, and scientific uncertainties. Civil disagreement maintains a neutral, respectful tone in line with professional journalistic norms. Uncivil disagreement conveys a lack of consensus and inconsistent findings by levying personal attacks on individual scientists and their work with aggressive language (e.g., “idiot scientists”) (Yuan et al., 2018, 2019).

Attention

Journalists often emphasize conflict because disagreement often captures audiences’ attention (Bennett et al., 2007). Disagreement also increases audiences’ anxiety, which can lead to greater information seeking (Huddy et al., 2007; Valentino et al., 2008). We therefore expect that a news article emphasizing scientific disagreement will lead to stronger attention-related outcomes (particularly interest and information seeking) than one reporting agreement.
 
Incivility also triggers attention. In political media, audiences find uncivil disagreement more entertaining than civil disagreement (Mutz & Reeves, 2005) and so audiences may be more attentive to uncivil content. Uncivil disagreement also motivates political engagement (Borah, 2014; Brooks & Geer, 2007; Masullo Chen & Lu, 2017), even if that engagement is more aggressive or uncivil (Gervais, 2014; Masullo Chen & Lu, 2017). This anger-provoking content is more likely to be shared on social media than less emotional content (Berger & Milkman, 2012; Hasell & Weeks, 2016). Perhaps because it violates a social norm (Yuan et al., 2019), exposure to uncivil disagreement is associated with greater physiological arousal (Mutz, 2007; Mutz & Reeves, 2005). In sum, incivility may be more engaging and attention-grabbing than civil content. Based on this prior research, we hypothesize the following:
 
Hypothesis 1: Participants exposed to scientific agreement will report the least (H1a) interest in the topic, (H1b) information seeking, (H1c) engagement, and (H1d) intentions to share information. Participants in the civil disagreement condition will have higher responses on attention measures than those in the agreement condition, but less than the uncivil disagreement condition. Participants in the uncivil disagreement condition will have the highest responses on attention measures.

Evaluation of Scientific Research

Although disagreement and incivility may increase attention to scientific topics, they may also lead to more negative evaluations of the science in question. People are more skeptical of scientific information when it appears disputed, and more accepting of scientific positions when they believe that scientists agree (Aklin & Urpelainen, 2014; Chinn et al., 2018; Malka et al., 2009). For example, balanced coverage of the refuted vaccine-autism link leads readers to perceive scientific disagreement and become less certain in their attitudes (Dixon & Clarke, 2013). We expect this to be reflected in participants’ acceptance and evaluations of debated study findings reported in a news article.
 
Incivility also affects message evaluation. Uncivil messages are seen as less informative and of poorer quality than civil messages (Brooks & Geer, 2007; Yuan et al., 2018). When civil and uncivil messages are presented side by side, civil messages are perceived as more credible (Thorson et al., 2010). In addition, uncivil messages are viewed as less fair (Brooks & Geer, 2007) and uncivil comments on a message increase perceptions that the message is biased, particularly among conservatives (Anderson et al., 2018). Above and beyond mere disagreement, incivility can also polarize scientific debates (Anderson et al., 2014) by increasing close-mindedness and attitude certainty (Borah, 2014). Thus, we expect that uncivil comments about a scientific study will negatively affect acceptance and evaluations of the study’s findings. Given prior research, we hypothesize the following:
 
Hypothesis 2: Participants exposed to scientific agreement will report (H2a) the most acceptance of the study’s findings presented in the news article and (H2b) the most positive perceptions of the research quality being done on the topic.
Participants in the civil disagreement condition will have lower responses on evaluation measures than those in the agreement condition, but higher than the uncivil disagreement condition. Participants in the uncivil disagreement condition will have the lowest responses on evaluation measures.

Trust in Scientists and Scientific Methods

Scientific disagreement not only affects perceptions of research, but also perceptions of scientists. While some see scientific disagreement as an indication of honesty and transparency, others believe that disagreeing experts are incompetent or self-interested (Dieckmann et al., 2015). However, the tone of disagreement is also likely to affect perceptions of scientists. Although scientists often use measured, cautious language (Boykoff & Boykoff, 2007), experts have been documented using aggressive language to “put down” disbelievers of consensus science (Yuan et al., 2019) and debated science has been uncivilly attacked as “junk” in mainstream media (McCright & Dunlap, 2010). Scientists who communicate more aggressively are considered less likable (Yuan et al., 2018). Although no work has empirically tested the effects of incivility on science trust outcomes, Mutz and Reeves (2005) find that uncivil disagreement has negative effects on political trust compared with civil disagreement. This may be in part because uncivil politicians are rated more negatively than civil politicians (Mutz, 2007). In this experiment, as will be described in further detail below, the stimuli describe the findings of a recent scientific study and quote a commenting scientist who agrees or disagrees (civilly or uncivilly) with the study’s findings. In this context, we expect the following:
 
Hypothesis 3: Participants in the scientific agreement condition will report the most trust in (H3a) the study authors and (H3b) the commenting scientist, followed by those in the civil disagreement condition, while those in the uncivil disagreement condition will report the least trust in the respective scientists.
 
Previous experimental work has not investigated how disagreement and incivility affect respondents’ general trust in science, which is positively associated with the acceptance of scientific claims and corresponding policy attitudes (Druckman & Bolsen, 2011; Lee, 2005). Some survey work has found evidence of a negative spillover effect; for example, individuals who consume media containing more climate change dissensus cues lose trust in scientists over time (Hmielowski et al., 2014). It is also important to note that people can have different levels of trust in scientific actors and scientific methods (Achterberg et al., 2017). We therefore separate the measurement of scientific methods and actors, though we expect the effects of civil and uncivil disagreement on each to be similar. Formally stated, we test the following hypotheses:
 
Hypothesis 4: Participants exposed to the scientific agreement condition will report the greatest (H4a) trust in scientists and (H4b) trust in scientific methods, followed by those in the civil disagreement condition, while those in the uncivil disagreement condition will report the least trust in scientists and scientific methods.

Method

Data

The data for this study were collected via Dynata (formerly Survey Sampling International) between September 23 and September 30, 2019, among U.S.-based respondents. After removing participants who did not complete the survey because they failed to pass simple attention checks (n = 698), the sample included 1,995 respondents. Although diverse, our sample was more White, older, and more educated than the U.S. population. Full sample information is available in supplementary material.

Procedure

After providing informed consent, participants saw a news article created for this study. Each article described the finding of a recent scientific study followed by comments from a scientist who was not an author on the study. The study findings that the articles presented were based on real discoveries, but agreement or disagreement was manipulated for this study. Article topics were stimulus sampled such that participants read about one of three topics: whether shocking the brain can improve athletic performance (n = 667), whether certain enzymes can convert type A blood to universal donor blood (n = 648), and whether Saturn’s rings were caused by a moon collision (n = 680). In a pretest, the topics had equal levels of self-reported comprehensibility.
 
The headline and commenting scientist’s remarks contained the experimental manipulations that defined the three conditions: scientific agreement (n = 673), civil disagreement (n = 669), and uncivil disagreement (n = 653). Each article was between 234 and 264 words, and contained no images, graphics, or source attribution. Differences in stimuli by condition are presented in Table 1. Full stimuli are in Appendix 2.1.
 
Table 1. Summary of Manipulated Differences in Stimuli by Experimental Condition.
Agreement

Headline: Scientists Agree with New Study’s Claim . . .
Body:
. . . Other scientists agree with the study’s findings.
“A large majority of researchers share the view that [ . . . ],” said Dr. Jonathan Hammig, who also researches [topic].
“The findings of this study are in line with findings from previous research. Other studies have also found [ . . . ] Most experts agree that there is ample evidence to support the findings of this study.”
Hammig emphasizes that this study’s suggestions are consistent with past research. “The available data strongly indicate [ . . . ], so the results really come as no surprise.”

Civil disagreement

Headline: Scientists Disagree over New Study’s Claim . . .
Body:
. . . Other scientists are skeptical of the study’s findings.
“There has been scientific debate in recent years on [ . . . ],” said Dr. Jonathan Hammig, who also researches [topic].
“The findings of this study contradict findings from previous research. Other studies have not found [ . . . ]. There is still considerable disagreement on this topic within the scientific community.”
Hammig emphasizes that this study’s results are inconsistent with past research. “This study challenges previous data we have suggesting that [ . . . ], so the results will really stir debate among scientists.”

Uncivil disagreement

Headline: Scientists Attack New Study’s Claim . . .
Body:
. . . Other scientists reject the study’s findings.
“There has been a lot of lousy research in recent years on [ . . . ],” said Dr. Jonathan Hammig, who also researches [topic].
“The findings of this garbage study go against findings from previous research. Other studies have not found [ . . . ]. The idiot authors of this study are clearly just writing nonsense.”
Hammig emphasizes that this study’s results completely oppose other research. “This study is so far off base from previous data we have suggesting [ . . . ], so this study is really just junk science.”
Note. Emphasis added.

Measures

Full question wording and measurement is available in supplementary material.

Attention

Interest
Interest in the topic of the article was measured with three items from past work (Karnowski et al., 2017; Oeldorf-Hirsch & Sundar, 2015; Turcotte et al., 2015) about respondents’ interest and desire to learn more (0 = low interest, 4 = high interest). Items were averaged to create a measure of interest (M = 1.90, SD = 1.27, Cronbach’s α = .93).
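For readers who want to see how multi-item indices like this are typically built, the minimal sketch below (Python, with hypothetical item names and toy responses rather than the authors' data or code) averages three items into a scale and computes Cronbach's alpha from the item and total-score variances.

```python
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


# Hypothetical item names and toy 0-4 responses; the actual item wording is in the supplement.
df = pd.DataFrame({
    "interest_1": [4, 2, 0, 3, 1],
    "interest_2": [3, 2, 1, 4, 0],
    "interest_3": [4, 1, 0, 3, 1],
})

items = df[["interest_1", "interest_2", "interest_3"]]
df["interest"] = items.mean(axis=1)                  # averaged index, as described above
print(f"M = {df['interest'].mean():.2f}, SD = {df['interest'].std(ddof=1):.2f}, "
      f"alpha = {cronbach_alpha(items):.2f}")
```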
Information seeking
We asked participants two items drawn from past work (Karnowski et al., 2017) about their likelihood to seek further information on the study topic (0 = low likelihood, 4 = high likelihood). These were averaged into a measure of information seeking (M = 1.87, SD = 1.27, r = .82, p < .001).
Engagement
Participants were asked how likely (0 = not at all likely to 4 = very likely) they were to engage with the study authors (M = 1.81, SD = 1.43) and with the commenting scientist (M = 1.77, SD = 1.43). They were also asked two items, on the same scale, about their likelihood of engaging on social media with scientists about the study topic (M = 1.63, SD = 1.36, r = .80, p < .001).
Sharing
Respondents answered two questions about their likelihood of sharing the scientific information in their social network (Bobkowski, 2015) (0 = not at all likely, 4 = very likely). We averaged these items into a measure of likelihood of sharing (M = 1.43, SD = 1.35, r = .76, p < .001).

Evaluation of Research

Acceptance of study findings
Two items captured participants’ agreement with the study findings and their belief that the findings were right or wrong (0 = low acceptance, 6 = high acceptance). These items were averaged to capture respondents’ acceptance of study findings (M = 3.13, SD = 1.28, r = .74, p < .001).
Perception of research quality
Respondents rated the science on that subject on 6-point semantic differential scales anchored by trustworthy/untrustworthy, not credible/credible, bad science/good science, and sloppy/rigorous. These were averaged to create a measure of perception of research quality that was specific to the scientific issue respondents read about (M = 3.10, SD = 1.30, Cronbach’s α = .93).

Trust in Science

Trust in the study authors

Four items asked how much respondents trusted the study authors as a source of information, to tell the truth, to do high-quality research, and to be unbiased in their work (Anderson et al., 2012; Cacciatore et al., 2018; Eiser et al., 2009; Hmielowski et al., 2014; Ho et al., 2011) (0 = none at all to 4 = a great deal). In line with Hasell et al. (2019), all measures included a mention of the study topic to be specific about the context in which respondents trusted the scientific actors. These items were averaged into an index of trust in the study authors (M = 2.16, SD = .87, Cronbach’s α = .76).

Trust in the commenting scientist

The above items, edited to ask about the commenting scientist, were also used to measure trust in the commenting scientist. They were averaged into an index of trust in the commenting scientist (M = 2.16, SD = .87, Cronbach’s α = .87).

Trust in scientists in general

Measures of general trust in scientists and scientific methods sought to cover the three dimensions of trust: competence, benevolence, and integrity (Hasell et al., 2019). We asked respondents three items concerning how much they trust that scientists are competent, use findings to benefit the public, and do unbiased research (0 = none at all to 4 = a great deal). These items were averaged to create a measure of trust in scientists (M = 2.35, SD = .97, Cronbach’s α = .89).

Trust in scientific methods

We prefaced questions about trust in scientific methods by saying, “The following questions are about your opinions on scientific methods, meaning the principles and procedures for the systematic pursuit of knowledge used in scientific research.” Participants responded to three items concerning how much they trusted scientific methods, in general, to produce truthful, helpful, and unbiased knowledge about the world (0 = none at all to 4 = a great deal). These items were averaged into an index representing participants’ trust in scientific methods (M = 2.42, SD = 1.01, Cronbach’s α = .91).

Results

Preliminary Analyses

Gender, age, race, education, employment status, and partisanship did not vary by condition (all ps > .17). Analyses run with and without demographic controls follow a similar pattern of results; results reported below do not include demographic controls.
 
Following exposure to the stimuli, respondents were asked to identify the article topic from three possible choices (89% correct) and whether the commenting scientist agreed or disagreed with the finding of the study (76.3% correct). All participants were included in the analyses presented here, though results from analyses using only participants who correctly identified the topic and (dis)agreement follow a similar pattern.
 
As a manipulation check, participants were asked how polite or rude they thought the commenting scientist was (0 = very polite, 6 = very rude). Perceived rudeness was significantly associated with experimental condition, F(2, 1991) = 357.40, p < .001. Participants in the uncivil disagreement condition rated the commenting scientist as most rude (M = 3.77, SD = 1.53) compared with participants in the civil disagreement (M = 2.26, SD = 1.17, p < .001) and agreement conditions (M = 2.02, SD = 1.13, p < .001). The scientist was also considered ruder in the civil disagreement condition than in the agreement condition (p = .003) (see Table 2, row 1).
Table 2. Mean Differences in Outcomes by Condition.
Outcome | Range | Scientific agreement | Civil disagreement | Uncivil disagreement
Rudeness of commenting scientist 0–6 2.02 (1.13)a 2.26 (1.17)b 3.77 (1.53)c
Attention
 Interest 0–4 2.04 (1.27)a 1.87 (1.26)b 1.77 (1.27)b
 Information seeking 0–4 2.01 (1.28)a 1.86 (1.27)ab 1.74 (1.26)b
 Engagement with study authors 0–4 1.93 (1.45)a 1.79 (1.42)ab 1.71 (1.40)b
 Engagement with commenting scientist 0–4 1.88 (1.43)a 1.78 (1.41)ab 1.65 (1.44)b
 Engagement on social media 0–4 1.79 (1.39)a 1.6 (1.35)b 1.48 (1.34)b
 Sharing 0–4 1.62 (1.37)a 1.42 (1.37)b 1.25 (1.28)b
Evaluation of research
 Agreement with study findings 0–6 3.51 (1.33)a 3.00 (1.18)b 2.86 (1.23)b
 Perception of research quality 0–5 3.43 (1.25)a 3.05 (1.25)b 2.81 (1.33)c
Trust in science
 Trust in study authors 0–4 2.39 (.89)a 2.19 (.84)b 1.89 (.82)c
 Trust in commenting scientist 0–4 2.40 (.88)a 2.22 (.81)b 1.86 (.85)c
 General trust in scientists 0–4 2.48 (.95)a 2.34 (.97)b 2.23 (.98)b
 General trust in scientific methods 0–4 2.54 (.99)a 2.37 (.99)b 2.33 (1.04)b
Note. Descriptive condition means are reported with standard deviations in parentheses. Means within the same row with different letters differ significantly at p < .05 using pairwise tests with a Bonferroni correction.

Analysis of Variance (ANOVA) Models

We examined the effects of the experimental manipulation on all outcomes with a series of ANOVAs in which the experimental condition and issue topic were the sole predictors. Analyses were also run including an interaction between condition and topic; the pattern of results was identical to analyses that did not include an interaction term for all except one outcome (discussed below).
 
Mean differences in outcomes by experimental condition can be found in Table 2. Due to the large number of hypotheses, we were at risk of discovering some false positive results; therefore, all pairwise tests were run with a Bonferroni correction.
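As a concrete illustration of this analytic approach (a sketch with synthetic stand-in data, not the authors' code or results), the following fits an ANOVA with experimental condition and issue topic as the sole predictors using statsmodels, derives partial eta squared for the condition effect from the sums of squares, and applies a Bonferroni correction to pairwise t tests across the three conditions.

```python
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in data: one row per respondent with the experimental condition,
# the article topic, and an outcome (here labeled 'interest' for illustration).
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "condition": rng.choice(["agreement", "civil", "uncivil"], n),
    "topic": rng.choice(["blood", "brain", "space"], n),
})
df["interest"] = rng.normal(2.0, 1.3, n) - 0.15 * (df["condition"] != "agreement")

# ANOVA with condition and topic as the sole predictors (no interaction term).
model = smf.ols("interest ~ C(condition) + C(topic)", data=df).fit()
table = anova_lm(model, typ=2)
print(table)

# Partial eta squared for the condition effect: SS_effect / (SS_effect + SS_residual).
ss_cond = table.loc["C(condition)", "sum_sq"]
ss_resid = table.loc["Residual", "sum_sq"]
print("partial eta^2 =", round(ss_cond / (ss_cond + ss_resid), 3))

# Bonferroni-corrected pairwise comparisons across the three conditions:
# multiply each p value by the number of comparisons and cap at 1.
groups = df.groupby("condition")["interest"]
for a, b in combinations(sorted(groups.groups), 2):
    t, p = stats.ttest_ind(groups.get_group(a), groups.get_group(b))
    print(f"{a} vs {b}: p_bonferroni = {min(p * 3, 1.0):.3f}")
```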
Below, we present effects of the experimental condition, controlling for the article topic. In some cases, the article topic affected outcomes; in these cases, the article about shocking the brain typically resulted in lower outcomes than the articles about converting blood types or Saturn’s rings. Mean differences in outcomes by article topic are reported in Table 3.
 
Table 3. Mean Differences in Outcomes by Topic.
Outcome | Range | Convert blood type | Shocking the brain | Saturn’s rings
Attention
 Interest 0–4 1.99 (1.26)a 1.82 (1.29)b 1.87 (1.26)ab
 Information seeking 0–4 1.94 (1.25)a 1.82 (1.30)a 1.86 (1.26)a
 Engagement with study authors 0–4 1.78 (1.42)a 1.93 (1.43)ab 1.73 (1.42)ac
 Engagement with commenting scientist 0–4 1.75 (1.42)a 1.87 (1.42)ab 1.68 (1.44)ac
 Engagement on social media 0–4 1.63 (1.37)a 1.67 (1.36)a 1.59 (1.36)a
 Sharing 0–4 1.53 (1.38)a 1.42 (1.36)ab 1.35 (1.32)b
Evaluation of research
 Agreement with study findings 0–6 3.38 (1.17)a 2.7 (1.39)b 3.31 (1.14)a
 Perception of research quality 0–5 3.48 (1.22)a 2.57 (1.33)b 3.26 (1.17)c
Trust in science
 Trust in study authors 0–4 2.24 (.85)a 1.91 (.90)b 2.33 (.82)a
 Trust in commenting scientist 0–4 2.23 (.87)a 2.05 (.87)b 2.21 (.87)a
 General trust in scientists 0–4 2.43 (.97)a 2.23 (.98)b 2.41 (.96)a
 General trust in scientific methods 0–4 2.48 (1.03)a 2.33 (1.02)b 2.44 (.97)ab
Note. Descriptive means are reported with standard deviations in parentheses. Means within the same row with different subscripts differ significantly at p < .05 using pairwise tests with a Bonferroni correction.

Attention

Interest
There was a significant main effect of experimental condition on interest, F(2, 1989) = 8.03, p < .001, ηp2 = .008, but not in the expected direction. Pairwise comparisons showed that respondents reported greater interest in the agreement condition (M = 2.04, SD = 1.27), compared with the civil disagreement (M = 1.87, SD = 1.26, p = .045) and uncivil disagreement (M = 1.77, SD = 1.27, p < .001) conditions. There was no difference between the civil and uncivil disagreement conditions (p = .365) (Table 2, row 3). H1a was not supported.
Information seeking
There was a significant effect of experimental condition on information seeking, F(2, 1990) = 7.40, p < .001, ηp2 = .007. However, effects were contrary to those hypothesized. Pairwise comparisons revealed that respondents reported greater information seeking in the agreement condition (M = 2.01, SD = 1.28), compared with the uncivil disagreement condition (M = 1.74, SD = 1.26, p < .001). The civil disagreement condition (M = 1.86, SD = 1.27) did not differ significantly from either the agreement (p = .115) or uncivil disagreement (p = .225) condition (Table 2, row 4). H1b was not supported.

Engagement

Engagement with study authors
Experimental condition significantly affected engagement with the study authors, F(2, 1989) = 4.40, p < .05, ηp2 = .004. Those in the agreement condition reported greater likelihood of engagement with the authors (M = 1.93, SD = 1.45) than those in the uncivil disagreement condition (M = 1.71, SD = 1.40, p = .011). Respondents in the civil disagreement condition (M = 1.79, SD = 1.42) reported engagement levels between, and not significantly different from, the agreement (p = .184) and uncivil disagreement (p = .871) conditions (Table 2, row 5).
Engagement with commenting scientist
Experimental condition significantly affected engagement with the commenting scientist, F(2, 1988) = 4.31, p < .05, ηp2 = .004. Those in the agreement condition (M = 1.88, SD = 1.43) reported greater likelihood of engagement with the commentator than those in the uncivil disagreement condition (M = 1.65, SD = 1.44, p = .010). Respondents in the civil disagreement condition (M = 1.78, SD = 1.41) did not significantly differ from those in the agreement (p = .580) and uncivil disagreement (p = .310) conditions (Table 2, row 6).
Engagement on social media
There was a significant effect of the experimental condition on likelihood to engage on social media, F(2, 1989) = 8.71, p < .001, ηp2 = .009. Pairwise comparisons showed that the agreement condition (M = 1.79, SD = 1.39) reported a higher likelihood of engagement than either the civil disagreement (M = 1.67, SD = 1.35, p = .032) or uncivil disagreement (M = 1.48, SD = 1.34, p < .001) conditions. There was no difference between civil and uncivil disagreement conditions (p = .333, Table 2, row 7).
In sum, results using different measures of engagement followed a pattern contrary to what was hypothesized. H1c was not supported.
Sharing
Finally, there was a significant effect of experimental condition on the likelihood that respondents would share information, F(2, 1990) = 12.96, p < .001, ηp2 = .013. Again, the effect was contrary to that hypothesized. Those in the agreement condition (M = 1.62, SD = 1.37) were significantly more likely to share than those in the civil disagreement condition (M = 1.42, SD = 1.37, p = .019) and those in the uncivil disagreement condition (M = 1.25, SD = 1.28, p < .001). There were no differences between civil and uncivil disagreement conditions (p = .056) (Table 2, row 8). H1d was not supported.

Evaluation of the Research

Agreement with study findings
Experimental condition affected participants’ agreement with the study findings, F(2, 1990) = 53.88, p < .001, ηp2 = .051. Participants reported stronger agreement with the findings when there was agreement (M = 3.51, SD = 1.33) than civil disagreement (M = 3.00, SD = 1.18, p < .001) or uncivil disagreement (M = 2.86, SD = 1.23, p < .001). There was no difference between civil and uncivil disagreement (p = .097) (Table 2, row 10). H2a was partially supported.
Perception of research quality
In addition, participants’ evaluation of the quality of the research was affected by the experimental condition, F(2, 1977) = 42.65, p < .001, ηp2 = .041. Research was evaluated more positively in the agreement condition (M = 3.43, SD = 1.25) than in the civil disagreement (M = 3.05, SD = 1.25, p < .001) and uncivil disagreement (M = 2.81, SD = 1.33, p < .001) conditions. Furthermore, those in the civil disagreement condition evaluated the research more positively than those in the uncivil condition (p = .002) (Table 2, row 11). H2b was supported.

Trust in Science

Trust in study authors
Experimental condition affected trust in the study authors, F(2, 1988) = 60.52, p < .001, ηp2 = .057. Participants in the agreement condition reported higher levels of trust (M = 2.39, SD = .89) than those in the civil disagreement (M = 2.19, SD = .84, p < .001) and uncivil disagreement (M = 1.89, SD = .82, p < .001) conditions. In addition, those in the uncivil condition reported lower levels of trust than those in the civil disagreement condition (p < .001) (Table 2, row 13). H3a was supported.
Trust in commenting scientist
Experimental condition also affected trust in the commenting scientist, F(2, 1987) = 71.26, p < .001, ηp2 = .067. As with trust in the study authors, participants in the agreement condition reported higher levels of trust (M = 2.40, SD = .88) than those in the civil disagreement (M = 2.22, SD = .81, p < .001) or uncivil disagreement (M = 1.86, SD = .85, p < .001) conditions. Those in the civil disagreement condition reported significantly more trust than those in the uncivil condition (p < .001) (Table 2, row 14). H3b was supported.
We observed a significant interactive effect between condition and topic on trust in the commenting scientist, F(4, 1983) = 5.03, p < .001, ηp2 = .01. Visual inspection of this interaction showed that, in the agreement condition, respondents reported higher levels of trust in the commenting scientist in the blood and space conditions than in the brain condition. In the civil and uncivil disagreement conditions, participants reported similar levels of trust across all topics (Figure in Appendix 2.2).
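One way to carry out the kind of visual inspection described above is to plot cell means of the outcome for each topic within each condition. The sketch below does this with pandas and matplotlib, again using synthetic stand-in data and a hypothetical column name (trust_commenter) rather than the study's variables.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic stand-in data; 'trust_commenter' is a hypothetical name for
# trust in the commenting scientist on the 0-4 scale described above.
rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "condition": rng.choice(["agreement", "civil", "uncivil"], n),
    "topic": rng.choice(["blood", "brain", "space"], n),
})
df["trust_commenter"] = np.clip(rng.normal(2.2, 0.9, n), 0, 4)

# Cell means for each topic-by-condition combination, plotted as one line per condition.
cell_means = (
    df.groupby(["topic", "condition"])["trust_commenter"]
    .mean()
    .unstack("condition")
)
ax = cell_means.plot(marker="o", figsize=(6, 4))
ax.set_xlabel("Article topic")
ax.set_ylabel("Mean trust in commenting scientist (0-4)")
plt.tight_layout()
plt.savefig("condition_topic_interaction.png")
```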
Trust in scientists
Experimental condition affected respondents’ overall trust in scientists, F(2, 1989) = 11.11, p < .001, ηp2 = .011. Those in the agreement condition reported higher trust in scientists (M = 2.48, SD = .95) than did participants in either the civil disagreement (M = 2.34, SD = .97, p = .019) or uncivil disagreement (M = 2.23, SD = .98, p < .001) conditions. However, there was no difference between the civil and uncivil disagreement conditions (p = .156) (Table 2, row 15). H4a was partially supported.
Trust in scientific methods
Experimental condition affected respondents’ trust in scientific methods, F(2, 1988) = 8.30, p < .001, ηp2 = .008. Those in the agreement condition reported higher trust in scientific methods (M = 2.54, SD = .99) than did participants in either the civil disagreement (M = 2.37, SD = .99, p = .006) or uncivil disagreement (M = 2.33, SD = 1.04, p < .001) conditions. The civil and uncivil disagreement conditions did not significantly differ (p = 1.00) (Table 2, row 16). H4b was partially supported.

Discussion

Summary of Findings

Attention

Contrary to our expectations, disagreement and incivility led to less interest, engagement, information seeking, and information sharing, compared with the agreement condition. As the relevant hypotheses were largely based on political communication research, this pattern of results highlights possible differences in the public’s reactions to scientific versus political messages.
 
Perhaps when scientific information is contested, people perceive it to be less useful, and in contrast are more inclined to be attentive to “usable” positions on which scientists agree. Some people may consume science news to increase their science knowledge and may therefore be more interested in learning settled information. Others may be more attentive to scientific information that appears certain because they see it as more valuable for decision making in daily life, conversely choosing to “wait and see” before acting or forming attitudes on debated information. While disagreement and incivility may attract attention to political information by providing an opportunity for one’s side to “win,” scientific information may be perceived as less important to attend to when there is not an agreed upon answer, position, or recommendation. Another possible explanation for these findings is that we measured attention differently from past work on political incivility, which has measured arousal using psychophysiological methods (Mutz, 2007) or self-reported entertainment (Mutz & Reeves, 2005).

Evaluation of Scientific Research

Respondents were more accepting of the study’s finding in their personal beliefs when that finding appeared more certain and less accepting when the finding remained debated, regardless of civility. Respondents’ perceptions of research quality were affected by both disagreement and incivility. This may be because uncivil attacks on science often target research practices (e.g., “junk science”) in addition to personal character (McCright & Dunlap, 2010), a pattern reflected in our stimuli; attacks aimed at both individuals and their research may be interpreted as cues about the quality of scientific work in that area.

Trust in Science

Trust in science was greatest when there was agreement. However, there were some differences between how incivility affected trust in the scientists mentioned in the article and respondents’ general trust in scientists and methods. Trust in the scientists mentioned in the article was highest when there was agreement, followed by civil disagreement, and lowest in the uncivil disagreement condition. This pattern held for both trust in the authors of the study and trust in the commenting scientist. This finding serves as a warning for those who are inclined to attack researchers with whom they disagree; when the commenting scientist was uncivil, he was viewed as negatively as those he was disparaging. In addition, it suggests that incivility can further damage trust in the attacked scientists, at least in the absence of a statement from the attacked scientists (as our stimuli were designed).
 
However, participants’ general trust in scientists and scientific methods was unaffected by the incivility of a commenting scientist. Although scientific disagreement appears to negatively influence trust in scientists and scientific methods, a single incident of incivility did not further erode trust.

Strengths and Limitations

This study is one of the first to examine the ways in which disagreement and incivility in scientific messages affect not only issue attitudes and actors’ likeability (e.g., Yuan et al., 2018), but also attention outcomes and trust in scientific actors. Strengths of this study include the use of a large, diverse sample and stimulus sampling by testing effects across three different scientific topics. Stimuli were made as realistic as possible by using common language for disagreement and denial drawn from previous content analytic work (REDACTED). While incivility does exist in science news (Dominus, 2017; Gelman, 2016), the degree of incivility in the experimental stimuli may violate perceived norms of traditional news reporting (Yuan et al., 2018), and may be more common in fringe or politicized spaces.
 
We note that participants considered the commenting scientist to be slightly ruder in the civil disagreement condition (M = 2.26) than in the scientific agreement condition (M = 2.02), though this was not the intention when designing the stimuli. This raises the possibility that some of the differences between the agreement and civil disagreement conditions, attributed to the presence or absence of disagreement, may be driven in part by a perception of incivility. However, the mean difference between the agreement and civil disagreement conditions is quite small (.24 on a 7-point scale) compared with the difference between the civil and uncivil disagreement conditions (1.51).
 
While this study compared differences between scientific agreement, civil disagreement, and uncivil disagreement, it did not capture how people might respond to the studies presented in the absence of a message or in response to viewing a one-sided message (i.e., a news article about the study with no commenting scientist). We chose to compare civil and uncivil scientific disagreement conditions to an agreement condition to ensure that the stimuli were as similar as possible across conditions with respect to length and the featured speakers. Also, prior work has suggested that in non-politicized issue contexts individuals in the United States assume a high degree of scientific agreement in the absence of agreement information (Chinn et al., 2018). For this reason, our agreement condition may produce similar outcomes to a one-sided message. That said, we did not capture a baseline of how people respond to measures (e.g., trust in science) in the absence of any message as a control and cannot make claims about comparisons to a no-message condition.
 
Finally, online samples like ours are more educated than the general population. More educated people tend to have greater deference to scientific authority (Anderson et al., 2012) among other favorable science attitudes. Thus, it is possible that science attitude outcomes would be lower among a less educated sample or that our more educated sample responded differently to scientific disagreement than a less educated sample would (Dieckmann et al., 2015).

Implications and Areas for Future Research

Science news coverage increasingly emphasizes disagreement and uncivil conflict (Chinn et al., 2020; Hart et al., 2020). While disagreement is a normal condition within the scientific community, we find that disagreement and incivility reduce attention to, evaluations of, and trust in science. Most scientific issues have areas of certainty, about which scientists agree, and areas of uncertainty, where experts disagree. Yet following expert recommendations on points of certainty is perhaps most important when our understanding is incomplete. For example, though there is some debate about how severe the impacts of climate change will be (Davenport, 2018), there is widespread scientific agreement that human actions are causing climate change and that actions to address climate change are vital to avert the worst impacts (IPCC, 2018). However, the public may be less attentive to and trusting of experts’ views in the face of disagreement and debate.
 
There may be several reasons why the public is skeptical of science reported with disagreement and incivility. First, debated information may be seen as incomplete, and individuals may be uncomfortable making decisions based on uncertain information (Kahneman & Tversky, 1979). Second, while debate and critique are integral to the scientific process, such debates are not often made public (Simis-Wilkinson et al., 2018) and may violate public norms about scientists being dispassionate and objective. Finally, given the politicization of prominent scientific issues like climate change and COVID-19 (Chinn et al., 2020; Hart et al., 2020), audiences may infer political motivations and bias to be at the root of scientific disagreements and incivility (Dieckmann et al., 2015).
 
Given that critique, peer review, and debate are necessary for managing the uncertainties and complexities of scientific research, it is important to continue research into how scientific disagreements can be transparently communicated in ways that do not diminish the value of scientific knowledge in the eyes of the public. This presents several opportunities for future research. First, while this study only examined incivility in the context of scientific disagreement, future work can do more to separate the effects of incivility and disagreement with designs that fully cross polite and impolite language when scientists agree and disagree. In addition, future work should investigate strategies to reduce the negative effects of presenting scientific disagreements on outcomes like trust, such as emphasizing points of agreement or explaining norms around peer critique. There is also a lack of empirical work examining how scientists can effectively respond to uncivil attacks. Anecdotal evidence from Dr. Fauci’s responses to uncivil attacks suggests that civil responses that transparently explain the support and limitations of evidence for one’s position may be part of an effective strategy for maintaining public trust (Nicholas & Yong, 2020). Finally, this study focused on non-politicized scientific issues. As perceptions of incivility may depend on whether one feels that their side is under attack (Anderson et al., 2014; Borah, 2014), future research must address how disagreement and incivility affect attitudes in politicized contexts.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

Achterberg P., de Koster W., van der Waal J. (2017). A science confidence gap: Education, trust in scientific methods, and trust in scientific institutions in the United States, 2014. Public Understanding of Science, 26(6), 704–720. https://doi.org/10.1177/0963662515617367
Aklin M., Urpelainen J. (2014). Perceptions of scientific dissent undermine public support for environmental policy. Environmental Science & Policy, 38, 173–177. https://doi.org/10.1016/j.envsci.2013.10.006
Anderson A. A., Brossard D., Scheufele D. A., Xenos M. A., Ladwig P. (2014). The “nasty effect”: Online incivility and risk perceptions of emerging technologies: Crude comments and concern. Journal of Computer-Mediated Communication, 19(3), 373–387. https://doi.org/10.1111/jcc4.12009
Anderson A. A., Scheufele D. A., Brossard D., Corley E. A. (2012). The role of media and deference to scientific authority in cultivating trust in sources of information about emerging technologies. International Journal of Public Opinion Research, 24(2), 225–237. https://doi.org/10.1093/ijpor/edr032
Anderson A. A., Yeo S. K., Brossard D., Scheufele D. A., Xenos M. A. (2018). Toxic talk: How online incivility can undermine perceptions of media. International Journal of Public Opinion Research, 30(1), 156–168. https://doi.org/10.1093/ijpor/edw022
Bennett W. L., Lawrence R. G., Livingston S. (2007). When the press fails. University of Chicago Press.
Berger J., Milkman K. L. (2012). What makes online content viral? Journal of Marketing Research, 49(2), 192–205. https://doi.org/10.1509/jmr.10.0353
Besley J. C., Dudo A. D., Yuan S., Ghannam N. A. (2016). Qualitative interviews with science communication trainers about communication objectives and goals. Science Communication, 38(3), 356–381. https://doi.org/10.1177/1075547016645640
Bobkowski P. S. (2015). Sharing the news: Effects of informational utility and opinion leadership on online news sharing. Journalism & Mass Communication Quarterly, 92(2), 320–345. https://doi.org/10.1177/1077699015573194
Borah P. (2014). Does it matter where you read the news story? Interaction of incivility and news frames in the political blogosphere. Communication Research, 41(6), 809–827. https://doi.org/10.1177/0093650212449353
Boykoff M. T., Boykoff J. M. (2007). Climate change and journalistic norms: A case-study of US mass-media coverage. Geoforum, 38(6), 1190–1204. https://doi.org/10.1016/j.geoforum.2007.01.008
Brooks D. J., Geer J. G. (2007). Beyond negativity: The effects of incivility on the electorate. American Journal of Political Science, 51(1), 1–16. https://doi.org/10.1111/j.1540-5907.2007.00233.x
Cacciatore M. A., Browning N., Scheufele D. A., Brossard D., Xenos M. A., Corley E. A. (2018). Opposing ends of the spectrum: Exploring trust in scientific and religious authorities. Public Understanding of Science, 27, 11–28. https://doi.org/10.1177/0963662516661090
Chinn S., Hart P. S., Soroka S. (2020). Politicization and polarization in climate change news content, 1985-2017. Science Communication, 42(1), 112–129. https://doi.org/10.1177/1075547019900290
Chinn S., Lane D. S., Hart P. S. (2018). In consensus we trust? Persuasive effects of scientific consensus communication. Public Understanding of Science, 27(7), 807–823. https://doi.org/10.1177/0963662518791094
Davenport C. (2018, October 7). Major climate report describes a strong risk of crisis as early as 2040. The New York Times. https://www.nytimes.com/2018/10/07/climate/ipcc-climate-report-2040.html
Dieckmann N. F., Johnson B. B., Gregory R., Mayorga M., Han P. K. J., Slovic P. (2015). Public perceptions of expert disagreement: Bias and incompetence or a complex and random world? Public Understanding of Science, 26(3), 325–338. https://doi.org/10.1177/0963662515603271
Dixon G. N., Clarke C. E. (2013). Heightening uncertainty around certain science: Media coverage, false balance, and the autism-vaccine controversy. Science Communication, 35(3), 358–382.
Druckman J. N., Bolsen T. (2011). Framing, motivated reasoning, and opinions about emergent technologies. Journal of Communication, 61(4), 659–688. https://doi.org/10.1111/j.1460-2466.2011.01562.x
Dudo A. (2015). Scientists, the media, and the public communication of science: Scientists’ public communication activities. Sociology Compass, 9(9), 761–775. https://doi.org/10.1111/soc4.12298
Eiser J. R., Stafford T., Henneberry J., Catney P. (2009). “Trust me, I’m a scientist (not a developer)”: Perceived expertise and motives as predictors of trust in assessment of risk from contaminated land. Risk Analysis, 29(2), 288–297. https://doi.org/10.1111/j.1539-6924.2008.01131.x
Gelman A. (2016). The time-reversal heuristic—A new way to think about a published finding that is followed up by a large, preregistered replication (in context of Amy Cuddy’s claims about power pose). http://andrewgelman.com/2016/01/26/more-power-posing/
Gervais B. T. (2014). Following the news? Reception of uncivil partisan media and the use of incivility in political expression. Political Communication, 31(4), 564–583. https://doi.org/10.1080/10584609.2013.852640
Hart P. S., Chinn S., Soroka S. (2020). Politicization and polarization in COVID-19 news coverage. Science Communication, 42, 679–697. https://doi.org/10.1177/1075547020950735
Hasell A., Tallapragada M., Brossard D. (2019). Deference to scientific authority, trust in science, and credibility of scientific expertise: Distinguishing the three connected constructs in science communication [Conference session]. International Communications Association Annual Conference, Washington, D.C.
Hasell A., Weeks B. E. (2016). Partisan provocation: The role of partisan news use and emotional responses in political information sharing in social media. Human Communication Research, 42(4), 641–661. https://doi.org/10.1111/hcre.12092
Hmielowski J. D., Feldman L., Myers T. A., Leiserowitz A., Maibach E. (2014). An attack on science? Media use, trust in scientists, and perceptions of global warming. Public Understanding of Science, 23(7), 866–883. https://doi.org/10.1177/0963662513480091
Ho S. S., Scheufele D. A., Corley E. A. (2011). Value predispositions, mass media, and attitudes toward nanotechnology: The interplay of public and experts. Science Communication, 33(2), 167–200. https://doi.org/10.1177/1075547010380386
Huddy L., Feldman S., Cassese E. (2007). On the distinct political effects of anxiety and anger. In Neuman R. W., Marcus G. E., Crigler A., MacKuen M. (Eds.), The affect effect: Dynamics of emotion in political thinking and behavior (pp. 202–230). The University of Chicago Press.
The Intergovernmental Panel on Climate Change (IPCC) (2018). Summary for Policymakers. In: Masson-Delmotte V., Zhai P., Pörtner H.-O., Roberts D., Skea J., Shukla P.R., Pirani A., Moufouma-Okia W., Péan C., Pidcock R., Connors S., Matthews J.B.R., Chen Y., Zhou X., Gomis M.I., Lonnoy E., Maycock T., Tignor M., Waterfield T. (Eds.), Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty (p. 32). World Meteorological Organization, Geneva, Switzerland.
Kahneman D., Tversky A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292.
Karnowski V., Kümpel A. S., Leonhard L., Leiner D. J. (2017). From incidental news exposure to news engagement. How perceptions of the news post and news usage patterns influence engagement with news articles encountered on Facebook. Computers in Human Behavior, 76, 42–50. https://doi.org/10.1016/j.chb.2017.06.041
Lee C.-J. (2005). Public attitudes toward emerging technologies: Examining the interactive effects of cognitions and affect on public attitudes toward nanotechnology. Science Communication, 27(2), 240–267. https://doi.org/10.1177/1075547005281474
Malka A., Krosnick J. A., Debell M., Pasek J., Schneider D. (2009). Featuring skeptics in news media stories about global warming reduces public beliefs in the seriousness of global warming. Woods Institute for the Environment. http://woods.stanford.edu/research/global-warming-skeptics.html
Masullo Chen G., Lu S. (2017). Online political discourse: Exploring differences in effects of civil and uncivil disagreement in news website comments. Journal of Broadcasting & Electronic Media, 61(1), 108–125. https://doi.org/10.1080/08838151.2016.1273922
McCright A. M., Dunlap R. E. (2010). Anti-reflexivity. Theory, Culture & Society, 27(2–3), 100–133. https://doi.org/10.1177/0263276409356001
Mutz D. C. (2007). Effects of “in-your-face” television discourse on perceptions of a legitimate opposition. American Political Science Review, 101(4), 621–635. https://doi.org/10.1017/S000305540707044X
Mutz D. C., Reeves B. (2005). The new videomalaise: Effects of televised incivility on political trust. American Political Science Review, 99(1), 1–15.
Nicholas P., Yong E. (2020). Fauci: “Bizarre” White House behavior only hurts the president. The Atlantic. https://www.theatlantic.com/politics/archive/2020/07/trump-fauci-coronavirus-pandemic-oppo/614224/
Oeldorf-Hirsch A., Sundar S. S. (2015). Posting, commenting, and tagging: Effects of sharing news stories on Facebook. Computers in Human Behavior, 44, 240–249. https://doi.org/10.1016/j.chb.2014.11.024
Simis-Wilkinson M., Madden H., Lassen D., Su Y.-F., Brossard D., Scheufele D. A., Xenos M. A. (2018). Scientists joking on social media: An empirical analysis of #overlyhonestmethods. Science Communication, 40(3), 314–339. https://doi.org/10.1177/1075547018766557
Sobieraj S., Berry J. M. (2011). From incivility to outrage: Political discourse in blogs, talk radio, and cable news. Political Communication, 28(1), 19–41. https://doi.org/10.1080/10584609.2010.542360
Thorson K., Vraga E., Ekdale B. (2010). Credibility in context: How uncivil online commentary affects news credibility. Mass Communication and Society, 13(3), 289–313. https://doi.org/10.1080/15205430903225571
Turcotte J., York C., Irving J., Scholl R. M., Pingree R. J. (2015). News recommendations from social media opinion leaders: Effects on media trust and information seeking. Journal of Computer-Mediated Communication, 20(5), 520–535. https://doi.org/10.1111/jcc4.12127
Valentino N. A., Hutchings V. L., Banks A. J., Davis A. K. (2008). Is a worried citizen a good citizen? Emotions, political information seeking, and learning via the Internet. Political Psychology, 29(2), 247–273. https://doi.org/10.1111/j.1467-9221.2008.00625.x
Yuan S., Besley J. C., Lou C. (2018). Does being a jerk work? Examining the effect of aggressive risk communication in the context of science blogs. Journal of Risk Research, 21(4), 502–520. https://doi.org/10.1080/13669877.2016.1223159
Yuan S., Ma W., Besley J. C. (2019). Should scientists talk about GMOs nicely? Exploring the effects of communication styles, source expertise, and preexisting attitude. Science Communication, 41(3), 267–290. https://doi.org/10.1177/1075547019837623

Biographies

Sedona Chinn is an assistant professor in the Life Sciences Communication department at the University of Wisconsin–Madison. Her research focuses on science communication, political communication, and media effects.
 
P. Sol Hart is an associate professor in Communication and Media and the Program in the Environment at the University of Michigan. Dr. Hart specializes in environmental, science, and risk communication.

Supplementary Material

Supplemental material for this article is available online via Figshare.