
Theories of Public Opinion Change Versus Stability and their Implications for Null Findings


Published on Oct 24, 2020

The Study of Opinion Change and the Problem of Null Findings

There is broad consensus among pollsters and scholars that measures of public opinion are influenced by factors such as question wording or order, interviewer-respondent dynamics, or other features of the context in which the survey takes place (Dillman et al., 2009; Groves et al., 2009; Lepkowski et al., 2007; Tourangeau, Couper, & Conrad, 2004; Tourangeau & Smith, 1996).  However, scholars who work in different social science disciplines tend to interpret the meaning of public opinion’s mutability in different ways.  One of the oldest theoretical paradigms in survey research, the “total survey error” perspective (TSE), rests upon an implicit positivist assumption that people possess “true” opinions about social, political, and economic issues, and that it is the researcher’s job to design a survey that minimizes exogenous influences so as to maximize the likelihood of measuring the respondent’s true response (Groves, 2004).  According to a strict reading of the TSE perspective, changes in measured opinions that are attributable to survey design or method are errors; they interfere with a researcher’s ability to gather a valid measure of the respondent’s opinion.  Researchers who work within the “cognitive aspects of survey methodology” paradigm (CASM) built upon the principles of TSE in order to better understand the features of human cognition that explain why survey design can influence respondents’ answers.  Numerous CASM studies elucidate a wide range of cognitive heuristics that people use (consciously and subconsciously) to understand survey questions, and proper understanding of the cognition of survey response empowers researchers to minimize survey error through design (Tourangeau, Rips, & Rasinski, 2000).
However, while the TSE and CASM paradigms have much to teach researchers about ways to minimize exogenous influences on survey response so as to garner valid measures of people’s opinions, they shed relatively little theoretical light on the reasons why people hold the opinions that they do in the first place.

In contrast, sociologists and political scientists typically devote more attention to studying exogenous influences not as sources of error, but rather as integral components of the process of opinion formation.  Sociologists emphasize the social context of communication and argue that people’s expressed opinions are a function of the “audience” with whom they are communicating at a given time (Goffman, 2009; Gorden, 1952; Steiner, 1954).  Variance in response may not be disingenuousness, but rather a reflection of the fact that people “perform” different identities in different spheres of their lives with different people (Brenner & DeLamater, 2016; Stryker, 1980).  Political scientists are perhaps the most explicitly interested in public opinion change.  They have theorized that expressed opinions are a function of the cognitive considerations about an issue that are most salient at the moment a person is queried by a pollster (Zaller, 1992).  The salience and accessibility of considerations are themselves shaped by the way that politicians and the media “frame” debates about issues (Chong & Druckman, 2007).  Framing theory complements the CASM paradigm in that it provides theoretical guidance to better understand how and when exogenous forces influence the cognitive heuristics people use to form opinions (Kahneman, 2011).

What these different theoretical paradigms share in common is a focus on opinion change.  Survey methodologists study the causes of response variability in order to better isolate and prevent them through design (Groves et al., 2009).  Sociologists seek to understand the ways that issue framing and intergroup interactions can mobilize groups into collective action (Benford & Snow, 2000).  Political scientists study how framing can shift individual and mass policy preferences, which often leads to electoral turnover and/or policy change (Baumgartner & Jones, 2009; Chong & Druckman, 2007).  Across disciplines, many scholars of public opinion employ survey experiments to test how methodological design differences and/or exposure to alternative frames and information affect expressed opinion (Druckman, Green, Kuklinski, & Lupia, 2006).  These literatures are built upon empirical evidence that exposure to systematic variation in survey instruments causes variation in measured opinion.

However, what is the researcher to do if the experimental treatment causes no significant differences between participants in different groups?  Null results are generally considered to be the most dreaded of all outcomes of primary data collection, for we know that studies in which the researcher cannot reject the null hypothesis are more likely to remain unpublished than those that support hypotheses of change (i.e., significant effects) (Franco, Malhotra, & Simonovits, 2014; Gerber & Malhotra, 2008a, 2008b).  However, we argue that the absence of change is potentially just as interesting and meaningful as a significant treatment effect if the null results are compatible with theories of opinion stability.

Theorizing Opinion Stability: Finding Meaning in the Null

To consider opinion stability, we turn our attention away from the exogenous factors, like survey design characteristics or political messages, that may influence opinion and turn our attention toward the nature of the opinions themselves that we seek to measure.  In a seminal work, Converse (1964) posited that many people possess “nonattitudes” about social and political issues; they possess little information and care even less about many topics queried by pollsters.  Many theories of opinion change hearken back to Converse’s theory because they specify the conditions under which weakly-held opinions are vulnerable to outside influence.  However, following Converse, many scholars sought to “redeem” the average citizen and determine whether or not people ever hold strong, stable opinions about public issues.

As one example, the political scientists Edward Carmines and James Stimson (1980) differentiated between “easy” and “hard” issues.  They argued that hard issues involve complicated, technical questions, frequently related to public policies about which the average citizen knows little.  Due to their complexity, it is relatively challenging for citizens to form opinions about hard issues, and those opinions are weaker and more vulnerable to influence by framing.  Easy issues, in contrast, are largely symbolic in nature, and they address topics that are familiar to most people.  Whereas people have to draw upon new, unfamiliar information to form an opinion about hard issues, they only need to rely upon their emotions, personal beliefs, and “gut responses” to form opinions about easy issues (Carmines & Stimson, 1980, pg. 78).  As such, opinions about easy issues tend to be more strongly-held, which makes them more resistant to manipulation.  Likewise, survey methodologists have tested whether attitude strength moderates survey design effects; though results are not uniform (Krosnick & Schuman, 1988), the bulk of the evidence indicates that people’s strongly-held opinions are more stable and less vulnerable to influence than weakly-held opinions (Howe & Krosnick, 2017).  This includes evidence that strongly-held opinions are relatively resistant to the types of stimuli of interest to political scientists (Druckman & Nelson, 2003; Haider-Markel & Joslyn, 2001; Lecheler, de Vreese, & Slothuus, 2009).

Viewed in light of these studies, null experimental results may indicate not a failed hypothesis, but rather plausible evidence that the issue under study is one about which it is easy for people to form opinions, and possibly one about which they feel strongly.  Under such conditions, people’s resistance to influence is just as theoretically meaningful as the discovery of cues that can shift their opinions through a significant treatment effect.  However, without the proper use of theory, null results provide no clarity.  Survey methodologists have devoted significant attention to understanding how best to measure attitude strength; they have discovered that it is not a single construct.  Rather, “strength” encompasses many different attitude dimensions, such as importance, certainty, and accessibility (Howe & Krosnick, 2017).  But this body of knowledge largely fits within the CASM paradigm.  It teaches us the cognitive aspects of strongly- vs. weakly-held attitudes, but it provides somewhat less guidance as to why a person would perceive some issues to be more important than others, or why he or she would feel ambivalence.

In this chapter, we argue that sociological and political science theories provide guideposts to understand why people feel more strongly about some issues than others because these theories explain how people formulate their beliefs within social and political context.  They remind survey researchers to consider the full social and political meaning of the questions they ask participants, and they could help researchers anticipate which questions might be most vulnerable vs. resistant to treatments and survey design effects.  Equipped with theories of both change and stability, a researcher should be prepared to find meaning in any set of empirical findings.

The Case: Media Coverage of Police-Civilian Interactions during an Era of Protest

To provide an example of the balanced use of theories of change and stability, we present the results of a survey experiment designed to test the effect of exposure to media images of police on public attitudes toward law enforcement.  A series of recent, fatal encounters between police officers and people of color reignited long-simmering tensions between law enforcement and African American communities.  First the 2014 death of Michael Brown in Ferguson, Missouri and then the 2015 death of Freddie Gray in Baltimore, Maryland sparked weeks of mass public protests and sporadic vandalism and violence in their respective cities (Cobbina, 2019; Melley, 2014; Sappenfield, 2014).  Many commentators drew parallels between the Ferguson and Baltimore protests and the “long, hot summers” of the 1960s that were likewise characterized by numerous urban uprisings sparked by perceived police mistreatment of African Americans (Kennedy & Schuessler, 2014; National Advisory Commission on Civil Disorders & Kerner, 1968; Yokley, 2015), but the contemporary incidents received even more extensive media coverage due to the evolution of technology.  Journalists and civilians used smart phone cameras to provide visual documentation of police confronting protesters in full riot gear, often with guns drawn, sometimes from atop armored vehicles (Gately & Stolberg, 2015; Thorsen & Giegerich, 2014).  Walter Olson (2014) of the Cato Institute commented, “The dominant visual aspect of the [Ferguson] story…has been the sight of overpowering police forces confronting unarmed protesters who are seen waving signs or just their hands.”  Senator Rand Paul wrote in Time Magazine (2014), “When you couple this militarization of law enforcement with an erosion of civil liberties and due process that allows the police to become judge and jury…we begin to have a very serious problem on our hands.  Given these developments, it is almost impossible for many Americans not to feel like their government is targeting them.”

In so many words, Olson and Paul hypothesized that seeing pictures and video of heavily-armed police officers confronting unarmed, peaceful protesters would undermine people’s trust in law enforcement and government.  Their intuition was consistent with two bodies of theory that predict opinion change.  First is framing theory; seeing images of hostile confrontations between police and civilians could activate people’s cognitive considerations of police misconduct and biased treatment of people of color (Zaller, 1992).  Second is procedural justice theory; Tyler (2006) theorized that the legitimacy of legal authorities depends upon whether or not the public believes that those authorities act in a fair manner, consistent with due process and equal treatment under the law.  According to both theories, triggering thoughts of police misconduct would cause a reduction in people’s trust in the police, whereas triggering thoughts of fair, effective policing would cause an increase in people’s trust.  Prior empirical studies provide some support for these hypotheses.  Scholars have found that news consumption is significantly related to public opinion about the police (Callanan & Rosenberger, 2011; Dowler & Zawilski, 2007), and specific coverage of police misconduct, brutality, or controversy is associated with decreased public satisfaction with and confidence in the police (Lasley, 1994; Weitzer, 2002; Weitzer & Tuch, 2005).

In contrast, we argue that a third body of theory predicts that people’s opinions about the police will be relatively stable and immune to influence, even when presented with cues about police misconduct and use of force.  Jonathan Jackson and his colleagues (Jackson & Bradford, 2009; Jackson & Sunshine, 2007) argue that the police hold a symbolic role in society.  They draw upon Durkheim to argue, “When people think about the police and their ‘crime-fighting’ activities, they also think about what ‘crime’ stands for (erosion of norms and social ties that underpin group life) and what ‘policing’ stands for (organized defense of the norms and social ties). Individuals who are concerned about long-term social change, who see the modern world as too individualized and too atomized, then look to the police to defend a sense of order. . . .” (Jackson & Bradford, 2009, pg. 499).  In other words, people’s attitudes toward the police are deeply intertwined with their beliefs about morals and the state of society because they view the police as symbolic guardians of morality and order.  A few empirical studies confirm that a relationship exists between opinion about the police and opinions about morals and the state of society (e.g., Jackson & Bradford, 2009; Jackson & Sunshine, 2007; Wozniak, 2016).  Some aspects of contemporary events are consistent with this “neo-Durkheimian” perspective on policing.  Many politically conservative commentators criticized the Black Lives Matter movement not by denying activists’ right to demand equal protection under the law, but rather by arguing that BLM activists disrespect the police (Alcindor, 2016; Holley, 2016; Vitale, 2016).  The degree to which a person “supports the police” became the barometer by which a person’s support for social order was judged.

Though the social, political, and economic affairs of nations are complicated, it is not difficult for people to assess how they feel about the state of society.  If the police are a symbolic proxy for the moral stability of society, then it stands to reason that it would be easy for people to form opinions about the police – they need only assess how they feel about social order at the present time.  According to the typology of Carmines & Stimson (1980), we would then expect that seeing pictures of police interactions with civilians would not change people’s opinions about the police; they would only reinforce people’s extant positive or negative evaluations of law enforcement.

Design of the Present Study

It is possible to categorize police tactics along a spectrum defined by the police-civilian relationship (Weisburd & Eck, 2004).  On one side of the spectrum is community policing, which encourages officers to develop collegial relationships and maintain close communication with civilians in the communities they serve.  Citizen satisfaction with the police is a primary concern in the community policing model.  On the other side of the spectrum are crime control tactics that emphasize intense surveillance and frequent arrests.  One example of these tactics is stop-and-frisk policing, in which officers make large numbers of stops and arrests for minor offenses in order to deter more serious crimes.  The confrontations between police and protesters in Ferguson and Baltimore likely stand at the furthest point on the hostile side of the spectrum.  Critics contend that aggressive crime control tactics create a contentious, “us vs. them” relationship between police and community members.

We designed a self-administered, online survey experiment to test whether exposure to images of police-civilian interactions affects public opinion about the police.  We chose three pictures intended to represent the spectrum of police-civilian relations.  The first picture was an image of two male police officers in riot gear atop an armored vehicle pointing a rifle at a group of protesters in the street with their hands raised; this was a picture taken during the 2014 protests in Ferguson, Missouri following the shooting of Michael Brown.  We chose this image to represent hostile, “militarized” conflict between police and civilians.  The second picture showed two smiling police officers (one male, one female), one of whom was giving a “high five” to a civilian seated on his porch.  We chose this image to represent positive, “community policing-style” interaction between police and civilians.  Finally, the third picture showed two male police officers patting down two male civilians who had their hands pressed against a wall.  We chose this image to represent a confrontational, “stop-and-frisk-style” interaction between police and civilians.1 

The experimental manipulations were embedded on the first, introductory page of the survey.  Above the survey’s title (“a study of public opinion about government and police”), participants who were randomly-assigned to an experimental condition saw a banner that contained three pictures.  Two of the pictures were constant across all three groups; these were a picture of Members of the U.S. House of Representatives delivering remarks at a podium emblazoned with the House seal and a picture of members of the U.S. National Guard in camouflage fatigues handing out care packages to a mother and child.  These constant images of government professionals were designed to slightly obscure the study’s central focus on images of the police in order to minimize the possibility that respondents would guess the intent of the pictures and reply as they thought the surveyors desired.  The third image, placed in the center of the banner, contained one of the three police-civilian interaction pictures.  Respondents were randomly assigned to receive one of the three treatment pictures or to receive a control condition that contained only the title text, no pictures.

We chose actual pictures of police and civilians found through Google image searches.  This means that the treatment stimuli reflect the kinds of pictures that people routinely see online and on the news.  As such, the treatments possess a degree of external validity, but this comes at the cost of precision.  Since the pictures each contain numerous elements, we cannot specify precisely which facet of the images may generate a framing effect on people’s opinions about the police.  Due to the complicated racial dynamics at the center of the policing debate (Peffley & Hurwitz, 2010), we were careful to ensure that the pictures represented racial diversity as much as possible.  No picture contained solely white people, and only the community policing treatment picture was racially-homogenous; both of the officers and the civilian in the picture were African American.

We ensured that participants were properly exposed to the treatment in two ways.  First, the title screen was set on a five second delay before participants could proceed, so they could not immediately skip over the pictures.  Second, following the title screen, participants were reshown each of the three images in the title banner, one by one, and asked to briefly describe what they saw in the picture; participants in the control group proceeded directly to the survey questions.  Only about 1% of respondents in each experimental group provided no descriptions or wrote blatantly-inaccurate descriptions of the pictures they saw, indicating that respondents were successfully treated (Mutz, 2011).  Furthermore, evidence indicates that the random assignment to conditions was successful.  There were no statistically significant differences across groups in regard to respondents’ political party affiliation (χ²[12, N = 1,068] = 19.25, p = 0.08), political ideology (χ²[12, N = 1,025] = 8.46, p = 0.75), self-reported degree of attention they paid to the news in the previous 7 days (χ²[6, N = 1,078] = 6.82, p = 0.34), or race (χ²[12, N = 1,100] = 5.73, p = 0.93).
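Balance checks like these can be carried out with a standard chi-square test of independence on a condition-by-category table of counts. The sketch below illustrates the procedure; the counts are invented for illustration and do not reproduce the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are the four experimental conditions,
# columns are respondent categories (e.g., party affiliation), cells are counts.
counts = np.array([
    [60, 120, 65, 25],   # control
    [55, 118, 70, 27],   # community policing
    [62, 115, 60, 30],   # stop-and-frisk
    [58, 122, 63, 28],   # militarized
])

chi2, p, dof, expected = chi2_contingency(counts)
# With 4 conditions and 4 categories, dof = (4-1) * (4-1) = 9.
# A large p-value (p > .05) is consistent with successful randomization.
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.3f}")
```

The same call, applied separately to each covariate, yields the series of chi-square statistics reported above.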


According to the theories we identify as theories of change in relation to the present case (Tyler, 2006; Zaller, 1992), we hypothesize that respondents’ confidence in the police will vary across the experimental conditions in the following manner:

Hchange: Community Policing > Control Group > Stop & Frisk > Militarized Policing

According to the theories we identify as theories of stability in relation to the present case (Carmines & Stimson, 1980; Jackson & Bradford, 2009; Jackson & Sunshine, 2007), we hypothesize that exposure to different images of police-civilian interactions will not cause significant differences in respondents’ confidence in the police across the experimental conditions:

Hstability: Community Policing = Control Group = Stop & Frisk = Militarized Policing


We recruited respondents from Qualtrics’ national, online panel to participate in a survey to “study people’s opinions about how well the government and the police are addressing problems facing the nation.”  The survey contained a variety of questions measuring respondents’ attitudes toward the police, their local government, and the federal government.  These were original questions that we wrote for the purpose of this study.  The survey was fielded in late April 2016.  We procured responses from 1,100 participants.  Since this is a nonprobability sample, we do not claim that the point estimates of respondents’ attitudes toward the police are generalizable to the national population (Baker et al., 2010).  Rather, we were interested in the causal relationship between image exposure and expressed opinion, and studies find that experiments administered to opt-in panel samples generate results that are similar to experiments administered to randomly-selected samples (e.g., Weinberg, Freese, & McElhattan, 2014; Yeager & Krosnick, 2012).

Still, there is value in understanding who these participants were.  Our sample was 70.6% white, 21.1% black, 4.6% Asian American, and 1.3% Pacific Islander.  Twenty percent of the sample identified as Hispanic.  The average age of participants was 45 years old with a standard deviation of 17 years.  Males comprised 49.9% of the sample, females comprised 49.8%, and transgender individuals comprised 0.3%.  In regard to education, 17.6% completed high school or less formal education, 58.5% possessed some college education or a bachelor’s degree, and 23.9% possessed some graduate education or a graduate degree.  In regard to annual household income, 30.4% of respondents made less than $35,000, 36.0% made between $35,000 and $75,000, and 33.7% made greater than $75,000.  Republicans comprised 21.6% of the sample, Democrats 46.0%, Independents 23.9%, 0.9% identified with another party, and 7.9% said they had no partisanship preference.

Dependent Variables

The first two dependent variables were preceded by the prompt, “Now, we’d like to talk about your perceptions of the police.  First, we’d like to talk about the police in your own community.”

1.      Evaluation of Local Police:  This scale variable was comprised of the answers to four questions that shared a common root:  “Do you think that your local police do a good job or a bad job in…” 1) “…dealing with problems that really concern people?” 2) “…preventing crime?” 3) “…responding to people after they have been victims of crime?” 4) “…maintaining order on the streets and sidewalks?”  All four questions shared the same response scale: a very bad job, a somewhat bad job, a slightly bad job, a slightly good job, a somewhat good job, a very good job.2 A factor analysis with orthogonal rotation revealed that these four items loaded onto a single factor with an Eigenvalue of 2.92.  The factor loadings of the items were 0.86, 0.86, 0.85, and 0.84, respectively.  We combined these items into an additive scale with a range of 4 to 24; higher values indicate stronger perception that the respondent’s local police do a “good job” at their duties.  This scale had a Cronbach’s alpha of 0.92.

2.      Local Police Misconduct:  This scale variable was comprised of the answers to four questions, three of which shared a common root:  “How often do you think your local police officers…” 1) “…stop people on the streets without good reason?”  2) “…when talking to people, use insulting language against them?”  3) “…use excessive force (more force than is necessary under the circumstances) against people?”  The fourth question was, 4) “How common do you think corruption (such as taking bribes or involvement in the drug trade) is in your local department?”  Questions one through three shared the same response scale: never, on occasion, fairly often, very often.  The response options for the fourth question were very uncommon, somewhat uncommon, fairly common, very common.  A factor analysis with orthogonal rotation revealed that these four items loaded onto a single factor with an Eigenvalue of 2.37.  The factor loadings of the items were 0.77, 0.83, 0.84, and 0.61, respectively.  We combined these items into an additive scale with a range of 4 to 16; higher values indicate stronger perception that the respondent’s local police engage in improper behavior more frequently.  This scale had a Cronbach’s alpha of 0.87.

The next three dependent variables were preceded by the following prompt, “Now, we’d like to talk about your perceptions of the police in general across the country.”

3.      Evaluation of National Police: This scale variable was comprised of the answers to four questions that shared a common root:  “Do you think that the police in general do a good job or a bad job in…”  1) “…dealing with problems that really concern people?” 2) “…preventing crime?” 3) “…responding to people after they have been victims of crime?” 4) “…maintaining order on the streets and sidewalks?”  All four questions shared the same response scale: a very bad job, a somewhat bad job, a slightly bad job, a slightly good job, a somewhat good job, a very good job.  A factor analysis with orthogonal rotation revealed that these four items loaded onto a single factor with an Eigenvalue of 2.94.  The factor loadings of the items were 0.86, 0.85, 0.87, and 0.86, respectively.  We combined these items into an additive scale with a range of 4 to 24; higher values indicate stronger perception that the police, in general, do a “good job” at their duties.  This scale had a Cronbach’s alpha of 0.92.

4.      National Police Misconduct: This scale variable was comprised of the answers to five questions, four of which shared a common root:  “How often do you think police in general…” 1) “…stop people on the streets without good reason?”  2) “…when talking to people, use insulting language against them?”  3) “…use excessive force (more force than is necessary under the circumstances) against people?” 4) “…stop and question or frisk people in communities across the country?”  The fifth question was, 5) “How common do you think corruption (such as taking bribes or involvement in the drug trade) is in police departments across the country?”  Questions one through four shared the same response scale: never, on occasion, fairly often, very often.  The response options for the fifth question were very uncommon, somewhat uncommon, fairly common, very common.  A factor analysis with orthogonal rotation revealed that these five items loaded onto a single factor with an Eigenvalue of 3.03.  The factor loadings of the items were 0.83, 0.81, 0.82, 0.80, and 0.62, respectively.  We combined these items into an additive scale with a range of 5 to 20; higher values indicate stronger perception that the police, in general, engage in improper behavior more frequently.  This scale had a Cronbach’s alpha of 0.89.

5.      National Police Bias:  This scale variable was comprised of the answers to four questions:  1) In general, do you think that police across the country treat wealthy people better, the same, or worse than poor people?  2) In general, do you think the police treat white people better, the same, or worse than black people?  3) In general, do you think the police treat white people better, the same, or worse than Hispanics?  4)  In general, do you think the police treat English-speaking people better, the same, or worse than non-English speaking people?  All four items shared variations of the same, five point response scale, which was, “Treat [Group A] much worse/somewhat worse/the same as/somewhat better than/much better than [Group B].”  A factor analysis with orthogonal rotation revealed that these four items loaded onto a single factor with an Eigenvalue of 2.37.  The factor loadings of the items were 0.65, 0.87, 0.86, and 0.69, respectively.  We combined these items into an additive scale with a range of 4 to 20; higher values indicate stronger perception that the police, in general, treat a socially-advantaged group in society better than a socially-disadvantaged group.  This scale had a Cronbach’s alpha of 0.85.
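Each of the scales above is an additive sum of its items, with internal consistency summarized by Cronbach's alpha. A minimal sketch of that computation, using the standard variance-based formula and made-up item responses (the values below are hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) array.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 items on a 1-6 response scale.
responses = np.array([
    [5, 5, 6, 5],
    [2, 1, 2, 2],
    [4, 4, 3, 4],
    [6, 6, 6, 5],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
])

scale = responses.sum(axis=1)       # additive scale; range 4-24 for four items
alpha = cronbach_alpha(responses)   # high when items covary strongly
```

Because these invented items covary strongly, alpha comes out high, mirroring the 0.85-0.92 values reported for the five scales.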

Data Quality Check

In order to verify the quality of these data, we created dummy variables that identified respondents who gave the same response to an entire string of questions that comprised each dependent variable, as described above.  Straight-lined responses are problematic if they are an indication that respondents sped through the survey without truly paying attention or meaningfully answering the questions.  The number of “straight-line” responses ranged from a low of 1 (a respondent who said that the police treat wealthy people, white people, and English-speakers “much worse” than poor people, black people, Hispanic people, and non-English speakers) to a high of 277 (respondents who said that “on occasion” police in general across the country engage in misconduct behaviors).  Of the 25 possible response options across all the dependent variable questions, only 5 prompted more than 100 respondents to straight-line their responses.

Even though this analysis reveals that a majority of responses to the dependent variable questions were heterogeneous rather than straight-lined, to err on the side of caution, we also identified respondents who fell below the fifth percentile (about 6 minutes) or above the ninety-fifth percentile (about 34 minutes) of the completion time distribution (the mean completion time was about 20 minutes, and the median time was about 12 minutes).  We assume that the most inattentive or distracted respondents were likely contained within these groups of the fastest and slowest respondents.  We dropped these 105 respondents and reran all analyses.  The substantive results were unchanged.  Altogether, these sensitivity analyses give us greater confidence that our results are not a function of invalid responses in the data.
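The two quality checks described in this section (flagging straight-lined item batteries and trimming the tails of the completion-time distribution) can be sketched in pandas. The column names and values below are hypothetical stand-ins for the study's variables:

```python
import pandas as pd

# Hypothetical survey data: one four-item battery plus completion time.
df = pd.DataFrame({
    "eval1": [5, 3, 4, 4, 6],
    "eval2": [5, 2, 4, 5, 6],
    "eval3": [5, 3, 3, 4, 6],
    "eval4": [5, 4, 4, 4, 6],
    "minutes": [2.0, 14.5, 11.0, 12.3, 55.0],
})

items = ["eval1", "eval2", "eval3", "eval4"]

# Flag straight-liners: identical answers to every item in the battery.
df["straight_lined"] = df[items].nunique(axis=1).eq(1)

# Drop respondents in the fastest and slowest tails of the
# completion-time distribution (5th and 95th percentiles here).
lo, hi = df["minutes"].quantile([0.05, 0.95])
trimmed = df[df["minutes"].between(lo, hi)]
```

In this toy data, the first and last respondents straight-lined the battery, and the 2-minute and 55-minute completions fall outside the trimmed window.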


Figure 1 presents the average of respondents’ scores on each of the five dependent variable scales across the experimental groups.  It clearly shows that the treatment had no significant impact on respondents’ answers; the average values are nearly identical across groups.  A series of one-way, between-subjects ANOVA tests confirms no significant differences between groups in regard to evaluation of local police [F(3, 1044)=1.20, p=0.31], local police misconduct [F(3, 1025)=0.30, p=0.83], evaluation of national police [F(3, 1048)=0.08, p=0.97], national police misconduct [F(3, 1041)=0.24, p=0.87], or national police bias [F(3, 1040)=0.14, p=0.94].  We reiterate that virtually all respondents passed the manipulation check by providing accurate descriptions of the pictures to which they were exposed, which means that these results are not evidence of a failure to treat.  Evidence also indicates that randomization was successful.  The results genuinely indicate that exposure to these images of police-civilian interactions did not significantly influence respondents’ attitudes toward local or national police.  We cannot reject the null hypothesis.[3]

Figure 1. Attitudes Toward Police Across Experimental Image Conditions

Note: The dependent variable scales are not standardized, so comparisons should only be made between groups within each dependent variable, not across dependent variables.
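The one-way, between-subjects F tests reported above can be computed from group sums of squares. The sketch below is a generic illustration with hypothetical scale scores and condition labels; it is not the authors' code or data:

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way, between-subjects ANOVA.

    groups: list of lists, one per experimental condition.
    """
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-groups sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups sum of squares: individual scores vs. their group mean
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical scale scores in a control condition and three image conditions
control    = [10, 12, 11, 13]
condition1 = [11, 12, 10, 14]
condition2 = [12, 11, 13, 10]
condition3 = [10, 13, 12, 11]
f_stat = one_way_anova_f([control, condition1, condition2, condition3])
```

When the group means are nearly identical, as in the results reported above, the between-groups variance is tiny relative to the within-groups variance and F falls well below conventional critical values.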

We also conducted a secondary analysis.  As discussed earlier in this chapter, most controversies about policing in the United States center on the relationship between police and communities of color.  Unsurprisingly, racial minorities express more critical opinions about police than do whites (Graziano & Gauthier, 2019; Peck, 2015; Peffley & Hurwitz, 2010).  As such, we tested whether the effect of image exposure varied across racial groups (full results available upon request).  For the sake of parsimony, we restricted this analysis to non-Hispanic white, non-Hispanic black, and Hispanic respondents; these three racial/ethnic groups made up over 90% of the sample.  We found no significant interactions between treatment condition and race in regard to local police evaluation, local police misconduct, national police evaluation, or national police misconduct.  We found two significant interactions in regard to assessment of national police bias.  Hispanic respondents who saw the image of the militarized police response in Ferguson or the image of a stop and frisk scored significantly higher on the scale of perceived police bias than Hispanic respondents in the control group (p < .05).  However, this interaction effect was no longer significant once a Bonferroni correction for multiple comparisons was applied.  Given the instability of this finding, and given the overall lack of interracial differences in treatment effects across the dependent variables, we still believe that the most valid and reliable interpretation of these findings is that image exposure did not substantively change people’s expressed opinions about the dimensions of policing assessed in this study.
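The Bonferroni adjustment mentioned above simply tests each raw p-value against alpha divided by the number of comparisons. A minimal sketch with hypothetical p-values (chosen only to illustrate how a nominally significant result can fail the corrected threshold):

```python
def bonferroni_survives(p_values, alpha=0.05):
    """Flag which p-values remain significant after a Bonferroni correction."""
    m = len(p_values)  # number of comparisons
    return [p < alpha / m for p in p_values]

# Hypothetical raw p-values from six interaction terms
raw_p = [0.03, 0.04, 0.20, 0.45, 0.61, 0.77]
survives = bonferroni_survives(raw_p)
```

With six comparisons the corrected threshold is 0.05/6 ≈ 0.0083, so a raw p-value of .03 that clears the nominal .05 level no longer counts as significant.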

Opinion Stability: The Police as Symbols of Social Order  

As quoted in the introduction to this chapter, Walter Olson and Rand Paul expressed concern that seeing images of the hostile, violent confrontations between police officers and peaceful protesters would shake people’s confidence in law enforcement.  Our results do not support their conjecture.  The protests and confrontations received extensive media coverage and fueled heated rhetoric in the political sphere (Alcindor, 2016; Arora, Phoenix, & Delshad, 2019).  Some research suggests that people’s attitudes toward the police and/or the Black Lives Matter movement polarized and mobilized voters to express even stronger support for Hillary Clinton or Donald Trump in the 2016 presidential election (Drakulich, Hagan, Johnson, & Wozniak, 2017).  The issue of police-community relations (especially with African Americans) certainly held emotional and political weight during the time period we gathered these data, yet seeing pictures designed to evoke the contemporary debate did not significantly alter our respondents’ confidence in the police.  Rather than supporting the theories of opinion change that were implicit in the concerns expressed by Olson and Paul, our results are consistent with the theoretical perspective that people’s opinions about the police are intertwined with their opinions about morals, values, and the state of society – beliefs that are deep-rooted and resistant to easy change.  Our results support the argument that opinions about the police should be classified as an easy issue according to Carmines & Stimson's (1980) typology.

Why are our findings inconsistent with prior evidence that news consumption, specifically coverage of cases of police brutality, is significantly related to variation in public opinion about the police (Callanan & Rosenberger, 2011; Dowler & Zawilski, 2007; Lasley, 1994; Weitzer, 2002; Weitzer & Tuch, 2005)?  To explain the discordance between these prior studies and our current findings, we posit that there may be a meaningful difference in the stimuli under study.  Though many critics (predominantly of a liberal persuasion) alleged that the militarized police responses in Ferguson and Baltimore were examples of police misconduct, it is highly plausible that many Americans would disagree with that perspective.  If one perceives mass protest (and, it must be acknowledged, some acts of vandalism and violence amidst predominantly peaceful protests) as acts of disorder that must be contained and suppressed, then one would consider the police’s aggressive response to be entirely appropriate – not an example of misconduct, but rather an example of the proper exercise of duty under difficult circumstances.[4] Collective police response to protest is qualitatively different from the actions of individual officers that cause great physical harm or death to civilians.  The latter cases fit more easily into the type of individualistic thinking that perpetuates systemic racism and white hegemony (Bonilla-Silva, 2017); white people can condemn the actions of individual officers if they are clearly egregious while continuing to support the overall legitimacy of the police, thereby dismissing the more systemic concerns of Black Lives Matter.  Even our treatment picture of the militarized police reaction in Ferguson may not have risen to the level of misconduct in our participants’ minds.  In that light, our null findings are less surprising.  
Thus, we cannot rule out the possibility that other types of cues designed to evoke considerations of police misconduct and wrongdoing would cause significant changes in people’s confidence in the police.  This would be a fruitful avenue of research to test in future framing experiments.

Implications for Survey Research

Given discussions of publication bias that favors significant results (Franco et al., 2014; Gerber & Malhotra, 2008a, 2008b), scholars dread few things more than completing a study only to discover that the evidence is insufficient to reject the null hypothesis.  However, a single-minded focus on discovering statistically significant relationships implicitly favors theories of change.  Theories of stability, in contrast, are supported precisely when a treatment fails to cause an effect and the null hypothesis cannot be rejected.  A scholar who relied exclusively on the theories that emphasize mutability in public opinion (Converse, 1964; Zaller, 1992) would be stumped by an experiment that failed to produce opinion differences between groups of participants.  However, other scholars remind us that human beings are not blank slates (Haider-Markel & Joslyn, 2001).  Even though people do not spend a great deal of time thinking about many social or political issues, most still hold opinions about some issues that are meaningful or interesting to them.  When scholars study public opinion about issues that are deeply intertwined with people’s beliefs and values, they should expect to find that even “ignorant voters” are more resistant to outside influence than we might expect (Carmines & Stimson, 1980; Druckman & Nelson, 2003).

A deep consideration of Carmines and Stimson’s typology of easy versus hard issues calls on us to recognize that not all public opinion is equal.  We often think of public opinion as the dependent variable – something that is subject to influence by social and psychological forces that interest us.  Carmines and Stimson reminded us that some opinions are less subject to influence than others – an insight for which survey researchers have produced empirical support (Howe & Krosnick, 2017).  However, properly incorporating the insight that attitude strength matters into survey design is more complicated.  As discussed earlier, methodologists have shown that attitude strength is a multi-dimensional concept, which means that a researcher could be faced with the daunting task of writing numerous questions for each construct in the survey: one question to measure an opinion and several follow-up questions to measure the various dimensions of the strength with which the respondent holds that opinion.  Such an approach would very quickly consume the limited number of questions surveyors can reasonably expect respondents to answer in a single sitting.  There is a tradeoff between quality and quantity in regard to the measurement of constructs in a survey.

It is here that sociological and political science theories may be useful.  A careful use of theory can help researchers plausibly classify survey topics as easy or hard (or strong vs. weak) a priori.  With theoretical guidance, a researcher could either selectively target which constructs most need additional questions to measure attitude strength, or she could save the space consumed by strength questions by making a theory-based argument as to why a particular construct should be assumed to be strongly- or weakly-held.  As we showed in our case study, we think that Durkheim is particularly useful in this regard since he postulated a link between morals, values, and social structure.  If a researcher can make the case that a particular construct is likely a proxy for a person’s deeper feelings about the state of values in society, it is plausible to assume that said attitude will be strongly-held (for a similar psychological perspective, see Haidt, 2012).  A similar, compatible political science theory holds that people form opinions about policy proposals based upon an assessment of the moral and social value of the people in society who will most benefit or be most harmed by said proposal (Schneider & Ingram, 1993).  Regardless of the complexity of actual policy implementation, it is easy for people to judge that they want policies to benefit small business owners and harm pedophiles, for example; it would not be surprising to find that frames cannot sway these opinions all that much.

In conclusion, the TSE and CASM paradigms have made invaluable contributions to survey research methodology, but it would be a mistake for scholars to focus so much on the process of crafting valid measures that they neglect to thoroughly frame their whole research project around theory.  By drawing upon public opinion theories of both stability and change from across the social sciences, scholars will be prepared to draw substantive meaning out of both statistically significant and statistically insignificant findings.  Discovering the topics over which elites have little power to sway the opinions of the mass public is just as interesting as identifying frames that cause significant opinion change.


[1] The treatment images may be viewed on the following websites, which are the sources from which we copied the pictures for use in our survey experiment: militarized policing (Topaz, 2014), community policing (Mirko, 2013), stop and frisk (Post Editorial Board, 2015).  The original stop and frisk image actually depicted an officer training exercise, and the officers were carrying bright blue model side arms.  We digitally altered the picture in order to color the handgun hilts black so that they would look like regular guns, thereby depicting a typical stop and frisk.

[2] All questions in this survey included a “prefer not to answer” response option.  We dropped these responses prior to the present analyses.  Only between 31 and 48 respondents chose this option in each of the dependent variable questions.

[3] Two additional pieces of evidence strengthen our conclusion of null findings.  First, we also conducted a series of two one-sided tests of equivalence for each of the dependent variables following the recommendations of Lakens (2017).  Equivalence tests provide an additional check against the possibility that non-significant treatment effects are a Type II error; essentially, they test whether a difference between two groups is large enough to be meaningful according to a standard set by the researcher.  This validity check confirmed that the differences in police attitudes across treatment groups are statistically equivalent; these results are available from the first author upon request.  Second, in a separate analysis, we did find that the treatment caused significant differences in respondents’ expressed presidential vote preference (Wozniak, Calfano, & Drakulich, 2019). This finding indicates that image exposure did affect some of the respondents’ expressed preferences, just not their attitudes toward the police.
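The two one-sided tests (TOST) procedure cited in this footnote can be sketched as follows. This is a generic pooled-variance implementation with hypothetical group scores and an arbitrary equivalence bound, not the authors' analysis; Lakens (2017) describes the full procedure, including choosing the bound and the critical t-value:

```python
from statistics import mean, variance
import math

def tost_t_stats(a, b, bound):
    """The two one-sided t statistics for a TOST equivalence test.

    Tests whether mean(a) - mean(b) lies inside (-bound, +bound).
    Equivalence is declared when t_lower exceeds the critical t AND
    t_upper falls below its negative (df = n_a + n_b - 2).
    """
    diff = mean(a) - mean(b)
    # Pooled variance, then standard error of the difference in means
    sp2 = ((len(a) - 1) * variance(a) + (len(b) - 1) * variance(b)) / (len(a) + len(b) - 2)
    se = math.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
    t_lower = (diff + bound) / se  # H0: diff <= -bound
    t_upper = (diff - bound) / se  # H0: diff >= +bound
    return t_lower, t_upper

# Hypothetical scale scores in two conditions, equivalence bound of 2 points
group_a = [10, 12, 11, 13, 12]
group_b = [11, 12, 10, 13, 12]
t_lower, t_upper = tost_t_stats(group_a, group_b, bound=2)
```

When the observed difference is small relative to the bound, both one-sided tests reject their respective null hypotheses, supporting a conclusion of statistical equivalence rather than merely a failure to find a difference.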

[4] Moule, Parry, and Fox (2019) present evidence that supports this interpretation.  They found that nearly 65% of respondents to their national survey expressed support for the use of police SWAT teams to respond to “instances of civil unrest.”  On the other hand, only about 7% of respondents supported the use of SWAT teams to respond to “peaceful protests.”  Their finding suggests that people judge the appropriateness of police tactics differently under different circumstances, but the nature of those circumstances may be in the eye of the beholder. See also Lockwood, Doyle, and Comiskey (2018) for complementary findings.


Alcindor, Y. (2016, August 16). Trump, rallying white crowd for police, accuses Democrats of exploiting blacks. The New York Times, p. 9.

Arora, M., Phoenix, D. L., & Delshad, A. (2019). Framing police and protesters: Assessing volume and framing of news coverage post-Ferguson, and corresponding impacts on legislative activity. Politics, Groups, and Identities, 7(1), 151–164.

Baker, R., Blumberg, S. J., Brick, J. M., Couper, M. P., Courtright, M., Dennis, J. M., … Zahs, D. (2010). AAPOR report on online panels. Public Opinion Quarterly, 74(4), 711–781.

Baumgartner, F. R., & Jones, B. D. (2009). Agendas and Instability in American Politics, Second Edition. Chicago: University of Chicago Press.

Benford, R. D., & Snow, D. A. (2000). Framing processes and social movements: An overview and assessment. Annual Review of Sociology, 26(1), 611–639.

Bonilla-Silva, E. (2017). Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in America (5th ed.). Lanham, MD: Rowman & Littlefield Publishers.

Brenner, P. S., & DeLamater, J. (2016). Lies, damned lies, and survey self-reports? Identity as a cause of measurement bias. Social Psychology Quarterly, 79(4), 333–354.

Callanan, V. J., & Rosenberger, J. S. (2011). Media and public perceptions of the police: Examining the impact of race and personal experience. Policing and Society, 21(2), 167–189.

Carmines, E. G., & Stimson, J. A. (1980). The two faces of issue voting. American Political Science Review, 74(1), 78–91.

Chong, D., & Druckman, J. N. (2007). Framing theory. Annual Review of Political Science, 10, 103–126.

Cobbina, J. E. (2019). Hands Up, Don’t Shoot: Why the Protests in Ferguson and Baltimore Matter, and How They Changed America. New York: New York University Press.

Converse, P. E. (1964). The nature of belief systems in mass publics. In D. E. Apter (Ed.), Ideology and Discontent (pp. 206–261). New York: Free Press.

Dillman, D. A., Phelps, G., Tortora, R., Swift, K., Kohrell, J., Berck, J., & Messer, B. L. (2009). Response rate and measurement differences in mixed-mode surveys using mail, telephone, interactive voice response (IVR) and the Internet. Social Science Research, 38(1), 1–18.

Dowler, K., & Zawilski, V. (2007). Public perceptions of police misconduct and discrimination: Examining the impact of media consumption. Journal of Criminal Justice, 35(2), 193–203.

Drakulich, K., Hagan, J., Johnson, D., & Wozniak, K. H. (2017). Race, justice, policing, and the 2016 American presidential election. Du Bois Review: Social Science Research on Race, 14(1), 7–33.

Druckman, J. N., Green, D. P., Kuklinski, J. H., & Lupia, A. (2006). The growth and development of experimental research in political science. American Political Science Review, 100(4), 627–635.

Druckman, J. N., & Nelson, K. R. (2003). Framing and deliberation: How citizens’ conversations limit elite influence. American Journal of Political Science, 47(4), 729–745.

Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505.

Gately, G., & Stolberg, S. G. (2015, November 16). Baltimore police assailed for response after Freddie Gray’s death. The New York Times. Retrieved from

Gerber, A., & Malhotra, N. (2008a). Do statistical reporting standards affect what is published? Publication bias in two leading political science journals. Quarterly Journal of Political Science, 3(3), 313–326.

Gerber, A., & Malhotra, N. (2008b). Publication bias in empirical sociological research: Do arbitrary significance levels distort published results? Sociological Methods & Research, 37(1), 3–30.

Goffman, E. (2009). Stigma: Notes on the Management of Spoiled Identity. Simon and Schuster.

Gorden, R. L. (1952). Interaction between attitude and the definition of the situation in the expression of opinion. American Sociological Review, 17, 50–58.

Graziano, L. M., & Gauthier, J. F. (2019). Examining the racial-ethnic continuum and perceptions of police misconduct. Policing and Society, 29(6), 657–672.

Groves, R. M. (2004). Survey Errors and Survey Costs. Hoboken, NJ: John Wiley & Sons.

Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology (2nd ed.). Hoboken, NJ: John Wiley & Sons.

Haider-Markel, D. P., & Joslyn, M. R. (2001). Gun policy, tragedy, and blame attribution: The conditional influence of issue frames. Journal of Politics, 63(2), 520–543.

Haidt, J. (2012). The Righteous Mind: Why Good People Are Divided by Politics and Religion. Knopf Doubleday Publishing Group.

Holley, P. (2016, July 18). Wisconsin sheriff: Black Lives Matter’s ‘hateful ideology’ caused police killings. Washington Post. Retrieved from

Howe, L. C., & Krosnick, J. A. (2017). Attitude strength. Annual Review of Psychology, 68(1), 327–351.

Jackson, J., & Bradford, B. (2009). Crime, policing and social order: On the expressive nature of public confidence in policing. The British Journal of Sociology, 60(3), 493–521.

Jackson, J., & Sunshine, J. (2007). Public confidence in policing: A neo-Durkheimian perspective. The British Journal of Criminology, 47(2), 214–233.

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Kennedy, R., & Schuessler, J. (2014, August 14). Ferguson images evoke civil rights era and changing visual perceptions. The New York Times, p. A14.

Krosnick, J. A., & Schuman, H. (1988). Attitude intensity, importance, and certainty and susceptibility to response effects. Journal of Personality and Social Psychology, 54(6), 940–952.

Lakens, D. (2017). Equivalence tests: A practical primer for t-tests, correlations, and meta-analyses. Social Psychological and Personality Science, 8(4), 355–362.

Lasley, J. R. (1994). The impact of the Rodney King incident on citizen attitudes toward police. Policing and Society, 3(4), 245–255.

Lecheler, S., de Vreese, C., & Slothuus, R. (2009). Issue importance as a moderator of framing effects. Communication Research, 36(3), 400–425.

Lepkowski, J. M., Tucker, N. C., Brick, J. M., Leeuw, E. D. de, Japec, L., Lavrakas, P. J., … Sangster, R. L. (2007). Advances in Telephone Survey Methodology. Hoboken, NJ: John Wiley & Sons.

Lockwood, B., Doyle, M. D., & Comiskey, J. G. (2018). Armed, but too dangerous? Factors associated with citizen support for the militarization of the police. Criminal Justice Studies, 31(2), 113–127.

Melley, B. (2014, August 31). Ferguson’s flashpoint sparks national outrage. Detroit News. Retrieved from

Mirko, M. (2013, July 3). Pictures: New Haven community policing. Retrieved October 1, 2019, from website:

Moule, R. K., Parry, M. M., & Fox, B. (2019). Public support for police use of SWAT: Examining the relevance of legitimacy. Journal of Crime and Justice, 42(1), 45–59.

Mutz, D. C. (2011). Population-Based Survey Experiments. Princeton, N.J.: Princeton University Press.

National Advisory Commission on Civil Disorders, & Kerner, O. (1968). Report of the National Advisory Commission on Civil Disorders. Washington, DC: US Government Printing Office.

Olson, W. (2014, August 13). Police militarization in Ferguson—and your town. Retrieved October 17, 2019, from Cato Institute website:

Paul, R. (2014, August 14). We must demilitarize the police. Time Magazine. Retrieved from

Peck, J. H. (2015). Minority perceptions of the police: A state-of-the-art review. Policing: An International Journal of Police Strategies & Management.

Peffley, M., & Hurwitz, J. (2010). Justice in America: The Separate Realities of Blacks and Whites. New York: Cambridge University Press.

Post Editorial Board. (2015, June 1). How many New Yorkers must die before the mayor brings back stop-and-frisk? Retrieved October 1, 2019, from

Sappenfield, M. (2014, November 30). Can Ferguson spark new civil rights movement? How times have changed. Christian Science Monitor. Retrieved from

Schneider, A., & Ingram, H. (1993). Social construction of target populations: Implications for politics and policy. American Political Science Review, 87(2), 334–347.

Steiner, I. D. (1954). Primary group influences on public opinion. American Sociological Review, 19(3), 260–267.

Stryker, S. (1980). Symbolic Interactionism: A Social Structural Version. Caldwell, NJ: Blackburn.

Thorsen, L., & Giegerich, S. (2014, August 10). Shooting of teen by Ferguson officer spurs angry backlash. St. Louis Post-Dispatch, p. A1.

Topaz, J. (2014, August 14). Critics slam “militarization” of police. Retrieved October 1, 2019, from POLITICO website:

Tourangeau, R., Couper, M. P., & Conrad, F. (2004). Spacing, position, and order: Interpretive heuristics for visual features of survey questions. Public Opinion Quarterly, 68(3), 368–393.

Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The Psychology of Survey Response. New York: Cambridge University Press.

Tourangeau, R., & Smith, T. W. (1996). Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly, 60(2), 275–304.

Tyler, T. R. (2006). Why People Obey the Law (2nd ed.). Princeton, N.J.: Princeton University Press.

Vitale, A. S. (2016, July 19). Giuliani’s convention speech got everything wrong about policing. The Nation. Retrieved from

Weinberg, J., Freese, J., & McElhattan, D. (2014). Comparing data characteristics and results of an online factorial survey between a population-based and a crowdsource-recruited sample. Sociological Science, 1, 292–310.

Weisburd, D., & Eck, J. E. (2004). What can police do to reduce crime, disorder, and fear? The Annals of the American Academy of Political and Social Science, 593(1), 42–65.

Weitzer, R. (2002). Incidents of police misconduct and public opinion. Journal of Criminal Justice, 30(5), 397–408.

Weitzer, R., & Tuch, S. A. (2005). Racially biased policing: Determinants of citizen perceptions. Social Forces, 83(3), 1009–1030.

Wozniak, K. H. (2016). Ontological insecurity, racial tension, and confidence in the police in the shadow of urban unrest. Sociological Forum, 31(4), 1063–1082.

Wozniak, K. H., Calfano, B. R., & Drakulich, K. M. (2019). A “Ferguson Effect” on 2016 presidential vote preference? Findings from a framing experiment examining “shy voters” and cues related to policing and social unrest. Social Science Quarterly, 100(4), 1023–1038.

Yeager, D. S., & Krosnick, J. A. (2012). Does mentioning “some people” and “other people” in an opinion question improve measurement quality? Public Opinion Quarterly, 76(1), 131–141.

Yokley, E. (2015, January 18). In Ferguson, push for criminal justice reform draws comparisons to ’60s fight for civil rights. The New York Times, p. A11.

Zaller, J. (1992). The Nature and Origins of Mass Opinion. New York: Cambridge University Press.
