March 2023. GrowthPolicy’s Devjani Roy interviewed Matthew Baum, the Marvin Kalb Professor of Global Communications and Professor of Public Policy at Harvard Kennedy School, on the phenomenon of affective polarization in civic life, misinformation and how it spreads, and the impact of social media on public health crises like COVID-19.


 

Links: Faculty page | HKS Misinformation Review (Baum is co-editor and co-founder) | Personal website | The Covid States Project

 

GrowthPolicy: I’d like you to talk about the phenomenon you describe as “affective polarization,” wherein people vote on the basis of strong emotional affect rather than policy. Given that most people are neither deeply knowledgeable nor particularly concerned about candidates’ specific policy positions, to what would you attribute affective polarization, that is, the cultivation of strong feelings of hatred toward others based on differing party loyalties? Why have we become such a polarized society?

Matthew Baum: A widely cited poll result showed that Americans today are more disapproving of their children marrying someone from the opposing party than of their children marrying outside of their racial/ethnic group or religion. This may be somewhat overstated, as people may be less willing to admit to racial or religious preferences. But even so, the percentage emphasizing partisanship has roughly doubled since the 1950s. That’s a pretty dramatic change.

So, what accounts for it? I think there are actually a variety of contributing factors. First, beginning in the 1960s, with the Civil Rights movement, and accelerating in the 1970s, with the Roe v. Wade decision and the rise of Jerry Falwell’s “Moral Majority,” we have seen the emergence of so-called hot-button cultural issues that tended to unify people within each party, while pulling them farther apart from the opposing party. This has accelerated since 2000, with issues like gun control, LGBTQ rights, and immigration, to name just a few. These are reinforcing, rather than cross-cutting, issues, meaning they tend to promote intra-party unity while driving the parties farther apart. In other words, in each case, most Democrats are on one side of the issue and most Republicans are on the other side.

This wasn’t always the case. For instance, prior to Roe v. Wade, Democrats and Republicans were about evenly divided within their respective parties on the issue of abortion. Not coincidentally, unlike many policy issues—like, say, the progressivity of income-tax rates—partisans on both sides perceive many of the cultural hot-button issues as absolute questions of basic morality, with a right side and a wrong side. This leaves little room for compromise, which, in turn, tends to make it easier to vilify the other side.

Layered on top of this, Republicans and Democrats increasingly lead separate and distinct lives. They live in different places, watch different television shows, follow different sports, read different books, use different social media platforms, drive different cars, work in different types of jobs, and on and on. This means that for each side, the other side is increasingly alien and incomprehensible. That makes it easier to stereotype members of the other party as not simply disagreeing with you, but doing so because they are bad people who advocate “evil.”

Finally, for a variety of reasons, such as geographic sorting and partisan gerrymandering, legislative districts are increasingly one-sided, with fewer and fewer competitive districts. This makes it easier for ideological extremists, whose primary appeal to supporters may be fanning the flames of polarization, to win and remain in office primarily by vilifying the other party. Affective polarization arguably is a consequence of this “otherizing” of the increasingly unknown other side.

 

GrowthPolicy: I’d like you to talk about the HKS Misinformation Review, the journal for which you are both co-founder and co-editor. Please tell our readers a bit about the origin story of this journal. What is it about this specific cultural moment that, you believe, necessitates a journal dedicated to misinformation?

Matthew Baum: Let me start with the environment that led to the creation of the journal. There is, of course, nothing new about misinformation. It has been around, essentially, forever, and has long been recognized as harmful to democracy. For instance, I often show my students a 1925 Harper’s Magazine article in which the writer worried that fake news was threatening democracy in America.

But, that said, we are obviously living in a period of heightened partisan polarization, rising distrust in democratic institutions, and assertive authoritarianism and anti-democratic populism abroad, all of which combine to create fertile ground for spreading misinformation. But unlike the fake news scare of the 1920s, today new digital technologies allow anyone to create sophisticated misinformation that near-perfectly mimics legitimate news and rapidly distribute it to a targeted audience. This means that all of the elements are in place for a potentially dangerous challenge to the institutions of liberal democracy that Americans and others have come to take for granted.

Combine this with the election of a president who routinely promoted and distributed misinformation to tens of millions of Americans, and a pandemic that left people around the world feeling isolated, confused, and helpless, and you have a recipe for what the World Health Organization called an “infodemic”: a pandemic of misinformation that occurred in parallel with the COVID-19 pandemic. So, the upshot is that we confront a serious problem of disunity, both in the U.S. and in democracies around the world. Misinformation is not necessarily a primary cause of that disunity. But it undoubtedly exploits, and likely exacerbates, the problem.

This environment, combined with a noteworthy absence of scientific knowledge about how to combat misinformation, provided the impetus for launching the journal. I would trace its origin to the aftermath of the 2016 presidential election. Specifically, I was eating lunch with several colleagues. The topic of conversation was our mutual alarm over the misinformation-plagued 2016 election, in which “fake news” had arguably played an unprecedented role, most notably in accusations of a Russian disinformation campaign aimed at harming Clinton and helping Trump, including possible Russian collusion with the Trump campaign. Over the course of the meal, the conversation turned to what we, as academics, could do to help. We quickly settled on fake news as the problem area where we, arguably, held a comparative advantage.

Our initial idea was to hold a large conference at Harvard in which we would invite leading researchers to share their research and ideas for combating fake news/misinformation. Partly as a result of the conference, held at Harvard Law School in February 2017, the Shorenstein Center decided to undertake a Center-wide misinformation-related initiative. The Center hired misinformation researchers and launched a program on combating misinformation that included a variety of elements, such as training journalists to recognize misinformation and providing real-time alerts to news organizations regarding the emergence of false stories.

One of the ideas that emerged during these conversations was the creation of a new academic journal. The idea was to focus on making peer-reviewed, cutting-edge social science research on misinformation accessible to non-academics in general, and practitioners in particular. We further sought to review and publish research more quickly than traditional scholarly journals, so that insights from the research we published would still be relevant in this rapidly evolving issue area. We also decided to emphasize practical policy implications in every paper we published.

Since our launch in January 2020, we have published over 100 research essays on all aspects of misinformation (technical, legal, social, educational, and cultural). Some of our most popular articles have been read by more than 70,000 people and our website has been visited by over 500,000 users from 148 countries. So, while a journal alone cannot “solve” the misinformation problem, we do feel that the journal is having a significant impact.

 

GrowthPolicy: In a research paper for JAMA last year, you wrote about the relationship between COVID-19 vaccination status and misinformation. In what ways did social media exacerbate the public-health crisis during the COVID-19 pandemic, both within the U.S. and globally?

Matthew Baum: As part of the COVID States Project, several colleagues and I have conducted near-monthly surveys of Americans in all 50 states (roughly 20,000-25,000 respondents per survey wave) since April 2020 regarding the pandemic and various other social and political topics. We cannot, within the confines of our survey data, assert that social media use has caused greater vaccine skepticism or greater belief in public health misinformation in general. But we can say that among our U.S. survey respondents, people who say they rely on social media platforms for news about COVID-19 tend to be more likely to believe false claims about the pandemic, and less likely to say they have been vaccinated, than people who say they do not. This pattern is not limited to COVID-19 misinformation; for instance, we find a similar pattern for misinformation about the war in Ukraine.

It is, of course, possible that the types of people who choose primarily to get their news from social media platforms are more likely to be vaccine resistant, in which case, exposure to the platform would not be the underlying cause of their resistance. But we can say that social media use seems to be positively associated with holding misperceptions about COVID-19 in general, and vaccine resistance in particular.

In an era of declining trust in political leadership and democratic institutions, and rising anti-democratic populism, the pandemic emerged in an environment ripe for exploitation by purveyors of misinformation. This misinformation, in turn, has spread to other areas in worrisome ways. For instance, skepticism about the COVID-19 vaccine has spilled over to other vaccines, resulting in movements in some locales to eliminate all mandatory vaccines for school children. More generally, public trust in science has suffered, arguably as a consequence of misinformation around the pandemic. Reduced public trust in science and skepticism regarding vaccines will likely complicate future responses to public health emergencies.

While COVID-19-related misinformation has been somewhat more prevalent in the U.S. than in many nations, this environment is by no means limited to the U.S. We have seen distrust of democratic institutions rising in nations around the world, in parallel with misinformation, with sometimes dire consequences for nations’ responses to the pandemic. For instance, an October 2021 report from Brazil’s Senate found that Brazilian President Jair Bolsonaro’s misinformation-driven COVID response resulted in thousands of unnecessary deaths.

Mistrust (in leaders and institutions) is strongly predictive of holding misperceptions about science, public health (including COVID-19), and politics. So, while we cannot make causal claims, the circumstantial case seems fairly strong that social media platforms are a noteworthy vector for spreading misinformation, and so can exacerbate public-health-related problems, including pandemic response.

 

GrowthPolicy: While political misinformation is well-researched by political scientists and economists, it is increasingly evident that social media has an impact on the spread of misinformation in many other areas of life. What are some less well-understood effects of social media on information spread, information bias, and information recall that you foresee getting worse, or even becoming dangerous, in the future?

Matthew Baum: There are a variety of potentially harmful aspects of social media that intersect with misinformation and create short- and longer-term risks for a democratic society. I will mention three. First, social media algorithms are, first and foremost, intended to attract and sustain attention. To a far greater extent than legitimate news stories, misinformation is explicitly designed to exploit those algorithms by maximizing its attractiveness to readers.

False stories thus tend to be more engaging than true stories, all else being equal. This helps explain why, according to one widely cited study, false stories tend to spread faster and more widely than true ones. Along these lines, the political scientist Sam Popkin suggested a “Gresham’s Law of Information,” based on Sir Thomas Gresham’s famous observation that bad money tends to drive out good. Popkin’s informational extension is simply that small amounts of new, compelling, and personal information can outweigh very large amounts of older, less-personal, and less-compelling information in influencing people’s attitudes and behavior.

The application to misinformation (bad money) relative to truthful information (good money) is pretty clear: misinformation tends to be more compelling, and seemingly more personally relevant, than truthful information. After all, unlike truthful information, misinformation exists for the very purpose of being personally compelling, and so irresistible, to consumers. This creates problems for public officials attempting, for instance, to inform citizens about the effects of a policy in the face of a flood of misinformation about it.
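To make that dynamic concrete, here is a minimal, purely illustrative simulation of an engagement-ranked feed. The engagement scores and sharing probabilities are assumptions chosen for illustration, not estimates from any study:

```python
import random

random.seed(42)

# Toy model: every story has an intrinsic "engagement" score, and false
# stories are assumed (per the argument above) to be modestly more engaging
# on average. An engagement-maximizing feed surfaces the top-ranked stories,
# and only surfaced stories get the chance to be shared.
stories = (
    [{"label": "true", "engagement": random.gauss(1.0, 0.2), "shares": 0}
     for _ in range(50)]
    + [{"label": "false", "engagement": random.gauss(1.3, 0.2), "shares": 0}
       for _ in range(50)]
)

FEED_SIZE = 10
for _ in range(100):  # 100 rounds of users opening the feed
    # Supply side: rank purely by engagement and show the top of the list.
    feed = sorted(stories, key=lambda s: s["engagement"], reverse=True)[:FEED_SIZE]
    for story in feed:
        # Demand side: more engaging stories are more likely to be shared.
        if random.random() < min(1.0, story["engagement"] / 2):
            story["shares"] += 1

for label in ("true", "false"):
    total = sum(s["shares"] for s in stories if s["label"] == label)
    print(f"{label:>5} stories: {total} total shares")
```

Even a modest average engagement edge lets the false stories dominate the top of the feed and accumulate shares far faster, which is the dynamic the Gresham’s Law analogy captures.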

Conspiracy theories, ranging from ACA death panels to microchip implants in COVID vaccines, spread widely and rapidly. We have even seen mob violence in several countries directed at vulnerable groups or individuals in response to false stories spreading on mobile instant messaging apps. In contrast, factual corrections take much longer to produce and circulate, and frequently fail to reach the most vulnerable consumers who most need reliable information. This can make false stories more familiar than true ones, especially to vulnerable populations. Research, in turn, has shown that people are more likely to accept information as valid if it is familiar.

Second, social-media environments are optimized for exploiting human cognitive biases, again, to maximize user engagement. For instance, we know that people prefer to consume information that reinforces, rather than challenges, their pre-existing beliefs. It is simply more pleasant to be told you are right than to be told you are wrong, all else being equal. However, when there were relatively few media options—such as in the era of the broadcast network oligopoly, roughly 1950-1980—the opportunities for selectively consuming favorable news were relatively limited.

So, most people were exposed to similar news about their community and the world. Cable TV broke the oligopoly, ushering in an era of information fragmentation. This made it easier for consumers to eschew news in favor of more appealing entertainment options. But social media have further transformed the information environment, from one of fragmentation to what I would call hyper-fragmentation, by exponentially increasing the range of options available to users. This makes multi-dimensional self-selection (regarding news, sports, entertainment, weather, etc.) far easier than ever before.

The jury is still out on exactly how much selective news exposure typical social media users engage in. Yet we can say that these platforms offer a near-perfect marriage of supply- and demand-driven opportunities for limiting the preponderance of news exposure to information that supports, rather than challenges, what the user already believes.

On the supply side, algorithms observe what a user likes and then serve up more of that and less of other things. On the demand side, users have a near-limitless variety of information sources from which to choose. One recent study actually found that on YouTube, self-selection by users (demand side) contributed more than algorithmic selection (supply side) to the overall pro-attitudinal slant in users’ information diets on the platform.
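As a rough sketch of how that feedback loop can compound, consider the following toy model; the click probabilities and reinforcement rate are illustrative assumptions, not estimates from the study:

```python
import random

random.seed(0)

# Toy model of the supply/demand feedback loop described above. The user is
# only mildly more likely to click congenial content (demand side), but the
# recommender reinforces whatever gets clicked (supply side), so the feed
# drifts toward pro-attitudinal items over time.
weights = {"pro-attitudinal": 1.0, "counter-attitudinal": 1.0}
CLICK_PROB = {"pro-attitudinal": 0.6, "counter-attitudinal": 0.4}  # assumed

def recommend() -> str:
    """Supply side: sample an item type in proportion to learned weights."""
    return random.choices(list(weights), weights=list(weights.values()))[0]

served = {"pro-attitudinal": 0, "counter-attitudinal": 0}
for _ in range(5000):
    item = recommend()
    served[item] += 1
    # Demand side: the user clicks congenial content somewhat more often,
    # and each click nudges the recommender toward more of the same.
    if random.random() < CLICK_PROB[item]:
        weights[item] += 0.1

share = served["pro-attitudinal"] / sum(served.values())
print(f"Share of served items that were pro-attitudinal: {share:.0%}")
```

Even though the user’s click preference here is modest, the reinforcement loop steadily skews the mix of what is served, which is the marriage of supply- and demand-side selection described above.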

For better or worse, the broadcast television oligopoly created something of an information commons, in which citizens developed a more-or-less common understanding of what was happening in the world. This, of course, didn’t necessarily lead to agreement on solutions. But it did tend to lead to greater agreement on the existence and nature of the problems society must address.

Hyper-fragmented information streams are a particular concern because, increasingly, people disagree not only on policy solutions to problems, but also on whether a problem exists at all, or on its fundamental nature. It is difficult to tackle public policy challenges when we cannot agree on whether such a challenge exists. Is there an immigration crisis at the border? Is the economy in trouble? Is there rampant voter fraud? Are racism and sexism major problems in the U.S.? Democrats and Republicans fundamentally disagree.

Third, social media appeal to the human need for community surveillance: we tend to evaluate our own lives in relation to those of others in our community. One purpose of this relational self-evaluation is identifying with and reinforcing our social identity. Sharing misinformation can serve an important function in this context, as a means for individuals to signal their “type” to others who share a similar type (say, political perspective). Signaling doesn’t necessarily require that the information be true, only that it clearly indicate membership. This means that in some, perhaps many, circumstances, and for some, perhaps many, users, the veracity of a piece of misinformation is largely beside the point, and so debunking it is unlikely to reduce its appeal. It also means that people are likely to encounter misinformation shared by others with whom they share a perceived identity, raising its credibility, and so their likelihood of believing it.