The first results of that research show that the company’s platforms play a critical role in funneling users to partisan information with which they are likely to agree. But the results cast doubt on assumptions that the strategies Meta could use to discourage virality and engagement on its social networks would substantially affect people’s political beliefs.
“Algorithms are extremely influential in terms of what people see on the platform, and in terms of shaping their on-platform experience,” Joshua Tucker, co-director of the Center for Social Media and Politics at New York University and one of the leaders on the research project, said in an interview.
“Despite the fact that we find this big impact in people’s on-platform experience, we find very little impact in changes to people’s attitudes about politics and even people’s self-reported participation around politics.”
The first four studies, which were released on Thursday in the peer-reviewed journals Science and Nature, are the result of a unique partnership between university researchers and Meta’s own analysts to study how social media affects political polarization and people’s understanding and opinions about news, government and democracy. The researchers, who relied on Meta for data and the ability to run experiments, analyzed those issues during the run-up to the 2020 election.
As part of the project, researchers altered the feeds of thousands of people using Facebook and Instagram in the fall of 2020 to see if exposing them to different information than they might normally have received could change their political beliefs, knowledge or polarization. The researchers generally concluded that such changes had little impact.
The collaboration, which is expected to produce more than a dozen studies, will also examine data collected after the Jan. 6, 2021, attack on the U.S. Capitol, Tucker said.
The research arrives amid a years-long battle among advocates, lawmakers and industry leaders over how much tech companies should be doing to combat toxic, misleading and controversial content on their social networks. The highly charged debate has inspired regulators to propose new rules requiring social media platforms to make their algorithms more transparent and the companies more responsible for the content those algorithms promote.
The findings are likely to bolster social media companies’ long-standing arguments that algorithms are not the cause of political polarization and upheaval. Meta has said that political polarization and support for civic institutions started declining long before the rise of social media.
“These findings add to the growing body of research showing there is little evidence that social media causes harmful ‘affective’ polarization or has any meaningful impact on key political attitudes, beliefs or behaviors,” Meta Global Affairs President Nick Clegg said in a blog post on Thursday about the research.
But tech companies’ critics and some researchers who had seen the research before its release caution that the results don’t absolve tech companies of the role they play in amplifying division, political upheaval or users’ belief in conspiracies. Nor should the studies give social media platforms cover to do less to tamp down viral misinformation, some advocates argue.
“Studies that Meta endorses, which look piecemeal at small sample time periods, shouldn’t serve as excuses for allowing lies to spread,” said Nora Benavidez, a senior counsel at Free Press, a digital civil rights group that has pushed Meta and other companies to do more to fight election-related misinformation. “Social media platforms should be stepping up more in advance of elections, not concocting new schemes to dodge accountability.”
“It’s a little too buttoned up to say this shows Facebook is not a huge problem or social media platforms aren’t a problem,” said Michael W. Wagner, a professor at the University of Wisconsin at Madison’s School of Journalism and Mass Communication who served as an independent observer of the collaboration, spending hundreds of hours sitting in on meetings and interviewing scientists. “This is good scientific evidence there is not just one problem that is easy to solve.”
In one experiment, researchers studied the impact of switching users’ feeds on Facebook and Instagram to display content chronologically instead of surfacing content with Meta’s algorithm. Critics such as whistleblower Frances Haugen have argued that Meta’s algorithm amplifies and rewards hateful, divisive and false posts by surfacing them at the top of users’ feeds, and that switching to a chronological feed would make the content less divisive. Facebook currently offers users the ability to see a mostly chronological feed.
The researchers found that the chronological timeline was clearly less engaging — users whose timeline was changed spent significantly less time on the platform. They also saw more political stories and content flagged as untrustworthy.
But surveys given to the users to measure their political beliefs found the chronological feed had little effect on levels of polarization, political knowledge or offline political behavior. That finding aligns with some of Meta’s own internal research, which suggests users may see higher quality content with a feed dictated by the company’s algorithm than one governed simply by when something was posted, The Washington Post has reported.
In an interview, Haugen, a former Facebook product manager who disclosed thousands of internal Facebook documents to the Securities and Exchange Commission in 2021, criticized the timing of the experiment. She argued that by the time the researchers evaluated the chronological approach during the fall of 2020, thousands of users had already joined mega groups that would have flooded their feeds with potentially problematic content. She noted that during the months leading up to the 2020 election, Meta already had implemented some of its most aggressive election protection measures to address the most extreme posts. The company rolled back many of those measures after the election, she said.
In another experiment, the academics tested the effect of limiting the visibility of viral content in users’ feeds. The researchers found that when they removed people’s ability to see posts that their friends were resharing on Facebook, those users were exposed to far less political news. Those users also clicked on and reacted to less content and had less news knowledge, but their levels of political polarization and political attitudes remained unchanged, according to the surveys.
Collectively, the studies released Thursday paint a portrait of a complex and divided social media landscape, with liberal and conservative users seeing and interacting with vastly different news sources. One of the studies analyzed data for more than 200 million U.S. Facebook users and found that users consumed news in an ideologically segregated way. The studies also showed that while both liberal and conservative websites were shared by users, far more domains and URLs favored by conservatives circulated on Facebook. The research also showed that a larger share of content rated as false by third-party fact-checkers was right-leaning.
In another experiment, researchers reduced people’s exposure to content they were likely to agree with and increased their exposure to information from ideologically opposed viewpoints. It’s the kind of change that many people might assume would broaden people’s views. But that intervention did not measurably affect people’s political attitudes or belief in false claims, the research found.
Tucker cautioned against reading too much into the research. “It’s possible that if we did a similar study at another period of time or in another country where people were not paying as much attention to politics or not being as inundated with other information from other sources about politics, we would have found a different result,” he said.
The study was also conducted in a world in which, in many ways, the cat was already out of the bag. A three-month switch in how information is served on a social network occurred in the context of a long-standing change in how people share and find information.
“This finding cannot tell us what the world would have been like if we hadn’t had social media around for the last 10 to 15 years,” Tucker said.
Research collaborations between outside academics and tech companies have a checkered history. A 2018 initiative — Social Science One — also had academics partner with Facebook to study the role of social media in elections. But that project was plagued with accusations from researchers that Meta strung them along with promises of data that never materialized or ended up being flawed.
In the current study, Meta approached Tucker and Talia Stroud, the director of the Center for Media Engagement in the Moody College of Communication at The University of Texas at Austin, to lead the collaboration in early 2020, according to a written analysis of the project by Wagner, the University of Wisconsin journalism professor who observed the process. Stroud and Tucker chose the 15 other academics on the team, drawing only from the nearly 100 scholars who had been affiliated with Social Science One.
Meta and the researchers also agreed in advance on what research questions would be studied, their hypotheses, and the research design, according to Wagner. The academics did not accept compensation from Meta, but the company covered the costs of data collection.
Wagner said in an interview that as the papers neared publication, there were disagreements among the researchers — including a Meta researcher who at one point threatened to remove their name from one paper, arguing that its language overinterpreted the platform’s effect on the ideological segregation of news sources. “There were a few meetings and memos shared about the disagreement and it was ultimately resolved,” he said.
Tucker said that the researcher had “scholarly questions” and decided to remain on the paper because they agreed with the conclusions.
Wagner said that his observations suggested that the data and scientific process were sound, but that future collaborations would benefit from greater independence from Meta.
“Meta supports the independence of researchers, which is why the external academics had control rights over the research design, analysis, and writing. We took a number of steps to ensure this process was independent, ethical, and well done,” Meta said in a statement.
Gary King, a political scientist at Harvard University who was also involved in initiating the project, said the 2020 election research project should not be a one-and-done collaboration because there are other more nuanced experiments researchers could evaluate about Meta’s algorithms.
“Meta deserves a lot of credit, if and only if, these studies continue,” he said. “If that’s the end of it, then yeah, I think they need regulation.”