Krishna Dasaratha, Assistant Professor of Economics, Boston University
Kevin He, Assistant Professor of Economics, University of Pennsylvania
This blog article is derived from the authors’ paper entitled Learning from Viral Content, a project of the Economics of Digital Services (EODS) initiative led by Penn’s Center for Technology, Innovation & Competition (CTIC) and Warren Center for Network & Data Services. CTIC and the Warren Center are grateful to the John S. and James L. Knight Foundation for its support of the initiative.
In the past decade, viral content on social media platforms like Twitter, Facebook, and Reddit has become a prominent source of news for many people. A key common feature of these platforms is that users consume content in news feeds—a small selection of the huge volume of new content posted by the platform’s users. Which stories go viral and which disappear is jointly determined by the algorithms generating these feeds and users’ actions on the platforms (e.g., sharing, retweeting, or upvoting stories). Twitter’s homepage, for example, shows a selection of tweets based on how many times users have liked and retweeted them. As users interact with the stories in their news feeds, they also help shape the stories that others will see.
How does the design of the news feed affect how users learn on such platforms? Our work focuses on a key parameter of news feed design: how much weight is placed on virality, or the current popularity of content, in determining which stories appear in users’ news feeds. In practice, the virality weight parameter is a simplified representation of design choices that tech companies pay attention to and iterate over. Different versions of the Twitter homepage have emphasized trending stories to varying degrees, while Reddit’s algorithm for ordering posts and comments has evolved over the past decade. This parameter also bears on recent public debates: some have blamed the wide spread of misinformation, on issues ranging from public health to politics, on social media platforms pushing viral but inaccurate content into users’ feeds.
A model of social media news feeds
Our work uses a theoretical model to study users on a social media platform interacting with news stories about a particular issue. We focus on how the virality weight of the news feed affects how well people learn. A large number of users arrive in turn, discover new stories, and post them on the platform. Users also read a small selection of stories posted by others in a news feed. Users are rational and form beliefs about the stories they read, taking into account how the platform generates news feeds. Each user then shares some of the stories from their news feed, and we assume users prefer to share stories that they believe to be true.
When a story is shared, its popularity score increases. The virality weight of the platform determines how these scores affect how likely the stories will be shown in news feeds. A higher virality weight means the news feed focuses more on showing users popular content as opposed to random content.
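The paper's model is formal, but the feed mechanism can be sketched in a few lines of code. In the toy version below, a story's chance of appearing in a feed grows with its popularity score raised to the virality weight. This functional form, and the story data structure, are our own illustrative assumptions rather than the paper's exact specification; the point is only that weight zero yields uniformly random feeds, while larger weights concentrate feeds on popular stories.

```python
import random

def sample_feed(stories, virality_weight, feed_size, rng):
    """Sample a toy news feed (with replacement).

    Each story's selection weight is (1 + popularity) ** virality_weight,
    an illustrative functional form: virality_weight = 0 gives uniformly
    random feeds, and higher weights tilt feeds toward popular stories.
    """
    weights = [(1 + s["popularity"]) ** virality_weight for s in stories]
    return rng.choices(stories, weights=weights, k=feed_size)

# Two unshared stories and one widely shared story.
stories = [{"id": i, "popularity": p} for i, p in enumerate([0, 0, 10])]
rng = random.Random(0)

# With a high virality weight, feeds are dominated by the popular story;
# with weight 0, each story is equally likely to appear.
viral_feed = sample_feed(stories, virality_weight=10, feed_size=3, rng=rng)
random_feed = sample_feed(stories, virality_weight=0, feed_size=3, rng=rng)
```

A real platform would of course sample from millions of stories without replacement and mix in personalization, but this one-parameter knob is the abstraction the analysis turns on.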
A trade-off between information aggregation and misleading steady states
Our main results concern the advantages and disadvantages of having a higher virality weight.
A news feed that prioritizes viral content may help aggregate more information. Seeing a particular story in a news feed that selects widely shared stories gives a user more information than that story’s content alone. This is because the popularity of the story also tells the user about the past sharing decisions of their predecessors and thus lets the user draw inferences about the many stories that these predecessors saw in their news feeds. In some circumstances, seeing just a few stories in a news feed that emphasizes viral content can lead to strong beliefs about the relevant issue, even if individual stories only give imprecise information. Indeed, we formalize this intuition in our model and find that, up to a critical virality weight threshold, increasing the virality weight improves information aggregation and makes news feed stories more accurate.
Conversely, a news feed that mainly features widely shared stories can lead to a misleading steady state—a situation where incorrect but initially popular stories spread widely and shape people’s beliefs, despite being contradicted by later information. One might expect such feedback loops with naive users, but we show misleading steady states also arise in equilibrium with rational users whenever the virality weight exceeds the critical virality weight threshold. The idea is that when stories supporting an incorrect position are shared more, subsequent users tend to see these incorrect stories in their news feeds due to the stories’ popularity and hence form incorrect beliefs. Users rationally share these false stories and further increase their popularity. People have less exposure to the true stories: even if these stories are more numerous, they are shared less than the false stories and therefore shown less by the news feed algorithm.
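To see how these two forces play out, here is a deliberately crude simulation. It is not the paper's equilibrium model, whose users are rational Bayesians; instead, each simulated user shares the feed stories that match a noisy private signal, and all parameter values are our own illustrative assumptions. Even this behavioral caricature reproduces both regimes.

```python
import random

def run_dynamics(virality_weight, n_users=3000, feed_size=5,
                 signal_accuracy=0.7, initial_false_lead=20.0, seed=0):
    """Crude sketch of the sharing dynamics (illustrative parameters).

    Two aggregate 'sides' compete: true stories and false stories, with
    the false side starting out more popular. Each arriving user sees a
    feed whose slots favor the more popular side (popularity raised to
    the virality weight), receives a private signal that is correct with
    probability signal_accuracy, and shares the feed stories matching
    that signal. Shares raise a side's popularity. This behavioral rule
    is a stand-in for the paper's rational users, not their model.
    """
    rng = random.Random(seed)
    pop = {"true": 1.0, "false": 1.0 + initial_false_lead}
    for _ in range(n_users):
        w = {side: score ** virality_weight for side, score in pop.items()}
        p_true = w["true"] / (w["true"] + w["false"])
        feed = ["true" if rng.random() < p_true else "false"
                for _ in range(feed_size)]
        signal = "true" if rng.random() < signal_accuracy else "false"
        # Users can only share stories that actually reached their feed.
        pop[signal] += feed.count(signal)
    return pop

low = run_dynamics(virality_weight=0.0)   # feeds ignore popularity
high = run_dynamics(virality_weight=4.0)  # feeds chase popularity
```

With a virality weight of zero, feeds sample both sides, so users' mostly correct signals let true stories accumulate shares and overtake the initial false lead. With a high weight, the initially popular false side fills nearly every feed, so even users with correct signals have no true stories to share, and the false lead persists indefinitely, mirroring the misleading steady state described above.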
Increasing the virality weight thus presents a trade-off. It can enhance information aggregation, but it also poses the systemic risk of leading society to a misleading steady state where most people form incorrect beliefs.
Since misleading steady states only appear when virality weight exceeds the critical virality weight threshold, we can use the level of this threshold to ask what properties of the platform make it more susceptible to misleading steady states. A platform is more susceptible when news stories are not very precise, when news feeds are large, and when users share many stories. In other words, misleading steady states arise on platforms where users consume and interact with a large amount of social information relative to the quality of the private information they receive from other sources.
Implications for platform design
Our results have two consequences for platform design. First, we ask which virality weight a platform would choose to maximize a broad class of objectives, including users’ equilibrium utility from sharing stories on the platform. We find that as the number of users grows, the optimal virality weight either approaches the critical virality weight threshold or lies strictly above it. Second, we ask when a platform is robust to malicious attackers who manipulate its content. If a platform sets its virality weight sufficiently below the threshold, a large amount of manipulation is required to produce a misleading steady state; we provide a simple explicit lower bound on this amount, interpreting it as a robustness guarantee.