- Ananya Sen, Assistant Professor, Heinz College, Carnegie Mellon University
- Wajeeha Ahmad, PhD Candidate, Stanford University
- Charles Eesley, Associate Professor, Stanford University
- Erik Brynjolfsson, Professor, Institute for Human-Centered Artificial Intelligence, Stanford University
This blog article is derived from the authors’ paper titled The Impact of Financing Misinformation on Exit and Voice: Experimental Evidence from Information Interventions, a project of the Economics of Digital Services (EODS) initiative led by Penn’s Center for Technology, Innovation & Competition (CTIC) and The Warren Center for Network & Data Services. CTIC and the Warren Center are grateful to the John S. and James L. Knight Foundation for its generous support of the EODS initiative.
Online misinformation is largely financially sustained by advertising revenue from ads that digital platforms place automatically on misinformation websites. This financial motivation to earn advertising revenue by spreading misinformation has been widely conjectured to be one of the main reasons misinformation remains prevalent across digital platforms (Hao, 2021; Giansiracusa, 2021). At present, the predominant business model of several mainstream digital platforms relies on monetizing attention via advertising (Lazer et al., 2018). Despite attempts by digital media platforms to reduce ad revenue flowing to misinformation websites (Love and Cooke, 2016), and despite tools that companies can adopt to avoid advertising on such websites, ads from multiple well-known companies continue to appear on them. A recent estimate suggests that for every $2.16 in digital ad revenue sent to legitimate newspapers, U.S. advertisers send $1 to misinformation websites. Consequently, many advertisers end up helping to financially sustain misinformation. Our work investigates whether increased transparency about which companies advertise on misinformation websites can reduce the advertising revenue those websites generate and thereby mitigate the financial incentive to misinform.
To change the business model that rewards misinformation, we need a better understanding of consumer preferences regarding companies that advertise on misinformation websites. Whether firms benefit from such advertising depends crucially on how consumers react when they learn about these practices. Examining how behavior changes in response to this information reveals the extent to which advertising on misinformation websites would prove harmful or profitable for the companies involved if consumers found out about it.
We document the widespread advertising activity of well-known companies across different industries on misinformation websites by combining online advertising data with lists of misinformation websites identified by third-party journalistic organizations. To investigate how people react to information about advertising on misinformation websites, we carry out an online information provision experiment. Our study uses an incentive-compatible design to measure how people change their consumption, and voice concerns about company or platform practices, in response to the information provided. Using a randomized experiment, we measure how consumption behavior changes when people receive different pieces of information about ads appearing on misinformation websites. We also measure whether people respond to our information treatments in other ways, such as by voicing their concerns (Hirschman, 1972) or by changing their attitudes, beliefs, and preferences.
To measure changes in consumption, we inform participants at the beginning of our study that one in five respondents (i.e., 20%) who complete the survey will be offered a $25 gift card from a company of their choice among six options. These six companies span three categories: fast food, food delivery, and ride-sharing. All six advertised on misinformation websites during the past three years (2019-2021) and are commonly used throughout the US. Participants are then randomized into five groups, each of which receives different information about advertising on misinformation websites. These information treatments are all based on factual information from prior research and our own data, and they vary in how much they reveal about which companies appeared on misinformation websites and about the role digital ad platforms play in placing ads there.
After the information treatment, all participants are asked to make their final gift card choice from the same six options shown earlier. To ensure incentive compatibility, participants are told that those randomly selected to receive a gift card will be offered the gift card of their choice at the end of the study. Our main outcome of interest is whether participants switch their gift card preference, i.e., whether they select a different gift card after the information treatment than the top choice they indicated before it. As a secondary outcome, participants are given the option to sign a real online petition. This serves two purposes. First, it lets us measure a way in which people may respond to our information treatments other than by changing their consumption behavior. Second, since participants must choose between signing a company petition or a platform petition, this outcome lets us measure whether, across our treatments, people hold advertising companies more responsible than the digital ad platforms that automatically place their ads.
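To make the main outcome concrete, the following sketch shows how a switching rate could be tabulated per treatment arm. All arm names, company names, and data below are hypothetical illustrations, not from the study:

```python
# Hypothetical illustration of the main outcome: the share of participants in
# each treatment arm who choose a different gift card after the information
# treatment than before it. All names and rows below are made up.
from collections import defaultdict

participants = [
    # (treatment_arm, choice_before, choice_after)
    ("control",        "FastFoodCo",  "FastFoodCo"),
    ("control",        "RideShareCo", "RideShareCo"),
    ("company_named",  "FastFoodCo",  "DeliveryCo"),   # switched
    ("company_named",  "DeliveryCo",  "DeliveryCo"),
    ("platform_named", "RideShareCo", "FastFoodCo"),   # switched
]

def switch_rates(rows):
    """Return, for each treatment arm, the fraction of participants whose
    post-treatment gift card choice differs from their pre-treatment choice."""
    switched, total = defaultdict(int), defaultdict(int)
    for arm, before, after in rows:
        total[arm] += 1
        switched[arm] += (before != after)
    return {arm: switched[arm] / total[arm] for arm in total}

print(switch_rates(participants))
# → {'control': 0.0, 'company_named': 0.5, 'platform_named': 1.0}
```

Comparing these per-arm rates against the control arm is one simple way the effect of each information treatment on consumption could be summarized.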
The next step of our study is to complete data gathering for the experiment. We will then carry out the second part of this study, which measures how decision-makers within firms respond to information about advertising on misinformation websites. To measure decision-maker preferences, we plan to survey a sample of relevant decision-makers in marketing and senior roles. In addition to eliciting their existing beliefs and preferences about advertising on misinformation websites, we will test how they update their beliefs, and their demand for a solution that avoids advertising on misinformation websites, in response to the information provided.
Our findings will have implications both for companies advertising online and for digital media platforms whose revenue model is based on advertising. For advertisers, our experiment provides empirical evidence about how sales may be affected if consumers become aware of their ads appearing on misinformation websites. For digital platforms aiming to reduce the financing of misinformation, our work will indicate whether information interventions can reduce the financial incentive to misinform while enabling consumers and advertisers to make decisions according to their preferences. From a practical standpoint, a public dashboard or tool that lets consumers see where companies are advertising could help ensure that companies' advertising practices reflect the publicly stated values of their brands.
Hirschman, Albert O. 1972. Exit, Voice, and Loyalty. Cambridge, MA: Harvard University Press. URL: https://www.hup.harvard.edu/catalog.php?isbn=9780674276604
Hao, Karen. 2021. “How Facebook and Google fund global misinformation.” MIT Technology Review. URL: https://www.technologyreview.com/2021/11/20/1039076/facebook-google-disinformation-clickbait
Lazer, David M.J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson, Duncan J. Watts and Jonathan L. Zittrain. 2018. “The science of fake news: Addressing fake news requires a multidisciplinary effort.” Science 359(6380):1094–1096.
Love, Julia and Kristina Cooke. 2016. “Google, Facebook move to restrict ads on fake news sites.” Reuters. URL: https://www.reuters.com/article/us-alphabet-advertising/google-facebook-move-to-restrict-ads-on-fake-news-sites-idUSKBN1392MM
Giansiracusa, Noah. 2021. “Google needs to defund misinformation.” Slate. URL: https://slate.com/technology/2021/11/google-ads-misinformation-defunding-artificial-intelligence.html