The effect of ad-blocking and anti-tracking on consumer behavior: A preliminary report on the goals and infrastructure of the study (2021)

October 11, 2021

Authors:

  • Cristobal Cheyre, Assistant Professor, Cornell University
  • Li Jiang, Assistant Professor, The George Washington University
  • Alisa Frik, University of California, Berkeley
  • Florian Schaub, Assistant Professor, University of Michigan
  • Alessandro Acquisti, Professor of Information Technology and Public Policy,
    Carnegie Mellon University

This blog article is derived from the authors’ 2021 paper titled The Effect of Ad-Blocking and Anti-Tracking on Consumer Behavior and an updated 2022 paper, A Field Experiment to Study the Effect of Ad-Blocking and Anti-Tracking on Consumer Behavior, projects of the Economics of Digital Services (EODS) research initiative led by Penn’s Center for Technology, Innovation & Competition (CTIC) and The Warren Center for Network & Data Services. CTIC and The Warren Center are grateful to the John S. and James L. Knight Foundation for its major support of the EODS initiative.

__________

The battle around online (behavioral) advertising

In the last few years, online advertising has grown to capture more than half of all advertising expenditures in the United States. A key innovation that has fueled the rise of online advertising is the ability to determine in real time what ad to show to individual consumers by targeting advertising messages to the context of webpages and search queries (contextual ads) and individual user behaviors (behaviorally targeted ads). The latter—targeting based on the visitor’s past behaviors and inferred characteristics and interests—is possible thanks to online tracking and behavioral profiling.

According to the advertising industry, targeting is an economic “win–win” for all stakeholders within the advertising ecosystem (publishers, advertisers, consumers, and, of course, data intermediaries such as ad networks) as it reduces less efficient untargeted advertising and provides a better experience for Internet visitors by reducing the exposure to irrelevant ads.[7]

The large number of online ads users receive daily, however, has led to frequent criticisms of online advertising for being intrusive and invasive[8] and a potential enabler of market manipulation.[9] Moreover, due to repeated scandals associated with the handling of personal data by online (advertising) firms, concerns have kept growing over the risks users face when their data is continuously mined and analyzed by companies they may not even know of, in ways they are often not aware of. Even when ads are not individually targeted, their sheer abundance or potentially inappropriate content can cause annoyance among users.[10] This has led a significant number of Internet users to adopt privacy-enhancing technologies that limit the ability of firms to track them (e.g., anti-trackers) and/or limit the amount of online advertising they receive (e.g., ad-blockers): in a 2020 survey, 40% of U.S. respondents reported using some type of ad-blocking software.[11]

The growing popularity of ad-blockers and anti-trackers among consumers has not been well received by online advertising companies and online publishers. Recent research has attempted to estimate online publishers’ revenue losses due to ad-blockers and concluded that “ad-blocking poses a substantial threat to the ad-supported web.”[12] Regulators have responded to increasing concerns regarding online data collection by proposing, and in many cases enacting, more stringent privacy regulations, including the European General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Technology firms have also reacted in different ways. Some firms, such as Apple, have incorporated privacy protections into their products to limit tracking. Instead of limiting data collection, other firms, such as Google, are attempting to increase anonymity with technologies such as Federated Learning of Cohorts (FLoC).

In summary, the online advertising industry has often heralded the economic benefits of targeted online advertising, but its claims are set against the privacy concerns raised by the vast number of ad-tech companies tracking and analyzing consumers’ online behavior, often without consumers’ awareness. A battle is ongoing in both industry and policy circles around the value as well as the harm that can accrue to consumers (and other stakeholders) from online advertising, and in particular from the collection and use of vast amounts of personal data to target ads to individual consumers.

A paucity of research evidence

While firms in the online publishing and online advertising ecosystem have been vocal in arguing how limiting online tracking and advertising may harm their revenues, surprisingly little effort has been devoted to understanding how online advertising in general, and the ubiquitous mining and analysis of users’ personal data in particular, may affect users’ welfare, and how curtailing tracking and/or advertising ultimately affects users’ online purchasing behavior and economic outcomes.

The studies that have looked at the impact of online targeted advertising (as well as the opposing technologies of ad-blocking and anti-tracking) have taken the perspective of online publishers and the advertising industry rather than that of consumers themselves. For instance, the ad industry has suggested that $15.8 billion was lost due to the use of ad-blockers in 2017.[13] The Interactive Advertising Bureau (IAB) regards ad-blockers as a threat to the industry rather than a benefit.[14] If reducing ads or targeting imposes a cost, it is worth figuring out who bears that cost: consumers, advertisers, or publishers. Whereas research suggests that ad-blocking reduces site visits and reduces publishers’ revenues,[15] the effects of ad-blocking and anti-tracking software on consumers’ actual purchasing and search behaviors and satisfaction have not been studied comprehensively. Miroglio et al.[16] touched on the effect of ad-blocking on user engagement in browsing using observational Firefox browser usage data. Using propensity score matching, they found that ad-blocking users spend more time browsing relative to a matched control group without ad-blockers. In other words, ad-blocking appears to benefit consumers by enhancing engagement rather than imposing a cost. Yet without random assignment it is difficult to establish a causal relationship between ad-blocking usage and engagement: perhaps those who spend more time online are more likely to adopt ad-blockers because they are more exposed to advertising, and not the other way around. Furthermore, engagement is only one part of the consumer experience and consumer welfare. A related study[17] used observational data to investigate the extent to which display advertising led to online retailers’ customer acquisition. The authors analyzed web browsing histories from 13.6 million users over a 12-month period and found that the vast majority of shopping sessions start from channels other than display ads: web searches, search ads, e-mail marketing, or direct navigation.

Only recently have some authors started looking at how exposing or shielding consumers from advertising influences their browsing and/or purchasing behavior,[18] but those efforts have focused on experiments on a single platform or used observational data. To date, no experimental research has comprehensively studied the actual effects of the presence or absence of online ads, and their being or not being targeted, on metrics such as consumer browsing, search and purchase behavior, and consumer satisfaction. There is a need for large-scale field experiments that capture detailed individual user online behavior, focusing on the actual economic impact of ad-blockers and anti-trackers on end-users (consumers).

The experiment: Capturing the effect of ad-blocking and anti-tracking on consumer behavior

In the last few years, our team has been working on a research design and associated technical infrastructure to fill this research gap. We have designed and built the components for a large-scale field experiment on the economic impact of ad-blocking and anti-tracking technologies on consumers’ behavior and economic outcomes. The experiment focuses on consumers’ online behaviors (e.g., browsing and shopping) and their ultimate purchasing outcomes (as measured by amounts of money spent online, product prices paid, time spent on product searching, and purchase satisfaction). Understanding the impact of these technologies on consumer outcomes is crucial to assess the benefits and costs associated with the provision of online advertising, especially with the collection of personal information for the purpose of serving behaviorally targeted advertising, and thus to understand whether the purported benefits of online advertising compensate for the invasion of consumers’ privacy, as claimed by the online advertising industry. This is a critical question as regulatory and self-regulatory responses to growing concerns regarding online privacy are being discussed and implemented. Are the (supposed) harms caused by online tracking mainly a violation of privacy, or are there economic or other negative outcomes associated with them? Put another way, if we could create a perfectly privacy-preserving online tracking and targeting system, would that solve the ongoing concerns regarding the handling of personal data? Or is the problem deeper and more tightly associated with how leveraging such data affects consumers?

Our effort and focus are distinctly different from most of the large and growing body of work in economics, marketing, and information systems on the advertising economy. Most of the existing literature on online advertising has focused on its effects on advertisers, advertising networks, and online publishers. Typically, research on online advertising has measured the effectiveness of specific campaigns by certain advertisers. Researchers have sought to understand the relationship between targeting, click-through rates,[19] and conversion rates.[20] Another topic of inquiry has been how to determine the impact and subsequent effect on sales of ads that were viewed but not clicked.[21] As for research on consumers and online ads, work in that area has been mostly limited to determining how consumers react to and interact with advertising. For example, some researchers have studied whether users find targeted advertising intrusive and “creepy” or whether overexposure to advertising leads to “banner blindness.”[22] Finally, literature on privacy-enhancing technologies has primarily focused on determining how the use of these technologies can affect publishers’ revenues and thus the availability and quality of content.

Our research objective differs from prior literature in two important ways. First, we focus on consumers and seek to understand whether being shielded from online advertising in general, and from behaviorally targeted advertising in particular, affects consumers’ welfare. In particular, we analyze how the presence of anti-tracking or ad-blocking technologies influences users’ search and purchase behavior and, ultimately, purchase outcomes (such as amounts spent, prices paid, and satisfaction). Second, our methodology is based on a carefully designed longitudinal field study, which has greater ecological validity than a lab experiment would, thus enabling us to identify the real-world impact of ad-blocking and anti-tracking on consumer welfare. Below we briefly describe the infrastructure we have developed, our testing to date, and the questions we intend to address during our experiment.

A complex infrastructure

For this experiment, we developed complex software based on a previous project conducted at Carnegie Mellon University (the Security Behavior Observatory[23]). The software allows us to track, with participants’ awareness and consent, consumer product searching and purchasing behavior longitudinally for several months. Our study software consists of a browser extension that records participants’ browsing behaviors (related to product searching and shopping), advertising exposure, and online purchases, and an e-mail client that detects promotional e-mails and online purchase confirmations received by the participant. Moreover, the software allows us to remotely apply ad-blocking and anti-tracking manipulations. In this way, we assign participants to three experimental conditions: control, ad-blocking, and anti-tracking. In the control condition, we exert no intervention in the software; participants are exposed to all advertising, including behaviorally targeted advertising, as it naturally occurs online. In the anti-tracking condition, we opt participants out of targeted advertising using the online advertising industry’s self-regulatory approach for consumer control over targeted advertising;[24] participants are exposed to advertising, but those ads are not targeted to them. In the ad-blocking condition, online ads are blocked by our software, leveraging a prominent ad-blocking tool (Adblock Plus configured to block online ads at its maximum capacity). In this condition, participants’ exposure to ads is minimal or nonexistent (limited only by the effectiveness of state-of-the-art ad-blocking technologies). We also encourage users to implement the manipulations on their smartphones and tablets (and we are able to detect if they did so).
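
To make the design concrete, the sketch below illustrates, in Python and with entirely hypothetical names, how participants might be randomly assigned to one of the three conditions and how each condition could map to the manipulation the client software enforces. It is an illustration of the experimental logic, not the actual study software.

```python
# Illustrative sketch only: random assignment to the three experimental
# conditions and the client-side settings each condition implies.
import random
from enum import Enum

class Condition(Enum):
    CONTROL = "control"              # no intervention; ads served as usual
    ANTI_TRACKING = "anti_tracking"  # industry opt-out; ads shown but not targeted
    AD_BLOCKING = "ad_blocking"      # ads blocked by an ad-blocking tool

def assign_condition(rng: random.Random) -> Condition:
    """Randomly assign a participant to one of the three conditions."""
    return rng.choice(list(Condition))

def manipulation_settings(condition: Condition) -> dict:
    """Map a condition to the (hypothetical) settings the client would enforce."""
    return {
        Condition.CONTROL:       {"block_ads": False, "opt_out_of_targeting": False},
        Condition.ANTI_TRACKING: {"block_ads": False, "opt_out_of_targeting": True},
        Condition.AD_BLOCKING:   {"block_ads": True,  "opt_out_of_targeting": False},
    }[condition]

rng = random.Random(2021)  # fixed seed so the sketch is reproducible
condition = assign_condition(rng)
print(condition, manipulation_settings(condition))
```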

Our infrastructure uses a client-server architecture. The client components include the browser extension and e-mail client that we provide to participants to install on their devices. The server components provide web and database services to maintain secure and reliable collection and storage of the collected dataset. The overall goal of the infrastructure is to collect the required data while minimizing the collection of undesirable data (e.g., sensitive, identifiable, or unnecessary data). We achieve this goal by taking advantage of data-driven classification and redaction systems combined with curated whitelist and blacklist filtering in our client components.
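
As an illustration of this filtering step, the following sketch shows one way a client could combine whitelist/blacklist checks with simple redaction before a record is stored or transmitted. The domain lists and redaction rule are hypothetical examples, not the study’s actual filters.

```python
# Illustrative sketch of client-side filtering: keep shopping-related pages,
# never record sensitive ones, and strip query strings that may carry identifiers.
from urllib.parse import urlparse, urlunparse

SHOPPING_WHITELIST = {"amazon.com", "ebay.com", "walmart.com"}    # examples only
SENSITIVE_BLACKLIST = {"mybank.example", "healthportal.example"}  # examples only

def should_record(url: str) -> bool:
    """Record only whitelisted shopping domains; never record blacklisted ones."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in SENSITIVE_BLACKLIST:
        return False
    return host in SHOPPING_WHITELIST

def redact(url: str) -> str:
    """Drop the query string and fragment, which often carry tracking identifiers."""
    parts = urlparse(url)
    return urlunparse((parts.scheme, parts.netloc, parts.path, "", "", ""))

visit = "https://www.amazon.com/dp/B00EXAMPLE?tag=affiliate-123"
if should_record(visit):
    print(redact(visit))  # https://www.amazon.com/dp/B00EXAMPLE
```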

The sensitive nature of the dataset collected as part of the study requires that we follow security best practices to safeguard each participant’s information. Toward this goal, we first designed a secure server environment that is maintained by our university’s IT department and limits access to the research team. Within this environment, we deployed three server components: a web server for communicating with client applications, a storage server for recording collected data, and a database server for providing the dataset to the research team for analysis. We further integrated end-to-end encryption of the collected data using a system designed as part of the Security Behavior Observatory project.[25] Each participant’s data is encrypted by the client application with a unique key. The client application uses public key encryption to share this key with our web server, which then stores the key securely and separately from the collected data. In the client application, the collected data is continuously created, encrypted with the client’s key, and sent to our web server. Communications with the web server use Hypertext Transfer Protocol Secure (HTTPS) to provide additional security properties (server authentication) for the data in transit. Once the web server sends the collected data to our storage server, the data is decrypted, parsed, and inserted into an analysis database that uses role-based permissions and database-level encryption to further protect the participant data.
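
The sketch below illustrates the hybrid encryption pattern just described, using the Python cryptography library: a unique symmetric key per participant encrypts the collected data, and that key is itself shared with the server via public-key encryption. It is a simplified stand-in under these assumptions, not the Security Behavior Observatory’s actual implementation.

```python
# Illustrative hybrid-encryption sketch: per-participant symmetric key for the
# data, wrapped with the server's public key for transport.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Server side: generate a key pair; the public key ships with the client.
server_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public_key = server_private_key.public_key()

# Client side: a unique symmetric key per participant.
client_key = Fernet.generate_key()

# The symmetric key is shared with the server via public-key encryption...
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = server_public_key.encrypt(client_key, oaep)

# ...while the collected data itself is encrypted with the symmetric key.
record = b'{"event": "page_visit", "url": "https://example.com/product"}'
ciphertext = Fernet(client_key).encrypt(record)

# Storage server: unwrap the client key with the private key, then decrypt.
recovered_key = server_private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == record
```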

Where we are

To determine how users would react to the study software, to detect development issues, and to verify that we can correctly collect the data needed to answer our research questions, we have conducted three study pilots. The first pilot was conducted in late 2018, the second in May 2019, and the third in June 2021. We plan to complete an additional test before deploying the actual experiment. Our unique setting will allow us to answer a number of research questions that uncover the actual impact of ad-blocking and anti-tracking. Below we elaborate on the types of questions we will explore and how we can use the data we collect to address them in a causal way.

Do consumers spend more or less money to purchase products when they are not exposed to ads (vs. exposed to ads)?

The data we collect allows us to observe all the online spending of our study participants, as long as they complete the purchase on the computer where they installed the software or receive a purchase confirmation e-mail in the e-mail client included in our software. We can thus directly test whether the experimental manipulation, i.e., the use of ad-blockers and anti-trackers by some participants over a long period of time, has any effect on their total online spending.
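
As a simple illustration of this comparison, the sketch below (with entirely hypothetical numbers) tests whether mean total spending differs across the three randomly assigned conditions using a one-way ANOVA; an actual analysis would also account for covariates and the longitudinal structure of the data.

```python
# Illustrative analysis sketch with made-up data: comparing total online
# spending across the three conditions.
from scipy import stats

# Hypothetical total spending (USD) per participant over the study period.
control       = [412.0, 305.5, 520.0, 288.0, 610.5]
anti_tracking = [390.0, 450.0, 275.5, 505.0, 330.0]
ad_blocking   = [360.0, 298.0, 410.5, 275.0, 500.0]

f_stat, p_value = stats.f_oneway(control, anti_tracking, ad_blocking)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# Because condition assignment is random, a difference in means can be
# interpreted as a causal effect of the manipulation.
```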

Do consumers spend more or less time searching for product information if they are not exposed to ads?

As we collect information on browsing behaviors, we can compare, for example, how many shopping-related webpages participants visit on average per purchase they complete. If advertising reduces search time, we should observe that those who receive ads visit fewer pages per purchase than those in the ad-blocking and anti-tracking conditions.
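
To illustrate how such a metric could be derived from the collected logs, the sketch below (with hypothetical data and column names) computes shopping-related pages visited per completed purchase for each participant.

```python
# Illustrative sketch with made-up data: pages-per-purchase per participant.
import pandas as pd

browsing = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2"],
    "shopping_page": [True, True, False, True, True],
})
purchases = pd.DataFrame({
    "participant": ["p1", "p2", "p2"],
    "amount": [35.0, 120.0, 15.5],
})

pages = browsing[browsing["shopping_page"]].groupby("participant").size()
orders = purchases.groupby("participant").size()
pages_per_purchase = (pages / orders).rename("pages_per_purchase")
print(pages_per_purchase)  # p1: 2 pages / 1 purchase = 2.0; p2: 2 / 2 = 1.0
```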

Are consumers more or less satisfied with their online purchases across experimental conditions?

To answer this question, we directly ask participants in the exit survey about their satisfaction with some of their (randomly selected) purchases. We can evaluate whether purchases associated with any type of advertising differ in terms of post-purchase satisfaction from other purchases.

Does it matter (for expenditures, searching time, and product satisfaction) whether the ads received are behaviorally targeted or non-targeted?

One of the key advantages of our study design is that we randomly assign participants to the three experimental conditions and track their behaviors over a long period of time. We can thus explore whether the outcomes they experience are affected by the experimental condition they were assigned to.

Are participants in the ad-blocking condition more satisfied with their overall browsing experience, and do they spend more time consuming online content?

Most studies on attitudes towards online advertising are based on surveys that capture people’s perceptions of advertising. An interesting question is whether those stated perceptions change over time once we modify study participants’ exposure to ads. By asking participants about their attitudes towards advertising at the beginning and the end of our study, we can see whether shielding participants from advertising or from targeting increases or decreases their tolerance of advertising. We can also explicitly evaluate whether changes in advertising levels influence how much time participants spend consuming online content.

[6] The authors thank Vikash Anand, Gabriella Ardiles, Gaurav Deshpande, Ronak Dedhia, Zijun Ding, Gurinder Gill, Paritosh Gupta, Akshay Hundia, Madhukar Mohta, Sarthak Munshi, Keshav Pandey, Abhishek Parikh, Phani Krishna Pasumarthi, Akanksha Rawat, Vinit Shah, Kyle Yang, Yaman Yu, Richie Varghese, Divya Virmani, Zhengda Wu, and Anru Xu for excellent research assistantship, and Sarah Pearman and Logan Warberg for comments. The authors also gratefully acknowledge support from the Alfred P. Sloan Foundation, Carnegie Mellon’s CyLab Security and Privacy Institute, and the University of Pennsylvania Center for Technology, Innovation and Competition and the Warren Center for Network & Data Sciences. For a complete list of Acquisti’s additional grants, please visit https://www.heinz.cmu.edu/~acquisti/cv.htm.

[7] AudienceScience and DM2PRO, “Audience Targeting: State of the Industry Study,” 2010, https://www3.technologyevaluation.com/research/white-paper/audience-targeting-state-of-the-industry-study-whats-happening-now-and-whats-next.html; AdExchanger, “If A Consumer Asked You, ‘Why Is Tracking Good?’, What Would You Say?,” October 28, 2011, https://adexchanger.com/online-advertising/why-is-tracking-good/.

[8] Blase Ur et al., “Smart, Useful, Scary, Creepy: Perceptions of Online Behavioral Advertising,” in Proceedings of the Eighth Symposium on Usable Privacy and Security, SOUPS ’12 (New York, NY, USA: ACM, 2012), 4:1-4:15, https://doi.org/10.1145/2335356.2335362.

[9] Ryan Calo, “Digital Market Manipulation,” George Washington Law Review 82 (January 1, 2014): 995, https://digitalcommons.law.uw.edu/faculty-articles/25.

[10] Joseph Turow et al., “Americans Reject Tailored Advertising and Three Activities That Enable It,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, September 29, 2009), https://papers.ssrn.com/abstract=1478214; Mimi An, “Why People Block Ads (And What It Means for Marketers and Advertisers),” December 18, 2018, https://blog.hubspot.com/news-trends/why-people-block-ads-and-what-it-means-for-marketers-and-advertisers.

[11] AudienceProject, “Attitude towards Advertising and Use of Ad Blocking,” 2020, https://www.audienceproject.com/wp-content/uploads/audienceproject_study_attitude_towards_advertising_and_use_of_ad_blocking_2020.pdf?x54797.

[12] Ben Shiller, Joel Waldfogel, and Johnny Ryan, “Will Ad Blocking Break the Internet?,” Working Paper (National Bureau of Economic Research, January 2017), https://doi.org/10.3386/w23058.

[13] Laurie Sullivan, “U.S. Publishers: $15.8B Annual Revenue Lost To Ad Blocking,” October 16, 2017, https://www.mediapost.com/publications/article/308814/us-publishers-158b-annual-revenue-lost-to-ad.html.

[14] IAB, “Ad Blocking: Who Blocks Ads, Why and How to Win Them Back,” June 2016, https://www.iab.com/wp-content/uploads/2016/07/IAB-Ad-Blocking-2016-Who-Blocks-Ads-Why-and-How-to-Win-Them-Back.pdf.

[15] Shiller, Waldfogel, and Ryan, “Will Ad Blocking Break the Internet?”

[16] Ben Miroglio et al., “The Effect of Ad Blocking on User Engagement with the Web,” in Proceedings of the 2018 World Wide Web Conference, WWW ’18 (Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee, 2018), 813–21, https://doi.org/10.1145/3178876.3186162.

[17] Ceren Budak et al., “Understanding Emerging Threats to Online Advertising” (Proceedings of the 2016 ACM Conference on Economics and Computation, ACM, 2016), 561–78, https://doi.org/10.1145/2940716.2940787.

[18] Vilma Todri, “The Impact of Ad-Blockers on Online Consumer Behavior,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, July 16, 2020), https://papers.ssrn.com/abstract=3795713; Sarah Moshary, “Sponsored Search in Equilibrium: Evidence from Two Experiments,” SSRN Scholarly Paper (Rochester, NY: Social Science Research Network, August 6, 2021), https://doi.org/10.2139/ssrn.3903602.

[19] Ayman Farahat and Michael C. Bailey, “How Effective Is Targeted Advertising?,” in Proceedings of the 21st International Conference on World Wide Web, WWW ’12 (New York, NY, USA: ACM, 2012), 111–20, https://doi.org/10.1145/2187836.2187852.

[20] Oliver J. Rutz, Randolph E. Bucklin, and Garrett P. Sonnier, “A Latent Instrumental Variables Approach to Modeling Keyword Conversion in Paid Search Advertising,” Journal of Marketing Research 49, no. 3 (June 1, 2012): 306–19, https://doi.org/10.1509/jmr.10.0354.

[21] e.g. Anindya Ghose and Vilma Todri-Adamopoulos, “Toward a Digital Attribution Model: Measuring the Impact of Display Advertising on Online Consumer Behavior,” MIS Quarterly 40, no. 4 (December 2016): 889–910, https://doi.org/10.25300/MISQ/2016/40.4.05.

[22] Jan Panero Benway and David M. Lane, “Banner Blindness: Web Searchers Often Miss ‘Obvious’ Links,” Itg Newsletter 1, no. 3 (1998): 1–22, http://www.ruf.rice.edu/~lane/papers/banner_blindness.pdf.

[23] Alain Forget et al., “Building the Security Behavior Observatory: An Infrastructure for Long-Term Monitoring of Client Machines,” in Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, HotSoS ’14 (New York, NY, USA: ACM, 2014), 24:1-24:2, https://doi.org/10.1145/2600176.2600200.

[24] DAA, “Self-Regulatory Principles for Online Behavioral Advertising,” July 2009, http://digitaladvertisingalliance.org/sites/aboutads/files/DAA_files/seven-principles-07-01-09.pdf.

[25] Forget et al., “Building the Security Behavior Observatory.”