By Samyukta Prabhu
Online platforms like Facebook and Instagram have been widely discussed for reasons ranging from increased user data collection to rising misinformation and election manipulation. At the same time, rising internet penetration globally has improved access to information and opportunities like never before. In assessing the current state of the internet, therefore, there is an urgent need to address its limitations while ensuring that its strengths are not curtailed.
One way to do so is to address the common thread that ties together the above-mentioned pitfalls of online platforms – targeted advertising. Targeted advertising is contentious, however, because it is the primary business model of such platforms, and is therefore often viewed as a necessary evil.
To better understand the nuances of this issue, it is helpful to explore how the business model of targeted ads works. This can help us assess the ramifications of potential regulations to the model – both economically as well as ethically.
As explained in a report by the United States’ Federal Trade Commission (FTC), the basic model of targeted advertising involves three players – consumers, websites and firms. Websites provide consumers with ‘free’ online services (news articles, search features) into which targeted ads are embedded. Firms pay the websites (through ad networks) for publishing their ads, and specify the attributes of their target audience. To target these ads, websites use consumers’ personal data (browsing habits, purchase history, demographic data, behavioural patterns) and provide analysed metrics to firms; this is used to improve the precision of future targeted ads.

Firms are incentivised to improve targeting of their ads since they earn money when users buy the advertised products. This model improves over time, with increased user engagement, since the algorithms running the websites analyse collected data contemporaneously to optimise users’ news feeds. It thus follows that lax data privacy laws and user behavioural manipulation (to increase user engagement) greatly supplement the business model of targeted ads. Phenomena such as engaging with and spreading controversial content, as well as rewarding the highest paying ad firm with millions of users’ attention, are then some of the obvious consequences of such a business model.
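The three-player flow described above can be sketched as a toy simulation. Every name, attribute and payment figure below is an illustrative assumption for exposition, not data or mechanics from any real platform or from the FTC report:

```python
# Toy sketch of the three-player targeted-ad model: consumers, a website,
# and firms that pay the website to reach users matching target attributes.
# All names and figures are illustrative assumptions.

users = [
    {"id": 1, "interests": {"sports", "news"}, "age": 24},
    {"id": 2, "interests": {"cooking"}, "age": 41},
    {"id": 3, "interests": {"sports", "cooking"}, "age": 33},
]

# Firms pay the website (via an ad network) and specify target attributes.
ads = [
    {"firm": "SportsCo", "target_interest": "sports", "payment_per_view": 0.05},
    {"firm": "KitchenCo", "target_interest": "cooking", "payment_per_view": 0.04},
]

def serve_ads(users, ads):
    """Match each ad to users whose collected data fits the firm's targeting,
    and tally the website's resulting ad revenue."""
    revenue = 0.0
    served = []
    for ad in ads:
        matched = [u["id"] for u in users if ad["target_interest"] in u["interests"]]
        revenue += ad["payment_per_view"] * len(matched)
        served.append((ad["firm"], matched))
    return served, revenue

served, revenue = serve_ads(users, ads)
print(served)              # which users saw which firm's ad
print(round(revenue, 2))   # the website's ad revenue
```

The sketch makes the incentive structure visible: the more personal data the website collects per user, the more precisely `serve_ads` can match, and the more firms will pay per view.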
Over recent years, a few governments and regulatory bodies have taken select measures to address some concerns stemming from the targeted ad model. However, there often seem to be gaps in these regulations that are easily exploitable. For instance, the European Union’s General Data Protection Regulation (GDPR), a data protection and privacy law for the EU region, prohibits processing users’ personal data without their consent, unless explicitly permitted by the law. However, loopholes remain in Member States’ laws: Spanish law, for instance, allows political parties to obtain and analyse user data from publicly available sources. In 2016, a ProPublica report found that Facebook allowed advertisers to exclude people from viewing housing ads based on factors such as race. Facebook’s response was to limit targeting categories for advertisers offering housing, employment and credit opportunities, and to bar advertisers from using metrics such as zip codes (a proxy for race) as targeting filters. However, this is a temporary fix for a larger structural problem, as there exist multiple proxies for race and gender that can be used for targeting. We thus see that despite efforts to target specific concerns (such as data processing, or algorithmic accountability) of online platforms, there exist legal loopholes that allow tech firms to circumvent these regulations. Moreover, with rising billion-dollar revenues and tech innovations that far outpace legal reforms, there is increasing incentive for Big Tech firms to exploit targeted ad systems and maximise profits before the law finally catches up.
As we can see, niche regulations of the targeted ad system are unlikely to adequately address the rising concerns about online platforms. That leads us to a seemingly radical alternative: abandoning the targeted ad system altogether and exploring other models of online advertising. Such models would neutralise firms’ incentives to collect and analyse user data, since revenues would no longer depend on it. The FTC’s report suggests two such models: first, an “ad-supported business model without targeted ads” – similar to the advertising model in newspapers. Websites would use macro-level indicators to target broad audiences, but would not collect user data for micro-targeting or behavioural manipulation. Second, a “payment-supported business model without ads” – similar to Netflix, which charges users a subscription fee. Some platforms (such as Spotify) currently work on a mixture of the two models – free to use with generic ads, or subscription-based without ads. The potential economic shortcomings of such models include “increased search cost” for firms to find potential buyers of their products, and “decreased match quality” for consumers who might see unwanted generic ads. However, these models have been successful for several music streaming and OTT platforms (including Spotify and Netflix) and ensure useful, customised services without the associated perils of targeted advertising.
There exist a few other measures that continue to work within the purview of the targeted ad system, but use established regulatory frameworks to skew the incentives of data collection and processing. One such measure, which has gained traction since Lina Khan’s seminal 2017 essay, Amazon’s Antitrust Paradox, is for anti-monopoly regulations as well as public utility regulations to be applied to Big Tech firms. Since these platforms effectively capture the majority of the market share for their respective products, they could be subject to anti-monopoly regulations, including breaking up a firm and separating its divisions to prevent data collection and processing across platforms (for instance, separating Facebook from its acquired platforms Instagram and WhatsApp). A more direct measure to limit data collection is to subject tech firms to data taxes. Another measure, public utility regulation, has been used throughout history to limit the harms of private control over shared public infrastructure such as electricity and water. Such regulations stipulate “fair treatment, common carriage, and non-discrimination as well as limits on extractive pricing and constraints on utility business models.” Since the internet (and its ‘synonymous’ platforms like Google and Facebook) is an essential resource in the 21st century, being a principal source of information for the public, it can be argued that it is a public utility and should be subject to the appropriate regulations. With its current reliance on user surveillance and behavioural manipulation, the targeted ad model easily violates the fundamental public utility requirement of “fair treatment”. Treating these online platforms as public utilities would ensure that they do not exploit the technological shortcomings of the law, and would ensure fairer access for their users.
In today’s world, where the internet is intertwined with most parts of one’s life – politics, entertainment, education and work – it is of utmost importance that online platforms be recognised as a public resource for all, rather than a quid pro quo for surveillance and behavioural manipulation. An essential part of achieving this recognition is to adequately address the harms of the targeted ad system, in an ethical and economically efficient manner.
Samyukta is a student of Economics, Finance and Media Studies at Ashoka University. In her free time, she enjoys discovering interesting long-form reads and exploring new board games.
We publish all articles under a Creative Commons Attribution-NoDerivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).