Issue 6

Technology will change, but what about ethics?

In a physically distanced world, the American media mogul Oprah Winfrey used the power of technology to pull off a successful “in-person” interview with Barack Obama, the former President of the United States. Although Oprah was in Santa Barbara, California, and Obama was in Washington, D.C., the green screen technology used for the interview made it appear as though the pair were sitting comfortably across from each other by Oprah’s fireplace in her Montecito mansion.

After the interview aired on Oprah’s Apple TV show, The Oprah Conversation, most viewers were stunned by what the technology was able to do. The interview unfolded seamlessly, and the two appeared to be in the same room throughout. The film industry, especially the Marvel franchise, makes extensive use of green screen technology, which has existed in the fictional space for a while now. But should such technology enter the news media space?

We live in a world where misinformation is constantly proliferating. False representations tend to dominate the media landscape because they are generated far faster than our ability to detect them. Advancements in artificial intelligence (AI) continue to blur our perceptive abilities, and we have reached a stage where we find it difficult to distinguish between real and fake digital representations. Amid this sea of misinformation, then, do we want technology like the one Oprah used to be pursued for journalistic endeavours?

Deepfakes, audio and video representations created with AI that show people saying and doing things they never actually said or did, first surfaced on the internet in 2017. For the first time, creators had the power to lip-sync audio or make other digital manipulations in a highly realistic way. The famous Obama deepfake is an example of how realistic they can get. Once the technology became cheaper and easier to apply, deepfakes quickly exploded across the internet. While the entertainment value of such technology is high, there is a rising and uncomfortable amount of malicious content.

The technology has acquired political value and is often used as a tool to amplify propaganda. Misrepresentations of political leaders and other public figures are frequently distributed to the masses. The technology possesses the power to undermine the credibility of journalism, manipulate elections and reduce trust in institutions, and its use so far has been mainly sinister. According to one study, 96% of deepfakes on the internet are pornographic, with most being non-consensual. Apart from damaging the reputation of individuals, deepfake AI has also raised broader ethical concerns. Most technologies have positive as well as negative outcomes, but the discourse on deepfake technology has been more critical than appreciative.

While it is essential to use technology ethically, maybe we need to take a step back and ask: is it morally right or wrong to use it in the first place? Even though Oprah publicly acknowledged the technology she was using, was mere disclosure enough? There is no doubt that technology holds power, and many of the ethical dilemmas we face today are an outcome of technology. Thus, when deliberating whether it is acceptable to deploy a particular technology in journalism, thinking through the ethical implications becomes important.

Different ethical principles result in differing approaches to such issues. Let us assume that Oprah is still deciding whether it is ethical to use the green screen technology for her interview. For Consequentialist Oprah, the decision would be governed by the outcomes of using the technology: she would have to deliberate whether the benefits of using it outweigh the costs. Kantian Oprah would follow a deontological approach: rather than looking at the consequences of her choice, her decision-making would rest on the idea of performing moral duties grounded in rationality. Virtue ethicist Oprah would decide by asking whether the act itself is virtuous, a decision based neither on duty nor on consequences.

When approaching whether or not to use a technology, it is important to look at things through these different ethical lenses because they provide insight into the kinds of moral conundrums a situation may cause. While the guidance from these theories often conflicts, together they lay out different choices and options. The decision-making process used to arrive at a conclusion is thus governed by a moral fabric.

Digital technologies have spawned new opportunities as well as challenges in the way we communicate today. A global shift to digital media has changed the way information is disseminated: through the internet, every individual has the ability to broadcast information to the masses. In an ideal world, we would expect all individuals to practise basic ethical standards. Since the world we live in is far from ideal, it is especially important for media professionals to be careful about the form and application of the technology they deploy, as it sets a precedent for others to follow. But even if journalistic codes are practised, some questions remain. Since technology keeps changing, which principles should be incorporated into decision-making? In case of ethical pitfalls, how can anyone be held accountable? Should we be guided by a regulatory framework? Who should make these decisions?

Picture Credit: Elena Lacey; Getty Images

Shrishti is a Politics, Philosophy and Economics major at Ashoka University. In her free time, you’ll find her cooking, dancing or photographing.

We publish all articles under a Creative Commons Attribution-Noderivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).


Humans v. AI: How automated decision making is a game changer for legal liability

The Trolley Problem, like many thought experiments, has a persistent shelf life. There is little to add to its 50-year documented history in and outside classrooms, except perhaps a footnote about its strange popularity in autonomous vehicle circles. This is evidenced by its crowdsourced avatar, dubbed the ‘Moral Machine’, which has been an inspiration to computer scientists and engineers within the Silicon Valley counterculture.

Fiercely debated and disavowed by philosophers, ethicists and behavioural psychologists, the trolley problem, it seems, ends exactly where we must begin: with the complexity of real-time decision making and the messy morality that follows the loss of a human life. The trolley problem isn’t theoretical anymore, and neither are the algorithms that sought to adapt it to the digital age.

Our case in point: in 2018, Silicon Valley awoke to an autonomous Uber killing a 49-year-old pedestrian in Tempe, Arizona. As one reporter succinctly summarised, what happens when a two-ton machine, one that is run by an assortment of sensors and computers and makes decisions foreign to human reasoning, comes into contact with the all too human textures of urban life?

A growing demand for and interest in scholarship at the intersection of law and technology identifies the immediate and real puzzles for legal systems, the state and tech corporations. The levers pulled by these three key actors will lay much of the groundwork and draw the battle lines.

The State and Digital Governance 

A public-private partnership paradigm forms much of the situational context for the testing and adoption of autonomous vehicles and other cyber-physical systems. When we consider the question of state responsibility, or even liability, in the aftermath of crashes in testing zones or general roll-out areas, this partnership between the state and tech corporations is increasingly transforming governance and producing new modes of surveillance. The question, as Jack Balkin put it, is not whether there will be a Surveillance State, but who is better suited to lead it. Big Tech is certainly an unprecedented contender.

New forms of governance are emerging in a transnational zone of ‘legal indistinction’, an operational space bound by legal systems specific to nations but beyond their borders. Here, the Tech Corporation, authorised by the state, exerts influence and dictates norms on issues ranging from cybersecurity, surveillance, intellectual property and user privacy to, most recently, pandemic contact tracing. In the case of the recent self-driving car crashes, state liability for allowing autonomous cars on the road without sufficient oversight is unlikely to fly as a legal standard outside of issues of faulty state-built infrastructure. Only a legislative attempt can compensate for the regulatory failure to establish safety standards or oversight.

The Determination of Legal Liability and Compensation

Over the past decade, legal scholars have described the situation of identifying legal liability for vehicles powered by autonomous decision-making software as a grey area where the law runs out. This typically creates room for courts to consider questions of legal liability, compensation and criminal action, while creating new legal tests and establishing precedent. However, the other key trend in legal responses to autonomous vehicle crashes is the use of the doctrine of product liability, rather than vehicular negligence, in cases involving damage caused by autonomous vehicles. What is clear to researchers working at the intersection of law and technology is that the current trend of moving cases involving autonomous vehicle collisions away from criminal liability and the courts, and towards civil suits and settlements, will prove to be a missed opportunity, because it can potentially chip away at the ability of courts to adjudicate and set new precedent. It also makes the debate on ‘product liability’ a fierce contest studied by both legal scholars and economists. The trade-off, then, is that settling these cases privately erodes the significant role legal systems could play while regulators catch up, and puts at risk the project of raising public knowledge and civil society awareness about automated decision making.

Scholars like Bryan Smith point out that a shift from the doctrine of vehicular negligence to ‘product liability’ advances, in the short run, the prevention of injury and the compensation of victims, while keeping the calculation of compensation fairly private between the tech companies and any human victims. In an economic context, the shift to ‘producer’s responsibility’ is a debate of its own: should we regulate the driver, the self-driving manufacturer, lawmakers or the car itself?

Legal Protections and Tech Corporations

A culture of codified secrecy is hardly new to the corporate form. A direct line can be traced from Wall Street to Silicon Valley and, by extension, between the legal personhood afforded to Big Banks and Big Tech. It is interesting to think about the distinctions drawn between financial and personal information in the digital age. The contrast almost vanishes when we consider Big Tech and its successful campaign to become the dominant corporate form, surpassing even Big Banks in its ability to amass information and then bundle it so that all information becomes inherently financial in our new digitally enabled surveillance paradigm.

In the age of algorithms, a behind-the-scenes look into the secretive and complex business models, practices and interfaces of leading tech platforms is critical both for users and for governments. These are referred to as ‘Black Box Systems’ precisely because algorithms and automated systems enhance the legal and practical secrecy afforded to tech companies; they become a blind spot for both regulators and consumers at large. Algorithms, which are largely covered by existing intellectual property standards, have also revived interest in property rights for the digital age.

More and more, predictive algorithms are impacting every aspect of our lives. The paucity of enforcement activity requiring moral justification and rationale makes it harder to track illegal or unethical discrimination in self-driving crashes. Frank Pasquale highlighted how predictive algorithms mine personal information to make guesses about individuals’ likely actions and risks. It thus becomes imperative to explore the consequences for human values of fairness and justice when scoring machines make judgments about individuals, in order to avoid arbitrary and discriminatory outcomes.

An unlikely but growing collaboration between independent researchers, former and current Big Tech employees, and legal and civil rights activists has been instrumental in making the implications of automated decision making public knowledge. This offers critical momentum for regulators and legal systems as they play catch-up to Big Tech’s bullish attempts to drive both the adoption of and research into cyber-physical systems. Most recently, Amazon’s controversial facial recognition software, and the company’s aggressive campaign for its adoption by law enforcement, came under scrutiny from several independent AI researchers, who detailed its higher error rates in identifying women of colour.

Tech companies’ input is essential in shaping the future of AI; however, they cannot retain absolute power over how their systems impact society or over how we evaluate that impact morally. To boost accountability and transparency, governments need to support independent AI research, create incentives for the industry to cooperate, and use that leverage to demand that tech companies share data in properly protected databases, with access granted to publicly funded artificial intelligence researchers.

The ‘internet of things’ is growing exponentially, generating unprecedented volumes of data. With autonomous vehicles hitting the roads in increasing numbers and lives at stake, it is necessary to ensure that the liable party is held accountable when things go utterly wrong. The goal of economists, lawyers and policymakers alike, then, would be to arrive at a ‘Pareto optimal’ scenario while ensuring that no party involved takes undue advantage of the others.

Arushi Massey is a research and teaching fellow at the Department of Political Science at Ashoka University. Her research focuses on the digital political economy and questions at the intersection of law and social theory.

We publish all articles under a Creative Commons Attribution-Noderivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis). 


Targeted ads: Is there an ethical, economically-viable alternative?

By Samyukta Prabhu

Online platforms like Facebook and Instagram have been widely discussed for reasons ranging from increased user data collection to rising misinformation and election manipulation. At the same time, rising internet penetration globally has improved access to information and opportunities like never before. In assessing the current state of the internet, therefore, there is an urgent need to address its limitations while ensuring that its strengths are not curtailed.

One way to do so is to address the common thread that ties together the above-mentioned pitfalls of online platforms: targeted advertising. The contention surrounding targeted advertising, however, is that it is the primary business model of such platforms, and so it is viewed as a necessary evil.

To better understand the nuances of this issue, it is helpful to explore how the business model of targeted ads works. This can help us assess the ramifications of potential regulation of the model, both economic and ethical.

As explained in a report by the United States’ Federal Trade Commission (FTC), the basic model of targeted advertising involves three players – consumers, websites and firms. Websites provide consumers with ‘free’ online services (news articles, search features) into which targeted ads are embedded. Firms pay the websites (through ad networks) for publishing their ads, and specify the attributes of their target audience. To target these ads, websites use consumers’ personal data (browsing habits, purchase history, demographic data, behavioural patterns) and provide analysed metrics to firms; this is used to improve the precision of future targeted ads. Firms are incentivised to improve targeting of their ads since they earn money when users buy the advertised products. This model improves over time, with increased user engagement, since the algorithms running the websites analyse collected data contemporaneously to optimise users’ news feeds. It thus follows that lax data privacy laws and user behavioural manipulation (to increase user engagement) greatly supplement the business model of targeted ads. Phenomena such as engaging with and spreading controversial content, as well as rewarding the highest paying ad firm with millions of users’ attention, are then some of the obvious consequences of such a business model.
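For readers who think in code, the data flow described above can be sketched in a few lines of Python. This is a minimal, purely illustrative model: the class names, attributes and numbers below are hypothetical assumptions made for the example, not drawn from the FTC report or any real ad platform.

```python
# Toy simulation of the three-player targeted ad model: consumers, a website, and firms.
# All names and values are hypothetical and purely illustrative.

from dataclasses import dataclass, field


@dataclass
class Consumer:
    user_id: str
    profile: dict                       # personal data the website has collected
    purchases: list = field(default_factory=list)


@dataclass
class Firm:
    name: str
    target: dict                        # attributes of the desired audience
    bid: float                          # payment to the website per ad shown
    product: str


class Website:
    """Matches firms' ads to consumers using collected personal data."""

    def __init__(self, firms):
        self.firms = firms
        self.metrics = {f.name: {"impressions": 0, "conversions": 0} for f in firms}

    def serve_ad(self, consumer):
        # Keep only firms whose targeting attributes match the consumer's profile,
        # then give the ad slot to the highest-paying matching firm.
        matches = [f for f in self.firms
                   if all(consumer.profile.get(k) == v for k, v in f.target.items())]
        if not matches:
            return None
        winner = max(matches, key=lambda f: f.bid)
        self.metrics[winner.name]["impressions"] += 1
        return winner

    def record_purchase(self, consumer, firm):
        # Conversion data flows back as analysed metrics, sharpening future targeting.
        consumer.purchases.append(firm.product)
        self.metrics[firm.name]["conversions"] += 1


# Usage: one consumer, two competing firms.
alice = Consumer("u1", {"age_group": "18-24", "interest": "fitness"})
firms = [Firm("ShoeCo", {"interest": "fitness"}, bid=2.0, product="running shoes"),
         Firm("BookCo", {"interest": "reading"}, bid=3.0, product="novels")]
site = Website(firms)

ad = site.serve_ad(alice)   # ShoeCo wins: it matches Alice's profile, BookCo does not
if ad:
    site.record_purchase(alice, ad)
print(site.metrics)
```

Even in this toy version, the incentive structure is visible: the more detailed the profile data the website holds, the better it can match ads and the more it earns, so collecting ever more personal data is always the profitable move.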

Over recent years, a few governments and regulatory bodies have taken select measures to address some concerns stemming from the targeted ad model. However, there often seem to be gaps in these regulations that are easily exploitable. For instance, the European Union’s General Data Protection Regulation (GDPR), a data protection and privacy law for the EU region, prohibits processing users’ personal data without their consent, unless explicitly permitted by the law. However, loopholes in Member States’ laws, such as Spain’s, allow political parties to obtain and analyse user data from publicly available sources. In 2016, a ProPublica report found that Facebook allowed advertisers to exclude people from viewing housing ads based on factors such as race. Facebook’s response was to limit targeting categories for advertisers offering housing, employment and credit opportunities, and to bar advertisers from using metrics such as zip codes (a proxy for race) as targeting filters. However, this is a temporary fix for a larger structural problem, as there exist multiple proxies for race and gender that can be used for targeting. We thus see that despite efforts to target specific concerns (such as data processing or algorithmic accountability) of online platforms, there exist legal loopholes that allow tech firms to override these regulations. Moreover, with rising billion-dollar revenues and tech innovations that far outpace legal reforms, there is increasing incentive for Big Tech firms to exploit targeted ad systems and maximise profits before the law finally catches up.

Niche regulations of the targeted ad system are thus unlikely to adequately address the rising concerns around online platforms. That leads us to a seemingly radical alternative: abandoning the targeted ad system altogether and exploring other models of online advertising. Such models would neutralise firms’ incentives to collect and analyse user data, since revenues would no longer depend on it. The FTC’s report suggests two such models: first, an “ad-supported business model without targeted ads”, similar to the advertising model in newspapers. Websites would use macro-level indicators to target broad audiences, but would not collect user data for micro-targeting or behavioural manipulation. Second, a “payment-supported business model without ads”, similar to Netflix, which charges the user a subscription fee. Some platforms (such as Spotify) currently work on a mixture of the two models: free to use with generic ads, or subscription-based without ads. The potential economic shortcomings of such a model include “increased search cost” for firms to find potential buyers of their product, and “decreased match quality” for consumers who might see unwanted generic ads. However, this model has been successful for several music streaming and OTT platforms (including Spotify and Netflix) and provides useful, customised services without the associated perils of targeted advertising.

There exist a few other measures that continue to work within the purview of the targeted ad system, but use established regulatory frameworks to skew the incentives of data collection and processing. One measure that has gained traction since Lina Khan’s seminal 2017 essay, Amazon’s Antitrust Paradox, is for anti-monopoly regulations as well as public utility regulations to be applied to Big Tech firms. Since these platforms effectively capture the majority of the market share for their respective products, they could be subject to anti-monopoly regulations, including breaking up the firm and separating the resulting divisions, to prevent data collection and processing across platforms (for instance, separating Facebook from its acquired platforms Instagram and WhatsApp). A more direct measure to limit data collection is to subject tech firms to data taxes. Another measure, public utility regulation, has been used throughout history to limit the harms of private control over shared public infrastructure, including electricity and water. Such regulations stipulate “fair treatment, common carriage, and non-discrimination as well as limits on extractive pricing and constraints on utility business models.” Since the internet (and its ‘synonymous’ platforms like Google and Facebook) is an essential resource in the 21st century and a principal source of information for the public, it can be argued that it is a public utility, and thus subject to the appropriate regulations. With the current state of the internet resting on user surveillance and behavioural manipulation, it easily violates the fundamental public utility requirement of “fair treatment”. Treating these online platforms as public utilities would ensure that they do not exploit the technological shortcomings of the law, and would ensure fairer access for their users.

In today’s world, where the internet is intertwined with most parts of one’s life, including politics, entertainment, education and work, it is of utmost importance that online platforms be recognised as a public resource for all, rather than a quid pro quo for surveillance and behavioural manipulation. An essential part of achieving this recognition is to adequately address the harms of the targeted ad system in an ethical and economically efficient manner.

Samyukta is a student of Economics, Finance and Media Studies at Ashoka University. In her free time, she enjoys discovering interesting long-form reads and exploring new board games.

We publish all articles under a Creative Commons Attribution-Noderivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).