
Technology will change, but what about ethics?

In a physically distanced world, American media mogul Oprah Winfrey used the power of technology to pull off a successful “in-person” interview with former US President Barack Obama. Although Oprah was in Santa Barbara, California and Obama in Washington, D.C., the green screen technology used for the interview made it appear as though the pair were sitting comfortably across from each other by Oprah’s fireplace in her Montecito mansion.

After the interview aired on Oprah’s Apple TV show, The Oprah Conversation, most people were stunned by what the technology was able to do. The interview proceeded seamlessly, and the two appeared to be in the same room throughout. The film industry, especially the Marvel franchise, makes extensive use of green screens, and technology like this has existed in the fictional space for a while now. But should the use of such technology enter the media space?

We live in a world where misinformation proliferates constantly. False representations tend to dominate the media landscape because they are generated much faster than we can detect them. Advancements in artificial intelligence (AI) continue to blur our perceptive abilities; we have reached a stage where it is difficult to distinguish between real and fake digital representations. So, amid this sea of misinformation, do we want technology like the one Oprah used to be pursued for journalistic endeavours?

Deepfakes, AI-generated audio and video representations of people saying and doing things they never actually said or did, first surfaced on the internet in 2017. For the first time, creators had the power to lip-sync audio or make other digital manipulations in a highly realistic way. The famous Obama deepfake is an example of how realistic they can get. Once the technology became cheaper and easier to apply, deepfakes quickly exploded across the internet. While the entertainment value of such technology is high, there is a rising and uncomfortable amount of malicious content.

The technology has acquired political value and is often used as a tool to amplify propaganda. Misrepresentations of political leaders and other public figures are frequently distributed to the masses. With the power to undermine the credibility of journalism, manipulate elections and erode trust in institutions, the use of this technology has been mainly sinister. According to one study, 96% of deepfakes on the internet are pornographic, most of them non-consensual. Apart from damaging the reputations of individuals, deepfake technology has also raised broader ethical concerns. Most technologies have positive as well as negative outcomes, but the discourse on deepfake technology has been more critical than appreciative.

While it is essential to use technology ethically, maybe we need to take a step back and ask: is it morally right or wrong to use it in the first place? Even though Oprah publicly acknowledged the technology she was using, was mere disclosure enough? There is no doubt that technology holds power, and many of the ethical dilemmas we face today are an outcome of technology. Thus, when deliberating whether or not it is acceptable to deploy a certain technology in journalism, thinking through the ethical implications becomes important.

Different ethical principles result in different approaches to such issues. Let us assume that Oprah is still deciding whether it is ethical to use green screen technology for her interview. For Consequentialist Oprah, the decision would be governed by the outcomes of using the technology: she would have to weigh whether its benefits outweigh its costs. Kantian Oprah would follow a deontological approach; rather than looking at the consequences of her choice, her decision-making would rest on performing moral duties grounded in rationality. Virtue ethicist Oprah would ask whether the act itself is virtuous, a decision based neither on duty nor on the consequences of the outcome.

When deciding whether or not to use a technology, it is important to look at things through these different ethical lenses because they provide insight into the kinds of moral conundrums a situation may cause. While the guidance from these theories often conflicts, together they lay out different choices and options. The decision-making process used to arrive at a conclusion is thus governed by a moral fabric.

Digital technologies have spawned new opportunities as well as challenges in the way we communicate today. A global shift to digital media has changed how information is disseminated: through the internet, every individual has the ability to broadcast information to the masses. In an ideal world, we would expect all individuals to practise basic ethical standards. Since the world we live in is far from ideal, it is especially important for media professionals to be careful about the form and application of the technology they deploy, as it sets a precedent for others to follow. But even if journalistic codes are practised, some questions remain. Since technology keeps changing, which principles should be incorporated into decision-making? In case of ethical pitfalls, who can be held accountable? Should we be guided by a regulatory framework? Who should make these decisions?

Picture Credit: Elena Lacey; Getty Images

Shrishti is a Politics, Philosophy and Economics major at Ashoka University. In her free time, you’ll find her cooking, dancing or taking photographs.

We publish all articles under a Creative Commons Attribution-Noderivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).


Humans v. AI: How automated decision making is a game changer for legal liability

The Trolley Problem, like many thought experiments, has a persistent shelf life. There is little to add to its 50-year documented history in and outside classrooms, except perhaps a footnote about its strange popularity in autonomous vehicle circles. This is evidenced by its crowdsourced avatar, dubbed the ‘Moral Machine’, which has been an inspiration to computer scientists and engineers within the Silicon Valley counterculture.

Fiercely debated and disavowed by philosophers, ethicists and behavioural psychologists, the trolley problem, it seems, ends exactly where we must begin: with the complexity of real-time decision making and the messy morality that follows the loss of a human life. The trolley problem isn’t theoretical anymore, and neither are the algorithms that sought to adapt it to the digital age.

Our case in point: in 2018, Silicon Valley awoke to an autonomous Uber killing a 49-year-old pedestrian in Tempe, Arizona. As one reporter succinctly summarised, what happens when a two-ton machine, one that is run by an assortment of sensors and computers and makes decisions foreign to human reasoning, comes into contact with the all too human textures of urban life?

A growing body of scholarship at the intersection of law and technology identifies the immediate and real puzzles facing legal systems, the state and tech corporations. The levers pulled by these three key actors will lay much of the groundwork and draw the battle lines.

The State and Digital Governance 

A public-private partnership paradigm forms much of the situational context for the testing and adoption of autonomous driving in cars and other cyber-physical systems. When we consider the question of state responsibility, or even liability, in the aftermath of crashes in testing zones or general roll-out areas, this partnership between the state and tech corporations is increasingly transforming governance and producing new modes of surveillance. The question, as Jack Balkin put it, is not whether there will be a Surveillance State, but who is better suited to lead it. Big Tech is certainly an unprecedented contender.

New forms of governance are emerging in a transnational zone of ‘legal indistinction’, an operational space bound to legal systems specific to nations yet beyond their borders. Here, the Tech Corporation, authorised by the state, exerts influence and dictates norms on issues ranging from cybersecurity, surveillance, intellectual property and user privacy to, most recently, pandemic contact tracing. In the case of the recent self-driving car crashes, state liability for allowing autonomous cars on the road without sufficient oversight is unlikely to hold as a legal standard outside of issues of faulty state-built infrastructure. Only legislation can compensate for the regulatory failure to establish safety standards or oversight.

The Determination of Legal Liability and Compensation

Over the past decade, legal scholars have described the task of identifying legal liability for vehicles powered by autonomous decision-making software as a grey area where the law runs out. Such gaps typically create room for courts to consider questions of legal liability, compensation and criminal action while devising new legal tests and establishing precedent. However, the other key trend in legal responses to autonomous vehicle crashes is the use of the doctrine of product liability, rather than vehicular negligence, in cases involving damage caused by autonomous vehicles. What is clear to researchers working at the intersection of law and technology is that the current trend of moving autonomous vehicle collision cases away from criminal liability and the courts, and towards civil suits and settlements, will prove to be a missed opportunity, because it chips away at the ability of courts to adjudicate and set new precedent. It also makes ‘product liability’ a fiercely contested debate studied by both legal scholars and economists. There is thus a trade-off: settling these cases privately rather than allowing them to be heard in court erodes the significant role legal systems could play while regulators catch up, and puts at risk the project of raising public and civil-society awareness of automated decision making.

Scholars like Bryan Smith point out that a shift from the doctrine of vehicular negligence to product liability advances, in the short run, the prevention of injury and the compensation of victims, while keeping the calculation of compensation largely private between the tech companies and any human victims. In economic terms, the shift to ‘producer’s responsibility’ is a debate of its own: should we regulate the driver, the manufacturer of the self-driving system, lawmakers, or the car itself?

Legal Protections and Tech Corporations

A culture of codified secrecy is hardly new to the corporate form. A direct line can be traced from Wall Street to Silicon Valley, and by extension between the legal personhood afforded to Big Banks and to Big Tech. It is interesting to think about the distinctions drawn between financial and personal information in the digital age. The contrast almost vanishes when we consider Big Tech’s successful campaign to become the dominant corporate form, surpassing even Big Banks in its ability to amass information and then bundle it, so that all information becomes inherently financial in our new digitally enabled surveillance paradigm.

In the age of algorithms, a behind-the-scenes look into the secretive and complex business models, practices and interfaces of leading tech platforms is critical both for users and for governments. These are referred to as ‘Black Box Systems’ precisely because algorithms and automated systems enhance the legal and practical secrecy afforded to tech companies; they become a blind spot for regulators and consumers alike. Algorithms, which are largely covered by existing intellectual property standards, have also revived interest in property rights for the digital age.

More and more, predictive algorithms impact every aspect of our lives. The paucity of enforcement activity requiring moral justification and rationale makes it harder to track illegal or unethical discrimination in self-driving car crashes. Frank Pasquale has highlighted how predictive algorithms mine personal information to make guesses about individuals’ likely actions and risks. It thus becomes imperative to explore the consequences for human values of fairness and justice when scoring machines make judgments about individuals, in order to avoid arbitrary and discriminatory outcomes.

An unlikely but growing collaboration between independent researchers, former and current Big Tech employees, and legal and civil rights activists has been instrumental in making the implications of automated decision making public knowledge. This offers critical momentum for regulators and legal systems as they play catch-up to Big Tech’s bullish attempts to drive both the adoption of and research into cyber-physical systems. Most recently, Amazon’s controversial facial recognition software, and the company’s aggressive campaign for its adoption by law enforcement, came under scrutiny from several independent AI researchers, who detailed its higher error rates in identifying women of colour.

Tech companies’ input is essential in shaping the future of AI; however, they cannot retain absolute power over how their systems impact society, or over how we evaluate that impact morally. To boost accountability and transparency, governments need to support independent AI research, create incentives for the industry to cooperate, and use that leverage to demand that tech companies share data in properly protected databases, with access granted to publicly funded artificial intelligence researchers.

The ‘internet of things’ is growing exponentially, generating unprecedented volumes of data. With autonomous vehicles hitting the roads in increasing numbers and lives at stake, it is necessary to ensure that the liable party is held accountable when things go utterly wrong. The goal of economists, lawyers and policymakers alike, then, would be to arrive at a ‘Pareto optimal’ scenario while ensuring that no party involved takes undue advantage of another.

Arushi Massey is a research and teaching fellow at the Department of Political Science at Ashoka University. Her research focuses on the digital political economy and questions at the intersection of law and social theory.

We publish all articles under a Creative Commons Attribution-Noderivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis). 


Here’s the Truth: We Believe Misinformation Because We Want To

By Pravish Agnihotri

On September 14, Buzzfeed News published a leaked memo from Sophie Zhang, a former data scientist at Facebook, revealing Facebook’s deep and muddy entanglement in manipulating public opinion for political ends. “I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count”, Zhang said.

The memo follows a piece by the WSJ in which Facebook was blamed for inaction in removing inflammatory posts by leaders of the ruling party, the BJP, fanning the flames of a deadly riot targeting Muslims in Delhi. As the upcoming Bihar election campaign moves online, social media platforms and their ability to moderate hate speech and misinformation will come under further scrutiny. A look at past events does not bode well.

In March, videos of Muslims licking currency, fruits and utensils were circulated online, blaming the Muslim community in India for the coronavirus outbreak. Health misinformation also abounds on social media, where a variety of unfounded treatments, from cow urine to mustard oil, are claimed as possible cures for the coronavirus. Alongside the rise in misinformation, we are also seeing the rise of a parallel, albeit much smaller, group of organisations debunking fake news. Misinformation, however, remains rampant.

Why does misinformation spread, even in the face of hard evidence? Interactions between our socio-historical context, our psychology, and business models of social media companies might hold the answer. 

The Context

The dissemination of information was once a monopoly of states and a few elite media organisations. Information flowed down a top-down hierarchy with the state at the apex, and the media naturally reflected elite interests. Information was scarce and its sources limited, and thus it was trusted. This changed with the arrival of television and was completely revolutionised by the arrival of the internet. Waves of information explosion changed not only how information was distributed but also how much of it was trusted. In his book The Revolt of the Public, Gurri argues, “once the monopoly on information is lost, so is our trust”. The shift from mere consumers of scarce media to hybrid creator-consumers of exponentially abundant information meant that every piece of information in the public domain became an object of scrutiny. In a world where everything could be false, anything could be the truth. It is in this context that we begin to understand misinformation.

Historian Carolyn Biltoft terms this new context the dematerialisation of life. In this context, beliefs are no longer formed on the basis of individual experience but are constantly challenged by heavily circulated new information. Believing new information also calls for larger leaps of faith, especially when it relates to science, technology, or the suffering of a distant community. Spiritual beliefs, and beliefs in the superiority of a race, a gender, or a form of family, all of which were strong sources of belonging, are now under question.

The Individual

Individuals increasingly find themselves unable to explain the world around them, unsure of their identity, and unable to look at themselves and their social group in a positive light. It is precisely this condition which makes these individuals vulnerable to misinformation. Various studies have found that people are more likely to believe in conspiracies when faced with epistemic, existential, and social dilemmas. Misinformation allows them to preserve existing beliefs, remain in control of their environment, and defend their social groups. 

One might expect that once presented with evidence, a reasonable individual would cease to believe in misinformation. Psychologists Kahneman and Haidt argue that the role of reason in the formation of beliefs might be overstated to begin with. Individuals rely on their intuition, and not their reason, to make ethical decisions. Reason is later employed to explain the decision already taken through intuitive moral shorthands. 

How are these intuitions formed? Through social interaction with other individuals. Individuals do not and cannot evaluate all possible interpretations and arguments about any topic. They depend on the wisdom of those around them. Individuals who share beliefs trust each other more. Formation of beliefs, hence, is not an individual activity, but a social one based on trust. 

The ability of our social networks to influence our beliefs has remained constant. The advent of social media, however, now provides us with the ability to carefully curate our social networks based on our beliefs. This creates a cycle of reinforcement in which existing beliefs, informed or misinformed, get solidified.

Even in homogeneous societies, one is bound to encounter those who disagree with one’s beliefs. Although such disagreements might be expected to check misinformation, studies have found that they can actually have the opposite effect. Olsson finds that social networks whose members agree with each other increase the intensity of their beliefs over time, and in the process lose trust in those who disagree with them. Another study finds that correcting misinformation can backfire, leading people to believe it even more strongly than before. Our instinct to learn from those we trust, and to mistrust those we disagree with, drives a wedge between groups. Engagement becomes an unlikely solution to misinformation.
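To make the dynamic concrete, here is a deliberately simplified toy simulation (not Olsson’s actual model): agents nudge their beliefs towards those they trust and lose trust in those whose beliefs sit far from their own, so over time correction from dissenters carries less and less weight. All numbers below are arbitrary illustrations.

```python
# Toy illustration only: agents update beliefs towards trusted peers and
# lose trust in peers who disagree. Not a reproduction of any published model.
import numpy as np

rng = np.random.default_rng(1)
n_agents = 20
beliefs = rng.uniform(0, 1, n_agents)      # each agent's belief in a claim, 0 to 1
trust = np.ones((n_agents, n_agents))      # start with equal trust in everyone

for step in range(200):
    for i in range(n_agents):
        weights = trust[i] / trust[i].sum()
        # Nudge belief towards the trust-weighted average of the network
        beliefs[i] += 0.05 * (weights @ beliefs - beliefs[i])
        # Trust grows with agreement and decays with disagreement
        trust[i] = np.clip(
            trust[i] + 0.1 * (0.2 - np.abs(beliefs - beliefs[i])), 0.01, None
        )

print("Final beliefs (sorted):", np.round(np.sort(beliefs), 2))
```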

Our socio-historical context predisposes us to misinformation; its social nature strengthens our belief in it and makes us immune to correction. Social media, then, acts as a trigger to the already loaded gun of misinformation.

The Platform

The misinformation epidemic cannot be attributed to human biases alone. Social media companies and their monetisation models are part of the problem. Despite the coronavirus slashing ad revenues, and an ad boycott by over 200 companies over its handling of hate speech, Facebook clocked $18.7 billion in revenue in the second quarter of 2020. Twitter managed to rake in $686 million. Advertising revenues constitute the largest part of these astronomical earnings.

The business model of all social media companies aims to maximise two things: the amount of time users spend on the platform, and their engagement with other individuals, pages and posts. All the while, these companies collect a host of information about their users, which can include demographics, preferences and even political beliefs, to create extremely accurate personality profiles.

A recent study found that computers outperform humans when it comes to making personality judgements using an individual’s digital footprint. According to the study, the computer models require data on 10, 70, 150 and 300 of an individual’s likes to outperform their work colleagues, friends, family members, and spouses respectively. These models are sometimes better than the individuals themselves at predicting patterns of substance abuse, health, and political attitudes. This data is then used to customise content and advertisements for every individual, creating echo chambers. In another study, Claire Wardle finds that humans regularly employ repetition and familiarity to gauge the trustworthiness of new information. If an individual’s beliefs are misinformed to begin with, these algorithms can further strengthen them through sheer repetition. These models can also predict what an individual finds most persuasive, and then ‘microtarget’ them with content, legitimising misinformation in the consumer’s eyes.
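The study’s actual models are not reproduced here, but a minimal sketch of the general technique, fitting a regularised linear model to a users-by-likes matrix, illustrates how a digital footprint becomes a personality estimate. Everything below (the random data, the feature counts, the choice of ridge regression) is a hypothetical stand-in, not the researchers’ code.

```python
# Illustrative sketch only: a toy version of predicting a personality trait
# from "likes". All data is randomly generated; the real study's pipeline,
# features and scale are far more sophisticated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_pages = 1000, 500                          # hypothetical users and likeable pages
likes = rng.integers(0, 2, size=(n_users, n_pages))   # 1 = user liked that page

# Pretend a handful of pages carry signal about a trait (say, openness)
true_weights = np.zeros(n_pages)
true_weights[:20] = rng.normal(0, 1, 20)
trait = likes @ true_weights + rng.normal(0, 0.5, n_users)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
print("Correlation with held-out trait scores:",
      round(float(np.corrcoef(model.predict(X_test), y_test)[0, 1]), 2))
```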

As Facebook’s revenue shows, public opinion can be an extremely valuable commodity. It determines what you buy, what precautions you take (or don’t) in a global pandemic, even who you vote for. By arming those with vested interests in public opinion with accurate and effective tools of persuasion, the business models of social media companies end up facilitating the spread of misinformation. 

The truth is often nuanced, resists simplification and, if it disagrees with your beliefs, can be off-putting. None of this makes the truth likely to go viral. Misinformation, on the other hand, tends to be reductive, sensational and, perhaps most dangerously, easier to understand. It also relies on emotion to make the reader believe it. This makes misinformation more likely to spread across the internet. A study conducted at MIT corroborates this claim: falsehoods on Twitter were found to reach users six times faster than truths.

The ultimate goal of social media algorithms is to maximise engagement. As engagement with a post containing misinformation increases, algorithms can expand its reach on account of its likely popularity. Further, microtargeting ensures that such posts are shared with individuals who are more likely to agree with the information and share it themselves. When controversial content leads to higher engagement, misinformation becomes profitable. Economic reasoning alone can lead social media companies to condone and, in worse cases, actively promote its dissemination.
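No platform discloses its real ranking system, but a toy sketch of engagement-based ranking shows the incentive at work: if the score rewards predicted clicks and reshares alone, a post’s accuracy never enters the calculation. The weights, fields and example posts below are invented for illustration.

```python
# Deliberately simplified sketch of engagement-based feed ranking.
# Not any platform's actual algorithm; it only shows that a score built
# purely from predicted engagement ignores whether a post is accurate.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click-through
    predicted_shares: float   # model's estimate of reshares
    is_accurate: bool         # known only to fact-checkers, unused below

def engagement_score(post: Post) -> float:
    # Hypothetical weights; reshares drive reach, so weight them higher
    return 1.0 * post.predicted_clicks + 3.0 * post.predicted_shares

feed = [
    Post("Nuanced explainer with caveats", 0.02, 0.01, True),
    Post("Sensational claim, emotionally charged", 0.08, 0.06, False),
]

# Ranking by engagement alone surfaces the sensational post first
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post.text)
```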

Our unique context, our instincts and biases, and the business models of social media platforms interact endlessly to create layers upon layers of reinforcing mechanisms that spread misinformation and make us believe in it. Artificial Intelligence is now being called on to fight and weed out misinformation from social media platforms. However, for any solution to be effective, it would need to address the interactions between the three. 

Pravish is a student of Political Science, International Relations, Economics and Media Studies at Ashoka University.

We publish all articles under a Creative Commons Attribution-Noderivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).