
Humans v. AI: How automated decision making is a game changer for legal liability

Arushi Massey

What can self-driving car crashes teach us about ethics and responsibility in the digital age? Are the trends shifting legal liability away from Big Tech? Can the State regulate? The intersection of law and technology poses new problems for moral philosophers, legal scholars and regulators. Product liability may hold the answer.

The Trolley Problem, like many thought experiments, has had a remarkably long shelf life. There is little to add to its 50-year documented history in and outside classrooms, except a footnote on its strange popularity in autonomous vehicle circles. This is evidenced by its crowdsourced avatar, dubbed the 'Moral Machine', which has been an inspiration to computer scientists and engineers within the Silicon Valley counterculture.

Fiercely debated and disavowed by philosophers, ethicists and behavioural psychologists, the thought experiment nonetheless deposits us exactly where it ends: in the complexity of real-time decision making and the messy morality that follows the loss of a human life. The trolley problem isn't theoretical anymore, and neither are the algorithms that sought to adapt it to the digital age.

Our case in point: in 2018, Silicon Valley awoke to an autonomous Uber killing a 49-year-old pedestrian in Tempe, Arizona. As one reporter succinctly put it, what happens when a two-ton machine, one that is run by an assortment of sensors and computers and makes decisions foreign to human reasoning, comes into contact with the all-too-human textures of urban life?

A growing demand for and interest in scholarship at the intersection of law and technology identifies the immediate and real puzzles facing legal systems, the state and tech corporations. The levers pulled by these three key actors will lay much of the groundwork and draw the battle lines.

The State and Digital Governance 

A public-private partnership paradigm forms much of the situational context for the testing and adoption of autonomous vehicles and other cyber-physical systems. When we consider the question of state responsibility, or even liability, in the aftermath of crashes in testing zones or general roll-out areas, this partnership between the state and tech corporations is increasingly transforming governance and producing new modes of surveillance. The question, as Jack Balkin put it, is not whether there will be a Surveillance State, but who is better suited to lead it. Big Tech is certainly an unprecedented contender.

New forms of governance are emerging in a transnational zone of 'legal indistinction', an operational space bound by legal systems specific to nations but reaching beyond their borders. Here, the tech corporation, authorised by the state, exerts influence and dictates norms on issues ranging from cybersecurity, surveillance and intellectual property to user privacy and, most recently, pandemic contact tracing. In the case of the recent self-driving car crashes, state liability for allowing autonomous cars on the road without sufficient oversight is unlikely to hold up as a legal standard outside cases of faulty state-built infrastructure. Only legislation can compensate for the regulatory failure to establish safety standards and oversight.

The Determination of Legal Liability and Compensation

Over the past decade, legal scholars have described identifying legal liability for vehicles powered by autonomous decision-making software as a grey area where the law runs out. Such grey areas typically create room for courts to consider questions of legal liability, compensation and criminal action while fashioning new legal tests and establishing precedent. The other key trend in the legal responses to autonomous vehicle crashes, however, is the use of the doctrine of product liability, rather than vehicular negligence, in cases involving damage caused by autonomous vehicles. What is clear to researchers working at the intersection of law and technology is that the current trend of steering autonomous vehicle collisions away from criminal liability and the courts, and towards civil suits and settlements, will prove a missed opportunity, because it chips away at the ability of courts to adjudicate and set new precedent. It also makes the debate on product liability a fierce contest studied by both legal scholars and economists. The trade-off is this: settling these cases out of court diminishes the significant role legal systems could play while regulators catch up, and it puts at risk the project of raising public knowledge and civil society awareness of automated decision making.

Scholars like Bryan Smith point out that a shift from the doctrine of vehicular negligence to product liability advances, in the short run, the prevention of injury and the compensation of victims, while keeping the calculation of compensation largely private between the tech companies and any human victims. In economic terms, the shift to 'producer's responsibility' is a debate of its own: should the law target the drivers, the self-driving manufacturer, the lawmakers or the car itself?

Legal Protections and Tech Corporations

A culture of codified secrecy is hardly new to the corporate form. A direct line can be traced from Wall Street to Silicon Valley and, by extension, between the legal personhood afforded to Big Banks and Big Tech. It is interesting to think about the distinctions drawn between financial and personal information in the digital age. The contrast almost vanishes when we consider Big Tech and its successful campaign to become the dominant corporate form, surpassing even Big Banks in its ability to amass information and bundle it, so that all information becomes inherently financial in our new digitally enabled surveillance paradigm.

In the age of algorithms, a behind-the-scenes look into the secretive and complex business models, practices and interfaces of leading tech platforms is critical for both users and governments. These are referred to as 'Black Box Systems' precisely because the legal and practical secrecy that algorithms and automated systems afford tech companies makes them a blind spot for regulators and consumers at large. Algorithms, which are largely covered by existing intellectual property standards, have also revived interest in property rights for the digital age.

More and more, predictive algorithms impact every aspect of our lives. The paucity of enforcement activity requiring moral justification and rationale makes it harder to track illegal or unethical discrimination in automated decisions, including those made in self-driving car crashes. Frank Pasquale has highlighted how predictive algorithms mine personal information to make guesses about individuals' likely actions and risks. It thus becomes imperative to explore the consequences for human values of fairness and justice when scoring machines make judgments about individuals, lest those judgments become arbitrary and discriminatory.

An unlikely but growing collaboration among independent researchers, former and current Big Tech employees, and legal and civil rights activists has been instrumental in making the implications of automated decision making public knowledge. This offers critical momentum for regulators and legal systems as they play catch-up to Big Tech's bullish attempts to drive both the adoption of and research into cyber-physical systems. Most recently, Amazon's controversial facial recognition software and the company's aggressive campaign for its adoption by law enforcement came under scrutiny from several independent AI researchers, who detailed its higher error rates in identifying women of colour.

Tech companies' input is essential in shaping the future of AI; however, they cannot retain absolute power over how their systems impact society or over how we evaluate that impact morally. To boost accountability and transparency, governments need to support independent AI research, create incentives for the industry to cooperate, and use that leverage to demand that tech companies share data in properly protected databases, with access granted to publicly funded artificial intelligence researchers.

The 'internet of things' is growing exponentially, generating unprecedented volumes of data. With autonomous vehicles hitting the roads in increasing numbers and lives at stake, it is necessary to ensure that the liable party is held accountable when things go utterly wrong. The goal of economists, lawyers and policymakers alike, then, is to arrive at a Pareto-optimal arrangement while ensuring that no party involved takes undue advantage of the others.

Arushi Massey is a research and teaching fellow at the Department of Political Science at Ashoka University. Her research focuses on the digital political economy and questions at the intersection of law and social theory.

We publish all articles under a Creative Commons Attribution-NoDerivatives license. This means any news organisation, blog, website, newspaper or newsletter can republish our pieces for free, provided they attribute the original source (OpenAxis).
