
Discussing the Ethics of Driverless Cars

This morning we got into a discussion about the complexity of designing ethical driverless cars that meet social expectations. That’s a hard subject to articulate, partly because there isn’t just one ethical framework, and partly because there is probably no way to produce an ethical car that passengers would willingly step inside for a journey. We can’t even produce a safe car. Imagine if the car could insist, for the greater good, that you die.

When I see discussion about the design of an ethical driverless car the question, at least for me, becomes “Which ethical framework are we talking about?” Utilitarianism? Kant’s Categorical Imperative? Ethical Rights analysis? There is no hard and fast ethical rule that would hold true in all cases.

In Utilitarian analysis, for the greater good, the car might be designed to sacrifice one driver so the family of five in another car survives. But what if the other car was at fault? Is it ethical to sacrifice the single driver so an oncoming carload who made an error would be spared? Do we count five lives against the one life; or, do we count each life as being of equal value to the individuals involved? Are younger lives more valuable than older lives? Would your gender, weight, health, criminal record or race be taken into account? Who makes that judgement? In the real World the Utilitarian perspective is a very cold calculation.
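To see just how cold that calculation is, here is a deliberately crude sketch of what a Utilitarian collision algorithm implies. Every function and name here is invented purely for illustration; no real driverless car is known to work this way.

```python
def utilitarian_cost(lives_at_risk, weights=None):
    """Sum the 'value' of the lives lost in one outcome.

    If weights is None, every life counts equally (value 1).
    Supplying weights is exactly the uncomfortable step described
    above: deciding that some lives count for more than others.
    """
    if weights is None:
        weights = [1] * len(lives_at_risk)
    return sum(weights)


def choose_outcome(outcome_a, outcome_b):
    """Pick whichever outcome costs fewer weighted lives."""
    if utilitarian_cost(outcome_a) <= utilitarian_cost(outcome_b):
        return outcome_a
    return outcome_b


# One driver versus a family of five, all lives weighted equally:
single_driver = ["driver"]
family_of_five = ["parent", "parent", "child", "child", "child"]

# The algorithm "chooses" the outcome in which only one person dies.
print(choose_outcome(single_driver, family_of_five))
```

Note that the hard questions in the paragraph above all hide inside the `weights` parameter: who assigns the weights, and on what basis?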

And if we run with Kant’s Categorical Imperative then the maxim might be something like: “All cars will kill all drivers all the time.” Or, “No cars will kill any drivers any of the time.” I’d take a punt that the second is the maxim that makes sense. Ethical driverless cars should never kill drivers.

The Categorical Imperative demands a universal maxim, a black and white rule, that applies to everybody at all times and is never discretionary. To all cars in all situations. Therefore, a Categorical Imperative car that also conforms to a Utilitarian analysis in which cars take lives makes no logical sense. Refer back to those maxims.

Under the Categorical Imperative, if all life is valuable and all people deserve autonomy, I don’t see how any designer of an ethical driverless car could arbitrarily introduce a computer algorithm for the taking of any passenger’s life, regardless of the Utilitarian justification. It’s like imagining a World where it’s OK to shoot down a jetliner to get at a dangerous politician. Or being euthanased by the Government because of a dangerous hereditary illness in your bloodline. The greater good can be a scary justification.

The third ethical framework covers exactly that – Rights Analysis. The “Right to Life” stands out, as does the “Right to Autonomy”.

How does Rights Analysis fit with designing a driverless car that would possibly take away the passenger’s autonomy AND life as an algorithmic objective? Under Rights Analysis, we’re discussing an unethical car.

I’d therefore argue that an ethical driverless car is impossible. It’s as elusive a concept as a safe car. Safe cars don’t exist; they are compromises of safety versus price in a hostile and dangerous environment.

In a newbie ethics class we were asked to describe a safe car. Give it a try. Cars carry petrol; there are drivers and passengers; cars are made of hard and sharp stuff and potentially crash into other hard and sharp stuff full of other people and a tank full of petrol. Cars travel at high speeds in all weathers and environments. Safety is a corporate trade-off without which there could be no cars.

Whereas, ethics is the study of black and white. Something is ethical, or not ethical. It’s impossible to be slightly ethical.

In the conversation about driverless cars the designers really just mean Utilitarianism, not the wider topic of Ethics. Utilitarianism is merely ONE ethical framework on which we could make decisions (as long as they don’t impinge on an individual’s Ethical Rights or fail the Categorical Imperative). I’m concerned because a World where the only ethical question is about the greater good is probably the most dangerous World we could conjure.


About the Author

Steven Clark - the stand up guy on this site

My name is Steven Clark (aka nortypig) and I live in Southern Tasmania. I have an MBA (Specialisation) and a Bachelor of Computing from the University of Tasmania. I'm a photographer making pictures with film. A web developer for money. A business consultant for fun. A journalist on paper. Dreams of owning the World. Idea champion. Paradox. Life partner to Megan.
