Saturday, May 21st, 2016
This morning we got into a discussion about the complexity of designing ethical driverless cars that meet social expectations. That’s a hard subject to articulate, mostly because there isn’t just one ethical framework, but also because there is probably no way to produce an ethical car that passengers would willingly step inside for a journey. We can’t even produce a safe car. Imagine if the car could insist, for the greater good, that you die.
When I see discussion about the design of an ethical driverless car, the question, at least for me, becomes “Which ethical framework are we talking about?” Utilitarianism? Kant’s Categorical Imperative? Ethical rights analysis? There is no hard and fast ethical regimen that would hold true in all cases.
In a Utilitarian analysis, for the greater good, the car might be designed to sacrifice one driver so the family of five in another car survives. But what if the other car was at fault? Is it ethical to sacrifice the single driver so an oncoming carload of people who made an error would be spared? Do we count five lives against the one life, or do we count each life as being of equal value to the individuals involved? Are younger lives more valuable than older lives? Would your gender, weight, health, criminal record or race be taken into account? Who makes that judgement? In the real world the Utilitarian perspective is a very cold calculation.
And if we run with Kant’s Categorical Imperative, then the maxim might be something like: “All cars will kill all drivers all the time.” Or: “No cars will kill any drivers any of the time.” I’d take a punt that the second is the maxim that makes sense. Ethical driverless cars should never kill drivers.