Ethical considerations are inevitable design features of self-driving cars. At the very least, the software that tells a car how to respond to road hazards and dangerous traffic situations amounts to a set of imperatives about what is more valuable (and ought to be preserved) and what is less valuable (and may be sacrificed for the more valuable, where the two conflict).
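To make that concrete, consider a minimal sketch of a hazard-response planner. Everything here is hypothetical: the names (HARM_WEIGHTS, expected_harm, choose_maneuver), the outcome categories, and especially the numeric weights are inventions for illustration, not drawn from any real system. But any planner that scores candidate maneuvers must contain something playing the role of these weights, and choosing them just is ranking what may be sacrificed for what.

    # Hypothetical sketch of a hazard-response planner. The categories and
    # weights are illustrative only; the point is that assigning them is an
    # ethical act, whoever does it.

    # Penalty per class of outcome. These numbers encode a value ranking:
    # what ought to be preserved, and what may be sacrificed for it.
    HARM_WEIGHTS = {
        "pedestrian_injury": 1500.0,
        "occupant_injury": 800.0,
        "vehicle_damage": 10.0,
        "property_damage": 5.0,
    }

    def expected_harm(outcomes):
        """Score a maneuver: sum of (probability of outcome) x (its penalty)."""
        return sum(prob * HARM_WEIGHTS[kind] for kind, prob in outcomes)

    def choose_maneuver(candidates):
        """Pick the candidate maneuver with the lowest expected harm."""
        return min(candidates, key=lambda m: expected_harm(m["outcomes"]))

    # Two candidate responses to an out-of-control truck (cf. Hart's example
    # below), each with made-up outcome probabilities:
    candidates = [
        {"name": "brake_in_lane",
         "outcomes": [("occupant_injury", 0.3), ("vehicle_damage", 0.9)]},
        {"name": "swerve_to_sidewalk",
         "outcomes": [("pedestrian_injury", 0.2), ("property_damage", 0.5)]},
    ]

    print(choose_maneuver(candidates)["name"])  # "brake_in_lane" under these weights

Nudging any single weight can flip which maneuver is chosen; the dispute below is over who should get to set such numbers, and by what process.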
One interesting question about an ethics of self-driving cars is how it will evolve. Will it be a top-down process, in which government regulators settle on an ethical orientation and a set of design features flowing from it, embody these in regulatory standards, and amend those standards as data on traffic deaths and injuries come in? Or will it be a bottom-up process, in which self-driving automobile manufacturers pursue a diversity of designs (whether as implementations of differing ethical orientations or as differing implementations of the same orientation) and, through a game-theoretic process, evolve safer responses to one another's designs, distributed as software updates?
Here, Christopher Hart, chairman of the U.S. National Transportation Safety Board, weighs in strongly on the side of the top-down approach. Meanwhile, University of Washington robotics law expert Ryan Calo offers some considerations that can be read as favoring the bottom-up approach. Cal Poly San Luis Obispo philosopher Patrick Lin offers comments tending to support a top-down approach.
LINK: Top Safety Official Doesn’t Trust Automakers to Teach Ethics to Self-Driving Cars (by Andrew Rosenblum for MIT Technology Review)
Rapid progress on autonomous driving has led to concerns that future vehicles will have to make ethical choices, for example whether to swerve to avoid a crash if it would cause serious harm to people outside the vehicle.
Christopher Hart, chairman of the National Transportation Safety Board, shares those concerns. He told MIT Technology Review that federal regulations will be required to set the basic morals of autonomous vehicles, as well as safety standards for how reliable they must be.
…
Hart also said there would need to be rules for how ethical priorities are encoded into software. He gave the example of a self-driving car forced to choose between a potentially fatal collision with an out-of-control truck and veering onto the sidewalk into pedestrians. “That to me is going to take a federal government response to address,” said Hart. “Those kinds of ethical choices will be inevitable.”
…
Ryan Calo, an expert on robotics law at the University of Washington, … believes the real quandary is whether we are willing to deploy vehicles that will prevent many accidents but also make occasional deadly blunders that no human would. “If it encounters a shopping cart and a stroller at the same time, it won’t be able to make a moral decision that groceries are less important than people,” says Calo. “But what if it’s saving tens of thousands of lives overall because it’s safer than people?”
What do you think?