The prospect of autonomous cars raises a host of ethical challenges, including questions about life and death: what should such machines do (that is, decide to do) when lives are on the line? How should they, if they must, weigh one life against another, or a driver's single life against the lives of, say, several innocent bystanders? These are deep philosophical questions. But the piece quoted below (summarizing an academic paper) suggests that philosophical ethics isn't all that relevant here. The author points out that "lawyers and lawmakers will sort things out," which they may well do (to their own satisfaction, though not necessarily to anyone else's).

One good example: Tesla. The author suggests that rather than programming its Autopilot to go slow (for safety) or fast (for efficiency), the company programmed it to follow the speed limit, in order to limit legal liability. It listened, in other words, to its lawyers. But if this is true, the company has programmed its cars to act immorally: following the speed limit (say, 100 km/h) when everyone around you is driving 120 is itself unethical, because variation in speed is more dangerous than moderate speeding. There are moral limits to the practice of following lawyers' advice.
LINK: Lawyers, Not Ethicists, Will Solve the Robocar ‘Trolley Problem’ (by Aarian Marshall for Wired)
… In a paper published in Northwestern University Law Review, Stanford University researcher Bryan Casey deems the trolley problem irrelevant. He argues that it's already been solved—not by ethicists or engineers, but by the law. The companies building these cars will be "less concerned with esoteric questions of right and wrong than with concrete questions of predictive legal liability," he writes. Meaning, lawyers and lawmakers will sort things out…
What do you think?