If you think that self-driving automobiles present only a technological challenge, you’re not thinking hard enough. They present an ethical challenge as well. Ought a self-driving car’s algorithm to treat its passengers as objects of fiduciary care (acting so as to maximize the likelihood that this car’s passengers survive a bad situation)? Or ought it instead to act to minimize all fatalities (even at the cost of the lives of this car’s passengers)? If the manufacturers of self-driving automobiles are going to be free to program their own vehicles, this isn’t (only) a question of bioethics, as the article suggests, but of business ethics.
How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall? The computers will certainly be fast enough to make a reasoned judgment within milliseconds. They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?
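To make the contrast between the two policies concrete, here is a minimal, purely illustrative sketch in Python. It is not any manufacturer’s actual logic; the maneuver names, risk numbers, and scoring functions are all hypothetical, chosen only to show how the two objectives can diverge in the blown-tire scenario above.

```python
# Toy sketch (hypothetical): comparing a passenger-first policy with a
# harm-minimizing policy. All names and numbers are invented for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    passenger_fatality_risk: float    # estimated risk to this car's occupants (0-1)
    other_fatalities_expected: float  # expected deaths outside this car
    passengers: int                   # number of occupants in this car

def passenger_first(options: list[Maneuver]) -> Maneuver:
    """Fiduciary policy: pick the maneuver safest for this car's passengers."""
    return min(options, key=lambda m: m.passenger_fatality_risk)

def minimize_harm(options: list[Maneuver]) -> Maneuver:
    """Harm-minimizing policy: pick the maneuver with the fewest expected deaths overall."""
    return min(
        options,
        key=lambda m: m.passenger_fatality_risk * m.passengers
        + m.other_fatalities_expected,
    )

if __name__ == "__main__":
    # The blown-tire scenario from the passage, with made-up numbers.
    options = [
        Maneuver("swerve into oncoming traffic",
                 passenger_fatality_risk=0.2,
                 other_fatalities_expected=1.5,
                 passengers=1),
        Maneuver("steer into the retaining wall",
                 passenger_fatality_risk=0.6,
                 other_fatalities_expected=0.0,
                 passengers=1),
    ]
    print("Passenger-first policy chooses:", passenger_first(options).name)
    print("Harm-minimizing policy chooses:", minimize_harm(options).name)
```

With these invented numbers, the passenger-first policy swerves into oncoming traffic, while the harm-minimizing policy steers into the retaining wall: the entire ethical question reduces to which objective function the manufacturer chooses to write.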
What do you think?