Do Self-Driving Cars Need Morals?

Will an ethical code become a specification in the same way we look at engine specifications or safety equipment?

June 22, 2016

With news of self-driving cars flooding both online and print media, many carmakers are not only testing autonomous vehicle technology but also hinting that we will see multiple versions of self-driving cars in the near future. Google, for example, has captured recent headlines with its test fleet of self-driving vehicles, which max out at 25 miles per hour. But exactly when can we expect to see these cars on the road, and in what numbers?

Many industry experts consider 10 million fully autonomous vehicles on the road by 2025 a likely outcome. That potential, coupled with recent accidents between self-driving and human-driven vehicles, brings many concerns to the surface.

To date, the reported accidents have been a direct result of the autonomous vehicles’ reliance solely on if-then programming. For example, an autonomous vehicle won’t exceed the posted speed limit or cross solid white lines, things human drivers do every day to merge safely into traffic. If there doesn’t appear to be enough space to merge, the computer-driven vehicle’s response is inaction. A human driver, by contrast, will use the turn signal to show intent to merge, hoping that oncoming drivers will slow down and allow the vehicle into the lane.
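To make that rigidity concrete, here is a minimal sketch of the kind of if-then merge logic described above. The function name, gap threshold, and values are illustrative assumptions, not any manufacturer’s actual code:

```python
# Hypothetical sketch of rigid if-then merge logic; the names and
# thresholds here are illustrative, not any vendor's implementation.

MIN_MERGE_GAP_M = 30.0  # clear space required in the target lane, in meters

def merge_decision(gap_m: float, speed_mps: float, limit_mps: float) -> str:
    """Decide the vehicle's next action using only hard if-then rules."""
    # Rule 1: never exceed the posted speed limit, even to complete a merge.
    if speed_mps >= limit_mps:
        return "hold_speed"
    # Rule 2: merge only if the measured gap already meets the threshold.
    if gap_m >= MIN_MERGE_GAP_M:
        return "merge"
    # Otherwise: inaction. A human driver would instead signal intent and
    # negotiate, expecting oncoming traffic to open a gap.
    return "wait"

# With a 20 m gap, the rule-based car simply waits, indefinitely if need be.
print(merge_decision(gap_m=20.0, speed_mps=25.0, limit_mps=29.0))  # "wait"
```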

Is it truly possible to program a car to be prepared for every situation? I don’t believe it is, but then humans aren’t flawlessly programmed for every situation either. Driving takes good judgment, which may not be completely logical in all circumstances. Humans will veer into the emergency lane or up onto a curb to avoid hitting someone who brakes suddenly in front of them. The fact is that ethics and the law often diverge, and good judgment can cause someone to act illogically. But how would that be programmed?

Beyond these real-world shortfalls comes the potential “trolley problem.” The trolley problem is a thought experiment in ethics that many teachers use to challenge students with a dilemma. The general form of the problem is this: there is a runaway trolley car careening down the track. Ahead on the track there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard next to a lever. If you pull the lever, the trolley will switch to a different spur. However, on that spur there is one person tied to the track. You have two options: (1) do nothing, and the trolley kills the five people on the main track or (2) pull the lever, diverting the trolley onto the side spur where it will kill one person. Which is the correct choice?

Let’s extrapolate that to a self-driving car scenario. An accident is unavoidable, and the choices are to (1) protect the passenger inside the self-driving car or (2) protect five pedestrians or occupants of another vehicle from injury or death. Which is the right choice? Programming a moral calculation into a driverless car may not be easy. A human, acting on instinct, will likely act first out of self or family/passenger preservation, and only second to incur the least damage or injury to others outside the vehicle.
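To see why such a calculation is uncomfortable, consider a deliberately oversimplified sketch of how a programmer might encode the trade-off. The weights, probabilities, and function names are hypothetical assumptions; no manufacturer has published such a formula:

```python
# Deliberately oversimplified, hypothetical moral cost function. The
# weights are the programmer's ethical imprint, not any settled policy.

PASSENGER_WEIGHT = 1.0  # relative value placed on this vehicle's occupants
OUTSIDER_WEIGHT = 1.0   # relative value placed on people outside the vehicle

def expected_harm(people: int, injury_prob: float, weight: float) -> float:
    """Weighted expected harm for one candidate maneuver."""
    return people * injury_prob * weight

def choose_maneuver() -> str:
    # Option 1: swerve, sparing the lone passenger but endangering five others.
    harm_swerve = expected_harm(people=5, injury_prob=0.9, weight=OUTSIDER_WEIGHT)
    # Option 2: brake straight ahead, endangering the lone passenger.
    harm_brake = expected_harm(people=1, injury_prob=0.9, weight=PASSENGER_WEIGHT)
    return "swerve" if harm_swerve < harm_brake else "brake"

# With equal weights, the math sacrifices the passenger; set PASSENGER_WEIGHT
# above 5.0 and the same code sacrifices the five. The ethics live in a constant.
print(choose_maneuver())  # -> "brake"
```

Nothing in the arithmetic is hard; the hard part is that someone must pick the constants.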

For the self-driving car, it may well come down to the programmer. Would the ethical code become a specification on a driverless car the way we look at engine specifications or safety equipment? With multiple offerings of self-driving technology and different programmers authoring the vehicle’s decision-making capabilities, each vehicle type could have a different ethical imprint. Should that be standardized? If so, who would manage it?
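If an ethical imprint ever did become a published specification, it might look as mundane as any other line on a spec sheet. The following is pure speculation about what such a disclosure could resemble; none of these fields or standards exist today:

```python
# Purely speculative: an "ethics specification" disclosed alongside
# horsepower and airbag counts. Every field here is invented.
vehicle_spec = {
    "engine": {"power_kw": 150, "torque_nm": 310},
    "safety": {"airbags": 8, "crash_rating_stars": 5},
    "ethics": {
        "decision_model": "utilitarian-v1",  # hypothetical standard name
        "passenger_weight": 1.0,             # relative value of occupants
        "outsider_weight": 1.0,              # relative value of others
        "certified_by": None,                # who would audit this?
    },
}
```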

There are many unanswered questions that require thoughtful examination before the autonomous car is ready for mass consumer adoption. I hope the pioneers of self-driving technology are devoting as much time to these questions as they are to getting a vehicle to market, though that itself requires a high degree of ethics.

About the Author
Greg Horn

Greg Horn is vice president of industry relations for Mitchell International. He has been a CLM Fellow since 2013 and can be reached at greg.horn@mitchell.com or www.mitchell.com.
