Tesla Motors recently released a new update for the Model S. The new software version, 7.0, includes Autopilot features such as automatic steering, lane changing, side collision avoidance, and parallel parking.
Technology is ever-changing, and there is no doubt that other automakers, racing toward Tesla's niche, will bring competition and innovation to the market. With that competition come new products for consumers. Inevitably, self-driving cars will become a staple of driving across advanced nations and possibly around the world. With that in mind, would you trust your car to make decisions for you?
Picture a time in the near future when such cars make up the majority on the road. With such advancement, it is likely that cars will be linked to one another, networked so that information can be transferred wirelessly. Picture the following situation:
You are enjoying the newspaper and your morning cup of coffee on your commute while your car does all the work. It's mid-January along the New England coast, and snow and ice cover the road. Nevertheless, you are comfortable, having operated such a car flawlessly for years. You come upon a two-lane bridge with a 20-foot drop to the icy river below. Cresting the hill, you find an accident in your lane involving one of the last manually driven cars on the road. The car appears to have slid and struck a barrier wall near the road. Meanwhile, in the other lane, there is a bus full of children on their way to school. Your car knows it cannot stop in time to avoid the accident ahead. What should your car do?
The scenario is highly hypothetical, and there may be ways around this particular situation. Still, the underlying concept begs to be explored.
A car is not a moral agent; it cannot make ethical choices on its own. It must be programmed by humans, and our ethical principles are what come into play with these two options:
Option 1: Your car sacrifices you, driving through the barrier wall into the icy depths and ensuring your demise.
Option 2: Your car runs head-on into the bus full of children. (As previously mentioned, most cars are networked. Your car knows that it would strike a bus.)
Most would agree that human life should be maximized, which makes Option 1 the choice. By that logic, a car programmed to protect your welfare above all else could be considered immoral. But the conundrum does not stop there.
Networked cars would almost certainly be able to analyze situations and act far faster than any human, weighing the options and likely outcomes of an accident better than we ever could. This is where a problem arises. Should programmers value the life of a young parent more than that of someone older? Do we value the life of a sports star more than that of a man working two jobs to support his family? Should all lives be valued equally?
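To make the point concrete, here is a minimal sketch of what such a decision policy might look like in code. Everything in it is hypothetical: the `choose_maneuver` function, the outcome names, and the casualty estimates are illustrations, not how any real autopilot works. Notice that even the simplest version forces a programmer to write down a moral judgment:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A possible maneuver and its predicted human cost."""
    maneuver: str
    occupants_at_risk: int  # people inside this car
    others_at_risk: int     # people outside this car

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver that minimizes total lives at risk.

    The hidden value judgment: weighting occupants and bystanders
    equally is itself an ethical choice someone had to make explicit.
    """
    return min(outcomes, key=lambda o: o.occupants_at_risk + o.others_at_risk)

# The bridge scenario: swerve into the river (Option 1)
# or strike the school bus (Option 2).
options = [
    Outcome("swerve_through_barrier", occupants_at_risk=1, others_at_risk=0),
    Outcome("continue_into_bus", occupants_at_risk=1, others_at_risk=30),
]
print(choose_maneuver(options).maneuver)  # -> swerve_through_barrier
```

Change that one `key` function to weight occupants above bystanders and the same code quietly chooses Option 2. The ethics do not live in the machine; they live in a single line a human wrote.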
It is important to understand that the behavior of machines programmed by humans is a direct reflection of human desires; the moral compass of a machine matches the moral compass of its makers. Given that different cultures hold different basic moral principles, could we all agree on a common set to apply to this situation? Probably not. Curiosity will keep us watching the outcome over the next few years of research and technological advancement. This situation, or one like it, will most likely occur in the coming decades, and its resolution will mark a huge decision in the programming of such technology.