Moral dilemmas arise with new self-driving cars and advanced artificial intelligence

by Srinath Somasundaram, Opinions Editor

In October 2017, a bill titled the “AV START Act,” designed to remove some restrictions on the operation of self-driving car technology, unanimously passed the Senate Commerce Committee, a significant legislative step forward for autonomous technologies.

Momentum to bring the bill to a vote stalled in March 2018, when a self-driving Uber car fatally struck 49-year-old Elaine Herzberg in Tempe, Arizona, and self-driving technologies faced mounting pushback. Now, in order to ensure the bill’s passage, its proponents are trying to attach it to a larger legislative package that is all but certain to pass. This back-and-forth over the bill’s prospects has drawn renewed attention to the ethical issues surrounding autonomous vehicles.

One of these concerns involves the trolley problem, a thought experiment that has challenged ethicists for over 50 years. In the original version of this infamous experiment, a runaway trolley barrels down a track toward five people, and a bystander must decide whether to switch the tracks and divert the trolley so that it hits only one person instead.

The thought experiment originally prompted dialogue and studies about what a human would do if placed in such a situation. Now, the problem raises questions about how an autonomous car would handle a comparable scenario. What decision would it make? Would it allow more people to die, or would it intervene to save more lives at the risk of bearing culpability?

In addressing the ethical problem, some simply question the practicality of such a thought experiment, and thus whether it is worth discussing.

“When talking about self-driving car problems, a lot of people draw up scenarios such as the trolley problem,” Leilani Hendrina Gilpin, a Ph.D. student at MIT whose research currently focuses on autonomous machines, said. “Those kinds of things don’t really happen in real life, so they aren’t the most meaningful, though they do get a lot of press right now.”

Some also believe the only way to solve such a dilemma is to have more advanced self-driving cars that are perfect or close to perfect.

“As humans we can reason that, ‘Oh, this person wanted to save more lives,’ or if they didn’t, maybe we would say, ‘They were overwhelmed with the decision.’ We are pretty tolerant of humans, while we aren’t tolerant of fallible machines,” former ethics and philosophy teacher Dr. Shaun Jahshan said. “I think what the industry will demand is infallible machines, where there are plenty of failsafes and extra precautions.”

When self-driving cars do make mistakes, however, a host of new questions comes with them, the most prominent being: whose fault is it when a problem occurs with an autonomous vehicle? This question of blame often becomes tangled among the software, the hardware and the human driver.

“In the Uber accident, the lidar did detect a human, but the software characterized it as a false positive, so this question is a very interesting one, because who is at fault there?” Gilpin said. “I think we need more information to answer these questions. I also believe that a really big thing is that we need to make a lot of these data records public.”

Others say that these problems will be worked out over time and through the law.

“I think the same issues of fault apply to the technology we have now, so we will have to build a body of law, of precedent, like we have now for other things,” Dr. Jahshan said. “What will happen is that, year by year, we’ll get more used to this idea of having self-driving cars that function reasonably well most of the time. And as we go through cycles of discussion about these things, then we will be able to come up with norms.”

This problem of blame ties into broader accountability issues, which may limit how autonomous such vehicles are allowed to become.

“I think taking human drivers completely out of the picture will happen in the distant future. What we’ve seen, say even in the Uber accident, is that when we have humans in the loop, they aren’t perfect at all,” Gilpin said. “However, it’s very easy to blame a person in the front seat if anything goes wrong. I think a big reason that drivers will be around for a long time is liability.”

This piece was originally published in the pages of The Winged Post on Aug. 31, 2018.