
The Problem with The Trolley Problem

Luke Renner

Listen Now: Apple | Spotify | Google

Transcript:

Luke Renner: This is Advanced Autonomy. I'm Luke Renner. Today, we're going to discuss the problem with the trolley problem. The trolley problem is an ethical dilemma common to the self-driving space. Here's how it works: there is a runaway trolley barreling down the railway tracks. Ahead on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them.

Now, you are standing some distance away (you're off in the train yard), and you're standing next to a lever. If you pull this lever, the trolley will switch to a different set of tracks; however, you notice another person on that side track.

So the trolley problem asks you to choose between two options. The first is to do nothing and allow the trolley to kill the five people on the main track. The second is to pull the lever, diverting the trolley onto the side track, where it will kill one person. So which is the more ethical option? Or, more simply, what's the right thing to do?

My guest today, Ben Landen, hates the trolley problem. He thinks it fails to capture the true ethical dilemmas we find in the self-driving space, and that it makes a lot of false assumptions about how self-driving vehicles make decisions.

And so that's what we're going to talk about today: the trolley problem, the problem with the trolley problem, and some alternative ethical dilemmas we should be asking instead.

Hi, Ben. Welcome to the show.

Ben Landen: Hello, glad to be back.

Luke Renner: Yeah, so I gave our listeners an outline of the trolley problem, and before we dive into the ethics of autonomy, I thought I'd ask you — what would you do if you were standing by that lever?

Ben Landen: I am essentially a hyper-utilitarian—

Luke Renner: Okay, so you would pull it, no question?

Ben Landen: Yes, I would.

Luke Renner: So the trolley problem is a pervasive ethical dilemma in the self-driving car space. Wired, Forbes, and VentureBeat have all written about it, famous think tanks like Brookings have weighed in, and I know it's a frequently asked question on the festival and autonomy conference circuit. So my question for you is: why do you think this particular dilemma has such resonance in this space?

Ben Landen: It's really easy to latch onto as a problem that everybody understands and feels strongly about because we're talking about assigning value to life here —

Luke Renner: So the stakes are really high?

Ben Landen: The stakes are very high, exactly. It's also a problem you don't need to be an expert to weigh in on. Everybody's opinion is just as valuable as the next person's because it's a humanitarian question, and so it's interesting to explore. And finally, it's the type of question that does not have a right answer. It's the type of thing I equate to large political issues: there will not be a majority that agrees on the right way, so it's really fun to argue about and very difficult to change anybody's mind. When you combine all those things, it's really easy and fun and interesting to talk about.

Luke Renner: Yeah, but you said you hate it.

Ben Landen: I have a love-hate relationship with the trolley problem as it's applied to autonomous vehicles. I hate it for two reasons. One is the assumption that in the real world we can create this omniscient snapshot that enables us to ask the questions the trolley problem asks. The second is the assumption that, even if you have that level of omniscience, where you know everything about everything in the scene, you can then actually act in the physical world in a way that is guaranteed to produce the outcome you tried to optimize for.

Both of those, I think, are absolute fallacies, right?

Luke Renner: So in the case of the trolley problem, it posits that you know you're going to kill another person, and really what you're saying is that there's a lot of uncertainty, and that uncertainty carries over to AV decision-making as well.

Ben Landen: Exactly. It actually goes beyond that. That's one level of it, and you can really think of uncertainty existing in every module within the AV stack: mapping and localization, perception, path planning and decision-making, and ultimately actuation. You described the uncertainty that can occur in the decision-making and path-planning areas. That's one element. Then you have the perception element, for instance, which is to say: do I even have certainty that I'm populating all of the actors upon which I'm making my decisions with 100% accuracy? And the honest answer is no.
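To make that point concrete, here's a minimal sketch of what perception output looks like to the rest of the stack: a list of scored guesses rather than certainties. The class and field names are hypothetical, not any particular AV stack's API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One perceived actor, as a perception module might report it."""
    label: str               # e.g. "pedestrian", "cyclist", "vehicle"
    confidence: float        # 0.0-1.0 score, never a guarantee
    position_m: tuple        # estimated (x, y) relative to the ego vehicle
    position_sigma_m: float  # estimated uncertainty in that position

# The planner never receives "there ARE five people on the track";
# it receives a list of scored guesses like these.
detections = [
    Detection("pedestrian", confidence=0.62, position_m=(18.0, 1.5), position_sigma_m=0.8),
    Detection("unknown",    confidence=0.31, position_m=(25.0, -2.0), position_sigma_m=2.4),
]

# Any downstream decision has to pick a threshold, and either choice has a cost:
# too low and the car brakes for shadows, too high and it misses real actors.
CONFIDENCE_THRESHOLD = 0.5
actors_to_plan_around = [d for d in detections if d.confidence >= CONFIDENCE_THRESHOLD]
```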

Luke Renner: But for me, the trolley problem really gets at this idea of choice, right? Do you proactively choose to harm someone to save lives, or do you passively choose to do nothing and let the world burn? But, you know, self-driving cars do have to make decisions all the time, so maybe you could give us a little context about that and tell us how autonomous vehicles actually make decisions.

Ben Landen: Yeah, so step one is sensory data comes in [via] camera, lidar, radar. We don't need to get into the details. What's important is what comes out of that: an object-level, 3D understanding of the world around the vehicle.

That's the system making its best effort to build up the information that's handed to us in the trolley problem as known. It's not known in the real world, so our perception system is doing its best job to build it up. Then you take that information and do a lot of processing on it, from deep learning to other methods, and then connect time-series data so that you don't just have a snapshot; you actually understand how things are moving. And then you start to plan a path based on your goal.

In the case of autonomous vehicles as we know them on the road, that means I'm trying to get to my destination, and I'm trying to do it safely. So, as I mentioned, you can propose thousands of candidate paths — there are various algorithms and approaches to choosing the right one — but you're primarily choosing a cost function that optimizes for being the safest, the fastest, whatever it is you want to optimize for.
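As a rough illustration of that last step, here's a minimal sketch of cost-based path selection. The cost terms, weights, and numbers are all made up for illustration; they're not taken from any production planner.

```python
# Minimal sketch of cost-based path selection. Real planners use far richer models.

def path_cost(path, weights):
    """Lower is better. Each term scores one thing we care about."""
    return (
        weights["collision_risk"] * path["collision_risk"]    # estimated risk, 0-1
        + weights["comfort"] * path["max_deceleration"]       # harsh braking is penalized
        + weights["progress"] * path["time_to_goal_s"]        # slower routes cost more
    )

candidate_paths = [
    {"name": "stay_in_lane", "collision_risk": 0.020, "max_deceleration": 1.0, "time_to_goal_s": 30.0},
    {"name": "nudge_left",   "collision_risk": 0.005, "max_deceleration": 2.5, "time_to_goal_s": 32.0},
    {"name": "hard_brake",   "collision_risk": 0.001, "max_deceleration": 6.0, "time_to_goal_s": 45.0},
]

# The "priorities" Luke mentions live in these weights: safety dominates,
# but comfort and progress still break ties.
weights = {"collision_risk": 1000.0, "comfort": 1.0, "progress": 0.1}

best = min(candidate_paths, key=lambda p: path_cost(p, weights))
print(best["name"])  # with these made-up numbers: "nudge_left"
```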

Luke Renner: Got it. So the AV takes in information, processes it, and then uses that information to chart a course based on the priorities set by the system. That explanation really surfaces one of the reasons I think the trolley problem is so resonant: it's about setting those priorities. Now, you mentioned that one of the priorities could be speed and another could be safety. What can you tell us about how AV developers are thinking about safety?

Ben Landen: The bottom line with AV safety is fewer accidents. People tend to key in first on human errors, because those are what seem most easily preventable by making decisions with a machine instead of a human, and that addresses 95 to 96% of accidents. That's when people start thinking about trolley problems, because they assume, well, if there's a remaining 5% of accidents that still occur, that means some force majeure created the accident.

I think this really gets at the crux of one of my issues with the trolley problem, which is that people assume the remaining 5% must be trolley-problem-like scenarios in which an accident was unavoidable and a tough decision had to be made.

I would push back on that. AVs are not only going to address the 95% that's directly attributable to human error; much of the remaining 5% would also be preventable with better driving leading up to the point at which you'd say, from here on out, an accident could not have been avoided. AVs can address these cases because the system is simply safer and more preventative in its driving, and because AVs might be fleet-managed, meaning they're maintained better than a consumer might maintain their own vehicle.

So I would posit that a lot of that remaining 5% of accidents actually get addressed, and that the trolley problem — and by trolley problem I mean something crazy happened [like] I have to hit something — really only applies in a tiny, tiny portion of driving scenarios.

Luke Renner: So I get what you're saying: the trolley problem really represents a false choice, because for someone to actually get to that scenario of having to make a life-or-death decision, you know, between five or one, a lot will have had to go wrong already, and a lot of the capabilities that come with the rise of AVs will prevent these worst-case scenarios in the first place.

So with that in mind, I want to transition to some of the real dilemmas this industry is facing. When you layer capitalism into the development of autonomous vehicles, a lot of industry innovators are setting other priorities. Some of those priorities are related to the vehicles themselves; others are really about being first to market. So in light of the economic and market forces driving the development of this space, what do you see as some of the real ethical dilemmas we should all be thinking about?

Ben Landen: I do think that many of the more pertinent ethical dilemmas are of the socioeconomic variety. So, for example, what happens when one company clearly has a better — or in other words safer — autonomous system than another? Now, there are parallels to this: there are vehicles out there that are better and safer than others. The difference is that we, as the people in control, buy vehicles that are safe enough — they all have to pass some safety standard — so whether we want to spend extra for the safest one is our choice. And then quite a lot of responsibility and, ultimately, in the case of wrongdoing, culpability falls on us as the users.

It's very different when I get into a vehicle that's provided to me by some other company over which I have no responsibility, and I put my life in the hands of that vehicle, and I put the lives of everybody around me in the hands of that vehicle and —

Luke Renner: You're talking about the case of a robotaxi, right?

Ben Landen: Yeah. In those cases, at what point is it unethical to say: hey, we know we operate in a capitalistic economy, and you as a company can offer your autonomous vehicles, but one is clearly safer than the other, so is it ethical to incentivize people to use a cheaper solution that's less safe when they have no control over what it does? I think that's a really difficult question to answer. And it raises the question: does this have to be a winner-takes-all market, because it's our moral imperative to force everybody to use the safest AV solution?

You can see how this goes and goes as you pull this thread.

Luke Renner: Yeah, absolutely. I think that's just one of many socioeconomic issues we'll be wrestling with —

Ben Landen: The socioeconomic issues, I think, are ones that we need to start thinking about. For example, are we building AVs in a way that they are going to exacerbate the wealth or income gap?

Like, look today at how transportation is used demographically. Higher net worth individuals tend to own their cars. Folks who are lower earners, in many cases, can't justify owning a vehicle. They take public transport. And what a lot of people are talking about that AVs would enable is — like, look at me right now. I live in Silicon Valley because I've worked here for several years. It's very expensive to live here. What if I could move into the Sacramento area, buy a house that costs a third of what my house in Silicon Valley costs, and do that because I don't have to drive two hours back and forth to work anymore? I can order an AV with a bed in it that takes me to and from my job at four in the morning because I can go ahead and sleep there. And now I've just pocketed a bunch of money, I've convenienced myself, I'm paying a bit of a premium but it's nothing compared to the money that I saved on real estate.

Whereas the person who is taking the bus because that's what they need to do to make ends meet doesn't really have that option, right? So it's widening that wealth gap.

Luke Renner: Yeah, that is absolutely true, and I think, for sure, it's something that we're going to be wrestling with as things become more autonomous — not just in vehicles but in the rest of our regular lives.

So we're almost out of time here, and I'd like to finish with what I think is the biggest ethical dilemma this industry is facing. To provide our listeners with a little context: in March of 2018, a woman was killed by a self-driving vehicle in Arizona. An investigation by the National Transportation Safety Board found that the system detected the pedestrian 1.2 seconds before impact; however, it didn't take any emergency action because it was actually programmed to wait one full second to calculate various options and alert the human driver to take over. Apparently, this was designed to minimize false alarms and to keep the vehicle from hitting the brakes unnecessarily, and for me, that is an ethical dilemma, right?
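To make that design choice concrete, here's a hypothetical, heavily simplified sketch of what a suppression-window policy like the one described above might look like. The one-second figure comes from the account above; everything else, including the function name and its return values, is illustrative rather than any vendor's actual code.

```python
# Hypothetical sketch of an action-suppression policy, not any vendor's actual code.

ACTION_SUPPRESSION_S = 1.0  # the one-second window described above

def respond_to_hazard(time_since_detection_s: float) -> str:
    if time_since_detection_s < ACTION_SUPPRESSION_S:
        # During the window: evaluate options and alert the human, but don't brake,
        # so the vehicle avoids braking unnecessarily for false alarms.
        return "alert_driver"
    return "emergency_brake"

# Pedestrian detected 1.2 s before impact: for the first 1.0 s the system only
# alerts, leaving roughly 0.2 s in which emergency braking is even permitted.
print(respond_to_hazard(0.5))  # "alert_driver"
print(respond_to_hazard(1.1))  # "emergency_brake"
```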

The counterargument, though, is that 1.35 million people die in vehicle-related crashes every year, so, I mean, why shouldn't we be doing everything we can to keep those people safe?

Ben Landen: So going back to what I said at the beginning, I'm a hyper-utilitarian. I believe that if by deploying autonomous vehicles you can save one life, I say release them. That's my belief, and this is one of the reasons the trolley problem can be argued from — you know, 'til the cows come home.

Other people believe: no, now it's a robot, not a human. We know humans are fallible; we forgive ourselves for being fallible. And people have largely arbitrary burdens of proof that they expect machines to meet if they're going to replace humans. So, yeah, an arbitrary one that's convenient, because we like round numbers, is that you should improve by an order of magnitude — you should be 10x safer, you should be able to reduce fatal accidents by 90% before we allow autonomous vehicles to proliferate. To which my response is: why?

Yeah, I'm a hyper-utilitarian, so I think if you're saving any lives you're doing a service to the world. And that's what's at the heart of the first level of the trolley problem: is it worth taking action to try to save certain lives as opposed to others?

Luke Renner: Ben, I appreciate the time. Thanks for coming in. I know it was last minute. I'll see you next time, okay?

Ben Landen: Thanks.
