The trolley problem – the question of who to save, or kill, in no-win crash situations – continues to divide opinion like no other subject in the driverless world.
I must admit to flip-flopping on it myself. From being quite taken with it in 2018’s Autonomous now: the shift to self-driving, through 2019’s The driverless dilemma: touchstone or red herring?, to last month’s Self-driving experts across the world agree: the trolley problem is a nonsense.
That, I thought, was conclusion reached, the end of the matter. Far from it! In response to the latter article, Karim Jaser, Senior Product Manager specialising in artificial intelligence (AI) and internet of things (IoT) for blue chip companies, posted a resolute defence of the much-maligned thought experiment on our LinkedIn page.
“I do agree that humans don’t go through the trolley problem evaluation in the split second of a decision, but I also think not all experts agree it is a nonsense,” he said. “It is for society as a whole to discuss these ethical problems. From the point of view of self-driving technology, this can be solved in many ways, with probability theory and estimations of minimal loss, but it is not up to developers or self-driving experts alone to decide how to tackle the point. It needs the involvement of regulators, governments and the industry.”
Well, with our mission to encourage debate about all aspects of autonomous vehicles, how could we resist? We asked Karim if he’d be up for an interview. He kindly agreed and here we present his thoughtful and cohesive opinion.
KJ: “I was always fascinated by robot intelligence, so at university I studied telecommunications engineering. There were lots of exams on probability theory, system control and software engineering. I was also involved in coding in my spare time, and later did it as a job.
“Self-driving is a control problem first and foremost. There are elements of robotics, including perception, state estimation and trajectory planning, but also software, hardware and AI working together.
“The interest grew stronger when I started studying machine learning and AI about four years ago. When I was at university in the 90s, AI was not really a popular subject. It was a topic I picked up later in my career. As a senior product manager at a high technology company, AI is everywhere now – it’s an essential part of the skills necessary to perform and innovate, from biometric scans and image recognition to automated travel.
“AI has a lot of potential to have a beneficial impact on society – fewer accidents, better mobility, less pollution, more autonomy for people with disabilities – but it doesn’t come without challenges, for example, cyber threats, and also ethical and regulatory issues, which is why I got involved in the trolley problem debate.”
“It’s not straightforward. If we take a step back, we need to understand how self-driving cars take decisions. They use supervised learning, reinforcement learning, convolutional neural networks (CNNs) and recurrent neural networks (RNNs), and deep learning for computer vision and prediction. Specifically, reinforcement and inverse reinforcement learning are very tightly linked to the way driverless vehicles behave, through means of policies.
“Policies are related to the distribution of probabilities, but the trolley problem is an ethical choice, so I understand why a lot of people in the industry dismiss it. It’s not the way autonomous vehicles take decisions, going through philosophical considerations in a split second, so it might seem irrelevant, right? Like the Turing Test and Asimov’s laws of robotics, the trolley problem can be perceived as a distraction from more practical considerations.
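The link Karim draws between policies and probability distributions can be sketched in a few lines. The example below is a generic softmax policy over a handful of candidate manoeuvres; the function name, the manoeuvre labels and the scores are all invented for illustration and are not taken from any real autonomous-vehicle stack.

```python
import math
import random

def softmax_policy(action_values, temperature=1.0):
    """Turn raw action scores into a probability distribution over actions.

    `action_values` maps each action name to a learned value estimate;
    both names and numbers here are purely illustrative.
    """
    # Subtract the max value before exponentiating, for numerical stability.
    m = max(action_values.values())
    exps = {a: math.exp((v - m) / temperature)
            for a, v in action_values.items()}
    total = sum(exps.values())
    return {a: e / total for a, e in exps.items()}

# Toy scores for three candidate manoeuvres.
values = {"keep_lane": 2.0, "brake": 1.5, "swerve_left": -1.0}
policy = softmax_policy(values)

# A stochastic policy samples an action in proportion to its probability.
actions, probs = zip(*policy.items())
chosen = random.choices(actions, weights=probs)[0]
```

The point of the sketch is simply that the vehicle's "choice" is a draw from a distribution shaped by learned values, not the evaluation of an explicit ethical rule.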
“It can be distracting for two reasons: first, these considerations are corner cases – there are other priorities, more likely scenarios still to be addressed; second, autonomous vehicles will not be given ethical guidelines to link with probabilities.
“With regards to the first objection, as Patrick Lin (director of the Ethics and Emerging Sciences Group at California Polytechnic State University) has pointed out, it shouldn’t matter if these scenarios are impossible, because the job of these thought experiments is to force us to think more carefully about ethical priorities, not to simulate the realities.
“The second objection is related to self-driving cars taking decisions through distribution of probabilities. The actions of these vehicles are linked not to hard coding but to statistical contextual information, and that makes each scenario difficult to interpret. You can potentially have millions of mini trolley problems in different contexts.
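The "estimations of minimal loss" idea mentioned earlier can be illustrated with a minimal sketch: each candidate manoeuvre has a set of possible outcomes with estimated probabilities and costs, and the planner picks the manoeuvre with the lowest expected cost. Every name and number below is hypothetical, chosen only to show the shape of the calculation.

```python
def expected_cost(trajectory_outcomes):
    """Expected cost of a trajectory: the probability-weighted sum of outcome costs."""
    return sum(p * cost for p, cost in trajectory_outcomes)

# Hypothetical candidate manoeuvres, each a list of (probability, cost) pairs
# that would, in a real system, come from perception and prediction models.
candidates = {
    "brake_hard":   [(0.90, 0.0), (0.10, 5.0)],   # small chance of rear collision
    "swerve_right": [(0.70, 0.0), (0.30, 8.0)],   # some risk of a kerb strike
    "continue":     [(0.40, 0.0), (0.60, 20.0)],  # likely frontal collision
}

# The planner selects the manoeuvre with the minimal expected loss.
best = min(candidates, key=lambda t: expected_cost(candidates[t]))
```

Each context produces a different cost table, which is exactly why the same machinery can be seen as "millions of mini trolley problems": the ethics live in how the costs are assigned, not in the arithmetic.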
“The trolley problem is a reminder that corner cases and autonomous vehicle behaviours are not a technical irrelevance. This is an issue that belongs to society and should be discussed in the same way as other AI pitfalls like privacy and bias.
“Actually, the trolley problem is more related to the third pitfall of AI, replicability. When trying to understand why and how an autonomous vehicle takes a decision, it is important to note that most autonomous vehicle developers are taking ethical considerations into account.
“In 2017 in America, Apple commended the National Highway Traffic Safety Administration (NHTSA) for including ethical considerations in its Federal Automated Vehicles Policy. It even highlighted three particular areas: 1) the implications of algorithmic decisions for the safety, mobility and legality of automated vehicles and their occupants; 2) the challenge of ensuring privacy and security in the design of automated vehicles; and 3) the impact of automated vehicles on the public good, including their consequences for employment and public spaces.
“The automotive industry has also approached the issue of accidents caused by autonomous vehicles in relation to ethics. For example, Volvo stated in 2015 that it would take responsibility for all Volvo self-driving car accidents. This was an ethical decision, because it made the commitment without regulation forcing it to.
“We will see what happens. If there are no ethical decisions by the industry, the regulators will step in. On a fun note, looking to the past, horses were not considered responsible for their actions – the rider was. With the autonomous vehicle, by contrast, responsibility will lie not with the owner but with the carmaker.
“So, to conclude, automakers and AV developers are taking ethical and regulatory matters into account, which underlines the importance of these discussions. We cannot just dismiss the trolley problem because it’s not the way an autonomous vehicle decides, or because it distracts from technical development.
“The way to deal with this is to discuss the implications in the right context, being aware of how autonomous vehicles are developed, without scaring the public with sensationalist articles. The trolley problem might be perceived as a Terminator-style situation, and that’s where it gets on the nerves of a lot of people who are developing and testing AI. It’s not black and white, it’s a grey area, and that takes us down the path of discussion.
“The trolley problem forces us to consider ethics in vehicle development and confront the fact that ethical principles differ around the world, as documented by the Massachusetts Institute of Technology (MIT) simulation.
“Are we at the point where discussing the trolley problem should be a priority? I believe it would be beneficial to the success of the self-driving industry, guiding our thinking on how to build the right mix of safeguards and transparency.”