A runaway trolley is on course to collide with and kill five people down the track. A bystander can intervene and divert the vehicle to kill just one person on a different track. What should the bystander do?
This classic thought experiment in philosophy and its many variations perfectly illustrates the moral dilemmas in human decision-making.
The value of philosophy
There is no easy or right answer to the trolley problem, but the philosophical thought process propels us to ask the right questions and to consider all the possible perspectives.
“The benefit of philosophy for the average person, as Bertrand Russell would say, is that it allows you to free yourself from beliefs that you might not have a justification for. It also lets you look at the world in a new way and reminds you that you don’t have to accept things as they are,” says Rana Ahmad, Chair of the Department of Philosophy at Langara College. “It makes you more tolerant; you have to go beyond yourself and consider what other people think, and why we dismiss some people and pay attention to others. It can open you to solutions that you might not have thought of before.”
Because of the enduring importance of philosophy in developing human thought and wisdom, UNESCO celebrates World Philosophy Day every year on the third Thursday of November.
This year’s theme is the Forthcoming Human. With an eye on the future, a host of ethical and epistemological dilemmas associated with technology will be examined and discussed as the fate of humanity is increasingly intertwined with technological advances.
AI and its ethical implications
The trolley problem began as a thought experiment, but swap the bystander for an autonomous car and the imagined scenario becomes a real-life situation, one waiting for both an engineering solution and some deep philosophical soul-searching.
If a self-driving car finds itself in a situation where it must swerve to save its driver, should it swerve to the left, where a young girl is standing, or to the right, where there is a grandmother?
“The standard way of looking at the ethics of autonomous cars is to consider dilemmas such as having to choose between killing a child or an older person. Typically, the answer is to save the child because they have more years to live,” Ahmad says. “But the ethical question is not just about which person the car should kill. The question we should be asking is why we are designing these cars to kill at all. This suggests there is something wrong with our current approach. We should change our infrastructure so that there are no car accidents involving pedestrians, rather than developing autonomous cars that must make these decisions.”
Ahmad adds that autonomous cars present a range of interesting ethical issues. For example, if facing a difficult situation, does the car save the driver or does it save the pedestrian? Since no consumer would buy a car that might choose to kill them, the possible result is that the ‘safest’ autonomous cars would be designed to protect the wealthy consumers who can afford them.
The self-driving car is just one example of the ethical implications of Artificial Intelligence (AI) in our society. Ahmad has developed a whole course around such ethical issues.
Ethical decision making
Having studied molecular biology as an undergraduate, Ahmad has an unusually broad background as a philosopher. Her main area of expertise is in ethical decision-making under risk and uncertainty, with a focus on new and emerging science and technology.
Parallel to autonomous cars, another area of major ethical concern according to Ahmad is the development and use of autonomous weapons and drones.
“Ethics is about holding people responsible for their actions based on the assumption that we all experience similar harms and benefits. For example, just war theory has certain rules of engagement based, in part, on this idea. The rise of autonomous weapons has challenged this,” she says. “A just war assumes that we all can be harmed, so we have an interest in not getting too violent. But autonomous weapons mean that I can now kill people from a distance without immediate risk. And it is much more difficult to hold people accountable for their actions when they are so far removed from the consequences.”
She adds that operating weapons and drones from a distance, over a screen, also has a desensitizing effect on people, as they do not see the actual consequences of their actions.
The use of AI for data collection and surveillance also raises ethical concerns as the technology can easily be abused.
“Governments want data that they can use to maintain power, but also to shape society in ways that they think are beneficial. One would hope the government is collecting data for the public good, but it is very hard to say that private companies are interested in the public good,” Ahmad says. “And many of us have consented to losing our privacy and having our information shared with others. Most of us did not know we consented to this.”
Given the growing prevalence of AI in many aspects of our lives, UNESCO has, for the first time, embarked on developing a global legal document on the ethics of AI.
Most ethical problems don’t have an easy answer, says Ahmad, but she believes it is important to look at an issue from all sides and grapple with it flexibly.
For more information visit: