Technological advancements often bring sweeping changes to society, challenging and forcing us to adapt our ethics, morality, and social constructs. This paper explores one such emerging technology, the self-driving car, through the lens of a classic ethical dilemma: the trolley problem. I will begin with background on the trolley problem and on self-driving cars, then justify why it may be necessary to implement ethical decision models, also known as moral agents, in autonomous systems. From there, I will explore various trolley-problem scenarios, offering my own perspective alongside others, introduce variations on those scenarios, and weigh potential solutions and risks through back-and-forth evaluation. In the end, I will also provide existential justification and counterarguments.
What is the trolley problem? You are standing next to a lever that controls the direction of an approaching trolley. Five people are on the track ahead, and the only way to save them is to pull the lever. However, pulling the lever diverts the trolley onto a side track, killing the one person standing there; the only way to save that person is to leave the lever alone ('Trolley problem', 2019). What would you do?
That is the trolley problem: an ethical dilemma in which either choice is bad. The trolley problem dates back to 1967 and has since spawned many variations ('Trolley problem', 2019). It serves as a cornerstone of evolutionary ethics and moral psychology, begging the question: to what extent is it acceptable to achieve the greater good through individual sacrifice? The trolley problem has its shortcomings, though. While it baffles many people by challenging their morality, it is also criticized for its lack of real-life applications.
Moving on, a self-driving car, as the name implies, is an autonomous vehicle that can operate without a driver; alternatively, it can be called a driverless vehicle. It uses artificial intelligence and neural networks to make decisions on its own. People expect self-driving cars to improve quality of life and reduce the number of traffic accidents. Furthermore, they could serve as a catalyst for integration into smart systems that would further enhance human quality of life.
Engineers grade self-driving cars on a scale of automation levels, with level 1 as the lowest and level 5 as the highest. At level 1, a vehicle cannot operate without a driver and can only assist with braking or cruising under limited scenarios; a common application is highway cruise control for automated acceleration, which still requires the driver to steer and apply the brakes when needed. A level 5 vehicle, on the other hand, is considered fully autonomous: it is hypothesized to make decisions on its own even in abnormal scenarios where its level 4 counterpart falls short. Levels 2 and 3 describe vehicles that can operate only in limited or routine scenarios. Currently, the highest level achieved is level 4, by Waymo, the Google self-driving car project.
However, no technology is perfect, and self-driving cars have their own myriad of issues. In 2016, a Tesla running its experimental self-driving software was involved in a fatal crash after failing to register an emergency scenario; since then, there have been a total of six fatalities attributed to self-driving cars. Comparing six deaths to the number of casualties caused by human drivers, we might conclude the trajectory is positive. Still, self-driving cars make up only a tiny proportion of active vehicles; if we scale the casualty count up by that ratio, the projected toll becomes potentially devastating, an amount that may put Chernobyl and the Wenchuan earthquake to shame. Such a toll could shape how people perceive, and later integrate with, the technology, producing anti-sentiments close to the anti-nuclear-power movement.
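To make the scaling argument concrete, here is a minimal back-of-the-envelope sketch in Python; the fleet sizes are hypothetical placeholders, not real statistics.

```python
# Back-of-the-envelope scaling of self-driving fatalities.
# All fleet numbers below are hypothetical placeholders for illustration.

sdc_fatalities = 6            # fatalities attributed to self-driving cars (from the text)
sdc_fleet = 10_000            # hypothetical number of active self-driving cars
total_fleet = 1_000_000_000   # hypothetical number of conventional cars worldwide

# Scale the observed fatalities up by the fleet-size ratio.
projected_fatalities = sdc_fatalities * (total_fleet / sdc_fleet)
print(f"Projected fatalities at full scale: {projected_fatalities:,.0f}")
# With these placeholder numbers, the projection is 600,000 deaths,
# illustrating how a small raw count can still imply a large risk at scale.
```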
Another concern lies in the passive approach of level 3 designs, which expect the driver to take control during an emergency. This grossly overestimates human reflexes: in past experiments simulating trolley-problem dilemmas, participants often froze ("immobilized physically, mentally and emotionally") when researchers confronted them with a dilemma and no outside support. It is therefore a bad decision to hand responsibility back to passengers or drivers, since humans are bad at making decisions, especially during an emergency.
However, if we would like self-driving cars to take that responsibility instead, they must also uphold our moral and ethical codes. Current neural-network and artificial-intelligence technologies depend largely on big datasets. Let us not forget the scandal at Amazon, where a hiring AI that based its learning on past résumés showed a tendency to favor male candidates over female ones. We are lucky to have not yet deployed any artificial intelligence with such potential biases in a life-critical domain. Hence, I would like to discuss some scenarios that apply the trolley problem to self-driving cars.
Assume now that we have a level 5 self-driving car and place it in the classical trolley-problem scenario: it must either hit five people or take a diverging action that hits one person, and it cannot avoid hitting at least one.
My personal perspective in this scenario aligns with utilitarian ethics: we should minimize harm as much as possible, which means hitting one person in order to save five. Choosing to hit one person also aligns with other consequentialist ethics because it places the least burden on the social construct; fewer injured or dead people means fewer medical expenses and legal actions.
However, one might argue from a deontological perspective that hitting one person to save five is morally wrong. Suppose six children are about to cross at a traffic light. Five choose to ignore the signal and cross anyway, while one insists on waiting. It is morally wrong to swerve and hit the one child who is abiding by the law while the other five are effectively rewarded for risking their lives.
One possible implementation is a utilitarianism-based system with a legalist safeguard. In such an implementation, when the vehicle finds itself in a dilemma, it chooses the action that least contradicts the law while maximizing the number of people saved. Although not every deontological dilemma is a subset of legal contradictions, my personal perspective favors saving more people over fewer. That said, it would not be in my best interest to have a family member or friend harmed in such a scenario, though that is largely an egoist perspective, since my family and friends have a stronger binding force on me than strangers do.
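As a minimal sketch of what such a safeguarded rule might look like, the snippet below ranks candidate actions first by how many laws they would contradict and then by how many lives they would save; the action names and counts are hypothetical illustrations, not a real control system.

```python
from dataclasses import dataclass

# A minimal sketch of a utilitarian rule with a legalist safeguard.
# Actions, violation counts, and lives-saved figures are hypothetical.

@dataclass
class Action:
    name: str
    law_violations: int   # how many laws this action would contradict
    lives_saved: int      # expected number of people spared

def choose_action(actions: list[Action]) -> Action:
    # Lexicographic ordering: legality is the safeguard (checked first),
    # and among equally legal actions, utilitarianism breaks the tie.
    return min(actions, key=lambda a: (a.law_violations, -a.lives_saved))

dilemma = [
    Action("stay on course, hit five jaywalkers", law_violations=0, lives_saved=1),
    Action("swerve, hit one lawful pedestrian", law_violations=1, lives_saved=5),
]
print(choose_action(dilemma).name)  # the safeguard keeps the car on course
```

Note that when no candidate action violates the law, this rule reduces to pure utilitarianism.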
There might, however, be technical difficulties in implementing such a framework. Laws are usually intangible, meaning it falls to interpreters, lawyers, and judges to decide whether a law applies in a given scenario. Moreover, since laws change periodically, it might be impossible to implement such a system unless we fundamentally change how laws are structured.
An interesting phenomenon was observed in MIT's Moral Machine study, which polled people from over one hundred countries on their opinions about self-driving cars. The questions consisted mainly of trolley-problem variants, and the researchers found that countries often differ in their opinions. Although the researchers cautioned that the data are likely skewed, the results show that different societies are likely to hold different views, suggesting that a universal standard may not be possible. Specifically, individualistic cultures tend to place a higher emphasis on sparing more lives, while collectivist cultures tend to place a smaller emphasis on it.
Let us assume a different scenario: the self-driving car must choose between killing the passengers on board and killing a group of pedestrians. What should it do? From the perspective of product ethics, it is immoral to design a product with a built-in mechanism for killing its user; product ethics guidelines hold that a product should not deceive or harm its users in any way. However, one might argue that self-driving cars are not simply personal belongings, because training and advancing them is a social effort. In addition, one might argue from the criminal-law notion of intent that an action taken for the greater good is not necessarily unlawful. Therefore, conditional altruism, agreed to under the terms of service, could arguably be justified.
Still, there is a hidden risk in letting a self-driving car decide whether to kill its owner. Given the earlier discussion of how artificial intelligence can become biased when trained on large, unchecked datasets, it is possible the vehicle would resolve to kill the passenger every single time a dilemma arises. Nor is it a good idea to always try to save the pedestrians: the maneuvers required pose tremendous risk to both parties, since higher complexity carries a higher potential for error. It is an especially bad idea when the pedestrians intend harm; it would be disastrous if a group of people could jump into the road and force an occupied car to sacrifice its passengers.
According to the Moral Machine poll on how countries compare in sparing pedestrians over passengers, Japan ranks highest in favoring the pedestrian, while China ranks highest in favoring the passenger. This could be partially attributed to a Chinese social problem known as "Peng Ci," or "bumping porcelain": a scam in which pedestrians pretend to be hit by cars and demand a ransom. It would be even more disastrous if a scammer attempted Peng Ci in front of a self-driving car and the car sacrificed its passenger. That would be morally wrong not only from egoist and deontological perspectives but also from legalist and state-consequentialist ones. Therefore, from my personal perspective, we should explore this concept while paying close attention to the host country's social problems and closely monitoring its application.
An alternative solution is an altruistic toggle switch. Turning it on makes the vehicle maximize its effort to save pedestrians; turning it off makes it maximize its effort to save passengers. There could also be a neutral setting between the two poles. The benefits of such a switch are immense. First, it is compatible with current legal frameworks and could act as a temporary solution while legal systems adjust for the future. Second, the law could mandate that all vehicles carrying children under twelve have the switch turned off, actively ensuring that vehicles protect the children on board. However, the switch must be carefully designed and must not be tamperable.
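A minimal sketch of how such a switch might bias the decision logic is shown below, assuming a simple weighted comparison; the weights, the under-twelve mandate check, and all names are illustrative assumptions rather than a real design.

```python
from enum import Enum

# A minimal sketch of the proposed altruistic toggle.
# Weights and the under-12 mandate below are illustrative assumptions.

class AltruismMode(Enum):
    SAVE_PASSENGERS = 0   # switch off: prioritize occupants
    NEUTRAL = 1
    SAVE_PEDESTRIANS = 2  # switch on: prioritize people outside the car

# Hypothetical weights on (passenger lives, pedestrian lives).
WEIGHTS = {
    AltruismMode.SAVE_PASSENGERS: (2.0, 1.0),
    AltruismMode.NEUTRAL: (1.0, 1.0),
    AltruismMode.SAVE_PEDESTRIANS: (1.0, 2.0),
}

def resolve_mode(requested: AltruismMode, youngest_occupant_age: int) -> AltruismMode:
    # The legal mandate from the text: children under 12 on board
    # force the switch off, so the car protects its young passengers.
    if youngest_occupant_age < 12:
        return AltruismMode.SAVE_PASSENGERS
    return requested

def score(mode: AltruismMode, passengers_saved: int, pedestrians_saved: int) -> float:
    # Higher score = preferred outcome under the current mode.
    wp, wq = WEIGHTS[mode]
    return wp * passengers_saved + wq * pedestrians_saved

mode = resolve_mode(AltruismMode.SAVE_PEDESTRIANS, youngest_occupant_age=8)
print(mode)  # AltruismMode.SAVE_PASSENGERS: the mandate overrides the toggle
print(score(mode, passengers_saved=2, pedestrians_saved=3))  # 2*2.0 + 3*1.0 = 7.0
```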
What about the scenario of choosing between killing a group of animals and killing one person? From a biocentric perspective, all lives are equal; therefore, a utilitarian argument grounded in biocentrism says the group of animals should be saved, because more lives are at stake. Unfortunately for the animals, legal systems across cultures place humans above animals: whenever human lives are in competition with animal lives, humans are prioritized ('The Moral Status of Animals', Stanford Encyclopedia of Philosophy, 2013), and the Moral Machine poll affirmed this consensus.
This raises another criticism of the previously mentioned altruistic option: the difficulty of differentiating an animal from a human. Current visual-recognition technology is quite limited, so the system might mistake a deer for a human and end up killing the passenger.
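One defensive design against this failure mode is to gate any self-sacrifice maneuver behind classification confidence. Below is a minimal sketch assuming a detector that reports a confidence score; the 0.9 threshold, labels, and function names are hypothetical.

```python
# A minimal sketch of confidence gating before any self-sacrifice maneuver.
# The 0.9 threshold and the labels are hypothetical assumptions.

SACRIFICE_CONFIDENCE_THRESHOLD = 0.9

def may_sacrifice_passenger(obstacle_label: str, confidence: float) -> bool:
    # Only allow a passenger-endangering maneuver when the detector is
    # highly confident the obstacle is actually a human. The trade-off:
    # a real pedestrian misread as an animal would not be protected.
    return obstacle_label == "human" and confidence >= SACRIFICE_CONFIDENCE_THRESHOLD

print(may_sacrifice_passenger("human", 0.55))  # False: too uncertain, could be a deer
print(may_sacrifice_passenger("human", 0.97))  # True: confident detection
```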
There are, however, countless other variants of the trolley problem, such as whether to spare the young or the elderly; the Moral Machine poll suggested that collectivist cultures tend to favor sparing the elderly, while individualistic cultures tend to favor sparing the young.