Ethical Experimentation for Autonomous Vehicles and Utilitarian Cars

There has been much debate recently about driverless cars regarding the issue of morality. Specifically, one might question whether it is morally right to program a car to prioritize the driver's life over the lives of those outside the car. Others might ask whether we should allow people to have driverless cars at all. To explore these issues, one must first understand two core moral theories: Utilitarianism and Kantian Ethics.

Utilitarianism is the idea that our actions should maximize utility and minimize pain; in this sense, utility refers to happiness. Because Utilitarianism focuses on the consequences of one's actions, we call it a consequentialist theory. To explain this idea, John Stuart Mill states, "the creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness" (Mill 82). Utilitarianism isn't solely about producing happiness or pleasure; rather, it is about acting in such a way that what you do produces a greater amount of happiness than pain. Mill notes that, on a Utilitarian view, "it is not the agent's own greatest happiness, but the greatest amount of happiness altogether" (Mill 84). This raises two questions: should some pleasures be considered more valuable than others, and if so, how do we determine which ones? In his writing on Utilitarianism, Mill describes precisely what he means by happiness or pleasure. He explains that there are two kinds of pleasure, mental and bodily, and that everything desirable fits into one of these categories. Mill points out that "utilitarian writers in general have placed the superiority of mental over bodily pleasures" (Mill 82). So, when taking a Utilitarian approach, one must decide how much utility an action will bring, and weigh the value of each pleasurable response to that action, in order to truly measure an action's yield of happiness against its yield of the reverse of happiness.
Utilitarians should also take into account the ultimate end, which, according to the Greatest Happiness Principle, is “an existence exempt as far as possible from pain, and as rich as possible in enjoyments, both in point of quantity and quality” (Mill 84).

In contrast to the Utilitarian view, Kantian Ethics is considered a deontological, or non-consequentialist, theory. Kantian Ethics does not take consequences or end results into account; it deals instead with moral obligation, or duty. On the Kantian view, we can derive these duties, or moral obligations, from what Kant calls the Categorical Imperative: "Act only on that maxim whereby thou canst at the same time will that it should become a universal law" (Kant 91). In other words, if I want to act on a maxim that could not coherently be acted on by everyone, then acting on it is immoral. For example, Kant tells the story of someone who is forced to borrow money and promises to pay it back, but does not actually intend to. Kant says that making this maxim a universal law would be impossible. He states, "for supposing it to be a universal law that everyone, when he thinks himself in a difficulty, should be able to promise whatever he pleases, with the purpose of not keeping his promise, the promise itself would become impossible" (Kant 92). In addition to the Categorical Imperative, Kantian Ethics holds that one must also consider whether one is treating people merely as a means, or as ends in themselves. Kant would say that it is morally wrong to treat anyone, even yourself, merely as a means. This is quite different from the Utilitarian view, on which one might use someone as a means if doing so would result in a greater amount of happiness.

Both of these moral theories provide insight into the discussion of driverless cars. Our readings seemed to address common questions surrounding autonomous vehicles without definitively siding with one view or the other. One major topic discussed in both articles and the TED-Ed video was who should create the algorithms used by driverless cars, and what those algorithms should include. The TED-Ed video gave an example of a situation where a morally correct algorithm would be necessary and explained why this could be a controversial topic. In the video's scenario, a self-driving car is boxed in and must react to an object falling from the vehicle in front of it. Suppose there is a motorcycle on one side of the self-driving car and an SUV on the other. If the car uses a utilitarian-based algorithm, it has three options: swerve into the motorcycle, minimizing harm to the driver; swerve into the SUV as a middle ground; or let the falling object hit the car, putting the driver at the most risk but causing the least harm to other people. The article "Driverless Cars and the 5 Ethical Questions on Risk, Safety, and Trust We Still Need to Answer" describes a similar situation and explains that

People often talk about self-driving cars taking a utilitarian approach on the road: the car acts in a way that maximizes the benefit for the most amount of people … But if you took a more deontological approach — one that was focused primarily on what your duty was — things would be different. (Bogle)

So should driverless cars have a Utilitarian algorithm? I'm inclined to say no, as I lean toward a more Kantian approach on this topic.

With regard to the Utilitarian algorithmic approach, one must consider how an algorithm would decide which action would bring about the least pain. Several questions follow. First, is protecting the driver more or less valuable than protecting other people? Should cars be programmed to always put the safety of the driver before the safety of others? Kantians might say that if a program chose to hit pedestrians to provide maximum safety for the driver, it would be using those pedestrians merely as a means, and would therefore be morally wrong. In this case, I agree with the Kantian view. Second, it is impossible for an algorithm, or the person creating it, to know which outcome would minimize pain. Sure, the algorithm could factor in the type of vehicle, and whether or not a motorcyclist was wearing a helmet. It could potentially take into account the ages of potential victims as well, but a wide range of painful consequences could never be predicted. For example, there could be many passengers inside a vehicle that the autonomous car cannot detect. The motorcyclist could be a single father on the way to pick up his children. Would it be morally right to hit the SUV, potentially hurting a larger number of people? Or would it be more morally correct to hit the motorcycle, creating orphans? What about the third option? Would anyone be willing to purchase a car that does not have their own best interest - or their passengers' best interest - in mind? I don't know which of these options is the right one, or whether any of them could be considered morally correct. This is one reason why I think we should avoid the mass production and distribution of driverless cars.
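To make the difficulty concrete, the utilitarian calculus described above can be caricatured in a few lines of code. This is purely a hypothetical sketch with invented harm scores, not how any real vehicle is programmed; indeed, the arbitrariness of the numbers is exactly the problem the argument identifies, since no one can actually know these values in advance:

```python
# Hypothetical sketch of a utilitarian collision algorithm. Each option
# carries rough harm estimates (arbitrary, illustrative numbers), and the
# car simply picks whichever option minimizes total estimated harm.

def choose_action(options):
    """Return the option with the lowest total estimated harm."""
    return min(options, key=lambda o: o["harm_to_driver"] + o["harm_to_others"])

options = [
    {"name": "hit motorcycle", "harm_to_driver": 1, "harm_to_others": 8},
    {"name": "hit SUV",        "harm_to_driver": 4, "harm_to_others": 4},
    {"name": "take the hit",   "harm_to_driver": 9, "harm_to_others": 0},
]

print(choose_action(options)["name"])  # prints "hit SUV" with these made-up scores
```

Change any single number, such as adding undetected passengers to the SUV, and the "morally correct" answer flips, which illustrates how much hidden moral weight such a program would carry.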

The next important question addressed in the reading is: should humans be programming cars to make these decisions at all? The article "A Huge Global Study on Driverless Car Ethics Found the Elderly Are Expendable" presented study participants with possible accident scenarios and asked them to pick what they thought was the morally correct outcome. The article states, "the more complex the scenario, the less decisive people were" (Smith). If humans cannot agree on the best possible outcome for complex situations, I don't think we should let one person, or a group of people, program an algorithm that is supposed to make morally correct decisions.

Utilitarianism and Kantian Ethics are both moral theories: Utilitarianism is a form of consequentialism, while Kantian Ethics is a deontological theory. Although I would not say I completely agree with either theory, my perspective on driverless cars is more Kantian than Utilitarian. Even so, I have objections to driverless cars from both views. First, regardless of whether a programmer decides that a car should sacrifice those outside the car or those inside it - including the driver - the program would be designed to treat people merely as a means. Second, autonomous cars will never be able to measure the full extent of pain versus pleasure an outcome would produce. Autonomous cars will always have moral flaws. I would suggest we simply do away with driverless cars, as it is impossible to program a car to be morally correct.
