There are many differences between utilitarianism and Kantian ethics. Utilitarianism holds that our actions should produce more happiness than pain. Act-utilitarianism holds that an individual action is right or wrong depending on how much happiness that particular action produces, whereas rule-utilitarianism holds that we should follow the rules that, when generally observed, produce the greatest amount of happiness. John Stuart Mill states, “The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness” (Mill 82). In other words, he believes that actions are good when they further happiness. The Greatest Happiness Principle states that actions are moral when they promote utility. Mill continues, “By happiness is intended pleasure, and the absence of pain; by unhappiness, pain and the privation of pleasure” (Mill 82). He then distinguishes the pleasures of the mind and thoughts from the pleasures of the body and explains which of these are the higher pleasures.
For example, human pleasures would be considered higher than animal pleasures. Later in the text, Mill concludes, “Utilitarianism, therefore, could only attain its end by the general cultivation of nobleness of character, even if each individual were only benefited by the nobleness of others, and his own, so far as happiness is concerned, were a sheer deduction from the benefit” (Mill 84). Kantian ethics, on the other hand, holds that actions are moral only if they are motivated by duty alone. Kant says, “Nothing can possibly be conceived in the world, or even out of it, which can be called good, without qualification, except a Good Will” (Kant 85). The categorical imperative, according to Kant, is the command that does not depend on any want or end. It has two formulations. Kant explains, “We must be able to will that a maxim of our action should be a universal law” (Kant 92). Of the second formulation of the categorical imperative, Kant says, “So act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as a means only” (Kant 92). On this view, making a false promise is wrong in any situation, because if everyone broke their promises, a promise would not have any meaning. Utilitarianism is a consequentialist theory, while Kantian ethics is a deontological theory. Overall, utilitarianism and Kantian ethics have many differences.
Have you ever thought about how driverless cars are programmed? What happens when an accident is about to occur? Should the car protect you or the people outside of it? Ariel Bogle, Oliver Smith, and Patrick Lin have each considered these questions. Bogle’s article, “Driverless Cars and the 5 Ethical Questions on Risk, Safety and Trust We Still Need to Answer,” discusses the ethical challenges self-driving cars will face. She first asks which risks are worth taking, explaining that these cars must weigh risk against reward. For now, self-driving cars are programmed to avoid taking any risks, but as she puts it, “But no one wants a car that doesn't take any risks at all. It wouldn't leave your driveway” (Bogle). Next, she asks whether we are making the choices or the car is, and brings up the trolley problem. Dr. Danks continues these thoughts on driverless cars, pointing out that the cars “don't think in terms of people versus dogs versus light posts... They think in terms of high value or low value” (Bogle). She then asks whether there are moral principles we can all agree on, or whether we should each choose our own. Bogle raises many questions about whether a car should take a deontological or a utilitarian approach when a wreck is about to take place. Should all cars behave the same, or should each owner get to program the car the way they believe it should be programmed?
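Dr. Danks’s point that the cars reason over numeric values rather than categories like “person” or “dog” can be pictured with a short sketch. Everything here, including the object labels, the weights, and the function names, is invented for illustration; no real vehicle software is claimed to work this way.

```python
# Hypothetical illustration of "high value or low value" reasoning:
# the car assigns each detected obstacle a numeric value and steers
# along the path whose total collision cost is lowest. The labels and
# weights are invented for this sketch.

OBJECT_VALUES = {
    "pedestrian": 100,  # people carry the highest value
    "dog": 10,
    "light_post": 1,
}

def path_cost(obstacles):
    """Sum the value of everything a candidate path would strike."""
    return sum(OBJECT_VALUES.get(obstacle, 0) for obstacle in obstacles)

def choose_path(paths):
    """Pick the candidate path with the lowest total collision cost."""
    return min(paths, key=lambda name: path_cost(paths[name]))

# Staying straight would hit a pedestrian; swerving hits only a light post.
paths = {
    "straight": ["pedestrian"],
    "swerve": ["light_post"],
}
```

Under this scheme the car never asks what kind of thing it is about to hit, only how its numeric cost compares to the alternatives, which is exactly the shift Danks describes.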
Lastly, she asks about trusting our self-driving cars: to trust something, we need it to be reliable, and we need to understand how and why it works. Oliver Smith’s article, “A Huge Global Study On Driverless Car Ethics Found The Elderly Are Expendable,” discusses Moral Machine, “a game of ethics which presents players with the kind of road choices which driverless vehicles will soon have to make” (Smith). The results of this game were, much of the time, split evenly between the two options; in other words, not everyone is going to agree on what these self-driving cars should do when it comes to ethics. Patrick Lin’s video, “The Ethical Dilemma of Self-driving Cars,” considers how a car should be programmed when a wreck is unavoidable. As an accident approaches, should the car let it happen, endangering the passenger, or should it hit a surrounding car, saving the passenger but endangering others? Many questions like this still need answers. Lin also explains that cars make decisions, while humans have reactions. Bogle, Smith, and Lin all raise important and controversial questions.
I think that these self-driving cars should have a utilitarian algorithm, though it is hard to say whether that should be their programming all of the time. The car should protect the passenger; if I am paying a large amount of money for a car, it should at least protect me. This is hard, though, because the car should also protect the people outside of it. As a person walking across the road or driving next to such a car, I think it would be wrong for the car to injure me, someone who did nothing wrong, to spare its passenger. If only there were a way to avoid putting others in harm’s way while also protecting the passengers of the self-driving car. In the movie Demolition Man, when the car goes out of control and crashes, it releases a “secure foam” that protects the person inside. Something like this would be an ideal way to protect people both inside and outside the vehicle. Owners should not be able to program their own car to put one person’s life over another’s, whether that person is inside or outside the car. This could cause many problems because everyone has different ideas about what their car should do; I feel that in the long run, people programming their own cars would cause more wrecks than if everyone’s car reacted the same way to the same situations. I think I take a mostly utilitarian view of this issue, but really it would be best to find a way to protect everyone in most cases.
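A minimal sketch can show what the utilitarian rule argued for above would look like in practice: weigh everyone’s expected harm equally, passenger and bystanders alike, and pick the action with the lowest total. The action names, probabilities, and severity numbers are all invented for illustration.

```python
# Hypothetical utilitarian decision rule: every person's harm counts
# the same, whether they are inside or outside the car. The scenarios
# and numbers below are invented.

def expected_harm(outcome):
    """Total probability-weighted harm across everyone affected."""
    return sum(prob * severity for prob, severity in outcome)

def utilitarian_choice(actions):
    """Choose the action with the least total expected harm."""
    return min(actions, key=lambda name: expected_harm(actions[name]))

# Each outcome lists (probability_of_injury, severity) per person affected.
actions = {
    "stay_course": [(0.9, 10)],               # passenger almost surely hurt
    "swerve": [(0.2, 10), (0.2, 10)],         # small risk to passenger and one bystander
}
```

Because the rule has no special weight for the passenger, it will sometimes choose to endanger the buyer of the car, which is exactly the tension between protecting the passenger and protecting bystanders discussed above.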