The Ethical Dilemmas Facing Self-Driving Cars
Adapted from a college final paper...

Introduction
I wrote this paper as a final for one of my college courses about two years ago. Since the professor asked to keep it and I believe the topic is still highly relevant, I have adapted the paper into a Vocal story below. Enjoy!
The ethical dilemma facing self-driving cars is this: should the cars and/or their human programmers be responsible for ensuring the safest outcome for another motorist, pedestrian, or other road hazard? A separate but related thought is that the average human cannot understand the complex technology that goes into an autonomous vehicle, such as its algorithms and the decisions (potentially ethical ones) the vehicle makes. In that case, how could the owner of the vehicle be liable for what the vehicle does?
This may leave the manufacturer liable for accidents caused by the car. With self-driving vehicles being a rapidly growing and changing technology, this is a compelling ethical issue because human lives, manufacturers' liability, and manufacturers' stakeholders are all at stake. Based on general consensus and ethical principles, which could fill a separate book, we know that human lives have value.
The Trolley Problem
From a utilitarian standpoint, we can argue that the “trolley problem” applies to this dilemma, posing the question: do we minimize the loss of human life, or do we minimize intervention in a life-or-death scenario? A more basic formulation of the trolley problem is the question “Would you kill one person to save five?” (D'Olimpio). When comparing the basic and complex versions of the problem, it is easy to see that there are many issues and gray areas, such as qualitative vs. quantitative factors and the question of whether the self-driving car should be able to intervene at all.
Distinguishing Right & Wrong
Chipman, in “Exploring the Ethics Behind Self-Driving Cars,” discusses how minimizing the loss of human life can “feel wrong,” citing an example in which an autonomous vehicle must choose between hitting one of two motorcyclists, only one of whom is wearing a helmet. Chipman argues that in theory the car should hit the one wearing the helmet to minimize deaths, since the helmeted rider is more likely to survive the impact, but that it “doesn't feel right” to take this route, or to program the car to take it. There is an underlying question of why it doesn't feel right. Is it because the other motorcyclist should be wearing a helmet, and therefore should be hit because of the precaution he or she did not take? What if the scenario were four 30-year-olds vs. one 30-year-old? It would make the most sense to hit the one 30-year-old. But how about a 10-year-old child vs. three 50-year-old men, or anyone else society may deem of lesser value (e.g., developmentally disabled and handicapped people)? This becomes a matter of act vs. rule utilitarianism, which, again, could fill its own book. Suddenly the problem is no longer just about the greatest happiness for the greatest number, because the algorithm may now be discriminating against certain groups.
Society, or a representative like government, would likely have to rule on whether strict utilitarianism (saving the most lives and nothing else, with no concern for other factors) should be applied to these vehicles.
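To make this concrete, below is a minimal sketch, in Python, of what a strictly utilitarian decision rule could look like. Everything here is hypothetical (the names, the structure, the fatality estimates); real autonomous vehicle software is vastly more complex. The key point is simply that the objective counts expected deaths and nothing else; attributes like age, gender, or helmet use never enter the calculation.

```python
# Hypothetical sketch of a strictly utilitarian collision chooser.
# All names and numbers are illustrative, not a real AV system.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_fatalities: float  # estimated deaths if this path is taken
    # Deliberately no fields for age, gender, helmet use, etc.;
    # strict utilitarianism considers only the expected body count.

def choose_outcome(outcomes: list[Outcome]) -> Outcome:
    """Pick the path that minimizes expected loss of life."""
    return min(outcomes, key=lambda o: o.expected_fatalities)

options = [
    Outcome("swerve left toward one pedestrian", expected_fatalities=1.0),
    Outcome("continue straight toward four pedestrians", expected_fatalities=4.0),
]
print(choose_outcome(options).description)
# -> swerve left toward one pedestrian
```

Even this toy version exposes the hard questions: who supplies the expected-fatality estimates, and should “fewest deaths” really be the only term in the objective?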
Should Cars Be Able To Make Ethical Decisions?
Additionally, Wamsley's reporting in “Should Self-Driving Cars Have Ethics?” builds on the question of which set of ethical values should be applied. Wamsley argues that self-driving cars should not have ethical values so long as society lacks generally agreed-upon ethical values that can be applied to the cars (universalism). Universalism is an important factor here as well, because the rollout of self-driving cars will undoubtedly sweep the world quickly, and the cars may need to have different ethics in different places to reflect the differing values of other societies.
Fundamentally, societies are similar, but across continents things can change drastically; in a place like Saudi Arabia, for instance, the cars might default to hitting women rather than men, as grim an example as that may be.
Current Regulation & Decisions Around The World
Also noted in the article is that Germany, one of the first nations to release rules for self-driving vehicles, has stated that autonomous vehicles cannot make decisions based on people's gender, age, etc., but can make decisions aimed at reducing the overall number of injuries. This suggests Germany is approaching these ethical issues from a strictly utilitarian standpoint, which appears to be appropriate.
There is a lingering question, though, of how this will affect cases of manufacturers' liability, as previously mentioned, because the car, programmed by and acting on behalf of the manufacturer, is making a decision that costs lives. The humans in the car will certainly not understand the decisions it makes, either. Chipman also discusses manufacturers' liability and the potential to let people determine their car's level of selfishness (saving their own two lives over someone else's three, a trolley problem), which may reduce and potentially rule out manufacturers' liability. The issue that remains from a legal perspective is that a judge may ultimately decide the company is liable because it was using driver selection to defer liability. Yet all of these examples are of what can be done; ultimately it will be the responsibility of governments and manufacturers to determine most of the outcomes.
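As a thought experiment, Chipman's selfishness setting could be modeled as a single weighting parameter. The Python sketch below is hypothetical, not any manufacturer's actual scheme: it weights expected occupant deaths against outsider deaths, and a setting above 1.0 tilts the car toward protecting its own passengers.

```python
# Hypothetical "selfishness dial": a user-set weight on occupant lives
# relative to everyone else's. Purely illustrative.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    occupant_fatalities: float   # expected deaths inside the car
    outsider_fatalities: float   # expected deaths outside the car

def weighted_cost(o: Outcome, selfishness: float) -> float:
    """Weighted loss of life; selfishness > 1.0 favors the occupants."""
    return selfishness * o.occupant_fatalities + o.outsider_fatalities

def choose_outcome(outcomes: list[Outcome], selfishness: float) -> Outcome:
    return min(outcomes, key=lambda o: weighted_cost(o, selfishness))

options = [
    Outcome("swerve into barrier", occupant_fatalities=2, outsider_fatalities=0),
    Outcome("continue into crowd", occupant_fatalities=0, outsider_fatalities=3),
]
# A neutral setting (1.0) sacrifices the two occupants to save three:
print(choose_outcome(options, selfishness=1.0).description)  # swerve into barrier
# A selfish setting (2.0) prices the occupants' deaths at 2.0 * 2 = 4.0,
# so the car now continues into the crowd instead (cost 3.0):
print(choose_outcome(options, selfishness=2.0).description)  # continue into crowd
```

Notice that pushing the choice into a user-set dial does not make the ethical question disappear; it only changes who turned the dial, which is precisely the detail a judge might seize on when assigning liability.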
Nyholm and Smids, in “The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?”, argue that the trolley problem may not be an accurate ethical dilemma to apply to the issues facing autonomous vehicles, which is an important counterargument to Chipman and Wamsley. They argue that because the trolley problem is rooted in human morals, it is a good way of describing the situations these cars will face but a poor way of applying ethics to them: humans can hold each other accountable in such a situation, but a car cannot be held accountable.
Relevant Case Law
While Nyholm and Smids' ideas have merit, they do not engage with the laws that will ultimately hold humans and/or corporations accountable to the victims of an accident (product liability laws). It is merely a matter of determining who is, in fact, accountable. A key precedent here is the judgment in MacPherson v. Buick Motor Co. in the New York Court of Appeals, where the judge removed the requirement of privity of contract (being a party to the contract) from manufacturers of products, creating product liability as a separate type of negligence.
Additionally, expanding on the algorithms-and-liability thought: while many think an algorithm cannot discriminate or be biased, it absolutely can. Take, for example, an algorithm implemented by Microsoft for development and testing purposes. It employed machine learning from Microsoft's search engine Bing to develop a kind of consciousness to make decisions. While algorithmic consciousness may be scary, what is more frightening is that this algorithm ended up wildly discriminatory by race, gender, and other parameters. Why did this happen? Its “learning” data set came from humans' web searches; humans are biased, and the algorithm learned to mimic those biases almost instantaneously. This leaves me asking: who, then, decides the parameters that would go into an algorithm making the decisions of a self-driving vehicle, and could the technology ever be perfect?
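To show mechanically how a “neutral” algorithm inherits human bias, here is a toy sketch in Python. The training sentences are fabricated for illustration, and this is nothing like Microsoft's actual system; the point is only that a model which counts patterns in human-produced text will echo whatever skew that text contains.

```python
# Toy demonstration that a model trained on biased text reproduces the
# bias. The "training data" below is fabricated for illustration.

from collections import Counter, defaultdict

training_sentences = [
    # Imagine these were scraped from real human-written text:
    "the engineer fixed his code",
    "the engineer debugged his program",
    "the nurse checked her charts",
    "the nurse updated her notes",
]

# "Learn" which pronoun each occupation co-occurs with by simple counting.
pronoun_counts = defaultdict(Counter)
for sentence in training_sentences:
    words = sentence.split()
    occupation = words[1]  # e.g., "engineer"
    for word in words:
        if word in ("his", "her"):
            pronoun_counts[occupation][word] += 1

def predict_pronoun(occupation: str) -> str:
    """Return the pronoun the model most associates with an occupation."""
    return pronoun_counts[occupation].most_common(1)[0][0]

print(predict_pronoun("engineer"))  # -> his
print(predict_pronoun("nurse"))     # -> her
# The model was never told anything about gender; it faithfully mirrors
# whatever skew existed in the data it was given.
```

If an accident algorithm were trained the same way, on data reflecting how humans actually drive and whom they endanger, it could absorb exactly the kind of discrimination Germany's rules attempt to prohibit.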
The ethical issues surrounding self-driving cars form a high-level thinking problem, as well as a waterfall of related issues that come with its exploration, like manufacturers' liability for ethical decisions. In summary, there are still many more ethical questions to be discussed on this topic, and as a whole, the issue may not center on the ethics of self-driving cars but on the ethics of algorithms in general that attempt to make such hard decisions without bias.
Bibliography
Chipman, Ian. “Exploring the Ethics Behind Self-Driving Cars.” Stanford Graduate School of Business, 13 Aug. 2015, www.gsb.stanford.edu/insights/exploring-ethics-behind-self-driving-cars.
D'Olimpio, Laura. “The Trolley Dilemma: Would You Kill One Person to Save Five?” The Conversation, 10 Dec. 2019, theconversation.com/the-trolley-dilemma-would-you-kill-one-person-to-save-five-57111.
Nyholm, Sven, and Jilles Smids. “The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?” Springer Link, Springer Netherlands, 1 Jan. 2016, link.springer.com/article/10.1007/s10677-016-9745-2.
Wamsley, Laurel. “Should Self-Driving Cars Have Ethics?” NPR, NPR, 26 Oct. 2018, www.npr.org/2018/10/26/660775910/should-self-driving-cars-have-ethics.
“MacPherson v. Buick Motor Co.” Wikipedia, Wikimedia Foundation, 13 Dec. 2018, en.wikipedia.org/wiki/MacPherson_v._Buick_Motor_Co.