Strangely enough, I have been questioning the notion of morality for quite some time. I have been trying to understand whether there are actual moral rules we should follow, or if “morality” is simply a set of lies we told each other to prevent us from harming one another.
I arrived at this strange place because I realized that what we define as “moral” differs from person to person. Specifically, what one person considers morally wrong can easily be twisted to seem morally justified. For instance, when someone assaults another person, the assaulter always believes their actions were justified and morally “okay,” while the person assaulted believes the assault was morally wrong. In a less extreme example, when you lie to someone, even though lying is generally considered morally wrong, you can justify why it was okay to lie in that instance and make the act seem morally acceptable. Of course, the argument could be made that there is an overall “moral good,” and that those who do wrong and justify it are not actually justifying the act but merely lying to themselves. Yet I would counter that argument by noting that in nature there is no right or wrong. When one animal kills another for survival, that act is neither morally wrong nor morally right; it is simply an act. If morality does not exist in the natural world, it must be a human-made phenomenon.
Thus, I believe that morality may just be a set of rules we created to establish civilized behavior and protect ourselves. Overall, the things we consider moral are simply things we would not want done to ourselves. Hence, I make my moral decisions based on whether I would want that act done to me. Of course, what we accept as okay to be done to us depends on the culture we are in.
Since morality depends on the person as well as the culture(s) that formed the person, it seems difficult for machines to make “moral” decisions. A machine makes “decisions” based on what an engineer codes it to do. Thus, the machine would rely solely on the moral rules of that engineer; however, one person’s morality cannot translate into the morality of the larger world.
One might argue that there could be a set of moral rules everyone agrees on, that an engineer could code those rules into a machine, and that the machine could then be responsible for just those moral decisions. For instance, no one would claim that murdering a child is a morally good action. However, machines will rarely face decisions that clear-cut. The context in which machines will be asked to make “moral decisions” is when they are autonomous, as with self-driving cars, and when they are autonomous, the moral decisions they make will not be that simple.
Imagine this. A self-driving car is traveling along a dark, winding road. Out of nowhere, a child comes running across the road. The car could either hit the child or swerve off the road, most likely killing its occupant.
The self-driving car is now faced with a moral decision. Perhaps the engineer coded the car to always save the occupant’s life, regardless of the situation. This would mean the car would freely run over the child.
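To make that concrete, here is a minimal, purely hypothetical sketch (in Python) of what such a hard-coded rule might look like. The function and its inputs are my own illustration, not anything taken from a real self-driving system; the point is simply that one engineer’s fixed rule becomes the machine’s entire “morality.”

```python
# Purely hypothetical sketch: a hard-coded "always protect the occupant" rule.
# The names and inputs are invented for illustration and come from no real
# self-driving system.

def choose_action(child_on_road: bool, swerving_endangers_occupant: bool) -> str:
    """Return the maneuver the car takes under the engineer's fixed rule."""
    if child_on_road and not swerving_endangers_occupant:
        return "swerve"        # avoiding the child costs the occupant nothing
    return "stay on course"    # in every other case, the occupant's life wins

# The dilemma from the example above: swerving would likely kill the occupant,
# so the rule tells the car to keep going and hit the child.
print(choose_action(child_on_road=True, swerving_endangers_occupant=True))
```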
I would assume that if we were in that situation and actually driving the vehicle, most of us would swerve to avoid hitting the child, because it would seem like the morally right thing to do. However, that is simply because our culture, or at least my culture, prioritizes the lives of children. I am sure there are cultures that would say it is morally right to save yourself and would agree with the car’s “decision.” Yet what if the child on the road were your own child? What would be morally “right” then?
It quickly becomes clear that morality is quite complex and conditional. I am sure there is an engineer intelligent enough to code a machine to make moral decisions that most of the world, regardless of culture, could agree with. However, there will be moments, like the one in my example, in which the morality of the car’s decision becomes far less clear.
It is not that I think machines could never make moral decisions; it is that I think it may simply be too complicated to let them. Morality is a way we put checks on one another: it is a human-created concept to simplify the human experience, and perhaps it is best left in the hands of humans.
Technology is separate from the notion of morality. Thus, I do not think it is possible for technology to help us with moral decision-making. While technology is a tool, it cannot directly produce the culture and experiences we use to define our morality.