Category Archives: What To Change

Morality

The computer is a tool: something whose purpose is to help us succeed in the many different tasks we face as humans. Just as a saw can help us cut a plank of wood, a computer is simply a tool that happens to have many different applications. As we have progressed, so have the capabilities of our computers, and we are now able to face a whole new world of tasks that we never could before. And as we begin to rely more and more on computers, we have turned more and more things into variables; everything from search histories to previously purchased items is used by our computers to cater to our individual needs and desires. The things we decide to measure and change depend on our specific needs and views of the world. For example, while I may be fortunate enough to measure things like how much fun I am having, someone in a less fortunate position may focus on survival rather than recreation.

For me, moral decisions are made using a few different things. First, I try to empathize and think about how I would feel if what I am doing happened to me; if it is something I would not enjoy, then it is likely morally wrong. Next, I like to use the experiences of those before me as a moral compass. Often, horrible moral tragedies take place because we do not look at the past and try to learn from it. If we learn from others’ experiences, we can avoid slipping into the same moral pitfalls as those before us. Finally, some of my moral compass was formed by my religion. Religion can be used to pass down morals from one generation to another and to form groups of individuals with common viewpoints. However, I believe it is important to remember that everything seen as “religious” may not necessarily be morally correct, so it is important to consider all the factors and empathize. As for technology and morals: I do not believe that technology makes moral decisions easier. While the interconnectivity of the internet may make empathy easier, morality is still something we need to come to within ourselves. Trying to use computers to synthesize all our moral viewpoints into two “good” and “bad” sides would inevitably cause more division.

The Variables in Our Lives – Haidt Response

In computer programming, variables are values that can be changed; the variables in our lives are things we need to change within ourselves, such as our opinions, habits, and morals. Every day we are constantly changing some variable in our lives. There are far too many people in this huge world of ours for everyone to share one and the same thought. Thus, countless opinions are spread throughout the globe, and it is impossible for them all to agree.
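
To make the programming sense concrete, here is a minimal Python sketch (the variable name is just illustrative): a variable is a named value, and assignment changes it.

    # A variable is a named value that can be reassigned.
    opinion = "machines should decide"
    print(opinion)                    # machines should decide
    opinion = "humans should decide"  # the same variable, a new value
    print(opinion)                    # humans should decide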

Morals vary from person to person. There is no definitive answer for what is “good” and “bad,” since we do not all think the same; it remains ambiguous. But I believe it is safe to say that we make careful and/or impulsive decisions based on our wants, needs, common sense, and our gut.

We have talked about the “universal machine” and whether it even exists. When discussing this particular machine, it was suggested that the perfect machine would have emotions and reactions like a human. But do we really want that? The answer was no. Although it would be a great technological advancement, machines should not be making moral decisions for humans. Machines are programmed one way and complete only their one task. If they receive as input a task they have never “heard” before, what are the chances of their producing a successful output? They would not be able to tell right from wrong, because they do not have the same instincts and backgrounds that humans have. The experiences we go through as individuals differ completely from person to person. There is no possible way for a machine to carry out emotional processes.

Machines represent an interesting dichotomy in their effect on moral decision making. On one hand, computers can represent an ‘ideal’ morally neutral machine: they can be unaffected by societal biases and untainted by hostility and short-sightedness. On the other hand, this neutrality can extend into an amoral state if they are left on their own, prioritizing whatever the programmer instructed without regard to moral consequences. I believe that technology, especially in the age of the internet, has generally assumed the latter role. In the age of algorithms and machine learning, it is widely apparent that the largest and most important functions in our society have been delegated to algorithms that pursue objectives without consideration of morality, whether that be maximizing advertisements clicked, videos watched, or monetary value gained.

Variables and the Morals of Technology

Variables are, in a broad sense, anything that changes. Few things in this world are immutable, but variables can be more specific than just numbers that change. In programming, variables are quantities that change in accordance with changes in input. It is hard to decide what is actually important, because sometimes unimportant things seem important and important things seem unimportant. However, any numbers that help classify and keep track of operations are important to measure. This takes computing power, but it also helps us understand everything going on in the equation, not just the beginning and the end, or the input and the output.
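
As a rough Python sketch of “numbers that help keep track of operations” (the function and names are invented for illustration), a tracking variable exposes every intermediate state, not just the input and the output:

    def running_total(inputs):
        # 'total' changes in accordance with each new input;
        # 'steps' records every intermediate value along the way.
        total = 0
        steps = []
        for x in inputs:
            total += x
            steps.append(total)
        return total, steps

    print(running_total([3, 1, 4]))  # (8, [3, 4, 8])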

Making moral decisions is difficult for a human because people are almost always looking out for themselves while trying to spin it in a way that makes it seem beneficial to others, even if it is not. I make moral decisions by weighing the pros and cons for me and for everyone who will be impacted by the decision. Machines should not make moral decisions, because morals are already very subjective. Computers can only do what they are programmed to do, so they do not have the freedom to make moral decisions that are not controlled by a human being. Furthermore, even if machines were able to understand morals, computers see in 0s and 1s. There is no single right or wrong answer to moral questions, yet computers are built on two switches, right and wrong, so they would not be able to make impartial decisions. Technology does not make moral decisions easier, because it allows people to hide from the truth. People can make decisions very anonymously through technology, divorcing themselves from the problem so that they can reap the benefits without having to experience the lows. Technology does allow for the spread of information so that people can make informed decisions, but people may also make decisions based on what others in separate situations did, which may not apply correctly in real life.

what to change

In programming, a variable means a value that can be changed. A variable in this context means a thing that needs to be changed in our lives. For example, we need to change the time we go to bed if we can’t get up in the morning.
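
The bedtime example translates almost directly into code; a minimal Python sketch with invented names:

    # If the current value doesn't work, we change the variable.
    can_get_up = False
    bedtime = "01:00"
    if not can_get_up:
        bedtime = "23:00"  # the 'variable in our lives' is changed
    print(bedtime)         # 23:00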


Normally, people believe that moral judgments are made from an individual’s experience, education, cultural background, etc. However, according to Haidt, moral judgments are made from moral sentiments. For example, the feeling of happiness induced by a pleasant fragrance has been shown to lead to more favorable moral judgments. Thus, we don’t really know where all of those judgments come from. There will never be a clear line between good and evil because, in reality, a large amount of the space between them is occupied by grey.


I think technology has made moral judgment worse in some ways. Here, I want to talk about a case that happened in China. Chen Yi Fa was an internet celebrity. However, because she said something that should not have been said about the history of China, she was completely banned from the internet. Her music, videos, photos, blogs, websites, literally everything, was erased within 24 hours by the government and by the people who hated her. They then tried to find more evidence against her on the internet. That is when technology makes moral judgment so dangerous: you can find everything about a person on the internet. Once something is on the internet, it can never be erased; it can only be hidden. There is no chance for regret.


Computers and morality

We turn constants, things that won’t change, into variables. The things we might measure or change are things that require human interaction or that communicate something to the user. I can’t speak for everyone, but for me personally the Bible is my moral compass. I was raised by it, and it’s what I base my decisions upon. And even though there might be a lot of rules, it all breaks down to the golden rule: “Do unto others as you would have them do unto you.” Yes, sometimes technology makes morality a bit hard. To paraphrase what Louis C.K. said in an old Conan interview, screens make us lose empathy. When asked why he doesn’t let his children use smartphones, he said something like: in real life, when they say something mean to someone, the face of the person on the receiving end changes, giving his daughters immediate feedback that what they did was wrong. But when they say these things from behind a screen, they don’t get that real-time feedback, so they don’t comprehend the consequences of their actions. Technology, or the internet in particular, no matter how privacy-invading it is, has brought us a significant amount of anonymity. This anonymity allows us to hide behind our screens and do things we wouldn’t do in real life.

about moral decisions

When we design our code, we usually choose the objects that change during the process to be variables; at least I do it this way, because that is what “variable” means. However, more problems follow: we have to decide which objects should change, and that is mostly based on our goal. This can be abstract, because we set a series of smaller goals in order to reach a larger one. Thus, we make decisions based on our needs and curiosity, and from those we find our goal and the goals beyond it. Making decisions becomes more complicated when we meet a moral dilemma. My personal method for moral decisions is to hurt fewer people. It sounds like machines could do this job better, since they are much more rational. However, in my opinion, making moral decisions can never be proper for machines, because they cannot take emotional costs and benefits into consideration. Still, technology assists us in making moral decisions by providing data and examples that widen an individual’s view of society.
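
A purely “rational” reading of “hurt fewer people” could be sketched in Python like this (the options and harm counts are invented for illustration); notably, the emotional costs mentioned above are exactly what such a model leaves out:

    # Pick the option that harms the fewest people.
    options = {"option_a": 3, "option_b": 1}  # option -> people harmed
    choice = min(options, key=options.get)
    print(choice)  # option_b
    # Emotional costs and benefits never appear in this calculation.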

“What To Change”

‘Morals’ are something that everyone has, consciously or subconsciously. They can be influenced and changed by different experiences, such as the environment one was raised in, the life one has lived, and the things one has lost. Each person has a different set of morals. Sometimes people share a moral, maybe having to do with religion or the lack thereof, but many of our actions and reactions as humans are controlled by our morals whether we realize it or not.

Sometimes the morals we have go against the things we have learned are ‘good’ and ‘bad’. But the terms ‘good’ and ‘bad’ are just as ambiguous as the word ‘moral’ is, and I don’t like that. They seem definite but in reality are just vague. When humans all share morals and morally fueled decisions, the second machines come into the picture, questions and arguments arise: “Should machines make moral decisions?” In my opinion, I don’t know if I can have an opinion on it. This question reminds me of my favorite movie, ‘I, Robot’, with Will Smith. In that movie, robots are capable of living alongside humans as really helpful tools but are not at the stage where they can make decisions for themselves. As the movie goes on, a single robot proves capable of making morally and emotionally fueled decisions, which is still a crazy reality even in that futuristic world. That one robot brings down the evil artificial intelligence that is about to take control of all the mindless machines and overthrow the humans. Though seemingly far off, this can be related to our reality now. Our dependence on technology is so monumental that when artificial intelligence comes into the picture, who’s to say that humans won’t willingly surrender to keep all the ‘benefits’ of having these tools? Some might argue that technology is turning humans into mindless robots themselves.

Emotional Dog Rational Tail

Variables are the parts of a structure that determine what that structure is. In math, changing the variable x from just 1 to 2 can greatly influence the outcome. In life, variables are the unique conditions each individual experiences that affect their lives and personalities. Some life variables are upbringing, social class, family life, environment, and life-changing experiences. These variables are greatly influenced by society. The things that are important to measure and change are the things that affect everyone on a daily basis, and anything that may cause bias. I would like to think I have a moral code that I apply to my moral decision making, but after reading “The Emotional Dog and Its Rational Tail” by Haidt, I know that’s not true. When making decisions, I predominantly use intuition. Intuition and instinct are the two most commonly used decision-making tools. When one is caught off guard and forced to make a quick decision, there is no other option. After making that decision, the only thing one can do to feel better is justify it. These two tools are also greatly influenced by the aforementioned variables. For these reasons, I do not believe machines should make moral decisions for humans. No machine is currently able to understand the conditions and variables of every single person and circumstance that may occur. Technology has only complicated moral decision making. We are now opened up to a whole new world of information and able to try to learn about and understand all walks of life. But with great power comes great responsibility. Those who are more socially aware and socially educated now have the responsibility to be more understanding of others and to educate others, whereas before, no one thought twice about what they believed their morals were. It has gone from “I believe this because it is right” to “I believe this, but I don’t know why it’s right.” Therefore, machines have made decision making both easier and harder.
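
For the math point, a tiny worked example in Python (the function is invented purely for illustration): a one-step change in x can swing the outcome by an order of magnitude.

    # Illustrative only: a small change in a variable, a large change in the result.
    def f(x):
        return 10 ** x

    print(f(1))  # 10
    print(f(2))  # 100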

Morality more like MoraliME

Strangely enough, I have been questioning the notion of morality for quite some time. I have been trying to understand whether there are actual moral rules we should follow, or if “morality” is simply a set of lies we told each other to prevent us from harming one another.

I arrived at this strange place because I realized that what we define as “moral” differs from person to person. Specifically, what one person considers morally wrong can easily be twisted to be seen as morally justified. For instance, when someone assaults another person, the assaulter always believes their actions were justified and morally “okay”, while the person assaulted believes the assault is morally wrong. In a less extreme example, when you lie to someone, even though lying would be considered morally wrong, you can justify why it was okay to lie in that instance and make the act seem morally acceptable. Of course, the argument could be made that there is an overall “moral good” and that those who do wrong and justify it are not actually justifying the act but merely lying to themselves. Yet I would counter that argument by noting that in nature, there is no right or wrong. When one animal kills another for survival, that act is not morally wrong or right; it is simply an act. If morality does not exist in the natural world, it must be a human-made phenomenon.

Thus, I believe that morality may just be a set of rules we created to establish civilized behavior and protect ourselves. Overall, things considered immoral are simply things that we would not want done to ourselves. Hence, I make my moral decisions based on whether I would want that act done to me. Of course, what we accept as okay to be done to us depends on the culture we are in.

Since morality is dependent on the person as well as the culture(s) the person was formed by, it seems difficult for machines to make “moral” decisions. A machine makes “decisions” based on what an engineer codes it to do. Thus, the machine would rely solely on the moral rules that engineer holds; however, one person’s morality cannot translate into the morality of the larger world.

One might argue that there could be a set of moral rules everyone agrees to that an engineer could code into a machine, and thus the machine could be responsible for only those moral decisions. For instance, no one would claim that murdering a child is a morally good action. However, machines are never used in such simple contexts. The context in which machines will be asked to make “moral decisions” is when they are autonomous, such as self-driving cars, and when they are autonomous, the moral decisions they make will not be that simple.

Imagine this. A self-driving car is driving along a dark, winding path. Out of nowhere, a child comes running across the path. The car could either hit the child or swerve into the side of the road, most likely killing its occupant.

The self-driving car is now faced with a moral decision. Perhaps the engineer coded the car to always save the occupant’s life, regardless of the situation. This would mean the car would freely run over the child.
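
As a hypothetical sketch (the function and names are invented), such a rule might look like the Python below; the point is that the “moral decision” is fixed at programming time, long before the accident ever happens:

    # A crude occupant-first policy, hard-coded by the programmer.
    def choose_maneuver(swerving_kills_occupant):
        if swerving_kills_occupant:
            return "stay on course"  # the child is hit; the occupant is saved
        return "swerve"

    print(choose_maneuver(True))  # stay on course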

I would assume that if we were in that situation and were actually driving the vehicle, most of us would swerve to avoid hitting the child, because it would seem like the morally right thing to do. However, that is simply because our culture, or at least my culture, prioritizes the lives of children. I am sure there are some cultures that would say it is morally right to save yourself and would agree with the car’s “decision”. Yet what if the child on the path were your own child? Then what would be morally “right”?

It quickly becomes clear that morality is quite complex and conditional. I am sure there is an extremely intelligent engineer who could code a machine to make moral decisions that most of the world, regardless of culture, could agree to. However, there will be moments, like the one in my example, in which the morality of the car’s decisions becomes less clear.

It is not that I think machines could not make moral decisions; it is that it may simply be too complicated to allow them to. Morality is simply a way we put checks on one another; it is a human-created concept to simplify the human experience, and perhaps it is best left in the hands of humans.

Technology is separate from the notion of morality. Thus, I do not think it possible for technology to help us with moral decision making. While technology is a tool, it cannot directly produce the culture and experiences that we use to define our morality.

Reading Response

Similar to the six tastes that human beings can perceive, there are six main moral taste receptors, which we can turn into variables: care and harm, fairness and cheating, liberty and oppression, loyalty and betrayal, authority and subversion, and sanctity and degradation. Contemporarily, especially in the context of the United States, our moral values pertain most closely to liberty and fairness. Since people are living more independent lives, more diversified ethical values can result. With that said, we measure and compare what we give and what we receive. If we don’t get as much as others gain, or we are not getting what we are paying for, then unfairness leads us to ask for change. Therefore, it is reasonable to argue that machines can’t make moral decisions. Technology can make moral decisions both harder and easier. The cyber community constructs a closer interrelationship among people, exposing us to more and more societal issues that may not be relevant to our individual lives. As a result, moral decisions are often associated with fame and reputation, which can deter people from carrying out their moral decisions.

Moral Machine?

These are questions I have thought about for a long time without seeing clear solutions. What is moral? Who decides what is moral? What is a machine? These questions would have many different answers across different times and places.

The definitions cannot be too general. For example, in Isaac Asimov’s novels, robots were supposed to follow the Three Laws of Robotics, which include that a robot must not injure a human. However, one clever robot decided that “human” should mean not individuals but humanity as a whole, so it killed two men who were threatening the human race. This robot did good for humanity as a whole, but it abandoned the original definition of “human”. And how could we know, the second time the robot kills someone, whether it is killing the right one?

The definitions also cannot be too specific. Throughout history, people have ruled over each other by calling others slaves, no longer counting them as human. If the definition is too restrictive, there would be many cases in which the computer recognizes a human as non-human.

Thus, in my opinion, it would never be a good idea to let machines get involved in moral matters. I would be glad to let them do everything else, but not morality, which even human beings themselves cannot figure out.

Various

What matters to us depends on us as individuals. Each person has gone through a particular sequence of things leading up to this point, and all those things have had an impact on their experience, altering what each person values. I may value time with my family; the person next to me may despise it. Both views are right for each of us because of our upbringings.

Who makes the decisions? The controversial subject of the self-driving car can help explain what I mean.

As Patrick Lin says, the outcomes of all foreseeable accidents will be determined years before they even happen. Programmers will dictate what happens when two lives are in danger. So I don’t think it’s machines making decisions; it’s machines following orders. And the difference between those two can be the difference between premeditated homicide and an instinctual reaction.

Then the legal implications come in: is the programmer of the code that instructs the car to save you responsible for the death of the person it couldn’t save? It’s all more complicated than we think it is.

Various by Haidt Response

Variables are elements, features, or factors that are liable to vary or change. In programming, they allow computers to vary and change very quickly. With that clarified, I believe we should turn preferences, aesthetics, and moral responsibility into variables. These things are all dependent ideas: given a certain situation, time of day, or even weather forecast, they change and should be allowed to change. We decide what is important to measure and change by comparing outcomes. If there are many outcomes presented and desired by people, we take that into consideration and try to capture all preferred possibilities and desires in order to be as inclusive as possible. Focusing more on moral decisions, people tend to base their decisions on their environment. The people, places, and experiences people encounter in their daily lives shape what they think is morally right and wrong. Their class status, race, ethnicity, and age also come into play when making moral decisions. It is a very subjective idea. Connecting all of this back to variables in programming, I do not think machines should be able to make moral decisions. Machines cannot hold the same experiences as humans and won’t empathize while making decisions, which makes their whole decision path purely logical. This in itself has many cons, such as the tendency to be very black and white. The internet today does the opposite, though: it makes it harder for us to make decisions and leaves us in a gray area. So much information and so many different perspectives are given to us that they overload our minds. It actually leads us to be more indecisive and neutral, given that information on the internet can always be false and carry bias.
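
A minimal Python sketch of a preference treated as a variable (the condition and values are invented for illustration):

    # The preference changes with the situation, here the weather.
    weather = "rain"
    if weather == "rain":
        preferred_activity = "reading indoors"
    else:
        preferred_activity = "hiking"
    print(preferred_activity)  # reading indoors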

Morality

While I would like to think of myself as a rational person, I know all too well that my moral decisions are almost always based on my emotional state or my relationships, some of the most volatile factors in life. Living in my brain has been one of the most nerve-wracking anxiety adventures ever since I realized how fragile the two pillars of my judgement are. When I think I’ve made the right, morally good decision, I regret it about five minutes later, backtracking to analyze everything I did wrong. More often than not, in nearly every scenario that goes slightly awry, I come to the conclusion that I am to blame, that the ultimate epitome of moral righteousness comes down to taking the blame onto myself. It has almost always been the easiest route, and always the right one for me.

Technology, or more accurately, media, acts as an unconscious influence on our morals. Too often, it clouds my judgement. From behind a screen, people are less like people, and I start to idolize certain “people”, certain types of behavior, certain sets of morals. When I try to apply them to my own life, let’s just say that the ideal of maintaining a sarcastic cool girl image has backfired on me more than a couple of times when trying to make moral decisions.

The struggle of making moral decisions is one of the defining aspects of humanity. The whole point is that we wrestle with and evolve alongside morality so that we may come to a better understanding of it, even if it seems like a fruitless, tedious endeavor. It’s how we can even slightly confront the world before us. If that responsibility fell to machines, I think we would all lose our collective minds. Machines run on algorithms, nothing more and nothing less. Even if we were to program machines with some kind of moral algorithm, they would never see the context, and something would always feel wrong about their decisions. The pillars that uphold my judgement may shake at the slightest disturbance, but they are the only foundation I have. It’s not something I can give to a machine.

I believe that what matters to us, and what is important to measure and change, depends largely on the culture or society we belong to. For example, someone who lives in the USA may focus on finding ways to make driving less work, while in other countries the focus may simply be on discovering ways to build better, more functional roads. Moral decisions are touchy because they are also very subject to cultural differences. Something that may be morally wrong in the mind of a follower of one religion may be completely acceptable to a follower of another. That being said, I do believe certain things, like murder, stealing, and lying, are generally seen as universally wrong. I do not believe technology should have the role of making moral decisions for us; by doing that we would be dehumanizing ourselves in a very dramatic way. To many people, what makes us human is our very ability to decide what is right and wrong and to make conscious, introspective decisions. If we try to digitize that process, we will undoubtedly fail to represent many groups, because morals change from culture to culture and even from person to person.

About moral decisions

I think morality is a concept based on individual values. Moral standards form differently due to factors such as living environment, life experience, or social community. On some occasions it is easy for us to make decisions: for example, whether to help or save someone if you have the ability and time to do so. But there are also occasions where every choice has its own advantages and disadvantages, which makes it hard for us to choose. Personally, I might first consider my intuition. I may also think about my social relationships or other factors. It is always a struggle to make a decision between two or more equally important choices. However, the fact is that on many occasions we can only choose one and give up the others. How to choose depends on what I really care about more. Moral standards are just a representation of personal values.


From my point of view, machines cannot make moral decisions. They are just tools that humans invented to achieve higher working efficiency. They have neither thoughts nor emotions; all they can do is follow human instructions. Maybe in the future machines will be able to make easy decisions on moral problems, but I believe that would only be because the machines are obeying specific moral standards or rules that humans convey, rather than giving a result after thorough consideration.


In many ways, technology makes it easier for us to make decisions. We can use technology to avoid the controversial situations in which we must give up one of the choices, and so reach a better result.

Morality? What Does that Mean?

Morality could be defined as the principles concerning right and wrong, or good and bad, behavior. Each of us has our own morals, and while they may be similar in very general aspects, they tend to differ in more specific ones. For example, one person might feel it right to give money to a homeless person they meet, while another may not, because they don’t know what the money would be used for. Another good example: suppose your best friend tells you a secret that you feel another person should know. One person might feel it more important that someone knows, while another might value their friend’s trust more and keep it between them, as asked. If a situation comes your way in which you’re conflicted about how to act, it usually comes down to your morals to make that decision. So the next question arises: should machines make moral decisions?

Machines are very blunt: they take an input and create an output. There are no “maybes”, considerations, or second thoughts. A machine doesn’t know the lives and preferences of those involved in a decision. It might seem that a machine has chosen the most efficient way of handling something, but efficiency is not the same as morality. While some may disagree, I believe moral decisions should NOT be placed in the hands of machines.

I believe everyone is different. For every person who agrees with a decision or event, there is a person who disagrees. Given such an influential process, I believe a machine that doesn’t understand the backgrounds of those involved can never provide an outcome better than people can, since people can discuss, compromise, and decide together on what works best. Machines should be used to assist the things we do, and to me, making moral decisions seems like a function that could only create problems.