Category Archives: Automatic Ifs

Auto If Statement

Algorithms are naturally everywhere. They run the way we run. Everyone has their own personal algorithm based on their own behaviors. Algorithms are fed with the data we put in, and they organize it to create more output for us to take in. For example, agreeing to cookies whenever you enter an app or site feeds them data.

It’s crazy how much we feed off of technology. And this technology was made by us. People. People who are software engineers and tech researchers all over the globe at Fortune 500 companies. They are driven by their work and their company’s profit rather than by the societal good of trying to keep some parts of a person’s life private.

But let’s not forget the benefits of the algorithms that aid us in our daily lives, making them easier or more fun. Apple created facial recognition for its latest phones, so only the right person can unlock their own phone. This is much easier than typing in your password or code. There is also Snapchat, which uses facial tracking to power the augmented reality filters in its social media app.

Auto If

There are many algorithms that we deal with in our daily lives, and many of them are able to enhance our experience while using technology: algorithms used for programs like Snapchat, facial recognition, and voice recognition. While each of these is a very useful tool with the potential to make our jobs easier, the problem comes in when these services only cater to a particular demographic (which tends to be the same demographic as the people designing these algorithms).

While all the programs I mentioned above can be helpful resources, they are all examples of programs that use algorithms that don’t take into account the different demographics using them. Facial recognition and Snapchat filters are both systems that aren’t as effective when applied to darker-skinned faces. This is indicative of the fact that the algorithms are created by and tested on white people. Algorithms like voice recognition fail to accommodate those with speech disabilities or less common dialects.

Elevator

I was waiting for the elevator going to SC.

This building is very weird. There are no stairs going to SC apart from the emergency ones. Though there are two elevators, only one can actually go to SC. It seemed that an unlucky enough person might never get to SC, since the elevator that picked them up would always be the one that doesn’t go there. And this did happen to me once when I was trying to get access to the printer in the lounge.

Later, a friend of mine told me that if I really want to go to SC, I should press Down at 1F. It seemed weird, since between 1F and SC there is a -1F, and the elevator that has no SC access does go to -1F. But by following this rule, I could really get to SC whenever I wanted. In that case, the elevator has been given an exception: if the button is pressed at 1F, the elevator that goes to SC gets very high priority to pick people up first.

This is one representation of algorithms in our lives. In this case, the elevator is designed to run some calculations according to the requests it has already received. If elevator 1 is at 1F and elevator 2 is at 5F, someone at 2F would usually be picked up by elevator 1. And in special cases, it applies special logic, like the rule for going to SC.
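Out of curiosity, here is a minimal sketch of what a dispatch rule like that might look like. Everything here is an assumption on my part: the floor encoding, the field names, and the exact priority rule are guesses for illustration, not the building’s actual controller logic.

```python
# Hypothetical sketch of the dispatch rule described above.
# Floor encoding is an assumption: SC = -2, "-1F" = -1, 1F = 1, etc.

def dispatch(call_floor, direction, elevators):
    """Pick an elevator for a hall call.

    elevators: list of dicts like {"id": 1, "floor": 5, "serves_sc": True}
    """
    # Special case: a Down call at 1F is probably headed for SC,
    # so give the SC-capable elevator top priority.
    if call_floor == 1 and direction == "down":
        sc_cars = [e for e in elevators if e["serves_sc"]]
        if sc_cars:
            return min(sc_cars, key=lambda e: abs(e["floor"] - call_floor))

    # Default rule: send the nearest elevator.
    return min(elevators, key=lambda e: abs(e["floor"] - call_floor))

elevators = [
    {"id": 1, "floor": 1, "serves_sc": True},
    {"id": 2, "floor": 5, "serves_sc": False},
]
print(dispatch(2, "up", elevators)["id"])    # 1: nearest car wins
print(dispatch(1, "down", elevators)["id"])  # 1: SC-capable car wins
```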

Automatic If Statement

In their current state, algorithms run our lives by silently organizing and personalizing the data feeds in which we spend our time and learn about our world. They decide on the news we see, the friends we find, and the information we research. In the future, this role in determining societal communication and information distribution will only grow as more of our daily lives fall into technological domains in which algorithms govern discourse. These algorithms are made by teams and committees of software engineers and researchers at major tech companies, meaning that first and foremost they are crafted with a focus on company profit rather than societal good.

AUTO IF

Algorithms run our lives by dictating what we as a society deem “correct” or “true.” Just as Joy Buolamwini’s face failed to register at the MIT Media Lab, facial recognition is often unable to properly recognize the correct user. Joy discovered that the program was unable to recognize her face, the only darker-skinned face in the experiment. What comes to mind is the Apple iPhone incident in which the facial recognition lock screen opened for the wrong user. The phone owner and the wrong user were both Asian women similar in age. It could be argued that a lot of Asian people “look alike,” but in reality this would never happen to any two white people, unless they’re twins.

The reactions to this iPhone failure alone prove how society views each race with different standards. The stereotype that all Asian people look alike only exists in America because America is a white-dominant country. In Asia, people can tell Chinese from Japanese from Korean at a quick glance. Algorithms are only pushing us to think this way is okay, because they will return either an unlocked phone or a locked phone, ingraining the racial stereotypes in us without us even noticing.

Algorithms run our lives just like society runs our lives. Many people never stop to think how much society’s rules and ideals have changed the way we think and act, and are ingrained into our way of life. I think algorithms can work the same way. We never stop to think about the ingrained stereotypes that are present in our daily lives.

Algorithms are created by the top 1% to cater to themselves. They are looking for ways to benefit themselves, and they base many of their choices on money, power, or influence.

Algorithms

Algorithms run our lives because algorithms are in everything. Algorithms are just the equations that decide behavior. Every decision you make is based on your personal algorithm. Other algorithms, like the ones that predict weather and traffic, also run our lives because we make decisions based on what these algorithms say, whether or not they are always correct. Especially with the rise of machine learning, algorithms become more and more important so that computers can learn how to do things, for example, recognizing faces as mentioned in the Joy Buolamwini TED talk. As machines learn to do more and more, it is important to look at how people are deciding what counts as what. The people who make these algorithms are not immune to biased tendencies, as no human being is, so it is important that they choose the data sets they use to teach machines carefully, with an eye for keeping everything diverse.

They make these algorithms by setting up parameters and giving machines sets containing things the machine has to identify and things the machine is not supposed to identify. The machine goes through the set and is told whether it identified things correctly or not. Then it learns from before, and when given a new set, it tries again with more success. With more diverse sets, computers will learn to recognize more, which is important for including everyone in these new technologies. This is what Joy Buolamwini was talking about when she advocated for taking selfies and sending them in so people could make bigger sets for face-identifying robots. Through a community of people who want to make technology more inclusive, people are able to create sets that better teach computers.
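To make that guess-check-adjust loop concrete, here is a minimal sketch using a tiny perceptron. The data and names are invented for illustration, and real face-recognition models are vastly larger, but the loop has the same shape: guess, get told whether the guess was right, adjust, try again.

```python
# Minimal sketch of the labeled-set training loop described above.
import random

def train(examples, epochs=20, lr=0.1):
    """examples: list of (features, label); label 1 = 'identify', 0 = 'don't'."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for x, label in examples:
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - guess          # the "told whether it was correct" step
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err                # adjust, then try again on the next pass
    return w, b

# Things the machine should identify (label 1) vs. not (label 0); made-up data.
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.2, 1.0], 0)]
w, b = train(data)
print(w, b)
```

If the training set only ever contains one kind of face, the learned weights will only ever separate that kind of face, which is exactly the diversity problem described above.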


How algorithms run our lives

I never got this close to algorithms themselves before I entered IMA. It is also the first time I have thought about a moral system for coding. From my perspective, making an algorithm is not a work accomplished by a single person. It is more like a relay race. At first, some people establish the base systems for coding, such as Java or C++. Then developers build mansions on top of them, such as P5. Each building follows the basic instructions of the base but also creates its own structures. Those pre-established “buildings” allow us to “paint on them,” like what we did with classes. Finally, users live in the finished versions of the buildings. Step by step, algorithms establish an invisible system for our modern lives. That’s why very social elements such as bias can be spread by algorithms. Moreover, the internet and data are like the nerves and blood of the electronic generation. The more old-fashioned machines are transformed into internet versions, the stronger algorithms become.

Of course, coders are the people who make algorithms, like engineers for the physical world. They focus on their areas of expertise. Some are responsible for A.I., some take on internet safety, and so on. Maybe some of our classmates, or I myself, will be one of them.

Algorithm

Algorithms are very much present in our everyday lives, particularly in the consumerist sphere. Corporations like Facebook and Google track our activity across websites so that they can display the most relevant ads to us or show us the posts they think we’d find interesting on social media. And I think at this point it’s widely suspected that these companies have access to our microphones and webcams as well, so if you’re talking about buying dog toys, even if you never search up “dog toys,” Google will display dog toy ads. While I don’t think algorithms like these really run our lives, I do think they lure us into buying more stuff, like ads always have. Capitalism and all that. I don’t necessarily think they change the way we go about life.

However, Joy Buolamwini’s talk about coded bias brings another lens to this conversation. It’s almost funny how bias factors into so many things, even something as seemingly neutral as coding. Sometimes I forget that computer scientists have been historically white and male, and still make up the majority of the field. It’s probably because ITP and IMA are quite diverse, which I appreciate. The historical lack of diversity has taken its toll, though. Luckily, I think this can be a fairly simple wrong to right. After all, code can be easily shared and accessed, and in the case of machine learning, anyone can contribute to training sets. Initiatives like Joy’s Coded Gaze and Ari’s Afrotechtopia are truly important moving forward, as we think about the impact and reach of technology.

“Automatic If Statement”

Algorithms run our lives just like society runs our lives. Many people never stop to think how much society’s rules and ideals have changed the way we think and act, and are ingrained into our way of life. I think algorithms can work the same way. We never stop to think about the ingrained stereotypes that are present in our daily lives.

Snapchat is a very popular app in my generation, and a lot of people enjoy using the filters it offers. But I notice, and when I point it out to people, they notice it too, that the filters try to make your skin lighter. Your eye color lighter. And they feed into this type of white supremacist ideal that hasn’t really disappeared from our lives; it’s just changed its shape a bit. The belief that lighter skin and lighter eyes make you more beautiful is disguised in everything we see, so we don’t even notice that we feed into that belief as well. It’s in magazines and ads, and businesses know they can take advantage of the fact that people don’t question it anymore. It’s quite sad, really, when I see it and still feed into it. It’s because when you grow up in an environment that fosters a way of thinking, you can’t just turn it off when you’re older. Things don’t work like that. Impressions and influences don’t just turn off. And I think that’s quite scary.

Unfortunately, this same thing is related to Joy Buolamwini’s findings with the face recognition system she was experimenting with at the MIT Media Lab. The program just wouldn’t register her face. It would register everyone whose skin was lighter than hers, but not her. This ingrained thing, I don’t even have the right word for it, leaves us at a loss. Because like I said, it’s hard to question and change your way of thinking when the society you live in deems it okay, and even something to be celebrated in some respects.

The people who make things like Snapchat or the algorithms in facial recognition systems are, I have to guess, predominantly white. We live in a predominantly white society, and usually the one-percenters and the politicians in our society are predominantly white as well. This ultimately affects, and has affected, how our society functions when it comes to academia and progress.

Algorithm

When I wake up, the first thing I look at is Twitter, where they feed me my daily dose of “James”, “James” being the things I’ve liked or favorited over the years that have built my online presence. Twitter, Instagram, and YouTube have all built algorithms programmed to keep feeding me things I “like”. This isn’t healthy whatsoever, but since it works, no one is going to change it. All these apps do is feed me “good” things, or things I want to see. And that isn’t what life is about. In the real world we get doses of everything, which balances everything out.

I remember last year I saw a data visualization that showed how different sectors of Twitter almost never intersect, those sectors being liberals and conservatives. This type of isolation is the perfect breeding ground for extremism. When people with opposing views don’t talk to each other, they are never able to express their views and try to empathize or sympathize with the other group. And the sad part is that because this algorithm is successful, it’s going to be the baseline for other similar tech of the future, because that’s just how tech development works: start with an initial, somewhat successful technology and build from that. The problem is that those initial things aren’t always perfect and may contain some very serious flaws, like the one Joy Buolamwini discussed in her TED talk.

Algorithms

Algorithms are actually used in many ways in our lives, not only on the internet but also in daily life. Some of the uses, like the examples in the TEDx talk, include identifying people’s faces and making decisions. Obviously, it is humans who make algorithms. And humans, of course, have the ability to control and refine algorithms.

That’s the reason why I think the computer, or any other technology, can never entirely substitute for humans. Our human brains help the computer gain more functions, but without us, the computer cannot generate more functions by itself. Algorithms limit themselves; only when humans figure out a higher-level idea, concept, or innovation can they be refined and made better.

I think algorithmic bias will always exist. The only thing we can do is make this bias smaller and smaller. Machines always have problems; that is hard for us to change. But we can bring more people into the process of building an algorithm and test it in more environments to prevent some of the bias from occurring.

If a Tree Falls in a Forest

The famous question, “If a tree falls in a forest and no one is around to hear it, does it make a sound?” reminds us that our realities are dependent on our perceptions. Specifically, what we consider to be “real” or what we consider to have “happened” is entirely based on whether we could perceive or observe it.

Most technologies today utilize collaborative filtering algorithms, which collect data about the user and the user’s social network to filter content on their platforms. YouTube uses this to recommend content, while Instagram also uses it to prioritize content from a user’s social network (in other words, to prioritize posts by certain friends or followed accounts over others), completely removing the traditional chronological feed. However, if it is true that our reality is dependent on our perceptions, then these algorithms have a direct impact on our realities, since they control what we can perceive. Due to these algorithms, we are only able to see certain videos on YouTube, hear specific songs on Spotify, and see specific friends’ content on Instagram.
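For a sense of the mechanics, here is a minimal sketch of user-based collaborative filtering: recommend items liked by users whose ratings look similar to yours. The data and the similarity measure are invented for illustration; the platforms’ actual algorithms are far more complex and not public.

```python
# Minimal sketch of user-based collaborative filtering. All data invented.
ratings = {
    "me":     {"video_a": 1, "video_b": 1, "video_c": 0},
    "sam":    {"video_a": 1, "video_b": 1, "video_d": 1},
    "jordan": {"video_a": 0, "video_c": 1, "video_e": 1},
}

def similarity(u, v):
    """Fraction of shared items where two users agree (1 = like, 0 = dislike)."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    agree = sum(1 for i in shared if ratings[u][i] == ratings[v][i])
    return agree / len(shared)

def recommend(user):
    """Score unseen items by the similarity of the users who liked them."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, liked in ratings[other].items():
            if item not in ratings[user] and liked:
                scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("me"))  # items liked by the most similar users come first
```

Even in this toy version, you can see the filter-bubble effect: “me” will only ever be shown what people who already agree with “me” liked.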

Many critics have addressed this issue and pointed out that “tailored content” will directly impact people’s ability to accept new ideas and will limit their worldview. For example, it can be argued that our nation has become more politically polarized because of tailored content on Facebook, in which Democrats only see liberal arguments and news sources and Republicans only see conservative arguments and news sources. Since these users can only see specific arguments, they believe no other arguments exist or, in turn, are valid. Yet, as I thought about these algorithms more, I started to realize there are other effects that may be more personal.

Specifically, if Instagram is choosing which of our friends’ content we will see first, it is directly influencing our friendships. If I only see “Sam”, “Jordan”, and “Katie’s” content, I will only be able to keep up to date on their lives. My friendships with them will become stronger, and my friendships with others will become weaker. If the tree-falling-in-a-forest argument is true, then slowly the friends whose content I see less will no longer be a part of my reality and, in turn, I could forget about them. Thus, due to Instagram’s algorithm, I lose a significant amount of agency over my friendships. I am no longer maintaining friendships based on my own choices and personal relationships, but on who I see the most on social media.

Of course, these algorithms are not pushing certain content randomly, and unfortunately, the actual algorithms are not public knowledge. However, I can make a few educated guesses about how Instagram’s, for instance, operates. I am currently in the class Internet Famous, in which we discuss the reasons why people use social media as well as why certain people gain “fame on the internet”. It all simply comes down to the notion of alphas. Our brains are wired to enjoy looking at the faces and butts of alphas, since it provides us an opportunity to learn the traits necessary to gain capital. I enjoy stalking Kim Kardashian because I must view her as an alpha (most likely because of her immense social capital).

Algorithms are designed to keep the app popular and bring back users. If Instagram’s success is based on alphas posting content so that users will return to the app to look at the alphas, the algorithm most likely prioritizes alpha users’ content. It does this in one way by prioritizing the content that gets the most likes. Yet it must do it another way as well: by determining which friends are the alphas in your social circle and prioritizing their content on your feed.
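Since the real ranking is not public, here is only a toy sketch of the rule I am speculating about: score each post by its likes, boosted by a per-friend “alpha” weight. Every name and number here is hypothetical, not Instagram’s actual algorithm.

```python
# Toy sketch of the speculated ranking rule. Entirely hypothetical data.
posts = [
    {"author": "sam",    "likes": 240},
    {"author": "jordan", "likes": 35},
    {"author": "katie",  "likes": 410},
]

# Hypothetical per-friend "alpha" weight, e.g. inferred from past engagement.
alpha_score = {"sam": 0.9, "jordan": 0.3, "katie": 1.0}

def rank_feed(posts):
    """Order posts by likes, boosted by the author's assumed alpha weight."""
    return sorted(
        posts,
        key=lambda p: p["likes"] * (1 + alpha_score.get(p["author"], 0.5)),
        reverse=True,
    )

for p in rank_feed(posts):
    print(p["author"], p["likes"])  # katie, sam, jordan
```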

It is really creepy to think that Instagram has an algorithm to determine whether you are an alpha or not, but it comes as no surprise. There are already algorithms that can determine whether a user will become famous on the internet years before they do. However, the obvious question arises: are these users actually alphas, or is Instagram shaping them into alphas? If Instagram is creating alphas, what traits is it looking at, and how will society change with these people attaining the most power and capital?

Caution: Algorithms Ahead

“What would happen if we could rewrite our inner monologues, or even silence them completely on occasion?” I think this question is very valuable and taps into people’s authentic potential. Our minds follow algorithms. Free will isn’t an accurate description when we really look at how decisions are made: deterministically or randomly. Decisions are also made based on environmental situations and life experiences and events. All these factors distract us and disrupt the way we view our true selves and our abilities. The reading discussed an experiment in which a woman felt unstoppable once she put on a headset that delivered stimulation through electrodes, which was ultimately an answer to the question initially posed. Self-doubt and hesitation are products of the fear of judgment from others and of society’s toxic ways. These unconscious algorithms we allow ourselves to fall into stunt our growth. We grow up to act and behave in certain ways based on our surroundings. Some ways are great and benefit us, but others hold us back from reaching our full potential.

These negative thoughts translate into negative algorithms that our minds and bodies follow, and they run our lives. These algorithms also form biases that develop into discriminatory practices. Once we are conditioned to behave and think a certain way, it unconsciously leaks into our everyday lives. The people in power, the people in positions to control education, laws, and other big institutions, are the leading source of these algorithms. Since these algorithms are unconsciously embedded in us, they can be used as a weapon to secretly oppress others.

Beep Boop Beep

What is an algorithm? By the Oxford Dictionary’s definition, it is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.”

We make algorithms to control the systems we make; in a way, we can give them their own sort of “decision making” by giving them algorithms, since they follow the set of rules that we give them. This is helpful, but also something to be wary of as technology becomes more developed and advanced.
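To show what “decision making” as rule-following looks like, here is a tiny sketch; the device and thresholds are invented. The point is that the system never chooses anything on its own, it only executes the ifs we wrote.

```python
# A system's whole "decision making" is the set of rules we gave it.
# Device and thresholds are made up for illustration.
def thermostat(temp_c):
    if temp_c < 18:
        return "heat on"
    elif temp_c > 24:
        return "cooling on"
    else:
        return "idle"

print(thermostat(15))  # heat on
print(thermostat(21))  # idle
```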

Why should we be more wary? I don’t exactly believe our future is a cliché dystopia of technology that becomes sentient and takes over, but I do believe technology could end up acting in ways we don’t intend if we aren’t careful. A good example of this is seen in Joy Buolamwini’s TED Talk. She discusses a situation in which a machine uses numbers and code to identify situations (as, obviously, a machine does not have a consciousness and can’t make conscious or moral judgments). Because of this, a simple system that is supposed to read and recognize face movement noticed certain people who used it, but not her. We can’t exactly call the machine or the person who made the machine racist, but as I said before, this kind of user testing and inclusion needs to be considered before putting a project out in the open. In this case it was a simple face recognition system, but technology could go in many directions. Maybe we will need to consider the different languages a machine should understand, or how a blind or deaf person might interact with a machine.

I personally think it’s impossible for a machine to be universal, for anyone, anywhere, but I do believe that we should try our best to include any possible user in the demographic a machine is made for. Machines are automating many processes, and in the future it is possible that machines can carry out a lot of basic functions, so it is best we proceed with machines being prepared for any type of situation.

Automatic If Statement

The TED talk and the reading discuss something I have never thought about before. However, I believe this topic is very important. As Joy said in her talk, machine learning is becoming more and more relevant in people’s lives. If we have a biased system, someone will receive unequal treatment without anyone noticing, since it is the machine that makes the decision.

It seems that the coders did this unintentionally; however, this unintentional ignorance of Black people is worth considering. In the old days, when a group of people was treated unequally, we could always find a document to serve as proof of the mistreatment. We could find the author of the document to blame. Technology changes the game: coders are too far away and hard to trace. Usually it is a group of people who write this kind of machine learning code, which makes it harder to find an individual to blame.

Coders are making the decisions. One way to address this is to advocate for more inclusive code. The other way is to study hard and become a coder yourself.