All posts by Andri Kumar

Official Final Project Documentation

Here is a copy of the project:

https://drive.google.com/drive/folders/1KwPtWHd_sW3Uts3uCWJijKG2_r8ZjxUl?usp=sharing

Here is the project in action during the Sunday show.

While you cannot hear what the user is hearing, you can see what the user sees on the monitor before them and their reaction during the simulation.


Tools Used:

Illustrator, Photoshop, After Effects, Audition, p5.play, p5.js, PoseNet, Brackets, FreeSounds.com, and a lot of office hours

Process:

Creating a Prototype

The process to achieve the final result was surprisingly complicated. For my first step, I took free images of body parts found online (lungs, heart, and veins), made them transparent in Photoshop, and then animated them in Adobe After Effects.


I then created a simple subway animation that would be masked to reveal the user, creating a “background” of sorts. Since I was unsure if users would resonate with the subway background, I initially used free stock footage. I also created two text animations: one that provides users with context before the simulation and one that provides closure afterwards.


 

Once these first-draft animations of the body parts and background were created, I decided to continue working with After Effects to create a prototype of my project. I simply used “movie magic” to apply these animations to prerecorded webcam footage of myself. This allowed users to get a general understanding of the storyline that would be displayed. Finally, I used Audition and FreeSounds.com to create the audio. There are two pieces of audio: the subway noises, which play in the beginning to help add context, and the panic attack audio, which imitates the internal noises (rapid heartbeat, heavy breathing, scattered/panicky thoughts) that a user would experience during a panic attack.


Here is a link to the prototype:

 

User Testing with Prototype

I primarily used the prototype for user testing because it allowed me to make changes easily, quickly, and without the sunk cost that completely coding it first would have. Users primarily gave me feedback on the general storyline, specifically providing insights on the mini story that plays out when the user “experiences the panic attack” in the subway. Originally, the mini story thrust users into the situation without giving them time to understand the context and, in turn, the simulation. Thus, the user testing feedback helped fix issues with the overall pacing. User testing also provided insights on the wording of the text displayed before and after the “simulation”. Specifically, I discovered that the ending text was abrupt and did not provide the necessary closure that a user needed after experiencing such a sensory overload.

 

Creating the Final Project

After testing with almost 20 users over the course of a week, I finally reached a version of my project that I was content with. Now, all I had to do was bring it to life!

I started by working to get the webcam and body tracking running. Since I knew I was using large animation files, I opted to code in Brackets rather than the p5 web editor. For some reason, I experienced a strange number of problems with this setup: my computer was not properly capturing the video feed, and working locally made it difficult to debug.

Thus, I pivoted back to the p5 web editor. I used facial mapping code instead, mapping the lungs a set number of pixels away from the user’s chin. Then I added “filler” animations to create a general structure for my code. I knew that my animations, regardless of the file type, would be too large for the web editor. However, since I was having trouble debugging locally, I decided to put gifs and .mov files that were small enough for the web editor in the places where the real animations would eventually go. In other words, where the subway background would be, there was a random gif of the Earth. I just wanted to have the bones of my code down before I moved back to the local editor.

While the random Earth gif has since been replaced with the appropriate subway file, here is a link to that first web editor sketch: https://editor.p5js.org/andrikumar/sketches/BJuBq6cy4

During this time, I also recorded my own video footage of the subway and swapped out the stock footage I had been using for user testing.

With the bones created, I then transitioned back to the local editor so that I could bring in the correct files; yet, I still faced a lot of hiccups. Essentially, After Effects renders extremely large files that would not even work locally. However, these files needed to maintain their transparency, so they could not be compressed after rendering. After playing around for days with different file types and ways to maintain transparency, I finally discovered what to do. I decided to convert the subway background into 5 PNGs that would loop using p5.play. I turned the opening text, closing text, and lungs animation into gifs. While originally the lungs gradually increased in speed, I could only render 2 seconds of the animation to avoid having too large a file size. Now, the user sees rapid breathing throughout the simulation.

Once I successfully added the animations to my code, I used different functions and “addCue” to trigger the animations based on the audio, as well as to create the interactions.
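For reference, here is a minimal sketch of the addCue approach, assuming hypothetical file names and cue times (this is not the project's actual code):

```javascript
// A minimal sketch of the addCue idea; file names and cue times are assumptions.
let panicAudio;
let showLungs = false;
let showClosingText = false;

function preload() {
  panicAudio = loadSound('panic-audio.mp3'); // hypothetical asset
}

function setup() {
  createCanvas(640, 480);
  // Each cue fires its callback once the audio reaches the given time (seconds).
  panicAudio.addCue(5, () => { showLungs = true; });
  panicAudio.addCue(60, () => { showClosingText = true; });
  panicAudio.play();
}

function draw() {
  background(0);
  if (showLungs) {
    // the lung animation would be drawn here
  }
  if (showClosingText) {
    // the closing text would be drawn here
  }
}
```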

Here is what I ended up with:

https://drive.google.com/open?id=1rZYTTyByN53vB8ByfKPy5V_aUJrzemkv

You can find my code here, which you can open in a text editor to see the final work. I used Brackets!

Here is my code:


Final Changes for the Show

While presenting the project during class, I realized that facial mapping required an extremely well-lit room; otherwise, the code could not “see” the user’s chin. At first, I thought of simply switching the code to map from the eyes down, but if something is being mapped onto a user’s body, they are very likely to move around. If the code used the user’s eyes, then the animations would scatter everywhere. Thus, I needed to use something more stable.

As a result, I converted my code from facial mapping to PoseNet, mapping the animation of the body parts between the user’s shoulders. I am terrible at doing math and struggled to work out the mean of the shoulder positions (the midpoint between the two shoulder keypoints), but luckily I got it in the end!
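For reference, here is a minimal sketch of what that shoulder-midpoint mapping can look like with ml5's PoseNet; the asset name and the downward offset are assumptions, not the project's exact code:

```javascript
// A minimal shoulder-midpoint sketch using ml5 PoseNet.
let video, lungsImg, pose;

function preload() {
  lungsImg = loadImage('lungs.gif'); // hypothetical asset
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (poses) => {
    if (poses.length > 0) pose = poses[0].pose;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (pose) {
    // The midpoint (mean) of the two shoulder keypoints is steadier than any
    // single facial point.
    const midX = (pose.leftShoulder.x + pose.rightShoulder.x) / 2;
    const midY = (pose.leftShoulder.y + pose.rightShoulder.y) / 2;
    imageMode(CENTER);
    image(lungsImg, midX, midY + 80); // nudge down toward the chest
    imageMode(CORNER);
  }
}
```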

Since I also understood p5.play better, I decided to take 15 PNGs of the lung animation and animate them through p5.play rather than using the gif. I thought users would appreciate the higher-quality animation that p5.play offered. However, after a few rounds of A/B testing the gif animation against the p5.play animation, I discovered users preferred the gif. They thought the low quality created an “abstractness”, which allowed them to really be immersed in the story.
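For reference, the PNG-sequence version looked roughly like this in p5.play (v1); the frame file names and frame delay are assumptions:

```javascript
// A minimal p5.play (v1) sketch of the 15-frame PNG version that was A/B tested
// against the gif.
let lungsAnim;

function preload() {
  // p5.play fills in the numbered frames between the first and last file name.
  lungsAnim = loadAnimation('lungs01.png', 'lungs15.png');
}

function setup() {
  createCanvas(640, 480);
  lungsAnim.frameDelay = 6; // slow the loop down to roughly breathing pace
}

function draw() {
  background(0);
  animation(lungsAnim, width / 2, height / 2);
}
```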

 

Conclusion

I am honestly happy that I faced all the issues I did because, as a result, I got the opportunity to explore libraries, like p5.play, which we did not get to use in class. I am quite proud of my work, especially because my freshman year I failed my first coding class, and now I have coded this entire project! Of course, this project would not exist without the help my professors and friends provided me! It was really rewarding during the show to hear users talk to me after the simulation about how anxiety disorders have affected their lives. A lot of the users mentioned that they had a partner who had panic attacks, and while they had learned how to help their partner get through an attack, they never understood what had been going on. However, this experience gave them a glimpse of what it had been like for their partner and finally helped them understand the situation in a way that endless conversation simply could not. I really hope to keep developing this project further so that it can serve as an educational tool!

Here is a video of my work during the show:

What I will be working on in the future

After having numerous people try out my project at the show, I was able to get a lot of user feedback! While most of it was positive, many users explained that the conclusion could still use some work. They still felt shocked and were unsure what to do after the simulation. One participant even asked if I had a teddy bear they could hold. I have always struggled with making powerful conclusions and so I think this will be the perfect opportunity to work on this skill.

I also got the opportunity to show my work to a medical student who was going to become a psychiatrist. Ideally, I would love for my project to be used to educate medical professionals about mental illness. The student provided me some insights on how I could add to the project to appeal to medical professionals’ needs. For instance, he mentioned that I could have users experience the panic attack on the subway and then “go to the ER and hear from a doctor that it was just a panic attack”. Not only would this make for a better story arc, but it would help medical professionals understand the importance of empathizing with patients who have just had a panic attack. I think this was a really powerful insight and I plan on brainstorming around it a bit more!

Final Project Documentation

The attached link is the documentation of my project. Currently, there is stock footage used; however, I will be shooting my own footage later. There is also a sample spoken-word poem, which I will be swapping out for my own poem. I am currently waiting for my “voice actor” to record the poem so that I can input it.

This is still a work in progress and thus, I apologize for the various spelling errors and strange pacing.

This is a screen recording of what the user would see on the computer screen.

 

 

Prototyping update

Here is a link to the current prototype of my final: https://youtu.be/QbSQyhC7hJM

I added the opening text, changed the photo of me to actual webcam footage, and added a partially complete ending.

Areas of success:

  • I cropped the webcam footage to focus less on the entire body. This helped direct user attention and, in turn, offered a more impactful experience
  • I cropped the animation so that it extends beyond the screen borders, helping reduce the “sticker effect”
  • The audio is better than last week, but could be a bit more powerful

 

Issues discovered:

  • The transition between the opener (context) and the story (picking up the phone) is extremely weak and needs to be worked on
  • If users are wearing headphones, it will be confusing to pick up the phone: an issue that needs to be resolved
  • A smoother transition is needed from the panic attack into the closure
  • The veins are now animated to grow onto the body; however, it is not noticeable

PoseNet is not my friend

Since I was going to use PoseNet for my final project, I decided that this would be the perfect opportunity to test out applying my animation to a user’s torso.

This was so FRUSTRATING for a variety of reasons, but I did gain a lot of insights regarding my final project.

https://editor.p5js.org/andrikumar/sketches/Hy-ZX5FCX

Here is my code above, using PoseNet (a very broken and terrible sketch).

Here is what went wrong:

  1. The White Border of Death: For some strange reason, a white border appears around my gif even though there is no white border in the actual file. I spent multiple hours trying to get rid of it, but it would not go away. I tried to use a video instead; however, the file was too large. I then tried to use Processing, but I kept receiving an error. Unsure of what to do from here, I had to leave the border in.
  2. A Gif that is not a Gif: In past p5 projects I have uploaded gifs and they looped automatically; however, this gif does not loop! It instead plays once and then disappears. Not only does that not make sense (since the code is always looping), but I cannot add any functions to make it loop. I tried loop(), and I also created a function that would check whether the gif was playing and replay it if it was not. Yet, nothing seemed to work. (See the sketch after this list for one possible workaround.)
  3. Transparency???: I created my animation/gif to be slightly transparent so that it would feel less like a sticker. However, that transparency was lost once it was uploaded into p5. I can still see that the gif is semi-transparent in applications like Preview; however, it is completely solid when it appears in p5.
  4. Audio: Apparently only one audio file can play at a time in p5, which is not the worst thing ever, but it certainly is frustrating when I simply want to prototype different sounds.
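One possible workaround for the looping and transparency problems, sketched here as a guess rather than a confirmed fix: let the browser animate the gif as a DOM image layered over the canvas instead of drawing it with image(). The file name, size, and placement below are assumptions.

```javascript
// A sketch of the DOM-image workaround: the browser loops an <img> gif natively
// and keeps its alpha channel.
let video;
let lungsGif;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  lungsGif = createImg('lungs.gif', 'breathing lungs');
  lungsGif.size(200, 200);
}

function draw() {
  image(video, 0, 0, width, height);
  // position() is relative to the page, so this assumes the canvas sits at the
  // top-left of the window.
  lungsGif.position(width / 2 - 100, height / 2 - 100);
}
```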

Insights:

  1. I need to redesign my animation based on the limitations I discovered during this assignment (specifically the limitations of p5 and PoseNet)
  2. I NEED to figure out how to use Processing, since it may allow me to use a .mov rather than a gif, which could alleviate my issues
  3. Perhaps I need to move away from a p5 digital mirror and consider other options for presenting the lung animation as a whole

 

progress on final project

Since I pivoted my idea, I realize that I am slightly behind schedule, but here is what I have completed so far.

I found images of lungs online, photoshopped them to avoid copyright issues, and then brought them into After Effects. In After Effects, I was able to create an animation of lungs during a panic attack. The best part is that the animation is transparent, so I can easily apply it to a PoseNet sketch. Here is the animation:

gif

Here is where everything fell apart. While I am able to export the animation as a transparent video, the file is too large for p5. When I try to compress the file, it is no longer transparent. Furthermore, I am missing something in my PoseNet code and as a result, I cannot test with it. I keep getting an error that states “ml5 is not defined”.
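For reference, an “ml5 is not defined” error usually means the ml5 library script is never loaded in index.html, so the sketch references a global that does not exist. A guess at the fix (the exact CDN URLs and versions are illustrative, not the project's actual files):

```html
<!-- index.html: load the libraries before the sketch (URLs/versions are illustrative) -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.7.3/p5.min.js"></script>
<script src="https://unpkg.com/ml5@latest/dist/ml5.min.js"></script>
<script src="sketch.js"></script>
```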

Unsure what to do, I decided to test out the lungs animation by putting an image of myself behind it. I showed possible users this very strange prototype to get insights on what other design elements I need. Here is a gif of the “prototype” (ignore the weird photo of me).

gif2

What I discovered was that the lung design needed to be simplified. In particular, the part that connects to the throat needs to be removed, since users found themselves fixated on it and its placement. Furthermore, more needed to be added to the design to connect the lungs to the rest of the body so they do not just feel “glued on” like a sticker.

I am currently playing around with veins as a way to tie in the rest of the user’s body:


If a Tree Falls in a Forest

The famous question, “If a tree falls in a forest and no one is around to hear it, does it make a sound?” reminds us that our realities are dependent on our perceptions. Specifically, what we consider to be “real” or what we consider to have “happened” is entirely based on whether we could perceive/observe it.

Most technologies today utilize collaborative filtering algorithms, which collect data about the user and the user’s social network to filter content on their platforms. YouTube uses this to recommend content, while Instagram also uses it to prioritize content from a user’s social network (in other words, to prioritize posts by certain friends/followed accounts over others), completely removing the traditional chronological feed. However, if it is true that our reality is dependent on our perceptions, then these algorithms have a direct impact on our realities, since they control what we can perceive. Due to these algorithms, we are only able to see certain videos on YouTube, hear specific songs on Spotify, and see specific friends’ content on Instagram.

Many critics have addressed this issue and pointed out that “tailored content” will directly impact people’s ability to accept new ideas and limit their world view. For example, it can be argued that our nation has become more politically polarized because of tailored content on Facebook, in which Democrats only see liberal arguments and news sources and Republicans only see conservative arguments and news sources. Since these users can only see specific arguments, they believe no other arguments exist and, in turn, that only theirs are valid. Yet, as I thought about these algorithms more, I started to realize there are other effects that may be more personal.

Specifically, if Instagram is choosing which of our friends’ content we will see first, it is directly influencing our friendships. If I only see “Sam”, “Jordan”, and “Katie’s” content, I will only be able to keep up to date on their lives. My friendships with them will become stronger and my friendships with others will become weaker. If the tree-falling-in-a-forest argument is true, then slowly the friends whose content I see less will no longer be a part of my reality and, in turn, I could forget about them. Thus, due to Instagram’s algorithm, I lose a significant amount of agency over my friendships. I am no longer maintaining friendships based on my own choices and personal relationships, but on who I see the most on social media.

Of course, these algorithms are not pushing certain content randomly, and unfortunately, the actual algorithms are not public knowledge. However, I can make a few educated guesses on how Instagram’s, for instance, operates. I am currently in the class Internet Famous, in which we discuss the reasons people use social media as well as why certain people gain “fame on the internet”. It all simply comes down to the notion of alphas. Our brains are wired to enjoy looking at the faces and butts of alphas, since it provides us an opportunity to learn the traits needed to gain capital. I enjoy stalking Kim Kardashian because I must view her as an alpha (most likely because of her immense social capital).

Algorithms are designed to keep the app popular and bring back users. If Instagram’s success is based on alphas posting content so that users will return to the app to look at the alphas, the algorithm most likely prioritizes “alpha users’ content”. It does that in one way by prioritizing the content that gets the most likes. Yet, it must do it another way as well: by determining which friends are the alphas in your social circle and prioritizing their content on your feed.

It is really creepy to think that Instagram has an algorithm to determine whether you are an alpha or not, but it comes as no surprise. There are already algorithms that can determine whether a user will become famous on the internet years before they do. However, the obvious question arises: are these users actually alphas, or is Instagram shaping them into alphas? If Instagram is creating alphas, what are the traits it is looking at, and how will society change with these people attaining the most power and capital?

Current Progress: Prototyping

After my materials finally arrived // over a week later >:( //, I was able to finally build a prototype of my lungs. I decided to use red latex balloons to create a “lung” since I thought the latex material would be the easiest to “blow up”, having the least resistance to my fan. I purchased a 12V fan off Amazon, and while many customers claimed it to be strong, I knew that it might not be as strong as I needed it to be. Turns out, I was right! It was not strong enough at all.

HA

With the help of Professor David, I was able to learn how to wire my 12V fan to another, more powerful power supply. However, even then, the fan was not powerful enough to inflate or deflate my prototype.

Thus, I decided to pivot my idea and take this physical concept into AR. Earlier this week, Ellen had asked me if I was incorporating my love for webcams and gifs into my final project, and I began to wonder why I had not. I started to brainstorm ways my project could translate into a webcam form and thought of a digital mirror in which a user could see their “lungs” and “heart”. When the user did a task, they could see how these organs biologically change due to anxiety (heart rate increases, etc.).

I decided to then prototype what that digital mirror could look like. Since I am going home for Thanksgiving, I knew I would have a lot of time on my 6-hour flights (12 hours total) to create the animation that would be displayed.

I wanted to learn exactly what the animation needed to look like. I used p5 to write a simple sketch that mapped on a heart and then a lung. While p5 claims that it can “tint” images to be transparent, the tint option does not work on gifs. Luckily, there are many online resources that can turn a gif transparent.
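For reference, this is the tint() approach that does work on static images (the asset name below is an assumption); for the gifs, the transparency had to be baked in outside of p5 instead:

```javascript
// A minimal tint() sketch for a static image.
let heartImg;

function preload() {
  heartImg = loadImage('heart.png'); // hypothetical asset
}

function setup() {
  createCanvas(640, 480);
}

function draw() {
  background(220);
  tint(255, 127);           // full color at roughly 50% opacity
  image(heartImg, 100, 100);
  noTint();                 // reset so later drawing is unaffected
}
```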

Here is my output (i cropped out my face cause it was not very flattering haha)


(above is a gif, click to play)

 

I learned that the organs need to be connected in some way that creates an implicit storyline; otherwise, it feels weird. For instance, the heart just seems strange, and perhaps this is simply because it is reflected. Nonetheless, steps should be taken to create a more complete look. I also know that the design must be high quality and look as close to a real “organ” as possible. The heart seems to be more impactful than the lungs, which were lower quality and less realistic.

I tested it out with a user and they provided similar feedback. I will create an animation on my flight on Tuesday and be able to test during the break with more users!

Here is my block diagram for my new concept:

userflow

Sticker Generator

This homework assignment was only completed because of Ellen’s help. Thank you!

I learned that I could easily utilize Giphy’s API. Since I am perhaps the biggest fan of the implementation of gifs in p5, I knew I had to use it.

I was able to use another student’s project that uses Giphy’s API as a reference (https://itp.nyu.edu/classes/cc-f18-tth/author/yc3355/). I also had a lot of assistance and guidance from Ellen during office hours, who walked me through the API and helped me formulate my code.

At the end of office hours, I had a general sketch that took a user input and produced a transparent gif, which was placed over the webcam output to look like a “sticker”.

I wanted to create a code in which a user could type in their name and a gif associated with their name could pop up onto their webcam screen. Thus, I set out to add upon the code that I had created.

However, I discovered that the API search is limited to words that Giphy recognizes. For instance, on Giphy.com I can search “Andri” and receive a gif that the algorithm believes to be close enough to the search term. However, when inputting “Andri” into the API URL, Giphy breaks and does not return a gif.

Thus, I had to pivot my idea to something simpler. I thought I could ask simpler questions that would typically result in an answer that is a common noun. For example, I could ask “What is your favorite color?” and if the user said blue, they could see a blue gif on their webcam. However, when I tested it out, I discovered that at times the sticker seemed completely random. For instance, when I input “blue”, Giphy output stickers relating to sadness rather than the color.

While I was frustrated, I realized that I had to simplify my idea and simply ask for “things”, because “things” would most likely not have double meanings or break the code.

Thus, I now simply have a sticker generator. The user can input their favorite thing and then see a sticker related to that “favorite thing” pop up onto their webcam feed. For some reason, it is a really slow system and the sticker takes forever to load, which is even more frustrating! However, this experience was not bad. I loved getting to learn a bit more about Giphy since it is a platform I use every day!
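For reference, here is a minimal sketch of the lookup flow. The endpoint shape follows Giphy's public v1 sticker search API; the API key placeholder, sizes, and positioning are assumptions rather than the project's actual code.

```javascript
// A minimal Giphy sticker lookup over the webcam; YOUR_KEY and the layout
// values are placeholders.
let video;
let sticker;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  const input = createInput('');
  const button = createButton('get sticker');
  button.mousePressed(() => {
    const url = 'https://api.giphy.com/v1/stickers/search?api_key=YOUR_KEY' +
                '&q=' + encodeURIComponent(input.value()) + '&limit=1';
    loadJSON(url, (json) => {
      if (json.data && json.data.length > 0) {
        if (sticker) sticker.remove();       // replace any previous sticker
        sticker = createImg(json.data[0].images.original.url, 'sticker');
        sticker.size(200, 200);
        sticker.position(220, 140);          // roughly centered over the feed
      }
    });
  });
}

function draw() {
  image(video, 0, 0, width, height);
}
```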

 

fullscreen:

https://editor.p5js.org/andrikumar/full/HynsKK26X

 

edit:

https://editor.p5js.org/andrikumar/sketches/HynsKK26X

 

 

Similar Project + My Own

Similar Projects

It was a bit difficult to find school appropriate examples of similar works, since many of them are focused on creating body parts for specific kinds of robots.

But luckily, and strangely enough, a lot of people have tried to make fake lungs.

Here is someone trying to make one with an Arduino:

https://www.researchgate.net/figure/The-set-of-artificial-organs-and-body-parts-present-in-the-manikin-From-top-to-bottom_fig14_313687401

Here is someone selling an entire fake lung kit:

https://www.boundtree.com/Training-%26-Simulation/Anatomical-Models/BioQuest-Inflatable-Lung-Kit/p/13277

I also looked at artists to see how they represented anxiety.

This one was very focused on the physical aspects of anxiety, which I think aligns with what my focus is as well:

http://portfolios.risd.edu/gallery/69751543/Anxiety-Installation

However, I noticed most artists tend not to depict anxiety with literal, human-related imagery. It makes me a bit nervous because I wonder if there is a reason so many people choose to avoid representing anxiety in a more literal sense.

Current Progress

Unfortunately, I am running a bit behind on my project, but I found DC fans which I think I can use to inflate the lungs. Also, as I noted above, there are a surprising number of tutorials and blog posts on creating lungs with Arduinos. I have also ordered red latex balloons to build the lungs. These should arrive Monday night, and I will build a mini prototype of the lungs to test the DC fans with when they finally arrive.

DC fans:

https://www.amazon.com/gp/cart/view.html/ref=nav_crt_ewc_hd

More Help:

https://forum.arduino.cc/index.php?topic=61940.0

Face Filters

Context

I became really intrigued by p5’s ability to track and map a face, so I decided to explore it a bit more for this assignment.

I started by first trying to understand the limitations of the facial map as well as a Mac’s webcam. Using the code that we made in lab, I put a gif where we had originally created an ellipse. The ellipse had been placed on a user’s nose (point 62). However, what I discovered was that gifs and images, even if they are transparent PNGs, have an invisible square/rectangular border around them. When placed on a spot on a user’s face in p5, the image will be positioned based on the top-left corner of that invisible border, not its center.

https://editor.p5js.org/andrikumar/sketches/Hkng-e-TX

https://editor.p5js.org/andrikumar/full/Hkng-e-TX

As a result, when I placed the image onto the nose, the top-left corner of the image sat on the nose. I played with different face points until I was happy with the location (I settled upon point 19, which is the tip of a user’s left eyebrow). However, I realized that when a user gets really close to the camera, the image/gif does not resize, and it becomes obvious that the image is being mapped onto the eyebrow and not the user’s face as a whole.
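For reference, here is a minimal sketch of the anchoring issue and one way around it, built on the clmtrackr setup from the class lab (the tracker calls and pModel come from that lab code, and the filter file name is an assumption): image() draws from the top-left corner by default, so switching to imageMode(CENTER), or subtracting half the image size, centers the filter on the chosen face point.

```javascript
// A minimal face-point anchoring sketch; the tracker setup follows the class lab
// (pModel is loaded by the clmtrackr model script) and 'filter.png' is assumed.
let capture, ctracker, filterImg;

function preload() {
  filterImg = loadImage('filter.png');
}

function setup() {
  createCanvas(640, 480);
  capture = createCapture(VIDEO);
  capture.size(640, 480);
  capture.hide();
  ctracker = new clm.tracker();
  ctracker.init(pModel);
  ctracker.start(capture.elt);
}

function draw() {
  image(capture, 0, 0, width, height);
  const positions = ctracker.getCurrentPosition();
  if (positions && positions.length > 0) {
    const pt = positions[19];   // tip of the left eyebrow, as in the post
    imageMode(CENTER);          // anchor by the image's center, not its corner
    image(filterImg, pt[0], pt[1]);
    imageMode(CORNER);
  }
}
```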

I tried to look up any information regarding resizing but unfortunately, I could not find anything and thus, went to Ellen’s office hours Friday morning. I spent the rest of the time leading up to the office hours conceptualizing my idea.

My Idea:

I was always really inspired by Shel Silverstein’s poetic drawing “The Thinker of Tender Thoughts” (see below). Silverstein’s work as a whole was a large part of my elementary education. All my teachers would read his poems to us during our reading breaks, and the students would be fascinated by the magical stories he would weave.

poem

I wanted to amplify that experience a bit more and help elementary kids really engage with Silverstein’s works. His poems have such powerful messages and it would be amazing if these messages stuck with children for the rest of their lives. I decided that I could create a “face filter” to help children really see themselves as part of the poems and hopefully, apply the lessons the poems share in their own lives. Since “The Thinker of Tender Thoughts” has such a special place in my heart, I decided to make my face filter based off that piece.

Finalizing The Code! Thank You Ellen

In office hours, I learned that unfortunately, it is difficult to get images to resize along with the face with a Mac’s webcam since it cannot pick up depth well.

While that was frustrating, Ellen helped me learn how to code a few more interactions into my assignment so that I could really engage children who would use it.

-I wanted kids to be able to click the filter and have something be added to the experience, so she taught me how to make the “filter” a button

-She also helped me understand how the facial mapping worked, specifically the coordinate system it is based on, which helped me better place the “filter”

End Result

full screen: https://editor.p5js.org/andrikumar/full/SJlVvINTm

code: https://editor.p5js.org/andrikumar/sketches/SJlVvINTm

 

Globalization and Invasive Species

Globalization is the process in which organizations gain international influence and power. It is a process that is as old as time. Even in the earliest stages of the world’s economy, there was international trading in which certain cultures and their items carried larger value and influence than others.

In a more recent example, McDonald’s is perhaps the archetype and face of globalization. Now operating in 119 countries, McDonald’s holds immense international influence, molding the cultures it enters to align with its consumption- and money-driven philosophies. Thus, to answer the question, “Has technology caused globalization?”, I would argue that globalization existed long before the technology that we have now. Globalization was a key war strategy since it provides economic and social dominance as well as power. The Romans used globalization, and America has used it ever since World War One.

However, my answer to the following question on whether globalization is good or bad is a bit more complex. In the past, globalization resulted in the bleaching of the cultures and societies that became overpowered by the “globalized” organization/business/country. For instance, we can see American values now integrated in most parts of the world due to the mass globalization of American businesses. Thus, the biggest consequence of globalization was the loss of cultural and philosophical diversity.

Yet, the globalization of today’s technology companies, such as Amazon, is a bit more worrisome. In the past, when a business globalized, it would take years of planning. The business would need to find land in the other country, understand the new market, develop ways to pivot its product, and truly engage the new customers. We see this in the classic example of McDonald’s, which, when entering the Indian market, chose to add curry sauces to appeal to new customers. However, today, technology companies have access to endless amounts of data and thus can easily enter a new economy and engage customers. A process that would take years can now be done in months, and Amazon is the perfect example of rapid globalization.

I have been thinking a lot about society and comparing it with nature. I read an interesting article (one that I cannot find right now but will add to the blog post when I do) that argued that if we rethought society and transformed it to behave more like nature, we would be more efficient, produce no waste, be more environmentally friendly, and be more resilient to change. As I thought about globalization, I began to think of it as similar to invasive species (without the negative associations of that word). In nature, typically due to human actions, a new species will be introduced into an environment. This new species disrupts the ecosystem and threatens the lives of the animals in the environment. Most of the time, this new species (or invasive species) will disrupt the food chain, resulting in various animals in the environment starving or dying and, in turn, the ecosystem decaying as a whole. However, there are times when a new species can enter a new environment and not collapse the ecosystem. In these cases, the new species grows at a slow pace within the environment and, as a result, the ecosystem has enough time to adapt to its presence. In the end, the ecosystem is able to stay healthy and alive, even with the new addition.

Returning back to globalization, in the past, when companies globalized, the globalization occurred slowly and carefully. As a result, the countries these companies entered were able to adapt around the new company. However, if the same laws apply to globalization as they do invasive species and ecosystems, then societies can be in real danger now that companies are entering rapidly.

The speed at which technology companies are growing is dangerous. Since they are growing so fast, it is hard for the public to monitor them. The article that discussed Amazon warehouses’ work environments was written in May; however, if you Google the words “Amazon employees” now, you will be bombarded with articles about the recent pay raise (see attached photo). The safety of the work environments has already become old news. These companies are growing so fast that it is impossible for the average person to keep track of exactly what is occurring behind the UI they see.

Globalization is a process that has been going on long before Amazon was even conceptualized, and while it has had negative consequences in the past, globalization never occurred at the rate it is happening at with technology companies. We should be wary of the rapid globalization of technology because, in this context, speed is power: if the technology companies are fast enough, they can overpower us all.

google

Flies and Bacteria

Ideation

After completing a range of production projects, I realized that I enjoy projects that tend to have a real world context the most. While I really loved the “hacked” interaction piece I was able to make, I realized I enjoyed creating it because it started a conversation about data and privacy. Of course, I find creating aesthetically beautiful p5 projects exciting, but I find that the more meaningful and impactful projects are the most rewarding. Thus, my goal for this week was to create an interaction that could serve a purpose.

Process

I began by watching the Coding Train series on arrays and objects. I loved that an array of objects allows users to interact with a range of objects at once, expanding their interaction abilities. Here is a project I made to practice the skills I was learning (https://editor.p5js.org/andrikumar/sketches/ByKCeTVnX). It is called “Let there be light”. Users can push rays of light out by dragging their mouse.

I decided to continue watching the series and ended up watching the video on Objects and Images (https://www.youtube.com/watch?v=i2C1hrJMwz0).

The way the images shook reminded me of how bugs move. Inspired by the movement, I realized I wanted to create a game that utilized an array of images of bugs. After some debate, I settled upon flies.

My initial idea was to create an array of flies that would be randomly placed onto a canvas. The user’s goal was to identify the king fly as quickly as possible. The user would do so by clicking on a fly to see if it was the king fly. However, as I reflected on the idea more, I realized that it served no real-world purpose. I also realized that individually clicking each fly would cause the user fatigue and decrease the entertainment value.

Iteration

I knew that I wanted a game that had a “real world” application and offered a less tiring interaction behavior for the user. I brainstormed more ideas and decided I would create a game in which the user would have to use a fly swatter to swat flies away from a Thanksgiving feast. The goal of the game would be to protect the food. I decided that the user could “swat” by dragging their mouse over the screen.

I discovered how to remove objects by following another Coding Train video (https://www.youtube.com/watch?v=tA_ZgruFF9k).

I decided to remove the cursor and replace it with a fly swatter to flesh the storyline out more.
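A compressed sketch of how that interaction can fit together (plain object literals and assumed image names, not the project's exact code):

```javascript
// A minimal sketch of the swatting interaction: an array of fly objects, removed
// with splice() when the dragged mouse passes near them, with the cursor hidden
// and a swatter image drawn in its place.
let flies = [];
let flyImg, swatterImg;

function preload() {
  flyImg = loadImage('fly.png');         // hypothetical assets
  swatterImg = loadImage('swatter.png');
}

function setup() {
  createCanvas(640, 480);
  noCursor(); // hide the cursor so the swatter image can replace it
  for (let i = 0; i < 30; i++) {
    flies.push({ x: random(width), y: random(height) });
  }
}

function draw() {
  background(245);
  for (const f of flies) {
    // small jitter so the flies "buzz" in place, like the Coding Train example
    f.x += random(-2, 2);
    f.y += random(-2, 2);
    image(flyImg, f.x, f.y, 30, 30);
  }
  image(swatterImg, mouseX, mouseY, 60, 60);
}

function mouseDragged() {
  // iterate backwards so splice() does not skip elements
  for (let i = flies.length - 1; i >= 0; i--) {
    if (dist(mouseX, mouseY, flies[i].x, flies[i].y) < 30) {
      flies.splice(i, 1);
    }
  }
}
```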

Here is the game: https://editor.p5js.org/andrikumar/full/SkH8HaEnQ

Here is the code: https://editor.p5js.org/andrikumar/sketches/SkH8HaEnQ

I played the game for some time, and while it was relaxing, I kept feeling like it did not truly have a real-world application. Thus, I decided to iterate upon my design and change the storyline. The new storyline is that the user has to brush away the bacteria on a set of teeth with a toothbrush. The toothbrush is the cursor, and the bacteria can be removed by dragging the mouse. I thought this game could easily be implemented in a children’s dentist office to educate and excite children about dental hygiene.

Here is the game:

https://editor.p5js.org/andrikumar/full/H1iEqeHnX

Here is the code:

https://editor.p5js.org/andrikumar/sketches/H1iEqeHnX

I decided to reduce the number of bacteria to be less than the number of flies, so that the user could better see the teeth and understand the context of the game.

 

Final Project Ideas

I’ll be adding sketches momentarily. I wanted to finish my documentation for the production project which still is not where I would like it to be.

  1. Human Sculpture

In this idea, I want to create a sculpture of a non-gendered human figure out of LEDs. My vision is that the way the LEDs light up will vary based on a user’s interaction with the sculpture. I do not necessarily want people touching the sculpture, but I would want, for instance, to have a microphone attached somewhere, so that if a user/audience member yelled at the “human”, the LEDs would turn red to represent anger/pain. Or if an audience member walked away from the sculpture, the LEDs would turn blue and slowly dim away to represent sadness. I would structure this LED human in a manner similar to how light-up Christmas decorations are structured.

sketch1christmas

 

2. Disjointed human

After speaking with Zoe Wells, a student in the other section, I learned that we can use 3D printers to create realistic-looking lungs, skin, and eyes. I have always been fascinated with the development of human-like robots, such as Sophia. My idea is to create a pair of lips, a pair of eyes, and some pieces of skin. I would then attach servos to them so they have movement. I would lay them on a table separate from one another and have them behave as if they were on a real face, talking and blinking. The idea would be to discuss what we define as a human’s face. I have conducted research in the past on facial recognition and processing (how we are able to tell the difference between human faces, or between human faces and animal faces, etc.). It would be interesting to create an art piece that furthers the discussions and insights made in my research.

sketch2

3. Headband

In one of the first projects, I made a headband that those with paralysis could wear. The headband allowed them to simply raise their eyebrows to turn on an LED. The purpose of the headband was so that they could alert their nurses that they needed medical attention without requiring too much movement. I want to explore this concept a bit more and make a functional prototype of it. Before I begin brainstorming, I would first need to conduct a great deal of research into paralysis and medical devices.

sketch3

 

4. Anxiety

Due to the lack of mental health education and awareness, our society struggles to understand the effects of mental health issues. While mental health issues affect us emotionally, they can also impact our physical bodies. Returning to the idea I spoke with Zoe about, I would want to 3D print a pair of lungs or a stomach. I would then use servos and motors to animate the lungs and stomach to represent how they begin to behave during panic attacks: the stomach tightens and fills with acid, and the lungs constrict. It would be fascinating to represent mental health in a more physical-health context.

sketch4

I wanted to be a mean love guru, but being evil is hard

I decided to recreate an arcade game that I loved playing in middle school for this week’s assignment, but putting a small cynical twist on it in honor of this TRICK or treat season.

Here is a version of the arcade game I am referring to. arcade game

Essentially, a user would place their hand upon one of the two hand prints on the console, and their partner would place their hand upon the other. The game would then “calculate” how strong their love was for one another.

There were other versions of this game that were one-player, in which the player would put their hand down on a similar hand print to determine if their crush liked them back or how hot they were.

Overrun by puberty, I loved games like this in middle school. I mean, for just 50 cents I could see if Jordan from 3rd period and I were really going to get married! (Of course, we did not haha).

I wanted to create a version of the one-player game in which a user could determine how their current partner really feels about them. They would place their hand upon a box with a hand print and discover the answer on the computer screen.

Here is the box:

box

In the middle of the “hand print” is a light sensor. Once a hand covered the light sensor, the computer screen would show, through p5, how much that user’s lover actually cares about them. The box was easy to make; however, the following parts of the process are where things became complicated.

Before I begin discussing my process, I want to provide an overview of what my p5 code needed to have.

  1. To take in the light sensor data
  2. Recognize that the light sensor was covered
  3. Have an array of possible sentences (fortunes)
  4. Display a random sentence when the light sensor is covered

Getting the light sensor data was not a problem, and recognizing when the sensor was covered was not difficult either. The problem emerged when it came to displaying sentences from the array.

When I finally put together my initial code, I realized that when I covered the light sensor, p5 was receiving a range of numbers. It was considered covered anywhere from 30 all the way to 100. The issue was that the value would bounce around within this range even though my hand was still and covering the light sensor. The light sensor must have been detecting small amounts of light. As a result, the code would continuously loop and display different sentences (fortunes). To better understand what I am referring to, watch the video below.

In order to tackle this issue, I decided that I needed to collapse the range of numbers associated with the light sensor being covered into a single value.

My initial thought was to declare the range in p5 in my if statement, but that did not work. Then I decided to try dividing up the range, and that did not work either. I realized I needed to convert “not covered” into one number and “covered” into another number before the data even entered p5. Specifically, I needed to take the analog data and make it digital. I went into my Arduino code and turned the reading into a boolean. This allowed me to have the desired covered/not-covered data input into p5. Here is the Arduino code:

arduino code

Now that I was able to simplify how p5 was reading the light sensor, I decided to go back to getting the random fortunes to display.
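For reference, here is a minimal sketch of the p5 side, using the p5.serialport library from the labs. The port name and fortunes are placeholders, and it assumes the Arduino sends '0' while the sensor is covered (matching the post's mapping of "no light" to false); it is not the project's actual code.

```javascript
// A minimal serial + fortune sketch (p5.serialport).
let serial;
let covered = false;
let fortune = '';
const fortunes = [
  'They are only with you for the free food.',
  'They still think about their ex.',
  'It was never love, just convenient timing.'
]; // placeholder fortunes

function setup() {
  createCanvas(600, 400);
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbmodem14101'); // placeholder port name
  serial.on('data', serialEvent);
}

function serialEvent() {
  const inString = serial.readLine().trim();
  if (inString.length === 0) return;
  const nowCovered = inString === '0'; // Arduino sends a boolean, not raw readings
  if (nowCovered && !covered) {
    // Pick one fortune per cover, so the text stops cycling every frame.
    fortune = random(fortunes);
  }
  covered = nowCovered;
}

function draw() {
  background(0);
  fill(255);
  textAlign(CENTER, CENTER);
  textSize(24);
  text(covered ? fortune : 'Place your hand on the print...', width / 2, height / 2);
}
```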

However, my own luck and fortune were not by my side. p5 crashed, meaning I had to redo my code (which was not that heartbreaking). However, for some reason, when I redid the code, my serial controller no longer wanted to stay on the serial port I was using and kept switching back to my Bluetooth headphones. As a result, I kept getting an error that my serialEvent function was not working properly.

This resulted in a quick hour-long cry of frustration. Luckily, I was able to simply restart my computer, delete my serial controller app, redownload a new one, and get it working again. Yet, I needed a break, and since this was day two of working on this project, I was very emotionally exhausted.

After the break, I was able to finally get it working!

Yet, as I kept playing around with it and testing with different graphics for the display, something terrible happened.

Basically, my light sensor started to think it was always receiving light. When I checked the serial monitor, it always showed “1”. Watch the video to see what I mean (also, sorry for my gross room in the background):

I tried everything. I took the light sensor out of the box and tried covering it directly; nothing worked. At this point, I am assuming that either this is karma for trying to make people sad about love or (more likely) I got my light sensor from a knock-off Arduino kit, so it may simply not be the best quality. I could have added a new light sensor, but it was past midnight and the shop was closed, so I would have been unable to solder it.

Update: I spent all night thinking about what went wrong, and perhaps it was because my Arduino code made “no light” false. That meant that if there was ANY source of light, it would report true. However, maybe the logic should be flipped so that any amount of darkness reads as covered. It is currently 6am, but I’ll retry this after class. I really want to make this box work and I have faith that I can be an evil love guru!

current fullscreen: https://editor.p5js.org/andrikumar/full/r11wX9wjQ

current edit: https://editor.p5js.org/andrikumar/sketches/r11wX9wjQ

current arduino code (now that I am iterating upon it):

new code

mock website

Last week, I was really embarrassed by the lack of skill I could show with p5 on my production assignment. For some reason, the minute I have to think about equations and numbers, my mind goes blank (it’s also been three years since I have worked with equations at this level). I realized that if I started to think about each aspect as a layer, the way I do in the Adobe Suite, I might be better able to work through my mental roadblocks.

Last Thursday, I lost access to my Facebook and, in turn, Instagram. My accounts were “blocked” because Facebook identified them as hacked. Facebook was most likely trying to shut down the accounts that were compromised in its hack scandal. While I was fortunately able to regain access to my accounts, I wanted to create something inspired by hacking. I also wanted to pay tribute to Yung Jake and his work titled Embedded, which was the piece that inspired me to become interested in interactive media. (Here is the link to Embedded)

I am still struggling with my project so it is not complete, but I plan to finish by tonight. Yet, I wanted to document my work up till now.

I was able to understand how the “user input” feature works in class, so I decided to first build what I wanted my project to feel like. Basically, a user will search for something, as if on Google. Once they have typed it in and hit enter, there will be a glitch and the user will experience their computer being hacked. I want the experience to be sudden and unexpected, and for the user to feel out of control. I think it is difficult to understand how scary hacking is until you actually experience it, and I hope that this project will at least capture a portion of that.

Currently I have built out everything besides the input. I will just iterate upon the code to adapt to the input later, however, I wanted to just get the parts I did not have experience with completed first.

full screen: https://editor.p5js.org/full/BJ1kwC097

edit: https://editor.p5js.org/andrikumar/sketches/BJ1kwC097

This blog post will be updated at the end of the night.

 

UPDATE:  I have finally completed the project to the best of  my abilities. Unfortunately, this is not exactly what I originally wanted.

My Struggles (the thorns):

-I wanted to incorporate the user’s input in the glitch/hack; however, once the “blue glitch” gifs were added, I was no longer able to reveal text in the keyPressed function. For some reason, p5 would only show images, gifs, and videos in that function. However, if I commented out the gifs, then it would show the text. I do not know if it is a bug in p5 or something I was doing wrong (see the sketch after this list for one possible explanation)

-I wanted to have multiple layers of interactions. For example, after the glitch occurs, if the user double-clicked their mouse, more glitch effects would happen; however, p5 would not output this.

-I also wanted some of the virus gifs to follow the mouse; however, for some reason, positioning them with mouseX and mouseY was not working.

-I also wanted to end the loop at a certain time and make the screen go black; however, noLoop() and exit() did not seem to take effect either. The only thing I could do was add more gifs/videos/images.
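One possible explanation for the text issue, offered as a guess rather than a confirmed diagnosis: anything drawn only once inside keyPressed() gets painted over on the very next frame by the gifs drawn in draw(). Setting a flag in keyPressed() and drawing the text after the gifs inside draw() keeps it on top. A minimal sketch, with assumed asset names:

```javascript
// A minimal draw-order sketch: flip a flag in keyPressed(), then draw the text
// last in draw() so the gifs cannot cover it. 'blue-glitch.gif' is assumed.
let glitchGif;
let hacked = false;
let typed = '';

function preload() {
  glitchGif = loadImage('blue-glitch.gif');
}

function setup() {
  createCanvas(640, 480);
}

function draw() {
  background(255);
  if (hacked) {
    image(glitchGif, 0, 0, width, height);
    // Drawn after the gif every frame, so it stays visible.
    fill(255, 0, 0);
    textSize(32);
    text('searching: ' + typed, 40, height - 40);
  }
}

function keyPressed() {
  if (keyCode === ENTER) {
    hacked = true;        // trigger the glitch; do not draw here
  } else if (key.length === 1) {
    typed += key;         // collect the user's "search" text
  }
}
```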

What I am proud of (rose):

-I was able to get practice using a variety of p5 functions and importing different files

-I was able to get through a lot of emotional and mental roadblocks by thinking of my code as layers rather than random equations, etc.

 

In Conclusion,

p5 is really cool and I thought I would be able to successfully create an interactive mock website. However, I clearly need to understand a bit more about debugging and the way p5 functions work before I continue using it. I really have no idea how the functions are working in relation to one another and I think once I do, I will understand why my gifs kept overriding every other function.

Data, Consumers, and Choice

Since we are able to completely dictate the nature of our blog posts, I will be focusing my blog post on topics that were not referenced in the provided question.

I was a bit startled when I read the article. I knew that data had become the new currency in technology and some would even claim that it would become the new “oil”; however, I never really conceptualized what the influence of data gathering and analysis looks like in terms of businesses.

The reading ends with a narrative about Belgium’s Grimbergen and its need to rebrand. The company decided to use data rather than a design team to rebrand the packaging of its beer. By creating a mathematical model, the company was able to discover the best attributes the branding needed to possess to target consumers. The brand design the model created saw a 3.5% increase in approval rating.

This kind of data-driven targeted marketing occurs in America as well. Not too long ago, Target used a similar marketing tactic. Target discovered that women tend to remain faithful to a few brands in their lives, but are more likely to change to a new brand when experiencing a life-changing event (such as pregnancy). Thus, in order to onboard new lifelong customers, Target decided to create a model which would use an individual’s search history and purchase history to determine if they recently became pregnant. For example, if a woman started to purchase magnesium and other prenatal vitamins, the model would identify her as newly pregnant. If determined to be pregnant, Target would begin to send the individual coupons for diapers, cribs, etc. Of course, when this marketing tactic became known to the public, Target was heavily criticized, and as a result, Target decided to hide its targeted marketing by still sending targeted coupons but disguising them among other “general” coupons (like for towels and hand soap).

While it is now fairly known that marketing and ads are targeted to consumers through data, the idea that soon products will be branded based off consumer data is a bit more horrifying. I believe it is because I would like to think that while marketing may manipulate me to purchase a product, I get to determine the quality and thus, get to “choose” whether or not I will purchase it again. Tortilla chips are a great example of this. Sure, certain branding and marketing may sway us to purchase certain chips, but in the end, if they do not meet our taste needs, we will simply purchase another brand.

However, if a product was branded to target me as a consumer, it could blind me to the existence of other products. As a result, I would never reflect on the quality and thus, lose my agency as a consumer. I think I can best explain this “product blindness” through a narrative:

I have always purchased pink Venus disposable razors. I just knew that these were the razors that were best for women. But why would I think that?

razor

The razors are packaged to put that exact idea in my head. By designing the razors to be pink and visually pleasing, Venus targets consumers who are socialized feminine and, in turn, most likely women. Yet, their most important branding decision was to name their company Venus. The name Venus has strong ties to femininity and empowerment. Its competitors have names such as “Bic” and “Schick”, names that feel masculine. Thus, the decision for me to purchase Venus was already made for me. I am a woman who likes to feel strong and good about myself. As I scan through the products in the aisle, only one product makes me feel that way: Venus razors. However, I have never once considered whether another product would be better in quality. I have never thought to pick up a Bic razor, and in all honesty, I somehow created a kind of classism with razors and viewed Bic razors as inferior. The strangest and maybe eeriest part is that, as you can see, nowhere on the package does it say these razors are meant for women.

I have blindly consumed Venus razors for a little less than a decade. I would like to think that I would be able to try out a product, see the quality, and determine if I will continue to purchase it, but as the Venus razor narrative shows, that is simply not true. Our consumption choices can be predetermined by companies. They can choose not only to target us as consumers but also to subtly blind us to other options. One could argue that I am a woman and, as Target discovered, am less likely to change my consumption choices, so it makes sense why I have never left Venus. However, I have changed the kind of tortilla chips I like every few months based on my taste preferences at the time, and my favorite clothing brands change with my current fashion icons.

Thus, I do think data analytics is powerful and has amazing potential for innovation. However, I am worried about it in the context of business, and I do think there needs to be regulation in place to prevent mass corporate manipulation. What will the future of consumption be? What will the effects of this kind of predetermined consumption be on culture and overall society? It is scary to ponder those questions, and I hope that a more critical discussion starts around data analytics than the one the article offered, which was “digitize everything”, because clearly that is not the best idea.

Devolution of Communication

It seems that communication systems evolve in relation to communities, specifically when the needs of communities change so does the communication systems.

As discussed in the reading, when written language was first created, it was in the form of pictograms. At that time, communities needed to simply record thoughts and what was discussed verbally. Pictograms were simple depictions of conversations and thus, met this need perfectly. However, soon communities needed a way to communicate ideas with one another. As a result, there was a dire need for a communication system that was accessible to all. Since pictograms relied on cultural symbols, pictograms could only be interpreted by those with the same cultural understandings. Thus, this new societal need sprouted the generation of a new communication system: the alphabet. 

Similarly, mankind first used tallies because initially communities simply needed a way to keep count. Yet, when communities began to practice more complex economics and people needed to perform arithmetic, tallies quickly became outdated and numerals evolved.

Today, society’s needs have evolved once again. While in the past societies craved accessibility, it seems that today communities desire efficiency. We are at the point in history where we are trying to truly enhance the human experience and distill the power of mankind. The general consensus is that we can achieve this by increasing efficiency, and as a result, every product and service is dedicated to reducing pain points in our daily lives, allowing us to work more quickly and efficiently.

Thus, if I have correctly identified the underlying trend, that communication systems evolve in step with societal needs, I believe communication will evolve to be more efficient. We already see signs of this today. For example, emojis and bitmojis gained exponential success because they allowed people to communicate more quickly. Rather than having to type out a statement that conveys compassion and support, people can now send a red heart emoji. Similarly, on Facebook, users used to be able either to like a friend’s post or to write a comment. However, sometimes a friend would post about a loved one who passed away, and “liking” their post seemed strange. Thus, Facebook evolved so that users no longer have to comment with a long heartfelt message, but can instead select an emoji reaction that is appropriate.

Surprisingly enough, this development in communication is almost backwards, because we are returning to a form of communication that relies on pictograms. However, the issue pictograms faced was that they were inaccessible in the sense that complex concepts would be depicted through a combination of pictograms, but not every reader was aware of the rules that brought meaning to the combination. Yet, due to technology, these “rules” can spread quickly. For instance, we all know that the praying hands emoji is not a hands clapping emoji, because technology allowed these “rules” and understandings to spread.

As I reflect on the topic, it seems like communication as a whole is undergoing a devolution. When texting was first introduced, it was considered a revolutionary new way to communicate quickly; now, however, texting is not quick and efficient enough. We want to type so fast that we have shortened many of our words into acronyms and push companies to develop an omnipotent autocorrect algorithm so that we do not have to sacrifice accuracy for speed. Many users have even abandoned texting entirely and instead use voice messages and "memojis" to communicate. These users argue that it is easier to hit record, speak, and send than to type a message. These "voice message" users are growing in number, and it would seem that the future of communication is simply speaking, a complete devolution of communication systems.

I am not sure what avenue for communication will come next. It seems that the most efficient methods of communication are the ones we are born with: our voices and our facial expressions (which we now perform through emojis). If I were to guess, I would suggest that communication will most likely evolve through products that enhance our biological, natural forms of communication. Emojis serve this exact purpose: they amplify our use of facial expressions by capturing them in animated, easily understandable images. Perhaps the next form of communication will allow us to send voice messages at different playback speeds. For example, a user could send a message to a friend, and the friend could choose the speed to listen to it at, letting them comprehend the message more efficiently. Or perhaps there will be a service integrated into a voice messaging system that edits out the "umms" and "likes" we say when we speak, relieving our voice messages of the burden these time-consuming filler words impose.

Using p5 for the first time

I decided to make two little animations for this week so that I could really get familiar with working with p5. I have had some bad experiences with coding in the past so I wanted to get more comfortable this time before we began larger projects!

 

Here is my first animation (it's supposed to be an abstract sunrise, very badly done of course):

https://editor.p5js.org/andrikumar/sketches/SkexA_LcX

link to just the animation:

https://editor.p5js.org/full/r18BOc7qm

 

Here is the second animation:

https://editor.p5js.org/andrikumar/sketches/r18BOc7qm

link to just the animation:

https://editor.p5js.org/full/SkexA_LcX

 

Here is the code for the first animation:

 

 

Our Goldberg Piece (Aproova and I :) )

We created a machine that, when given a light input, rolls a ball down a slide.

We started with a drawing:


At first we were going to have a pulley system, but then we decided that a flag would not be a sufficient output. Instead, we created a system in which, when the motor turned, a little flap on its end would swing back and then forward to knock the ball ahead.

Here is the sketch!


And then we started testing it out!

First, to make sure we had the light sensor plugged in correctly, we tested it by seeing if we could use it to light up an LED.

After we got the light sensor working, we hooked it up so that when the light sensor was activated, the motor would turn.
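A rough sketch of that logic, in Arduino code, looks something like the block below. The pin numbers and the light threshold are placeholders rather than our exact values, and it assumes the motor is switched through a transistor on a digital pin rather than driven directly.

// Light-triggered motor: when the photoresistor sees enough light,
// pulse the motor so the flap swings and knocks the ball onto the slide.
// Pin numbers and threshold are assumptions, not our exact values.
const int sensorPin = A0;   // photoresistor in a voltage divider
const int motorPin  = 9;    // motor switched through a transistor
const int threshold = 600;  // "light detected" cutoff, tuned by hand

void setup() {
  pinMode(motorPin, OUTPUT);
  Serial.begin(9600);       // printing readings helps pick a threshold
}

void loop() {
  int lightLevel = analogRead(sensorPin);
  Serial.println(lightLevel);

  if (lightLevel > threshold) {
    digitalWrite(motorPin, HIGH);  // swing the flap forward
    delay(500);
    digitalWrite(motorPin, LOW);   // let it fall back
  }
}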


After that, we created a little flap attached to the motor so that something would come into contact with the ball. At the same time, we worked on creating a slide so that the ball would have something to roll down. After putting all our parts together, we were able to create our piece of the Rube Goldberg Machine!

Here is our slide and the platform it would later be attached to.


And here is our video of our fun chicken slide in action!

The Body

A few weeks ago, in a previous reading, we discussed how our hands contribute to our understanding of the world around us. This week we are expanding on that discussion and evaluating our entire body as an information synthesizer. While this week's reading focused on examples like embodied cognition, there are many other examples in which our bodies directly provide us knowledge about our surrounding environment. Our bodies are constantly taking in information to create a spatial map of our surroundings. In order to read my writing, you need to look at a screen (a computer or smartphone); yet, as you look upon this screen, you can be confident that the space behind you has not changed.

How can you be confident about that? Your eyes are fixated on the screen, and you literally cannot see what is behind you. Our bodies are constantly collecting sensory information that our minds evaluate to create a map of our surroundings. Since the sensory information your body is collecting has not dramatically changed, your mind can assume that your surroundings, even the things behind you, have not either.

By collecting sensory information, our bodies allow us to navigate our world more efficiently. We do not need to stop before crossing the street to determine if the sidewalk is stable. Instead, if the way our feet hit the ground does not align with our mental definition of a "stable" surface (one that does not wobble or move), our feet will inform our brains that the road may not be safe. As I write about the relationship between the mind and the body, specifically how the body influences the mind, it becomes more and more evident that designers must take into account how our bodies gather information when creating powerful interaction designs. For the past two years I studied cognitive science and simply assumed that what I know about memory is all I could apply to my UX designs. While it is important to recognize the limitations of a user's memory, a designer must also be aware that reading and listening are not the only ways a user gathers information about a product.

Designers must also focus on the influence of emotion. Perhaps the most fascinating piece of information I have learned from my background in cognitive science is that memories are dependent on emotions. Unfortunately, your memory of an experience is never accurate. Instead, every time you remember an experience, you are actually just remembering how you recalled the experience last time. Thus, whatever emotions you were feeling about the experience wash over the memory, greatly altering it. For example, childbirth is one of the most painful experiences a woman will endure in her lifetime. However, during childbirth and continuously afterward, the new mother's brain is flooded with dopamine. Why would the brain do that? It is because, as the mother starts recalling this extremely painful experience, the extra dopamine ensures that she remembers it less negatively. As a result, the new mother no longer remembers childbirth as extremely painful, or even traumatic, and may now be willing to have another child.

By taking emotions into account, designers can be more aware of, and better control, the experiences their users are having with their designs. For instance, if a designer wants the user to continue using a product, they can add things that spike a user's dopamine level (like a pleasant sound or a funny meme) during the experience. They can also tailor the notifications a user receives about a product (such as an app) to convey positive emotions, further ensuring that the user remembers the product positively.

One of the most valuable things I learned while taking a course on design thinking is that designers need to truly listen to users and at times even read between the lines, because users may not be able to vocalize what they are experiencing or need. It would be a shock if a user could vocalize what their body or mind was experiencing as they used a product, and thus, by having knowledge of the mind and body, designers can truly understand their users.

I tried to have a disco but I failed

I began this week's assignment with a vision of creating a disco ball. My plan was to attach an RGB LED to a light sensor so that when it was dark (when the light sensor was covered), the RGB LED would light up and change colors, alternating between green, red, and blue.

I began by creating a system in which a regular LED would turn on when it was dark (that is, when the light sensor was covered). With help from sources online, specifically https://www.instructables.com/id/Arduino-LDR-With-LED/, I was able to create the required code.
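The logic boiled down to something like this minimal sketch (the pin numbers and the darkness threshold here are assumptions rather than the exact values I used):

// Dark-activated LED: turn the LED on when the photoresistor is covered.
const int ldrPin = A0;           // photoresistor + resistor voltage divider
const int ledPin = 13;           // regular LED
const int darkThreshold = 300;   // below this reading counts as "dark"

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int reading = analogRead(ldrPin);   // 0-1023
  if (reading < darkThreshold) {
    digitalWrite(ledPin, HIGH);       // sensor covered, light the LED
  } else {
    digitalWrite(ledPin, LOW);
  }
}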

I was fortunately successful in developing this simple dark-activated LED. Here is a video of my LED working!

(I just realized how terrible this documentation is, and I have learned my lesson! My hand is forming a "cup" to cover the light sensor.)

Confident, I decided to create my little disco ball!

I added an RGB LED, rewiring my board so that each prong had a resistor and was attached to an individual output on the Arduino. I then added to my code, establishing the RGB pins and stating that when the light sensor detects that it is dark, the RGB LED will light up red, then delay, then green, then delay, and then blue. I thought this would create a disco ball effect.
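In sketch form, that logic looks roughly like the block below. The pins, threshold, and delay times are placeholders rather than my exact values, and it assumes a common-cathode RGB LED (each color pin goes HIGH to light that color).

// "Disco" logic: when the photoresistor reads dark, cycle the RGB LED
// through red, green, and blue. Pins, threshold, and delays are assumed.
const int ldrPin   = A0;
const int redPin   = 9;
const int greenPin = 10;
const int bluePin  = 11;
const int darkThreshold = 300;

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
}

void showColor(int r, int g, int b) {
  digitalWrite(redPin, r);
  digitalWrite(greenPin, g);
  digitalWrite(bluePin, b);
}

void loop() {
  if (analogRead(ldrPin) < darkThreshold) {   // covered = "dark"
    showColor(HIGH, LOW, LOW);  delay(200);   // red
    showColor(LOW, HIGH, LOW);  delay(200);   // green
    showColor(LOW, LOW, HIGH);  delay(200);   // blue
  } else {
    showColor(LOW, LOW, LOW);                 // off when it is bright
  }
}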

Here is a photo of the new setup (up close).


Here is the code I made! I was inspired by Sama’s code! So thank you Sama!


However, when I plugged in my Arduino, the RGB LED lit up for a brief moment and then did not light up again. It was then that I discovered I had accidentally shorted my RGB LED.

Without another RGB LED, I realized my disco dreams had to come to an end for now. I will try again later this week and hopefully have more success! Nevertheless, I was able to properly incorporate digital and analog I/O, which is a bit of a victory.


Morality more like MoraliME

Strangely enough, I have been questioning the notion of morality for quite some time. I have been trying to understand whether there are actual moral rules we should follow, or if “morality” is simply a set of lies we told each other to prevent us from harming one another.

I arrived at this strange place because I realized that what we define as "moral" differs from person to person. Specifically, what one person considers morally wrong can easily be twisted to be seen as morally justified. For instance, when someone assaults another person, the assaulter always believes their actions were justified and morally "okay," while the person assaulted believes the assault is morally wrong. In a less extreme example, when you lie to someone, even though lying would be considered morally wrong, you can justify why it was okay to lie in that instance and make the act seem morally acceptable. Of course, the argument could be made that there is an overall "moral good" and those who do wrong and justify it are not actually justifying the act, but merely lying to themselves. Yet I would counter that argument by noting that in nature, there is no right or wrong. When one animal kills another for survival, that act is not morally wrong or right; it is simply an act. If morality does not exist in the natural world, it must be a man-made (human-made) phenomenon.

Thus, I believe that morality may just be a set of rules we created to establish civilized behavior and protect ourselves. Overall, things that are considered moral are simply things that we would accept being done to ourselves. Hence, I make my moral decisions based on whether I would want that act done to me. Of course, what we accept as okay to be done to us depends on the culture we are in.

Since morality is dependent on the person as well as the culture(s) that formed the person, it seems difficult for machines to make "moral" decisions. A machine makes "decisions" based on what an engineer codes it to do. Thus, the machine would rely solely on the moral rules of that engineer; however, one person's morality cannot translate into the morality of the larger world.

One might argue that there could be a set of moral rules everyone agrees to that an engineer could code into a machine, and thus the machine could be responsible solely for those moral decisions. For instance, no one would claim that murdering a child is a morally good action. However, machines are never asked to make decisions in contexts that simple. The context in which machines will be asked to make "moral decisions" is when they are autonomous, such as self-driving cars, and when they are autonomous, the moral decisions they make will not be that simple.

Imagine this: a self-driving car is driving along a dark, winding path. Out of nowhere, a child comes running across the path. The car could either hit the child or swerve into the side of the road, most likely killing its occupant.

The self-driving car is now faced with a moral decision. Perhaps the engineer coded the car to always save the occupant’s life, regardless of the situation. This would mean the car would freely run over the child.

I would assume that if we were in that situation and were actually driving the vehicle, most of us would swerve to avoid hitting the child, because it would seem like the morally right thing to do. However, that is simply because our culture, or at least my culture, prioritizes the lives of children. I am sure there are some cultures that would say it is morally right to save yourself and would agree with the car's "decision." Yet, if the child on the path were your own child, then what would be morally "right"?

It quickly becomes clear that morality is quite complex and conditional. I am sure there is an extremely intelligent engineer who could code a machine to make moral decisions that most of the world, regardless of culture, could agree with. However, there will be moments, like the one I used as an example, in which the morality of the car's decisions becomes less clear.

It is not that I think machines could never make moral decisions; it is that it may simply be too complicated to let them. Morality is simply a way we put checks upon one another; it is a human-created concept meant to simplify the human experience, and perhaps it is best left in the hands of humans.

Technology is separate from the notion of morality. Thus, I do not think it is possible for technology to help us with moral decision making. While technology is a tool, it cannot directly produce the culture and experiences that we use to define our morality.

A Switch that Raises Eyebrows (in a good way)

I wanted to challenge myself to create a new "no hands" switch, so I put my switch design from last week aside. My goal this week was to create a switch that I knew could directly serve a meaningful purpose. I began by brainstorming contexts in which a user would need to turn on a blinking LED without their hands.

I realized that people who are paralyzed may want to indicate to their nurses that they need assistance but are unable to, since they cannot move their hands to press the assistance button that patients typically use. However, many patients experiencing paralysis can still move some of their facial muscles. Thus, I set out to create a switch that a user could turn on by raising their eyebrows.

I attached a rectangular piece of copper to one wire (wire A). I then created a cross out of two copper strips on my forehead and attached another wire to my head (wire B). Finally, I attached wire A to my eyebrow. When I raise my eyebrow, the copper rectangle attached to wire A touches the copper "X" on my forehead, closing the switch and turning on the LED. I coded the LED to flash so that it could get the nurse's attention. In future iterations, I would want to offer the option for the LED to blink only once, so that those with paralysis could communicate in Morse code through the LED.

I followed the schematic we used in lab but replaced the pushbutton switch with the eyebrow switch I created.
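In rough form, the code looked something like the sketch below. The pin numbers are assumptions, and it assumes the copper contacts are wired the same way as the lab's pushbutton, with a pull-down resistor on the input pin.

// Eyebrow switch: when the copper contacts touch, flash the LED
// to get the nurse's attention. Pin numbers are assumed.
const int switchPin = 2;    // copper contacts stand in for the button
const int ledPin    = 13;   // call light

void setup() {
  pinMode(switchPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (digitalRead(switchPin) == HIGH) {   // eyebrow raised, switch closed
    digitalWrite(ledPin, HIGH);           // flash on
    delay(250);
    digitalWrite(ledPin, LOW);            // flash off
    delay(250);
  } else {
    digitalWrite(ledPin, LOW);
  }
}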

The only struggle I had was that I did not have medical tape to secure the wire to my head. I was very warm (embarrassingly enough), and regular tape was not properly holding the wire down. As a result, I had to hold the wire down with my hand. The switch itself did not require any hands; this was simply the result of not having medical tape, which has the proper adhesive for sticking things to skin.

Here is a video of my switch!

Unfortunately, the faces I make while raising my eyebrows are extremely embarrassing, so I did not want to post the video on YouTube. As a result, to access the video, you need to download it. I am sorry!

Movie on 9-14-18 at 2.49 PM

 

Here is my code!


Thoughts on Computational Media and Universal Machines

In its simplest definition, computational media is merely the newer form of traditional media. Traditional media, such as radio, are not technically complicated and produce fairly simple experiences, such as listening to music. Computational media are more advanced than traditional media: they are built on more complex technology and offer experiences that are far more elaborate. Unlike traditional media, which offer simple sense-based experiences, computational media offer users experiences that can bend reality. Thus, what distinguishes the two is the complexity of both the technology and the experiences produced.

A story is a narrative that describes a course of actions, while an interaction is an action. While these two concepts may seem different, their definitions show that they are actually intertwined. In order to begin and have a story, we must first have some sort of interaction.

An interaction triggers the beginning of a new story, because it establishes the action which leads to a course of other actions. For instance, when we touch our lock screen to open our phones, we begin the story of how we are going to use our phone that day. When we open our emails, we begin the story of how we respond to the emails we receive and the news that awaits us. When we slide our fingers across the touch screen to answer a phone call, we begin the story of the phone call we took on that day and the events that will occur as a result of that call.

I think the user is the one who both produces and consumes these media forms. After our last discussion, I began to realize that while the technology we have access to is fascinating and truly advances our society, at the end of the day it is still just a tool. Technology, at least for now, does not have a mind of its own. We choose what it should produce, and we filter through its productions to determine what we want to consume.

A universal machine would be a machine that offers a universal set of services. There would be nothing that this machine could not provide the user; it would be the technological equivalent of a Swiss Army knife. Yet I do not believe it is possible for any machine to be truly universal, because there will always be a need that a machine cannot meet. A machine may be able to provide information, like the Google Search Engine, but it will not be able to execute a task, such as calling an Uber. Perhaps voice assistants are the closest thing we have to a universal machine, since they can provide information and complete tasks; however, they are still limited by the fact that they cannot provide any visual information.

A Switch that Requires No Hands

Since the challenge was to create a switch that could be activated without hands, I wanted to further push myself and see if I could create a switch that could be activated with another state of matter, specifically gas. I wanted to see if I could get the switch to be turned on by using air in some way. Below is my schematic.

My original idea was to create a pinwheel that would have a piece of copper on one of its wings. When air was blown onto the pinwheel, it would rotate the wing with the copper tape. The goal was for enough air to be blown to rotate the copper wing until it touched another piece of copper attached to the rest of the circuit, in turn lighting up the LED. Here is a picture of the very flimsy pinwheel and the diagram I made for it.

 

However, after I constructed the pinwheel with a paper plate and hot glue, I realized that it was difficult to control the amount the pinwheel would rotate. Furthermore, the pinwheel was very flimsy and did not always rotate accurately. I then decided to pivot my design and create a wind tunnel.

The user would blow through the tunnel. At the end of the tunnel was a copper flap, which would fly upwards when air was blown through. The flap would then hit another copper panel taped to another surface, and that panel was connected to the breadboard. This can be seen in the photograph and videos below.

In the end, I was able to successfully blow through the tunnel and light up the LED. I feel like a similar system could be implemented in various medical devices, particularly those created for lung function tests.

I also soldered something! Here is an image of the two wires I was able to solder together! It may seem like a small task, but I am very proud of myself!

 

Who

While I view my computer as merely an object, my computer does not see me as merely a user. For it to see me as merely a user, my relationship with my computer would have to consist of simply inputting random data. However, the data I input into my computer is far from random; instead, it is made up of valuable pieces of information about myself.

It may seem like my late night Google searches, anxiously crafted iMessages, and Netflix binges are individually trivial; yet each input actually provides a tremendous amount of information about my habits and my interests. My computer knows exactly which show is my favorite, which episode I love the most, what I wanted to say in a text message, what I actually sent, and what keeps me up at 3 am. The inputs my computer receives could not be generated by just any user; they are specific to me. As a result, my computer is seeing not just a user, but me.

However, the one thing my computer cannot see is how I choose to put that information together in society. While I share information with my computer freely, when I share about myself in society, I filter the information, carefully picking the right adjectives and order to create a cohesive image of who I am for others. Yet my computer never gets to see that neatly crafted story. Instead, it merely gets access to a bunch of in-depth but nevertheless erratic pieces of information about me.

It is as if my computer has extremely high-resolution images of parts of me, but no real way to see the full picture and how they all fit together.

Thus, computers will never see the discrepancies in how information is shared by humans. They will never understand that humans apply different levels of secrecy to the data they hold about themselves depending on the context they are in.

If I were to imagine a more inclusive computer, it would need to understand privacy and its differing levels. Currently, our computers could "guess" that the things we search in private browsing mode and hide under discreet file names are private, but the levels of privacy beyond that are unfathomable to them. A computer does not know that it is okay for your friend to know that you watched 8 hours of Law and Order SVU last night, but not your employer.