Official Final Project Documentation

Here is a copy of the project:

https://drive.google.com/drive/folders/1KwPtWHd_sW3Uts3uCWJijKG2_r8ZjxUl?usp=sharing

Here is the project in action during the Sunday show.

While you cannot hear what the user is hearing, you can see what the user sees on the monitor before them and their reaction during the simulation.


Tools Used:

Illustrator, Photoshop, AfterEffects, Audition, P5.play, P5.js, PoseNet, Brackets, FreeSounds.com, and a lot of office hours

Process:

Creating a Prototype

The process to achieve the final result was surprisingly complicated. For my first step, I took free images of body parts found online (lungs, heart, and veins), made them transparent in Photoshop, and then animated them in Adobe After Effects.


I then created a simple subway animation that would be masked to reveal the user, creating a “background” of sorts. Since I was unsure whether users would resonate with the subway setting, I initially used free stock footage. I also created two text animations: one that gives users context before the simulation and one that provides closure afterwards.


 

Once these first-draft animations of the body parts and background were created, I decided to continue working in After Effects to create a prototype of my project. I simply used “movie magic” to apply these animations to prerecorded webcam footage of myself. This allowed users to get a general understanding of the storyline that would be displayed. Finally, I used Audition and FreeSounds.com to create the audio. There are two pieces of audio: the subway noises, which play in the beginning to help add context, and the panic attack audio, which imitates the internal noises (rapid heartbeat, heavy breathing, scattered/panicky thoughts) that a user would experience during a panic attack.


Here is a link to the prototype:

 

User Testing with Prototype

I primarily used the prototype for user testing because it allowed me to make changes easily, quickly, and without the sunk cost that completely coding the project first would have carried. Users primarily gave me feedback on the general storyline, specifically the mini story that unfolds when the user “experiences the panic attack” in the subway. Originally, the mini story thrust users into the situation without giving them time to understand the context and, in turn, the simulation. Thus, the user testing feedback helped fix issues with the overall pacing. User testing also provided insights on the wording of the text displayed before and after the “simulation”. Specifically, I discovered that the ending text was abrupt and did not provide the closure that a user needed after experiencing such a sensory overload.

 

Creating the final project

After testing with almost 20 users over the course of a week, I finally reached a version of my project that I was content with. Now, all I had to do was bring it to life!

I started by working to get the webcam and body tracking running. Since I knew I was using large animation files, I opted to code locally in Brackets rather than in the p5.js web editor. For some reason, I experienced a strange number of problems at this stage: my computer was not properly capturing the video feed, and working locally made it difficult to debug.

Thus, I pivoted back to the web editor. I used facial mapping code instead, mapping the lungs x pixels away from the user’s chin. Then I added “filler” animations to create a general structure for my code. I knew that my animations, regardless of file type, would be too large for the web editor. However, since I was having trouble debugging locally, I placed gifs and .mov files that were small enough for the web editor where the real animations would eventually go. In other words, where the subway background would be was a random gif of the earth. I just wanted to have the bones of my code down before I moved back to the local editor.
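The chin-offset idea above can be sketched roughly as follows. This is not the original code: it assumes a face-tracking library that exposes a chin keypoint as an `{x, y}` object, and the `LUNG_OFFSET` constant is a made-up placeholder for the real pixel offset.

```javascript
// Hypothetical constant: how far below the chin to anchor the lungs.
const LUNG_OFFSET = 80;

// Given a chin keypoint from a face-tracking library, return where
// the lungs animation should be drawn.
function lungPosition(chin) {
  // Center the lungs horizontally on the chin, offset vertically.
  return { x: chin.x, y: chin.y + LUNG_OFFSET };
}

// In a p5.js draw loop this might be used like:
// const pos = lungPosition(chinKeypoint);
// image(lungsGif, pos.x - lungsGif.width / 2, pos.y);
```

Because the animation follows the tracked point every frame, the lungs stay pinned to the user even as they shift in front of the camera.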

While the random earth gif has since been replaced with the appropriate subway file, here is a link to that first web editor sketch: https://editor.p5js.org/andrikumar/sketches/BJuBq6cy4

During this time, I also recorded my own video footage of the subway and substituted it for the stock footage I had been using for user testing.

With the bones created, I then transitioned back to Brackets so that I could input the correct files; yet, I still faced a lot of hiccups. Essentially, After Effects renders extremely large files that would not work even locally. However, these files needed to maintain their transparency, so they could not be compressed after rendering. After playing around for days with different file types and ways to maintain transparency, I finally discovered what to do. I converted the subway background into 5 PNGs that loop using p5.play, and I turned the pre-text, post-text, and lungs animations into gifs. While originally the lungs gradually increased in speed, I could only render 2 seconds of the animation without the file size becoming too large, so the user now sees rapid breathing throughout the simulation.
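A looping PNG background like this is what p5.play’s animation helpers are for. The sketch below is a minimal illustration, assuming p5.js with the p5.play library loaded and five hypothetical frame files (`subway1.png` … `subway5.png`), not the project’s actual assets:

```javascript
let subwayAnim;

function preload() {
  // p5.play builds one looping animation from a sequence of images.
  subwayAnim = loadAnimation(
    "subway1.png", "subway2.png", "subway3.png",
    "subway4.png", "subway5.png"
  );
}

function draw() {
  // animation() draws the current frame and advances the loop.
  animation(subwayAnim, width / 2, height / 2);
}

// The frame cycling p5.play performs boils down to modular arithmetic:
// on each draw call, the frame index wraps around the total frame count.
function frameIndex(drawCall, totalFrames) {
  return drawCall % totalFrames;
}
```

Five frames looping this way reads as continuous motion while keeping each file small enough to load quickly.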

Once I successfully added the animations to my code, I used different functions and “addCue” to trigger the animations based on the audio, as well as to create the interactions.
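The audio-driven triggering can be sketched with p5.sound’s `addCue(time, callback, value)`, which fires a callback when playback reaches a given second. The file name and cue times below are illustrative placeholders, not the project’s real timings:

```javascript
let panicAudio;
let scene = "intro"; // which visual state the draw loop should show

function preload() {
  panicAudio = loadSound("panic.mp3"); // hypothetical audio file
}

function setup() {
  createCanvas(640, 480);
  // Each cue fires once when the audio reaches that timestamp.
  panicAudio.addCue(0, setScene, "subway");
  panicAudio.addCue(8, setScene, "panic");
  panicAudio.addCue(40, setScene, "closure");
  panicAudio.play();
}

// p5.sound passes (time, value) to the cue callback.
function setScene(time, label) {
  scene = label;
}
```

The draw loop then only has to check `scene` to decide which animation to render, so the audio track acts as the master timeline for the whole simulation.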

Here is what I ended up with:

https://drive.google.com/open?id=1rZYTTyByN53vB8ByfKPy5V_aUJrzemkv

You can find my code below, which you can open with a text editor to see the final work! I used Brackets!

Here is my code:


Final Changes for the Show

While presenting the project during class, I realized that facial mapping required an extremely well-lit room; otherwise, the code could not “see” the user’s chin. At first, I thought of simply switching the code to map from the eyes down, but if something is being mapped onto a user’s body, they are very likely to move around, and if the code used the user’s eyes, the animations would scatter everywhere. Thus, I needed to use something more stable.

As a result, I converted my code from facial mapping to PoseNet, mapping the animation of the body parts to the point between the user’s shoulders. I am terrible at math and struggled to find the mean distance, but luckily I was able to in the end!
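Finding that anchor point comes down to averaging the two shoulder keypoints. A minimal sketch, assuming ml5.js-style PoseNet output where the pose object exposes named keypoints like `leftShoulder` and `rightShoulder`:

```javascript
// Return the midpoint between the two shoulder keypoints of a pose.
function shoulderMidpoint(pose) {
  const left = pose.leftShoulder;
  const right = pose.rightShoulder;
  return {
    x: (left.x + right.x) / 2,
    y: (left.y + right.y) / 2,
  };
}

// Inside ml5's poseNet.on("pose", ...) callback this might be used like:
// const anchor = shoulderMidpoint(poses[0].pose);
// image(lungsGif, anchor.x - lungsGif.width / 2, anchor.y);
```

Shoulders move far less than eyes or a chin when a user shifts around, which is why this midpoint makes a much more stable anchor for the animations.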

Since I also understood p5.play better by this point, I decided to take 15 PNGs of the lung animation and animate them through p5.play rather than using the gif. I thought users would appreciate the higher-quality animation that p5.play offered. However, after a few rounds of A/B testing of the gif animation versus the p5.play animation, I discovered users preferred the gif. They thought the low quality created an “abstractness” that allowed them to really be immersed in the story.

 

Conclusion

I am honestly happy that I faced all the issues I did because, as a result, I got the opportunity to explore libraries, like p5.play, that we did not get to use in class. I am quite proud of my work, especially because my freshman year I failed my first coding class, and now I have coded this entire project! Of course, this project would not exist without the help my professors and friends provided me! It was really rewarding during the show to hear users talk to me after the simulation about how anxiety disorders have affected their lives. A lot of the users mentioned that they had a partner who had panic attacks, and while they had learned how to help their partner get through an attack, they never understood what had been going on. This experience gave them a glimpse of what it had been like for their partner and finally helped them understand the situation, something that endless conversations simply could not provide. I really hope to keep developing this project further so that it can serve as an educational tool!

Here is a video of my work during the show:

What I will be working on in the future

After having numerous people try out my project at the show, I was able to get a lot of user feedback! While most of it was positive, many users explained that the conclusion could still use some work: they still felt shocked and were unsure what to do after the simulation. One participant even asked if I had a teddy bear they could hold. I have always struggled with making powerful conclusions, so I think this will be the perfect opportunity to work on that skill.

I also got the opportunity to show my work to a medical student who was planning to become a psychiatrist. Ideally, I would love for my project to be used to educate medical professionals about mental illness. The student provided me with some insights on how I could extend the project to appeal to medical professionals’ needs. For instance, he mentioned that I could have users experience the panic attack on the subway and then “go to the ER and hear from a doctor that it was just a panic attack”. Not only would this make for a better story arc, but it would help medical professionals understand the importance of empathizing with patients who have just had a panic attack. I think this was a really powerful insight, and I plan on brainstorming around it a bit more!
