Show-A-Thing follow up

Finished my final Show-A-Thing on Friday with Moon. This process has been super useful. Beyond the advice I received, simply talking through my thesis repeatedly helped me clear my mind a lot. Each time I go through the slides, I gain a little and am in a better mood for the next round.

I do wish I could have had more time to talk with them. Three 40-minute sessions would have been a better plan for me.

I got advice and help from different perspectives and also some useful references.

Heidi helped me reorganize my slides. Now I know that the slides can follow the same order in which I developed my thesis. Everyone gave me a lot of useful references, including games, websites, and organizations. I have not finished researching all of them, but this information has already helped me reshape my thesis.

Moon mentioned that I could define a clearer target player and a clearer purpose. I realized that one side of my game could be an educational game for teenagers. Teenagers may be afraid of talking to the elderly in their family, but the elderly can create a game from their stories and let the young play it. The game then becomes a bridge between the old and the young.

Peer Meeting 4

April 10, 2023

How are we already this far through the Spring session?!

It was exciting to see where everyone is in the production process as we get to the end of the Spring session.

Some of the notes we zeroed in on for my vest project concerned the larger experience: the types of sensors and programming needed to produce the same reactions in the vest while tying them to different emotions from the story, and how that changes your perception of what you feel in your body.

I'm also happy to hear other wearables are taking shape; it's great to see other folks materializing projects of a similar type.

Onward!

Meeting with mentor #1

Last week, I met with my mentor, Fletcher, for the first time. Fletcher is a freelance creative technologist who specializes in prototyping new hardware for companies. He also has a passion for photography, which is why he was suggested as my mentor.

I explained my thesis project to Fletcher, who responded enthusiastically and answered some questions I had during my Show-and-tell presentation. He recognized that my main goal is to teach people how to slow down the process of making photographs. Fletcher gave me advice on overcoming some of the hardware challenges I have encountered, such as 3D printing the camera body. He also provided suggestions for future production after the prototype stage.

Show-A-Thing Reflection

Show-A-Thing was fun! I did the same presentation and asked the same question each time, and it was super helpful to hear responses from different perspectives. I came up with an alternative way of implementing the audiovisual component at the last minute (while I was making the slides), and it seems like everyone was more interested in the new form.

Enrique

We talked about lots of technical things like audio+video signal pipeline, Ableton connection kit, TDAbleton in TouchDesigner, OSC, and setting up a local network router for live performance. 

Pierre

In the multi-screen setup:

- Think about real-time vs. rendered

- Are the screens showing all perspectives or collectively making one clear statement? (All perspectives as in people’s different relationships with their bodies, reflected in different ways of burial and activities)

- Position of the screens: can the viewers see all of them at once (triptych) or one at a time (space to explore)?

- Is it an elaborate moodboard, or am I making a strong point?

- It would be more impactful if I could make the audience engage in ways they usually don’t.

- Need something unexpected

Reference: https://ichiehtsai.tumblr.com/

Tiri

We talked about composing and choreography in a multi-screen setup. How long will the performance be? There need to be clear transitions and cues to direct the audience’s attention and keep them from getting confused. We also talked about a real body in performance vs. a sculptural replica. Tiri also showed me the artist Korakrit Arunanondchai, whose work I found very relevant to my project.

Reference: https://www.carlosishikawa.com/artists/korakritarunanondchai/

Chris

Chris also found a multi-screen setup more intriguing. A suggestion: think about burials in non-literal ways. Try being more abstract. 

Reference: https://rui-sasaki.com/section/445550-Corners-Home-Toyama-Japan.html

Moon

Think about the use of sound and lighting. They need to be tightly linked to the images on screen for the audience to navigate & focus. Avoid becoming an ambient background. Expectations need to be broken. 

We then talked a lot about the possibilities of executing this project in Shanghai: different spaces on campus (gallery, courtyard, rooftop) and how they could be conceptualized and added to the narrative. There are abandoned spaces on the outskirts of Shanghai that resemble my sketch. Maybe I can use certain materials and create a site-specific installation. Be ambitious.

Moon also showed me some works he had done in Shanghai, which were very inspiring!

FER Test Blog#2

I kept researching and finally found a library called deepface, which is a lightweight face recognition and facial attribute analysis (age, gender, emotion, and race) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace, Dlib, and SFace.

It is also easy to get access to a set of features:

  1. Face Verification: The task of face verification refers to comparing a face with another to verify if it is a match or not. Hence, face verification is commonly used to compare a candidate’s face to another. This can be used to confirm that a physical face matches the one in an ID document.
  2. Face Recognition: The task refers to finding a face in an image database. Performing face recognition requires running face verification many times.
  3. Facial Attribute Analysis: The task of facial attribute analysis refers to describing the visual properties of face images. Accordingly, facial attribute analysis is used to extract attributes such as age, gender classification, emotion analysis, or race/ethnicity prediction.
  4. Real-Time Face Analysis: This feature includes testing face recognition and facial attribute analysis with the real-time video feed of your webcam.
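As a rough sketch of feature 3 (facial attribute analysis), here is a hypothetical helper built on deepface's `analyze` call. This assumes `pip install deepface`; the image path is a placeholder, not a file from this project, and in recent deepface versions `analyze` returns a list with one dict per detected face:

```python
# Hypothetical wrapper around deepface's facial attribute analysis.
def dominant_emotion(img_path: str) -> str:
    """Return the strongest emotion deepface detects in an image."""
    from deepface import DeepFace  # imported here because the model weights are heavy

    # actions can also include "age", "gender", and "race"
    results = DeepFace.analyze(img_path=img_path, actions=["emotion"])

    # one result dict per detected face; take the first face found
    return results[0]["dominant_emotion"]
```

The other features map to similarly named calls (`DeepFace.verify` for verification, `DeepFace.find` for recognition against a database, `DeepFace.stream` for the real-time webcam demo).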

https://www.notion.so/Test-Blog-2-f17a53d7403949f3973b23073daa43e5?pvs=4

1-1 Reflection #4

Had a great 1-1 meeting with Sarah on Friday and got lots of questions and great suggestions to help me revisit my Show-A-Thing slides.

In conclusion, Sarah thinks the research is solid enough for me to keep developing, and that I should playtest the model/prototype and actually build it to reveal what works and what doesn’t. The existing problems in my slides should be considered during the playtesting/user-testing stage, such as deeper goals (what is shared, what isn’t?), what I PERSONALLY want after the research, audience experience, and misinterpretation.

List the questions below:

what is the deeper goal, what is shared, what isn’t?
is this something YOU want? what values/dangers do you PERSONALLY see after this research?
how do you get your work to your audience? (can be something you have as an open question for your feedback)
your WHAT still has SO much risk of misinterpretation (that your message supports this technology) – how to combat that? Humor might be one way, are there others???
can you “playtest” this – (just fake the whole thing and make the printout) to see how people react to it?
can you produce a sketch of what this actually looks like as an installation? ^^perhaps actually building it reveals what works and doesn’t (so maybe start ASAP)
how did it feel to be judged inaccurately?
how did that FEELING guide your next steps?
what about when your emotion doesn’t match your facial expression? this is the most obvious critique of this. ^how does the fact that you know you’re being looked at and judged CHANGE the way you physically manifest your emotions?
how do you feel about the work of four little trees?
it’s unlikely that emotion detection will ever truly work (or … is it?) but – even if it did, the idea that people would then be forced to think through the expression of their emotions to avoid surveillance or control is something to raise awareness about now

I will follow up on these questions during the remaining time, and if anyone is interested in doing the user test, please contact me :)

Show-A-Thing follow up

Super helpful and so excited to meet people in different areas, big thanks to Rudi, Christopher, Monika, and YG.

Basically, the thing people asked about most was the experience/tone I wanted to provide. My initial idea was to create a serious experience, but now I might want to envision more kinds of experiences I could provide: something with a humorous vibe, or letting people play around with different models. As for the model, I should also consider external elements like glasses, masks, beards, and even bald heads.

I got some inspiration and news as well, pasted below:

Hans Haacke, News (1969/2008)

Microsoft Scraps Entire Ethical AI Team Amid AI Boom  

Revisit the Dawn of the Digital Age Through These 9 Key Works From LACMA’s Exhibition on Early Computer Art 

Show A Thing Follow Up/Update

Show-A-Thing was a really good experience for me. It was interesting to get multiple different perspectives on my project and the ways it could pan out, and the resources/ideas presented to me were awesome. I steered the conversation with most of the guests toward the digital-tool aspect of my proposal. Some of the ideas and resources presented to me were:

  • https://www.blasttheory.co.uk/projects/ivy4evr/
  • https://theaterofwar.com/about
  • Streaming a live Theatre of the Oppressed workshop in VR using tools available at NYU Shanghai
  • Streaming Forum Theatre exercises to Twitch using trained actors who take suggestion from the chat
  • Using Zoom as a facilitation tool to do this digitally

And several others. At the moment I’m trying to process all of this information and how it could affect the development of my project, while finishing up the first version of the workshop I’m going to run tomorrow (April 10) at Utah State University.

  • https://docs.google.com/presentation/d/1w7vzgVX9byusTtyzThG5j4i37EKW1-z1LAf5hiUzKSE/edit#slide=id.g2146677c621_0_53 (This is the completed version of the workshop. I edited it to include a section on Image Theatre in case I’m short on time, where my prototype only focused on Forum Theatre. I wrote scenes/scenarios for each, and included a bibliography.)

From here I plan to edit the workshop based on my findings there, and hopefully adapt it to a semi-digital format to attempt in New York sometime between April 18 and 25.

Mentor Meeting #1

(4/6) I had a meeting with my group and a meeting with my mentor on the same night. In the group meeting, Sarah Hakani encouraged me to do what I want to do instead of trying to accommodate everyone’s feedback.

The chat I had with my mentor Max was just great! He steered the project direction back to my original feeling, which I’m excited to continue!

As it was the first time we met, I shared my previous works and the current prototype. Regarding the pen plotter prototype I was making, he offered some meaningful suggestions and keenly pointed out potential issues with it. His ideas included creating images as a time-series stack, which emphasizes cooperation, and offering users credits to attract participants. One possible issue that needed attention was having too many interaction steps to show the git flow.

I really appreciated the discussion that followed because Max shared the feeling with me. He encouraged me to continue with my personal feelings. To address the issue of feelings being too abstract, he suggested that I could list emotions as a way to express feelings. This idea opened up my mind! So this is like an “implement” for conveying the feeling. It is difficult to express exactly the same feelings, but we can find derivatives, or the context that generates the sensations! Another idea he gave me was that I could even create a video of a drama that plays out in GitHub issues, without using high tech. Both of these ideas liberated my thinking about ideas and forms. Compared to them, the prototype I was making was more of a “downgrade” than an “implement”!

So the tasks I can do now are listing emotions and finding stories. I feel like new ideas and inspirations will come to me soon!