Suirun Zhao

Mentor Meeting Reflection: w/ Cezar Mocan

Had a really nice meeting with Cezar, and showed him the things I have done during these two weeks, including user testing, model building, and report design. Cezar gave me a lot of positive feedback along with some inspiring ideas of his own.

We agreed that the report is the key, and talked about whether I should add the attractiveness score. Cezar suggested that I should only add it if I have a good conceptual reason to. If it’s more along the lines of “I could use one more stat on the report, what could it be?”, or “It’s easy to calculate an attractiveness score based on the data I get from the model”, he said it’s better not to have it, since it’s quite a charged metric, and might make viewers uncomfortable in ways I don’t intend. The mental health aspects are charged as well, but he thinks those are easier to fit into the framework of my project re: tone. However, if my reason is more along the lines of “Bank X uses an attractiveness metric when deciding how big of a credit limit to give clients, because attractiveness is correlated with wealth” or whatever, and I’m making a direct commentary on that, he would say it’s worth considering adding it, but I should be very careful with how I frame it, and make it obvious that it’s satire/commentary, etc.
As for the idea of making the system’s bias more obvious by giving it an internal state/emotion, which would make the scores higher or lower: that could simply be a one-dimensional noise function that evolves over time, so the system has one emotion, let’s say valence, that’s more positive or negative, and drags the scores up or down with it. Of course, it could expand into multiple dimensions (e.g. valence – is it positive or negative? which could impact the scores; tension – is it angry or calm? which could impact word choices on the report, making the report feel more friendly or more aggressive, using just synonyms of the words I use for the metrics, etc.). Another way to model the emotion of the system could be as a running average based on the previous readings: let’s say if it had 3 people in a row with a high predicted success score, the system is excited about life and tends to be biased towards giving more positive scores, etc. Kinda similar to how priming the human brain works: like if I have 2 meetings one after another, and both of them feel super inspiring, I’m more likely to find the next meeting inspiring/exciting as well, because I’m in the right head space for that. Just food for thought.
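The running-average version of that idea can be sketched in a few lines. This is just a toy model, not part of the actual system; the class name, the parameter values, and the 0–100 score range are all my own placeholders:

```python
class MoodySystem:
    """Toy model of the 'internal emotion' idea: valence is a running
    average of recent readings and drags the next score up or down."""

    def __init__(self, alpha=0.3, bias_strength=10.0):
        self.alpha = alpha                  # how fast valence tracks recent readings
        self.bias_strength = bias_strength  # max points the mood can add/subtract
        self.valence = 0.0                  # -1 (gloomy) .. +1 (excited)

    def score(self, raw_score):
        # Bias the raw score (0-100) with the current mood, clamped to range
        biased = max(0.0, min(100.0, raw_score + self.bias_strength * self.valence))
        # Then update valence toward this reading (the priming effect):
        # readings above 50 push the mood up, below 50 push it down
        reading = (raw_score - 50.0) / 50.0
        self.valence = (1 - self.alpha) * self.valence + self.alpha * reading
        return biased
```

With `alpha=0.5`, a system that just saw a 90 would rate the next identical face a few points higher, since its valence has drifted positive. Replacing the `reading` term with a smooth noise function would give the one-dimensional drifting-mood variant instead.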
Later we talked about the technical part, and he sent me the wristband project he mentioned (with Ian Cheng); this is a guided tour of that show, which shows the process as well. He also said if I need help creating the printing system he would love to share the package with me, which is sooooo nice:)))

FER Test Blog #2

I kept researching and finally found this library called deepface, which is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace, Dlib and SFace.

It also gives easy access to a set of features:

  1. Face Verification: The task of face verification refers to comparing a face with another to verify if it is a match or not. Hence, face verification is commonly used to compare a candidate’s face to another. This can be used to confirm that a physical face matches the one in an ID document.
  2. Face Recognition: The task refers to finding a face in an image database. Performing face recognition requires running face verification many times.
  3. Facial Attribute Analysis: The task of facial attribute analysis refers to describing the visual properties of face images. Accordingly, facial attributes analysis is used to extract attributes such as age, gender classification, emotion analysis, or race/ethnicity prediction.
  4. Real-Time Face Analysis: This feature includes testing face recognition and facial attribute analysis with the real-time video feed of your webcam.
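For the facial attribute analysis part, usage looks roughly like this. This is a sketch assuming `pip install deepface`; the image path and the `summarize` helper are my own placeholders, and the result keys follow deepface’s documented output, so it is worth verifying them against the installed version:

```python
def summarize(result):
    """Pick out the report-relevant fields from one analyze() result.
    The key names follow deepface's documented output format."""
    return {
        "age": result["age"],
        "emotion": result["dominant_emotion"],
        "race": result["dominant_race"],
    }

if __name__ == "__main__":
    # deepface is imported here since it is only needed for the live analysis
    from deepface import DeepFace

    # analyze() runs the requested attribute models on every detected face;
    # recent deepface versions return a list with one result per face
    results = DeepFace.analyze(img_path="face.jpg",
                               actions=["age", "gender", "emotion", "race"])
    for face in results:
        print(summarize(face))
```

The real-time feature works similarly through `DeepFace.stream()`, which keeps reading from the webcam instead of a single image file.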

https://www.notion.so/Test-Blog-2-f17a53d7403949f3973b23073daa43e5?pvs=4

1-1 Reflection #4

Had a great 1-1 meeting with Sarah on Friday; she gave me lots of questions and great suggestions to help me revisit the Show-A-Thing slide.

In conclusion, Sarah thinks the research is solid enough for me to keep developing, and that I should playtest the model/prototype: actually building it will reveal what works and what doesn’t. The open problems in my slide should be addressed during the playtest/user-testing phase, such as the deeper goals (what is shared, what isn’t?), what I PERSONALLY want after the research, the audience experience, and the risk of misinterpretation…

The questions are listed below:

what is the deeper goal, what is shared, what isn’t?
is this something YOU want? what values/dangers do you PERSONALLY see after this research?
how do you get your work to your audience? (can be something you have as an open question for your feedback)
your WHAT still has SO much risk of misinterpretation (that your message supports this technology) – how to combat that? Humor might be one way, are there others???
can you “playtest” this – (just fake the whole thing and make the printout) to see how people react to it?
can you produce a sketch of what this actually looks like as an installation? ^^perhaps actually building it reveal what works and doesn’t (so maybe start ASAP)
how did it feel to be judged inaccurately?
how did that FEELING guide your next steps?
what about when your emotion doesn’t match your facial expression? this is the most obvious critique of this. ^how does the fact that you know you’re being looked at and judged CHANGE the way you physically manifest your emotions?
how do you feel about the work of four little trees?
it’s unlikely that emotion detection will ever truly work (or is it?) but – even if it did, the idea that people would then be forced to think through the expression of their emotions to avoid surveillance or control is something to raise awareness about now

I will follow up on these questions during the rest of the time, and if anyone is interested in doing the user test, please contact me:)

Show-A-Thing follow up

Super helpful and so excited to meet people in different areas, big thanks to Rudi, Christopher, Monika, and YG.

Basically, the thing people asked most about was the experience/tone I wanted to provide. The initial idea was to create a serious experience, but now I might want to envision more experiences I can provide: something that gives a humorous vibe, or allows people to play around with different models. As for the model, I should also consider external elements like glasses, masks, beards, and even bald heads.

Got some inspiration and news as well, pasted below:

Hans Haacke, News (1969/2008)

Microsoft Scraps Entire Ethical AI Team Amid AI Boom  

Revisit the Dawn of the Digital Age Through These 9 Key Works From LACMA’s Exhibition on Early Computer Art 

Quick Reflection: w/ Craig Protzel

After resubmitting my thesis proposal, Craig also gave me a lot of helpful responses:

  1. I should keep thinking about the intention of my project, because it currently sounds contradictory within my proposal. The tone of my project is not clear yet.
  2. He encouraged me to take a different perspective on the training data. For example, provide three different models “where data is collected from dramatically different populations”, to allow a user to choose which training-data model they want to be judged by.
  3. My goal should be to do the work I like to do while also delivering an engaging and memorable experience for participants.

So I started revising the third edition of my thesis proposal to properly adjust the central question and final purpose.

Quick Reflection: w/ Ellen Nickles

After a quick introduction to my thesis topic during Recode class, Ellen showed me some articles and projects related to my topic after class:

  1. AI ‘emotion recognition’ can’t be trusted: The belief that facial expressions reliably correspond to emotions is unfounded, says a new review of the field. (It includes a link to a research paper that I also cited, “Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements.”)
  2. ITP project from several years ago, part installation and part performance: https://winnieyoe.com/Smile-Please-1
  3. The work of Duchenne from the 1800s; the search for universal human facial expressions is not new. See “The Mechanism of Human Facial Expression” in the Wikipedia entry here and his original text: https://archive.org/details/mechanismofhuman0000duch/mode/2up

I really appreciate it; the conversation truly inspired me about the experience I can provide and the history of facial expressions.

Mentor Meeting Reflection: w/ Cezar Mocan

Had the first meeting with my mentor Cezar, got super helpful feedback, and really appreciate him pushing me to go further. It is so interesting to have a mentor who is an artist and programmer based in Lisbon, interested in the relationship between the natural landscape and technology, and to get a different perspective from him.

Cezar encouraged me to experiment with different types of audience responses rather than fear, and to explore what other options could get the audience to think critically about FER. We also talked about Sam Lavigne and Adam Harvey, to imagine how I can use a “silly” (humorous) way/tone to point out the problem.

As for the prediction report, Cezar showed his interest and thinks it is the key to the whole project. He encouraged me to think more about the content and showed me his collaborative NFT work with Ian Cheng (one of my favorite artists!), which “reads your wallet’s public transaction history and infers the inner forces that compose your personality.”

Another suggestion he gave me is to talk to someone who works at these technology companies, such as 4 Little Trees, or someone who uses their services, and interview them with some questions to get a deeper understanding of why FER is still being used. I will try to connect with someone this week if possible.

The last thing we talked about was the accuracy of the model, and I started to think that maybe I can create a user guide teaching the audience how to act appropriately to get a higher score if I run into a bottleneck.

1-1 Reflection #3 Follow-up

Audience, verb, successful?

Who was the audience for each?

What was the verb?

Was it successful in enacting that verb?

Also – these projects are all a few years old – what does it mean to do this project right now!?

Or, like Rafael Lozano-Hemmer’s project: is there a more specific application of face detection you’re interested in? For instance, the school example was interesting – does your installation mimic a classroom?

https://www.lozano-hemmer.com/level_of_confidence.php

Level of Confidence is an art project to commemorate the mass kidnapping of 43 students from the Ayotzinapa Rural Teachers’ College in Iguala, Guerrero, Mexico.

  1. Audience: people who are willing to help, and people who disagree with how these police technologies are used
  2. Verb: subvert, create empathy, maintain visibility on the tragedy
  3. It was successful in using this tragedy to create empathy and a way to generate funds for the community. It also repurposes police technology in a totally different way: instead of looking for suspicious culprits, it searches for the victims.

https://kcimc.medium.com/working-with-faces-e63a86391a93

  1. Audience: people who disagree with the ever-growing use of face analysis software, to reinforce that feeling; also white Americans who have never confronted how whiteness operates
  2. Verb: critique, create increasingly uncomfortable criteria, acknowledge individual complicity and universal dignity
  3. It was successful: it starts with a simple smile and sunglasses, then gradually shifts focus to race, criminality, or sexual orientation. In this way, users experience for themselves, with increasing discomfort, how we are seen by the algorithms that are increasingly running the world.

https://www.eyebeam.org/kyle-mcdonald-against-fa/

  1. Audience: people against face analysis, and against the violence of the police and carceral system that is an extension of slavery
  2. Verb: subvert and misuse, critique, weaken
  3. It is successful as research, and it also uncovers the problems beneath the system, which is built on violence.

https://www.engadget.com/2017-11-09-untrained-eyes-engadget-experience.html

Couldn’t open the original link because of regional restrictions, so I found this one instead.

  1. Audience: people who are interested in this technology, and people who agree with its use but don’t realize the bias of image searches on the internet
  2. Verb: shed light, challenge, disprove
  3. I don’t think it is successful, because people may only care about the fun experience of seeing who they look like, and won’t realize the bias and inherent flaws of the AI algorithms.

https://adam.harvey.studio/cvdazzle/

  1. Audience: people who agree with the use of face detectors, and are never aware of the harm of the system
  2. Verb: challenge the growing power asymmetries in computer vision (in particular, the widespread deployment of facial recognition technology), encourage experimenting with and probing opaque algorithmic systems, question the system, attempt to break it
  3. Super successful: the concepts spread over distance and time, creating a steadily growing resistance to face recognition.