Code to Racist Technologies

Winnie Yoe

"Code to Racist Technologies" is a project about implicit racial bias and colorism, and a subversive argument against technologies developed without thoughtful considerations of implications.

https://www.winnieyoe.com/icm/racist-webcam

Main Project Image

Description

“Code to Racist Technologies” is a project about implicit racial bias and colorism. It is also a subversive argument against technologies developed without thoughtful consideration of their implications. The project is inspired by Ekene Ijeoma’s Ethnic Filter, Joy Buolamwini’s research “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” and Dr. Safiya Noble’s talk “Algorithms of Oppression: How Search Engines Reinforce Racism” at Data & Society.

As machine learning increasingly infiltrates aspects of our society, it is important to recognize how our biases, in this case implicit racial bias, are translated into the technologies we design, and how that can lead to drastic and often detrimental effects.

The project, in the form of an interface experience, is two-fold. Users first complete an implicit bias association test, which I partially adapted from a test developed by researchers from Harvard, UVA, and the University of Washington. Once the test is completed, users enter section two, where they find out their probability of developing a racist technology and how the technology they develop would affect people of different skin tones. What users do not know is that none of their test results are recorded. In fact, their result always shows that they prefer light skin relative to dark skin, with a random percentage between 80% and 90% that they will develop a racist technology.
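The scripted outcome could be produced with something as small as the following sketch. It is written in Python rather than the browser code the project presumably uses; the function and field names are hypothetical, and only the fixed light-skin preference and the 80–90% range come from the description above:

```python
import random

def fake_test_result():
    # The "result" is fixed by design: every participant is told they prefer
    # light skin relative to dark skin, with a randomly chosen 80-90% chance
    # of developing a racist technology. Nothing from the actual test is used.
    return {
        "preference": "light skin relative to dark skin",
        "probability_of_racist_technology": random.randint(80, 90),
    }
```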

Initially, the project consisted only of section two, a “racist webcam” that isolates skin tone from the background, determines where the skin tone falls on the Fitzpatrick Skin Type Classification, then maps that value to a degree of pixelation applied to the user’s webcam capture. The darker your skin tone, the more pixelated your video appears, and the less visible you are. The program is laid out so that each time it runs, the user’s video capture is shown alongside eight pre-recorded videos. The juxtaposition of the user’s experience with the other participants’ heightens the visual metaphor of the effect of racial bias on one’s visibility and voice.
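A minimal sketch of that mapping might look like the following, assuming OpenCV and a laptop webcam. The original piece runs in the browser (the class uses p5.js), so this Python version, the YCrCb skin-mask heuristic, and the brightness-to-block-size mapping standing in for the Fitzpatrick classification are all illustrative assumptions rather than the project's actual code:

```python
import cv2
import numpy as np

def skin_brightness(frame_bgr):
    """Crudely isolate skin in YCrCb space and return the mean luma of skin pixels."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # commonly cited Cr/Cb skin range
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    skin_luma = ycrcb[:, :, 0][mask > 0]
    return float(skin_luma.mean()) if skin_luma.size else 128.0

def pixelate(frame_bgr, brightness):
    """Darker skin tones (lower luma) get larger pixel blocks, i.e. less detail."""
    block = int(np.interp(brightness, [0, 255], [32, 2]))  # stand-in for the Fitzpatrick mapping
    h, w = frame_bgr.shape[:2]
    small = cv2.resize(frame_bgr, (max(1, w // block), max(1, h // block)))
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("racist webcam (sketch)", pixelate(frame, skin_brightness(frame)))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```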

My goal for this project, and the reason I added section one, is to help users realize that all of us are biased, and that it is only with very conscious awareness that we will stop creating racist technologies that bring detrimental effects.

What is the code to racist technologies? The code is you and me, especially the you and me who assume that because we are “liberal” and “progressive,” we are not and will not be part of the problem.

Classes

Introduction to Computational Media

Smile, Please

Chenshan Gao, Winnie Yoe

“Smile, Please” is a speculative, dystopian system that assesses your facial expression, shocks you, and prints out a photo and grading record to train you toward a perfect smile.

https://chenshangao.squarespace.com/smile-please/

Main Project Image

Description

According to the UK innovation foundation Nesta, the use of AI for emotion prediction is one of the predicted innovation trends of 2018. Companies such as Affectiva and Beyond Verbal already own huge deposits of emotion data from around the world. The Facebook and Cambridge Analytica scandal demonstrates the danger and impact of “psychological warfare” in tech. In the near future, as our emotions become an asset that is trackable and predictable, will they also become controllable and “hackable”?

Against this background, we created “Smile, Please,” a system that detects smiles, uses a thermal printer to tell users whether their smile is “good” enough, and uses electrodes to shock them if it is not. In addition, users are given “The Manual of Smile Etiquette” after each experience. The project is a response to the prevalence of emotion AI and to the coercive societal forces that manipulate our emotions today. It combines concepts from Physical Computing and Design for Discomfort (e.g., creating and closing the magic circle, and the use of visceral effect and taboo). Through an extreme approach, dark humor, and visceral discomfort, with a mechanism that references Pavlov’s classical conditioning and the Milgram experiment, we hoped to shock our audience into thinking about the implications and ownership of our emotions in the current societal and technological landscape.
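A rough sketch of the grading-and-shock control loop could look like the following, assuming the smile score arrives from some external classifier and that a microcontroller drives both the electrodes and the thermal printer over serial. The port name, the 0–100 score, the 70-point pass threshold, and the one-byte command protocol are all illustrative assumptions, not the documented implementation:

```python
import random
import time
import serial  # pyserial

PASS_THRESHOLD = 70  # hypothetical "good enough" smile score (0-100)
board = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # microcontroller for electrodes + printer

def get_smile_score() -> float:
    """Placeholder for the smile detector; a real build would use a camera and classifier."""
    return random.uniform(0, 100)  # stubbed value so the sketch runs end to end

def grade_and_respond(score: float) -> None:
    if score >= PASS_THRESHOLD:
        board.write(b"P")  # hypothetical command: print a passing grade record
    else:
        board.write(b"S")  # hypothetical command: trigger the shock, then print the record
    board.write(f"{score:.0f}\n".encode())  # include the score on the printed record

while True:
    grade_and_respond(get_smile_score())
    time.sleep(2)  # one grading cycle every couple of seconds
```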

*Considering the ethics of this project, we will follow IRB guidelines and reference other artists who have used a TENS unit / electric shock in their work.

Classes

Design for Discomfort, Introduction to Physical Computing
