As technology becomes increasingly integrated with our daily lives, we rely on it more and more to make decisions for us: how we should get from one place to the next (Waze), what we should have for dinner (Yelp, TripAdvisor), what content we should see on the internet (social media), and who we should date (dating apps). What would a future feel like in which all of our decisions are made by these machines?
The self-driving human simulates this scenario by making choices for the human in areas that are not currently decided by technology. It's a portable device that detects objects around the person using a camera and machine learning, and gives the user commands on how to interact with the environment based on an arbitrary algorithm that changes day by day. It allows the person to outsource thinking and decision making to an algorithm that they do not entirely understand.
The agent's decision space is limited to what the machine has been trained to see. The user does not know what the logic of the algorithm is, and while the machine does know the objective, it does not know whether that objective is good or bad. What happens when the objectives of this machine differ from the objectives of the person it's meant to serve? How do the choices made by the algorithm collide with the human's desires when they're disconnected from the biases and emotions that the person has learned throughout his or her life?
The device is portable and works entirely offline using machine learning on the edge, allowing for real-time response even where there is no internet connection and maintaining privacy, as all data stays on the device.
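A minimal sketch of how such a pipeline might look, with the on-device detector stubbed out. The detector output, command vocabulary, and date-seeded rule below are illustrative assumptions, not the project's actual implementation:

```python
import datetime
import hashlib
import random

# Hypothetical command vocabulary; the real device's commands are not specified here.
COMMANDS = ["pick it up", "walk toward it", "avoid it", "point at it"]

def detect_objects(frame):
    """Stub for the on-device detector, which would normally run a compact
    vision model on the camera frame and return object labels."""
    return ["chair", "person", "cup"]  # placeholder detections

def daily_rule(label: str) -> str:
    """Map a detected label to a command via an arbitrary rule reseeded
    from today's date, so the algorithm changes day by day."""
    key = (datetime.date.today().isoformat() + label).encode()
    seed = int(hashlib.sha256(key).hexdigest(), 16)
    return COMMANDS[random.Random(seed).randrange(len(COMMANDS))]

for label in detect_objects(frame=None):
    print(label, "->", daily_rule(label))
```

Seeding from the date keeps the rule stable within a day but opaque and shifting across days, which matches the idea of obeying an algorithm one does not entirely understand.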
For the performance, I carry it through the real world, listening to its commands and attempting to do as I'm instructed, no matter how uncomfortable it makes me.
LIME is a patchable device that gives the user a platform for using light as a control medium for musical expression. Through a combination of lighting, sensors, fiber optic patch cables, and processing units, the system allows the user to design signal flows with patch cables that generate and modulate light patterns, before translating the patterned light into musical events interpreted by computer-based virtual instruments and effects.
Over the past 80 years, electronics, and especially computers, have had a massive impact on the ways music is created. In more recent years, new technologies have introduced incredible capabilities; however, these have often come at the cost of increased complexity and a growing level of abstraction between the sounds that are made and the signals used to create them. This phenomenon has created a disconnect between performers and the audience, as the connection between gestures on stage and the sounds that are heard becomes increasingly dissociated. A prime example can be seen in modular synthesizers, where even the player can become confused by the intricate yet abstract programming of the instruments, especially on stage.
But what if audio signals could be made visible while they're being communicated? Could patch cables that expose the underlying signal patterns improve one's comprehension of a composition, or perhaps enhance the experience of composing as well as viewing? LIME responds to these questions by offering a semi-modular system for using light as a control medium in musical expression. Where modular synthesizers generate and modulate raw audio, LIME operates in the computer space using the MIDI protocol. Through a combination of lighting, sensors, fiber optic patch cables, and processing units, the system allows the user to design signal flows made of light patterns, which are then translated into sound.

The patch cables provide real-time views into the signal flows as they pass between modules: the pulsing and breathing light gives the signal flow a physical form. For a composer or performer, this visual component makes it quick to understand how a signal is affected by different modules and how it fits into a larger musical piece. Another benefit is reduced error and easier troubleshooting. While computer software allows internal signal routing, it can quickly lead to extremely complex scenarios and abundant confusion. The physical act of 'patching' from point to point provides a more tangible understanding of the connections as they are formed. Unfortunately, as patches grow, they too can become a chaotic mess of wires, with little ability to quickly discern the meaning of individual connections. With LIME, signal generators, processors, and inputs can be combined to compose sounds and rhythms while simultaneously showing, visually, the path taken to achieve them.
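As a rough illustration of the light-to-MIDI idea, a brightness stream from a sensor can be turned into note-on events at rising threshold crossings, with brightness scaled to velocity. The 10-bit sensor range, threshold, and fixed note number here are assumptions for the sketch, not LIME's actual mapping:

```python
NOTE_ON = 0x90  # MIDI note-on status byte, channel 1

def light_to_midi(samples, threshold=300, note=60):
    """Emit (status, note, velocity) tuples on rising threshold crossings.
    `samples` are 10-bit light-sensor readings (0-1023)."""
    events = []
    above = False
    for s in samples:
        if s >= threshold and not above:
            # Scale brightness into the 0-127 MIDI velocity range.
            velocity = min(127, s * 127 // 1023)
            events.append((NOTE_ON, note, velocity))
        above = s >= threshold
    return events

# Two pulses of light produce two note-on events of differing loudness:
print(light_to_midi([0, 120, 640, 900, 200, 850]))
```

Triggering only on rising crossings means a sustained bright signal produces one event rather than a burst, which is the usual convention for gate-style MIDI triggers.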
Magical Pencil is a video game that tells the story of a deserter who owns a magical pencil: whatever is drawn with it becomes real. In the game, players can create whatever they want by drawing by hand to help the character return home. The game recognizes what the player is drawing and spawns a corresponding item in the game world, with the appearance of the player's doodle, for the player to interact with.
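The recognize-then-spawn loop might be sketched as follows; the classifier is stubbed out (a real game would use a sketch-recognition model, for instance one trained on data like Google's Quick, Draw! dataset), and the item table is a hypothetical example:

```python
from dataclasses import dataclass

# Hypothetical table of recognizable items and their in-game behaviors.
SPAWNABLE = {"ladder": "climbable", "motorcycle": "rideable", "key": "usable"}

@dataclass
class GameItem:
    label: str
    behavior: str
    texture: bytes  # the player's raw doodle pixels, used as the item's skin

def classify_doodle(pixels: bytes) -> str:
    """Stub: a real sketch-recognition model would return the top label."""
    return "motorcycle"

def spawn_from_doodle(pixels: bytes):
    label = classify_doodle(pixels)
    if label not in SPAWNABLE:
        return None  # unrecognized doodles spawn nothing
    # The doodle itself becomes the spawned item's appearance.
    return GameItem(label, SPAWNABLE[label], texture=pixels)

item = spawn_from_doodle(b"...doodle pixels...")
print(item.label, item.behavior)
```

Keeping the doodle bytes as the item's texture is what preserves the hand-drawn look of each spawned object.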
We all remember how cool it was when Neo in The Matrix said, "Guns, lots of guns." So what if we could get whatever we need in a video game just as simply as that? The purpose of the game then shifts from managing to obtain an item to figuring out which item is one of the solutions.
Moreover, hand-drawing recognition in the game is a huge "wow" moment, powered by the magic of machine learning. Riding a motorcycle that was drawn by the player is another level of mind-blowing.
Widowhood is an invisible state in today's society. "Widow" is an interactive textile sculpture that exposes the wounds, pain, and emotions that embody my experience as a widow. With this installation, I aim to reach others in the hope that they become aware of all the devastating losses that come after the death of a spouse.
Historically, we widowed women have been portrayed wearing long black dresses, a distinctive image that summons the iconographic fashion of the Victorian era, in which mourning rites were strict and remarkably complex, following the example of Queen Victoria after the death of her husband, Prince Albert. Victorian widows endured this burden for four years. The fashion comprised heavy black clothes with thick veils of crepe, and hats equally black and dense that lacked any kind of decoration. This mourning clothing, known as "widow's weeds", distinguished grieving widows from the rest of society. It was a visible indication of their pain at the death of their spouses, with black, signifying the absence of light, representing the spiritual seclusion of the mourning woman.
This textile sculpture references the "widow's weeds" and the social implications they represent. In a close examination of this costume, the structure underneath those dresses, or "cage skirts", is the tangible metaphor I used to recreate and inhabit my own twisted cage skirt. It is a meditative space where I invite the public to take part and to reflect on the journey and the humiliation I experienced as a young widow. This installation represents my fight against the stigma that is deeply rooted in widow's fashion and the other social roles that women were expected to follow when they lost their husbands.
Solus is a series of smart devices that highlights our sense of smell so that people who feel alone can find a state of solitude and joy. How can we transform the experience of this feeling of loneliness into solitude, a state of being alone without being lonely, by exploring our senses and mental states?
Solus explores how to regulate our physiological and emotional equilibrium through sensory experiences. The project chooses scent as a method to shift the brain from feeling lonely to embracing solitude and a strong sense of joy.
The aim is for three different smart devices to disrupt this perspective of aloneness by engaging the most pervasive, yet often forgotten, human sense: smell.
The three devices are Scent Notification, Scent Clock, and Scent Speaker. Each device was designed to create a multi-sensory experience that impacts mood and memory to transform space in meaningful ways. The scents were designed through the research process to help us feel at peace in our own company and achieve the state of solitude.
Drawing has always been a way for me to process my thoughts. As technology continues to advance and our daily lives become increasingly digital, I now realize the value drawing offers me: the freedom and the time for self-reflection. How can drawing help us "see" ourselves better and empower us through self-discovery? I want to design a drawing experience that allows people to capture their own essence: their inner thoughts, voices, emotions, personalities, and quirks.
The Drawing Booth resembles a photo booth, but instead of taking photos, visitors capture themselves through drawing, by recording and revealing their drawing process. Upon entering the Drawing Booth, the visitor is guided to randomly select a prompt, for example, "draw something that makes you smile no matter what". A webcam placed above the drawing paper records the drawing process. Once the visitor is done drawing, a time-lapse video showing the visitor's characteristic process of drawing is generated as a "portrait" of the visitor.
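The time-lapse itself amounts to frame decimation: keep every Nth frame so a long session plays back in a fixed clip length. A back-of-the-envelope sketch (the 30 fps rate and 30-second target clip are assumptions, not the booth's documented settings):

```python
def timelapse_stride(session_seconds, fps=30, target_seconds=30):
    """Return the frame-sampling stride needed to compress a session
    of `session_seconds` into a clip of `target_seconds` at `fps`."""
    total_frames = session_seconds * fps
    target_frames = target_seconds * fps
    return max(1, total_frames // target_frames)  # never drop below 1

def sample_frames(frames, stride):
    """Keep every `stride`-th frame of the recording."""
    return frames[::stride]

# A 10-minute drawing session compressed into a ~30-second "portrait":
stride = timelapse_stride(session_seconds=600)
frames = list(range(600 * 30))          # stand-ins for captured frames
kept = sample_frames(frames, stride)
print(stride, len(kept))                # stride 20 keeps 900 of 18000 frames
```

Sessions shorter than the target clip get a stride of 1, so nothing is dropped and short drawings simply play at real speed.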
Accessibility is at the heart of experience design, because a design isn't usable if it isn't accessible. For a sighted designer, it's an opportunity to improve the design process, since understanding users' needs requires listening to them and working alongside them to design something together.
This particular accessibility use case came from an effort to make the Physical Computing coursework more accessible, since the schematics on the class's site are images. While sighted learners rely on schematic images, blind and low vision learners rely on circuit descriptions to understand how electronics work. No graphical representation has yet been able to compete with circuit descriptions. This pain point became the focal point of the subsequent design research.
Using participatory and human-centered design with five blind and low vision participants, under NYU's Institutional Review Board (IRB), a set of design standards and best practices was developed to illustrate how to design a readable tactile schematic. These standards were then applied to the 50+ schematics from the Physical Computing site. The standards and best practices, along with the book of tactile schematics, were made available to the public for download.
Undercover is an autobiographical performance that weaves vignettes of memory, music and pop culture imagery into a narrative that explores misogyny, gender, and identity from the perspective of a transman.
As a transgender man, I've realized that there are things that men say to each other in the absence of women and things women say to each other in the absence of men. In fact, there are many changes I've noticed, particularly in the way that people treat and perceive me after transitioning. I now find myself in the predicament of being held accountable for a social history of misogyny that I've lived on the receiving end of for many years prior. Undercover shares this perspective through a 5-minute show.
The viewer doesn't know any of this backstory when they enter the space. During the piece, the viewer listens to and watches a narrative that uses sound, light, images, and a two-way mirror, with an element of surprise at the end. It takes place inside a curtained-off, dark space, similar to a photo or fortune-telling booth, where the viewer enters, sits down on a chair in front of a black wall that frames a 12" x 12" mirror, and puts on headphones. The narration begins with the viewer looking at their own reflection. A minute in, the lights gradually dim, and images relating to the audio are projected on a screen on the other side of the mirror. In the last 45 seconds, the images disappear and the light inside the space changes to reveal the narrator (me) sitting on the other side of the mirror. Putting myself in the space with the viewer on the other side of the divider, and making eye contact through the two-way mirror during the last moments of the show, breaks down the fourth wall and further humanizes the story. I understand that this may be a bit uncomfortable, and that is by design.
This is a series of avatar-based Augmented Reality body sculptures: surreal scenes from everyday life in an imaginary future where the roles of human and machine are reversed. Combining social commentary and satire, the project reflects on the alienation of individuals, both from one another and from their surrounding environments, due to the increasing complexity and capability of technology in the age of the algorithm.
"Daily Dividuals" includes a series of Augmented Reality (AR) sculptures, presented in short movie clips to provide context. To reflect on its core concept, human alienation in the digital age, the project layers this theme and presents, through an AR lens, the imaginary transformation of the human body as it merges with and functions as everyday objects.
The project contains two experiments with three examples each. The student scanned her body to create a hyper-flexible avatar model and used it as the key material to construct the project's scenes. The first experiment creatively associates body avatars with everyday objects in shape and function, and explores the possibility and flexibility of duplicating and modifying avatar models in an AR environment.
The second experiment is a continuation of the first. It purposely reverses the conventional ways a human interacts with common devices, presenting an unorthodox perspective from which to reexamine human alienation. The second experiment also involves more interactivity: the audience can explore layers of the AR sculptures through various actions. The "seeing" and the "being seen" together fulfill the concept of this experience.
Playing with dystopian connotations through the jarring and surreal images, the project prompts viewers to pause and consider the phenomena of estrangement observed in our evolving relationship to and dependence on machines.
When the public becomes insensitive to being treated as measurable data and samples; when people are used to seeing only highly abstract representations of the complex digital world; when intensive, repetitive human labor is constantly fed into the artificial intelligence industry, are we still dominating the machines, or are we being dominated? We strive to make machines more like us, but it may be that we are becoming more like machines. In a way, we are meeting the machines in the middle.