“The Tuning House” is a voice-activated light and sound installation where you use your voice to choose a color and sound palette. The frequency of the participant’s voice is translated into a synthesized pitch and color, creating a personalized expression of chromo-synesthesia. Isolating color and sound pairings in this way highlights a simplified and subjective relationship between the frequencies that are present all around us.
The idea behind this project is to customize sounds and related colors using something personal to the participant as an input. For this installation, the personalized input is the voice. It encourages humming, toning and actively using the voice to customize pitches and related colors. As the participant's voice is synthesized into minimalist color, light, and sonic tones, a feedback loop is created that can continue or be frozen to create a chord or meditative drone. This interaction can create joy out of thin air and/or quieten the mind and thoughts depending on the way the participant chooses to interact with it.
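As a rough illustration of the kind of mapping involved, here is a minimal sketch in Python that folds a detected vocal frequency onto a hue wheel and quantizes it to a synthesized pitch. The 440 Hz reference and the octave-to-hue mapping are assumptions for illustration, not the installation’s actual tuning.

```python
import math

# Assumed reference pitch; the installation's real calibration is not specified.
REFERENCE_HZ = 440.0

def voice_to_hue(freq_hz):
    """Map a detected vocal frequency to a hue in [0, 360).

    Frequencies an octave apart land on the same hue, so the whole
    vocal range wraps onto one color wheel.
    """
    octave_pos = math.log2(freq_hz / REFERENCE_HZ) % 1.0
    return octave_pos * 360.0

def voice_to_midi(freq_hz):
    """Quantize the voice to the nearest MIDI note for the synthesized tone."""
    return round(69 + 12 * math.log2(freq_hz / REFERENCE_HZ))
```

Freezing a tone then amounts to holding the last quantized pitch and its hue while the participant keeps humming new ones.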
This installation, part of my thesis project, consists of an automatic dice roller: reggaeton sounds shake a container mounted on a speaker, making the dice dance. A computer vision system then captures the result, stores the sequence of rolls, and creates a “digital painting” from the resulting numbers. Viewers can also see how many times the dice have landed on each of the six faces and compare these counts to the visual result on the projection.
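The per-face tally shown beside the projection could be sketched as follows, assuming the computer vision system yields a sequence of recognized die faces (the detection step itself is not shown here):

```python
from collections import Counter

def tally_rolls(detected_faces):
    """Accumulate a sequence of vision-detected die faces (1-6)
    into per-face counts, as displayed beside the projection."""
    # Start every face at zero so unseen faces still appear in the display.
    counts = Counter({face: 0 for face in range(1, 7)})
    counts.update(face for face in detected_faces if 1 <= face <= 6)
    return dict(counts)
```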
Breathe We Live is an interactive installation that reveals the invisible connection between humans and the natural environment by shedding light on the act of breathing. The invisible exchange of oxygen is brought into a projection of the natural world and human connection. Breathing sensors track the user’s breathing frequency in real time, and the interaction with the plants helps the user meditate and slow their breathing. The goal of this installation is to encourage us to rethink our connection and responsibility toward nature.

The installation invites people to pay close attention to their bodies through a session of breathing meditation practice. I created a meditative space, with embedded interactive components, that enables an intimate experience between viewers and the natural environment. Only one or two people can enter the space at a time. The installation uses human breath as an input to produce corresponding visuals in real time. A Kinect camera mounted inside the space detects the user’s movement to provide a proxy feedback loop for breathing: when the user is moving and breathing too fast, the visuals and music slow down, cueing the user to breathe more slowly.

According to my research on meditation, walking through nature is one of the best ways to relieve stress. I wanted to create a meditative space for people to understand their breathing and their close relationship to nature. I hope the experience helps users reflect on their relationship with the current state of nature.
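The inverse feedback described above (slower visuals and music in response to faster breathing) could be sketched like this; the six-breaths-per-minute target and the clamping range are invented for illustration, not the installation’s actual parameters:

```python
def feedback_tempo(breath_rate_bpm, target_bpm=6.0):
    """Hypothetical inverse feedback: the faster the user breathes,
    the slower the visuals and music run, nudging them toward the target.

    Returns a playback-rate multiplier clamped to [0.5, 1.0].
    """
    rate = target_bpm / max(breath_rate_bpm, target_bpm)
    return max(0.5, rate)
```

At or below the target rate the scene runs at normal speed; above it, the scene slows proportionally until it bottoms out at half speed.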
Sound Playground consists of a set of objects that emit sound upon being moved, touched, placed next to other objects, or activated by outside sounds. The system is semi-autonomous – the objects have a degree of agency. They can emit sound when they want to, sometimes triggering one another’s sonic behaviors. They can decide when they want to react to your tactile or sonic input and when to refuse cooperation by going into sleep mode. They can decide which sonic behaviors to exhibit at a given time. Finally, they can decide when to record small snippets of environmental sound in order to insert them into an ongoing composition, and when to play this composition back into the world.
Due to the ever-shifting behaviors of this free-willed, punky Sound Playground, your experience with it will never be the same. However, you may be able to notice patterns as you get to know the personality of each sound object and as you observe the social bonds that these objects form with one another. While you, as the human, are gently decentered, you can impress your sounds and interactions onto the playground’s memory, shaping some of its current and future behaviors. I call this collaboration.
The goal of this project is to create a meditative environment where you can focus on tactile sensations and create sounds with an opaque, autonomous ecosystem by observing and discovering its properties. It is also an opportunity to be with technology that demands respect and that respects your opacity in return. Neither party is expected to be fully knowable in order for collaboration to happen. This experiment aims to examine the effects of such encounters.
The experience is for those who might enjoy an anti-Alexa in their life: those who might want to rethink their relationship with technology and other non-human ecosystems, and to question our privileges and limitations as humans.
Future’s Market is a store of tomorrow: a predicted intervention into predictive systems, a performance of ubiquitous surveillance infrastructure, or a look at a world where walls and wires and bank accounts heave and palpitate with a million unhidden eyes.
Future’s Market is the performance of a real store. Practically, it operates as a kiosk facilitated by Jim Future, its main proprietor, played by Alden. They sell a variety of speculative services meant to be interventions into the predictive economy of tomorrow, which, thanks to ubiquitous IoT sensing technologies, has become a near-perfect loop between surveillance, prediction, and behavior modification. Customers are able to buy new personalities, edit their biometric profiles, bottle up emotions to save for later, have their trash examined and scored, or get their identity scrambled. Each of these services is played not so much as a subversion of this new economy but as an unexpected extension of it, a disruption that plays by the same rules, a way of gaming the system by taking it at face value. If someone’s Google search history can be used to infer their personality, why couldn’t their personality be changed by targeted searching?
Jim is played as the type of man who’s never been told “no” by life, yet whose greatest ambition as a small business owner is to one day pack up shop and go on a never-ending Carnival Cruise. A little outlandish and stylish, but only so far as to drum up new business; inside he’s just an everyman trying to get by without his own mediocrity getting in the way. He isn’t really cut out for this world, but then, who is?
Through an educational learning hub and an interactive playground, both hosted on the same web app, Paralang aims to cultivate literacy, inspire curiosity, and arouse concern with respect to emerging neural language models.
Recently released, state-of-the-art language models have been shown to produce text that is nearly indistinguishable from text written by humans.
These recent advances, which have proved plenty controversial within machine learning circles, have caused ripples in the general media landscape as well, where coverage has been largely hyperbolic, excessive, and occasionally uninformed or even incorrect.
With the belief that this natural language generation technology, more than mere novelty, will gradually assume an ever more pervasive role in our everyday lives, I wanted to intervene, however modestly, and provide an accessible, beginner-friendly platform to help demystify this technology and elaborate on some of its inner workings as well as its repercussions both for us as individuals and as a society. I’d like to help answer questions like: what makes these recent advances so compelling and new? Or: how might existing societal problems be reproduced and reinforced by these advanced language models?
Ultimately, my aim is to help cultivate a more level-headed literacy as well as inspire both a sense of informed curiosity and concern with respect to these emerging models and their ramifications, with an emphasis on the recent state of the art (particularly Google’s BERT and OpenAI’s GPT-2).
The platform consists of two components, both hosted on a single web app. One is educational, revolving around a learning hub, glossary, and resources curated for all skill levels: newcomer, intermediate, and advanced. The other is interactive, comprising a “playground” that encourages hands-on experimentation with some of the language models featured in the educational component.
Altogether, the platform is built to accommodate non-linear engagement: users can begin with the learning hub and progress to the playground, jump straight to the playground, or simply skip around between the glossary and resources.
My thesis project, Let’s Read A Story, is a speculative exploration of how computers and technology can turn story time into a conversation between parents, children, and a computer. Human gestures (speaking, drawing, typing) allow the reader to participate in a new form of conversation on a smart device (a tablet or smart speaker) that yields a surprising new story.
Technology and smart devices are ever present in children’s everyday lives and development. Let’s Read A Story investigates the possibilities of enhancing storytime for children with a machine-learning-based program that engages a child’s creativity, imagination, and inventiveness. The program is intended to be an activity shared by parent and child.
The project addresses the following questions:
1. Can technology augment how parents read and tell stories to children?
2. Can a child interact with a piece of technology to create a meaningful connection through literature, sound and visual art?
Using recently developed machine learning techniques, various corpora of children’s literature have been analyzed in order to build a conversational platform that allows the reader to navigate different narrative bits and pieces and weave a new, original, immersive story of their own. The core of the experience is generated text that responds to a human gesture (e.g., speaking or drawing). The text is the first layer, carrying the plot forward; on top of it, layers of generative sound and illustration, driven by various predictive models, are formed to enhance the text.
Lastly, the reader can change and bend the story as it progresses, drawing illustrations of their own and changing lines of text as their heart desires.
I have designed an experiment that aims to help individuals better consider other people’s feelings and thoughts using interactive 360 video. I want the audience to grapple with the question “Who is right?” as they explore the project. The user will acquire different perceptions of one specific situation I designed in my story, and can discover how hard it is to put themselves in someone else’s shoes while trying to answer that main question.
The project is an interactive 360/VR art piece. My objective is to design an experiment that helps individuals think more about other people’s feelings and thoughts. Not only does it help people experience other people’s perspectives, it also gets them to think about the reasons that make people’s perceptions differ. The thesis takes a multidisciplinary approach to research on the human mind by combining concepts from psychology, communication, and interactive storytelling. From psychology, top-down and bottom-up approaches are used to get the project’s message across; these two strategies of information processing and knowledge ordering, widely used in scientific and humanistic research, are both utilized in this project. My main question is “Who is right?”. The user will acquire different perceptions of one specific situation in my story, which concerns people’s romantic relationships. I shot videos based on my story; the user puts on a headset and sees two scenes, each in two versions, so depending on her interactions she will watch parts of four videos.
My thesis project “Blind Date” is an interactive story with game elements in the format of a phone-based website. It features a 28-year-old single Chinese woman’s struggles with enormous pressure to get married early. It is a reflection on and critique of the conventions surrounding marriage in contemporary Chinese society.
“Blind Date” is a storytelling project about marriage in the format of a web-based interactive story. The user plays the role of a 28-year-old single Chinese woman who is being pressured by her friends and family to get married.
There are three chapters in this game. The first features the protagonist’s conversation with her best friend, Lorna, about the wedding of her younger friend, Cici. During the conversation, Lorna encourages the protagonist to date someone and warns her about the danger of becoming a “leftover woman.” In chapter two, the protagonist has a WeChat conversation with her mother in which marriage is brought up. The protagonist’s resistance to marriage angers her mother and leads to a fight between them. The protagonist compromises in the end and agrees to go on blind dates with several different men, which introduces the third chapter of the game.
In the first two chapters of the game, the user clicks through different buttons and graphics to follow the story. In the third chapter, the user makes decisions before going on blind dates, and the choices they make will change the ending of the whole story.
The user will be able to feel the increasing pressure the protagonist feels as the story goes on, and hopefully they can take some insights away after experiencing this story.
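The branching in chapter three can be thought of as a simple mapping from the player’s decisions to an ending. The choice names and ending labels below are invented placeholders for illustration, not the game’s actual content:

```python
# Hypothetical outcome table: each sequence of blind-date decisions
# selects one of several endings. The real story's options differ.
ENDINGS = {
    ("refuse", "refuse"): "independent",
    ("refuse", "accept"): "reluctant",
    ("accept", "refuse"): "reluctant",
    ("accept", "accept"): "married",
}

def resolve_ending(choices):
    """Map the player's sequence of blind-date decisions to an ending."""
    return ENDINGS[tuple(choices)]
```

A lookup table like this keeps every branch explicit, which makes it easy to verify that each combination of choices leads somewhere.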
How light interacts with surfaces, lenses and our eyes is fundamental to how visual arts are created and perceived. Despite this importance, education around basic optical principles tends to employ a science-first approach which may not resonate within an artistic community. This installation attempts to bridge that gap by encouraging audience members to holistically engage with optics and the phenomenon of refraction.
This installation consists of a series of engagements with playful and impractical lenses. A custom software tool distorts images such that they can only be seen through these lenses (a process known as anamorphosis). In the first such engagement, audience members are invited to draw on a digital canvas while looking through one such lens. They are then able to view the results of their work with and without the lens. In the second engagement, audience members encounter a large, amorphous video projection. They later realize that the imagery can be decoded through a viewer mounted within the space. These experiments aim to inspire audience members’ curiosity about the behavior of light.
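The pre-distortion idea (warping the canvas so that the lens undoes the warp) might be sketched with a simple radial model. The first-order lens model and its coefficient are assumptions for illustration, not the installation’s actual custom software:

```python
def predistort(x, y, k=0.3):
    """Hypothetical radial pre-distortion: push a canvas point outward so
    that a lens with an assumed barrel-distortion coefficient k bends it
    back to (x, y). Coordinates are normalized to [-1, 1]."""
    r2 = x * x + y * y
    scale = 1 + k * r2  # stronger displacement toward the image edges
    return x * scale, y * scale
```

Applying this to every pixel of an image produces the amorphous picture described above, legible only through the matching lens.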
As an artist’s understanding of foundational optical principles grows, their palette is expanded to allow aesthetic exploration and play using these elements. This installation aims to reduce technical barriers to entry and inspire artists to incorporate creative custom optics into their practice.