Audance invites users to create beautiful, expressive music together over a networked connection. Using the computer's keyboard, mouse, and webcam as musical controllers, each click or keystroke and its corresponding sound are broadcast over the internet, allowing participants to play together in real time, regardless of their proximity to each other.
Each keypress triggers a different note, either playing it once, holding it, or retriggering it on a loop. With the mouse, you can control filters on the keyboard synthesizer or click and drag to play an additional instrument. If selected, the webcam can also be used as a musical controller, either as an additional filter on the keyboard synthesizer or as an instrument on its own.
With sound synthesized in Tone.js and visuals animated in p5.js, Audance provides rich visual feedback for each musical event, allowing users to find their creative flow regardless of their level of musical experience.
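The keypress-to-note behavior described above can be sketched in plain JavaScript. This is a hypothetical illustration, not Audance's actual implementation: the key row, base pitch, scale, and event shape are all assumptions. In the real project, an event like this would be played through Tone.js and broadcast to other participants.

```javascript
// Hypothetical sketch: map one keyboard row to an equal-tempered major
// scale. Key names, the base pitch, and the mode set are assumptions.
const KEY_ROW = ['a', 's', 'd', 'f', 'g', 'h', 'j', 'k'];
const BASE_MIDI = 60; // middle C
const MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11, 12]; // semitone offsets

// Convert a keypress into a frequency in Hz (equal temperament, A4 = 440).
function keyToFrequency(key) {
  const i = KEY_ROW.indexOf(key);
  if (i === -1) return null; // not a note key
  const midi = BASE_MIDI + MAJOR_SCALE[i];
  return 440 * Math.pow(2, (midi - 69) / 12);
}

// Package a note event that could be synthesized locally (e.g. with
// Tone.js) and broadcast to other participants over a socket.
function makeNoteEvent(key, mode /* 'once' | 'hold' | 'loop' */) {
  const freq = keyToFrequency(key);
  return freq === null ? null : { freq, mode, time: Date.now() };
}
```

Because every client receives the same compact event (frequency, mode, timestamp), each participant's machine can resynthesize the note locally rather than streaming audio.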
This project aims to create a space where the digital world meets the real world, bleeds into it, and overlaps with it. Building on existing technologies and on work by artists in the field of projection mapping, I would like to explore how we can bridge from traditional video art into reality (real places, physical objects), and what roles the audience can play in connecting the two.
“The Tuning House” is a voice-activated light and sound installation where you use your voice to choose a color and sound palette. The frequency of the participant’s voice is translated into a synthesized pitch and color, creating a personalized expression of chromo-synesthesia. Isolating color and sound pairings in this way highlights a simplified and subjective relationship between frequencies which are present all around us.
The idea behind this project is to customize sounds and related colors using something personal to the participant as an input. For this installation, the personalized input is the voice. It encourages humming, toning and actively using the voice to customize pitches and related colors. As the participant's voice is synthesized into minimalist color, light, and sonic tones, a feedback loop is created that can continue or be frozen to create a chord or meditative drone. This interaction can create joy out of thin air and/or quieten the mind and thoughts depending on the way the participant chooses to interact with it.
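One way the frequency-to-color pairing described above could work is to wrap each octave of pitch once around the hue circle, so that a voice an octave higher lands on the same color. This is a minimal sketch of such a chromo-synesthetic mapping; the reference pitch and octave-wrapping scheme are assumptions, not the installation's actual palette.

```javascript
// Hypothetical mapping: position within the octave (a logarithmic,
// perceptual pitch scale) becomes a hue in degrees. A4 = 440 Hz is an
// assumed reference; octave-equivalent pitches share a hue.
function frequencyToHue(freq) {
  const octavePos = Math.log2(freq / 440) % 1; // fractional octave offset
  const wrapped = (octavePos + 1) % 1;         // normalize into [0, 1)
  return wrapped * 360;                        // hue on the color wheel
}
```

The resulting hue could drive the installation's lights while the detected frequency is fed to a synthesizer, closing the voice-to-light-and-sound feedback loop.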
Sound Playground consists of a set of objects that emit sound upon being moved, touched, placed next to other objects, and activated by outside sounds. The system is semi-autonomous – the objects have a degree of agency. They can emit sound when they want to, sometimes triggering one another’s sonic behaviors. They can decide when they want to react to your tactile or sonic input and when to refuse cooperation by going into sleeping mode. They can decide which sonic behaviors to exhibit at a given time. Finally, they can decide when to record small snippets of environmental sound in order to insert them into an ongoing composition and when to play this composition back into the world.
Due to the ever-shifting behaviors of this free-willed, punky Sound Playground, your experience with it will never be the same. However, you may be able to notice patterns as you get to know the personality of each sound object and as you observe the socialities that these objects form with one another. While you, as the human, are gently decentralized, you can impress your sounds and interactions onto the playground’s memory, shaping some of its current and future behaviors. I call this collaboration.
The goal of this project is to create a meditative environment where you can focus on tactile sensations and create sounds with an opaque, autonomous ecosystem by observing and discovering its properties. It is also an opportunity to be with technology that demands respect and that respects your opacity in return. Neither party is expected to be fully knowable in order for collaboration to happen. This experiment aims to examine the effects of such encounters.
The experience is for those who might enjoy an anti-Alexa in their life. For those who might want to rethink their relationship with technology and other non-human ecosystems, question our privileges and limitations as humans.
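The kind of agency described above — responding for a while, then refusing cooperation by sleeping — can be modeled as a small state machine. This sketch is purely illustrative and is not the project's actual behavior logic; the energy thresholds and behavior names are invented for the example.

```javascript
// Hypothetical sketch of one sound object's agency: it responds to
// input while it has "energy", then withdraws into sleep until rest
// restores its willingness to cooperate. All thresholds are assumptions.
class SoundObject {
  constructor(energy = 3) {
    this.energy = energy;
    this.state = 'awake';
  }
  // A tactile or sonic input; returns a sonic behavior, or null when
  // the object refuses cooperation.
  poke() {
    if (this.state === 'sleeping') {
      this.energy += 1; // rest restores willingness
      if (this.energy >= 3) this.state = 'awake';
      return null;
    }
    this.energy -= 1;
    if (this.energy <= 0) this.state = 'sleeping';
    return this.energy % 2 === 0 ? 'chirp' : 'drone';
  }
}
```

Even a deterministic loop like this produces the experience the text describes: the object appears moody because its responses depend on a hidden internal state the visitor can only infer through interaction.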
How light interacts with surfaces, lenses and our eyes is fundamental to how visual arts are created and perceived. Despite this importance, education around basic optical principles tends to employ a science-first approach which may not resonate within an artistic community. This installation attempts to bridge that gap by encouraging audience members to holistically engage with optics and the phenomenon of refraction.
This installation consists of a series of engagements with playful and impractical lenses. A custom software tool distorts images such that they can only be seen through these lenses (a process known as anamorphosis). In the first such engagement, audience members are invited to draw on a digital canvas while looking through one such lens. They are then able to view the results of their work with and without the lens. In the second engagement, audience members encounter a large, amorphous video projection. They later realize that the imagery can be decoded through a viewer mounted within the space. These experiments aim to inspire audience members’ curiosity about the behavior of light.
As an artist’s understanding of foundational optical principles grows, their palette is expanded to allow aesthetic exploration and play using these elements. This installation aims to reduce technical barriers to entry and inspire artists to incorporate creative custom optics into their practice.
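At its core, the anamorphic distortion tool described above has to pre-warp the image by the inverse of the lens's transformation, so that looking through the lens undoes the warp. This is a minimal sketch assuming an idealized lens that simply magnifies radially by a factor k; a real lens would need a measured distortion profile, and the function here is hypothetical.

```javascript
// Hypothetical anamorphic pre-distortion: to cancel an idealized lens
// that maps each radius r to k * r around its center, pre-warp every
// source coordinate by the inverse mapping r -> r / k.
function prewarp(x, y, cx, cy, k) {
  const dx = x - cx, dy = y - cy;
  const r = Math.hypot(dx, dy);
  if (r === 0) return { x: cx, y: cy }; // center maps to itself
  const rPre = r / k;      // inverse of the lens's radial magnification
  const scale = rPre / r;
  return { x: cx + dx * scale, y: cy + dy * scale };
}
```

Applying this per pixel yields an image that looks amorphous to the naked eye but resolves correctly when the lens reapplies its own r → k·r magnification.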
LIME is a patchable device that provides the user with a platform for using light as a control medium for musical expression. Through a combination of lighting, sensors, fiber optic patch cables, and processing units, the system allows the user to design cable flows which generate and modulate light patterns before translating the patterned light into musical events interpreted by computer-based virtual instruments and effects.
Over the past 80 years, electronics, especially computers, have had a massive impact upon the ways that music is created. In more recent years, technologies have introduced incredible new capabilities; however, these have often come at the cost of increased complexity and a growing level of abstraction between the sounds that are made and the signals used to create them. This phenomenon has created some disconnect between performers and the audience, as the connection between gestures on stage and the sounds that are heard becomes increasingly dissociated. A prime example of this can be seen in modular synthesizers, where even the player can become confused by the intricate, yet abstract, programming of the instruments, especially on stage.
But what if audio signals could be made visible while they’re communicating? Could patch cables that expose the underlying signal patterns improve one’s comprehension of their compositions or, perhaps, enhance the experience of composing as well as viewing? LIME responds to these questions by offering a semi-modular system for using light as a control medium in musical expression. Where modular synthesizers generate and modulate raw audio, LIME operates in the computer space using the MIDI protocol. Through a combination of lighting, sensors, fiber optic patch cables, and processing units, the system allows the user to design signal flows made of light patterns which are then translated into sound.

The patch cables provide real-time views into the signal flows as they pass between different modules. The pulsing and breathing light visualizes the signal flow in a physical form. For a composer or performer, this visual component allows for quickly understanding how a signal is affected by different modules and how it fits into a larger musical piece.

Another benefit comes in the form of reduced error and easier troubleshooting. While computer software allows for internal signal routing, it can quickly lead to extremely complex scenarios and abundant confusion. The physical interaction of ‘patching’ from point to point provides a more tangible understanding of the connections as they are formed. Unfortunately, as patches grow, they too can become a chaotic mess of wires with little ability to quickly discern the meaning of individual connections. With LIME, signal generators, processors, and inputs are combinable to compose sounds and rhythms while simultaneously making visible the path taken to achieve them.
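The final step in LIME's chain — translating patterned light into MIDI events — can be sketched as a simple quantization. This is an illustrative assumption, not LIME's actual firmware: the sensor range is a guess, and the function name is invented for the example.

```javascript
// Hypothetical sketch of light-to-MIDI translation: a raw light-sensor
// reading (assumed 10-bit, 0-1023, as from a typical ADC) is clamped,
// normalized, and quantized into a 7-bit MIDI value.
function lightToMidiCC(reading, minLux = 0, maxLux = 1023) {
  const clamped = Math.min(Math.max(reading, minLux), maxLux);
  const norm = (clamped - minLux) / (maxLux - minLux);
  return Math.round(norm * 127); // MIDI control values span 0-127
}
```

A value produced this way could drive a control-change message to a virtual instrument, so a pulsing light pattern in a fiber optic cable becomes an audible modulation — the visible signal and the heard signal are one and the same.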