Over the weekend, I spent some time on a first experiment in creating a visual avatar. The more I worked with this sketch, the more I found that its way of listening restricted the user experience (the voiced sound was routed directly through headphones). I did not spend a huge amount of time continuing with this first experiment, because I felt there was a part of the topic I was still missing: how we listen. So I set this exploration aside and looked for threads to continue, through more interviews with a newly met public speaking coach and a sound engineer friend I had just gotten back in touch with. The ideas of voice commands, voice signification, and even connecting voice and music seemed to resonate more with them. It is intuitive for users to ask whether they can use their voice to create something or exchange something; more importantly, there is an expectation of experiencing a voiced sound in a different way, so to speak. I will articulate this further later this week and continue with more interviews.

Just two days ago, after my four interviews, I messaged Rudi to get his perspective on the implementation and the possible forms of my project. Across several messages, he brought up the idea of working with Max/MSP and sensors. Both options gave me ideas about space, interaction, and performance, and let me start thinking about how an audience can perceive their own voiced sound; I also started looking into this area of practice. A few examples I found interesting are Industrial Instruments, Yamaha Design’s pulse at Milan Design Week 2019, and various works in this link1 and link2.

Just today, I found that Multi-channel audio as creative space: Inside Max 8’s MC | Loop gave me a great sense of how a voiced sound can be heard with textures that are not sensed in a recording or a conference call by any digital means: something more complex that our voice can generate in a face-to-face hearing experience. MC is Max 8’s multi-channel system; it spawns a huge number of sound threads that combine into a final sound. The final effect may sound like a vocoder from a distance, but there are nuances in pitch, in temperament possibilities, and in the way a sound is activated. The relevant passages are in part one (11:30-17:55) and part two (17:55-23:05). Once I realized this multi-channel system was available, I started to explore whether I could feed my voice into it. I eventually took some time to figure out how to make the patch connections for a realtime audio input from my microphone, in order to hook into the independent-note demo MC system (the robotvoiceclean MC patch example). It turned out to be my lucky day, with some results below (my first day of using Max!):
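To make sense for myself of what the MC wrapper is doing, I tried sketching my rough mental model of it outside Max, in Python. This is only a guess at the idea, not the actual MC implementation: many slightly detuned, slightly delayed copies of one voice summed into a thicker texture. The file name voice.wav is just a placeholder for a mono recording of my voice:

```python
# A rough sketch (not Max, not the real MC internals) of the idea I took
# away from the MC demo: many slightly varied copies of one voice, summed.
# "voice.wav" is a placeholder for a mono microphone recording.
import numpy as np
from scipy.io import wavfile

N_VOICES = 16  # the MC-style "channel count"

rate, voice = wavfile.read("voice.wav")
voice = voice.astype(np.float64)
if voice.ndim > 1:
    voice = voice.mean(axis=1)  # fold stereo to mono

out = np.zeros_like(voice)
rng = np.random.default_rng(0)
for _ in range(N_VOICES):
    # Small random detune: resample this copy by a factor near 1.0.
    factor = 1.0 + rng.uniform(-0.01, 0.01)
    positions = np.arange(len(voice)) * factor
    copy = np.interp(positions, np.arange(len(voice)), voice, right=0.0)
    # Small random onset delay (0-20 ms) so the copies smear in time.
    delay = int(rng.integers(0, int(0.02 * rate)))
    out[delay:] += copy[: len(out) - delay]

out /= N_VOICES  # keep the sum from clipping
peak = np.abs(out).max() or 1.0
wavfile.write("voice_texture.wav", rate, (out / peak * 32767).astype(np.int16))
```

Even this crude offline version hints at the vocoder-at-a-distance thickness I heard in the MC demo; the live realtime patch presumably does something far more refined per channel.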

Experiment 2: MAX MC
Experiment 2: MAX MC + microphone input

Please click the videos below!!!

Finally, I hope to use this as a prototype for a conversation with faculty and artists in tomorrow’s session. The good thing about this prototype is that Max “seems” to be a versatile platform for audio and even visuals. I hope to build on it and think about whether I can bring the multichannel idea into real life; along this track, it could possibly lead me to a form of spatial audio installation.
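As a first thought exercise for that installation track (again only a sketch, with a made-up eight-speaker ring rather than any real setup), I tried writing down how one voice could be panned around a circle of speakers with equal-power gains:

```python
# A back-of-the-envelope sketch of moving one voice around a ring of
# speakers with equal-power amplitude panning. Speaker count and layout
# are placeholders, not a real installation plan.
import numpy as np

N_SPEAKERS = 8
speaker_angles = np.linspace(0.0, 2 * np.pi, N_SPEAKERS, endpoint=False)

def speaker_gains(source_angle: float) -> np.ndarray:
    """Cosine-shaped gains: speakers facing the source get more level."""
    # Wrap each angular difference into [-pi, pi].
    diff = np.angle(np.exp(1j * (speaker_angles - source_angle)))
    gains = np.clip(np.cos(diff), 0.0, None)  # silence speakers behind the source
    return gains / np.sqrt(np.sum(gains**2))  # equal-power normalization

# Example: sweep the voice once around the room in eight steps.
for step in range(N_SPEAKERS):
    print(np.round(speaker_gains(step * 2 * np.pi / N_SPEAKERS), 2))
```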