I would like to use this post to comment on the progress I have made this week…

Meeting with Andrew

Last Friday, I met with Andrew Schneider to talk about making theatre with technology. We had a great conversation across a lot of topics, but the biggest takeaway for me was his point that it should never feel like “tech & performance” but rather just performance. He gave the example of a rock concert where a guitar chord is smashed, all of the lights explode, and everything works perfectly in unison to blow you away. Ideally, every element of the performance will be equally integrated and serve the overall production. I want to keep this in mind as I move forward and make choices about the tools I use. I also met with Craig again this week and was reminded to “take the path of least resistance” when crafting this performance. Everything should be focused on serving the performance.

Beginnings of the First Performance

As I began crafting what I am for now calling “The Breathing Room,” I realized that it may be difficult to use the smart home coordinator built into my Echo to control my Philips lights. I think this is because Philips runs its own cloud service that automatically processes all commands coming from an Alexa skill. I was pleased to discover that my Hue bulbs pair automatically with zigbee2mqtt and show up on my Mosquitto broker, so I am using that setup to begin building the performance.
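
As a quick sanity check that the zigbee2mqtt route works, a few lines of Node with the mqtt.js client are enough to toggle a bulb. The broker address and the friendly name “hue_bulb_1” below are placeholders for whatever my setup actually uses; zigbee2mqtt listens for JSON commands on each device’s .../set topic.

// toggle one Hue bulb through zigbee2mqtt (address and friendly name are placeholders)
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://raspberrypi.local:1883');

client.on('connect', () => {
  // zigbee2mqtt accepts commands on zigbee2mqtt/<friendly_name>/set
  client.publish('zigbee2mqtt/hue_bulb_1/set', JSON.stringify({ state: 'TOGGLE' }));
  client.end();
});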

For this current iteration, I have decided to use this flow instead of building one in the Amazon developer tools:

Microphone —> Locally Hosted Webpage Using p5 Sound —> MQTT Broker on Raspberry Pi —> Philips Hue via Zigbee

I thought it might be extra work to try to integrate everything into an Alexa skill. I think it is possible, but the major hurdle would be sending the raw microphone audio up to AWS IoT Core, which I am not completely sure how to do yet (or whether it has to go through AVS). For now, I am happy to use this flow and have a separate Alexa skill running just for Alexa’s voice output.
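
One wrinkle in this flow is that a browser page can’t speak raw MQTT, so the webpage has to talk to the Pi over MQTT-over-WebSockets, which means Mosquitto needs a websockets listener enabled. Assuming one on port 9001 (a placeholder for my config), the connection from the p5 page looks something like this, using the mqtt.js browser build:

// load mqtt.js in the page, e.g. <script src="https://unpkg.com/mqtt/dist/mqtt.min.js"></script>
// hostname and port are placeholders for wherever the Mosquitto websockets listener runs
const client = mqtt.connect('ws://raspberrypi.local:9001');

client.on('connect', () => console.log('connected to the broker'));

// helper the sketch can call to set an absolute brightness (zigbee2mqtt uses 0-254)
function setBrightness(level) {
  client.publish('zigbee2mqtt/hue_bulb_1/set', JSON.stringify({ brightness: level }));
}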

For the light effects, I came up with a framework I find very interesting for having them respond to breathing. Because my broker can only process MQTT messages so quickly, I decided to have the webpage determine when I am inhaling and when I am exhaling, and to command the lights to begin getting brighter or dimmer on those triggers rather than streaming continuous updates. I then had GarageBand running to route my microphone to my Echo’s output. I would like to find a way to do this in the webpage, but I was pleasantly surprised to find myself playing with feedback between my microphone and the Echo. By turning the gain knob on my microphone, I am able to manipulate the feedback levels in real time. It may have been because it was very late, but the effect was really exciting and eerie. Here is the GitHub repo for all of that code.
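
A rough sketch of that trigger logic, assuming the mqtt.js connection from above: the inhale/exhale detection here is just a single amplitude threshold (loud breath into the mic vs. quiet), which is cruder than what I actually want, and brightness_move is zigbee2mqtt’s way of telling a bulb to start ramping up or down until it gets another command. The threshold and ramp rate are placeholder numbers that will need tuning.

let mic, amp;
let exhaling = false;
const THRESHOLD = 0.05; // placeholder level separating breath noise from silence

// same broker connection as in the previous snippet
const client = mqtt.connect('ws://raspberrypi.local:9001');

function setup() {
  noCanvas();
  mic = new p5.AudioIn();
  mic.start();
  amp = new p5.Amplitude();
  amp.setInput(mic);
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio input starts
}

function draw() {
  const level = amp.getLevel();
  // publish only on the transition so the broker isn't flooded every frame
  if (!exhaling && level > THRESHOLD) {
    exhaling = true;
    // positive brightness_move starts ramping the bulb up
    client.publish('zigbee2mqtt/hue_bulb_1/set', JSON.stringify({ brightness_move: 40 }));
  } else if (exhaling && level < THRESHOLD) {
    exhaling = false;
    // negative value starts ramping it back down
    client.publish('zigbee2mqtt/hue_bulb_1/set', JSON.stringify({ brightness_move: -40 }));
  }
}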

In terms of the Alexa skill, I still have a lot to do on the writing. However, I was playing around with SSML and made Alexa do the craziest shit! By slowing the speech down to 20% and making her whisper, it produced the creepiest fucking voice I have ever heard come out of an Alexa. It sounded incredibly human, and my girlfriend called me from the living room to ask who I was listening to. When I told her it was Alexa, she also thought it was incredibly creepy. While this creepy, eerie feeling is not exactly what I was imagining, I do want to lean in and continue exploring it. Like I said, the writing for the SSML content needs a lot of work. I began with that question I keep returning to: “when was the last time you tasted a blueberry without thinking about anything other than tasting that blueberry,” and continued with the other questions I ask myself whenever I notice how weird it is that life centers around these devices. However, the speech distortion makes it difficult to follow the train of thought. I will have to do some tuning, because the sounds of the words end up accentuated over their meanings.
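
For reference, the markup producing that voice is something along these lines; the text here is just the blueberry question, and the exact structure of the skill’s responses is still very much in flux:

<speak>
  <amazon:effect name="whispered">
    <prosody rate="20%">
      When was the last time you tasted a blueberry
      without thinking about anything other than tasting that blueberry?
    </prosody>
  </amazon:effect>
</speak>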

My plan for this week is to keep developing “Performance 1” and to hopefully have a 5-minute prototype by next Thursday.