I would love to use this first journal/blog post to take stock of my thesis ideas and progress so far, as well as to lay out where I plan to look next.

Thesis Idea Origins

In the spring of 2020, I was part of the Brendan Bradley Integrative Technology Lab in the NYU Tisch Drama undergraduate theatre program. For part of the course we worked with Stewart Lane and Bonnie Comley, the founders of the theatre-streaming service BroadwayHD, which was receiving a lot of attention at the time given that it was the start of the pandemic. For the final portion of the class, we each had to present one innovative idea for incorporating technology into live performance to a panel of teachers and guests. My idea was to use smart home devices to enhance the experience of virtual audiences by adding practical effects into their homes (e.g., if I am streaming a production of Hamlet, when the ghost appears my lights start to flicker and my connected thermostat lowers the temperature by 20 degrees). I came up with this proof of concept for a show that would run exclusively on smart devices, using voice-over artists to tell the story of how each generation imprints its ideologies on its tech. At that point, I had no technical experience with which to actualize any of these ideas, but I was intrigued nonetheless.

Changes Over The Last Year

Flash forward to February 2021. I am still interested in using smart devices in live performance. I'm still unsure how I will do it all, but at least I have quite a bit more technical knowledge than I did a year ago. My reason for wanting to work on this has also changed significantly. Thanks to Concepts, Culture, and Critique and Radical Networks, as well as our visit from Surya Mattu and Lauren Kirchner, I am now more interested in telling stories about what makes smart devices so fraught. I believe that by casting these devices as characters in a performance (they are listening and reacting, after all), there are so many stories that can be told about privacy, emotional intimacy with technology, reality as perceived through these devices, and the persistence of our personalities on our unliving listeners. Sarah introduced those of us in Radical Networks to Lauren McCarthy's LAUREN, which I think is a fantastic exploration of these topics. As a constraint, I may want to focus any stories I produce on the Amazon Alexa suite of connected devices. From a lifelong family friend who works at Silicon Labs (owner of the Z-Wave protocol and a big player in the IoT industry), I know that Amazon holds a large share of the smart home device market and plans to release more Amazon-brand devices at much cheaper prices. Inspired by John Cayley's The Listeners, I think Amazon is a particularly interesting subject for performance.

Is This A Performance or A Tool?

One dilemma I am facing right now is whether to focus my thesis on making a specific performance or on making a tool that theatre-makers can use to create performances with connected devices as characters. I am leaning toward making a tool, because it will stretch me technically and will hopefully be something I continue making work with in the future. However, I worry that I may get bogged down in functionality to the point where I never practice making with it (I felt this happened a bit with Mindful Breathing in Connections Lab last semester). I'm sure Andrew will be able to help, considering his own ITP thesis; still, my hope is to do both at once, developing the tool while I make stories about the content I am interested in.

If This Is A Tool, What Will It Look Like? 

For this week’s 3-designs exercise, I considered what the interface of my tool might look like, inspired by three pieces of software I have either used or seen used in the past. One thing I need to look into is whether what I am trying to do is already possible with any of these programs (especially Isadora, which I know has TCP/IP functionality). Regardless, this exercise helped me clarify what technical aspects I will be dealing with when giving a user the ability to customize their connections over time. Overall, I realized that a tool like this would need to be able to describe the origin (e.g., external Kinect2 data via Kinectron), the connection from the origin to a performance coordinator (e.g., Kinectron sending computer vision data over an API), any rules for processing the origin’s data (e.g., a rule that uses the Kinect2 data to detect when a performer has clapped), the connection to the destination (e.g., publishing MQTT messages to a broker that translates them into Zigbee signals), and the destination itself (e.g., a Zigbee-controlled smart plug that toggles every time the performer claps). Here are my interface designs, followed by a sketch of how one of these connections might look as data:

#1: Inspired by Isadora
#2: Inspired by QLab
#3: Inspired by After Effects
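To make the five parts above concrete for myself, here is a rough, entirely hypothetical sketch of the clap-to-smart-plug connection as data. None of these field names, addresses, or device names exist yet; this is just how I currently picture what the tool would need to store:

```javascript
// Hypothetical sketch: one "connection" in the tool, described as data.
// Every name and address here is made up for illustration.
const clapToLamp = {
  origin: {
    type: 'kinectron',                // external Kinect2 data via Kinectron
    source: 'ws://192.168.1.10:9001'  // address of a Kinectron server (made up)
  },
  // rule that turns continuous skeleton data into a discrete event
  rule: {
    event: 'clap',
    test: (frame) => distance(frame.leftHand, frame.rightHand) < 50
  },
  destination: {
    type: 'mqtt',
    broker: 'mqtt://raspberrypi.local',       // broker zigbee2mqtt listens to
    topic: 'zigbee2mqtt/living_room_plug/set',
    payload: { state: 'TOGGLE' }              // toggle the Zigbee smart plug
  }
};

// tiny helper assumed by the rule above
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}
```

Whatever the final interface ends up looking like, I suspect something like this object is what each of the three designs would ultimately be editing.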

Where I Am Technically

At Feedback Collective this week, I presented a demo I had made of a lamp turning on and off whenever I clap. The lamp is plugged into an Innr 224 smart plug that speaks Zigbee. On my Raspberry Pi, I have an MQTT broker running alongside a program called zigbee2mqtt, which translates MQTT messages into Zigbee signals using a TI CC2531 Zigbee dongle. The tutorial on how to set this up can be found on the zigbee2mqtt website and was relatively simple to complete (of course, the hours Sarah spent helping me with my Radical Networks project certainly gave me a head start on working from the command line. Thanks again Sarah!!!). For the motion capture, I was simply running PoseNet via ml5.js in the browser, connected to the MQTT broker. I have all of that code on this GitHub; a simplified sketch of the browser side is below.
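Here is a stripped-down sketch of the browser half of the demo. To be clear about what is my setup rather than anything fixed: the broker hostname, the websocket port (9001), and the plug’s friendly name ("lamp_plug") are all particular to my configuration, and the clap threshold is just a number that worked for me. The zigbee2mqtt/<friendly_name>/set topic and the {"state": "TOGGLE"} payload come from the zigbee2mqtt documentation.

```javascript
// Browser side of the clap demo, simplified. Assumes ml5.js and mqtt.js are
// loaded via <script> tags and the webcam is already attached to a <video>
// element. Mosquitto on my Pi has a websockets listener on port 9001.
const client = mqtt.connect('ws://raspberrypi.local:9001');

const video = document.querySelector('video');
const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));

let lastClap = 0; // timestamp of the last detected clap, for debouncing

poseNet.on('pose', (results) => {
  if (results.length === 0) return;
  const pose = results[0].pose;
  const left = pose.leftWrist;
  const right = pose.rightWrist;
  if (left.confidence < 0.5 || right.confidence < 0.5) return; // weak detection

  // Treat the wrists coming close together as a clap; debounce to one per second
  const wristDistance = Math.hypot(left.x - right.x, left.y - right.y);
  if (wristDistance < 60 && Date.now() - lastClap > 1000) {
    lastClap = Date.now();
    // zigbee2mqtt listens on zigbee2mqtt/<friendly_name>/set and forwards
    // the command to the plug over Zigbee via the CC2531 dongle
    client.publish('zigbee2mqtt/lamp_plug/set', JSON.stringify({ state: 'TOGGLE' }));
  }
});
```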

Moving Forward

On the technical side, I have a few areas I am interested in learning about. First, I would like to replicate the tutorial Surya Mattu used for “The House That Spied on Me” with Kashmir Hill, to get a better sense of what it means to look at surveillance. There is also a tutorial for sniffing Zigbee packets using Wireshark that I am planning to work through. Second, I would like to look into the Amazon Alexa SDK and figure out what is possible with the Echo smart hub, which has Zigbee capability. My intuition is to try to use the smart hub to coordinate a performance using Amazon devices; however, I don’t know whether this is heavily encrypted or otherwise inaccessible to developers. Third, I would like to gain a better understanding of how Zigbee messaging works, and I plan to dig into the zigbee2mqtt documentation for this. Where I am confused is the relationship between Zigbee and MQTT. Is it standard for Zigbee devices to communicate using MQTT that is then translated? What is the best practice for low-latency communication? How does the mesh of communication between Zigbee devices work, and how can I look into the signaling between them? One small way I can start poking at this is to eavesdrop on the MQTT side of the bridge, as in the sketch below. Finally, I want to look more deeply into security and encryption. Surya had a fantastic reference to a group called Mon(IoT)r based at Northeastern that has published papers on their investigations into privacy. I know they work a lot with metadata; however, there are other references, like this, that describe the security behind these devices.
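Since zigbee2mqtt republishes each device’s messages under zigbee2mqtt/<friendly_name>, subscribing to the wildcard topic should let me log everything crossing the bridge. A minimal Node sketch (the broker address is just my Pi; nothing else is assumed beyond mqtt.js and a running zigbee2mqtt):

```javascript
// Log every MQTT message zigbee2mqtt publishes or receives, so I can watch
// what actually crosses the Zigbee/MQTT bridge as devices talk.
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://raspberrypi.local'); // my Pi's broker

client.on('connect', () => {
  // '#' is MQTT's multi-level wildcard: everything under zigbee2mqtt/
  client.subscribe('zigbee2mqtt/#');
  console.log('Watching all zigbee2mqtt topics...');
});

client.on('message', (topic, payload) => {
  console.log(new Date().toISOString(), topic, payload.toString());
});
```

This only shows the MQTT side, of course; the Wireshark sniffing tutorial is what I will need for the Zigbee radio side.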

On the theoretical/performance-generation side, I have been in contact with Tom Igoe and some other theatre/tech practitioners who I hope to have more conversations with about the place of technology in live performance. Tom was so helpful and provided me with many resources, including this article by Adam Greenfield. I don’t know why, but it inspired this image of our devices as hyperactive listeners, terrified that they will become obsolete. I am imagining a performance in which our devices are constantly yelling to each other about us, trying to make meaning out of every small movement, gesture, and behavior because, if they can’t, they will no longer be supported. I am going to read more of Adam’s writing in his book Radical Technologies, and I hope it will inspire more ideas.