I would like to use this post to discuss this week’s progress, assignments, and my thoughts on how to move forward.

Weekly Experiment

This week I was planning to use my Echo speaker with the mitmproxy I have running on my Raspberry Pi to have Alexa comment on my HTTP/S traffic. Unfortunately, my Echo has still not arrived due to winter storm delays; I hope to have it by the 28th. Instead, I chose to spend my time completing a Zigbee traffic sniffing tutorial, looking at some other methods of capturing Zigbee traffic, and exploring the Amazon developer tools/Alexa SDK. I was able to set up the Zigbee packet sniffer on my Raspberry Pi, which I will be able to use to monitor traffic on the Echo’s smart home hub. I have also been working on gaining more familiarity with Zigbee security. I found an incredibly helpful talk from Black Hat 2015 by Tobias Zillner & Sebastian Strobl that stepped through the vulnerabilities associated with a low-power/low-cost protocol like Zigbee. The main takeaway for me is that the network is only as strong as its weakest link, and that it is difficult to standardize security protocols across vendors to ensure uniform safety. I wonder how this compares to the security of something like Z-Wave, which is more standardized. Their talk was very similar to some of the material I had seen from the Mon(IoT)r Group, and it pointed me to other tools like KillerBee, Scapy-Radio, and their own program, SecBee.

I also looked into the Amazon Alexa SDK and the general developer tools for AWS, and I was pretty overwhelmed. I gained a basic familiarity with how to create an Alexa skill. Unfortunately, it looks like there may be some limitations to creating smart home skills with the Alexa Skills Kit; however, it may be possible to create a function in Lambda using AWS IoT Core that will allow me to accomplish what I want. There were so many tools and so much to learn that I felt I needed to reconsider my project. I met with Ruta to discuss this, because I got the feeling that it would be unnecessary to make a tool with all of this developer infrastructure already available. Ruta gave the good advice that I will need to acquire more familiarity before I am able to make a tool, so I should focus on conducting technical experiments and perhaps making smaller performances for now. She also asked me some helpful questions about what I am hoping to say with any performances or tools that I develop. I realized that I need to hone this idea further, and I used the other weekly assignments to narrow it down.

Brainswarms

Thematically, I think that I want to home in on the major tradeoffs I make when I allow these devices into my home. Tradeoffs like giving up my freedom and privacy for the ability to be super lazy and ask a robot to make coffee for me in the morning. It feels very humorous and incredibly dumb to me. After seeing the AWS developer tools, and knowing what Craig showed us about the climate footprint of these devices, it feels so silly that these devices, which are the pinnacle of software development, networking, engineering, and supply chain management, end up being used for completely stupid shit like making coffee or turning off a light. It makes me think of what Katherine said in her talk last semester about a sensor that tells you when to water your plant: “Just put your finger in the damn soil!” In addition, I find it much more terrifying that the data collected in our homes, much like data from the Progressive Snapshot car plug-in, could and probably will be used by insurance companies, healthcare companies, etc., to determine whether we qualify for lower rates based on our behavior. I am hung up on this image of the devices as hyper-anxious listeners, terrified of becoming obsolete, mixed with us, also completely anxious in our private domiciles because we know we are being watched by companies that can make major impacts on our ability to afford services. It feels completely dumb and hilariously ironic.

Cornell Box

Going off this image that I was hung up on, I imagined a little blue couch in the middle of a bunch of connected ears, anxiously listening. Here is the representation that I made:

3 Artists of Interest

There are three main artists that I continue to return to when looking at this. I have mentioned before John Cayley’s The Listeners, an Alexa skill that leads an audience member through an interactive performance. The second is Lauren McCarthy’s LAUREN, in which Lauren acted as a smart home automation AI controlling several people’s lives. The last is Kyle McDonald, whose project with Lauren, M/WIMT, was incredibly inspiring for me as an example of creative technologists showing audiences the inner workings of these devices and the developer decisions embedded in the technology that influences our lives. Lauren and Kyle led me to the website https://aiartists.org/, which I plan to look into further for examples of artists making work exploring artificial intelligence.

Plans for the Future

Once my Echo arrives, I want to dive much deeper into the Amazon SDK/developer tools and make some smaller performances with Alexa and the speaker as subject material. As I originally intended, I would like to create a small performance with Alexa commenting on my HTTP/S traffic. This could be as simple as my friend Adam improvising in response to the traffic into a microphone connected to the speaker via Bluetooth. I would then like to look into how I can control smart home devices with the developer tools, using the built-in Echo hub as a coordinator. I am still learning about the complex Amazon IoT stack, but from what I understand, the hub is automatically assigned as the coordinator for the Zigbee network and also functions as an MQTT broker, very similar to what I have running on my Raspberry Pi.
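As a concrete starting point for the traffic-commentary performance, a mitmproxy script can turn each intercepted request into a spoken-style line that Alexa (or Adam) could read aloud. This is only a minimal sketch: `describe` is a hypothetical helper name, and the wording of the commentary is a placeholder.

```python
# commentary.py -- a minimal mitmproxy addon sketch.
# Run on the Raspberry Pi with:  mitmdump -s commentary.py

def describe(host: str, method: str) -> str:
    """Turn one intercepted request into a spoken-style sentence."""
    return f"Your device just sent a {method} request to {host}."

def request(flow):
    # mitmproxy calls this hook for every client request it proxies;
    # printing is a stand-in for sending the line to the speaker.
    print(describe(flow.request.pretty_host, flow.request.method))
```

From here, the `print` could be swapped for whatever pipes text into the performance, e.g. a text-to-speech call.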

In preparation for 50 Days of Making, I would also like to work on a small project using the Amazon developer tools to create a framework for that upcoming work. Very generally, I want to choreograph dances in conversation with Alexa. My idea is to connect my Kinect V2 to a program using Kinectron that maps the speed of my movement to a Perlin noise sound synthesizer, which will output to the Echo. I plan to base my code on one of Mimi’s examples and have speed inversely correlated with the noise of the synthesizer, so that the slower I move, the more Alexa’s voice garbles. My plan is to choreograph one-minute dances each day for the 50 Days of Making project.
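Kinectron itself is JavaScript/p5.js, so the real version would live there, but the core inverse mapping can be sketched separately. Here is a minimal Python sketch of the idea; the function names and the assumed maximum tracked speed of 2 m/s are my own placeholders:

```python
import math

def joint_speed(prev, curr, dt):
    """Approximate a joint's speed from two Kinect positions (in metres)
    sampled dt seconds apart."""
    return math.dist(prev, curr) / dt

def garble_amount(speed, max_speed=2.0):
    """Inverse mapping: the slower the movement, the heavier the noise.
    Returns a 0..1 noise level to feed the Perlin noise synthesizer."""
    clamped = min(max(speed, 0.0), max_speed)  # keep speed in range
    return 1.0 - clamped / max_speed           # standing still -> full garble
```

Standing still yields a garble level of 1.0 and moving at or above the max speed yields 0.0, which matches the slower-movement-means-more-garbling rule.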

Finally, I am really enjoying learning about the security of these devices and gaining more familiarity with the way they are networked. I would like to lean more heavily into this as research for next week. The thought of spending a ton of time developing Alexa skills or an interface for a tool is less appealing than just building skills and gaining knowledge for now. It may take much longer than I have to actually build a tool, but I would like to learn more about networking for now.