I would like to use this post to reflect on some of the progress I made this week.

Meeting With Tyson

This week I met with an old family friend, Tyson Tuttle, who has been the CEO of Silicon Labs, a major player in IoT hardware, for the past 7 years. Their chips are in pretty much everything except Apple products, he said, and they specialize in interoperability and multi-protocol support for consumer and industrial applications. Going into the meeting, I watched some of the talks Tyson has given in the past few years (#1 & #2) to get a better understanding of what IoT is hoping to become. I was really interested in asking him about the more thematic questions that only someone in his position could answer, like the future of predictive analysis and ML in the field, but I feel like I chickened out, worried about what he would think of my critical take on IoT. We still had a great conversation for a half hour; he is also an engineer and gave me a lot of tips about how I might realize my project. He reassured me that working in the cloud might still be okay and that, in my situation, Amazon's developer tools may be a better solution than building my own local infrastructure. He also mentioned that both Signify (Philips Hue) and Sonos are working on entertainment applications of smart home IoT and may be worth looking into.

Stacey Higginbotham Podcast and Meeting with Craig

Two quick points here:

  1. I discovered The Internet of Things Podcast by Stacey Higginbotham, a well-known IoT journalist, and started listening to a couple of episodes. Stacey is clearly a proponent of IoT's future, but I liked the conversations she was having about privacy and about which products are potentially harmful to users. Most of all, I was intrigued by her criticism of Ring in episode 311 and her discussion in episode 310 about the future of IoT subscriptions. On the Ring front, she led me to a story about Ring's partnership with 400 police services to automatically request video doorbell footage from users within 200 feet of a crime scene. On subscriptions, she discussed with an IoT executive how the future of the tech industry is to lease physical devices and pay for the services, from coffee makers all the way up to cars. I found myself hooked on the idea of never owning anything anymore, just paying for services, and wondering how that might make me feel.
  2. I met with Craig to talk about coming to New York this summer, which I am really hoping to do. We also had a great conversation about a project he did a couple of years back with the Alexa Skills Kit to investigate automation in the golf industry, specifically caddies. He shared the tutorial he was working from, which connected a Raspberry Pi to AWS's IoT service, something I will certainly have to do myself (a sketch of that connection follows this list). This inspired me to look further into the documentation for AWS IoT Core and to lay out the stack of technologies I need to make my first performance happen.
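
Since the connection in Craig's tutorial is the same first step I'll need, here is a minimal sketch of what it looks like with the AWS IoT Device SDK for Python. The endpoint, certificate paths, and topic name are placeholders for values from my own AWS IoT Core console, not anything from the tutorial itself:

```python
# Minimal sketch: connect a Raspberry Pi to AWS IoT Core and publish a test
# message. Endpoint, cert paths, and topic are placeholders.
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

client = AWSIoTMQTTClient("myRaspberryPi")
client.configureEndpoint("xxxxxxxx-ats.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials("root-CA.crt", "private.pem.key", "certificate.pem.crt")

client.connect()
# Publish on a topic that a Rules Engine rule can match against later
client.publish("performance/test", '{"status": "pi connected"}', 1)
client.disconnect()
```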

Meeting With Ellen and Andrew

I had a really nice meeting with Ellen and Andrew tonight. They were very encouraging and, overall, I agree that I need to enter maker mode. I have such clear images of these performances in my head that I need to actualize them, even if they end up not being theoretically sound. This has inspired me to move forward with performance one. My goal is to have the technical framework in place by next week and to spend the following week working on the actual performance. They also reminded me of George Saunders, one of my favorite authors.

Work on Performance #1

The image for this first performance is a room that is breathing. I imagine it beginning with the skill invocation “Alexa, help me feel alive,” which dims the lights completely. I sit in front of a microphone and begin breathing audibly on the inhale (like Ujjayi breathing from Ashtanga) and silently on the exhale. As I inhale, an orange-yellow glow grows from the lamps behind me and slowly fades out in the silence. Alexa slowly whispers a series of questions based on the original: “When is the last time you tasted a blueberry without thinking about anything except tasting that blueberry?”

I know there may be easier ways to make this happen, but I want to commit this week to trying to make this project work as an Alexa Skill. My plan is to connect my microphone to a Raspberry Pi, which will publish MQTT messages carrying audio data to AWS IoT Core. From there, I will use the AVS Integration for AWS IoT Core to play this sound out of my Echo through a Lambda connection. I will also use IoT Core and Lambda to connect to my Philips Hue devices. I am not sure if I have to go through the Hue Hub/Hue web services, but I do know the Hue bulbs I have are Alexa-enabled and hub-optional. I have to look into how to connect to a device with its own cloud services, but I know this functionality must exist somewhere.
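
To make the microphone half of this plan concrete, here is a rough sketch of the publishing loop I have in mind, assuming PyAudio and the AWS IoT Device SDK for Python are installed on the Pi. The topic name, chunk size, and sample rate are my own guesses, not anything from the AWS docs:

```python
# Rough sketch: read raw microphone chunks and publish each one as a binary
# MQTT payload to AWS IoT Core. Topic, chunk size, and rate are assumptions.
import pyaudio
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

CHUNK = 4096  # frames per MQTT message (AWS IoT caps payloads at 128 KB)
RATE = 16000  # 16 kHz mono should be plenty for breath audio

client = AWSIoTMQTTClient("breathingRoomPi")
client.configureEndpoint("xxxxxxxx-ats.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials("root-CA.crt", "private.pem.key", "certificate.pem.crt")
client.connect()

audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                    input=True, frames_per_buffer=CHUNK)
try:
    while True:
        data = stream.read(CHUNK, exception_on_overflow=False)
        client.publish("performance/breath/audio", data, 0)  # QoS 0 for low latency
finally:
    stream.stop_stream()
    stream.close()
    client.disconnect()
```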

In preparation for this undertaking, I have completed the following tutorials (connected Raspberry Pi to IoT Core, Rules Engine #1 #2 #3 #4). I feel that I have a working knowledge of the tools, but there are still several challenges in the way:

  1. Finding examples of live-streaming microphone binary over MQTT from a Raspberry Pi.
  2. Targeting devices with their own established cloud services, including both the Hue and the Echo, for output.
  3. Triggering all of this through an Alexa skill (a rough sketch of that trigger follows this list).
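
For challenge 3, here is a hypothetical Lambda handler showing the shape of what I'm imagining: the skill's launch request publishes a start message that a Rules Engine rule could route to the lights, and the response whispers the first question using Alexa's SSML whisper effect. The topic name, region, and question text are placeholders:

```python
# Hypothetical handler for the skill behind "Alexa, help me feel alive".
# The topic and region are made up; the iot-data publish call and the SSML
# whisper effect are real, documented features.
import json
import boto3

iot = boto3.client("iot-data", region_name="us-east-1")

def lambda_handler(event, context):
    if event["request"]["type"] == "LaunchRequest":
        # A Rules Engine rule listening on this topic could dim the lights
        # and arm the breath-audio pipeline
        iot.publish(
            topic="performance/breath/start",
            qos=0,
            payload=json.dumps({"action": "begin_performance"}),
        )
    return {
        "version": "1.0",
        "response": {
            "shouldEndSession": False,
            "outputSpeech": {
                "type": "SSML",
                "ssml": (
                    '<speak><amazon:effect name="whispered">'
                    "When is the last time you tasted a blueberry?"
                    "</amazon:effect></speak>"
                ),
            },
        },
    }
```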