Digesting Research into mitmproxy and Alexa Radio, Music, and Podcast Skills
I would like to use this post to digest some of my findings from tonight. One of my goals for this week is to have a proof of concept for live performance coming through my Echo speaker. I am planning on making two versions and showing them this week. In the first, I will browse on my phone while it is connected to my mitmproxy, and my friend Adam will comment on the live HTTP/HTTPS data stream, with his voice coming out of my Echo connected over Bluetooth. I plan to just use Zoom screen sharing to achieve this. Super lame technically, but I think this workaround will do the trick. For the second version, I was hoping to take the data captured by mitmproxy and turn it into some computational output using Alexa skills. There are two challenges I am facing here:
1. Getting the data off my Raspberry Pi live.
2. Playing the output from the Echo without Bluetooth, using a Lambda function.
- I was looking into the mitmproxy documentation for the mitmweb tool. I found options to save all logs to a file, which I could possibly pull off the Raspberry Pi over Ethernet and analyze. That seems really difficult and not conducive to live data analysis, though. Since I am able to access the mitmweb GUI locally, there must be a way to get the data out of it, maybe even just the domain names. I looked into some other programs that read .pcap files, but I don't know yet how to get the data out of mitmproxy in that form. A scripted addon might be a simpler route, as sketched below.
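One option here: mitmproxy can load small Python addon scripts that fire on every intercepted request, so the domain names could be appended to a file live instead of dug out of saved logs afterward. A minimal sketch, using mitmproxy's standard addon hook; the output path is just a placeholder:

```python
# log_hosts.py -- minimal mitmproxy addon sketch: append each requested
# host to a file as traffic is intercepted (the file path is a placeholder).
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # pretty_host prefers the Host header over the raw destination address
    with open("/tmp/hosts.log", "a") as f:
        f.write(flow.request.pretty_host + "\n")
```

Running `mitmweb -s log_hosts.py` (or `mitmdump -s log_hosts.py`) on the Raspberry Pi would keep the GUI while streaming domain names to a file I could tail over SSH.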
- I want to do my best to avoid having any audio output go over Bluetooth in this method. I could simply write a Lambda function that analyzes the mitmproxy data and returns outputs that Alexa Voice Services will read out. I could also write a function hosted somewhere else and broadcast its output over internet radio that an Alexa radio skill could tune into, but I'm not sure I want to do that; it seems incredibly technical. Regardless, I still don't quite have a grasp of Lambda functions (all the functions I've made so far have been hosted on the Alexa skills console). I should probably just look into Lambda functions for now so that I can send strings from the live mitmproxy data to be read out by AVS, along the lines of the sketch below.
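To get a feel for the Lambda side, here is a minimal sketch of a handler that returns plain-text speech in the standard Alexa skill response format. The commentary string is hard-coded; in the real version it would come from the live mitmproxy host log, and how to fetch that from Lambda is exactly the part I haven't solved yet:

```python
# lambda_function.py -- minimal Alexa skill handler sketch. The speech
# string is a hard-coded placeholder; the open problem is feeding it
# from the live mitmproxy data instead.
def lambda_handler(event, context):
    speech = "The phone just requested example dot com."  # placeholder commentary
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```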
The goal of this experiment is to show the difference between human and computational commentary using the Alexa. I am bummed there is no way other than Bluetooth to get live audio output over the speaker, but I guess that also makes sense, since going through the cloud would probably take forever. How is Alexa able to connect phone calls? That must also be over Bluetooth. Anyway, I plan on showing these proofs of concept to Andrew, my section, and maybe even the feedback collective this week to get feedback. I feel both have potential for the commentary I am going for, and the difference is related to that question Andrew raised: do I want the audience sitting on the little blue couch, or watching the room from the outside?