This week, I spent some time reworking performance one and developing a first prototype for performance two. I would also like to discuss my plans for performance three…

Performance 1 Rework:

While developing performance 2 was my main goal for this week, I did have time to address some of the comments and ideas from the Quick and Dirty Show. My main idea for continuing to develop this performance is to lean into the idea of “editing memories,” especially those that may be unpleasant, or even of enhancing memories with details that never happened. I am going to build a couple of interactions that follow the “you are feeling sad because you cannot remember the last time you sailed with your sister” moment. I want an alienating moment that breaks from the emotional journey I have just gone on. My character in the performance, instead of being emotionally generous, will craft this raw memory into something altogether different and fake. I have built in interactions like “Alexa, make it more exciting” and “Alexa, take out that last sad bit” that I believe will make the meaning of the performance more deliberate.
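
In the Alexa Skills Kit, these edit commands are just custom intents in the skill’s interaction model. A minimal sketch of what that looks like; the invocation name and intent names here are stand-ins, not my exact model:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "memory keeper",
      "intents": [
        { "name": "MakeExcitingIntent", "samples": ["make it more exciting"] },
        { "name": "RemoveSadBitIntent", "samples": ["take out that last sad bit"] }
      ]
    }
  }
}
```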

I still need to consider the world building for this performance and the framing of the opening moments, which still feels generic. Whether it is throwing in an ad or something else, I want to show that the whole room is attempting to calm me down when I come in distressed. I also need to edit down the middle memory so that it is more concise, understandable, and remixable in the final third of the performance. I think that through the repetition of the changes, the audience will have an easier time understanding the memory.

As I was making these changes this week, I couldn’t help but wonder whether I should continue using SSML or find a voice actor. I like the idea of keeping the performance pure, using the markup language and skills kit; however, I know that the theatrical gesture must take precedence over any technical purity. I am finding it somewhat limiting to work with SSML, especially when developing effects beyond the creepy whispering.
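
For reference, the whispered effect is about the most expressive tag Alexa’s SSML offers; everything else has to be approximated with prosody. A rough sketch of how one of the edit responses might come together in the Node.js skills kit SDK, with the intent name and dialogue as placeholders:

```javascript
const Alexa = require('ask-sdk-core');

const MakeExcitingIntentHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'MakeExcitingIntent';
  },
  handle(handlerInput) {
    // The rewritten memory: faster and higher-pitched to feel "exciting,"
    // with the whispered effect saved for the unsettling aside.
    const speech =
      '<prosody rate="fast" pitch="+10%">The waves were enormous that day.</prosody> ' +
      '<amazon:effect name="whispered">Is that how you remember it?</amazon:effect>';
    return handlerInput.responseBuilder.speak(speech).getResponse();
  },
};
```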

Performance 2 Prototype:

My goal was to put out a prototype of this performance by the end of the week since I am technically a week behind schedule. It is something, but it is not fantastic. My improvisation is pretty terrible, but I felt I needed to make something and dive into the deep end. I hope to do a lot more user testing so that I can home in on the story of this performance. I ended up choosing Michelob Ultra vs. a bowl of lettuce, which has very little to do with smart devices. I was originally thinking a smart toothbrush vs. a smart sex toy; however, I do not own either. I dressed my speakers up as a little angel and a little devil, which I like design-wise. I also had the lights shift between red and blue depending on which speaker is talking.

Getting the speakers set up in stereo and using Tone.js to pan the user input from my webpage was where I spent most of my time this week. The connections between my speakers and my laptop can still be a bit finicky; however, I think the sound works quite nicely and will translate well in person. I had to switch over to an HTTPS express server now that I am working with Tone.js and UserMedia, since browsers only allow microphone access over a secure connection. There is also a bizarre issue where I have to delete the remembered permissions for the webpage within Google Chrome; otherwise, the sound input always defaults to my computer’s built-in microphone instead of my external one. Something to look into more.
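
The core of the panning setup is fairly small. A minimal sketch, assuming Tone.js v14; the button ID and the hard-left/hard-right mapping are placeholders, not my exact code:

```javascript
// Route the live microphone through a stereo panner so each character
// can be thrown to its own physical speaker.
const mic = new Tone.UserMedia();
const panner = new Tone.Panner(0).toDestination();

document.querySelector('#start').addEventListener('click', async () => {
  await Tone.start(); // the audio context must be resumed by a user gesture
  await mic.open();   // prompts for mic permission; requires HTTPS
  mic.connect(panner);
});

// Pan hard left for one character and hard right for the other.
function speakAs(side) {
  panner.pan.rampTo(side === 'devil' ? -1 : 1, 0.05);
}
```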

Overall, I would like this performance to be fun, and I am up for the challenge of puppeteering two speakers at once. I need to do a lot more user testing for this performance and am wondering how best to do that since I am currently remote. I had the idea to build a simple web page to try this out; however, I don’t want to work with WebRTC, which I am unfamiliar with and may otherwise not use in my final thesis. Perhaps I could just make a simple webpage for the visuals and do the sound over Zoom. In either case, the main goal in these next weeks will be to do a bunch of user testing for this performance so that I can begin to develop some plan, practice, and familiarity with this interactive performance framework.

Performance 3 Plans:

My goal for next week will also be to develop a prototype for performance three, the one where I balance three morning routines at once in order to save tons of money on health insurance. I foresee the majority of my time being spent choreographing this crazy routine, with little tech development. I do think there should be some live technical response whenever I successfully complete part of the routine; perhaps an interface that shows how much I am saving with every action. I could use pose estimation and classification to trigger an output on this interface whenever I complete a task successfully, as in the sketch below. However, I could also just have someone press a key backstage. I like the challenge of using pose estimation and like what it might say about the flaws of quantifying ‘healthy’ behavior with machine learning in the future. For example, what happens if I do something perfectly but, because of a lighting issue, the algorithm isn’t able to register it, or worse, registers it as something ‘unhealthy’? In any case, I hope I can have some fun this week choreographing this performance of morning routines that goes horribly wrong.
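
If I do go the pose estimation route, the trigger could start as a simple keypoint heuristic before graduating to a trained classifier. A rough sketch using the ml5.js PoseNet wrapper; the “task,” thresholds, element IDs, and dollar amounts are all invented for illustration:

```javascript
// Hypothetical savings interface: award money whenever a pose heuristic
// decides a "healthy" task is being performed.
const video = document.querySelector('#camera');
const tally = document.querySelector('#savings');
let saved = 0;

const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));

poseNet.on('pose', (results) => {
  if (!results.length) return;
  const pose = results[0].pose;

  // Crude stand-in for a real classifier: "brushing teeth" = right wrist
  // held near the nose with reasonable confidence.
  const wrist = pose.rightWrist;
  const nose = pose.nose;
  if (wrist.confidence > 0.5 && Math.abs(wrist.y - nose.y) < 40) {
    saved += 0.25; // each detected "healthy" moment earns a quarter
    tally.textContent = `You have saved $${saved.toFixed(2)}`;
  }
});
```

The brittleness of a heuristic like this is exactly the flaw I want to dramatize: one lighting change or an occluded wrist, and the savings simply stop.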