For efficient production, I'm staying faithful to Ableton's software and interface: the layout allows for quick changes in notes and layering, whereas Chrome Music Lab's version is based on a wheel of major and minor notes. The drum machine below it is also a great way to begin producing music. For experimentation, on the other hand, Chrome Music Lab's version is great, as it lets you try out different notes on the fly without committing to them in your timeline. The Piano Keyboard wasn't working for me on Chrome or Safari, so I didn't really get a chance to experiment with it. From what I saw, there was no way to save your choices; like Chrome Music Lab, it is an on-the-fly experimentation tool, whereas Ableton is focused on production and layering. By saving the notes you choose and playing them over and over again, you're able to layer and create more intricate, detailed harmonies and music overall.
Attached below are my two experimental harmonies. I'm especially proud of the Ableton one, and I can genuinely see it being implemented into future work. The ability to "Export to Live", the downloadable software, is also a plus: you can be playing around, create something you like, and instantly get to work in the full-fledged software to produce music.
I'm trying to recreate the image classifier using the MobileNet library/model. I failed, however: I cannot get p5 to load the ml5 library into the sketch.
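For future reference, the most common cause I've found for p5 not seeing ml5 is the script order in index.html: the ml5 script has to be included before the sketch that calls it. A minimal sketch of the page setup, assuming CDN-hosted copies (the version numbers here are placeholders, not necessarily what I was using):

```html
<!-- load p5 first, then ml5, then the sketch that uses both -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/p5.min.js"></script>
<script src="https://unpkg.com/ml5@latest/dist/ml5.min.js"></script>
<script src="sketch.js"></script>
```

If the ml5 line is missing or comes after sketch.js, the sketch sees `ml5` as undefined, which matches the silent failure I was getting.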
So I tried to create a weather visualizer following this tutorial, which was very similar to the example in class. I've created the query string so that the resulting URL, which is where the information is drawn from, takes the form:
url = api + input.value() + apiKey + units;
This, however, does not seem to want to work, and I cannot figure out why: no errors pop up, nothing happens at all. I do, in fact, see the application of this – quite literally. All the weather apps that exist follow this same structure or principle: drawing information from a website, processing it via JSON or whatever proprietary method they choose, and then presenting it to the audience in a user-friendly and aesthetically pleasing manner.
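For my own debugging: a sketch of how the pieces are supposed to concatenate, assuming the OpenWeatherMap-style layout from the tutorial (the key and city here are placeholders I made up). One easy thing to check is that the first piece ends with `?q=` and every later piece carries its own `&` separator, otherwise the query string silently comes out malformed:

```javascript
// Build the request URL from its pieces, mirroring
//   url = api + input.value() + apiKey + units;
// Note: api ends in "?q=" and apiKey/units each start with "&".
const api = 'https://api.openweathermap.org/data/2.5/weather?q=';
const apiKey = '&appid=YOUR_KEY_HERE'; // placeholder key
const units = '&units=imperial';

function buildUrl(city) {
  // encode the city so spaces don't break the URL
  return api + encodeURIComponent(city) + apiKey + units;
}

console.log(buildUrl('New York'));
```

Pasting the printed URL straight into the browser is a quick way to see whether the request itself works before blaming the sketch.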
That final portion of the process would, I assume, be set up to have variables show only the temperature. In the case of a graph, each x coordinate would have a y value dictated by the data returned from the API URL.
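If I ever get the data flowing, the graphing step I'm describing would just remap each temperature to a y pixel coordinate. A sketch of that mapping (all the names, the temperature range, and the 400px canvas height are my own placeholder assumptions):

```javascript
// Map a temperature reading to a y pixel coordinate, with
// higher temperatures drawn nearer the top of the canvas.
function tempToY(temp, minTemp, maxTemp, height = 400) {
  const t = (temp - minTemp) / (maxTemp - minTemp); // normalize to 0..1
  return Math.round(height - t * height);           // invert: up = hotter
}

// One y value per x position on the graph
const temps = [10, 20, 30];
const ys = temps.map(t => tempToY(t, 0, 40));
console.log(ys);
```

This is essentially what p5's built-in `map()` does; writing it out by hand just makes the normalize-then-scale step visible.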
This began as a mission to get my copycat Arduino working (finally). I had to download drivers, copy-paste things into Terminal, and do all that kind of stuff I don't understand. That worked – for a good two hours, until my Arduino stopped doing what it was supposed to do.
I really struggled with this one. As usual. My Serial Communication assignment consists of a potentiometer controlling the greyscale value of the p5 sketch's background. Linked above.
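The p5 side of the sketch boils down to remapping the Arduino's 10-bit analog reading (0–1023) into the 8-bit greyscale range (0–255). A minimal sketch of just that mapping, pulled out as a plain function (the names are mine, not from my actual sketch):

```javascript
// Map a 10-bit potentiometer reading (0-1023) to an
// 8-bit greyscale value (0-255), clamping out-of-range input.
function potToGrey(reading) {
  const clamped = Math.min(1023, Math.max(0, reading));
  return Math.round((clamped / 1023) * 255);
}

// In the p5 sketch, the latest serial value feeds background():
//   background(potToGrey(latestSerialValue));
console.log(potToGrey(0));    // fully dark
console.log(potToGrey(1023)); // fully bright
```

p5's `map(reading, 0, 1023, 0, 255)` does the same thing; having it as a standalone function just made it easier for me to test without the serial port attached.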
I think our thinking process is anything but linear. Every thought is in succession with the previous one and in reference to something you thought of long ago. It's almost like a file directory: the computer opens the file from the destination where it was saved, but in order to open it, it must process the language the file was written in and follow procedure until the file is opened. Each thought is a conglomeration of different previous thoughts, and therefore it jumps back to different instances in time to complete your intended thought.
Hypertext is a luxury that, if eradicated today, would make us lose our minds. We rely on it so heavily that in order to write this blog post I had to click on six different hyperlinks to get here. I'm referring to hyperlinks not only as sets of text linked to a webpage, but also as various functions, such as opening Safari or clicking "log in". I think we've reached a point where almost everything we do on a computer can be seen as a hyperlink, from opening or closing a tab to rendering out an entire animation sequence. It has simply become a necessity.
Our thinking and our computers are designed quite similarly, I believe, and I think that's done purposefully. We refer back to memories and thoughts the same way computers refer back to code and file directories. We built our computers to be like our minds so that they become extensions of them. We place some thoughts on our hard drives that we can't store as efficiently in our heads, and vice versa.
So, first I created a piece of code by following a video from The Coding Train, in which I made a fractal tree using recursive functions. Here.
Next I tried to create my own slider for another recursive function.
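As a note to self on what the recursion is actually doing: stripped of the drawing calls, the fractal tree is just a chain of shrinking branch lengths. A sketch of that core, with my own placeholder names, assuming the 0.67 shrink ratio used in the Coding Train video:

```javascript
// Compute the branch lengths a fractal tree would draw,
// shrinking by `ratio` each level until branches get too short.
// This mirrors the recursive structure of the p5 branch()
// function, minus the translate/rotate/line drawing calls.
function branchLengths(len, ratio = 0.67, minLen = 4, acc = []) {
  if (len < minLen) return acc; // base case: branch too small to draw
  acc.push(len);
  return branchLengths(len * ratio, ratio, minLen, acc);
}

console.log(branchLengths(100));
```

The slider in the sketch effectively just changes `ratio` (or the branching angle); seeing the recursion as a plain list like this made it easier for me to reason about where it stops.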
Charles Darwin's theory of evolution describes a "process by which organisms change over time as a result of changes in heritable physical or behavioral traits. Changes that allow an organism to better adapt to its environment will help it survive and have more offspring" (Livescience). I think the same applies to technology. Better products come; older ones disappear. Apple and Samsung are companies that embody this belief, making small, incremental improvements to their designs and products every year and releasing them. Up until AMD released Ryzen, Intel carried out their operations the same way: they made slight enhancements to their processors because there was no competition – no natural selection pushing them.
With this push towards better, faster, and more reliable technology, the communication aspect gets pushed towards those same goals. The same way the telephone evolved into something we carry in our pockets, I can see communication being pushed to something near the telepathic level – implants, earpieces, etc. Instead of speech-to-text it could be think-to-text. We see this mirrored in ourselves as well. Many humans now lack a tendon in the wrist that was once useful for climbing trees and aided grip strength. We no longer need it, so why put energy into activating the muscle when it serves no purpose? We see the same phenomenon in processing units as they go from 28nm to 14nm – shedding unnecessary components to increase efficiency and battery life.
Evolution pushes our bodies, our minds, our technology, and in turn our methods of communication forward – and these past few decades have shown that.
A tool enhances our everyday experience of life. Glasses help us see better. A watch keeps us organized. A cane helps us walk. These pieces of technology help us every day, and we can develop them further. Glasses can become smart glasses, providing us directions and notifications. Watches have become smartwatches, providing step counters, notifications, and directions – and they could become attachments of our bodies by adding hologram capabilities. Canes can include 911/emergency buttons. Our bodies can act as empty breadboards, and we can keep adding innovations to them to enhance our experience.
Our emotions drive most of our thoughts, from when our parents tell us to study instead of watching television to when we lay on our deathbeds. Every thought in between, no matter how rational we think we're being, is impacted by our emotions. Computers can take emotion out of the equation. But when placed in a life-or-death situation, which rule fits best? The greater-good theory, where the least amount of lives are taken for the greater good, or the one that protects you? Should it even protect you? What if there are only two people? Which life does it take: yours or the other person's? These questions are important, and they lead us to ask: do we really want emotions taken out of the equation?
Initially, I wanted to make a car-stereo-like knob that turned up the value of a static synth. But yet again, my skill fell short. I settled for a sound detector. This could be used for deaf people, flashing a light when someone is talking to them (by mistake, while they're turned around). It could be tied to a certain frequency band so that it only picks up the most common range of frequencies the vocal cords produce. For example, a deaf person forgets something in their Uber and leaves, and the driver yells from behind them to tell them they forgot, let's say, their jacket. The deaf person will get alerted. My model simply turns on a light when input is detected. Details below.
What matters to us depends on us as individuals. Each person has gone through a particular sequence of things leading up to this point, and all of those things have had an impact on their experience, thus altering what each person values. I may value time with my family; the person next to me may despise it. Both views are right for each of us because of our upbringings.
Who makes the decisions? The controversial subject of the self-driving car can help illustrate what I mean.
As Patrick Lin says, the outcomes of all foreseeable accidents will be determined years before they even happen. Programmers will dictate what happens when two lives are in danger. So I don't think it's machines making decisions; it's machines following orders. And the difference between those two orders can be the difference between premeditated homicide and an instinctual reaction.
Then the legal implications come in: is the programmer of the code that instructs the car to save you responsible for the death of the person it couldn't? It's all more complicated than we think.
I probably struggled the most with this assignment by far. On reflection, I need to work on my schematic-reading skills and my digital I/O skills.
I began by making the regular button work with the LED. I don't have any pictures of that, but that itself took me a while. Once I got that circuit working, I realized that my LED would stay on until I clicked the button – which was one of my main issues with this assignment. I later made the switch with the two pieces of metal. Again, the light would turn off when I touched the two pieces of metal together.
Then I began my code, which is where I struggled most. I used the template from class for the setup, but I had to figure out how to make the light flash when the circuit was complete. I had a lot of help from the ITP floor. There were three revisions to the code until I landed on this one. And as you can tell by the title of the code, I was struggling.
The code works. It took a lot of time. But here’s the outcome:
To apply this to a different body part, I taped one metal piece to the bottom of my shoe and stepped on the other plate. There could be different applications of this: a subway card integrated into your shoe, or an alert if you're standing too close to an edge. The possibilities are endless.
With "traditional" media considered, in this blog post, as the pre-information-age art forms like painting, drawing, and singing, computational media has a much larger presence in today's world. It's everywhere around us: video games, movies, user interface design, billboards, etc. The difference between the two is a matter of the medium and the audience being targeted. Many times, as in the CGI and visual effects world, computational media is only considered good if (with photorealism as the intention) it can go unnoticed.
(images: the good, the ugly)
Interaction in Storytelling
Interaction and stories usually didn't go hand in hand. Early forms of the overlap might include magicians' acts, with the audience being the interactive aspect of the story. Nowadays, the aforementioned art forms (video games, for one) only exist because of the overlap between story and interaction. There are two categories for this, I believe: one in which the story is dependent on the interaction (like the Telltale Batman series, Arkham Knight, or even FIFA), and one where the interaction only supplements the story, as in most free-roam games. The former features alternate endings, whereas the latter has a linear story arc.
The Universal Machine
Alan Turing conceived the universal machine: "a universal Turing machine is a Turing machine that can simulate an arbitrary Turing machine on arbitrary input. The universal machine essentially achieves this by reading both the description of the machine to be simulated as well as the input thereof from its own tape."
When I think of the term universal machine, I think of levers, calculators, and mobile phones: machines in the hands of everyone. Everyone in functioning society can understand the use of these machines and figure out how to use them. A universal machine can exist, and currently does.
I simply activated the switch using my feet. I found this gold-colored metal in the scrap pile and bent it over and over again until I tore it into two pieces. Then, using alligator clips, an LED, the UNO, and the wires, I made it work.
I'm the user. That's the simple answer, isn't it? I press a key; it performs said action. My computer enables me to do what I do as an animator – completely incompetent when it comes to drawing or painting, I need this machine to act as my artistic outlet. It's how and where I storyboard, create, and share my work. I am to a computer what a pen is to paper or a paintbrush is to a canvas.
What I look like to the computer is, I believe, irrelevant to it. It takes in only the information I need it to via the camera, keyboard, mic, and mouse. To a computer, I'm strictly a source of information. To me, a computer is an outlet for emotion, art, assignments, storage, and memories.
Jarvis, or Friday, is Tony Stark's AI-powered computer. It really is all-inclusive. He can talk to it, take what's on a screen and create holograms, and integrate it into his brain to create extensions of himself on demand without having to utter a word. I can see myself pinching the viewfinder in C4D and manipulating the geometry in real time through holograms, viewing each polygon from whichever angle I wish, as opposed to on a pixel-dense screen. That's where I see the future of the computer.