Here is some documentation of testing out our sensors, LEDs, and circuit before we put it all together.
Here are some videos of the sensor testing that Mary, Allison, and I did:
This first one was a test to see what sounds could be made with just the Arduino's tone() function.
These two videos are tests using the Arduino and Max/MSP to control MIDI and audio clips in Ableton Live:
Here is some documentation of the Transistor lab.
I had to do it twice. My first try was at the end of a long day and it did not work. However, after talking to some friends, doing some hard thinking, and getting a good night's rest, I realized the mistake I made.
It may be hard to see, but I accidentally soldered my voltage adapter in a confusing way: the black wire is the power and the red wire is the ground.
Not the worst mistake, but if there is a way to re-solder it I think I would like to do that.
Once I realized this mistake, I wired the transistor and motor as the lab instructed. While trying to document it, I felt like the motor's spin would not be visible, so I added a twist-tie.
More limitations: I did not think ahead of time to give the motor some longer wires, so the twist-tie acted like a weed whacker against my hand. The quick fix was to add a piece of paper.
For me this paper was a beautiful accident. It started making me think about persistence of vision and animation. Maybe I can use this technique (or some variation of it) in a future project.
Once again I did not have all the parts (accelerometer) for the lab, so I had to make do with the kit switch, potentiometer, and an FSR:
It took some time for the principles of serial communication to sink in, but after a few errors I eventually got both the “handshake” and the “comma separation” methods to communicate with the Processing sketch. (The videos are all screen grabs from my slow computer, so some of them are glitchy.)
I also later used these sketches to test some stretch sensor values and check the sensor's range and responsiveness.
It was pretty responsive, but the range is slightly unpredictable; mapping the values to the size of the sketch gave it a sense of stability. I do not yet know how this will affect our musical midterm.
Here is more documentation from the tone labs. I’m getting better at remembering the Arduino syntax so it didn’t take as long to troubleshoot this time. I also got my hands on the correct resistor (100 Ohm) for the speaker so now the sound is actually audible:
Here is a tone from two photocells in series acting as a variable resistor:
Here is the “3 note instrument” part of the lab. I did not have three FSRs, so for “keys” I instead have an FSR, the kit potentiometer, and two photocells in series acting as a variable resistor.
On my first try I got a continuous tone from the start and I did not understand why.
Then I realized that the threshold value was too low. The lab told us to set it to 10, but after taking readings from all three “keys” I realized that the photocells read in the middle of their range when neither cell is covered. So even though their range is from like 80 – 900, if there is no difference between the two they will stay between 450 – 600 (depending on the room). So I boosted the threshold to like 620 and the instrument was silent until I activated one or more of the sensors. Here are two videos of that:
This week I revisited the analog sensor lab and (finally!) connected the potentiometer and FSR to trigger my LEDs. At first the FSR was working fine, but the potentiometer was acting like a switch: no gradual illumination, just on and off. After much pin switching and recoding, I realized I had a resistor connected to the potentiometer, and this was causing the problem. Now it works!
When I saw the “Productivity Future Vision” video I found myself impressed with certain aspects of Microsoft’s future vision. The cleaner design and total integration of technology is beautiful to look at and the actors seem to be able to work and communicate very efficiently.
But the tools being used are not much different from the email, search, video chat, and cloud sharing functions that already exist on our clunkier desktops, tablets, and mobile devices. It seems to me that these visions of technology are redundant: glossy and pretty, but not much more innovative than the tools we already have. Do we really need an app to tell us what's in our refrigerator? Just open it! Look at the actual objects in the fridge with your own eyes. Crawford might say that this is an example of form superseding function. This would be a case of products being superfluously redesigned.
By Victor’s working definition of a tool (addressing human needs by amplifying human capabilities), the tools being used in the video do not necessarily “amplify” the user’s capabilities. The only thing that seems to be amplified is the space between all the users. They are estranged from their families and co-workers, and they interact almost exclusively through technologies that mediate their interactions. Yes, their ability to share information while apart from each other is amplified. But if the goal is increased interaction between humans, then this vision of the future is only slightly more successful than the technology we have now. For the people working together on a spreadsheet at their job, this technology is successful, because their primary interaction before the introduction of the “future technology” was already based around efficiently processing and manipulating spreadsheets and data.
But for other interactions, such as the businesswoman and the hotel attendant, the technology does not do much to help the people interact with each other. Instead it acts more like a research tool. If you are working at the hotel, you can download the client’s information and facilitate a speedy transition from the taxi to the hotel room. But this could easily be done without a human attendant at all; after all, the attendant was simply silently accompanying her to her room. Presumably a system of conveyor belts for the luggage and prompts from the screens for directions could have provided the same service.
In the household all the screens were larger and on different surfaces, but they did not necessarily enhance the person-to-person experience any more than what we have already done. The child is playing video games and video chatting with mom. The husband is updating a digital calendar. This is not necessarily innovative, or even much more efficient. It seems like more of a novel reformatting of existing tools.
I don’t think that all technology has to cater to human-to-human interaction. But I do think that technologies presenting themselves as communication tools should better facilitate human-to-human interaction. They should not just make all human interaction faster or more efficient; they should somehow enrich our experience so that we do not feel like we are just looking at “pictures behind a glass,” but rather that we are actually connected in a tangible and physical way to the world around us. Can there even be a technological analog to shaking an attendant’s hand and introducing yourself? Or being with your spouse and child in person? I’m not sure, but I think this is something we should keep in mind when designing new tools for interaction. After all, human interaction is often more than just productivity.