All posts by Eleni Giannopoulos

Final Project

I made an interactive poem that is meant to be read aloud: when certain keywords are spoken, pieces of an illustration appear on screen, so that when you’ve finished reading you are left with an image.

I did this because I was thinking about imagery in creative text, and how apt words are to inspire visuals. Reading aloud, especially, is a potent way to bring text “to life,” and I thought there should be some sort of interface that makes this intangible feeling more tangible. There is an important connection between text, sound, and images, and here the text inspires the sound (reading aloud) and the sound inspires the illustration.

I did this by linking words to hidden images in code that combines CSS, HTML, and JavaScript (basically a website), setting each word as a trigger so that when it is activated the hidden image becomes visible. Then I used the voice recognition library annyang to make the triggers voice activated.
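To sketch the mechanics (this is a simplified version, not the exact code linked below): annyang listens for keywords, and each keyword unhides an image by its id. The keywords, ids, and class names here are just placeholders.

```javascript
// Simplified sketch of the voice-triggered reveal, assuming markup like:
//   <img id="moon" class="hidden" src="moon.png">
// and a CSS rule:  .hidden { visibility: hidden; }
// The keywords below are placeholders, not the poem's real trigger words.

function reveal(id) {
  const img = document.getElementById(id);
  if (img) img.classList.remove('hidden');
}

if (annyang) {
  // One voice command per keyword in the poem.
  annyang.addCommands({
    'moon': () => reveal('moon'),
    'river': () => reveal('river'),
    'window': () => reveal('window'),
  });
  annyang.start({ continuous: true });
}
```

The nice part is that annyang just takes a dictionary of phrases and callbacks, so adding another trigger word is one more line.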

Now, it all didn’t quite work out how I wanted it to, or expected it to. The software is relatively unresponsive, so you have to be aggressive in speaking the keywords; they can’t just be read in the context of the entire line. This makes for a dramatic reading with too many pauses and lots of yelling out of frustration on my part. It’s a lot more effort than I thought, and it kills the reader’s ability to read the text naturally.

HOWEVER, I realized that this is probably a more accurate representation of what I was trying to achieve. It’s pretty much impossible to FORCE someone to see what you are seeing when you read a text. You have to try so hard, repeat words, and yell sometimes to communicate a visual message with just words. That’s the beauty of text: people see different things when they read it. It’s very frustrating to reach a common ground. I think the overall effect, and the difficulties involved, emulate the process of translation between text, speaking, and images.

CODE https://github.com/g2eleni/Final-Project

Final Project Progress

I’ve been working on getting an illustration and a piece of writing done. The different components of the illustration correspond to different words in the piece of writing. Right now I haven’t got the voice activation working yet, but when you click on different words the visuals appear on screen.
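As a rough illustration of the click version (again, a stripped-down sketch, not the actual progress code linked below), each trigger word can be a span that names the image it reveals; the markup and names here are assumed.

```javascript
// Sketch of the click-to-reveal step, assuming markup like:
//   <span class="trigger" data-image="moon">moon</span>
//   <img id="moon" class="hidden" src="moon.png">
document.querySelectorAll('.trigger').forEach((word) => {
  word.addEventListener('click', () => {
    document.getElementById(word.dataset.image).classList.remove('hidden');
  });
});
```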

These are the first two verses of the poem with the words unclicked, half clicked, and all clicked.

Here’s the code.

https://github.com/g2eleni/Final-Project-Progress

 

Now I need to work on the voice activation, fix some of the glitches in the text-to-image interface, and probably add some color too.

Characteristics

Computers were first created as a tool of order. They were designed to streamline complicated processes by putting the responsibility of keeping track of many things in the computer itself. It was all centered mainly around calculations.

Charles Babbage, a 19th-century mathematics professor, created the first computer to solve mathematical formulas and print the results. It wasn’t so much capable of solving an equation independently, but it offered the user various functions to facilitate the process. Since solving a math problem required many different steps, this allowed the user to streamline the various sub-calculations.

The hard thing about communication, I think, is the need for instinctive responsiveness and emotional nuance. When you have a conversation with someone, there isn’t time to plan every single aspect. You have mere seconds to formulate a verbal response, so the whole experience is sort of like a giant improvisation. Beyond that, though, a huge part of communication (still in the context of a conversation) is inflection, tone of voice, facial expressions, body language, and so much more. There is an entire language beyond just words that we are incapable of planning. For the most part we can utilize these other channels of communication by learned instinct, but it still presents difficulties.

It can be hard even just to congratulate your friend on an award they just received. It should be a simple word in theory, but maybe you’re a little jealous of their success, and something in your tone doesn’t come across as genuine. Suddenly there’s tension between the two of you. Even if you mean only to communicate your joy for them, it is so hard to control every aspect of how you’re saying things. Uniting your entire being to communicate a message with barely any preparation, though we’ve been practicing all our lives, is hard.

Computers, here, are capable of adding a sense of organization to this disorder. They can give us the time to plan and to control every variable. You can proofread, make corrections, add, delete, and tinker with your message until you are satisfied. This process is important beyond its technical function; it’s important as a way of thinking. Tackling a problem from the computer angle means you break it down into pieces, see what can be done to address each moving part, and see how all the pieces can function together to create a larger solution. Anil Dash addresses this, saying, “aside from simply teaching how programming works, we need to ensure that young people can understand the way that human concerns are translated into problems that computers can help solve.” He says that coding should be taught not just so kids can go into the computer field, but so that they will understand the thought process and be able to apply technological benefits to any field. “Every industry, every creative discipline, every line of work from farming to fashion, engineering to english, management to marketing, can be improved by including insights provided by being deeply technologically literate.” Computers, though they differ greatly from the complexities and disarray of standard communication, add a whole dimension that can be applied most broadly to problem solving, and they encourage an analytical mindset.

Computational media certainly adds many more tools to the resources we have available for artistic expression. The ability to process the environment around us and relay it in a digital fashion is certainly approaching how people function on an everyday basis. We take in stimuli and then formulate a response, which composes our output. In this way, computational media is representative of the human process of reaction. This should make it more adept at expressing internal life. However, it is still just a tool. The creator and the user have a fundamental role in how the media will function, how it will be presented, and whether it will resonate with the audience. I think, regardless of its capabilities, the ideas still come from the artist, and that will determine how much a piece of work can express your internal life.

Furthermore, I don’t think I identify with the sense of order in computational media. When setting up a program, everything has to be perfect to function; when I view some complex piece of code I know it is going through a very precise series of steps to get an output. I feel like my internal life has more chaos, and while computational media is adept at harnessing and organizing the chaos, it still can’t capture that wild disarray, the sense that anything can happen, and the ability to function despite the disorder.

Narrative and interactivity do go together. Not exclusively, but there is certainly potential for narrative with interactive qualities to have a niche in the larger world of story. The funny thing about narrative is that it is often used as an outlet for reader interaction in a way, with characters created and played out in an identifiable way that lets the audience “enter” the story. I took a class on story and we learned that the more this gap is created for readers to come into the world and feel a part of it, the more successful the narrative is. So, following that logic, a narrative in which the audience can actually make decisions and influence the outcome should be the ultimate goal.

Yet artists mostly want control over the way their piece will go. It may be engaging for the audience, but if too much is put out of the artist’s hands, how much of it is theirs in the end? I think some interactivity is great, but it can be helpful to control possible outcomes. Like in those choose-your-own-adventure novels we read as kids: it seemed like anything could happen based on the pages we flipped to, but the author had written the whole thing anyway. This way it’s still his work. If the viewer is given infinite capability, it no longer becomes an artwork, or a narrative someone else has created; it instead becomes a tool.

Computational media in itself is a tool. It can be used for many means, and though there are certain limitations in its strict structure, it can be used very creatively. The important thing to remember is how basic the concepts are, and how they go beyond just the computer world. It is a way of thinking, a way of externalizing the most calculative of the mind’s basic operations. Variables, arrays, functions, and for loops can all be ways of thinking.

In this way, computational media can be a unifying force. Though it is used as a tool to create, exposing the process could show the thought behind an artwork and help us realize the most fundamental ways the artist put it together. I know it’s not usually considered good form to break things down to their smallest parts and see how they work. You look at the whole painting, not the brushstrokes. But sometimes the world is so complicated, and the messages we want to convey are even more so. Finding ways to think about these messages and convey them in simpler ways could be a streamlined way to communicate effectively, and possibly to start solving the problems being considered, one step at a time.

As Jack Kerouac, a long-winded and verbose novelist, once said, “One day I will find the right words, and they will be simple.”

 

Folgers Piano

I wanted to work more with button-response-type scenarios because my final project calls for this type of interaction. Though I will eventually work with sound as the trigger, I am currently working with clicking as the trigger to elicit a response. Here I tried to use the p5.sound library so that when you click on the buttons you get different sounds. It’s kind of like a virtual piano.

Unlike a piano, though, I didn’t organize it in ascending tones. I determined the notes you need to play the Folgers theme song by listening to it a bunch of times, and came up with this:

G A B B C G A

G A C E D C

Then I put the buttons in that order, so if you click them with the right rhythm you can play the Folgers jingle. It’s also coffee colored and has coffee icon keys. I think Folgers would be proud of this as an advertising tactic. You thought you were going to play an ascending scale? Just kidding, it’s the Folgers jingle. You can also easily play it backwards to try and find a hidden message.
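For a sense of how the pieces fit together, here’s a rough sketch along those lines rather than the exact code linked below: each button sets a shared p5.Oscillator to its note’s frequency with midiToFreq. The MIDI numbers are my assumption that the notes above sit around G4 to E5.

```javascript
// Rough sketch: one oscillator shared by a row of buttons, each tuned to a
// note of the jingle. The MIDI numbers assume the G4-E5 octave.
let osc;
const notes = [
  { label: 'G', midi: 67 },
  { label: 'A', midi: 69 },
  { label: 'B', midi: 71 },
  { label: 'C', midi: 72 },
  { label: 'D', midi: 74 },
  { label: 'E', midi: 76 },
];

function setup() {
  createCanvas(400, 100);
  background(101, 67, 33); // coffee brown
  osc = new p5.Oscillator(440, 'sine');
  osc.amp(0);
  osc.start();
  notes.forEach((note, i) => {
    const btn = createButton(note.label);
    btn.position(20 + i * 60, 120);
    btn.mousePressed(() => playNote(note.midi));
  });
}

function playNote(midi) {
  userStartAudio();           // browsers want a user gesture before audio plays
  osc.freq(midiToFreq(midi)); // midiToFreq comes with p5.sound
  osc.amp(0.5, 0.05);         // quick fade in
  osc.amp(0, 0.4, 0.2);       // fade out shortly after
}
```

In the real thing the buttons would be arranged in jingle order rather than ascending, which is the whole joke.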

Code:  https://github.com/g2eleni/Sound

Sources (p5 sound examples and library)
https://p5js.org/reference/#/libraries/p5.sound
https://p5js.org/reference/#/p5.sound/midiToFreq
https://p5js.org/examples/sound-load-and-play-sound.html
https://p5js.org/examples/hello-p5-song.html

Contrary to what people keep asking me, I don’t like coffee.

 

Many Things

I decided to work with the organized code I made last week, but instead of making one image, I made many by including an array. When I thought about many things I was thinking about video, and how that’s just many images in order. I loaded a bunch of images (which were all frames of a short video clip) and one by one painted them on the screen with the pointillism code from p5.js:
https://p5js.org/examples/image-pointillism.html
This was sped up by my tinkering from last week. I set it up to cycle through the images automatically and recorded the screen as video in QuickTime. This of course created a very long, slow video, so I then put it in iMovie and sped it up; now when you watch it, it looks kind of like the original clip but all distorted and pulled apart into dots. It’s only vaguely recognizable. I think it would work better if I screenshotted each frame and put it all back together as a video afterwards, which might make it cleaner, but I thought this was a cool way of many things coming together to make one thing.
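Here’s a stripped-down sketch of the frame-cycling idea (not the exact code linked below); the filenames, batch size, and timing are placeholders.

```javascript
// Stripped-down sketch: cycle through an array of video frames and paint
// each one as random dots, pointillism-style. Filenames are placeholders.
let frames = [];
const numFrames = 10;
let current = 0;

function preload() {
  for (let i = 0; i < numFrames; i++) {
    frames.push(loadImage('frames/frame' + i + '.jpg'));
  }
}

function setup() {
  createCanvas(720, 400);
  background(255);
  noStroke();
}

function draw() {
  const img = frames[current];
  // Paint a batch of random dots sampled from the current frame.
  for (let i = 0; i < 200; i++) {
    const x = floor(random(img.width));
    const y = floor(random(img.height));
    fill(img.get(x, y)); // img.get returns [r, g, b, a]
    ellipse(x, y, 8, 8);
  }
  // Every few seconds, clear the canvas and move on to the next frame.
  if (frameCount % 300 === 0) {
    current = (current + 1) % numFrames;
    background(255);
  }
}
```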

Here’s the code:

https://github.com/g2eleni/ManyThings

Here’s the video:

Organized Dots

For this homework I was thinking of just making a bunch of organized dots, and then I started thinking about how all images are basically just a bunch of organized dots. There’s an example I found in the p5.js reference that takes an image and draws a bunch of ellipses on a canvas that eventually form the image. The order of dot placement is random and the size is based on the mouseX position. Here’s that link:

https://p5js.org/examples/image-pointillism.html

It’s cool, but my problem is it takes FOREVER for the whole image to take shape, and I don’t have that kind of time. SO I organized the code with functions and for loops so that instead of dot by dot, the image appears many dots at a time. I wrote a function for one dot, and then called that function inside a for loop in another function to create many dots. The more dots you ask to appear on screen per frame, though, the more the code lags, so I had to find a good balance. For instance, you could ask for 10,000 dots per frame and get a pretty well-formed image in a single frame, but you’d have to wait something like two minutes for it to appear, which isn’t fun.
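The batching boils down to something like this (a stripped-down sketch, not the exact code linked at the end of the post); the image file and the dots-per-frame number are placeholders to tune.

```javascript
// Sketch of the batching idea: drawOneDot() paints a single random dot from
// the source image, and draw() calls it in a for loop so many dots appear
// each frame. 'photo.jpg' is a placeholder for any image file.
let img;
const dotsPerFrame = 50; // more dots = image forms faster, but each frame is slower

function preload() {
  img = loadImage('photo.jpg');
}

function setup() {
  createCanvas(720, 400);
  background(255);
  noStroke();
}

function drawOneDot() {
  const x = floor(random(img.width));
  const y = floor(random(img.height));
  fill(img.get(x, y));
  // Dot size follows mouseX, as in the original pointillism example.
  const size = map(mouseX, 0, width, 2, 20);
  ellipse(x, y, size, size);
}

function draw() {
  for (let i = 0; i < dotsPerFrame; i++) {
    drawOneDot();
  }
}
```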

Unrelated: I also tried out the save frame command. If you press enter, the image on screen is saved to your downloads. This, however, is also on a lag; it takes a very long time to actually save and freezes the sketch while it’s happening. This whole project is fun but definitely conspiring to eat up all my time. Here’s the code:

https://github.com/g2eleni/Organized

Potentiometer Controls the Sheep

I built off the same code as last time, where there’s an alien spaceship chasing a sheep and the sheep gets levitated up and down. This time I made it so the levitation movement of the sheep is controlled by the potentiometer. I had to map the potentiometer reading to my canvas (I may have overshot it a bit, since the sheep can still disappear off screen) and attach the y movement to the reading from the potentiometer. It still needs more components to make it an actual game (objects with functions, a way to win, etc.). The potentiometer gives it a cool arcade feel, but I wouldn’t play it in an arcade yet. Maybe next week.
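The mapping step itself is tiny. Here’s a stripped-down sketch of just that part (not the actual game code linked below); since the real number arrives from the Arduino over serial, a sine wave stands in for the 0-1023 analog reading so the sketch runs on its own.

```javascript
// Sketch of the mapping step only. A sine wave stands in for the serial
// potentiometer reading (0-1023) that would normally come from the Arduino.
let sensorValue = 512;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(20);
  // Stand-in for the serial callback that would update sensorValue.
  sensorValue = 512 + 511 * sin(frameCount * 0.02);
  // Map the analog range onto the canvas height, with a margin so the
  // "sheep" can't slide off screen.
  const sheepY = map(sensorValue, 0, 1023, height - 40, 40);
  fill(255);
  ellipse(width / 2, sheepY, 40, 30); // placeholder sheep
}
```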

Code: https://github.com/g2eleni/AnalogControlGame

Final Project Ideas

I’ve always been interested in interdisciplinary work, and I heard a weird thing on NPR back when Leonard Cohen died that got me thinking about a project idea I was never sure how to execute (http://www.npr.org/2012/12/02/166112493/dozens-of-covers-later-hallelujah-endures). They said his famous song “Hallelujah” is usually sung in about 3-4 minutes, but when Cohen wrote it, it had close to 80 verses. Over the years hundreds of artists have covered it, and those that do just pick the verses that mean the most to them, creating many iterations of the same song.

This got me interested in the idea of some sort of artistic reserve, where the artist creates a wide selection of components that all work together, but the viewer participates in the piece by selecting the components that stand out to them the most. This leaves each viewer with a slightly different experience that should resonate with them on a higher level.

I’d like to apply that to images and writing, maybe with a piece of writing where different phrases have different components of an illustration attached to them somehow. The images would be called up by voice recognition software, so when a reader says a keyword out loud, the image appears on screen along with the text. Not all readers have to say all the words aloud, though, giving them the ability to customize the resulting image based on what they choose to say.

(I’m also very interested in speaking vs. reading, and whether reading aloud is a more emotional experience for some people: https://hbr.org/2011/10/the-benefits-of-speaking-aloud)

I’m not set on this one idea but I do want to do something with reserves, and viewer control over what artwork they end up seeing.

 

Motor and Movement/Self Portrait

I did my movement assignment for my roommate, who likes “cool decorations” and “outer space.” I made her a moving room decoration in the shape of a star using the servo motor. First I hooked up the servo motor and set it up to spin 180 degrees one way and then the other on a loop, with small pauses between rotations to give the appearance of gentle twinkling. Then I attached the long arm to the motor and put on the star I made (which is basically just a cereal box and paint) with duct tape. I mounted it on the wall and let it spin back and forth, and aside from the wire hanging out, clearly attached to an Arduino and my computer, it looks like it’s spinning by itself.

I also hooked up a button for practicality. If she doesn’t want the distraction of the (somewhat noisy) movement, she can hit the button and shut it off. The practicality is only useful in theory, though, because it’s still attached to an Arduino, which in turn needs to be attached to a power source, so as a decoration, having all that stuff around isn’t too great. The star is 3D and does a decent job concealing the motor, though, so if I get other, smaller parts it could work. All in all she was very happy with the effect for about 5 minutes before telling me to get my junk off her bed.

Code for the spinning star:

https://github.com/g2eleni/servo-motor-project/blob/master/servosketch_feb26b.ino

I also made my self portrait, which is very simple and slightly interactive. It was frustrating placing the squares and circles through trial and error because I couldn’t tell just by entering coordinates exactly where they’d end up on the canvas. I had to give each shape many adjustments to get it to the right spot.

I made my portrait so that when you click on the canvas it gets a little angry and upset, which I think is very accurate, because I just don’t really like people touching me or making touching movements anywhere near me. The canvas here is pretty much my “personal bubble,” and the 12-bit anger is pretty much real.

unclicked

Code for my portrait:

https://github.com/g2eleni/SelfPortrait/blob/master/sketch.js

 

Alien Game

This week for animation I tried to make a little game. It’s based on the classic alien/sheep scenario (which I realized too late is actually an alien/cow scenario), but basically there’s an alien UFO with lasers and a sheep running back and forth, and if you catch the sheep between the lasers it disappears. In theory it’s been vaporized up into the UFO. Sometimes it doesn’t disappear, it just flickers, which is a weird glitch I’m trying to work out. At some point I’d like to manage to levitate the sheep instead of just vanishing it.
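The “caught between the lasers” test is basically a rectangle-overlap check. Roughly, it looks like this (a sketch, not the game’s actual code, and it assumes the sheep and the laser beam are each tracked as an object with x, y, w, h):

```javascript
// Rough sketch of the catch test, assuming both the sheep and the laser beam
// are tracked as rectangles: { x, y, w, h }.
function sheepIsCaught(sheep, laser) {
  return (
    sheep.x < laser.x + laser.w &&
    sheep.x + sheep.w > laser.x &&
    sheep.y < laser.y + laser.h &&
    sheep.y + sheep.h > laser.y
  );
}

// Example use inside draw():
//   if (sheepIsCaught(sheep, laser)) { sheepVisible = false; }
```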

 

https://github.com/g2eleni/Alien-Game

Kind of a Small Theremin- Eleni Giannopoulos

I’ve always thought theremins were super cool, and I would say their whole concept is pretty ridiculous as far as stupid pet tricks go. A theremin is an instrument that you play without touching it. There’s a rod that senses how close your hand is and raises or lowers the pitch being projected based on distance. I guess you could play pretty much anything, but they sound wild, so they’re mostly used for old sci-fi-type sound effects. Think about the standard “an alien just appeared” noise and that’s probably a theremin.

Mine is a little different. I used the ultrasonic sensor that came with my kit, which detects distance. It uses an echolocation-type process, so to get the best result you have to have a flat surface directly in front of it. It regularly sends out an ultrasonic wave and measures how long it takes to bounce off the nearest object and return.

I used a conversion to read that number in inches and then assigned the different measurements a tone to play out of the speaker I hooked up. So if the object was 2 inches away, the speaker would play a low sound, but if it suddenly moved to 4 inches away, it would play a slightly higher sound.

True theremins have continuously changing pitches, which is what gives them their funky vibe, but when I tried using a map function to get this to happen with my own, I didn’t like the resulting sound. My ultrasonic sensor reading wasn’t steady enough, I think, so the tones kept jumping back and forth like crazy.

I stuck with certain tones at certain distances and tuned the instrument by looking up the hertz associated with the various notes in a scale. I gave it a range from C4 all the way to A5 on an F major scale (no chromatics), and I made a little paddle out of a playing card to give the sensor a flat surface and a steady reading.

It’s still a little glitchy, flickering between notes, but I can now play some songs. My set list includes:
auld lang syne
amazing grace
hey jude
loch lomond
my heart will go on
home on the range
danny boy
oh susanna
red river valley
the chorus of american pie
a barely recognizable version of total eclipse of the heart
christmas medley
2 really old greek songs
a not as old greek song
that one well known opera song

So basically everything you’d wanna hear at a concert.

Here’s the code:  https://github.com/g2eleni/UltrasonicSensorTheremin

Here’s me playing that one well known opera song:

Photophobic Switch- Eleni Giannopoulos

I made a fairly useless switch that’s a little bit of a self portrait. When it’s exposed to light, the LED turns red and blinks “SOS” in Morse code repeatedly until the lights are turned off or it is covered up, and when it’s dark enough it blinks “O OK.” This is pretty much how I function when the sun comes up in the morning: my body is like SOS until I wait for night to come again, so I can say O OK and go back to sleep.

I did this using the photoresistor. I hooked it up in a separate little circuit on the breadboard and, in my code, asked it to sense the amount of light in the room. Then I hooked up the RGB LED and said that if the levels were over 200 (which I guess is bright by my room’s standards), the R pin would go off and blink SOS. Then, if the levels were under 200, which is dark, the G pin would go off and blink O OK. I did the Morse code just with appropriately timed delays.

 

It’s hands free because you can wait for dark, or put your mouth over it or something. I guess it’s not a perfect “light” sensor, because you can fool it by covering it up with something, but then again you can fool me by covering me up with a blanket in the morning; I’ll go right back to sleep.

 

 

Subway Grate Powered Switch- Eleni Giannopoulos

When I started thinking about hands free switches, I might’ve taken it a bit too far and started thinking about people-free switches. I guess realistically there’s no such thing: there’s always some sort of human component, in the construction or the operation. But still, I was thinking of a way to take the control out of my hands completely.

I took Urban Arts last semester and we talked a lot about the mechanisms of New York City and the different byproducts of the way it functions. One of those, which I pinpointed as a useful force for my switch, was the gust of warm air that comes up out of the subway grates when the trains go by. I thought I could use that physical force to close the circuit.

To do this I knew I needed a light material that the air would lift. I shaped a plastic bag into a small dome with tape and by tying it. I also stuck some aluminum foil on top of the bag to be a lightweight conductor. This would be the first part of the switch. For the second part I built a cube-type structure around the bag with cardboard, clay, some brads, and these One Direction pencils my roommate gave me as a gag gift that I never knew what to do with (and now it’s perfect, ’cause technically electricity moves in one direction (!!!)).

The bottom of the base has a hole to let in the subway air, and the top has another piece of foil, so that when the bag is blown upwards it’s at just the right height and the two pieces of foil press together. I also added some clay as weight so that when the air stops, the bag sinks back down and the circuit opens again (the bag was surprisingly structural and wouldn’t collapse by itself, which is impressive for flimsy trash).

This was basically the switch. I tested it on the subway grate first, and all the other people on Broadway were a little upset and confused that I was sitting on a grate with a weird trash cube. Then I actually hooked up the wires, connecting the power source and resistor to a wire attached to the aluminum on the bag, connecting the LED to the aluminum on top, and then running another wire back to ground. So everyone got even more upset and confused when I sat on the grate with a weird trash cube that lights up every time the train goes by. I didn’t anticipate the strength of the draft, and I had to hold the whole thing down so it didn’t launch like a parachute, but that’s fine and easily fixed. (Consequently, I also couldn’t film it in action, because I couldn’t find anyone to hang around the grate with me, but I had my roommates help me film a replica of the situation with a blow dryer.)