This was an exercise in creating a bouncing effect. Grab the heads of the plants, drag them around the stage, let go and watch the bounce! The source code can be downloaded from Github:
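The effect itself is just a spring with damping: while a head is dragged it follows the mouse, and on release it is pulled back toward its rest point and allowed to overshoot. Below is a minimal sketch of that idea with a single hypothetical plant head; it is only an illustration, not the project code.

```java
// Minimal spring-and-damping sketch, assuming a single draggable "head";
// the actual project applies the same idea to several plants.
float headX, headY;        // current position of the head
float restX, restY;        // rest point the head springs back to
float velX = 0, velY = 0;  // velocity used for the bounce
float springK = 0.1;       // spring stiffness
float damping = 0.9;       // how quickly the bounce settles
boolean dragging = false;

void setup() {
  size(640, 480);
  restX = headX = width / 2;
  restY = headY = height / 3;
}

void draw() {
  background(240);
  if (dragging) {
    headX = mouseX;
    headY = mouseY;
  } else {
    // pull the head back toward its rest position and let it overshoot
    velX += (restX - headX) * springK;
    velY += (restY - headY) * springK;
    velX *= damping;
    velY *= damping;
    headX += velX;
    headY += velY;
  }
  stroke(60);
  line(restX, height, headX, headY);   // the stem
  fill(80, 160, 80);
  ellipse(headX, headY, 40, 40);       // the head
}

void mousePressed() {
  if (dist(mouseX, mouseY, headX, headY) < 20) dragging = true;
}

void mouseReleased() {
  dragging = false;
  velX = 0;
  velY = 0;
}
```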
I am building a tool to teach kids (and eager adults) about the basics of storytelling. A participant will be able to combine characters, simple actions, and environmental elements on a timeline and watch them play out in real time.
The experience will be rich with texture and lighting, giving it a homemade, old-time feel.
Below is documentation of the first-stage build, which demonstrates the look and feel of the environment and characters.
You can create characters by pressing buttons, lift them into the air, and dangle their legs. The Processing source code can be downloaded from Github: https://github.com/yparshina/ITP_1st_semester/tree/master/shadow_puppets
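For the curious, here is a minimal sketch of the leg-dangling idea, reduced to one hypothetical body and a single leg; the characters in the repository above are built differently, but the principle is the same: the foot keeps its momentum and is then constrained to stay one leg-length from the body.

```java
// Minimal "dangly leg" sketch: the body follows the mouse and the foot
// lags behind it, swinging as the character is carried around.
float bodyX, bodyY;          // character body (follows the mouse)
float footX, footY;          // free end of the leg
float prevFootX, prevFootY;  // previous foot position, used for inertia
float legLength = 60;

void setup() {
  size(640, 480);
  bodyX = footX = prevFootX = width / 2;
  bodyY = height / 2;
  footY = prevFootY = bodyY + legLength;
}

void draw() {
  background(30);
  bodyX = mouseX;
  bodyY = mouseY;

  // simple verlet step: keep the foot moving, add a little gravity
  float vx = (footX - prevFootX) * 0.95;
  float vy = (footY - prevFootY) * 0.95 + 0.5;
  prevFootX = footX;
  prevFootY = footY;
  footX += vx;
  footY += vy;

  // constrain the foot to stay one leg-length from the body
  float dx = footX - bodyX;
  float dy = footY - bodyY;
  float d = max(dist(bodyX, bodyY, footX, footY), 0.0001);
  footX = bodyX + dx / d * legLength;
  footY = bodyY + dy / d * legLength;

  stroke(255);
  line(bodyX, bodyY, footX, footY);   // the leg
  fill(255);
  ellipse(bodyX, bodyY, 40, 40);      // the body
  ellipse(footX, footY, 10, 10);      // the foot
}
```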
Next steps:
1) Flesh out the user experience and user interface roughly outlined in the sketch above:
- The user should be able to tell a story on the timeline but still have the ability to interact with the characters on stage. How will this work?
- What is the most intuitive timeline interaction?
- What are the best actions? They have to be simple enough for the characters to act out programmatically in two weeks of build time, clear enough for the user to understand from a symbol, and fun enough to make an engaging play (I’m halfway there).
2) Implement the “XY coordinates & color combo” technique for better mouse selection of irregular shapes (trees, deer horns, etc.); a minimal sketch of the idea follows this list.
3) Swap the Processing-generated character bodies for irregular PShapes
4) Complete environments
5) Build out the UI
6) Create a timeline
7) Program actions
8) Polish character behavior (both action- and user-driven)
9) Add one more character
10) Polish character motion (dangly legs should dangle better, add twitching ears and tails, add wings to the bird)
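Regarding step 2, here is a minimal sketch of the “XY coordinates & color combo” idea, with two hypothetical irregular shapes: everything is drawn a second time into a hidden buffer using flat ID colors, and the color of the pixel under the mouse tells you which shape was clicked.

```java
// Minimal color-picking sketch: shapes are hit-tested by reading back
// their flat ID colors from an offscreen buffer.
PGraphics picker;   // hidden buffer used only for hit-testing

void setup() {
  size(400, 400);
  picker = createGraphics(400, 400);
}

void draw() {
  background(255);
  // visible drawing: a "tree" and an irregular "horns" polygon
  fill(40, 120, 40);
  triangle(60, 300, 140, 120, 220, 300);
  fill(120, 80, 40);
  beginShape();
  vertex(260, 300); vertex(250, 200); vertex(300, 160);
  vertex(320, 220); vertex(340, 300);
  endShape(CLOSE);

  // hidden drawing: same geometry, flat ID colors, no strokes
  // (edge pixels may be anti-aliased, so only interior pixels are exact)
  picker.beginDraw();
  picker.background(0);
  picker.noStroke();
  picker.fill(1, 0, 0);                     // ID color for the tree
  picker.triangle(60, 300, 140, 120, 220, 300);
  picker.fill(2, 0, 0);                     // ID color for the horns
  picker.beginShape();
  picker.vertex(260, 300); picker.vertex(250, 200); picker.vertex(300, 160);
  picker.vertex(320, 220); picker.vertex(340, 300);
  picker.endShape(CLOSE);
  picker.endDraw();
}

void mousePressed() {
  color c = picker.get(mouseX, mouseY);
  int id = int(red(c));            // red channel carries the shape ID
  if (id == 1) println("clicked the tree");
  else if (id == 2) println("clicked the horns");
}
```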
This is an edit of some early tests for the Fragmentations project.
In my animation work, I often use After Effects to distort reality until it is broken down into lights and patterns. I decided to take this technique a step further and bring it into real time.
In Fragmentations, I create Processing sketches that break down a live webcam feed into abstract representations. I then film the result through distorting mediums such as glass vessels filled with various translucent liquids. The final product is a cycle of reality, transformed by technology, transformed by reality.
Below is video documentation of the actual Processing sketches. The files can be downloaded from Github: https://github.com/yparshina/fragmentations
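To give a sense of the structure of these sketches, here is a minimal, hypothetical example of breaking a live webcam feed into an abstract grid of fragments. It assumes the Processing video library and the default camera; the actual sketches in the repository are more involved.

```java
// Minimal fragmentation sketch: sample the camera on a coarse grid and
// redraw it as circles whose size follows the brightness of the feed.
import processing.video.*;

Capture cam;
int cell = 20;   // size of each fragment

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);  // assumes the camera supports 640x480
  cam.start();
  noStroke();
}

void draw() {
  if (cam.available()) cam.read();
  background(0);
  for (int y = 0; y < height; y += cell) {
    for (int x = 0; x < width; x += cell) {
      color c = cam.get(x, y);
      float b = brightness(c);             // brighter areas get bigger fragments
      fill(c);
      ellipse(x + cell / 2, y + cell / 2, b / 255.0 * cell, b / 255.0 * cell);
    }
  }
}
```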
Drawing with Processing
It was a neat brain exercise to think about colors and composition in terms of pure numbers. I now spend my mornings staring at objects in my kitchen, trying to guess their RGB values.
However, I am forcing myself to think and plan ahead in code rather than rely on trial and error alone. Placing the dots on the hill by eye took over an hour of adjusting x and y values. Using transparency could have saved some time on RGB tweaking, though doing it by hand was a very useful learning experience.
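As an illustration of both shortcuts, here is a minimal sketch with made-up shapes and values: the dots are placed along the hill in a loop instead of by eye, and the sun uses an alpha value in fill() so it blends with the sky instead of needing a hand-mixed RGB match.

```java
// Minimal sketch: computed dot placement and a translucent fill,
// with entirely hypothetical colors and geometry.
void setup() {
  size(640, 480);
  noLoop();
}

void draw() {
  background(180, 210, 240);                        // sky
  noStroke();
  fill(90, 140, 70);
  ellipse(width / 2, height, width * 1.4, height);  // the hill

  // dots spaced along the top of the hill, computed rather than hand-placed
  for (int i = 0; i < 12; i++) {
    float angle = map(i, 0, 11, PI + 0.4, TWO_PI - 0.4);
    float x = width / 2 + cos(angle) * width * 0.35;
    float y = height + sin(angle) * height * 0.45;
    fill(255, 80, 80);
    ellipse(x, y, 12, 12);
  }

  // a translucent sun: the fourth fill() argument is alpha, so the color
  // blends with the sky instead of needing a pre-mixed RGB value
  fill(255, 240, 120, 150);
  ellipse(500, 100, 120, 120);
}
```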