[French émotion, from Old French, from esmovoir, to excite, from Vulgar Latin *exmovēre : Latin ex-, ex- + Latin movēre, to move; see meu- in Indo-European roots.]
This project will involve using information acquired via Twitter to control physical bots.
The aim is to have bots that are driven by mental states that arise spontaneously rather than consciously.
It may be equally important to consider whether those moods arise from artificially created cause-and-effect circuitry that mimics biological reactions, or from emotions parsed from Twitter feeds into commands for kinesis.
This is an experiment that aims to explore these two areas: the function of anthropomorphism, and the consequent modes of control.
The ultimate goal is to create something that feels “alive” rather than constructed and based solely on Aristotelian physics (flawed?), and to explore the close ties between perturbations in physical motion and emotional states.
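The idea of parsing emotions from Twitter feeds into commands for kinesis could be sketched very roughly as below. This is a hypothetical illustration only: the mood keywords, motion parameters (`speed`, `jitter`), and mapping are all invented for this sketch, not drawn from any real sentiment library or Twitter API.

```python
# Hypothetical sketch: tweet text -> guessed mood -> motion parameters.
# All keyword lists and parameter values are invented for illustration.

# Invented mapping from mood to motion parameters for a bot.
MOOD_MOTION = {
    "excited": {"speed": 0.9, "jitter": 0.7},
    "calm":    {"speed": 0.3, "jitter": 0.1},
    "anxious": {"speed": 0.5, "jitter": 0.9},
}

# Naive keyword lists standing in for a real sentiment parser.
MOOD_KEYWORDS = {
    "excited": {"amazing", "great", "wow"},
    "anxious": {"worried", "nervous", "scared"},
    "calm":    {"peaceful", "quiet", "fine"},
}

def mood_from_tweet(text):
    """Guess a mood from tweet text by naive keyword matching."""
    words = set(text.lower().split())
    for mood, keywords in MOOD_KEYWORDS.items():
        if words & keywords:
            return mood
    return "calm"  # default resting state when nothing matches

def motion_command(text):
    """Turn a tweet into motion parameters for a bot."""
    return MOOD_MOTION[mood_from_tweet(text)]
```

In practice the keyword matching would be replaced by a proper sentiment analyser, but the shape stays the same: text in, a small set of motion parameters out, which the bot's motor layer then interprets.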
A few points I want to explore as I move forward:
Effective intelligence based on cause and effect behaviours triggered by either circuit/sensor or Twitter?
Importance of the bots being aware of each other/seeking each other?
Bots to act as one unit/is this beneficial?
Is there a way of outputting emotional information externally as a guide to spectators for the bots' mood?
Is the motion useful?
How much human involvement?
Clarity of commands, e.g. (“turn left” vs “Kurosawa's Hidden Fortress is great”)?
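The command-clarity question above could be probed with a toy classifier like the one below: explicit directives map directly to actions, while free text is flagged as ambiguous and would have to be interpreted as mood instead. The command vocabulary here is invented for illustration.

```python
# Toy sketch of the "clarity of commands" question: separate explicit
# directives from ambiguous free text. The command set is invented.

DIRECT_COMMANDS = {"turn left", "turn right", "forward", "stop"}

def classify_input(text):
    """Return ('command', action) for an explicit directive,
    ('ambiguous', original_text) for anything else."""
    normalized = text.lower().strip()
    if normalized in DIRECT_COMMANDS:
        return ("command", normalized)
    return ("ambiguous", text)
```

Anything classified as ambiguous could be handed to the emotion-parsing path rather than the direct-control path, which is one possible answer to the human-involvement question as well.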