ουτις (pronounced “OO-tis,” and romanized below as “Outis”) is a networked performance piece in which a computer-vision program reads a video as a score: it builds a data structure from what it “sees,” and that structure is then translated into sound and visuals. During a performance, one or more human performers participate in the process alongside Outis.
I am both a performer of improvised music and a composer of algorithmic music, and the goals of these two activities often seem at odds with each other. “Outis” is intended to bridge these two rather disparate methodologies by providing algorithmic constraints for otherwise free-form improvisation.

In “Outis,” the interpretation of the score is divided into layers. At the base is the source material: one or more video loops, or a live camera feed, which is “read” by a computer-vision program. The program attempts to “make sense” of the image by extracting features from it with a blob-detection routine (explained below). Blob detection makes certain information about the image available to the program, which then uses a rule set to decide which features deserve its attention, attempts to generalize that data into a musical form, and auralizes that form. The point at which the derived data is auralized is where the performer(s) can lay their hands on the process and, within the constraints of the system, improvise. Because the program has agency in the interpretation of the score, it can itself be seen as a performer. I go one step further in my conception of the performance situation, seeing the performance as the act of a single agent in which the human and machine elements participate for the duration of the performance act, an act that is at once algorithmic and improvised: algo-improvised.
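The layered process described above, blob detection, a rule set that chooses which features deserve attention, and an auralization of the result, can be sketched roughly as follows. This is a minimal illustration under invented assumptions (binary frames; the `rule_set` and `auralize` mappings are hypothetical placeholders), not the program actually used in “Outis”:

```python
def find_blobs(frame):
    """Label 4-connected regions of nonzero pixels in a binary frame;
    return one dict per blob with its area and centroid."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] and not seen[r][c]:
                # flood fill the connected component starting at (r, c)
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                area = len(pixels)
                cy = sum(p[0] for p in pixels) / area
                cx = sum(p[1] for p in pixels) / area
                blobs.append({"area": area, "centroid": (cy, cx)})
    return blobs

def rule_set(blobs, max_voices=3, min_area=2):
    """An example rule set: ignore tiny blobs, then attend only to the
    largest few (an invented policy, for illustration)."""
    kept = [b for b in blobs if b["area"] >= min_area]
    return sorted(kept, key=lambda b: b["area"], reverse=True)[:max_voices]

def auralize(blobs, rows, pitch_lo=48, pitch_hi=84):
    """Map each attended blob to a (MIDI pitch, amplitude) pair: vertical
    position chooses pitch (higher in frame = higher pitch), relative
    area chooses loudness. The mapping is again only illustrative."""
    if not blobs:
        return []
    biggest = max(b["area"] for b in blobs)
    events = []
    for b in blobs:
        height = 1.0 - b["centroid"][0] / max(rows - 1, 1)
        pitch = round(pitch_lo + height * (pitch_hi - pitch_lo))
        events.append((pitch, b["area"] / biggest))
    return events

# A toy 4x6 binary "frame" containing three blobs.
frame = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 0, 0],
]
attended = rule_set(find_blobs(frame), max_voices=2)
print(auralize(attended, rows=4))  # -> [(78, 1.0), (48, 1.0)]
```

In a performance setting, the events emitted by the final stage would be handed to a synthesis layer, which is the point where, as described above, the human performers intervene within the system's constraints.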