Audiovisual Software Art
4 Mappings Based on Human Action: Instruments
A variety of performative software systems use participatory human action as a primary input stream for controlling or generating audiovisual experiences. These systems range from screen-based musical games, to deeply expressive audiovisual instruments, to puzzling and mysterious audiovisual toys whose rule-sets must be decoded gradually through interaction. In many cases the boundaries between these forms are quite blurry. Some of these systems are commercial products; others are museum installations or browser-based Internet experiences; and some projects have moved back and forth between these forms and contexts. What these applications all share is a means by which a feedback loop can be established between the system and its user(s), allowing users or visitors to collaborate with the system’s author in exploring the possibility-space of an open work, and thereby to discover their own potential as actors.
The category of performative audiovisual software games is extremely large, and is treated in depth elsewhere in this volume. Here I briefly note games that may also be intended or regarded as artworks, such as Masaya Matsuura’s Vib-Ribbon (1999), a rhythm-matching game, or art/game mods such as retroYOU r/c (1999) by Joan Leandre, in which the code of a race-car game has been creatively corrupted and repurposed. One particularly notable gamelike system is Music Insects (1991–2004) by Toshio Iwai, which functions simultaneously as a paint program and a real-time musical composition system, and which Iwai has presented in a variety of formats, including museum installations and commercial game versions.
Numerous audiovisual instruments have been created that allow for the simultaneous performance of real-time imagery and sound. Many of these screen-based programs use the gestural, temporal act of drawing as a starting point for constructionist audiovisual experiences. A pioneering example was Iannis Xenakis's UPIC (1977–1994), which allowed users to gesturally draw, edit, and store spectrogram images using a graphics tablet; a 1988 version offered performance and improvisation of spectral events entirely in real time.[20]

Whereas UPIC was developed to be a visually based instrument for composing and performing sound, other audiovisual performance systems have been explicitly framed as open works or meta-artworks: artworks in their own right, which are only experienced properly when used interactively to produce sound and/or imagery. A good example is Scott Snibbe's Motion Phone (1991–1995), a software artwork that allows its user to interactively create and perform visual music resembling the geometric abstract films of Oskar Fischinger or Norman McLaren. Motion Phone records its participant's cursor movements and uses these to animate a variety of simple shapes (such as circles, squares, and triangles), producing silent but expressive computer-graphic animations.[21] A related artwork, Golan Levin's Audiovisual Environment Suite, or AVES (2000), presents a collection of cursor-based interactions by which a user can gesturally perform both dynamic animation and synthetic sound, simultaneously, in real time. Based on the metaphor of an inexhaustible, infinitely variable, time-based, audiovisual 'substance' that can be gesturally created, deposited, manipulated, and deleted in a free-form, nondiagrammatic image space, Levin's system uses recordings of the user's mouse gestures to influence particle simulations, and then applies time-varying properties of these simulations to govern both visual animations and real-time audio-synthesis algorithms.[22] Amit Pitaru's Sonic Wire Sculptor (2003) likewise produces both synthetic sound and animated graphics from the user's mouse gestures, but shifts the representational metaphor from a 2-D canvas to a 3-D space populated by the user's ribbonlike drawings.[23] Josh Nimoy's popular BallDroppings (2003) departs from free-form gestural interaction, presenting instead an elegant mouse-operated construction kit wherein balls fall from the top of the screen and bounce off the lines the user draws with the mouse. Each ball makes a percussive, melodic sound whose pitch depends on how fast it is moving when it strikes a line. Nimoy articulately summarizes the hybrid nature of such work: "BallDroppings is an addicting and noisy play-toy. It can also be seen as an emergence game. Alternatively this software can be taken seriously as an audio-visual performance instrument."[24]
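Nimoy's velocity-to-pitch mapping can be made concrete in a few lines. The Python sketch below is an illustrative reconstruction of the general principle, not Nimoy's actual code: it mirrors a falling ball's velocity off a drawn line segment and maps the ball's speed at the moment of impact to a frequency. The reflection formula is standard vector geometry; the speed range and note range are invented for the example.

```python
import math

def reflect(vx, vy, ax, ay, bx, by):
    """Reflect velocity (vx, vy) off the line through (ax, ay)-(bx, by)."""
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length          # unit normal of the line
    dot = vx * nx + vy * ny
    return vx - 2 * dot * nx, vy - 2 * dot * ny  # mirror: v' = v - 2(v.n)n

def impact_pitch(vx, vy, low=36, high=96):
    """Map impact speed to a MIDI note: faster collisions sound higher.
    The linear mapping and the assumed maximum speed are fictions of
    this sketch."""
    speed = math.hypot(vx, vy)
    t = min(speed / 3000.0, 1.0)                 # normalize to assumed max
    note = low + t * (high - low)
    return 440.0 * 2 ** ((note - 69) / 12)       # MIDI note -> Hz

# One step of the toy simulation: a falling ball strikes a drawn line;
# the collision both emits a pitched event and redirects the ball.
vx, vy = 0.0, 600.0                              # ball falling at 600 px/s
print(f"ping at {impact_pitch(vx, vy):.1f} Hz")
vx, vy = reflect(vx, vy, 100, 400, 500, 420)     # ball bounces back upward
```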
Another genre of performative audiovisual software dispenses with drawing altogether, in favor of a screen space populated (usually a priori) with manipulable graphic objects. Users adjust the visual properties of these objects (such as size, position, or orientation), which in turn behave like mixing faders for a collection of (often) prerecorded audio fragments, as sketched below. This can be seen in Stretchable Music (1998), an interactive system developed at the Massachusetts Institute of Technology by Pete Rice, in which each of a heterogeneous group of responsive graphical objects represents a track or layer in a precomposed looping MIDI sequence.[25] Other examples of this interaction principle can be seen in John Klima's Glasbead (2000), a multi-user, persistent, collaborative musical interface that allows up to 20 online players to manipulate and exchange sound samples,[26] and, more recently, in fijuu2 (2004–2006) by Julian Oliver and Steven Pickles, whose adjustable graphical objects allow for even more dramatic audio manipulations.
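The "graphic objects as mixing faders" principle admits a compact sketch. The Python below is a hypothetical illustration of the general idea rather than the mapping of any particular work named above; the class, its field names, and the specific assignments (size to gain, horizontal position to stereo pan) are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    """A manipulable on-screen object tied to one prerecorded loop."""
    loop_name: str
    x: float      # horizontal position, 0..1 across the screen
    size: float   # on-screen scale, 0..1

def mixer_state(objects):
    """Derive per-track mix parameters from the objects' visual
    properties. The mappings (size -> gain, x -> pan) are assumptions
    of this sketch."""
    return {
        obj.loop_name: {
            "gain": obj.size,          # bigger object = louder loop
            "pan": obj.x * 2.0 - 1.0,  # left edge = -1, right edge = +1
        }
        for obj in objects
    }

scene = [SceneObject("drums", x=0.25, size=0.8),
         SceneObject("pad",   x=0.70, size=0.3)]
print(mixer_state(scene))
```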
The systems described above were designed for use with the ubiquitous but limited interface devices of desktop computing: the mouse and the keyboard. The use of richer interface devices, such as video cameras and custom tangible objects, considerably expands the expressive scope of instrumental audiovisual software systems, but it also pulls them towards the formats (and physical dimensions) of performances and/or installations. Finnish artist and researcher Erkki Kurenniemi's landmark DIMI-O (Digital Music Instrument, Optical Input) of 1971 synthesized music from a live video image by scanning the camera signal as if it were a piano roll.[27] David Rokeby's Very Nervous System, or VNS (1986–1990), explored the use of camera-based full-body interactions for controlling the simultaneous generation of sound and image. Other audiovisual software instruments have employed custom tangible objects as their principal interface, such as Audiopad (2003) by James Patten and reacTable (2003–2009) by Sergi Jordà, Marcos Alonso, Günter Geiger, and Martin Kaltenbrunner; both of these instruments use real-time data about the positions and orientations of special objects on a tabletop surface to generate music and visual projections.[28]
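Kurenniemi's piano-roll reading of the camera signal can be approximated in software. The sketch below is a caricature of the general technique, not a reconstruction of DIMI-O's actual hardware: a grayscale frame is scanned one column per clock tick, and each sufficiently bright pixel in the current column sounds a note whose pitch rises toward the top of the image. The brightness threshold and note numbering are assumptions.

```python
def scan_column(frame, col, threshold=128, low_note=36):
    """Read one column of a grayscale frame as a piano-roll time slice:
    each bright row sounds a MIDI note, with pitch rising toward the
    top of the image. Threshold and note range are assumptions."""
    height = len(frame)
    notes = []
    for row, scanline in enumerate(frame):
        if scanline[col] >= threshold:
            # Row 0 is the top of the image, so it maps to the highest note.
            notes.append(low_note + (height - 1 - row))
    return notes

# A toy 4x8 "video frame": bright regions become simultaneous notes as
# the scan position sweeps left to right, one column per clock tick.
frame = [
    [0,   0,   255, 255, 0,   0,   0, 0],
    [0,   0,   0,   0,   0,   0,   0, 0],
    [255, 255, 255, 0,   0,   0,   0, 0],
    [0,   0,   0,   0,   255, 255, 0, 0],
]
for tick in range(8):
    print(tick, scan_column(frame, tick))
```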
Works: Audiopad, Audiovisual Environment Suite (AVES), BallDroppings, DIMI-O (Digital Music Instrument, Optical Input), fijuu2, Glasbead, Motion Phone, Music Insects, reacTable, retroYOU r/c, Sonic Wire Sculptor, Stretchable Music, UPIC (Unité Polyagogique Informatique du CEMAMu), Very Nervous System (VNS), Vib-Ribbon
People: Marcos Alonso, Oskar Fischinger, Günter Geiger, Toshio Iwai, Sergi Jordà, Martin Kaltenbrunner, John Klima, Erkki Kurenniemi, Joan Leandre, Golan Levin, Masaya Matsuura, Norman McLaren, Josh Nimoy, Julian Oliver, James Patten, Steven Pickles, Amit Pitaru, Pete Rice, David Rokeby, Scott Snibbe, Iannis Xenakis
Socialbodies: Massachusetts Institute of Technology