Live visual performance, both in clubs and in art venues, has become popular in recent years, particularly with the advent of laptops with substantial real-time graphics capabilities. Nightclubs increasingly employ VJs—visual performers who project ambient visual imagery—to complement the auditory environment created by the DJ. Art venues and festivals have begun in recent years to program live cinema performances—audiovisual performances with seated audiences who experience the performance much as an audience would experience a concert. Contemporary performance contexts—particularly VJ performances in clubs—often situate visual projections in a subservient role as an accompaniment to music. The familiarity of other popular media, such as music videos and software music visualizers, has further solidified the notion in many people’s minds of real-time visuals as either a synesthetic device or an accompaniment to music—and of synchronization between audio and visuals as a central focus of this type of work. Yet a look at historical predecessors of today’s visual performance reveals a variety of approaches to the relationship between sound and visuals. The idea of integrated audiovisual performance goes back centuries; to write a truly comprehensive history, one would need to include opera, dance, theater, performance art, orchestral scores with visual components, and even amusement parks. If we narrow our focus to screen-based antecedents, a complete history would still need to consider films, television, and music videos; there will necessarily be many things left out. This text therefore focuses on major historical performance practices that can be considered direct predecessors to contemporary VJing and live cinema.
A common starting point in visual performance histories is the color organ; the concept goes back about 300 years, which makes its resemblance to some contemporary audiovisual performance tools all the more striking. A color organ is a device that projects areas of light, typically in a variety of colors. It is generally performed in a manner similar to that of playing a musical instrument, and it is often controlled by means of a keyboard. Historically, color organs were sometimes performed with music, but because a number of artists saw the output of the instruments as independent works of art, some color organ performances were silent. However, what is common to most color organs throughout history is that they correlate the performance of light to the performance of sound—whether metaphorically or literally. Some color organ developers have simply used music as a metaphor for the performance of visuals. But numerous inventors, artists, and theorists over the years have developed systems of direct correlation between colors and musical notes. Although various color organs were developed throughout the eighteenth and nineteenth centuries, most of these early inventors focused on the creation of the object itself. Early-twentieth-century color organ/light artists and inventors such as Mary Hallock-Greenewalt and Thomas Wilfred saw their work as twofold: development of an instrument and development of a distinct art form.
While artists continued to develop color organs into the mid-twentieth century and beyond, the cultural changes of the 1960s brought about a new kind of visual performance. Light shows were a manifestation of the social consciousness, communalism, and psychedelia of the era. They also quickly became a popular element of rock concerts and spread to countries throughout the world. Although some light shows were limited to liquid projections and could be performed by one person, larger shows were performed by ensembles that superimposed multiple projected images and effects on the screen. For example, the Los Angeles ensemble Single Wing Turquoise Bird typically projected 16-mm films, still images on lithograph, and liquids simultaneously, often with spinning color and strobe wheels placed in front of the other projections. Light shows stand apart from most historical visual performances in that they were ensemble performances, often with six to nine members projecting at the same time. Although they were normally performed as accompaniment to music, they could also be performed as the central element of a performance. As did some color organ artists, light show performers developed visual structures that were in themselves musical; this musicality was compounded by the interactions inherent in ensemble performance.
By the early 1970s, the availability of portable video equipment had given rise to video art. Meanwhile, analog synthesizers were becoming popular in music performance and recording. Video artists like Nam June Paik and Woody and Steina Vasulka were already exploring distortions of video signals through such means as holding magnets near the screen and creatively manipulating vertical and horizontal hold settings on television sets. A few video artists, including Paik (with Shuya Abe), began building analog video synthesizers, which were basically video equivalents of audio synthesizers, allowing one signal to be used to control another signal in real time. Video synthesizers were often used to alter live camera or videotaped sources, but could also be used in self-contained setups to generate abstract visuals. Analog, and later digital, video synthesizers were used by artists in studio production and occasionally were used in live performance situations. Stephen Beck’s Illuminated Music, for example, was performed in auditoriums throughout the United States in the early 1970s. However, the size and expense of analog video synthesizers prevented them from being widely adopted as performance tools.
Although analog video synthesizers remained too bulky to take on the road, in the 1980s and early 1990s video decks and some video mixing gear became portable enough to bring to live performances. The first personal computer video processing tools were also becoming available; for example, the NewTek Video Toaster allowed real-time mixing and effects to be performed on a desktop computer.
At roughly the same time, the Scratch Video movement emerged in the United Kingdom. Scratch Video artists such as Gorilla Tapes and the Duvet Brothers recorded clips from mainstream media and remixed them, generally as political commentary. Although Scratch Video work was edited in the studio rather than performed live, it was a major influence on later remix-based performance work. A related but distinct technique known as video scratching was created in the late 1980s by the American multimedia ensemble Emergency Broadcast Network (EBN). In video scratching, short clips from television programs or other found footage are edited rhythmically to give the effect of spoken lyrics to accompanying music. One of EBN’s best-known videos was a 1991 cover of the Queen song “We Will Rock You,” with lyrics assembled from clips of a speech about the Gulf War delivered by U.S. President George H. W. Bush. This piece gained international notoriety when the band U2 used it on their Zoo TV tour. In the late 1990s, electronic musicians such as the British DJ duo Coldcut began integrating live video mixing into their performances. Coldcut went on to release its own audiovisual performance software, VJamm, and has remained one of the best-known audiovisual performance ensembles.
By the early 1990s, raves and the house music scene had gained international popularity. The use of Ecstasy at these events had stoked enthusiasm for synesthetic experience—and the VJ scene emerged. Initially, VJs used simple equipment to play videos or produce abstract imagery in real time. By the turn of the twenty-first century, the accessibility and power of portable computer and video equipment had reached a level that made performing sophisticated live visuals at a rave, nightclub, or festival practical for a critical mass of artists. Although equipment varies widely, contemporary VJs’ rigs usually include computers, DVD players, and video mixers—though it is also common to see a VJ using nothing more than a laptop computer. External control interfaces such as MIDI or OSC controllers are often employed, and sometimes the beat of the music is used as a control (either through miking the house sound or through the VJ tapping a tempo into a computer interface). The visuals are typically projected onto a screen at the front of the venue—although video screens are sometimes used in place of or in addition to projection. Because the term DJ had long been used to refer to a performer who mixes audio tracks, the term VJ came to be widely used as a term for a performer who mixes video sequences. However, many VJs create original material live—either exclusively or in combination with remixed material. A number of VJs, such as VJ Miixxy (Melissa Ulto), include live camera feeds as source material, allowing the VJ to enhance the live presence of the visuals by including the audience and/or the performers in the visual mix (Peep Delish: VJ Miixxy—a Selection of Works). In addition to working with external video sources, many VJs generate real-time abstract imagery, reminiscent of earlier performances with color organs, liquid light shows, and video synthesizers. 
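The tap-tempo form of beat control mentioned above amounts to averaging the intervals between a performer's taps and converting the result to beats per minute. A minimal sketch in Python (the function name is illustrative, not taken from any particular VJ package):

```python
def bpm_from_taps(tap_times):
    """Estimate beats per minute from a list of tap timestamps, in seconds.

    A real tap-tempo control would also discard stale taps (e.g., after a
    pause of a few seconds) before averaging.
    """
    if len(tap_times) < 2:
        raise ValueError("need at least two taps to estimate a tempo")
    intervals = [later - earlier
                 for earlier, later in zip(tap_times, tap_times[1:])]
    average_interval = sum(intervals) / len(intervals)
    return 60.0 / average_interval

print(bpm_from_taps([0.0, 0.5, 1.0, 1.5]))  # taps half a second apart: 120.0
```

The resulting tempo can then drive clip switching or effect parameters in sync with the music.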
Because the VJ can layer and manipulate images extensively, the line between representational and abstract imagery can be blurred. In this regard, many VJ performances can be compared to the image collages of 1960s light shows. A significant difference, however, is that the VJ’s multilayered imagery is in most cases created by a solo performer rather than by the collaborative improvisation of an ensemble.
In club performances, the role of the VJ is generally seen as ancillary to that of the DJ; it is expected that the VJ’s performance will function as a visual accompaniment to the music. While the DJ typically performs onstage so that the crowd can see the performance, the VJ is rarely seen and frequently performs from a booth at the back of the house. Whether or not individual VJs find this treatment objectionable, many VJs see their mission as accompanying or visualizing the music. Finding this context as well as the club environment itself limiting, many visual artists have chosen to perform in a context known as live cinema.
Live cinema is typically performed in a setting that resembles both a music concert and a film screening. Performances often take place at art-related events and locations: for example, in a theater at an art festival. Unlike at a nightclub, where the music and visuals function as ambient entertainment, in a live cinema performance the audience is typically seated and focused on the performance. Because of this more contemplative reception model, performers often develop loose visual narratives over the course of the performance. As HC Gilje of the live cinema ensemble 242.pilots has noted, audiences in this setting often interpret their own narratives, even when these were not intended by the artists. Similarly, music in live cinema shows tends toward a linear development and is often composed specifically for the live cinema performance. In some cases the same artists develop both the music and the visuals—Ryoichi Kurokawa is one example. In other cases, there are musicians for the music and visual artists for the visuals, though close collaborations between the two are also common.
Live cinema and VJing are not mutually exclusive, and the boundaries are often blurred. Many VJs, for example, focus on developing narratives, not merely accompanying music. Further, the labeling of performances as either live cinema or VJing is by no means standard. For that matter, not all performance contexts are clear-cut: some audiovisual performances do not fit cleanly into either the live cinema or the VJ categories. A number of artists, such as VJ Oxygen (Olga Mink), move between the various audiovisual performance contexts.
In addition to the development of live cinema and VJing as genres, another hallmark of visual performance in the 2000s has been the proliferation of digital tools for real-time visual performance, both commercially produced and artist-produced. Commercial tools such as Modul8 and VJamm allow for easy mixing and triggering of video clips and require little or no programming experience. However, many visual performers develop their own visual tools because they desire more flexibility.
Visual performance software can be written in virtually any language, and a few artists—such as Dave Griffiths—develop their own languages. However, patch-based development environments, such as Max/MSP/Jitter and Pd/Gem, have become popular in recent years with visual performers. Originally developed as environments for real-time sound and device control, Max/MSP and Pure Data (Pd), both created by Miller Puckette, are graphical programming environments that function similarly to early analog audio and video synthesizers, which used patch cords to connect hardware modules. Thus, the Max or Pd programmer can develop a program by patching together pre-existing software objects; the programmer can also develop his or her own objects. Jitter and Gem add objects for visual processing to Max and Pd, respectively. Various factors influence whether an artist chooses to work with Max/Jitter or Pd/Gem. One major consideration is that Max is commercial software, now sold by Cycling ’74, whereas Pd is free, open-source software. Besides the obvious cost difference, many artists prefer to work with open-source tools, which empower the user community to revise and extend the software in any way they choose. Thus, while many visual performers find Jitter to be more powerful than Gem, there are many who use Pd/Gem out of a preference for open-source software. Yet the large number of available user-contributed objects for both Max/Jitter and Pd/Gem means that both projects are largely community-developed efforts.
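The patch-cord model described above can be illustrated in a few lines of code: each object transforms incoming values and forwards the result to whatever is connected downstream. The class and object names here are illustrative only, not the actual Max or Pd API:

```python
# A minimal sketch of the patch-cord metaphor behind Max and Pd
# (hypothetical names; not the real Max/Pd object model).

class PatchObject:
    """One box in a patch: applies a function and forwards the result."""

    def __init__(self, fn):
        self.fn = fn
        self.outlets = []              # downstream objects

    def connect(self, other):
        """Analogous to dragging a patch cord between two boxes."""
        self.outlets.append(other)
        return other

    def send(self, value):
        """Push a value through this object and on to its outlets."""
        result = self.fn(value)
        for obj in self.outlets:
            obj.send(result)
        return result

# Patch three objects together: scale -> offset -> sink.
received = []
scale = PatchObject(lambda x: x * 0.5)
offset = PatchObject(lambda x: x + 1.0)
sink = PatchObject(received.append)
scale.connect(offset)
offset.connect(sink)
scale.send(4.0)                        # received now holds [3.0]
```

Writing a new object, as the text notes, is a matter of defining a new transformation and patching it into the graph.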
Whereas patch-based environments are popular with many performers, other performers prefer to develop in more traditional text-based programming languages. The open source Processing language, initiated by Ben Fry and Casey Reas, while not originally geared toward performance, has become increasingly popular for developing live visual performance tools. There are also numerous artist-written tools using proprietary development environments like Flash, as well as tools that combine a proprietary development platform with an open source approach, such as Onyx.
In the latter part of the 2000s, visual artists have become increasingly interested in expanding their use of hardware interfaces for performance. Although video mixers have been employed in performance since predigital days, most computer-based visual performers’ setups have otherwise used standard computer interfaces (i.e., mice and keyboards), or repurposed electronic music or game controllers (e.g., keyboards, drum pads, joysticks).
However, in recent years, some dedicated devices for visual performance have begun to emerge. The Pioneer DVJ-X1, introduced in 2004, and its successor, the DVJ-1000, enable performers to scratch and loop video DVDs much as DJs do with audio CDs. The units allow audio and video to be mixed simultaneously by a single performer, often called a DVJ. In addition to commercial products, do-it-yourself visual performance hardware has also begun to emerge: the open source Tagtool, initiated by OMA International, can be built and programmed by a performer using instructions available on the Tagtool website. The Tagtool facilitates real-time drawing and animation for live visual performance.
Visual performers who put themselves into the performance, such as VJ Miixxy, prompt one of the questions that has arisen as laptop-based computer performance has gained in popularity: because the visual and sonic output of laptop performance often does not correlate with the visible physical gestures of the performer, what is the visual role of the performer? Musical performance on more traditional instruments might be considered audiovisual by nature, owing to the audience experience of watching the performers’ gestures integrated with the music. Laptop performers are now beginning to address the question of performativity. Some laptop musicians and visual performers are making their performances more gestural by using traditional computer music interfaces, like MIDI keyboards, or less conventional performance interfaces, such as Wii controllers and Tagtools. Other performers, such as those who practice livecoding, choose to project their screen interface as part of the show so that the audience can observe their actions on the screen. Still others, particularly visual artists, prefer that their performance actions be invisible so as to focus audience attention on the images and sound. Many VJs, DJs, and laptop musicians feel that projected visuals themselves take the place of watching a performer make music through gesture. Although most discussion of audiovisual integration in contemporary performance currently focuses on the relationship between the sounds and images generated by the performers, the visual relationship between the performer and the performance is likely to emerge in the near future as another important consideration in the audiovisual experience.
 Fred Collopy, “Color Scales?”, RhythmicLight.com, 2007, http://rhythmiclight.com/archives/ideas/colorscales.html.
 Steve Beck, Illuminated Music, http://www.stevebeck.tv/ill.htm.
 Bram Crevits, “The Roots of VJing: A Historical Overview,” in VJ: Audio-Visual Art and VJ Culture, ed. Michael Faulkner and D-Fuse (London: Laurence King Publishing, 2006), 15.
 Paul Spinrad, The VJ Book: Inspirations and Practical Advice for Live Visuals Performance (Port Townsend: Feral House, 2005), 23.
 Musical instrument digital interface (MIDI) is an industry-standard digital protocol, defined in 1982, that enables electronic musical instruments such as keyboards and controllers to transmit control signals (event messages) to compatible software or hardware. MIDI signals can be used to control the pitch of an audio synthesizer or the intensity of an electronic image by means of physical interfaces such as buttons, knobs, and sliders. Open Sound Control (OSC) is a newer communication protocol meant to supersede the MIDI standard, which many consider inadequate for modern multimedia purposes. The advantages of OSC over MIDI are primarily speed and throughput, data-type resolution, and Internet connectivity. OSC was developed at the UC Berkeley Center for New Music and Audio Technology (CNMAT).
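The note above describes OSC at a high level; its wire format is simple enough to sketch directly. The following encodes a minimal OSC message with a single 32-bit float argument using only Python's standard library (the address /fader is a hypothetical example; production implementations handle OSC's full type system):

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying one float32 argument."""
    return (osc_pad(address.encode("ascii"))   # address pattern, e.g. /fader
            + osc_pad(b",f")                   # type tag: one float argument
            + struct.pack(">f", value))        # big-endian 32-bit float

packet = osc_message("/fader", 0.5)            # 16 bytes, ready to send
```

The resulting bytes could be handed to any UDP socket and understood by an OSC-aware application.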
 In the 1980s, MTV popularized the term VJ as a title for its on-air hosts. There is often confusion between the two meanings of the term.
 Although Max and Pd are the most popular patch-based environments used by audio and visual performers, other patch environments, such as vvvv, which focuses on real-time video synthesis, and EyesWeb, which focuses on computer vision for gestural control, have been gaining popularity with visual performers. There are also various open-source graphics libraries for Pd in addition to Gem.
 The author of this text performs as VJ Übergeek, a VJ who puts herself into the performance onstage rather than onscreen.
 Commercial product developers have begun to recognize electronic musicians’ and visual performers’ growing interest in gestural interfaces. JazzMutant’s Lemur, for example, is a multitouch interface which lets performers slide their fingers along a touchpad rather than click and scroll with a computer mouse.
 Livecoding is the name given to the practice of writing software in real time as part of a performance.