Conducting Sound in Space

For a more detailed article, see Conducting Sound in Space under Writings.

Introduction

The goal of this project was to explore the potential of electronic music that combines production, performance and diffusion into a single integrated creative process. In pursuing this goal I hoped to develop intuitive ways of controlling sound in space that might be relevant not only for my own practice as a composer and performer but also for other artists facing the challenges of creating spatial electronic music within the context of live performance.

Background

Since 1995 I have explored the use of motion-tracking technology as a means of controlling musical parameters in live performance. This exploration began with the DIEM Digital Dance project, which focused on tracking the motion of dancers, allowing them to control musical elements. Custom motion-tracking hardware using flex sensors was developed, and two interactive dance works were created: Movement Study and Sisters. In 2008 I continued my work with interactive dance using other types of hardware, including camera-based technology and accelerometers, collaborating with dancers and choreographers in what was called The Pandora Project. The camera-based technology consisted of digital cameras mounted in front of and above the stage to track the movement of the dancers. I developed interactive software using the cv.jit (computer vision) library in the Max/MSP programming environment. To test mappings between movement and sound, I used my laptop computer with its built-in camera.
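The sketch below illustrates the general principle behind this kind of camera-based tracking: the difference between successive video frames is measured within zones of the image and turned into normalized control values. It is written in Python with NumPy purely for illustration; the actual system was built with cv.jit objects inside Max/MSP, and the 2x3 zone grid and the normalization are assumptions here.

    import numpy as np

    def zone_motion(prev_frame, frame, rows=2, cols=3):
        """Return per-zone motion values in the range 0..1.

        Both arguments are 8-bit grayscale frames (2-D NumPy arrays).
        The zone grid is an illustrative assumption; the actual piece
        used the cv.jit library in Max/MSP.
        """
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        h, w = diff.shape
        zones = np.empty((rows, cols))
        for r in range(rows):
            for c in range(cols):
                zone = diff[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
                zones[r, c] = zone.mean() / 255.0  # mean difference, normalized
        return zones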

Two Hands (not clapping)

In the process of testing and experimenting with this setup I found myself waving my hands in front of my laptop, controlling the sounds intended to be controlled by the dancers. It dawned on me that this activity was both enjoyable and musically interesting, with obvious parallels to historical electronic music interfaces such as the Theremin (1920) and The Hands (1984). I decided to use this system in a new composition, which led to Two Hands (not clapping) for solo performer and motion-tracking performance system, a work commissioned by the Dark Music Days Festival and premiered in Reykjavik in 2010.

No Water, No Moon

In 2011 I was invited to work with the sound system at the Royal Library in Copenhagen. The building, known as the Black Diamond, includes a large public space with a glass facade overlooking the harbor. Permanently installed in this space is a powerful 12-channel sound system with four large speakers (Meyer UP1) on each of the three levels and two subwoofers on the second level.

By coincidence, Two Hands (not clapping) used 12 independent audio channels, or voices, mixed down to stereo output. It seemed natural to route the 12 voices directly to the 12 loudspeakers in the Black Diamond to create a 12-channel version. In testing this setup I found the result fascinating. When I moved my hands higher, the sounds activated were routed to the upper speakers; when I moved my hands lower, the sounds activated were routed to the lower speakers. By moving my hands I could intuitively control not only which sounds I wanted to hear but also which of the 12 speakers I wanted to hear them from.
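The routing itself was as simple as it sounds: voice n played through speaker n. What made the coupling playable was that hand height selected which voices, and therefore which speaker levels, were active. A minimal sketch of that idea, with the normalized coordinate and the three-level grouping as assumptions:

    NUM_VOICES = 12   # voices in Two Hands (not clapping)
    LEVELS = 3        # speaker levels in the Black Diamond foyer

    def speaker_for_voice(voice):
        """Identity routing: voice n plays through speaker n."""
        return voice

    def level_for_height(y):
        """Map a normalized hand height (0.0 = bottom of the camera
        frame, 1.0 = top) to one of three speaker levels. The exact
        assignment of voices to heights is an assumption."""
        return min(int(y * LEVELS), LEVELS - 1)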

The experience of performing Two Hands on this 12-channel sound system inspired me to create a new, site-specific 12-channel work entitled No Water, No Moon, which was commissioned by the Danish Composers Union to commemorate its centennial anniversary and premiered at the Black Diamond on May 4th, 2013.

Wayne Siegel performing at the Black Diamond, Copenhagen Jazz Festival 2011

Motion-Tracking Hardware

The use of computer vision to control sound had become an intuitive means of musical expression for me. I wanted to expand this live composition environment to include live control of sound diffusion. Up to this point I had routed each of the 12 channels of my setup to one of 12 speakers. I wanted to be able to control sound diffusion, or live panning, using motion-tracking technology. My criteria for choosing hardware and software to control sound diffusion were 1) the system must be intuitive and fairly easy to learn to use, and 2) it must not inhibit or interfere with body movements already being used to control sound. I experimented with three different types of motion-tracking hardware for controlling sound diffusion.

Choice of hardware: two Hot Hands

After testing these systems I decided to use two Hot Hand controllers, one worn on the middle finger of each hand. The use of accelerometers did not interfere directly with the computer vision tracking already in use, and I found the interface to be stable and intuitive.

The Hot Hand outputs three controller parameters: X, Y and Z coordinates. I began experimenting with a single Hot Hand controller but found it difficult to map these three parameters independently to sound diffusion parameters. For example, rotating my hand changed at least two parameters simultaneously. For this reason I decided to use two Hot Hands, mapping only one parameter from each: the X-axis on my right hand and the Z-axis on my left hand. For my right hand, the “neutral” position was holding my hand with the palm facing left in relation to myself. By rotating my right hand counter-clockwise I could increase controller values; by rotating it clockwise I could decrease them. For my left hand, the neutral position was with the palm facing down. By raising my left hand (palm facing forward) I could increase controller values; by lowering it (middle finger pointing down) I could decrease them.
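In effect, the mapping discards two of the three axes from each sensor. A sketch of this reduction (the scaling and the -1..1 input range are assumptions; the actual mapping was done in a Max/MSP patch):

    def right_hand_control(x, y, z):
        """Use only the X axis of the right-hand Hot Hand. x is assumed
        to be a normalized reading in -1..1 that grows as the hand
        rotates counter-clockwise from the neutral palm-left position."""
        return (x + 1.0) / 2.0   # rescale to a 0..1 controller value

    def left_hand_control(x, y, z):
        """Use only the Z axis of the left-hand Hot Hand. z is assumed
        to grow as the hand is raised from palm-down toward palm-forward."""
        return (z + 1.0) / 2.0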

Choice of software

The concept that I chose was a simple one that I call rotational panning. Instead of thinking in terms of panning individual sound sources between speakers, I imagined the whole room rotating left and right, or back and forth. All 12 voices rotated as a group. Each of the 12 channels, or voices, of my setup was routed to one of 12 fixed speakers, and values transmitted by the two Hot Hand controllers were mapped to panning functions. When I rotated my right hand clockwise, all of the speaker positions rotated to the right, as if I were floating in a fixed position while the whole room rotated clockwise. When I raised my left hand, all of the speaker positions rotated backwards, as if the whole room were rotating around me from front to back.
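A minimal sketch of rotational panning for a single ring of speakers may help make this concrete. Equal-power panning between the two speakers adjacent to each rotated position is an assumption, as is the ring geometry; the actual implementation was a Max/MSP patch:

    import math

    def rotational_pan_gains(num_speakers, rotation):
        """Gain matrix that places each voice at its home speaker
        position rotated by `rotation` (fraction of a full circle).
        gains[v][s] is the gain of voice v in speaker s."""
        gains = [[0.0] * num_speakers for _ in range(num_speakers)]
        for voice in range(num_speakers):
            pos = (voice + rotation * num_speakers) % num_speakers
            lo = int(pos)                    # speaker at or below the position
            hi = (lo + 1) % num_speakers     # next speaker around the ring
            frac = pos - lo                  # how far toward `hi`
            gains[voice][lo] = math.cos(frac * math.pi / 2)  # equal-power law
            gains[voice][hi] = math.sin(frac * math.pi / 2)
        return gains

    # A quarter turn of the "room": voice 0 now sounds from speaker 3.
    g = rotational_pan_gains(12, rotation=0.25)

One controller value drives the rotation for left-right movement; the front-to-back rotation from the left hand would be a second instance of the same idea acting on a different axis of the speaker layout.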

Two Hands on

Intuitive control of sound diffusion is a complex issue. It can be difficult to imagine, realize or even perceive multiple audio sources moving in various patterns and at various speeds at the same time. Controlling complex spatial movement in a live situation can be a great challenge.

I had an opportunity to experiment with live diffusion in three very different spaces. My approach was experimental and site-specific. I viewed the multichannel sound systems embedded in these three spaces not as vehicles for linear sound reproduction but rather as acoustic environments, each with its own unique characteristics.

My first experiments with live control of diffusion took place at ZKM in September 2015. The Klangdom at ZKM is a small concert hall equipped with a digital mixer and 47 independent speakers arranged in four rings: channels 1-14 form an outer, lower ring; channels 15-28 a slightly higher ring; channels 29-36 a more centered, higher ring; and channels 37-42 an even more centered, still higher ring. Channel 43 is at the zenith, and channels 44-47 are subwoofers (one in each corner).
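Expressed as data, this layout might look as follows (channel ranges as described above; the dictionary representation is only an illustration):

    # Klangdom channel layout, 1-indexed channel numbers.
    KLANGDOM = {
        "outer_lower_ring": range(1, 15),    # channels 1-14
        "second_ring":      range(15, 29),   # channels 15-28
        "third_ring":       range(29, 37),   # channels 29-36
        "fourth_ring":      range(37, 43),   # channels 37-42
        "zenith":           [43],
        "subwoofers":       range(44, 48),   # channels 44-47, one per corner
    }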

The Klangdom at ZKM, Karlsruhe

My laptop computer was connected to the mixer in the Klangdom via a MADI interface, allowing direct access to all 47 channels from my Max/MSP patch. Much to my delight, this setup was up and running perfectly in less than an hour.

At ZKM I tested my concept of rotational panning using two different configurations. The first, which can be called 12-12 routing, used only 12 (out of 43 possible) discrete speakers; rotational panning consisted of changing panning positions among these 12. The second, which can be called 12-42 routing, used a total of 42 speakers plus subwoofers (only the zenith speaker was not in use). With 12-42 routing each of the 12 voices was by default routed to a single speaker but could be panned to six other speakers, allowing each voice to occupy any of a total of seven positions (original position, left front, right front, left center, right center, left rear, right rear).
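The two configurations can be sketched as routing tables. The concrete neighbor channels in the 12-42 case are not fixed by the description above, only the seven roles per voice, so the lookup below is hypothetical:

    # 12-12 routing: voice n is fixed to speaker n, and rotational
    # panning redistributes the 12 voices among these same 12 speakers.
    ROUTING_12_12 = {voice: (voice,) for voice in range(12)}

    # 12-42 routing: each voice has a home speaker plus six neighbors
    # (left/right front, left/right center, left/right rear).
    # `neighbors_of` is a hypothetical lookup derived from the hall's
    # speaker map; it returns the six pannable channels for a voice.
    def destinations_12_42(voice, neighbors_of):
        return (voice,) + tuple(neighbors_of(voice))   # 7 destinations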

After my visit to ZKM I had an opportunity to conduct further experiments at the Black Diamond as composer in residence at the Royal Library. The permanent 12-channel sound system there has 12 main speakers and 2 subwoofers hidden in the ceilings on the three levels of the main foyer, or atrium. The speaker setup is asymmetrical, in keeping with the asymmetrical architecture: the 12 channels form three trapezoids on three different floors or levels. Distances between the speakers in each trapezoid range from about 10 to 14 meters. Ceiling height is about 5 meters, with the second-level speakers about 11 meters above the ground floor and the third-level speakers about 17 meters above it. When performing I stand on a bridge on the second level overlooking the harbor.

Finally, I experimented with a 12-channel speaker setup at Symphony Hall in Aarhus, a hall with a seating capacity of 1,200 that was acoustically designed for symphonic music. There is no permanent sound diffusion system in the hall, so I was at liberty to place the 12 speakers wherever I wished. I chose a flattened setup using only two levels: 1) stage/ground and 2) balcony (the balcony surrounds the entire hall, including behind the stage, the sides and the rear). Speakers were placed on stage (a narrow stereo pair plus subwoofers), above and behind the stage (a wide stereo pair), two on each side of the audience on the ground floor, two on the side balconies and two on the rear balcony.

Symphony Hall, Aarhus

Conclusion and future work

Working with various types of motion tracking for controlling sound diffusion has provided me with insight and inspiration for creating spatial electronic music in the context of live performance. Based on my experiments, I found that two accelerometers controlling rotational panning could be combined with camera-based motion tracking to provide a flexible and intuitive interface for live diffusion. I ultimately chose this configuration for a new work for solo performer and motion-tracking system.

This work, entitled Ritual, employs both camera-based motion tracking, using the webcam of a laptop computer, and a pair of accelerometers, one worn on each hand. The webcam controls sound in two different ways: 1) altering the amplitude envelopes of continuous looped samples and 2) triggering single samples when movement in a given zone increases beyond a fixed threshold. The accelerometers control live rotational panning, using only one control parameter from each.
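A sketch of the two webcam behaviors, reusing the per-zone motion values from the earlier zone_motion example (the threshold value and the direct motion-to-amplitude mapping are assumptions):

    TRIGGER_THRESHOLD = 0.2   # assumed; zone motion above this fires a sample

    def control_step(zones, prev_zones, loop_amps, trigger):
        """One control-rate step of the (assumed) webcam mapping.
        zones / prev_zones: per-zone motion in 0..1 (see zone_motion);
        loop_amps: one loop amplitude per zone, updated in place;
        trigger: callback that plays the single sample for a zone."""
        flat, prev = zones.flatten(), prev_zones.flatten()
        for i, motion in enumerate(flat):
            loop_amps[i] = motion                  # 1) shape loop amplitude
            if motion > TRIGGER_THRESHOLD >= prev[i]:
                trigger(i)                         # 2) rising edge: fire sample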

The simplicity of this mapping has made the interface fairly easy for me to learn to use. In spite of this simplicity, I have found that complex sound textures can be created by combining and mixing 12 voices and that subtle and musically relevant multi-channel sound diffusion can be controlled during a performance. The potential of creating varied sonic textures and multi-channel panning by means of a few simple parameters controlled by motion tracking hardware and software continues to fascinate me, and this fascination has inspired me to create a new work that explores the idea of conducting sound in space.

Acknowledgments

I would like to thank the Royal Academy of Music in Aarhus, the Royal Library in Copenhagen and ZKM in Karlsruhe for supporting this project. I would also like to thank the Danish Arts Foundation and the KUV fund of the Danish Ministry of Culture for financial support.