Creating this evolving collection of pieces is my way of studying a work, or a group of works, by a specific artist.
- ‘Piano Phase’ by Steve Reich
- ‘I am sitting in a room’ by Alvin Lucier
- ‘Piano Phase’ and ‘Drumming’ by Steve Reich
As an exercise to learn patterns in the SuperCollider audio synthesis software, I’ve emulated ‘Piano Phase’ by Steve Reich. I worked from the 1967 score republished in 1980 by Universal Edition (copyrighted, but available on the net).
The coding of the patterns tries to stick to the score:
- The number of bars for each phrase is decided randomly at the beginning of each phrase
- The left piano gives the cue four bars before the end of each phrase
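Those two scheduling rules can be sketched roughly as follows. This is a Python illustration, not the actual SuperCollider patterns; the function name and the bar range are invented for the example:

```python
import random

def phrase_plan(n_phrases, min_bars=8, max_bars=24, cue_bars=4, seed=None):
    """Sketch of the scheduling rules above: each phrase gets a random
    number of bars, and the cue falls four bars before the phrase ends.
    Names and bar ranges are illustrative, not taken from the piece."""
    rng = random.Random(seed)
    plan = []
    for i in range(n_phrases):
        bars = rng.randint(min_bars, max_bars)   # random length per phrase
        plan.append({'phrase': i,
                     'bars': bars,
                     'cue_at_bar': bars - cue_bars})  # cue 4 bars before the end
    return plan
```

In the real piece the same idea would live inside the pattern logic rather than a precomputed plan, but the shape of the decision is the same.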
There is an attempt to humanise the playing by adding some randomness using the following tricks:
- Random note trigger
- Random note release
- Slight detune of one of the pianos
- Randomised velocity patterns, quieter for the right hand
- Randomised accelerandos
- Non-linear (de)crescendos
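The per-note tricks can be sketched abstractly like this. Again this is Python rather than the actual SuperCollider patterns, and every jitter amount below is an invented placeholder:

```python
import random

def humanise(events, seed=None):
    """Apply the per-note humanisation tricks listed above to
    (time, midinote, velocity) events. All amounts are illustrative
    guesses, not the values used in the piece."""
    rng = random.Random(seed)
    out = []
    for t, note, vel in events:
        t += rng.uniform(-0.01, 0.01)             # random note trigger (+/- 10 ms)
        dur = 0.2 * rng.uniform(0.8, 1.2)         # random note release
        note += rng.uniform(-0.05, 0.05)          # slight detune (semitones)
        vel = max(1, min(127, vel + rng.randint(-8, 8)))  # randomised velocity
        out.append((t, note, dur, vel))
    return out
```

Run on one piano and not the other, the detune line alone already separates the two instruments audibly.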
Below is one possible performance of the piece. You can download the code and run it in SuperCollider to get your own unique performance (rename ‘.txt’ to ‘.scd’; WordPress didn’t allow the upload of SCD files).
Alvin Lucier’s ‘I am sitting in a room’ (YouTube) is probably one of the most iconic process-driven compositions. He recorded himself saying a short text, then played that recording back and recorded the sound coming out of the speakers. He repeated the process until his voice was replaced by a sort of feedback sound. Lucier recorded the original piece in his own kitchen at night to avoid any noise from the environment. For my own take, I wanted to let the environment in.
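The process itself is simple to model. The toy Python sketch below treats the room as a fixed impulse response and repeats the play-and-record cycle; it is purely an illustration of the idea, not a model of the actual recordings:

```python
def record_pass(signal, room_ir):
    """One pass of Lucier's process: play `signal` into the room
    (modelled here as convolution with an impulse response) and
    record the result, normalised as a recording gain stage would be."""
    out = [0.0] * len(signal)
    for n in range(len(signal)):
        for k, h in enumerate(room_ir):
            if n - k >= 0:
                out[n] += h * signal[n - k]
    peak = max(abs(x) for x in out) or 1.0
    return [x / peak for x in out]

def iterate_room(signal, room_ir, passes):
    """Repeat the play-and-record cycle; with each pass the room's
    resonances are reinforced and the original signal fades."""
    for _ in range(passes):
        signal = record_pass(signal, room_ir)
    return signal
```

Each pass multiplies the spectrum by the room's frequency response once more, which is why the room's resonant frequencies eventually dominate everything else.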
My first attempt happened in the kitchen of an old house in a village near Viseu, Portugal where I spent a few weeks holidays. I was away from my usual gear so I used my trusty H2n microphone and my laptop speakers. I also altered the text slightly to reflect the change in the process. You’ll clearly hear a rooster and and I kept the recorder’s handling noises, as they illustrate the progressive modification of the sounds.
For the next experiment, I automated the whole layering process so that I didn’t have to stop the recording to add the next layer.
In the setup on the left, the H2n feeds into the DAW with a one-minute delay, and the DAW plays what is being recorded through the monitors. We end up with a setup very similar to the original piece, but without any noticeable transition as the layers are added.
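A minimal sketch of that feedback loop, in Python, with the delay expressed in samples and a made-up gain standing in for the acoustic mic-to-speaker coupling:

```python
def run_loop(mic, delay, gain=0.5):
    """What the monitors end up playing: the live mic signal plus,
    `delay` samples later, whatever was played before, picked up again
    by the mic. `gain` is an invented stand-in for the acoustic
    coupling between the monitors and the H2n."""
    out = [0.0] * len(mic)
    for n in range(len(mic)):
        fed_back = out[n - delay] if n >= delay else 0.0
        out[n] = mic[n] + gain * fed_back
    return out
```

Because the feedback is continuous rather than pass-by-pass, each minute of the output simply contains one more layer than the previous one, with no audible seam.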
I’ve kept the first silent minute of the recording to illustrate how the process works, so feel free to jump ahead. At the beginning, we hear quite a lot of traffic, and I find it difficult to tell which sounds are live and which are repeated through the speakers. Around 5:50, you can hear a short conversation with my partner, and how it fades out one minute later as layers are added. As the recording progresses, a resonance grows at about 850 Hz. I added a dynamic EQ to cut that frequency and avoid saturation; it starts to kick in around minute 15. Be warned: there are a few loud transients, especially around 8:30.
This second setup makes me think that the effect in the original piece is akin to a slowed-down Larsen effect, caused by the electronics rather than the resonant frequency of the room. But I haven’t researched this any further.
This piece is my take on the early phasing works of Steve Reich, in particular Piano Phase and Drumming. It combines two samples of people laughing, taken from my earlier piece Contagion. Each laugh sample is broken into component pieces, and each piece stands for a drum beat. The piece is composed of three parts:
- First sample: progressive substitution of beats for rests, then progressive superposition of a second, phased version of the same sample.
- Progressive substitution of the second laughing sample and a second phased version.
- Progressive synchronisation of the second laughing sample with its phased version, then progressive substitution of rests for beats.
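The two basic moves behind all three parts, substitution and phasing, can be sketched on a symbolic pattern. This Python sketch is purely illustrative (the real piece works on audio slices, not symbols):

```python
def phase_pattern(pattern, shift):
    """Rotate a beat pattern by `shift` steps -- the basic move behind
    Reich-style phasing, where a copy of the pattern drifts ahead of
    the original."""
    return pattern[shift:] + pattern[:shift]

def substitute(pattern, n_beats, beat='x', rest='.'):
    """Progressively replace rests with beats: the first `n_beats`
    rest positions become beats. Running this with a growing `n_beats`
    builds the pattern up; swapping `beat` and `rest` thins it out."""
    out = []
    filled = 0
    for step in pattern:
        if step == rest and filled < n_beats:
            out.append(beat)
            filled += 1
        else:
            out.append(step)
    return out
```

Part one corresponds to `substitute` with a growing count followed by layering a `phase_pattern` copy; part three runs the same moves in reverse.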