Shifting Signals I – more ideas

I’ve decided to name my Google Hangout piece Shifting Signals I. The piece’s creative and logistical shape is coming together in my head, so here’s a more formal thought document about it.

The piece was originally going to be about five minutes long, but now I think it’s more likely to be eight to twelve minutes because it’s going to be in three sections (though one movement). I haven’t quite decided exactly what the piece is going to be stylistically, but it will be more improv/cell based than standard-notation based. Part of that depends on what happens with the Transmitting Station described below, mainly how that station is used to manipulate audio.

Performer Space and Receiving Stations

There are three “receiving” computer stations, placed at hard stage left, hard stage right, and hard stage front. Each station is connected to the internet and has a camera/microphone setup that transmits video and audio to the web via the Google Hangout. The performer stands in the middle of the setup, facing one of the receiving stations depending on the section of music: the stage left station in the first section, the stage right station in the second, and the stage front station in the last.

To avoid feedback loops, the three receiving stations will have their computer volume muted (but their microphones on; otherwise the video effect doesn’t work).

Transmitting Station

The fourth computer station is the “transmitting” station. That computer will also be connected to the Google Hangout, but its video and audio feeds into the hangout will be muted, so the only thing it does is receive the hangout video and audio from the other three stations.

Physically, this station’s video out will be projected on a screen behind the performer. The audio out will also be connected through the recital hall’s sound system. Originally I wasn’t going to do this for fear of feedback loops given where the speakers are located in the recital hall at Tulane, but the sound screams to be manipulated to add depth to the performance, and it would be silly to restrict that based on one particular space when I want to have this performed in other locales. The speakers can be moved to a place where feedback isn’t going to happen without any real issues.

From a software perspective, I want to take the audio signal of the performer playing (from a microphone, not from the Google Hangout audio) and run it through an audio-manipulation program. Ideally I’d run it through Max/MSP to make it truly interactive and to build a UI that lets the performer start in the middle of the piece, monitor where things are happening, and have effects manipulated more precisely. But that might be out of scope for this project: I’d need to buy a new copy of Max/MSP, and programming the piece would take time during a period when I’m going to be incredibly busy. So instead I may opt for something different, either through Ableton Live or through Lisa.

Additionally, the Transmitting Station is going to put out its own audio like a standard “tape piece,” which will be used to create atmosphere and also help give aural cues for section changes. Depending on what software is being used, those section changes can be triggered by what the performer does, either by landing on a particular pitch or by controlling a MIDI trigger.
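As a rough illustration of the pitch-trigger idea, here’s a minimal sketch of how landing on a cue pitch could advance the piece to its next section. This is my own assumption about one possible approach (FFT peak detection on an audio buffer), not the actual software the piece will use; the cue pitch of A4 and the tolerance are made up for the example.

```python
import numpy as np

SAMPLE_RATE = 44100
TRIGGER_PITCH_HZ = 440.0   # hypothetical cue pitch (A4) -- an assumption for this sketch
TOLERANCE_HZ = 10.0

def dominant_pitch(buffer, sample_rate=SAMPLE_RATE):
    """Estimate the strongest frequency in an audio buffer via an FFT peak."""
    windowed = buffer * np.hanning(len(buffer))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def check_section_trigger(buffer, target=TRIGGER_PITCH_HZ, tol=TOLERANCE_HZ):
    """Return True when the performer lands on the cue pitch."""
    return bool(abs(dominant_pitch(buffer) - target) < tol)

# Simulated input: one second of A4 standing in for the performer's microphone signal
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
a4 = np.sin(2 * np.pi * 440.0 * t)
print(check_section_trigger(a4))  # True -> advance to the next section
```

In practice this kind of logic would live inside whatever environment ends up handling the audio (Max/MSP has built-in pitch-tracking objects, and Live can map MIDI triggers directly), so the sketch is only meant to show how little is needed to turn “land on a pitch” into a section change.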

Style of Piece

I’m not sure what the piece is going to sound like yet or what the logistics are. In my head it’s pretty minimalist and is more “cell” based or improv based rather than standard music notation. This is partially to avoid the logistics of music stands, because I don’t want to add those to the setup, particularly since the stage is going to be dark. If the piece is in three sections, those sections need some distinction to them, or at least an ABA sort of format. The tone for that will be set by the audio that accompanies the solo performer.

It would be pretty easy to create the audio entirely through Live’s instruments, but I want to add some sort of more traditional “sample” as well. One option is to take the DAT recordings I have of bowed crotales and put those into the mix (although I’d have to find a way to get them off the tapes, since I don’t own a DAT player). The other option is to record some clean clarinet/saxophone notes or find some online somewhere, or find some other atmospheric sound, such as riding on the streetcar, and manipulate the audio like I did a long time ago with my I-5 piece.

Now that I feel a lot more comfortable with how the setup of the piece is going to work, the actual musical conception will start to write itself. I’ll probably start writing musical material over Christmas break, with the intent of realizing the tape part and the big-picture structure through January, rehearsing an initial draft at the beginning of February, and then polishing it so that it’s ready for April.

I had put some thought into allowing people outside of the concert venue to be a part of the hangout as well, but that adds a variable I don’t want to deal with for this project. Maybe in Shifting Signals II.
