Over the weekend I broke out my LEAP Motion Controller for the first time, hooked it up to Max, and started experimenting. I recorded a couple of videos of that experimentation, the second of which showcases some basic audio manipulation I did using the aka.leapmotion object and the grainstretch~ object:
Currently, I’m typing this blog entry in the electronic music studio at Tulane. I got in here around 22:30 and spent a good hour or so familiarizing myself with the space – the mapping of the mixing board, how that maps to the 8-channel speaker setup, and some Reaper basics (since at home I use Audacity), including how to control that 8-channel space. I had a tough time remembering some of what Rick showed me when he introduced me to the space last week, and 8-channel sound is a very new animal to me, so it took me a bit longer to figure out than I would have liked. Once I did, I started building the vocabulary of sounds I wanted in Reaper as a starting point.
After that, I got out of the lab, walked around a little in the quiet of the now-locked music building, and started some hard thinking. I thought about my weekend LEAP experiments and about my experience with the In The Grid concert here, and I tried to put into focus what I wanted to accomplish with this piece – mainly whether it should be a pure audio-only piece with no added elements, or an interactive one that potentially uses the LEAP as its interactive instrument and conduit. In short, I was trying to answer the big question: would using the LEAP for the piece ultimately enhance it or detract from it?
In thinking about it, there were a couple of important elements to consider. The first is one that I addressed in that blog entry – could I create a visual aesthetic that enhanced and supported what I was trying to accomplish with the piece rather than distracted from it? The second was: if I were to make the piece interactive, how much of it did I want actually controlled by the performer as opposed to predetermined? If I wanted, I could make the LEAP act purely as a trigger conduit – enter the field, trigger a cue; enter the field again, trigger another cue. The visual aspect of the piece in that case is more theatrical than functional. By contrast, I could have the LEAP control a great deal of the piece: the speed of my sounds, the pitch of my sounds, and directly how the sound moves around the 8-channel space.
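The trigger-conduit idea is simple enough to sketch. This isn’t how the Max patch would look – a Max patch is visual – but the logic can be expressed in a few lines of Python, with `hand_present` as a hypothetical stand-in for whatever presence data the tracking object reports each frame:

```python
# A minimal sketch of the "trigger conduit" idea: each time a hand
# enters the field, the next cue in a fixed list fires. The names here
# (CueTrigger, hand_present) are illustrative, not a real LEAP API.

class CueTrigger:
    def __init__(self, cues):
        self.cues = cues          # ordered list of cue names
        self.index = 0            # next cue to fire
        self.was_present = False  # hand state on the previous frame

    def update(self, hand_present):
        """Call once per tracking frame; returns a cue on field entry, else None."""
        fired = None
        # rising edge: the hand just entered the field
        if hand_present and not self.was_present and self.index < len(self.cues):
            fired = self.cues[self.index]
            self.index += 1
        self.was_present = hand_present
        return fired

trigger = CueTrigger(["cue-1", "cue-2", "cue-3"])
frames = [False, True, True, False, False, True]  # simulated presence data
print([trigger.update(f) for f in frames])
# → [None, 'cue-1', None, None, None, 'cue-2']
```

The edge detection matters: keying off the transition into the field, rather than mere presence, is what keeps a hand hovering in place from re-firing cues.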
Instinctively my brain rejected the idea of making the piece too interactive, because in my head there was a particular way I wanted the piece to sound – the pacing, the elements involved, all of those were fixed ideas. I didn’t want to make that variable, something left to chance and prone to error. I wanted it to sound the way that I wanted it to sound.
But then I thought about it as it related to my acoustic compositions, and to acoustic composition in general. One of the strengths of live performance, particularly in the classical realm, is that a single piece of music can sound very different even when the printed music is the same. One performer’s interpretation of Beethoven or Bartók can contrast drastically with another’s, and the idea that music in this form belongs not just to the composer but also to the performer and/or the conductor has always appealed to me, even when it has resulted in my music not sounding the way I initially conceived or wanted it to sound. If interpretation, imperfection, and spontaneity in live performance weren’t important, people would just stay home and be satisfied listening to their CDs or MP3s.
In that light, you can interpret the music on the page as a map to the music rather than the music itself. Because of that, it makes more sense to me to make this piece interactive – the elements I want in the piece become the map to the music, a map that is clear to the performer (I’m not planning on making the piece aleatoric), but one the performer can still own as much as a performer can own any other piece of music.
Once I made that decision, I had choices to make about how the piece itself would be controlled by the LEAP – how much would be controlled by the performer, and how I could use the LEAP in a way that leaves little room for performance error caused by counterintuitive use of the device, sloppy programming, or a lack of attention to detail and failsafes.
Currently my thinking is to still use palm data rather than finger data and to have most of the piece controlled by tracking X position and velocity. Y position and velocity may come into play too, and/or replace X if that plane proves too small. The palm data would control the piece by triggering entrances, controlling the basic playback speed of the samples themselves, and controlling how the sound moves around the 8-channel space. What it won’t control is pitch – once the piece reaches the middle of its development, where pitch becomes that much more important, the amount of error control I would have to program would be too much effort for very little return. Exactly how that section gets handled visually is still something I have to work out.
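To make the spatial part of that concrete, here is one way palm X position could map onto the 8-channel space: normalize the position, then equal-power crossfade between the two nearest speakers. Everything here is an assumption for illustration – the ±200 mm tracking range, the speakers treated as a line rather than a ring, the function names – not a LEAP or Max specification:

```python
import math

# Sketch: map a palm X position (in mm, range assumed, clamped) to
# per-channel gains across 8 speakers laid out as a line. A ring layout
# would wrap the neighbor index instead of clamping it.

def palm_x_to_gains(x_mm, x_min=-200.0, x_max=200.0, channels=8):
    """Equal-power crossfade between the two speakers nearest the palm."""
    pos = min(max((x_mm - x_min) / (x_max - x_min), 0.0), 1.0)  # 0..1
    spot = pos * (channels - 1)        # continuous speaker index
    lo = int(spot)                     # lower neighbor
    hi = min(lo + 1, channels - 1)     # upper neighbor
    frac = spot - lo                   # position between the pair
    gains = [0.0] * channels
    gains[lo] += math.cos(frac * math.pi / 2)  # cos^2 + sin^2 = 1,
    gains[hi] += math.sin(frac * math.pi / 2)  # so total power stays constant
    return gains

print([round(g, 2) for g in palm_x_to_gains(0.0)])
# → [0.0, 0.0, 0.0, 0.71, 0.71, 0.0, 0.0, 0.0]
```

The equal-power pairing is the detail that matters for a failsafe-minded patch: total power stays constant no matter where the hand is, and the clamping means a hand drifting out of range parks the sound at an edge speaker instead of producing garbage values.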
Time to start building some patches.