May 16

Minkowski Etudes: The Aftermath

It was about one year ago that I made the decision to write Minkowski Etudes as a work for solo trumpet and interactive electronics. Last week my performer Dylan premiered it in its entirety for his senior recital, and he also played it as part of the Southern Sonic Festival. The Max programming needs some final tweaking, and I may want to redo my cue structure using Antescofo (I have to decide whether I want to pay the annual IRCAM fee), but given that the bulk of the creative, notational, and programming work is now complete, I thought I’d write a quick retrospective about it.

First off, I found it fascinating to hear different people’s reactions during the premiere performances, primarily which parts of the piece they liked the most. Preferences were spread pretty evenly across all three movements, and all for different reasons; I think that makes the piece a success, because different parts can appeal to a wide variety of people.

At this point, I personally find that I dislike the second movement the most. Part of that comes from the technical difficulty of error-free execution – the primary reason I want to potentially use Antescofo in the first place – but another part is that the metronomic nature of the movement makes it structurally the most rigid and inflexible, limiting the performer’s ability to add their own musical expression and leaving little margin for mistakes. In the first and last movements, the electronics are the vehicle that puts the performer at the forefront, whereas in the second movement, the performer ends up being a vehicle for the electronics. That feels counter to why the piece is interactive in the first place – if I wanted it to be like that, I would have just created a tape accompaniment for the performer to play along to. I’m not sure what to do about that given the nature of the material, but it’s something I’m weighing before using that mechanic in future works.

The Max programming took a significant chunk of time, but the time invested was well worth it – I learned a lot about MSP, a side of Max that I had never really used before, and all of the patches I created for this project can be reused and adapted for future works rather than starting from scratch. Even so, it’s worth remembering that creating a work involving interactive electronics, with the kind of attention to detail I require as a fairly detail-oriented musician and programmer, at least doubles the amount of time and energy I would put into any other kind of composition.

That might seem like something that would discourage me, but it does quite the opposite. The work I did on this project, and the passion I had and still have for its final outcome, has helped me realize that I have a lot of unique things to say in the interactive electronic medium that could have real legs for my compositional career. I’m hoping that after tweaking the Max programming to make it as error-free as possible, I can get the piece performed in Oregon and Pennsylvania at my alma mater universities, but I also have ambitions to publish the work and have it played by other trumpet performers as well. If that happens, it could encourage me to devote more energy to the interactive electronic space and open up future opportunities and commissions from those who come across the work and find it valuable.

Where I go from here will take shape in the next few years. I’m close to closing a commission to write a wind ensemble piece for the spring of 2020 here at Tulane, which will be the first time I’ll have written for a large concert ensemble since beauty…beholder back in 2012. I’m also close to finalizing a commission for a percussion duet for the 2018–2019 school year. That one will likely be purely acoustic, as I already have a few conceptual ideas that are best suited to the purely acoustic space.

After that, I have the framework for a piece that I was originally going to make as a standalone digital audio work that I’m now inclined to turn into a piece for solo cello and interactive electronics, specifically for my colleague Elise, who plays with me as part of Versipel New Music. I originally wanted to do that next year, but given the scale of the wind ensemble piece, I’m now thinking I’ll have to put it off until the fall of 2020 or the spring of 2021. I’ve also been having some initial talks with a dancer/choreographer about possibly doing some collaborative work with her and interactive video. That has no timeline, but given that I would have to spend time learning Jitter, I imagine it would have to be 2021 or later.

The other thing I’m thinking about is taking the concepts I’ve put into Minkowski and turning them into a series of pieces – using similar interactive and creative concepts and some of the same Max work for other instruments, in the same way as Erin Gee’s Mouthpiece series or Berio’s Sequenza series. It would be a lot of fun to write a Minkowski for percussion and another for clarinet. We’ll see what happens as I let this piece germinate and start to market it. If people want to play it and it’s received well, then it will definitely happen.

Some of the Max programming mechanics I built for this work have made their way into my Kaizen YouTube series, and I’ll be posting at least one more video about it in the near future. For now, below is the most recent one, which covers my custom interactive cue engine.

Oct 24

The eight-year reconnection – Chain Factor to Universal Paperclips

Back in 2009 I was fairly obsessed with Chain Factor – a game by Frank Lantz that would later become Zynga’s Drop7. I got good enough at it to be a consistent name on the all-time ranked leaderboard, always trading top-10 scores with some other person whose user handle I can’t recall anymore, so I decided that I wanted to record a video of a decent run. The run took 22 minutes, and at the time YouTube’s maximum video length was 10 minutes, so I had to find a way to edit/speed it up.

That led me to create my first real video editing project, which I eventually titled Chain Factor Chaos:

It’s a pretty rough final product execution-wise, and conceptually I don’t like what I did with the first big “section” anymore (the first 3’40”), but I’m still incredibly happy with the rest of it leading up to the recap transition (roughly 3’40” to 9’00”). A part of me would love to take a second crack at it given the video editing chops I have now, but a) I don’t know that I still have access to the source video anymore, and b) if it came down to it, I’d rather do something new from scratch than rehash an old project.

In any case, when I posted my blog entry that talked about the project, Frank Lantz happened to come across it and commented on it saying how much he liked it. I remember feeling very touched (and, truth be told, a little overwhelmed) that he took the time to write to me. I wrote him an email to say “you’re welcome”, and we had a brief email exchange where he gave me more nice words about it. After that exchange, that was that.

Fast forward eight years later to yesterday.

Recently, a new browser game called Universal Paperclips has made the viral rounds. It’s what some people classify as an “idler”, a game type I’ve enjoyed playing in the past, so when my brother shared it with me, I said, “sure, I’ll give it a shot.” After I finally finished the game (which ended up taking a few days), there was an end credit line that said, “(c) 2017 by Frank Lantz”.

And I was like, “I recognize that name… oh! It’s the Chain Factor guy!” It took me a moment, but even after eight years I remembered who he was and the interactions we’d exchanged. So I found him on Twitter and said, “hey, I just finished your new game, remember me?” He tweeted back, “Of course!”, said he still found the video amazing, and that it was nice to reconnect. I told him that his game was great and that I was going to play it as part of a video game marathon for charity, and he tweeted a link to my charity page and also gave me a donation.

Such a random eight-years-apart reconnection with a damned awesome guy. I might start using Twitter more often, because it definitely shouldn’t be another eight years before we interact again.

 

Oct 12

Thoughts about my Buffer Loop Patches

I’ve created a Buffer Recorder and Buffer Loop engine for Minkowski Etudes for Dylan, and I’ve hit a small programming snag that means I might have to modify how I tag loops and how they end.

Some background first:

This is my BufferRecord patch:

It’s pretty straightforward – you give a command naming the buffer you want to record into, and it activates recording and starts a timer to track the length of time recorded. When you’re done recording, it turns off the Record function and stores the length in ms in a list that it can then reference for looped playback (otherwise it would loop the whole buffer, which I’ve set to a default of 20 s).
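For anyone who doesn’t read Max patches, here’s a rough Python sketch of the bookkeeping involved – the names are hypothetical, just to model the logic, not actual Max objects:

```python
# Hypothetical sketch (plain Python, not Max) of the BufferRecord bookkeeping:
# start a timer when recording begins, and on stop store the recorded length
# in ms so playback can loop just that region instead of the whole 20 s buffer.

import time

DEFAULT_BUFFER_MS = 20_000           # each buffer is allocated at 20 s
recorded_lengths = {}                # buffer name -> recorded length in ms
_record_started = {}                 # buffer name -> monotonic start time

def record_start(buffer_name):
    """Begin 'recording' into the named buffer and start its timer."""
    _record_started[buffer_name] = time.monotonic()

def record_stop(buffer_name):
    """Stop recording and store the elapsed length for looped playback."""
    elapsed_ms = (time.monotonic() - _record_started.pop(buffer_name)) * 1000
    # Clamp to the allocated size; anything past 20 s wouldn't have been kept.
    recorded_lengths[buffer_name] = min(elapsed_ms, DEFAULT_BUFFER_MS)
```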

This is my Loop patch:

There are a lot of small things going on there, but the relevant point for this entry is that the actual playback engine runs through that [poly~] object. [Poly~] is a way to “clone” a subpatch into multiple instances without actually copy/pasting those instances, which would necessitate routing signals to each one via cumbersome gates and switches. Everything is instead handled by a single [poly~] object with a definable number of “voices” that can be addressed all at once or individually. In this patch I have 16 voices. When a loop is activated, a counter iterates to the next [poly~] voice and all of the loop info is assigned to that voice – it always moves to the voice one higher than the one last used, so if the last voice used was 8, the next loop activation chooses voice 9.
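In pseudocode terms – a Python sketch with made-up names, not the actual patch – the voice allocation works something like this:

```python
# Sketch of the round-robin voice allocation: a counter iterates to the voice
# one higher than the last one used, wrapping after 16, and the loop
# parameters are assigned to that voice.

NUM_VOICES = 16
_last_voice = 0                      # 0 means no voice has been used yet
voices = {}                          # voice number -> assigned loop info

def activate_loop(buffer_name, start_ms, end_ms, speed):
    """Assign a new loop to the next poly~ voice in round-robin order."""
    global _last_voice
    _last_voice = (_last_voice % NUM_VOICES) + 1    # 8 -> 9, 16 -> 1
    voices[_last_voice] = {"buffer": buffer_name, "start": start_ms,
                           "end": end_ms, "speed": speed}
    return _last_voice
```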

Here’s the patch that exists inside of the [poly~] object: 

The [groove~] object is what actually plays the loop. I send in the name of the buffer to loop (which replaces the default buffer I created, “Buffer1”) as well as all of the variables for loop start/end time and speed.
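Conceptually, each voice’s playback behaves roughly like this sketch – my own model of what those variables do, not Max code:

```python
# Rough model (plain Python, not Max) of what each voice does with the loop
# variables: advance a playhead through the named buffer at `speed`, wrapping
# between the loop start and end points.

def next_position(pos_ms, start_ms, end_ms, speed, block_ms):
    """Advance the playhead by one block of time, wrapping inside the loop."""
    pos_ms += speed * block_ms
    if pos_ms >= end_ms:                       # passed the loop end point
        pos_ms = start_ms + (pos_ms - end_ms) % (end_ms - start_ms)
    return pos_ms

# e.g. a loop from 1000 ms to 2000 ms playing at double speed:
pos = 1950.0
pos = next_position(pos, 1000.0, 2000.0, 2.0, 100.0)   # wraps to 1150.0
```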

The problem is this: since I’m not personally tracking which [poly~] voice is being used for an individual loop (the counter just iterates to the next available voice), I needed an alternate way to find a specific loop within a voice instance so I could end just that loop as needed. I decided to use the buffer name (in this instance “1-01”) as the identifier, so if I send the command [LoopEnd 1-01], it finds the [poly~] voice that’s looping 1-01 by name and turns it off. Except what if I want to run multiple voices with the same buffer simultaneously and only want to turn off some of those voices later? Sending [LoopEnd 1-01] would turn off every voice holding a buffer named 1-01 at once, with no ability to deal with them partially.
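A toy illustration of the ambiguity, in the same hypothetical sketch style:

```python
# Suppose two voices happen to be looping the same buffer name.
voices = {3: {"buffer": "1-01"}, 7: {"buffer": "1-01"}, 8: {"buffer": "1-02"}}

def loop_end(buffer_name):
    """Name-based LoopEnd: ends EVERY voice holding this buffer."""
    for voice, info in list(voices.items()):
        if info["buffer"] == buffer_name:
            del voices[voice]        # stands in for muting that poly~ voice

loop_end("1-01")                     # kills voices 3 AND 7 -- there's no way
print(voices)                        # to end just one: {8: {'buffer': '1-02'}}
```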

There are a few different ways to address this. The quick and dirty method is to take any instance in which I want the same buffer material looped and write that material into multiple buffers, so that voices in the [poly~] never hold duplicate names. It’s sloppy, and it would require a ton of extra buffer memory depending on how many copies of a single buffer I wanted to create.

A more programmatically clean but inflexible way is to identify loops by poly voice only, with no concern for what’s stored there. That could create problems down the road if, say, one Record Buffer fails to trigger or the counter iterates wrong, so that sample 1 is no longer located in voice 1 where it’s supposed to be, and I accidentally activate or cut the wrong loop.

The most airtight but most difficult way to program is to somehow link the name of the sample with either the voice it’s connected to or its simultaneous iteration number, and then program LoopEnds to know which voice each simultaneous iteration is connected to, even if that voice is different in every performance. As I type this out, I may have a strategy to address this, but I still need to work out some of the details in my head to make sure it will work. If it doesn’t, I’ll probably go quick and dirty and hope that my laptop can handle the extra load. We’ll see what happens.
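For what it’s worth, here’s one way that strategy might look in sketch form – tagging each loop activation with the buffer name plus an iteration number, and mapping tags to voices. All the names are hypothetical, and the real version would live inside the Max patch:

```python
# Sketch of the "airtight" scheme: every loop activation gets a unique tag
# combining the buffer name with an iteration number, and a map from tag to
# voice lets a LoopEnd target exactly one instance even though the voice
# number may differ from performance to performance.

NUM_VOICES = 16
_last_voice = 0
_iterations = {}                     # buffer name -> times it has been looped
voices = {}                          # voice number -> loop info
instances = {}                       # tag like "1-01/2" -> voice number

def activate_loop(buffer_name, start_ms, end_ms, speed):
    global _last_voice
    _last_voice = (_last_voice % NUM_VOICES) + 1
    n = _iterations[buffer_name] = _iterations.get(buffer_name, 0) + 1
    tag = f"{buffer_name}/{n}"       # e.g. second loop of 1-01 -> "1-01/2"
    voices[_last_voice] = {"buffer": buffer_name, "start": start_ms,
                           "end": end_ms, "speed": speed}
    instances[tag] = _last_voice
    return tag                       # the cue list refers to tags, not voices

def loop_end(tag):
    """End exactly one loop instance, wherever it happens to live."""
    voice = instances.pop(tag)
    voices.pop(voice, None)          # stands in for muting that poly~ voice
```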
