My new piece for double-bell trumpet, baritone, electronics, and (optional) video will be premiered this weekend at ECLAT.
Over the last few years, my pieces have grown more and more intimate with the words and viewpoints of their performers. I’ve recently started asking each of them the same seven (or so) questions, and then obsessively listening to, transcribing (both text and music) and – normally – performing acts of composition on the results to create and recreate various material(s).
For this piece, I* took the answers and shuffled the words around in a nearly random way: all that was preserved was syntax. (Nouns were shuffled with nouns, verbs with verbs, and so forth.) The result is a text often very close to what was actually said (but never quite what was actually said) and yet filled with (almost) meaning:
…I imagine a quieted world. You are in your cave during the day. Home and hunting food. Is hunting successful or not? Try roots, berries, and sit by the fire with the clan. Tell stories those days, probably using things they all have: tools. Can they use their lips to vibrate and sound? This massive sound in this cave! I gave them magical powers.
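The shuffle described above (done here via ChatGPT, per the footnote) could also be sketched in ordinary code. This is a minimal illustration, not the author's actual method: it assumes a toy, hand-labelled part-of-speech lexicon in place of a real tagger, and shuffles words only within their grammatical class so every position keeps its part of speech.

```python
import random

# Toy part-of-speech lexicon, hand-labelled for illustration only.
# A real pipeline would use an actual POS tagger (or, as in the piece,
# ask a language model to do the tagging and shuffling).
POS = {
    "clan": "NOUN", "fire": "NOUN", "stories": "NOUN", "roots": "NOUN",
    "tell": "VERB", "sit": "VERB", "try": "VERB",
    "the": "DET", "by": "PREP",
}

def syntax_preserving_shuffle(words, pos, seed=0):
    """Shuffle words so that each position keeps its part of speech."""
    rng = random.Random(seed)
    pools = {}  # POS class -> list of the words in that class
    for w in words:
        pools.setdefault(pos.get(w, "OTHER"), []).append(w)
    for pool in pools.values():
        rng.shuffle(pool)
    # Rebuild the sentence: each slot draws a word of the same class.
    return [pools[pos.get(w, "OTHER")].pop() for w in words]

words = "tell stories by the fire sit by the roots".split()
shuffled = syntax_preserving_shuffle(words, POS, seed=3)
```

The result uses exactly the original words, in the original grammatical slots, but in scrambled order: close to what was said, never quite what was said.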
I find this a lot like memory, and a lot like life. And this “not the text” then became “not the score” in a sort of dialogue play that then became (little by little) more like music. I have to say, I had an enormous amount of fun as I wrote it.
We started today, and Marco and Ty are, unsurprisingly, utterly fantastic. I am so looking forward to the rest of this week!
*I did this using ChatGPT. I’m not exactly sure why I feel the need to clarify this, and this is categorically not a piece about AI. It was simply the fastest way to accomplish this process.