Rambling 01.05OCT2025

Splitting audio so that each line is treated as a totally independent instrument, literally an ensemble member serving as the “solo” representative of that particular sound. That signal could then be transformed into a MIDI stream and controlled completely independently of the originating audio stream. This would allow painting different timbres, possibly diametrically opposed ones, with a single note. This would be especially true with the advanced application of delays, reverbs, and reversal. It could be accomplished with software, hardware, or any combination of the two.
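For the curious, here is a minimal sketch of what the audio-to-MIDI leg of that idea might look like, assuming Python with librosa for pitch tracking and mido for writing the MIDI stream. The libraries, file names, note range, and tick resolution are all placeholder assumptions, not a finished instrument:

```python
# Sketch: take one isolated audio "line" (a stem), track its pitch, and
# emit a MIDI stream that can drive any timbre independently of the
# source audio. All file names and parameters here are assumptions.
import librosa
import mido
import numpy as np

y, sr = librosa.load("stem.wav", sr=None, mono=True)  # one "ensemble member"

# Frame-by-frame fundamental-frequency estimate for the line.
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

HOP_TICKS = 24  # arbitrary MIDI-tick resolution per analysis frame
prev_note = None
elapsed = 0
for hz, is_voiced in zip(f0, voiced):
    note = int(round(librosa.hz_to_midi(hz))) if is_voiced and not np.isnan(hz) else None
    if note != prev_note:
        if prev_note is not None:
            track.append(mido.Message("note_off", note=prev_note, time=elapsed))
            elapsed = 0
        if note is not None:
            track.append(mido.Message("note_on", note=note, velocity=90, time=elapsed))
            elapsed = 0
        prev_note = note
    elapsed += HOP_TICKS

if prev_note is not None:
    track.append(mido.Message("note_off", note=prev_note, time=elapsed))
mid.save("stem_as_midi.mid")  # route it at a softsynth, a Zoia, anything
```

Once the line exists as MIDI, it can be pointed at anything, completely divorced from the original audio.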

The use of guitar pedals on synths, the development of boutique boxes (I hesitate to simply call one of my favorites, the Empress Effects Zoia, a pedal), and the parallel development of the various “*-berry” Linux devices are all examples of using technologies in unintended ways with highly desired results (from an aesthetic/artistic perspective).

The marriage of proven and robust MIDI implementations with the increasingly sophisticated capabilities of digital technologies suggests a rich area for artistic exploration.

For those about to rock…

Whew! Improv isn’t just what you do musically, it’s what you do.

Period.

Last night, I played my first face-to-face live gig in a very long time. Streaming live from a studio on a platform like Second Life, where you can perform in your pajamas (if that’s your thing and you’re not doing video streaming), has its own challenges (does it ever!), but the audience rarely sees what is happening*.

So last night was an exercise in real-time damage control: improvising a solution that would make the audience smile and applaud (throwing money, underwear, or t-shirts is always optional).

https://www.facebook.com/emom.mtl/videos/786197931074043/

Eric Wrazen, master of ceremonies and no slouch on the synth himself, presided over the first EMOM performance of the Fall season at The Wheel Club here in Montréal. It was my first performance there and only the second time I’d been there for an EMOM event (I was lucky to have learned of it shortly after getting settled into my new apartment here). If I said it went off without a hitch, it would be a bald-faced lie.

It didn’t.

I think it was Dwight Eisenhower who said that plans mean nothing, but planning means everything. What I’d planned, while only mildly complicated considering all that was involved, and what actually happened were two quite different things. My original “blueprint” involved two main audio sources: the Arturia MicroFreak and the Modal Argon8m. I’d brought The Three Sisters (my three Empress Zoias, Amelie, Brigitte, and Chloë; yes, the instruments are named and labelled so it’s easier to keep track of what’s been programmed), a Boss RC-1 for some simple looping, and an RV-500 for reverb on the Argon8m. Finally, I’d brought a Focusrite Scarlett to interface the iPad with the Argon8m.

What I find continuously challenging with the Argon8m is the software. I am not a big fan of using a laptop in live settings after having a hard drive catastrophically fail just moments before a gig in Fort Worth many years ago (yes, I salvaged the performance and no one even knew what had happened). Rather than use a laptop, I’ve been using an iPad Pro, which brings its own issues (Gee, THANKS Apple for making it so damned difficult to interface externals with the iPad, ESPECIALLY the whole “Pro” series; another discussion for another day, but I’ve been using a Pro in one form or another since Generation 1.0, before I ever left the States). The MODALapp is pretty dependable on the computer, less so on the iPad, so it’s always a tiny gamble.

And last night, the gamble failed. Hard.

I could not get the Argon8m to send audio to the mix. For whatever reason, nothing from “that side” of the mix was working or sending output to the little mixer I had brought along. As I was the first person scheduled to perform, the pressure was on to get it sorted out, and time was working against me. Eric was gracious enough to start without me, and I frantically began pulling cables and reconfiguring the setup to work with the ’Freak and two of the Zoias. I managed to come up with a working arrangement and then it was “showtime!”

Give a listen to the video excerpt above; I have a WAV of the performance that I will be listening to in hopes of pulling some usable audio out of it later.

*At one point in the history of performances in Second Life, the question came up as to whether certain performers were actually “performing” live or simply having their avatar appear while a prerecorded track ran: the VR version of lip-synching. The controversy is still there and, to my mind as a performer, has cast a permanent shadow over the whole scene.

No Vaccines for this one…

“Language is a virus,” sang Laurie Anderson some years ago, riffing on William S. Burroughs’s “The Ticket That Exploded”, and one wonders: how does it infect us, and how do we spread it?

Does it lie dormant in our very DNA, awaiting only the right circumstances to trigger it, just waiting to explode into our consciousness? What would its trigger(s) be?

And then there’s the act of reading a book. Reading is staring at the hieroglyphics imprinted upon sheets of dead tree byproducts and hallucinating wildly — for hours on end! I thought this graphic below summed things up quite nicely:

[Click to see the article where I found this]

Now consider how we recruit/radicalize/train others to be able to self-hallucinate… and not just the “text”-based kind, but things like music scores and EVERYTHING we see with our eyes…

Most of this was triggered by the reports of hallucinating chatbots: given that they are based on LLMs or image databases, how does this happen?

And a happy end of June Thursday to you all!

[More on reading and hallucination: Two sources for the same paper]

1. https://www.sciencedirect.com/science/article/pii/S1053810017300314
“Uncharted features and dynamics of reading: Voices, characters, and crossing of experiences”

2. https://pmc.ncbi.nlm.nih.gov/articles/PMC5361686/
“Uncharted features and dynamics of reading: Voices, characters, and crossing of experiences”

“The Hubris of The DAW”

So another release on Bandcamp just went live this afternoon: messing about with Wotja as a generative source for MIDI sequences, playing with a stochastic sequencer (I’d forgotten how much fun THAT was), a bit of live performance on the Seaboard, and a lot of long delays with a touch of Valhalla reverb.
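For anyone who wants to tinker, a stochastic step sequencer of that general sort can be sketched in a few lines, assuming Python with mido for MIDI output; the port selection, pitch pool, tempo, and probabilities below are placeholder assumptions, not a recipe for the release:

```python
# A toy stochastic step sequencer: each step either fires a random note
# from a pitch pool or rests. All the parameters here are assumptions.
import random
import time

import mido

PORT_NAME = mido.get_output_names()[0]    # first available MIDI output
SCALE = [48, 50, 53, 55, 58, 60, 62, 65]  # a C-minor-ish pitch pool
STEP_SECONDS = 0.25                       # sixteenths at 60 BPM
FIRE_PROBABILITY = 0.6                    # chance that a given step sounds

with mido.open_output(PORT_NAME) as port:
    for _ in range(64):                   # 64 steps, then stop
        if random.random() < FIRE_PROBABILITY:
            note = random.choice(SCALE)
            velocity = random.randint(50, 110)
            port.send(mido.Message("note_on", note=note, velocity=velocity))
            time.sleep(STEP_SECONDS * 0.9)            # gate time
            port.send(mido.Message("note_off", note=note))
            time.sleep(STEP_SECONDS * 0.1)
        else:
            time.sleep(STEP_SECONDS)                  # rest
```

Point the output port at any synth and let the dice roll.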

I continue to find Arturia’s Pigments and Roli’s Equator 2 to be good tools to work with, especially when using Plogue Bidule. They all play nicely together.

So here’s a link and I hope you enjoy:

https://audiozoloft.bandcamp.com/album/the-hubris-of-the-daw

… you might recognize the artwork :).

Un bouquet de morceaux… (a bouquet of pieces…)

Just finished uploading a few audio morsels that you may enjoy. Do check out the link to the Bandcamp website and give a listen. Individual tracks are available as well as, of course, the album.

For the nerds who might want more context:

The tracks were recorded entirely “in software” using the Wotja Generative Music System with zero audio output, relying only on its MIDI output to drive either software synths or hardware. Currently, the hardware synth environment consists of a MODAL Argon8m, an Arturia MicroFreak, three Empress Effects Zoias, a norns shield, and a Behringer Crave. Due to my current geopolitical situation, no guitars were harmed in this recording session, and only software synths were utilized.

On the software side of things, I used Rogue Amoeba’s Audio Hijack to collect the audio output of the Arturia Pigments 6 and ROLI Equator 2 softsynths. That output was recorded in WAV format, then edited in Audacity to apply compression, fades, reverses, and panning before being exported to FLAC for upload to Bandcamp.

If you are curious about the equal length of each track, that is a result of having set Audio Hijack to record files of only 254 MB in size, creating roughly five-minute tracks; the actual timing worked out to 5 minutes, 49 seconds. Both Wotja and Audio Hijack can be set with timers, and Audio Hijack can record to specific file sizes. It’s a handy aspect of both tools, and I use it from time to time.
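As a back-of-the-envelope check, the size-to-duration math works out if you assume an uncompressed stereo WAV; the 96 kHz / 32-bit float capture format below is an assumption that happens to land near the observed track length:

```python
# Rough size-to-duration math for an uncompressed stereo WAV.
# The capture format (96 kHz, 32-bit float, stereo) is an assumption.
size_bytes = 254 * 1024 * 1024   # Audio Hijack's 254 MB cap
sample_rate = 96_000             # samples per second (assumed)
bytes_per_sample = 4             # 32-bit float (assumed)
channels = 2                     # stereo

byte_rate = sample_rate * bytes_per_sample * channels   # bytes per second
duration_s = size_bytes / byte_rate
minutes, seconds = divmod(duration_s, 60)
print(f"{int(minutes)} min {seconds:.0f} s")  # -> 5 min 47 s
```

The couple of seconds of difference from the observed 5:49 is plausibly file-header overhead or the exact way the size cap is applied.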

Is this AI music? Well…

A simple answer is that “it’s complicated”. In effect, I have used the software to generate a MIDI skeleton that drives a software synth, which I manipulate in real time, and then done a fair bit of post-processing: making decisions about panning, combining raw tracks (there are at least two tracks here that comprise multiple original WAV files), and applying reversals. So I hesitate to simply call it “AI” music; rather, I have used the generative nature of Wotja to perform as an ensemble member, wherein I am both fellow performer and conductor (as well as editor in the end).

Enjoy!

[Image: festive red and gold decorations hung across a pedestrian “passage”, shot at night]