Music: Melody Meets Motion

I’m working on a music composition project of “small songs” and am about 14 songs in. These are pieces less than two minutes long. Yesterday, I finished this one, with a guitar focus.

Peace (with six strings or more),
Kevin

Saturday Morning: Poems and Music

Exceptional Sounds

A few things emerged from a creative Saturday morning … the poem above is from a one-word prompt (“exceptional”); the comic poem comes from Grant Snider’s Comic Poetry Month daily prompts (“messy”); and the music track was something I tinkered with, liked, and completed, and the title (“In An Otherwise Odd World”) was strange enough to generate an interesting image via Adobe Firefly.

Words in Motion comic poem

Peace (making it),
Kevin

The Beatles’ Final Collaboration (Thanks To Machine Learning)

I remember reading something about Paul McCartney saying there was one more Beatles song in production, now that the Age of Artificial Intelligence was here, and to be frank, I thought: oh no. Please don’t let it be an AI John Lennon voice singing in the mix. Please don’t let it be an AI George Harrison guitar.

It isn’t.

Instead, as I learned when I watched this short documentary last night, it’s a song that Paul, Ringo and George tried to work on decades ago to honor Lennon, with his family’s permission, but the rough tracks that Lennon had recorded for a song he never finished were muddied, his soft voice buried under loud piano.

They gave up in the early 1990s. But with machine learning now available and film director Peter Jackson offering the technical skills, Paul realized that the algorithms and computing power could isolate Lennon’s voice and separate it from the rough mix Lennon had made, and once the voice was isolated, they could build a song around it.
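The core idea of pulling one voice out of a muddy mix can be illustrated with a toy frequency-domain mask. This is a deliberately simplified NumPy sketch, nothing like the actual neural separation tool Jackson’s team used: the sine-wave “voice” and “piano,” the sample rate, and the 400 Hz mask cutoff are all invented for illustration.

```python
import numpy as np

sr = 8000                                   # sample rate (Hz), toy value
t = np.arange(sr) / sr                      # one second of time samples

voice = np.sin(2 * np.pi * 220 * t)         # stand-in for a soft vocal
piano = 0.8 * np.sin(2 * np.pi * 523 * t)   # stand-in for the loud piano
mix = voice + piano                         # the "rough mix"

# Move the mix into the frequency domain
spec = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)

# Keep only energy in the band where the "voice" lives, zero the rest
mask = (freqs < 400).astype(float)
recovered = np.fft.irfft(spec * mask, n=len(mix))

# The recovered track correlates strongly with the voice, not the piano
corr_voice = np.corrcoef(recovered, voice)[0, 1]
corr_piano = np.corrcoef(recovered, piano)[0, 1]
```

Real source separation is far harder because voices and instruments overlap in frequency, which is why a learned model is needed rather than a fixed cutoff; but the shape of the problem, estimating a mask and applying it to the mixture, is the same.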

Harrison had passed away in the meantime, so along with Lennon’s voice, Harrison’s slide guitar leads were also added to the recording, with McCartney and Ringo Starr playing along, allowing the claim that this is the last Beatles song to be true, such as it goes.

The song gets released today (Nov. 2), I believe. The documentary is worth a look.

Peace (and Sound),
Kevin

Music: Rhythms and Rivers

I was tinkering around with loops yesterday and began to imagine a river as I was working on this piece. Thus, the title — Rhythms And Rivers — and the image, generated by Adobe Firefly AI.

Peace (and sound),
Kevin

In The Test Kitchen With AI (MusicLM)


I got invited into the AI Test Kitchen by Google to begin beta testing some early versions of their AI apps. The only one available to me at this point was MusicLM, which was fine, since I am curious about how text might be transformed into music by AI. (I’ve done various explorations around AI and music lately. See here and here.)

MusicLM was simple to use: write a short text describing a kind of music (instrument, style, etc.), add things like a mood or atmosphere, and it kicks out two sample tracks, with an invitation to choose the better one. This is a trial version of the app and testing platform, so Google is learning from people like me using it. I suspect it may eventually be of use to video makers seeking short musical interludes (though I worry it will put musicians and composers out of work).

I tried out a few prompts. Some were fine, capturing something close to what I might have expected from an AI sound generator. Some were pretty bad, choppy to the point you could almost hear the music samples being stitched together to make the file. Like I said, it’s learning.

The site does let you download your file, so I grabbed one, took a screenshot, and created the media piece above (here is a direct link). My prompt here was: “Electronic keys over minor chords.” (An earlier prompt, a solo saxophone, gave me a pretty strange mix, and I think I heard some Charlie Parker in there.)

Here is what the Google folks write about what they are up to with MusicLM:

We introduce MusicLM, a model generating high-fidelity music from text descriptions such as “a calming violin melody backed by a distorted guitar riff”. MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes. Our experiments show that MusicLM outperforms previous systems both in audio quality and adherence to the text description. Moreover, we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption.
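The phrase “conditional music generation as a sequence modeling task” can be made concrete with a toy. This sketch is nothing like MusicLM’s actual hierarchical architecture; the “conditions,” note tokens, and transition tables below are all invented. It only shows the shape of the idea: a text condition steers an autoregressive process where each token depends on the one before it.

```python
import random

# Hypothetical transition tables: the text condition picks which one is used.
TRANSITIONS = {
    "calm":  {"C": ["E", "G"], "E": ["G", "C"], "G": ["C", "E"]},
    "tense": {"C": ["Db", "G"], "Db": ["C", "G"], "G": ["Db", "C"]},
}

def generate(condition, length=8, seed=0):
    """Sample a note sequence one token at a time, conditioned on a text label."""
    rng = random.Random(seed)          # seeded for reproducible output
    table = TRANSITIONS[condition]
    note = "C"
    out = [note]
    for _ in range(length - 1):
        note = rng.choice(table[note])  # next token depends on the previous one
        out.append(note)
    return out
```

A real model replaces the hand-written tables with learned probabilities over audio tokens, and the condition is an embedding of the full text prompt rather than a dictionary key, but the generation loop is the same autoregressive pattern.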

I guess Google will be adding new AI-powered apps into the kitchen for testing. I’ll be curious.

Peace (and Sound),
Kevin

Slice of Life: Piano Crates And Imagination Rhythm

I’ve had time on my hands this past week, and so I’ve wandered into making a few tracks of music. Here are two songs from yesterday, each capturing a slightly different emotional spin from being stuck temporarily at home. (Note: I think the songs are best experienced in headphones.)


Piano Crates (link)


Imagination Rhythm (link)

Peace (Rhythm and Sound),
Kevin

Exploring Aspects Of AI Music Generation

Cartoon Music Machine by AI

Image collage created by AI via Bored Humans website

This post is really just an attempt to gather together some of the explorations I have been doing into the progress being made with AI technology for making music and sound. It’s a pretty strange world out there right now.

Some sites are starting to use inputted text from users to generate sound and music. Others are built so that the user has no agency to create music, only to experience songs based on some choices, like mood or genre or artist. None of it, to my ears, sounds authentically human (yet?).

Here are a few sites I’ve explored:

Riffusion takes a phrase of writing put into its engine and turns it into a sort of song, and the site features a rolling visual pattern designed to make the algorithmic patterns being used visible. Here is one that I did for a Daily Create featuring Riffusion, with the phrase: A haiku is a moment in time. See the details on how Riffusion works; it is pretty fascinating. (Free to use)

Google is developing its own text-to-music AI generator, called MusicLM, which takes inputted text and creates a moody, evocative soundtrack to go with the words. There are plenty of intriguing examples, although it seems Google is still working to figure out the copyright logistics of its database, where it has collected the sounds and loops its machine uses to generate soundtracks from text. Google also has the Magenta Project, another AI development that’s been around for a bit, and the Magenta site does feature some examples of how it is being used to merge creativity with machine learning. (MusicLM not yet publicly available other than samples; Magenta data can be used for projects, I think)

OpenAI — the group behind ChatGPT — has Jukebox on its drawing board, and the algorithms are being fed music and sound and song lyrics, and it is learning how to create music tracks in the styles of those artists. It’s a bit jarring, to me, to hear how the machine uses Ella Fitzgerald as its source. OpenAI also has something called MuseNet, which seems similar to Jukebox. (Not yet publicly available other than samples)

The Bored Humans website has an AI Music Generator that uses neural network learning to produce entirely computer-generated songs, with lyrics. None of it is much fun to listen to for any extended period of time, in my opinion, but the fact that it is being done at all is worth noting, and worth playing around with. They even host a Lyric Generator. (Free to use)

Soundraw is a site that generates music by mood and gives you a handful of song options to choose from. I found many of the tracks sounded the same, but I didn’t do a full exploration of the possibilities there. Here is a track for the mood of “hopeful” as an example. (Free to use, but requires an account for any download)

Melobytes is another odd one, but it gives you plenty of options to manipulate the sounds the AI generates as a sort of “song” from text, although every time I have used Melobytes, the song comes out pretty jarring and off-kilter to me. (Account required)

And I am sure there are others out there, too, and more to come, no doubt.
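A thread running through several of these tools, Riffusion especially, is treating sound as an image: a grid where rows are frequencies and columns are moments in time, which can then be turned back into audio. Here is a toy NumPy sketch of that idea; it is a hypothetical simplification, not any site’s actual method, and the image dimensions, frequency range, and tone placements are all made up.

```python
import numpy as np

sr, frame_len = 8000, 512            # toy sample rate and frame size

# A tiny "spectrogram image": 64 frequency rows x 32 time columns
image = np.zeros((64, 32))
image[10, :16] = 1.0                 # a lower tone in the first half
image[40, 16:] = 1.0                 # a higher tone in the second half

# Map each row to a frequency, then render each column as a frame of sines
freqs = np.linspace(100, 3000, image.shape[0])
t = np.arange(frame_len) / sr
frames = []
for col in image.T:                  # one column = one moment in time
    frame = np.zeros(frame_len)
    for f, amp in zip(freqs, col):
        if amp > 0:
            frame += amp * np.sin(2 * np.pi * f * t)
    frames.append(frame)
audio = np.concatenate(frames)       # 32 frames * 512 samples each
```

Real systems generate far richer spectrogram images with a diffusion model and use proper inversion techniques to recover natural-sounding phase, but the bright-pixels-become-tones mapping is the underlying trick.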

Whether this is all good for the field of music creation or not will be debated for a long time, I am sure. Just as ChatGPT and its field of chatbots have many thinking deeply about the art and creative act of writing, so too will the field of AI music generators have musicians wondering how the field of sound is being transformed by algorithms, and what it means to be an artist in music. (I also worry these AIs will put a lot of musicians who work on films, television, and other media out of work, just as the DJ put many live bands out of work for weddings and other gigs.)

Peace (and sound),
Kevin