I used mentor poems from my friend, Deanna, and wrote 25 poems, one each day from December 2 through the 25th (I wrote two that first day, to catch up, after realizing the first day had passed me by), and then designed this calendar of links to the mentor text poems, my poems, and the visual versions (each uses an AI-generated image based on the context or words of my poem).
My process: I would read each poem shared by Deanna each morning, mull over the mentor text, find a line or phrase, then build a poem off that idea. Next, I would go into Firefly AI and work to get an image based on my poem, and then use Pablo to layer the text of the poem with the AI imagery. Finally, I would come to my own calendar and update the day.
Last night, my eldest son, Colin, and his film-making friend, Lucas, hosted a free event at our city's old and beautiful Academy of Music. The event, Emerging Filmmakers of Western Massachusetts, featured a series of short documentaries and movies from local filmmakers. My son and his friend had two movies on the agenda: a music video and a documentary.
The theater with the big screen was packed with people, and it was a fun night but also a proud night for my wife and me as we watched our son showcase his talents and shine a light on other filmmakers, too. Plus, donations helped support the local food bank.
I was reminded, as we watched the movies, of how many short films my son and I made when he was young (and then his youngest brother, now in college studying film, took up the baton years later; in fact, his movie was shown in the same theater as part of a kids' film festival many years ago).
It was a lovely evening, with many old friends and families reconnecting.
Each year, I try to do at least one or two different things with a song for the holidays that I wrote some years back with my friend, John Graiff. This year, I tried an acoustic version, with some slight musical and lyric changes, and last night, my friends Bob and Greg joined in playing and recording the song.
My neighbor, Greg, and I have been practicing my Gift of Peace for guitar and bass, and this shot of the tree, with the guitar and music stand, seemed quite lovely. I used it this morning for a Daily Create.
I’m doing a five-week, every-Monday remix of a piece of art from the earlier days of DS106. (Read more about what I am up to here). This is week four, so one more week to go. It’s unusual for me to stay with something like this over many weeks. I am usually a move-quick, get-it-done-and-published kind of person, but I like taking my time here over a period of a few weeks.
For this week’s activity, I took the text from the original image (the text has become my main focus for the remix) and pulled it into Lumen5, which is a video-making site. I decided against photographic images, which is usually how Lumen videos are created, and instead used one of its design templates. I think that makes more sense for what I was after: the words flowing from one phrasing to another.
I have long been interested in (sometimes, alarmed by) how some of the new machine learning/AI tools might impact the making of music. Mostly, I have not been all that impressed (a good thing for human music composers, right?) but I also know that the technology is only getting better. (See some earlier posts)
The results have been awful or weird or unlistenable.
Google just released its new tool called MusicFX in its AI Test Kitchen (so, you know, beta) and, well, it’s a big leap forward from what I was playing around with just a few months ago. You write in text about the kind of music you want, and you can add genre, instrumentation, etc., and the site generates a 30-second track. The few experiments I did sounded decent.
Bad news for human music composers? Maybe.
Peace (and sound),
PS — there’s also a new TextFX in the same platform but I can’t for the life of me figure out its value.