Making A Music-Braided Knot: AI and Us

For a DS106 Daily Create the other day (a prompt I had submitted), participants were encouraged to use an automated music-making site called Computoser. It lets you set some algorithmic parameters and then kicks out a short “song” for you. I find it rather interesting, but also a bit constrained.

So, after making my own song for the prompt, I downloaded the MP3 file and moved it into another platform (Soundtrap) to do some of my own mixing. That’s when I noticed that my friend Sarah had submitted her entry. Hmmm. I layered her track on top of mine. (Note: I suggest headphones, particularly for the later iterations, where there is a lot going on in the mix that is best experienced with close listening.)

That was too dissonant for me, though. That’s when I saw that the site kicked out MIDI files as well as MP3s, so I took her MIDI files and mine and began layering the two songs together. Working in MIDI gave me many more options for mixing, since I could now change the instruments and align the timing, so that it began to sound more like a single (strange) composition instead of two pieces of music crashing into each other. I added a drum track, too, which acts as a sort of rhythmic ballast. At this point, we had seven tracks.
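(A side note for the code-curious: I did all of this layering by ear inside Soundtrap, but the same idea can be sketched in a few lines of Python with the pretty_midi library. The file names, time offset and instrument choice below are placeholders for illustration, not what I actually used.)

```python
# A rough sketch of the MIDI layering described above, done in code
# instead of in Soundtrap. File names and settings are hypothetical.
import pretty_midi

mine = pretty_midi.PrettyMIDI("my_computoser_song.mid")
hers = pretty_midi.PrettyMIDI("sarah_computoser_song.mid")

combined = pretty_midi.PrettyMIDI()
combined.instruments.extend(mine.instruments)  # keep my tracks as-is

OFFSET = 0.5  # seconds to shift her song so the two line up; adjust by ear
for inst in hers.instruments:
    if not inst.is_drum:
        # MIDI carries notes, not sound, so each track can be re-voiced freely
        inst.program = pretty_midi.instrument_name_to_program("String Ensemble 1")
    for note in inst.notes:
        note.start += OFFSET
        note.end += OFFSET
    combined.instruments.append(inst)

combined.write("braided_knot.mid")  # import into a DAW to add drums and mix
```

That re-voicing step is the whole reason MIDI beats MP3 for this kind of remix: the notes stay editable right up until the final mix.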

Shortly afterwards, another friend, Maha, added her song, so I went through the process yet another time, adding Maha’s MIDI tracks to the mix I had already built from Sarah’s song and my own. At this point, I was working with ten tracks. Maha’s tracks gave the piece a little more atmosphere and depth. (Again: this is best heard with headphones, I think.)

A day or two later, I saw that another friend, Sheri, had added one as well, and the process continued once again, now with tracks from four of us (13 in total), all connected friends from various projects over the years, mixed into a single song. Sheri’s additions brought a space-like swirl to the gaps in the piece, as well as a rhythmic hand drum and a breathy sound, and the music became more like a braided knot, with different strands working together.

And then I realized that one other connected friend, Christina, had also created a song. As before, I downloaded, uploaded and mixed. It becomes a bit much for the ears at times, with that many tracks, but there are also moments of melodic separation where something lovely happens. (Total tracks now: 17!)

Why do all this? First of all, I am always musically curious. But there was something larger at play here: a continued exploration of how to humanize AI experiments (or, as Sarah noted, how to approach the activity of using AI with human intention and agency). I don’t mind that the music site automatically created songs from my algorithmic settings, but I felt I still needed to insert myself as mixer and engineer, even on a small scale.

Adding in the work of connected friends seemed like an obvious move to me, bringing us together as a band of creative people, and it made the final track much more interesting than if it had been mine alone. Thanks to Sarah, Maha, Sheri and Christina, who found out they had contributed only after the fact. I hope I honored their sharing into DS106 with this CLMOOC-ish kind of activity.

The final “song” is still quite odd, with instruments weaving in and out of each other, although the drum track I added keeps the music somewhat centered and moving forward on a beat. If you listen, there is a distinct melody line at the top of the track, with some harmonic elements moving below it, so that the final piece emerges as a sort of modern experimental track ‘composed’ by five humans and a computer program that feeds on algorithms to make sound.

And my curiosity took me a bit further, merging audio with visual. Here is the audio file (or the first 30 seconds, anyway), remixed as a silent waveform:
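(Again for the code-curious: a similar still waveform image can be drawn with a few lines of Python, here assuming librosa 0.9+ and matplotlib. The file names are placeholders, and this is just a sketch of the idea, not the tool I used.)

```python
# A minimal sketch of drawing a "silent waveform" image from the mixed
# audio. File names are hypothetical; this illustrates the idea only.
import librosa
import librosa.display
import matplotlib.pyplot as plt

y, sr = librosa.load("braided_knot.mp3", duration=30.0)  # first 30 seconds

fig, ax = plt.subplots(figsize=(12, 3))
librosa.display.waveshow(y, sr=sr, ax=ax)
ax.set_axis_off()  # just the shape of the sound, no axes or labels
fig.savefig("waveform.png", bbox_inches="tight")
```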

One final version, using imagery generated from the audio file, and then mixed in iMovie:

Peace (playing it),
Kevin

7 Comments
  1. Kevin, your remixes always make us sound great.

    I love the final version [especially since apparently my Apple devices do not play the Sodaphonic files]. I know that in the one I created I had selected “no drums” but it seemed to still include them, which detracted from the “almost” melody, so I really like the flow of this combined version, especially what looks like ‘animated abstract string art’ imagery! Your additions and your percussion clarified the dissonance in the last section. Thanks for taking us a step further to reclaim the AI. 🙂 ! Sheri

  2. Wow, this is wonderful. Thank you, Kevin, for adding a fantastic human, intentional element to our AI songs. I am amazed at how well some of the dissonance works together with the rest. And I really love the idea of connecting folks together in this way. So fun!

  3. This is classical proximal learning, learning that is guided by no one principle but emergence. It is a trust-filled kind of learning and very rare to my eyes. I understand that Vygotsky’s zone of proximal development is led by mentors, but I think what you have done here is a special case where you have mentored yourself using the available tools and ‘voices’. Profound to experience, and less about product than it is ultimately about process. In other words, your brain is better off for having done this.

    • Indeed. I felt the need/desire/impulse to push this AI music a bit further, and fell back to something that has always been gratifying: connecting with the work of others, through remix. In doing so, I hoped to push back against the mechanical aspects of the AI generator. Thanks for your insightful noticing, Terry.
