Playing Around With A Family of Arty-Bots

ArtyBots Remix Collage

I don’t remember how or when I stumbled upon the collection of Twitter bots by B.J. Best, but I suspect it may have been during one of the handful of Networked Narrative projects I engaged with (in one session, we all created our own bots, and mine is still rolling along as the PeaceLove&Bot). The ArtyBots collection by Best, a poet and designer, is fascinating, particularly because it was released before this latest wave of AI Art platforms.

Here is my video collection of remixes from the ArtyBot Family:

You can learn more about his project in this podcast interview at Design Notes.

The way the ArtyBots work is that you tweet an image to the bot and it generates an artistic response, using the original image as the base of its operations. Some of the bots are also programmed to respond to each other, connecting within what he calls the Bot Family.
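(If you’re curious about the general shape of such a bot, here is a rough sketch of the receive-an-image, return-art pattern in Python. The posterize-and-contour filter is just my stand-in, not Best’s actual code, and the filenames are placeholders; a real bot would also handle the Twitter plumbing of mentions and replies.)

```python
# A rough sketch of the "tweet an image, get art back" pattern.
# The filter below is a stand-in, not B.J. Best's actual algorithm;
# a real bot would also handle Twitter auth, mentions, and replies.
from PIL import Image, ImageOps, ImageFilter

def make_arty_response(path_in: str, path_out: str) -> None:
    """Take a submitted image and save a transformed 'artistic' version."""
    img = Image.open(path_in).convert("RGB")
    img = ImageOps.posterize(img, 3)          # flatten the color palette
    img = img.filter(ImageFilter.CONTOUR)     # trace edges for a sketchy look
    img.save(path_out)

# In a real bot, this would run on each image pulled from a mention,
# and the result would be tweeted back as a reply.
make_arty_response("moss_on_curb.jpg", "moss_remixed.jpg")
```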

I decided to play with his various bots using a single image. I chose an interesting zoomed-in shot of some moss on a pavement curb, fed it to the various bots, and pulled the results together into a slideshow. Not all of his bots fed me back an image to use, for whatever reason, but I enjoy seeing the remixed images that did come back fade into one another in the video compilation.

Is this art? Are the computer programs artists? Who knows, anymore. (In his podcast interview, Best suggests yes, the bots are artists.)

Peace (and Bots),
Kevin

Rusty, The Rock And Roll Robot: From AI Art to AI Story with AI Music

Maybe I went a little overboard here, but I was curious about what would happen if I merged the output of a variety of different AI-infused sites to create an AI story from an AI image with a computer-generated narrator voice backed by an AI soundtrack. The result was a tale about Rusty the Guitar-playing, Rock-and-Roll Robot.

Here’s how it worked: This all began with an interactive article in the Washington Post about how AI engines create art from text prompts. The article is excellent, and its built-in tool was worth tinkering with, so I used this prompt to guide my experiment along: A robot playing guitar in outer space in style of a cover of a magazine.

AI Art via WashPo

I had an interesting image that I downloaded, but now I wanted a story. So I opened up ChatGPT and typed in this prompt: Write a funny story of a robot making the cover of a magazine for playing guitar in outer space.

Within seconds, I had the text of a story about a robot named Rusty who was rocking the space jam and ended up going viral, landing on the cover of a magazine. I took a screenshot of the story.

Story via ChatGPT
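(I typed the prompt into the ChatGPT website, but for anyone who likes to see the shape of things, the same request could be scripted; a minimal sketch, assuming the OpenAI Python client, with the model name as a placeholder rather than anything I actually ran:)

```python
# A sketch of asking for the Rusty story programmatically, assuming the
# OpenAI Python client; the model name here is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a funny story of a robot making the cover of "
                   "a magazine for playing guitar in outer space.",
    }],
)
print(response.choices[0].message.content)  # the text of Rusty's tale
```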

I went into LunaPic to merge the AI art image with the story, and added a border. Neat.

AI Art Meets ChatBotGPT
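(I did the merging through LunaPic’s web tools, but the same stack-and-border move could be approximated offline with the Pillow library; a small sketch, with placeholder filenames:)

```python
# A rough offline equivalent of the LunaPic step: stack the AI art above
# the story screenshot, then add a border. Filenames are placeholders.
from PIL import Image, ImageOps

art = Image.open("rusty_art.png").convert("RGB")
story = Image.open("rusty_story.png").convert("RGB")

width = max(art.width, story.width)
canvas = Image.new("RGB", (width, art.height + story.height), "white")
canvas.paste(art, ((width - art.width) // 2, 0))
canvas.paste(story, ((width - story.width) // 2, art.height))

framed = ImageOps.expand(canvas, border=20, fill="black")  # add the border
framed.save("rusty_combined.png")
```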

Now I wanted some voice narration. I used a text-to-speech site that wasn’t really AI, but the computer-generated voice worked for what I needed: a “narrator” reading the text of the story of Rusty as an audio file.
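(If you wanted to script that narration step rather than use a website, a library like gTTS could do something similar; a quick sketch, not what I actually used, with placeholder filenames:)

```python
# A sketch of generating the narrator audio with gTTS instead of a
# text-to-speech website; the story text and filenames are placeholders.
from gtts import gTTS

story_text = open("rusty_story.txt").read()
narration = gTTS(story_text, lang="en")
narration.save("rusty_narration.mp3")
```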

Knowing this would become a video project, I wanted some soundtrack music. I went into a site called Melobytes, which takes an image and uses AI to convert it to music. I used the combined Rusty art/story image from LunaPic, and got a soundtrack. (I remain a little skeptical and unsure of how Melobytes really works its AI magic, but I stayed with it because I could not find an alternative for this activity.)

I then used Audacity to mix the music with the narration, and went into SoundSlides to pull everything together into one project, with an image backed by audio.
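(The Audacity mix could also be approximated in code with something like pydub; a small sketch, assuming the narration and soundtrack files from the earlier steps, with placeholder filenames:)

```python
# A sketch of mixing the Melobytes soundtrack under the narration,
# roughly what I did by hand in Audacity; filenames are placeholders.
# pydub needs ffmpeg installed to read and write mp3 files.
from pydub import AudioSegment

narration = AudioSegment.from_file("rusty_narration.mp3")
music = AudioSegment.from_file("melobytes_soundtrack.mp3") - 12  # quiet the music by 12 dB

mix = music.overlay(narration)       # layer the voice over the soundtrack
mix.export("rusty_mix.mp3", format="mp3")
```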

Is it any good?

Well, it’s interesting as an experiment, I think, and it shows how more and more AI projects could become collaborations across platforms.

Is it writing?

I don’t think so, but it was an act of “composition” as I tried to weave different threads of the story, generated by machine, into a coherent media project.

And you know, some company will surely bring all of these AI tools — art, text, music — under one umbrella at some point, and I am not even sure whether that would be a good development or a bad one when it comes to the world of stories.

Peace (press Play),
Kevin

ChatGPT: Alarm Bells And Learning Possibilities

ChatGPT Play Skit The Case of the Missing Jazz Song

First, it was Wikipedia that would be the end of student research. Then it was Google and other search engines that would be the end of student discovery and learning of facts and information. Now it might be ChatGPT that will be the end of student writing. Period.

As with those other predictions, which didn’t quite pan out in the extreme but still had important reverberations across learning communities, this fear of Machine Learning Chat may not play out as extreme as the warnings already underway in teaching circles make it seem. But that doesn’t mean educators don’t need to take notice of text-based Machine Learning systems, a technology innovation that is becoming increasingly powerful, user-friendly, and ubiquitous.

For sure, educators need to think deeply about what we may need to do to change, adapt, and alter the ways we teach our young writers what writing is, fundamentally, and how writing gets created, and why. If students can just pop a teacher prompt into a Machine Learning-infused Chat Engine and get an essay or poem or story spit out in seconds, then we need to consider what we would like our learners to be doing when the screen is so powerful. And the answer to that query — about what our students can do that machine learning can’t — could ultimately strengthen the educational system we are part of.

ChatGPT: Write A Sonnet

Like many, I’ve been playing with the new ChatGPT from OpenAI since it was released a few weeks ago. As I understand it (and I don’t, really, at any deep technical level), it’s a computational engine that uses predictive text, trained on a massive database of text. Ask it a question and it quickly answers it. Ask for a story and it writes it. Ask for a poem or a play (see my skit at the top of the page) or an essay, or even lines of computer code — it will generate it.

ChatGPT: Literary Analysis Paragraph

It’s not always correct (The Lightning Thief response looks good but has lots of errors related to a reading of the text itself), but the program is impressive in its own imperfect ways, including that it had access to the Rick Riordan story series in its database to draw upon. And, as powerful as it is, this current version of ChatGPT may already be out of date, as I think the next version is in development (according to the hosts at Hard Fork), and the next iteration will be much faster, much larger in the scale of its database, and much “smarter” in its responses.

Can you imagine a student taking a teacher’s assignment prompt, putting it into the Chat engine, and then using the text as their own for a classroom submission? Maybe. Probably. Will that be plagiarism? Maybe.

Or could a student “collaborate” with the Chat engine, using the generative text as a starting point for some deeper kind of textual writing? Maybe. Probably. Could they use it for revision help for a text they have written? Maybe. Probably. Right now, I found, it flattens the voice of the writing.

ChatGPT: Revise This Text

Could ChatGPT eventually replace the need for teachers? Maybe, although I doubt it (or is that just a human response?).

But, for educators, it will mean another reckoning anyway. Machine Learning-generated chat will force us to reconsider our standard writing assignments, and to reflect on what we expect our students to be doing when they write. It may mean we will no longer be able to rely on what we used to do or have always done. We may have to tap into more creative inquiry for students, something we should be doing anyway. More personal work. More nuanced compositions. More collaborations. More multimedia pieces, where writing and image and video and audio and more work in tandem, together, for a singular message. The bot can’t do that (eh, not yet, anyway, but there is the DALL-E art bot, and there’s a music/audio bot under development, and probably more that I don’t know about).

Curious about all this, I’ve been reading the work of folks like Eric Curts, of the Control Alt Achieve blog, who used ChatGPT as a collaborator to make his blog post about the Chat’s possibilities and downsides. I’ve been listening to podcasts like Hard Fork to get a deeper sense of the shifts and fissures now underway, and how maybe AI Chats will replace web browser search engines entirely (or not). I’ve been reading pieces in the New York Times and the Washington Post, and articles signalling the beginning of the end of high school English classes. I’m reading critical pieces, too, noting how all the attention on these systems takes away from the focus on critical teaching skills and students in need (and, as this post did, reminding me that Machine Learning systems are different from AI).

And I’ve been diving deeper into playing with ChatGPT with fellow National Writing Project friends, exploring what the bot does when we post assignments, what it does when we ask it to be creative, and how to push it all a bit further to figure out possibilities. (Join us in the NWPStudio, if you want to be part of the Deep Dive explorations.)

Yeah, none of us really knows what we’re doing yet, and maybe we’re just feeding the AI bot more information to use against us. Nor do we have a clear sense of where it is all going in the days ahead, but many of us in education and the teaching of writing intuitively understand that we need to pay attention to this technology development, and if you are not yet doing that, you might want to start.

It’s going to be important.

Peace (keeping it humanized),
Kevin

Making A Video Haiku With An AI Collaborator

I saw in my RSS feed that Eric Curts, whose sharing of technology resources is always fantastic and useful, had mentioned that Canva had just launched a Text-to-Image AI tool, in which you feed it some words and it generates some images. Image generation has become a fairly common AI feature these days, but I was still curious about how to use it within the platform of Canva (which has a slew of useful design tools and options).

Since this tool is still in beta (I believe), the link is not within the main Canva toolbox quite yet, so this is how you access it: https://canva.me/text-to-image

I grabbed a haiku I had written earlier in the day (off a prompt via Mastodon, with the word “mist” as a key inspiration) and fed it into the Canva tool. Full phrases were less useful than key words, I found, but the images were quite dreamy and evocative (I chose a “painting” setting in the tool).

I played with the Canva video maker tool, weaving the words of the original haiku through the video slides with the AI images, and choosing a piece of music (all within Canva itself) to create the short video poem. I utilized some other design features inside Canva, too, but the images were all AI-generated. It’s still strange to have AI as your creative partner in these things, but it’s interesting, too, to see where AI might offer up useful ideas or not.

To see how it works, read through Eric’s post. It is very detailed and helpful.

Peace (and AI),
Kevin

 

Open AI, Algorithms and Art

Dalle-E Collection

I had forgotten I had signed up for an account with the DALL-E art site, which has gotten a fair share of notice for how it uses AI software to create art from written prompts. So when I saw an email yesterday, telling me my account was now active, I went in and played around. I used music themes for all of my prompts for the AI. The more specific the writing, the more interesting the image that the AI kicks out, I found.

I decided to create a “band” of musicians, with different settings and textual descriptions. It was an interesting experiment, and I used the “variations” tab quite a bit to see what the AI might generate in a second pass, but for the most part, these images come from the first round of algorithmic art by the platform.
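(I did all of this through the DALL-E website, but the same generate-then-vary loop exists in the OpenAI Python client; a hedged sketch, where the prompt is just an example and not one of my actual descriptions, and the filenames are placeholders:)

```python
# A sketch of the generate-then-vary loop I was doing in the browser,
# assuming the OpenAI Python client; prompt and filenames are placeholders.
from openai import OpenAI

client = OpenAI()

# First round: algorithmic art from a text description
first = client.images.generate(
    model="dall-e-2",
    prompt="A trumpet player performing on a neon-lit city rooftop at night",
    n=1,
    size="1024x1024",
)
print(first.data[0].url)  # link to the generated image

# Second round: ask for a variation of a downloaded image
variation = client.images.create_variation(
    image=open("trumpet_round_one.png", "rb"),
    n=1,
    size="1024x1024",
)
print(variation.data[0].url)
```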

I’ve included the text I used for the AI to generate the images.

DALL·E trumpet
DALL·E saxophone
DALL·E piano
DALL·E guitar
DALL·E drummer
DALL·E bass

You get a certain number of “credits” and then it costs some money to generate art.

Overall, I found the experience rather interesting, and yet I wondered how the AI was using my text descriptions to make itself “smarter,” and I was curious about what was going on underneath all of the code. There is a research paper available, and the “About” page is full of positive framing of AI and the DALL-E site. It acknowledges the worries about AI, too, which I appreciated.

From the site:

Preventing Harmful Generations

We’ve limited the ability for DALL·E 2 to generate violent, hate, or adult images. By removing the most explicit content from the training data, we minimized DALL·E 2’s exposure to these concepts. We also used advanced techniques to prevent photorealistic generations of real individuals’ faces, including those of public figures.

Curbing Misuse

Our content policy does not allow users to generate violent, adult, or political content, among other categories. We won’t generate images if our filters identify text prompts and image uploads that may violate our policies. We also have automated and human monitoring systems to guard against misuse.

 

I am also curious about this part of the Mission Statement:

Our hope is that DALL·E 2 will empower people to express themselves creatively. DALL·E 2 also helps us understand how advanced AI systems see and understand our world, which is critical to our mission of creating AI that benefits humanity.

Let’s hope so, eh?

Peace (and Art),
Kevin

DALL·E music note

Nerdwriter: Dark Patterns

This one has been in my blog draft bin for some time. Worth revisiting to better understand how companies try to manipulate us (users) to gather more information and to keep us inside their tents.

I support Nerdwriter through Patreon.

Peace (breaking out),
Kevin

Poetry: Intersections of Words, Art, Music and Technology

Algorithmic Artists and the Solo Saxophonist

I’ve been sharing out some of my morning poems, where I have been exploring the intersections of art and music and writing with technology. The above poem was inspired by an AI site — Dream — that creates art from keywords (here, my words were Saxophone Nights). I used the image, along with explorations this week with Hour of Code and programming, to spark the idea for the poem.

This morning, after a helpful reminder from Wendy T. yesterday, I used JazzKeys to craft a poem, with jazz piano as a soundtrack for each time my fingers hit the computer keyboard in the spur-of-the-moment writing. I just let the words flow as I listened to the piano. (I am listening now as I write this, too.)

Listen: https://jazzkeys.plan8.co/?msg=-MqOfDSE_Pl2R2bBSNTx 

I also created a blended visual of the same poem with a piano player, using a screenshot of the JazzKeys poem and a Creative Commons image, then merged them with LunaPic. I like the ghost-notes aspect of the result, as the words are fading (and if you listen to the JazzKeys as you read, the experience is even better, I think).

Ghost Notes
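(The LunaPic blend itself is simple enough to approximate offline with Pillow, if you ever want to; a small sketch, with placeholder filenames:)

```python
# A sketch of the ghost-notes blend I made in LunaPic: fade the JazzKeys
# screenshot into the piano-player image. Filenames are placeholders.
from PIL import Image

poem = Image.open("jazzkeys_poem.png").convert("RGB")
piano = Image.open("piano_player.jpg").convert("RGB").resize(poem.size)

ghost = Image.blend(piano, poem, alpha=0.45)  # lower alpha = fainter words
ghost.save("ghost_notes.png")
```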

Peace (listening),
Kevin