My friend, Rob, who plays bass in my band, shared a story of 9/11 with me. He was in New York City at the time of the attack, watching it from a rooftop and then going to try to help amid the confusion. He moved away from the city afterwards, unable to remain in the space where the disaster unfolded.
After our talk over a band dinner, he sent me some writing he had done, as a way to continue to process and remember. He said I could use his writing as I wanted, so I made a found poem as a way to honor his sharing of his story with me.
This video version of the found poem — I Am Witness — uses Keynote for some simple text animation … and the music is a soundtrack for the poem that I composed in an app called Thumbjam. I shared versions of this project at the new yap.net site as I was working on finalizing a few things, and I appreciated the feedback from there.
The Mozilla Foundation recently put out its 2019 Internet Health Report, and I kept meaning to dive in a little deeper to understand some of the trends of online activity, if only to better comprehend the world my young students are moving into (or are already immersed in).
The study also makes three key policy suggestions for moving forward to a better Internet:
Give local governments and organizations more control over the Internet as they are more apt to have individual experiences and the public good in mind
Revamp the whole way advertising is delivered, in view of how surveillance and psychological tools for hooking people into games and apps have taken root in so many advertising design elements
Purposefully consider the rise of AI through the lens of ethics and responsibility
Overall, the report surfaces some positive trends around privacy and responsibility, but also notes continuing worries about censorship and about the AI innovations on the horizon. I found some elements of the report intriguing and worth a deeper dive, as it seems to provide information and balance, too.
Then, I started to think about how to find a poem inside the text generated by another poem. Could I surface something from inside of something else, inspired by something else altogether? Another nested poem? I’d find out.
Here’s what I did (in case you want to ever do your own):
I went into Google Slides (but any slideshow program would work because when you move across slides, it looks like animation) and began to cross out words (blackout poem style).
Then I removed the excess words (I cheated by turning the font color the same color as the background, so white text against a white background is no longer visible; otherwise, it would have been a long formatting exercise of adding spaces where words had been).
Finally, I pulled the remaining, revealed text into another poem. I used transitions and animations to make the process more visible in the slides (the whole thing is a visual hoax, really, using different slides layered on each other to make it seem like the text is being animated).
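The "same color as the background" cheat can be sketched in code, too: blacking out a poem is really just replacing every unwanted word with blank space of the same width, so the layout never shifts. Here is a minimal Python sketch of that idea (the source sentence and the `keep` word list are my own invented examples, not anything from the actual slides):

```python
def blackout(text, keep):
    """Replace every word not in `keep` with spaces of equal length,
    preserving the original spacing so the page layout doesn't shift."""
    out_words = []
    for word in text.split(" "):
        # Compare words without trailing punctuation, case-insensitively
        core = word.strip(".,;:!?").lower()
        out_words.append(word if core in keep else " " * len(word))
    return " ".join(out_words)

source = "Could I surface something from inside of something else?"
poem = blackout(source, keep={"surface", "something", "else"})
print(poem)  # only the kept words remain, floating in white space
```

Because each erased word becomes spaces of the same length, the result is exactly as wide as the original line — the same effect as white-on-white text in a slide.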
Sort of odd. I like that kind of weird writing and weird writing processes.
What happens when you hand off your poem to a “modern neural network”? Something strange, with a hint of interesting. I was using a site called Talk to Transformer, which is built on top of a neural network model from OpenAI and which is designed to complete your text, using its signifiers and databases.
The site explains that it is:
… an easier way to play with OpenAI’s new machine learning model. In February, OpenAI unveiled a language model called GPT-2 that generates coherent paragraphs of text one word at a time … While GPT-2 was only trained to predict the next word in a text, it surprisingly learned basic competence in some tasks like translating between languages and answering questions.
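GPT-2 itself is far too large to sketch here, but the core mechanic the quote describes — predicting the next word from the words that came before — can be illustrated with a toy bigram model. To be clear, this is a deliberate, vastly simplified stand-in, not how GPT-2 works internally, and the tiny training corpus is my own example:

```python
import random
from collections import defaultdict

# Train a toy "next word" model: record which word follows which.
corpus = ("remember me you must remember us "
          "as i remember this wasted earth").split()
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def complete(seed, length=5, rng=random.Random(0)):
    """Extend `seed` one word at a time, picking each next word
    from the continuations observed in the training text."""
    words = seed.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:          # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(complete("you"))
```

A real language model does the same kind of "given what came before, what comes next?" step, just with a learned probability distribution over a vast vocabulary instead of a lookup table — which is part of why its leaps can feel both coherent and uncanny.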
So, of course, I could not resist feeding it some words to see what would happen, starting with the lines of a poem about context and constraints, and in the image above, you can see what it spit out for me. There is something beautiful surfacing there, in the juxtaposition of my poem starter and its story extension, although I am at a loss to really understand how it made the leap from my words to its text.
For example, the point of view shifts from third person to first person, and suddenly the narrator is talking of their mother’s love (or the lack of it) in a world fallen apart. But look at the last three lines it generated … it’s almost like the start of something else altogether, maybe a new poem generated by human hand … Maybe the game turns to me, to continue onward with the AI’s idea …
I am what I am when I’m no longer something that mustn’t be forgotten… a person so beautiful
So remember me; you must remember us,
as I remember this wasted Earth
when love was nearly lost
and all we had left to hold was each other,
in the days after fallen trees
and warming seas
I still carry the bones of my mother,
that which the soil would no longer hold:
I am young; I am old
The image is a layered gif that I made in Lunapic because I wanted to do something more with the writing. I purposely added non-digital writing tools to contrast the use of AI to make a piece of writing.
Early on, I was pretty active in the Networked Narratives course as an open online participant, as a sort of satellite with a few others to the actual university course being taught by Alan Levine and Mia Zamora. My comic strip alter-egos — The Internet Kid and Horse with No Name — were also part of the Twitter conversations and activities.
At some point, I admittedly lost track in a peripheral way (this is the beauty of RSS feeds — I kept up with the basics of the course progress in my RSS reader from the NetNarr site).
So, I was pleasantly surprised by Mia’s sharing of the completed NetNarr Field Digital Alchemy Guide that all the classroom students contributed to as part of their research (I am not sure if any open folks added to the journal, too). The course itself began pretty dark — with all the ways technology is used against us, in terms of privacy and surveillance and more — and then moved into the light — how we, as individuals, can make a difference and maybe help foster change for the better.
Each piece of writing is a review of one specific issue of concern about the internet of 2019, following the ones we studied, e.g. the surveillance economy, digital identity theft, fake news, digital redlining, toxic data, self-expression, bots. Writers were not asked to “fix” or “solve” these big problems, but to offer suggestions for how individuals can better thrive in these environments — hence the idea of a “field guide”.
They are written as a dialogue between the students and their invented digital alchemist mentor, and include links to the “notes in the field” left as web annotations.
The work done in the Journal is really rich with topics and insights and resources, and I applaud my former classmates (sort of) for the depth of their sharing in this journal, which is a valuable resource for anyone struggling with finding balance between the potential and the pitfalls of this technologically connected world.
I, for one, only vaguely know what F-insta is, so that’s where I’m heading off to learn more from the NetNarr-ians. Which topic grabs your attention? Be a real reader, and leave some comments for the explorers. Pose a question. Offer insight. Engage.
I am still tinkering around with different apps that animate words. This week, I explored the apps Plays and MOTT, and then came back to Legend (re-found in the Google Play store after it disappeared, but no longer found in the Apple App Store). None of these fit exactly what I am looking for, but some come close enough to have fun with. Some of the examples here are riffs off others’ work (Terry’s, in particular) and others are just isolated word play or riffs off my own poems. I explored some others in an earlier post.
This is how it begins. An invitation to write. It knows my weakness.
“Your word will be instantly incorporated into an original two line poem generated by an algorithm trained on over 20 million words of 19th century poetry.”
Call me intrigued.
I arrive, via Terry, at this Google poetry experiment entitled Poem Portraits (privacy hackles, dutifully raised), and so I dig in, and learn that it is a collaborative poem that is “ever evolving” as people add words and Google’s AI system culls through a myriad of texts it has in its data banks. They call it “An experiment at the boundaries of AI and human collaboration.”
As you add a word (my donated word: Harmonize), it uses your contribution to generate new lines of text, adding to an ever-expanding ongoing poem collaboration between human and machine. The AI asks for a selfie (but you don’t need to do one to add a word), and this is where I paused but then decided to do it and go further.
I had seen Terry’s, and then Sarah’s, and then Charlene’s, and then Sheri’s, and the fact is, I was still intrigued by the mix of poetry, text, words and collaboration.
The result is your word, and the words of your part of the poem, projected and mapped on your face, so that you become part of the poem. (Who knows where all those selfies go … I suspect they become part of Google’s facial recognition database. I’m sure I am already in there, but I would not likely bring students to this kind of poetry experiment.)
I wanted to do more with the photo that gets generated. When you get to this step of your poem on your face, you can also read the larger, collaborative, AI-generated (with your word now added) unfolding on the page (You can access the scrolling poem without participating if you stop before adding a word).
So, I relocated my poem-selfie into the mobile app Fused, and began to layer in some visual static, working to deliberately create a sort of fuzzy overlay of the selfie poem, as a means to represent some discomfort with how I willingly gave my image to Google.
Then, I wrote a short piece of music in Thumbjam, keeping the idea of my word — Harmonize — in mind, and working to layer three different musical sounds that work in harmony, and a bit of disharmony, too.
Finally, I took all of those pieces into iMovie and wove the media together, with a vocal reading of the text that filters across my face as part of my stanza of the poem.
The result of my playing is the video above … which starts as AI machine but ends with me, the pesky human, taking control of the image and poem again. (Or so I imagine).
Peace (in poems),
PS — this is how Terry played with his results, calling it his “ghost”
During April, every day, I woke up, not knowing what I was going to write. As part of my Random Access Poetry activity, my goal was to use a few different tools and sites to find an unexpected image that could spark a poem for the day. So, for 30 mornings, that’s what I would do — grab a cup of coffee, go to one of my image-finding spaces, land on an image and write small poems.
Here are some of the places I went to for random photo inspiration:
John Johnston’s Flickr Promptr (which he set up after I asked if anyone had anything that would generate a random image for poetry, and I so deeply appreciate that he took that idea and built something in Github)
John Johnston’s Flickr Stampr — which is a Creative Commons search engine
John Johnston’s (he’s great, right?) Flickr Blendr site, which randomly grabs two images and blends them together
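Blendr-style blending is, at heart, just averaging the two images pixel by pixel. Here is a toy version in Python over two tiny grayscale "images" represented as nested lists — a stand-in for real photo data, since I don't know exactly how John's site does its compositing:

```python
def blend(img_a, img_b, alpha=0.5):
    """Blend two same-sized grayscale images pixel by pixel:
    result = (1 - alpha) * a + alpha * b."""
    return [
        [round((1 - alpha) * a + alpha * b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

# Two tiny 2x2 "photos": one dark, one bright.
dark = [[0, 50], [100, 150]]
bright = [[200, 250], [200, 250]]
print(blend(dark, bright))  # → [[100, 150], [150, 200]]
```

With `alpha=0.5` each output pixel sits halfway between the two sources, which is why blended photos look like double exposures; sliding `alpha` toward 0 or 1 fades one image into the other.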
Looking back over the 30 poems from April, there were some decent writing days, more than a few mediocre days and a couple of blah days with the poems. Some poems just worked and some poems just didn’t. Some poems seemed to write themselves — I would start and the lines would flow, and I’d try to figure out where the poem was going as it was being written. That’s an awfully strange and interesting experience. Other days, I’d get stuck mid-way into the piece, force myself to plow through and get to a good-enough stopping place.
What I found, as I was about to start writing each morning by calling up a photo with one of the tools above, is that I was searching for a hook in the visual image — something that grabbed my attention, a spark of a hidden story, or a character on the edges, or a small moment, or an emotion. I didn’t know what I was looking for as I was looking but I was fairly confident I might find it if I looked close enough with my writing eyes. Only once or twice did I not use the very first image I found and reset the process. Mostly, I let the random nature of my search become the inspiration, and just went with it.
The thing about poems is that they are designed to evoke, and photos can do the same. Evocation is also a tricky business for a writer in a rush — I wrote poems in a short span of time — and that’s why they don’t always work in this format. There was often a tension between what I saw, what I wrote, and what I aimed to accomplish. But I often left the writing with a phrase or line or stanza on the screen that I found worthy of the page, and for that, I was always inspired and confident as a poet.
If you bothered to read any of the poems, thank you. I hope you were writing, too.
I want to thank Sheri for remixing my video about remix that I shared out this week. Her visual interpretation of the video is wonderful, and useful, capturing my points from another angle. Even more, her exploration of remix at her blog is a valuable insight into what we are talking about when we talk about remix as an act of appreciation of another’s work of art.
This morning’s DS106 Daily Create call for “making stuff” was to merge the name of a famous person with the name of something else. Betty White Cake was the example (chuckle).
I wanted to do Edge of Darkness, with the U2 guitarist. And I wanted to merge two animated GIFs — the Edge with a dark scene — but I didn’t know how to do that. So I learned how.
I searched the Net and re-discovered Animated GIF Maker (which I have used before to make a single GIF) and learned that you can upload multiple GIFs at a time and then arrange and re-arrange the frames. It’s not perfect but it worked for what I wanted, a hint of the darkness of The Edge.
Now all I need is a soundtrack for the end of the world …