This was my poem this morning for Advent Of Hope by Deanna M
It was inspired by “A House Called Tomorrow” by Alberto Ríos https://poets.org/poem/house-called-tomorrow
Peace (keep on singing it),
Kevin
If you don't live it, it won't come out of your horn. ~ Charlie Parker
I don’t have time to get into the whole story of where this came from, but it started with Terry Elliott and a poem he wrote, followed by an invitation from Terry into his NotebookLM to explore elements of his poem through an AI lens. My own fascination with the actual invasive plant at the heart of his work of art led to some research of my own, which inspired me to write my own poem, and then to wonder how my own NotebookLM might weave our poems together into one single poem (image above).
Since Terry often shares the AI voice podcast analysis as part of what he is doing, here is what my NotebookLM podcast generated for me, with both of our poems as the source for inquiry, and a guiding question from me to explore common themes across the poems.
Here is a link. (I laughed out loud when the woman’s voice, near the start, says, “We’re familiar with their work.”)
Peace (rooted and resilient),
Kevin
PS: a further iteration, combining our words
I continue to use the 2025 Advent Poem Calendar from Deanna, who built it with poems of a hopeful nature, to inspire my morning poems. Here are a few.
Peace (I hope),
Kevin
On this final day of the 12 Days of AI, participants are asked to reflect a bit on the activities from the past eleven inquiry prompts. I’ve been an outsider of sorts in this project, since I never joined the Microsoft Teams set up for discussion and I imagine — based on the prompts — that the people who are participating are university people in Europe, not elementary teachers in the US, like me.
I tagged along because I am curious. Though wary of the power and potential of some of the Generative AI tools out there, I admit that, as an educator, writer, and artist, I wonder how we might find ways to harness these tools for creativity. So I played along, explored some platforms I already knew and some I didn’t, and tried to be thoughtful about ethics, data and learning as I went along.
I enjoyed the Mapify platform for making AI Mind Maps. I returned to the Holly AI site to explore sound, which was a strange experience, and I am not sure of its practicality. I used a few video AI generating sites, and while I could see all of the flaws, I could also recognize the potential, and the potential harm, as these sites get better and smoother and the AI deep fakes and fake people get more realistic. I am curious about Miro, a site I didn’t tap because it seemed to be available only to the university folks, but it seems to do more multi-media Generative AI than most. I wondered how the university people were using it, to be honest.
The final activity is to create an artifact of the 12 days of learning, maybe by moving across platforms.
I decided to ask Mapify to create a map of learning for someone new to Generative AI, who wants to learn slowly, and end with a final project. It did. I like how it ended in a creative art project.
I then took that concept into both the Holly audio transformer and an AI Voice site, added some text to be spoken, and then wove those audio files together. For the image, I took the Mind Map and used it as a visual reference in Adobe Firefly, and asked it to create an image of someone new to AI, in exploration.
The result is odd, and the seamlessness of crossing AI platforms is not there yet.
My next steps? Go back into the accounts I used for 12 Days, and delete the ones I know I will never use again.
Peace (splattering paint),
Kevin
La Commedia umana – The Human Comedy flickr photo by JuanGalvez68 shared under a Creative Commons (BY-SA) license
On this eleventh day of the 12 Days of AI, the theme was asynchronous collaboration with Miro, an AI platform that I don’t have access to (but the university hosting the 12 Days does; I am taking part in the 12 Days as an outsider), so I went into Claude instead to explore the concept through conversation, asking questions about the pros and cons of live interactions with Generative AI. (Miro can produce sticky notes, artwork, graphs, etc., so my explorations here were mostly limited to text responses.)
Pros Of Collab (via Claude)
My Observation: The “creativity and ideation” point is one that intrigues me, and I wonder how artists and creative people can best harness this technology to push art in new directions: not by just asking AI to do something, to make something, but by moving to the edges of the possible and then helping the artist go a few steps further.
Cons of Collab (via Claude)
My Observation: So many of these are on my own radar, but the point about “dependency and skill atrophy” and the over-reliance on AI was interesting. You can see a version of that happening with the advent of smartphones and GPS mapping, and how our reliance on technology has changed the way we learn, use and retain new information in our heads. If everything is infused with Generative AI, will we still find ways to think through problems on our own and find original, creative solutions?
I also wondered about real-world examples of positive collaborations. Claude abides.
It is this last section, on real-world applications, that makes me hopeful that advancements in AI (did you read that Google says it has made a breakthrough in Quantum Computing? That’s huge!) could have a positive impact on our world, through fields of health and science in particular. But of course, there will need to be more guardrails.
Peace (pondering),
Kevin
inspired by May Perpetual Light Shine
by Patricia Spears Jones
https://poets.org/poem/may-perpetual-light-shine
Peace (in rhythm),
Kevin
On this tenth day of the 12 Days of AI, we are exploring the bias that comes within AI systems, either intentionally or unintentionally. All systems have inherent bias because of the data sets that work beneath the interface. Some AI sites do a better job at countering bias than others.
The post reminds us:
“AI generative output is not creativity, but a statistical variation without intent and meaning. AI is a statistical artist. It introduces variations into its work, but its “choices” are driven on patterns and probabilities rather than deep understanding of lived experiences and emotions. AI generative outputs, while devoid of the intent and meaning characteristic of human creativity, possess significant potential to speed, support, and extend the creative process.”
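To make that “statistical artist” idea concrete for myself, here is a toy sketch in Python. It is nothing like how Firefly or any real generative model actually works; it is just a hand-rolled illustration of a “choice” driven only by pattern frequencies and probabilities, with no intent behind it.

# A toy illustration of "statistical variation without intent": pick the next word
# by sampling from observed frequencies rather than by meaning. The counts below are
# made-up placeholders, not data from any real corpus or model.

import random

# Hypothetical counts of which word followed "school" in some imagined text.
next_word_counts = {"principal": 12, "teacher": 30, "bus": 8, "play": 5}

words = list(next_word_counts)
weights = list(next_word_counts.values())

# Each run can produce a different "choice", but nothing chooses on purpose;
# the variation comes entirely from the weights above.
for _ in range(5):
    print(random.choices(words, weights=weights, k=1)[0])

The variation you get run to run looks like decision-making, but the whole of it is sitting in that little dictionary of counts, which is roughly the point the quote is making.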
The prompt was to go into an AI image generator, use a prompt of a person in a setting, doing some action (they frame it as a scene from a movie), and look to see whether there was any apparent bias in the results. I used Adobe Firefly and asked it to generate a “stern-looking school principal talking to an elementary teacher about student work.”
Is there bias in the two collections of results I received?
Out of the eight images, it seems as if five (and maybe six? It depends how you read who is who) show a white male in the role of principal, and of the eight, only two include non-white adults at all. Two women seem to be in the administrative role. I didn’t have enough credits in Firefly to keep running the experiment a few more times, which would have been ideal. Would I have kept finding similar genders and races for the administrators?
What about if I refined my prompt, adding loaded terms like “urban school” or “struggling teacher” or “disruptive student”? How would other platforms perform with the same prompt? Would there be gender or racial stereotypes in who gets cast as the administrator and who as the educator?
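If I do find more Firefly credits and rerun the prompt, a simple tally would make those comparisons less impressionistic. Here is a minimal sketch in Python, assuming I hand-label who appears in the administrator role in each image myself; the labels in the list are placeholders roughly matching what I described above, not anything exported from Firefly.

# A minimal sketch for tallying hand-labeled results from repeated image-generation runs.
# The entries below are hypothetical placeholders typed in by hand, not actual Firefly output.

from collections import Counter

# One entry per generated image: how the principal figure reads to me, recorded consistently.
labeled_principals = [
    ("man", "white"), ("man", "white"), ("man", "white"),
    ("man", "white"), ("man", "white"), ("woman", "white"),
    ("woman", "white"), ("man", "nonwhite"),
]

total = len(labeled_principals)
gender_counts = Counter(g for g, _ in labeled_principals)
race_counts = Counter(r for _, r in labeled_principals)

print(f"Total images labeled: {total}")
for gender, count in gender_counts.items():
    print(f"  principal read as {gender}: {count}/{total} ({count / total:.0%})")
for race, count in race_counts.items():
    print(f"  principal read as {race}: {count}/{total} ({count / total:.0%})")

The labeling is still my own subjective read of each image, which is part of the point: the bias check is only as careful as the person doing the counting.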
Peace (in explorations),
Kevin
How Does Digital Technology Affect You? flickr photo by schopie1 shared under a Creative Commons (BY-SA) license
On this ninth day of the 12 Days of AI, we are looking at Deep Fake technology, examining both the possibilities and the concerns that come with advancements in AI-powered video production. We are asked to create our own Deep Fake video through a site called HeyGen, with a script created by Generative AI. I used Claude, and asked it to generate a script about jazz pianist Thelonious Monk.
I wasn’t necessarily all that impressed with the results, to be honest (but she seems pretty happy in her delivery). Still, you can see the potential, and maybe a paid account gives more flexibility and tools for voice, etc.
I followed the steps, but choices are pretty limited with a free account. I chose a host and a voice, added the script, a little text, and a background image. I wouldn’t quite put what I did under the Deep Fake category, given that all the media I used was synthetic; I think of Deep Fakes as more connected to a real person’s image, voice and video.
As it turns out, I have plans to chat about Deep Fakes with my students today, as part of a larger unit about technology in their lives. So I appreciated this chart, shared from the main post today:
Since the 12 Days is run by university folks, one of the ethical questions here is whether these AI videos could play a role in the delivery of education at an institution. Lord, I hope not. I can’t imagine sitting through a course with an AI Deep Fake Teacher, but I bet somewhere, some administrators are definitely imagining it (and thinking of the cost savings).
Peace (real, if we imagine it),
Kevin