A Poem, Woven From Weeds

Broomsedge

I don’t have time to get into the whole story of where this came from, but it started with Terry Elliott and a poem he wrote, followed by an invitation from Terry into his NotebookLM to explore elements of his poem through an AI lens. That sparked my own fascination with the actual invasive plant at the heart of his work of art, which led to some research of my own, which in turn inspired me to write my own poem. Then I wondered how my own NotebookLM might weave our poems together into one single poem (image above).

Since Terry often shares the AI voice podcast analysis as part of what he is doing, here is what my NotebookLM podcast generated for me, with both of our poems as the source for inquiry, and a guiding question from me to explore common themes across the poems.

Here is a link. (I laughed out loud when the woman’s voice, near the start, says, We’re familiar with their work.)

Peace (rooted and resilient),
Kevin

PS — and a further iteration, combining our words

12 Days of AI (Day Twelve): Final Reflections and a Media Artifact

12 Days of AI ending

On this final day of the 12 Days of AI, participants are asked to reflect a bit on the activities from the past eleven inquiry prompts. I’ve been an outsider of sorts in this project, since I never joined the Microsoft Teams set up for discussion and I imagine — based on the prompts — that the people who are participating are university people in Europe, not elementary teachers in the US, like me.

I tagged along because I am curious. Though wary of the power and potential of some of the Generative AI tools out there, I admit that, as an educator and writer and artist, I wonder about how we might find ways to harness these tools for creativity. So I played along, and explored some platforms I already knew and some I didn’t know, and tried to be thoughtful about ethics, data and learning as I went along.

I enjoyed the Mapify platform for making AI Mind Maps. I returned to the Holly AI site to explore sound, which was a strange experience, and I am not sure of its practicality. I used a few AI video-generation sites, and while I could see all of the flaws, I could also recognize the potential, as well as the potential harm as these sites get better and smoother, and the AI deep fakes and fake people get more realistic. I am curious about Miro, a site I didn’t tap because it seemed like it was only available to the university folks, but it seems to offer more multimedia Generative AI than most. I wondered how the university people were using it, to be honest.

The final activity is to create an artifact of the 12 days of learning, maybe by moving across platforms.

I decided to ask Mapify to create a map of learning for someone new to Generative AI, who wants to learn slowly, and end with a final project. It did. I like how it ended in a creative art project.

Mapify: Final Project

I then took that concept into both the Holly audio transformer and an AI Voice site, added some text to be spoken, and then wove those audio files together. For the image, I took the Mind Map and used it as a visual reference in Adobe Firefly, and asked it to create an image of someone new to AI, in exploration.

The result is odd, and the seamlessness of crossing AI platforms is not there yet.

My next steps? Go back into the accounts I used for 12 Days, and delete the ones I know I will never use again.

Peace (splattering paint),
Kevin

12 Days of AI (Day Eleven): Asynchronous Collaboration with AI

La Commedia umana - The Human Comedy
La Commedia umana – The Human Comedy flickr photo by JuanGalvez68 shared under a Creative Commons (BY-SA) license

On this eleventh day of the 12 Days of AI, the theme was asynchronous collaboration with Miro, an AI platform that I don’t have access to (though the university hosting the 12 Days does — I am taking part in the 12 Days as an outsider). So I went into Claude instead, to explore the concept through conversation, asking questions about the pros and cons of live interactions with Generative AI. (Miro can produce sticky notes, artwork, graphs, etc., so my explorations here were mostly limited to text responses.)

Pros Of Collab (via Claude)

  • Enhanced Productivity and Efficiency Asynchronous AI collaboration allows humans to work on complex tasks without being constrained by real-time interactions. Users can initiate a task, receive detailed outputs, and then refine or build upon the AI’s work at their own pace. This approach enables more thoughtful, iterative problem-solving without the pressure of immediate responses.
  • Flexible Knowledge Expansion Unlike synchronous interactions, asynchronous collaboration gives humans time to deeply reflect on AI-generated content. They can carefully review suggestions, research additional context, and gradually develop more nuanced understanding. This method supports deeper learning and more comprehensive exploration of complex topics.
  • Reduced Cognitive Load By breaking down complex tasks into manageable chunks, asynchronous AI collaboration helps reduce mental fatigue. Humans can tackle different aspects of a project incrementally, leveraging AI to handle initial drafting, research, or analysis, and then focus their energy on high-value creative and strategic refinements.
  • Comprehensive Document and Project Development Asynchronous modes are particularly powerful for long-form content creation. Whether it’s writing reports, developing software, creating educational materials, or designing complex documents, humans can gradually shape and refine AI-generated content, adding personal insights, context, and nuance over time.
  • Persistent Context and Iterative Improvement Modern AI collaboration tools maintain context across interactions, allowing for continuous refinement of work. Users can return to previous drafts, request specific modifications, and build upon earlier AI-generated content without losing the thread of their original intent.
  • Enhanced Creativity and Ideation Generative AI can serve as a powerful brainstorming partner in an asynchronous mode. It can quickly generate multiple perspectives, draft initial concepts, or provide creative prompts that humans can then critically evaluate, modify, and expand upon at their own pace.
  • Skill Development and Learning Asynchronous AI collaboration provides a unique opportunity for skill development. Users can observe how AI approaches problems, learn from its initial drafts, and gradually develop more sophisticated prompting and collaboration techniques.
  • Accessibility and Time Zone Independence Unlike synchronous interactions, asynchronous AI collaboration transcends time zone limitations. Users can work on projects whenever convenient, receiving sophisticated AI assistance that doesn’t require simultaneous availability.
  • Risk Mitigation and Verification The slower pace of asynchronous collaboration allows for more thorough fact-checking, verification, and refinement. Humans have time to cross-reference AI-generated content, ensuring accuracy and reducing the risks of misinformation or hasty decision-making.
  • Personalized and Contextual Outputs Through iterative interactions, users can progressively refine AI outputs to better match their specific context, writing style, or project requirements. This collaborative approach produces more tailored and precise results compared to one-off interactions.

My Observation: The “creativity and ideation” point is one that intrigues me, and I wonder how artists and creative people can best harness this technology to push art in new directions. Not by just asking AI to do something, to make something, but by moving to the edges of the possible, and then helping the artist go a few steps further.

Cons of Collab (via Claude)

  • Potential for Misinterpretation Without real-time dialogue, there’s a higher risk of misunderstanding the original intent. Nuanced requirements or complex context can be lost or misinterpreted between interaction cycles, leading to progressively less relevant AI outputs.
  • Increased Cognitive Overhead Paradoxically, while aiming to reduce cognitive load, asynchronous collaboration can actually increase mental effort. Users must carefully craft initial prompts, review outputs critically, and manage multiple iterations of refinement, which can be more mentally taxing than direct, synchronous communication.
  • Context Drift Over multiple interactions, the original project goal can subtly shift or become diluted. Without immediate clarification, AI might progressively move further from the initial objective, creating outputs that become increasingly disconnected from the user’s original vision.
  • Reduced Spontaneity and Dynamic Interaction Real-time dialogue allows for immediate clarification, spontaneous idea generation, and dynamic problem-solving. Asynchronous modes lose this conversational fluidity, making the collaboration feel more mechanical and less organic.
  • Delayed Problem Resolution Complex problems that would benefit from immediate back-and-forth dialogue get stretched out over longer periods. What might take 15 minutes in a synchronous conversation could extend to hours or days in an asynchronous mode.
  • Increased Risk of AI Hallucinations Without immediate human intervention, AI might generate more elaborate or convincing but factually incorrect information. The lack of real-time fact-checking increases the potential for sophisticated but inaccurate content.
  • Emotional and Creative Disconnection Asynchronous collaboration can feel impersonal. The iterative, delayed nature removes the energetic exchange of ideas, potentially reducing the emotional engagement and creative spark that comes from real-time brainstorming.
  • Dependency and Skill Atrophy Overreliance on asynchronous AI collaboration might lead to diminished personal problem-solving skills. Users might become too dependent on AI-generated content, potentially reducing their own critical thinking and creative capabilities.
  • Complex Workflow Management Tracking changes, managing multiple versions, and maintaining a coherent development process becomes more challenging. Users must carefully manage versions, remember context across interactions, and maintain a clear mental model of the project’s evolution.
  • Technological and Interface Limitations Not all AI collaboration tools are equally equipped for asynchronous work. Some platforms have limited version tracking, poor context retention, or clunky interfaces that make iterative collaboration frustrating.
  • Ethical and Accountability Challenges In asynchronous modes, the lines of intellectual ownership and contribution become blurrier. It’s less clear who is truly responsible for the final output – the human, the AI, or some hybrid collaboration.
  • Performance and Cost Considerations Multiple interactions and iterations can potentially increase computational costs and response times, especially if working with more advanced AI models that require significant processing power.

My Observation: So many of these are on my own radar, but the point about “dependency and skill atrophy” and the over-reliance on AI was interesting, as you can see a version of that happening with the advent of smart phones and GPS mapping, and how our reliance on our technology has changed the way we learn, use and retain new information in our heads. If everything is infused with Generative AI, will we still find ways to think through problems on our own and find original, creative solutions?

I also wondered about real-world examples of positive collaborations. Claude abides.

  1. Medical Diagnostics and Research
  • Radiologists use AI algorithms to analyze medical imaging, detecting potential tumors or abnormalities with greater accuracy and speed.
  • Researchers at Stanford University developed an AI system that can detect skin cancer with accuracy comparable to dermatologists, serving as a screening tool that assists human experts rather than replacing them.
  • Drug discovery teams use AI to predict molecular interactions and potential drug candidates, dramatically reducing the time and cost of developing new medications.
  2. Scientific Research
  • NASA uses AI to analyze vast amounts of astronomical data from telescopes, helping scientists identify potentially habitable exoplanets and unusual cosmic phenomena.
  • Climate scientists leverage AI to process complex climate models, identifying patterns and making more accurate predictions about climate change impacts.
  • Protein folding research saw a breakthrough with DeepMind’s AlphaFold, which helps researchers understand protein structures, accelerating biological and medical research.
  3. Creative Industries
  • Film and animation studios use AI for pre-visualization, helping artists quickly prototype complex scenes and visual effects.
  • Musicians collaborate with AI tools to generate unique musical compositions, explore new melodic patterns, and overcome creative blocks.
  • Graphic designers use AI-powered tools to generate initial design concepts, which they then refine and customize.
  4. Education
  • Personalized learning platforms use AI to adapt curriculum to individual student needs, providing tailored learning experiences.
  • Language learning apps like Duolingo use AI to create adaptive learning paths, helping humans learn languages more efficiently.
  • Teachers use AI tools to assist with grading, identify students who might need additional support, and develop more personalized teaching strategies.
  5. Environmental Conservation
  • Conservationists use AI-powered drone and satellite imagery to track wildlife populations, monitor deforestation, and detect illegal poaching activities.
  • Ocean researchers employ AI to analyze marine ecosystem data, tracking marine life migrations and understanding climate change impacts.
  6. Agricultural Innovation
  • Farmers use AI-powered systems to optimize crop yields, predict pest infestations, and make more precise irrigation decisions.
  • Precision agriculture technologies use AI and satellite imagery to provide detailed insights about crop health, soil conditions, and potential improvements.
  7. Accessibility Technologies
  • AI-powered translation tools help break down language barriers, assisting people in real-time communication across different languages.
  • Assistive technologies use AI to help people with disabilities, such as text-to-speech systems for visually impaired individuals or communication aids for those with speech impairments.
  8. Cybersecurity
  • Security teams use AI to detect potential cyber threats, analyze network traffic, and identify unusual patterns that might indicate a security breach.
  • AI helps human cybersecurity experts by providing real-time threat detection and predictive analysis of potential vulnerabilities.
  9. Emergency Response and Disaster Management
  • AI helps predict natural disasters and optimize emergency response strategies.
  • During the COVID-19 pandemic, AI was used to track virus spread, assist in vaccine development, and help health authorities make more informed decisions.
  10. Business and Customer Service
  • Customer service teams use AI chatbots for initial screening and handling routine inquiries, allowing human agents to focus on more complex customer needs.
  • Marketing teams leverage AI for data analysis, helping to develop more targeted and effective marketing strategies.

It was this last section, on real-world applications, that made me hopeful that advancements in AI (did you read that Google says it has made a breakthrough in Quantum Computing? That’s huge!) could have a positive impact on our world, particularly in the fields of health and science. But of course, there will need to be more guardrails.

Peace (pondering),
Kevin

12 Days of AI (Day Ten): Exploring Bias In The System

AI Photo Generation

On this tenth day of the 12 Days of AI, we are exploring the bias that comes within AI systems, either intentionally or unintentionally. All systems have inherent bias because of the data sets that work beneath the interface. Some AI sites do a better job at countering bias than others.

The post reminds us:

“AI generative output is not creativity, but a statistical variation without intent and meaning. AI is a statistical artist. It introduces variations into its work, but its ‘choices’ are driven on patterns and probabilities rather than deep understanding of lived experiences and emotions. AI generative outputs, while devoid of the intent and meaning characteristic of human creativity, possess significant potential to speed, support, and extend the creative process.”

The prompt was to go into an AI image generator, use a prompt of a person in a setting, doing some action (they frame it as a scene from a movie), and look to see if there was any apparent bias in the results. I used Adobe Firefly and asked it to generate a “stern-looking school principal talking to an elementary teacher about student work.”

Is there bias in the two collections of results I received?

Out of the eight images, it seems as if five (and maybe six? Depends how you look at who is who) show a white male in the role of principal, and of the eight, only two images show non-white adults being generated. Two women seem to be in the administrative role. I didn’t have enough credits in Firefly to keep running the experiment a few more times, which would have been ideal. Would I have kept finding similar genders and races of the administrators?

What if I refined my prompt, adding loaded terms like “urban school” or “struggling teacher” or “disruptive student”? How would other platforms perform with the same prompt? Would there be gender and racial stereotypes about who is an administrator and who is an educator?

Peace (in explorations),
Kevin

12 Days of AI (Day Nine): Deep Fake Videos

How Does Digital Technology Affect You?
How Does Digital Technology Affect You? flickr photo by schopie1 shared under a Creative Commons (BY-SA) license

On this ninth day of the 12 Days of AI, we are looking at Deep Fake technology, examining both the possibilities and the concerns that come with AI-powered advancements in video production. We are asked to create our own Deep Fake video through a site called HeyGen, with a script created by Generative AI. I used Claude, and asked it to generate a script about jazz pianist Thelonious Monk.

I wasn’t necessarily all that impressed with the results, to be honest (but she seems pretty happy in her delivery). Still, you can see the potential, and maybe a paid account gives more flexibility and tools for voice, etc.

I followed the steps, but choices are pretty limited with a free account. I chose a host and a voice, added the script, and included a little text and a background image. I wouldn’t quite put what I did under the Deep Fake category, given all the synthetic media I used. I think of Deep Fakes as more connected to real people’s images, voices and video.

As it turns out, I have plans to chat about Deep Fakes with my students today, as part of a larger unit about technology in their lives. So I appreciated this chart, shared off the main post today:

Since the 12 Days is run by university folks, one of the ethical questions here is whether these AI videos could play a role in the delivery of education at an institution. Lord, I hope not. I can’t imagine sitting through a course with an AI Deep Fake Teacher, but I bet somewhere, some administrators are definitely imagining it (and thinking of the cost savings).

Peace (real, if we imagine it),
Kevin