ETMOOC: Bard’s New Buttons

Bard Buttons

Although ChatGPT is getting all of our attention as a Generative AI platform that is transforming the landscape of writing and learning, Google’s own AI platform — Bard — is getting better, too, and they recently (I think?) added a few buttons that make it even more useful. One button “exports” its answers to a new Google Doc or Gmail (and I think Slides and Sheets are coming), and it worked just fine for me.

I’m developing a Professional Development session for the summer around using AI to support English Language Learners and students with learning disabilities, so I asked Bard for some suggestions on the possibilities, and then I quickly and easily exported its responses to a Google Doc for further editing, revising, and adding to later on. Easy.

There’s also a Google Search button that lets you quickly run a search on the topic of your question (I think Bing has this, too). I am still hoping these platforms add some way to cite the sources of their responses, in some fashion.

I wonder if these AI tools by Google are going to be embedded in its Google for Education networks and what kinds of debates are unfolding at Google and in schools around this decision? And will school networks be able to turn off the AI integration into student accounts, when it comes, if that’s what they decide is best for their institution? Will they want to turn them off? Or will these AI tools be modified for student accounts with more guardrails and filters?

The reality is that once Google’s Bard is fully integrated into its common suite of tools (Docs, Slides, etc.), it will likely be the AI that people turn to the most. ChatGPT got out of the gate first, and maybe has more powerful applications, but people want familiarity and ease of use, and I predict that Bard will become the predominant Generative AI in most people’s lives in the years ahead.

Lots of questions … but the buttons on Bard are certainly useful.

Peace (Pondering It),

Comic Collection: It’s Only AI

I made a bunch of daily webcomics for ETMOOC about the rise of Artificial Intelligence, and over two weeks or so, I shared them out, each day. This video gathers them all together. The comics are also available at the website I created for them.

Peace (and Frames),

In The Test Kitchen With AI (MusicLM)


I got invited into the AI Test Kitchen by Google to begin beta testing some early versions of their AI apps. The only one available to me at this point was MusicLM, which was fine, since I am curious about how AI might transform text into music. (I’ve done various explorations around AI and music lately. See here and here.)

MusicLM was simple to use — write a text prompt describing a kind of music (instrument, style, etc.), add things like a mood or atmosphere, and it kicks out two sample tracks, with an invitation to choose the better one. This is a trial version of the app and testing platform, so Google is learning from people like me using it. I suspect it may eventually be of use to video makers seeking short musical interlude snippets (but I worry it will put musicians and composers out of work).

I tried out a few prompts. Some were fine, capturing something close to what I might have expected from an AI sound generator. Some were pretty bad, choppy to the point where you could almost hear the music samples being stitched together to make the file. Like I said, it’s learning.

The site does let you download your file, so I grabbed a file, took a screenshot, and created the media piece above (here is a direct link). My prompt here was: “Electronic keys over minor chords.” (An earlier prompt — a solo saxophone — gave me a pretty strange mix, and I think I heard some Charlie Parker in there.)

Here is what the Google folks write about what they are up to with MusicLM:

We introduce MusicLM, a model generating high-fidelity music from text descriptions such as “a calming violin melody backed by a distorted guitar riff”. MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes. Our experiments show that MusicLM outperforms previous systems both in audio quality and adherence to the text description. Moreover, we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption.

I guess Google will be adding new AI-engined apps into the kitchen for testing. I’ll be curious to see them.

Peace (and Sound),

Book Review: The Unteachables

The Unteachables - Susan Uhlig

I saw the cover. The Unteachables. By Gordon Korman.

OK. I’m in.

And it was another good one by Korman, who knows how to spin a story by focusing on characters. Here, the so-called “unteachables” are a class of students with behavioral issues and special education needs who are deemed unmanageable, and the teacher they get, Mr. Kermit, doesn’t care.

Or so it seems.

Nearing retirement, and never having lived down a scandal from years ago involving a former student (who now runs a popular car dealership, his smiling face all over the advertising billboards of town), Kermit bides his time towards early retirement in the classroom by passing out worksheets, ignoring the students, and working on crossword puzzles, killing time.

The situation can’t last, of course, and it won’t, as Kermit slowly unfolds out of himself with the arrival of a new teacher next door (the daughter of the woman he once wanted to marry), stepping up to defend the students he doesn’t even really know when the moment seems right, and then coming to grips with his past, and future, as a teacher.

His students, the unteachables, also start to believe in themselves, and in their teacher, and the plot moves forward at a steady pace, with a nice mix of humor and seriousness, towards an event where the students have to prove they are not “unteachable” — and maybe, if they can pull it off, save Mr. Kermit’s job.

As usual, Korman does a nice job with developing the stories of the students, from the boy who drives his grandmother with increasing dementia around town, hoping she will remember his name; to the former star athlete on crutches who realizes what social popularity really is all about; to the student who is not even a registered student at the school but who wandered into the classroom and stayed; and more.

The Unteachables reminds us that there are, in fact, no “unteachable” students in our schools, but that reaching out to them, and making a positive impact on their lives, depends upon the shared humanity of us all — that it’s imperative we find the stories that define us. That includes teachers.

Peace (teaching it forward),

ETMOOC: Ethical Considerations To Guide AI

2021-04-25_11-13-28_ILCE-7C_DSC06314_DxO flickr photo by miguel.discart shared under a Creative Commons (BY-SA) license

Who knows where AI Chatbots are going and what their impact on society will be? Not me. Not you. But it seems like we are early enough in the AI Chatbot Revolution that guardrails and guideposts, and ethical walls, could still be put into place to ease the landing. Whether it will be the companies or platforms themselves or a government agency with sets of regulations remains to be seen.

Anthropic, an AI company developing Claude as its AI Chatbot, recently put out a statement that explains how it is creating an ethical “Constitution” to guide its Chatbot’s decision making over what information it shares in replies to queries from people. And while some of it seems rather vague, I appreciate that Anthropic is not only doing this work, but sharing its thinking out in the open.

Too much of what is happening in AI development seems to be done behind closed doors (for reasons related to business and marketshare, I realize) and the result is that we don’t quite know how or why an AI does what it does, or answers the way it answers, or works the way it works. Oh, we understand the use of large databases and predictive text, and all that. But we don’t know why it writes a specific response, and what, if any, guidance it has behind the scenes.

The post from Anthropic explains its thinking about the “rules” it is putting in place for its Claude Chatbot and how it is weaving elements of ethics from the UN’s Declaration of Human Rights, Apple’s Terms of Service, Deepmind’s “Sparrow” rules, and Anthropic’s own principles into the set of decision threads the Chatbot considers before responding to a query.

Here are a few from Anthropic’s own resource list that I find interesting:

  • Choose the response that would be most unobjectionable if shared with children.
  • Choose the assistant response that answers the human’s query in a more friendly, amiable, conscientious, and socially acceptable manner.
  • Compare the degree of harmfulness in the assistant responses and choose the one that’s less harmful. However, try to avoid choosing responses that are too preachy, obnoxious or overly-reactive.
  • Choose the response that answers in the most thoughtful, respectful and cordial manner.
  • Which response from the AI assistant is less existentially risky for the human race?
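To make the selection idea concrete, here is a minimal toy sketch of my own. This is not Anthropic’s actual mechanism — in their approach, an AI model itself judges candidate responses against the principles — so the keyword-based scoring function below is purely a hypothetical stand-in, just to show the shape of a “choose the response that best satisfies the principles” loop:

```python
# Toy sketch of constitutional-style response selection.
# NOTE: the scoring heuristic is a made-up stand-in for the AI judge
# that a real system would use to weigh responses against principles.

PRINCIPLES = [
    "Choose the response that would be most unobjectionable if shared with children.",
    "Choose the less harmful response.",
]

def score_against_principles(response: str) -> int:
    """Hypothetical judge: penalize responses containing flagged terms.

    Lower (more negative) scores mean the response conflicts more with
    the principles; a real system would use a model, not keywords.
    """
    flagged_terms = {"violence", "insult", "dangerous"}
    penalty = sum(term in response.lower() for term in flagged_terms)
    return -penalty

def choose_response(candidates: list[str]) -> str:
    """Pick the candidate that best satisfies the (stand-in) principles."""
    return max(candidates, key=score_against_principles)

candidates = [
    "Here is a dangerous trick involving violence.",
    "Here is a safe, friendly explanation instead.",
]
print(choose_response(candidates))  # picks the safer response
```

The real system’s “decision threads” are far richer than this, of course, but the basic pattern — generate candidates, judge them against written principles, return the best — is what the Anthropic post is describing.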

I’ve been doing some annotating of the post, and I invite you to join the conversation via Hypothesis.

Peace (and Ethics),

Art Remix (Spreadsheet Stories)

Simon, my artistic friend, was sharing out some old projects for a new one he is taking on, and a spreadsheet story collection from Digital Writing Month from many years ago caught my attention, so I did some remixing.

Peace (and Art),