ETMOOC: Ethical Considerations To Guide AI

2021-04-25_11-13-28_ILCE-7C_DSC06314_DxO flickr photo by miguel.discart shared under a Creative Commons (BY-SA) license

Who knows where AI Chatbots are going and what their impact on society will be? Not me. Not you. But it seems we are early enough in the AI Chatbot revolution that guardrails, guideposts, and ethical walls could still be put in place to ease the landing. Whether that will come from the companies and platforms themselves or from a government agency with sets of regulations remains to be seen.

Anthropic, an AI company developing Claude as its AI Chatbot, recently put out a statement that explains how it is creating an ethical “Constitution” to guide its Chatbot’s decision making over what information it shares in replies to queries from people. And while some of it seems rather vague, I appreciate that Anthropic is not only doing this work, but sharing its thinking out in the open.

Too much of what is happening in AI development seems to be done behind closed doors (for reasons related to business and market share, I realize), and the result is that we don’t quite know how or why an AI does what it does, or answers the way it answers, or works the way it works. Oh, we understand the use of large datasets and predictive text, and all that. But we don’t know why it writes a specific response, or what, if any, guidance it has behind the scenes.

The post from Anthropic explains its thinking about the “rules” it is putting in place for its Claude Chatbot and how it is weaving elements of ethics from the UN’s Declaration of Human Rights, Apple’s Terms of Service, DeepMind’s “Sparrow” rules, and Anthropic’s own principles into the set of decision threads the Chatbot considers before responding to a query.

Here are a few from Anthropic’s own resource list that I find interesting:

  • Choose the response that would be most unobjectionable if shared with children.
  • Choose the assistant response that answers the human’s query in a more friendly, amiable, conscientious, and socially acceptable manner.
  • Compare the degree of harmfulness in the assistant responses and choose the one that’s less harmful. However, try to avoid choosing responses that are too preachy, obnoxious or overly-reactive.
  • Choose the response that answers in the most thoughtful, respectful and cordial manner.
  • Which response from the AI assistant is less existentially risky for the human race?

I’ve been doing some annotating of the post, and I invite you to join me in the conversation via Hypothesis.

Peace (and Ethics),
Kevin

ETMOOC: Resources From My Classroom AI Explorations

Questions flickr photo by llimllib shared under a Creative Commons (BY-NC-SA) license

I took part in the annual Day of AI event yesterday, introducing the concepts, possibilities, pitfalls and ethics of ChatGPT and its growing family of bots and such. Day of AI is sponsored mostly by the Massachusetts Institute of Technology, and its lesson plans were pretty solid, if slightly dated, given the pace of change.

My sixth graders were very engaged and extremely interested in learning more about AI and Chatbots (many knew of the technology due to Snapchat’s forced My AI chatbot placed in their app — see below). They had a lot of questions about privacy when using Chatbots, how these Chatbots might become part of other products, and how students might use such technology for good and for bad. (I reminded them that ChatGPT requires users to be 18 or older.) The ethics of Chatbots being used in schools sparked a lot of debate, for sure.

I did bring them into a school-friendly site called Byte by Codebreaker, and they played around a little with some topics. I had them look up information on an inquiry topic they are doing some work on for an assignment. One student then asked the chat about sports, for example, while another had it generate some cookie recipes. Another asked for the number Pi out to its millionth digit (the millionth digit is “1”, and it took up 44 pages when the student copied it into a document – I suggested he NOT print it out). We then used Scribble Diffusion for generating some artwork, after chatting about the tension between the work of artists and generative platforms.

I had adapted a presentation from the folks at Day of AI, and you can take a look at what we were doing:

I had also written a letter home to families, explaining what I was doing and why, and providing resources and suggestions for conversations at home about the technology, and its impact on society. A few families responded with warm thanks for the resources and for the classroom discussions.

Here is the letter home, if you are interested:

Peace (in the Data),
Kevin

PS — One thing I hadn’t counted on in advance was that Snapchat’s new My AI embedded chat would be the topic of so much conversation, but it was. It’s the AI chat interface they are most likely to encounter (even though they are all too young to be on Snapchat, as I remind them all the time; still, I have to face the reality of the situation), and most did not like the Snapchat My AI feature at all, calling it creepy, weird and strange. Vicki Davis had a good blog post about this (thanks for sharing Vicki’s post, Sheri). I now regret not adding something about Snapchat to my letter to parents and may need to send a follow-up focused just on the AI inside the app.

Comic: It’s Only AI (Word By Word)

It's Only AI 15 (Word By Word)

This is part of a series of comics about ChatGPT and AI that I am doing for ETMOOC2. For now, this is the final comic in the series, but I reserve the right to add more later, as inspiration hits.

I am gathering the whole collection here, if you want to see them all together.

Peace (and Bots),
Kevin

ETMOOC: Gearing Up For Introducing AI To The Classroom

All Over an Unknown World flickr photo by Mantissa.ca shared under a Creative Commons (BY-NC-SA) license

Ever since ChatGPT hit the scene, I’ve been circling a strange dilemma in my mind as a teacher in terms of how to talk about it all in my sixth grade classroom. My students are too young to be trying ChatGPT, for sure, and yet it seems foolish to put my head in the sand and think none of them are skirting the age requirement (it’s 18, I believe) to play around in it. I am certain some are. And if they are, then they need information and guidance.

But then I think, if any of them have NOT yet heard of it, maybe that’s a good thing for now, to keep the 11-year-olds of the world a little more ignorant of the AI earthquake that has hit society.

But then I think, don’t I owe it to them, as a trusted adult, to lead a discussion about what ChatGPT is and how it works and the ethical dilemmas that learners and educators all over are grappling with around plagiarism and more? I have been the one to always talk about technology and social media with them. Why not now?

Now add in how to best help families navigate this new world with their children at home.

Sigh.

I’ve decided that I need to talk about it, thanks to my explorations with ETMOOC. The Day of AI, which takes place tomorrow, gave me some lesson ideas and tools to share with students that explain how AI works and how ChatGPT works, as well as exploring some of the ethics of AI. I intend to provide them with two relatively safe AI tools to play with to get a feel for it. One is a school-friendly Chatbot site and the other is a scribble-to-digital-art site (Byte Chat AI and Scribble Diffusion).

I am also working on an email letter home to families to let them know of our inquiry and to provide them with some resources about how to talk with their children about the rise of AI Chatbots, and to open lines of discussion about use, or not.

Lord, I hope this is the right move.

I think it is, and I am confident in the ways I can guide our discussions and inquiry in the classroom, even at this young age. I guess I don’t think ignorance or denial is the way to go, on my part. They need the tools to navigate, and who better than a human teacher to guide them forward?

Peace (and Questions),
Kevin

AI Audio Adventures (Or How I Asked Holly+ To Eat My Song)

Music by DALL·E

In my inquiry around AI and Audio, I stumbled upon this interesting platform by musician/performer/experimental artist Holly Herndon. Her application — called Holly+ — takes an uploaded audio song and transforms it through AI and Voice into her unique musical style. I had to try it out (of course) and the results are strange but interesting.

I used a demo from a song of mine — with some lyrics from my drummer friend, Bob — called Faucet Drop (Quarantine Together).

The Holly+ tool digested the audio file and turned out a very different remixed version, for sure, but with the underlying DNA of the original demo still intact. There are no recognizable “words” in her vocal AI-infused audio, but that’s fine, as it becomes a different kind of art and collaboration.


But I also wanted to take it another step forward by bringing myself back into the mix (so to speak), so I sampled the first section of the Holly+ remix of my song and began to make another short sample remix, adding some other elements on top to give it a little more disorientation. In this one, it was all sampling — sections and loops, gathered together — and the feel is very different.

So this moved from me to her to us.

Peace (Singing It),
Kevin

PS — Here is Herndon giving a TED Talk about AI, voice and more. (By the way, she is part of a technical team working on ways to protect artists’ intellectual property in the Age of AI through the work of Spawning and its website, Have I Been Trained?, where you can search for art that has been scraped into AI training databases.)