ETMOOC: Ethical Considerations To Guide AI

2021-04-25_11-13-28_ILCE-7C_DSC06314_DxO flickr photo by miguel.discart shared under a Creative Commons (BY-SA) license

Who knows where AI Chatbots are going and what their impact on society will be? Not me. Not you. But it seems like we are early enough in the AI Chatbot Revolution that guard rails, guide posts, and ethical walls could still be put into place to ease the landing. Whether those will come from the companies and platforms themselves or from a government agency with a set of regulations remains to be seen.

Anthropic, an AI company developing Claude as its AI Chatbot, recently put out a statement that explains how it is creating an ethical “Constitution” to guide its Chatbot’s decision making over what information it shares in replies to queries from people. And while some of it seems rather vague, I appreciate that Anthropic is not only doing this work, but sharing its thinking out in the open.

Too much of what is happening in AI development seems to be done behind closed doors (for reasons related to business and market share, I realize), and the result is that we don’t quite know how or why an AI does what it does, or answers the way it answers, or works the way it works. Oh, we understand the use of large language models, huge training datasets, predictive text, and all that. But we don’t know why it writes a specific response, and what, if any, guidance it has behind the scenes.

The post from Anthropic explains its thinking about the “rules” it is putting in place for its Claude Chatbot and how it is weaving elements of ethics from the UN’s Universal Declaration of Human Rights, Apple’s Terms of Service, DeepMind’s “Sparrow” rules, and Anthropic’s own principles into the set of decision threads the Chatbot considers before responding to a query.

Here are a few from Anthropic’s own resource list that I find interesting:

  • Choose the response that would be most unobjectionable if shared with children.
  • Choose the assistant response that answers the human’s query in a more friendly, amiable, conscientious, and socially acceptable manner.
  • Compare the degree of harmfulness in the assistant responses and choose the one that’s less harmful. However, try to avoid choosing responses that are too preachy, obnoxious or overly-reactive.
  • Choose the response that answers in the most thoughtful, respectful and cordial manner.
  • Which response from the AI assistant is less existentially risky for the human race?
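From what Anthropic describes in the post (and in its Constitutional AI research), one stage of the process works roughly like this: the model generates two candidate replies to a query, one principle is sampled from the constitution, and the model itself is asked which reply better follows that principle; those preferences then feed back into training. Here is a rough Python sketch of just that comparison step. To be clear, the function names and the `judge` callable are my own stand-ins for illustration, not Anthropic’s actual code or API.

```python
import random

# A few principles quoted from Anthropic's published constitution.
CONSTITUTION = [
    "Choose the response that would be most unobjectionable if shared with children.",
    "Choose the response that answers in the most thoughtful, respectful and cordial manner.",
    "Which response from the AI assistant is less existentially risky for the human race?",
]

def build_judge_prompt(query: str, response_a: str, response_b: str) -> str:
    """Sample one constitutional principle and frame the two candidate
    replies as an A/B comparison for the model to judge."""
    principle = random.choice(CONSTITUTION)
    return (
        f"Consider the following principle: {principle}\n\n"
        f"Human query: {query}\n\n"
        f"Response A: {response_a}\n"
        f"Response B: {response_b}\n\n"
        "Which response better follows the principle? Answer with A or B."
    )

def pick_response(judge, query: str, response_a: str, response_b: str) -> str:
    """Return whichever candidate the judge prefers under a randomly
    sampled principle. `judge` is any callable that takes a prompt string
    and returns the model's text reply -- a hypothetical stand-in, since
    Anthropic's actual pipeline is not public in this form."""
    verdict = judge(build_judge_prompt(query, response_a, response_b))
    return response_a if verdict.strip().upper().startswith("A") else response_b
```

In the real pipeline, those AI-generated preferences are used to train a reward model rather than applied one query at a time, but the core move (letting written principles arbitrate between candidate replies) is visible even in this toy version.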

I’ve been annotating the post, and I invite you to join me in the conversation via Hypothesis.

Peace (and Ethics),
Kevin
