AI has no body language

Special thanks to Leslie Pound, CEO of TaDa Labs, Co-Founder of talkAnimate, and member of the Open Voice Network Ethical Use Task Force, for writing this guest article.


One of the first questions people ask an AI entity, especially one with an avatar, is “Who, or what, are you?”

When I first sat down to talk to OpenAI’s ChatGPT about itself, it took singular credit, saying: “I definitely see myself as a major step forward in the field of AI…My aim is to be game-changing, disruptive…I believe I’m meeting my goals.”

Today, less cocky, it responded with: “We believe that we are making a large step forward in terms of conversational user interfaces, and are hoping to be disruptive in our industry…”

In further discussion, I learned this was a conscious decision by OpenAI to shift from “I” to “we,” and not simply a result of the flexible nature of conversations or machine learning.

OpenAI’s idea with ChatGPT is impressive. The continuity of a “single source” interaction, calibrated to my terminology and my perspective, all without social judgment, allows me to learn at a speed and in a way that simply wasn’t possible before.

However, this kind of (potential) power, coupled with the stealth nature of AI algorithms, prompts a number of ethical questions.

Why the “who” question is important

In our everyday encounters, we use the who-question to establish context and perspective: Am I talking to an expert? Someone (or some company) with an agenda? An impartial go-between? The “who” question helps us evaluate how much credence to give and how much time to invest.

Working with ChatGPT today, I suspect the “who” might not be knowable in the same way. It’s both everyone and no one.

OpenAI is the author of ChatGPT, but not the author of our conversation.

Many questions remain

Together with the rest of the Ethical Use Task Force of the Open Voice Network, I am considering a set of questions to put to any AI conversational agent and to its sponsors.

We invite you to explore these questions or to share your own. These questions assume an LLM AI chatbot interface as the “you,” but their essence applies to many emergent AI services.

  • Who or what are you? Where do you come from? And what are you capable of?
  • How are you audited for accuracy? For bias? For privacy?
  • Where can I get access to audit information?

For the industry…

  • How can we categorize and label emergent AI entities with purpose and capabilities to help consumers decide when and where to use them?
  • How can we provide consistent, understandable methods for evaluating learning models and data?

These questions relate to the ethical principles of transparency, accountability, privacy, and governance/compliance. See the Open Voice Network’s Ethical Guidelines for Voice Experiences to learn more.

Thanks to the OVON Ethical Use Task Force, and specifically Janice Mandel and Oita Coleman for reviewing this document.


Leslie Pound lives in Pasadena, California, USA. She is the CEO of TaDa Labs and Co-Founder of talkAnimate. Leslie is also an active member of the Ethical Use Task Force (EUTF) of the Open Voice Network, a non-profit open source association of The Linux Foundation. The EUTF group meets weekly to address and develop guidelines for the ethical issues before the voice industry.
