The convenience and time-saving abilities that voice assistants and AI-voice bring are undeniable. But what is the cost of such benefits? What will our lives look like after widespread adoption?
The Open Voice Network is a non-profit industry association, formed to make the world of voice assistance worthy of user trust.
We’ll do that through the development of global standards for voice. Much needed, and increasingly overdue.
It’s tempting to start our conversations about standards by discussing what those standards should be.
A dispatch registry—a DNS—for voice, perhaps. Maybe standards for identification and authentication in a multi-platform world. The list goes on.
But equally important is a discussion of what those standards should do. And how those standards should come to be.
Which is all about values and ethics that will inform and guide proposed standards.
This is the first of a series of overviews—all brought to this space—of some key papers that have addressed the questions of conversational AI and AI ethics over the past two years or so. The literature is extensive, with sponsoring organizations ranging from the European Union to the Vatican.
Consider these quick-read literature reviews, using a simple “Situation-Complication-Implication” format. Shared today and in the coming weeks to inform, and to provoke thought.
With this caveat: errors of interpretation are all mine.
Today’s paper: “Toward an Ethics of AI Assistants: An Initial Framework,” by John Danaher, Ph.D., June 2018. Professor Danaher is currently a member of the faculty of the National University of Ireland, Galway.
This is about the personal use of AI assistants. Professor Danaher raises three big questions. Might AI assistants (in time) make us lazy, less smart, passive players in the game of life? Might they (in time) reduce our autonomy, and manipulate or “nudge” our decision-making? Might they allow Big Tech platforms to do so? Might they (in time) damage our interpersonal communication?
The Situation:
AI assistance is a new form of algorithmic, cognitive outsourcing. Much as we use accountants to do our taxes, or gardeners to trim hedges, we now use AI to outsource brain work. It’s also a form of automation—operating independently from human choice.
Danaher’s focus is on personal, not organizational use. And, he defines the AI assistant as any computer-coded software system/program that can act in a goal-directed manner—say, to select the best flight, appointment, or restaurant from a list of options.
The Complications:
Do AI assistants—outsourced, automated, and goal-directed—bring with them unique and distinctive ethical consequences?
Yes. AI assistants could separate us from the cognitive work of doing and deciding.
At first glance, this may be to our detriment. In time, we might be tempted to turn over all cognitive effort and deep thinking to an assistant…which, in turn, could lead us down the path to the so-called degeneration effect, with loss of memory and conceptual ability. Especially if the alternative to the doing and deciding is deeper immersion in the junk food of contemporary intellectual life.
Bottom line: the AI gets smarter; we get less so. Might we lose the ability to think deep thoughts for ourselves?
On the other hand, the separation from doing and deciding may be of benefit.
It may give us time to do other things, to increase our personal productivity. It may allow us (given the limits of personal cognitive ability) to achieve better outcomes than ever imagined. It could balance resource inequities.
Yes. AI assistants could reduce our autonomy and responsibility. And increase the opportunity for third-party manipulation.
Autonomy and responsibility—the ability to make and own one’s own decisions—are cherished values in modern societies. It is commonly believed that happiness and self-fulfillment are best served when we pursue goals of our own choosing.
Does personal AI assistance undermine personal responsibility? Does it threaten to manipulate, filter, or otherwise structure our choices, making us act for reasons or beliefs not necessarily our own?
(Ask the second question in the realm of voice, commerce, and platforms. Please.)
Automation takes away autonomy and responsibility. Cleaning a rug with a Roomba is different from vacuuming it with a Hoover, and different still from taking the rug outside, hanging it over a line, and beating it with a whisk.
Automation via AI is about the outsourcing of cognition: it replaces our own thinking with automated, outsourced thinking. We can become lazy; it can sap our ability to explore options and exercise judgment.
If we let it—or, if we operate within a proprietary walled garden of AI assistance, we could be slowly imprisoned, shackled as mere implementers of a platform’s suggestions. Here are your (limited) options; you may choose from these, and no more.
The proprietary AI assistant, using the tools of behavioral science, could gently, quietly “nudge” you into a set of preferences and beliefs about the world. And do so on a millisecond-by-millisecond basis, learning from every choice.
Of course, there is a valid counter-argument: that AI assistants can (with nudging or not) help cut through the overwhelming amount of stuff currently offered for nearly every need. They can—and will—reduce complexity.
And, there will be individuals for whom the loss of autonomy and responsibility is no big deal. For whom the loss of autonomy in a given segment of life is a blessing, not a burden. Let someone else do the grocery shopping. Let someone else choose the outfits. It’s simply not important to me.
Yes. AI assistance could reduce the frequency of, and our capacity for, interpersonal communication. The substitution of machine-to-human exchanges for human-to-human talk is not without consequences.
Worrisome? Well, there’s a level of inauthenticity: the AI assistant is choosing words for you (“sounds great!”), and the AI message is chosen for convenience, not accuracy. And there’s the scenario of an AI assistant ordering flowers for a spouse and voicing sweet words of assurance, while the assistant’s owner dallies with a mistress.
Might AI assistants encourage more thoughtlessness, more automatically-delivered white lies?
The Implications:
Here’s an outline of an ethical framework for the use of AI assistants.
When using AI assistants, be aware of:
- The potential for cognitive degeneration. In this specific situation, should you think for yourself?
- The potential threat to your personal autonomy. Be discerning. Does it provide answers or options? Does it simplify or nudge?
- The potential for diminishing interpersonal activity. Does it replace immediate, conscious engagement?