The Importance of Voice and Data Privacy

Fri, Oct 25, 2019

Read in 5 minutes

As we dive further into the age of AI-voice and its effects on our personal data, it's important to start asking the right questions. Is my data protected? How will it be used? What safeguards are in place that will keep control in the hands of the user? Consumers need to know these answers before giving their data away freely.


AI-voice will soon be a primary interface to the internet and the world of smart systems.

We’re on that path now. We talk to the internet, and it talks back. We talk to smart systems, and they talk back. In time, talk will move to dialogue and human-to-machine conversation. Conversations will become documents of record. QWERTY may fade into memory.

In time.

Voice is now in its early days. Similar to the early, browser-war days of the internet. Access is limited. Interoperability, nil. Usage is cramped and constrained in walled gardens.

And some very big questions wait for answers.

One of the largest, thorniest questions is that of voice and consumer data privacy.

Spoiler alert: we may not have the answer today, but we do know this—it goes well beyond best-in-class data security protocols or careful observance of the General Data Protection Regulation (GDPR).

Let’s start with the basics.

First and foremost: voice is a biometric identifier, not only of individuals (via a voiceprint), but also of emotion and physical and mental health. Your voice, with 99+ percent accuracy, will identify you. Voice and language patterns can be used to diagnose psychosis, mania, depression, and post-traumatic stress disorder, as well as neurological disorders such as Parkinson’s disease and Alzheimer’s. Analysis of the tone and cadence of voice can be used to infer an emotional state—confidence, happiness, or the lack thereof.
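To make the identifier point concrete, here is a minimal sketch of speaker verification (in Python; the embed() function is a hypothetical stand-in for a trained speaker-embedding model, such as an x-vector network). Every utterance yields an embedding, and a cosine-similarity comparison against an enrolled voiceprint re-identifies the speaker.

```python
# Minimal sketch of speaker verification: compare a stored "voiceprint"
# embedding against a new utterance's embedding via cosine similarity.
# embed() is a hypothetical placeholder; real systems use trained
# speaker-embedding models (e.g., d-vectors or x-vectors).
import numpy as np

def embed(audio_samples: np.ndarray) -> np.ndarray:
    # Placeholder: fakes an embedding from simple spectral statistics.
    spectrum = np.abs(np.fft.rfft(audio_samples, n=512))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def same_speaker(enrolled: np.ndarray, utterance: np.ndarray,
                 threshold: float = 0.85) -> bool:
    # Cosine similarity between unit-length embeddings;
    # a score above the threshold means "same voice."
    score = float(np.dot(embed(enrolled), embed(utterance)))
    return score >= threshold

# Usage: enroll once, then every later utterance re-identifies the speaker.
enrolled_audio = np.random.randn(16000)  # 1 second at 16 kHz (stand-in)
new_audio = np.random.randn(16000)
print(same_speaker(enrolled_audio, new_audio))
```

The implication is the one the paragraph above makes: every utterance you speak is, in effect, a fresh biometric sample.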

It’s who said it. And how it’s said.

Second: voice assistants listen constantly. That’s how they work. And that includes everything in the surrounding environment. Yes, they’re activated by “wake words,” but who has (or who should have) the right to activate? To activate and order? What happens with background conversations, the chatter at parties or family gatherings, the so-called “bycatch”? What if a voice assistant hears threats or indicators of violence?
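In practice, that listen-then-activate split looks something like the sketch below (a minimal illustration, not any vendor's actual implementation; the function names are assumptions). Audio is buffered on the device at all times and only leaves it once a wake word fires, which is also how pre-wake "bycatch" can travel along with the request.

```python
# Minimal sketch of wake-word gating: the device buffers audio
# continuously, but only forwards it once a wake word is detected.
# detect_wake_word() and send_to_cloud() are illustrative assumptions.
from collections import deque

WAKE_WORD = "computer"      # assumed wake word, for illustration
buffer = deque(maxlen=50)   # rolling buffer of recent audio chunks

def detect_wake_word(chunk: bytes) -> bool:
    # Hypothetical detector; real devices run a small on-device model.
    return WAKE_WORD.encode() in chunk

def send_to_cloud(chunks: list) -> None:
    print(f"uploading {len(chunks)} buffered chunks")

def on_audio_chunk(chunk: bytes) -> None:
    buffer.append(chunk)    # everything nearby lands here first
    if detect_wake_word(chunk):
        # Only now does audio leave the device, including whatever
        # "bycatch" the rolling buffer happened to capture beforehand.
        send_to_cloud(list(buffer))
        buffer.clear()
```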

Yes, the Big-Assistant-in-the-Sky is listening—but is it for good or bad? In their interests or yours? Might certain users, for security reasons or for the care of the elderly and home-bound, desire some level of voice surveillance?

It’s what’s being said. And what’s being heard.

Third: voice assistance is a massive, data-intensive artificial intelligence project. Think for a moment about the complexity, the number of languages and dialects, the daily improvisations of slang. Consider the enormous task of curating voice training data, of building and training algorithms to recognize words and contextual meaning. And yes, humans are and will be involved in the process.

It’s who said it. And what’s being said. And who’s reviewing it, and for what purpose.

Now add what may be the unique voice-specific twist: voice is the voluntary biometric.

Most biometric identification (from fingerprints to facial recognition) is used in an external, government, or enterprise process of authentication. Are they the persons they claim to be? Are they authorized for entry or access?

It’s different altogether with voice. We freely engage with our assistants, and the fact that it’s a biometric identifier is simply not an issue. It’s set aside, ignored, in the convenience and simplicity of easy, just-speak connection and communication. We choose voice. And when we do so, we share—every time—a 99 percent accurate identifier.

As voice-based policies and standards evolve, it may be that we’ll use this identifier as our authenticator for voice-based purchasing or for managing voice-based access by minors or guests.

We may choose to embrace it. But we cannot and must not forget: it’s a biometric identifier.

As we know, there are today a number of overlapping consumer data privacy regulations, ranging from the GDPR of the European Union to current and soon-to-be-enacted legislation across multiple U.S. states. These all speak in detail to the data privacy issues posed by voice.

Generally speaking, the regulations and legislation require that those who acquire and seek analysis of personal data must disclose the process of acquisition, its intended use, how long it will be retained, and whether it will be shared with third parties. Biometric data can only be collected and shared with third parties with explicit consent. Data that is used for artificial intelligence learning and inference must be anonymized or pseudonymized, and data sets must not be publicly available without explicit, informed consent. Entities that acquire and manage personal data must adopt and actively practice best-in-class data security.
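As a sketch of what those obligations imply in practice, consider the following illustrative data structure (the field names and logic are assumptions for illustration, not drawn from any statute): every collected voice record carries its disclosed purpose, retention period, third-party sharing list, and a consent flag that can be revoked.

```python
# Illustrative sketch of the disclosure obligations above as a data
# structure. Field names are assumptions, not regulatory language.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    subject_pseudonym: str    # pseudonymized, never the raw identity
    purpose: str              # disclosed intended use
    collected_at: datetime
    retention: timedelta      # disclosed retention period
    shared_with: list = field(default_factory=list)  # only with consent
    consent_given: bool = False

    def is_expired(self) -> bool:
        return datetime.utcnow() > self.collected_at + self.retention

    def revoke(self) -> None:
        # Consent must be withdrawable as easily as it was given.
        self.consent_given = False
        self.shared_with.clear()

record = ConsentRecord(
    subject_pseudonym="user-7f3a",
    purpose="wake-word model training",
    collected_at=datetime.utcnow(),
    retention=timedelta(days=90),
    consent_given=True,
)
```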

Knowledge and stringent observance of these regulations and legislation are essential for any enterprise or developer working in voice.

But we would argue that such knowledge and observance are just the starting point.

That’s because voice is more than data, more than words. It will become a conversation.

A give-and-take, with words and phrases laden with emotion. In a context, within an environment, within a given time frame. Inside all of that is a mountain of meaningful data.

And with that, it’s no longer just about data protection. The issue expands from security to use. Not only by bad guys on the outside, but by my provider and my brands. As such, the issue evolves—and becomes all about trust.

Like the trust that undergirds a good and close friendship.

It’s trust that you will not only protect my data—all my data—but also use it to my benefit. You won’t go behind my back. You won’t dish on me to someone else.

And you’ll be very honest and transparent. If you need to use my data to train the AI, you’ll tell me, you’ll tell me why, you’ll tell me how you’ll protect me, and you’ll ask my permission. In a clear, easy-to-understand way. Which also informs me how I can change my mind.

Consent? Of course. But the current practice of consumer consent, given all that voice is and will be, feels far too thin.

How will we earn—and continually deserve—trust?

This is but one of the issues being explored by the Open Voice Network, an industry association dedicated to artificial intelligence-enabled voice that is open: standards-based, interoperable, accessible to all and through all, and secure.

For more information, visit www.openvoicenetwork.org, or find us on LinkedIn and Facebook.