Register Now for the Voice & AI Conference in Washington, DC – 5-7 September 2023

Join the Open Voice Network for a Special Pre-Conference Workshop on 5 September

You want to use – or want to sell – voice and the remarkable tools of generative AI. Standing before you: legal, security, and compliance officers who know the latest draft AI regulations (EU AIA) and guidelines (NIST) word for word. To them, you represent risk – and if you’re not knowledgeable, more risk than they’ll accept.

Join the Open Voice Network and an expert team of advisors for a two-hour briefing on what you need to know to minimize risk for your clients and your business. You’ll get the latest from Brussels and Washington, learn the critical terms and definitions, and take the first steps toward a plan for better, more trustworthy business in this new world. As a special bonus, we will offer a one-hour follow-up consultation to help attendees outline their risk mitigation plans!

Workshop Overview

  • Two hours
  • Maximum 40 participants
  • Participants seated at tables for notetaking (preferably round tables of six)
  • Raised platform for speakers and panel discussions
  • Screen and projector for PowerPoint presentations


Timing

  • Call time: 2:45PM ET. Please note that we may ask for your help in quickly resetting the room after the prior presentation.
  • Run of Show:
    • 3:15 – 3:25 Intro / Purpose (Oita Coleman)
    • 3:25 – 3:30 Transition / Intro of first session
    • 3:30 – 3:45 Framing the FUD (Jon Stine / Christian Wuttke / James Poulter)
    • 3:45 – 3:50 Transition / Intro of next session
    • 3:50 – 4:20 Navigating the Complex Legal Terrain (Brenda Leong)
    • 4:20 – 4:25 Transition / Intro of next session
    • 4:25 – 5:00 TrustMark Initiative / Risk Mitigation Working Session (Emily Banzhaf / Valeria Barbero / Hans van Dam)
    • 5:00 – 5:15 Closing / Call to Action (Oita Coleman)


Workshop Details

In the workshop, attendees will gain a working understanding of:

  • The critical perspectives, terms, and definitions within the EU AI Act and the NIST AI Risk Management Framework (RMF) as they apply to conversational AI
  • The ethical/legal perspectives of IP ownership and copyright (including synthetic voice)
  • Effective measures that organizations can take to prepare themselves for the regulatory landscape
  • The role of these regulations and principles in fostering increased trust within the field of conversational AI
  • The significance of upholding ethical principles of privacy, inclusivity, accountability, transparency, sustainability, and compliance
  • Methods and approaches to actively engage and contribute, ensuring that all voices are given the opportunity to be heard
 
Upon completion of the workshop, attendees will begin to:
 
  • Assess the risk they pose to clients and potential partners – at multiple levels
  • Develop a risk mitigation strategy specific to their technology and anticipated use
  • Communicate why and how their solutions/proposals are trustworthy in the eyes of the EU AIA
  • Formulate an organizational “TrustMark” program of education and ongoing assessment
 

The workshop will be divided into three parts:

1. Framing the FUD (3:30 – 3:45) – Stine / Wuttke / Poulter

  • This sub-session is scheduled for 15 minutes and will be presented as a conversation between OVON Executive Director Jon Stine and two expert practitioners of voice and conversational AI: James Poulter of Vixen Labs and Christian Wuttke of Schwarz Gruppe. Jon will open, manage, and close the session.

  • The purpose of this session is to:
    • Concisely and clearly identify the legal and ethical issues, and the developer and user implications, now surrounding the sale, deployment, and use of today’s conversational AI technology – or, in simple terms, to “frame the FUD.”
    • Foreshadow the legal advice to be provided by Brenda Leong, Esq., and the risk mitigation strategies to be presented by the TrustMark team.

2. Navigating the Complex Legal Terrain (3:50 – 4:20) – Leong

  • This sub-session is scheduled for 30 minutes and will be a presentation and group discussion led by legal expert Brenda Leong.

  • The purpose of this session is to:
    • Explore the intricate aspects of intellectual property ownership and copyright within the realm of large language models (LLMs) and generative AI.
    • Define the risks associated with generative AI and LLMs: misinformation and fake content, ethical and moral concerns, privacy violations, and harm to reputation or brand.
    • Define truth vs. validity vs. accuracy as they relate to AI-generated content.

  • Workshop attendees will be led through an exercise with predetermined prompts to show how results vary from person to person and how misinformation and bias can emerge.

  • This will lead into a discussion of intellectual property and ownership rights. What risks does this pose to the company, its customers, its stakeholders (including employees), and its brands? Why is this worth all the risk? What do organizations need to do to protect themselves from legal liability in light of the EU AI Act and NIST AI RMF?

3. Forging Your Mitigation Strategy – TrustMark Initiative / Risk Mitigation Working Session (4:25 – 5:00) – Banzhaf / Barbero / van Dam

  • The Open Voice Network TrustMark Initiative can play a significant role in helping organizations develop a risk mitigation strategy in the context of voice and conversational AI development and deployment.

  • By following the TrustMark’s standards, organizations can proactively address potential risks and ensure compliance with emerging regulations, including the EU AI Act and NIST AI RMF.

Here’s how the TrustMark Initiative could assist organizations in their risk mitigation efforts:

  • Ethical Principles: The initiative has outlined a set of ethical principles that describe the characteristics of trustworthy conversational AI technology: Privacy, Transparency, Accountability, Inclusivity, Sustainability, and Governance/Compliance.

  • Clear Guidelines: The Ethical Guidelines for Voice Experiences provide clear guidance and recommendations for developing trustworthy voice and conversational AI systems. These guidelines can serve as a foundation for organizations to align their practices with ethical and legal requirements.

  • Educational Resources: The TrustMark Initiative will provide educational resources, workshops, and training sessions to equip participants with the knowledge needed to integrate responsible practices into their development processes, thus reducing the risk of unintended consequences.

  • Assessment Tools: The ethical self-assessment tool and maturity model provide a standardized, well-defined, and practical approach for organizations to conduct conformity assessments against the requirements of the AI frameworks – focused specifically on conversational AI products and services. This voluntary validation can serve as a signal of trust to users, partners, and stakeholders, thereby mitigating the risk of reputational damage due to unethical AI practices.
 

Open Discussion – Forging Your Risk Mitigation Strategy

Engaging participants in a group discussion to start outlining their own risk mitigation strategies encourages critical thinking and the practical application of what they have learned:
 
Potential Discussion Questions

  • What are the major fears, concerns, and issues facing you and your plans/proposals? What objections do you hear from stakeholders in, say, marketing, IT, data security, and legal? What is the #1 fear of your senior decision-makers in regard to conversational AI and/or voice?
  • As you listened to today’s conversations, what holes, areas of concern, or weaknesses do you see in current policies or practices? Which might be most important?
  • What are the main AI applications your organization is currently developing or using? What potential risks or challenges are associated with each AI application? How might these risks intersect with the requirements of the EU AI Act, NIST AI RMF, or other global frameworks?
  • How will you continuously monitor the evolving regulations and guidelines, such as the EU AI Act and NIST AI RMF, to ensure ongoing compliance?
  • How could non-compliance with these ethical frameworks affect your customers, users, and stakeholders?
  • How can participants begin to use the tools provided by the TrustMark Initiative?
  • What barriers exist within your organization that may hinder adoption?
  • What roles within your organization should be engaged to implement the self-assessment tools?
  • Who within your organization should participate in the training course?
  • How can you be an advocate within your organization? What support would you like from OVON to guide you?


The Future – Moving Towards a Risk Mitigation Plan

What additional components would participants like to see from the TrustMark Initiative? Example suggestions include:

  • Guidance on developing an ethical framework
  • Best practices for data privacy and security
  • Guidelines on mitigating bias and fairness concerns
  • Recommendations for transparent and ethical design
  • Mapping of OVON guidance with existing regulatory frameworks
  • Additional educational resources and workshops
  • Specific guidance on creating a risk mitigation strategy

Register Now and Use Code OVON20 for 20% Off Your Registration

The Risk Mitigation for Conversational AI workshop will take place on Tuesday, 5 September at 3:15PM ET during the pre-conference portion of the event. Tap the button below to register for the entire conference.

Use code OVON20 for 20% off your registration fee!

Don't Miss These Expert Advisors

The Open Voice Network is excited to welcome a number of experts to help lead our two-hour session. Speakers participating in our Risk Mitigation for Conversational AI workshop at the Voice & AI Conference include:

Oita Coleman

Senior Advisor, Open Voice Network

Jon Stine

Executive Director, Open Voice Network

Hans van Dam

Founder, Conversation Design Institute

James Poulter

CEO, Vixen Labs

Christian Wuttke

Chat & Voice Technology Lead, Schwarz Gruppe

Brenda Leong

Partner, Luminos.Law

Valeria Barbero

Global Client Lead, Mother Tongue

Emily Banzhaf

Content Designer, WillowTree

About OVON

The Open Voice Network (OVON), an open source association of The Linux Foundation, seeks to make voice technology worthy of user trust – a task of critical importance as voice emerges as a primary, multi-device portal to the digital and IoT worlds, and as independent, specialist voice assistants take their place next to general-purpose platforms. The Open Voice Network will achieve its vision through the communal development and adoption of industry standards and usage guidelines, industry education and advocacy initiatives, and the development and documentation of voice-centric value propositions. As a directed fund of The Linux Foundation, OVON enjoys access to the expertise and shared legal, operational, and marketing services of The Linux Foundation, a world leader in the creation of open source projects and ecosystems.
