Regulate? Why, and If So, How?

And so it came to pass that Sam Altman went to Washington earlier this month to recommend the regulation of conversational artificial intelligence.

Mr. Altman’s congressional testimony on 15 May followed, by several weeks, the publication of the European Union’s proposed regulations on artificial intelligence. It was also accompanied by public cries for action, ranging from vague pleas to do something to recommendations for an international agency (along the lines of the International Atomic Energy Agency or the International Civil Aviation Organization) that might govern the use of conversational artificial intelligence.

Of the many reasons for regulation, one is paramount: fear of conversational AI tools in the wrong hands, and of the misinformation tsunami that would follow, washing even more distrust into society, with deadly consequences.

But there are other good reasons as well.

Given this, what should you and I do?

I’d argue that it’s impermissible to ignore this topic, to dismiss it as too difficult, or to hope that someone, somewhere, in some position of power does something about it, as long as it doesn’t damage monetization plans.

If you’re in this industry, you’re both part of the problem and part of the solution. We need not only to think about it, but to take tangible steps, perhaps small today, perhaps larger tomorrow, to do something about it.

Allow me to suggest four things you can do today:

1. Withdraw your straw – at least for the moment – from the large language model Kool-Aid.

Why? Because more important than instant creativity and a blizzard of ideas is trust in information accuracy – and its business partner, actionability.

That’s what you and I need day in and day out. It’s what responsible enterprises need day in and day out. Information that can be acted upon with confidence. Trust that you, I, the reader, or the listener are not about to be scammed or flimflammed.

I’m not a Conversational AI Luddite, ready to destroy the looms and burn the factories. Hell no. I love what generative AI can and will do.

However: accuracy and trustworthiness are where the focus needs to be. I want my queries answered with information from trusted, reviewed, and cited sources. I want my health advice from the Mayo Clinic, not from an online bro.

A friend used this analogy:

Language models are like pools of water from which we drink. The larger the pool (and a large language model is the equivalent of a North American Great Lake), the less pure the water will be. You could get a flow of raw sewage here, a few toxins there, rotting flesh on the bottom, and some run-off from a polluting factory over there.

Yes, it can refresh without danger, but only if you take steps to make it so: if you have the filters, chemicals, and processes to make it safe.

On the other hand, you could dip your straw into a trustworthy source of H2O. (Domain- or brand-specific language models, anyone?)

Which leads us to number two.

2. Applaud and support those conversational AI developers – the humans behind the technology – who take this topic very seriously.

Developers committed not only to further innovation, but also to putting in place the filters, chemicals, and processes to keep disinformation at bay.

Let’s applaud and support firms that prioritize and enable source recognition and citation. Let’s applaud firms that respect privacy, and provide opt-in and opt-out options. Let’s applaud firms that are doing their best to address issues of intellectual property and copyright.

Let’s applaud OpenAI, which disallows the use of its models for a host of activities, requires consumer-facing uses to include a disclaimer, and recently stepped forward to request that a Washington, D.C. firm withdraw its claim (and plans) to use ChatGPT to boost productivity in “the multi-billion dollar lobbying and advocacy industry.”

But let’s also acknowledge that no well-meaning technology firm will be able to police every corner of the digital universe.

There are plenty of individuals who today and tomorrow will profit from disinformation. The best of corporate intentions will not make them go away.

And thus, point three:

3. If there’s reason for regulation, how might we do it?

Read the superb paper authored by European academics Philipp Hacker, Andreas Engel, and Marco Mauer: Regulating ChatGPT and other Large Generative AI Models.

Yes, it’s a thick read, but this is a thick topic. It’s worth your time and the thoughts it will provoke.

The authors endorse regulation, but with a different twist: with some exceptions, not as proposed in the draft EU AI regulations.

At the risk of too much simplification, here’s their argument:

Let’s not try to regulate the technology. It’s moving too fast; today’s regulatory definitions will be obsolete tomorrow, may constrict desired innovation and competition, and – in reality – may be impossible to implement and manage.

Instead, it’s best that we focus on the use of the technology – applying, in many cases, existing, technology-neutral laws designed to prevent discrimination and to protect privacy.

A focus on user behavior starts with the principles of transparency, accountability, and responsibility, applied across the value chain of any generative AI language model. That value chain comprises five personas (with users counted as two types):

  • Developers: the entity originally creating and (pre-) training the model. Examples: OpenAI, Stability AI, Google.

  • Deployers: the entity fine-tuning the model for a specific use case.

  • Users, of two types: professional users (an entity using AI output for professional purposes, as defined in EU consumer law), and non-professional users (an entity using AI output for non-professional purposes). An example of the former: the use of a generative AI language model to develop an academic paper. Of the latter: a parent using generative AI to develop birthday party invitations.

  • Recipients: the entities consuming the product offered by the user – at the receiving, passive end of the pipeline. Often an individual consumer, but recipients may also include a company, NGO, administrative agency, court of law, or legislator.


The focus of regulation should fall on deployers and users. They’re the ones who determine the purpose and intended audience of the output. The responsibility is theirs; the potential liability must be theirs.

There are regulations and laws now in place that could be applied to generative AI language models to address questions of discrimination and bias and data privacy. There are policies in place at leading development houses to monitor and manage disinformation.

Given all that, here's what's needed:

  • Transparency requirements for developers, deployers, and users: for developers and deployers, the provenance and curation of training data and mitigation strategies for potentially harmful content. For users, the disclosure of what public content was generated by or adapted from one or more generative AI language models, supported by standardized watermarks (a hypothetical sketch of such a disclosure follows this list).

  • Staged release risk management by deployers and users – especially when a particular language model may be used in higher-risk situations. What it means: release the next model privately, with access to security researchers and third-party stakeholders. Test, trial, revise. Rinse and repeat.

  • Non-discrimination and training data. For developers and deployers. It’s too important a risk to be delegated to users; audits of training data representation must be pursued upstream.

  • Content moderation. For developers and deployers. This will require trusted flaggers: individuals who can identify violations of use policies.
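Since “standardized watermarks” for disclosure are not yet a settled standard, here is a minimal, hypothetical sketch of what a machine-readable disclosure record attached to AI-generated content might look like. The field names and structure are illustrative assumptions of mine, not any existing schema, watermarking standard, or vendor API.

```python
# A hypothetical sketch of a machine-readable disclosure record for
# AI-generated content. Field names are illustrative only; no existing
# watermarking standard or API is being described here.
import hashlib
import json
from datetime import datetime, timezone


def build_disclosure(content: str, model_name: str, deployer: str) -> dict:
    """Bind a piece of generated content to a declaration of its AI origin."""
    return {
        "generated_by_ai": True,
        "model": model_name,        # the underlying language model
        "deployer": deployer,       # the entity that fine-tuned or operated it
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "disclosed_at": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    draft = "Press release text produced with a generative language model."
    record = build_disclosure(draft, "example-llm", "Example Deployer Inc.")
    print(json.dumps(record, indent=2))
```

A real watermarking scheme would embed this kind of provenance signal in the content itself rather than alongside it, but the underlying point is the same: disclosure only works if the record is standardized and verifiable by recipients.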

4. Endorse the TrustMark Initiative

The TrustMark Initiative is where you can state your and your firm’s commitment to doing the right thing in the emerging world of voice and generative AI language models.

It’s where, this summer, you will find an online course on AI ethics (offered through the Linux Foundation) and an ethical self-assessment tool for deployers and users.

It’s where you will find a committed community of practitioners, ethicists, and academics.

It’s where you will find a focus on voice (NLP-NLU-NLG) and conversational AI.

Enterprise decision-makers – the recipients of deployer and user services – increasingly demand proof of ethical compliance as investments are considered and purchase orders written.

The TrustMark Initiative helps you ask the right questions. Points you to best-in-class answers. And is an important component in your competitive differentiation.

In a journey of a thousand miles, this is a very important first step.

Join us.
