Policy makers: Please don’t fall for the distractions of #AIhype

Emily M. Bender
Mar 29, 2023

Below is a lightly edited version of the tweet/toot thread I put together on the evening of Tuesday, March 28, in reaction to the open letter put out by the Future of Life Institute that same day.

Photo of a white dog standing on its hind legs looking up eagerly at a tree.
Photo by Pamela licensed under CC BY 2.0

Okay, so that AI letter signed by lots of AI researchers calling for a “Pause [on] Giant AI Experiments”? It’s just dripping with #AIhype. Here’s a quick rundown.

The letter can be found here: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

First, for context, note that URL? The Future of Life Institute is a longtermist operation. You know, the people who are focused on maximizing the happiness of billions of future beings who live in computer simulations.

For background on longtermism, see: https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

So that already tells you something about where this is coming from. This is gonna be a hot mess.

There are a few things in the letter that I do agree with; I’ll try to pull them out of the dreck as I go along. With that, into the #AIhype. It starts with “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]”.

Screencap of first para of open letter, beginning: “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.“
Screencap from the open letter. This is the first paragraph, accessible at the link above.

Footnote 1 there points to a lot of papers, starting with Stochastic Parrots. But in that paper, we are not talking about hypothetical “AI systems with human-competitive intelligence.” We’re talking about large language models.

And as for the rest of that paragraph: Yes, AI labs are locked in an out-of-control race, but no one has developed a “digital mind” and they aren’t in the process of doing that.

Could the creators “reliably control” #ChatGPT et al? Yes, they could — by simply not setting them up as easily accessible sources of non-information poisoning our information ecosystem.

Could folks “understand” these systems? There are plenty of open questions about how deep neural nets map inputs to outputs, but we’d be much better positioned to study them if the AI labs provided transparency about training data, model architecture, and training regimes.

Next paragraph:

Screencap of 2nd para, beginning: “Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to …”
Screencap of second paragraph of the open letter

Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the “Sparks paper” and OpenAI’s non-technical ad copy for GPT-4. ROFLMAO.

On the “sparks” paper, see:
https://twitter.com/emilymbender/status/1638891855718002691?s=20

On the GPT-4 ad copy, see:
https://twitter.com/emilymbender/status/1635697381244272640?s=20

And on “generality” in so-called “AI” tasks, see: Raji et al. 2021, “AI and the Everything in the Whole Wide World Benchmark,” NeurIPS 2021 Track on Datasets and Benchmarks.

I mean, I’m glad that the letter authors & signatories are asking “Should we let machines flood our information channels with propaganda and untruth?” but the questions after that are just unhinged #AIhype, helping those building this stuff sell it.

Okay, so the letter calls for a pause, something like a truce amongst the AI labs. Maybe the folks who think they’re really building AI will consider it framed like this?

Screencap, starting: “Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium. AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are …”
Screencap of 3rd & 4th paragraphs of the open letter

Just sayin’: We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this headlong rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about “too powerful AI”.

Instead: They’re about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).

They then say: “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Uh, accurate, transparent and interpretable make sense. “Safe”, depending on what they imagine is “unsafe”. “Aligned” is a codeword for weird AGI fantasies. And “loyal” conjures up autonomous, sentient entities. #AIhype

Some of these policy goals make sense:

Screencap of 7th para, beginning: “In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem;”
Screencap of 7th paragraph of the linked letter

Yes, we should have regulation that requires provenance and watermarking systems. (And it should ALWAYS be obvious when you’ve encountered synthetic text, images, voices, etc.)

Yes, there should be liability — but that liability should clearly rest with people & corporations. “AI-caused harm” already makes it sound like there aren’t *people* deciding to deploy these things.

Yes, there should be robust public funding, but I’d prioritize non-CS fields that look at the impacts of these things over “technical AI safety research”.

Also “the dramatic economic and political disruptions that AI will cause”. Uh, we don’t have AI. We do have corporations and VCs looking to make the most $$ possible with little care for what it does to democracy (and the environment).

Policymakers: Don’t waste your time on the fantasies of the techbros saying “Oh noes, we’re building something TOO powerful.” Listen instead to those who are studying how corporations (and governments) are using technology (and the narratives of “AI”) to concentrate and wield power.

Start with the work of brilliant scholars like Ruha Benjamin, Meredith Broussard, Safiya Noble, Timnit Gebru, Sasha Costanza-Chock and journalists like Karen Hao and Billy Perrigo.

Update 3/31/23: The listed authors of the Stochastic Parrots paper have put out a joint statement responding to the open letter.


Emily M. Bender

Professor, Linguistics, University of Washington // Faculty Director, Professional MS Program in Computational Linguistics (CLMS). faculty.washington.edu/ebender