Critics are battering that "AI Pause" letter. But it's still worth signing
The critics make some good points about immediate and existential risks, but we desperately need regulation of AI
Another day, another warning of an AI apocalypse. This time, more than a thousand people, including AI experts, tech gurus, and scientists, signed an open letter asking for a pause on developing AI technologies more powerful than OpenAI's GPT-4. You know, just so we can figure out if these superintelligent machines will destroy us all or, at the very least, steal our jobs and poison our political discourse.
These language models are apparently getting too good for their own good, and the letter, penned by the folks at the Future of Life Institute, wants a "public and verifiable" pause for at least six months. And if the tech companies won’t do it, the letter suggests governments step in and institute a moratorium. Because we all know how eager politicians will be to shut down a chunk of an industry that gives us jobs and gives them dollars.
Look, the letter isn’t perfect. But I signed it. Among the other admittedly more famous signatories, we've got AI giants like Yoshua Bengio and cat-cooking opponent Stuart Russell, Apple co-founder Steve Wozniak, AI-hype-mangler Gary Marcus, and (sigh) Twitter CEO Elon Musk. As loath as I am to agree with that narcissistic manbaby about anything, he's right on this one: AI development has become a reckless, unregulated race, with ethical and safety concerns falling by the wayside.
“Nonhuman minds that might replace us”
The letter rightly raises concerns about the current breakneck pace of AI development, driven by the almighty dollar, and the possible threats to our jobs and our democracy, such as it is. But it also waxes apocalyptic about the “potentially catastrophic” longer-term consequences.
Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
Uh, maybe not?
Such decisions must not be delegated to unelected tech leaders.
Says the letter written and signed by, er, a bunch of unelected tech leaders.
Will the letter have any consequences besides generating some headlines and a lot of chatter in Silicon Valley and on Twitter? Who knows. But at least we can say we tried, right?
The critics gather
Well, not so fast, some critics say. Slate is leery of Musk’s involvement in the letter, suggesting that
by hyping the entirely theoretical existential risk supposedly presented by large language models (the kind of A.I. model used, for example, for ChatGPT), Musk is sidestepping the risks, and actual damage, that his own experiments with half-baked A.I. systems have created.
You may have noticed that this is not an actual substantive criticism of the letter itself, and the article doesn’t really offer one. Does it really matter that Musk may have had ulterior motives for signing the thing?
A hot mess of AI hype?
But other critics deliver the substance. Some say that all the doom talk is distracting us from real, present-day concerns with AI. AI critic Emily M. Bender complains that the letter is "just dripping with AI hype." “[T]he risks and harms have never been about ‘too powerful AI,’” she tweets.
Instead: “They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).”
These are all genuine concerns, but I don’t think they’re good reasons not to sign the letter. Indeed, I suspect that a pause would give some political space for some of these concerns to be addressed through public hearings and possible legislation.
The end of the world?
Still other critics think the doom talk isn’t nearly doomy enough. Everyone’s favorite apocalyptic AI safety guru Eliezer Yudkowsky writes on Time.com that the letter doesn’t go nearly far enough given the dangers of unchecked AI. While in his mind, a “6-month moratorium would be better than no moratorium,” he’s convinced a pause in AI development “isn’t enough: we need to shut it all down.” With typical bluntness, he writes:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
Damn. Say what you really mean, why don’t ya.
We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.
I wish I could dismiss Yudkowsky’s concerns out of hand, but I can’t. There’s a chance he’s right.
And while I don’t know about shutting it all down, I certainly agree with him that a six-month pause isn’t nearly enough time to deal with the real dangers we face from AI, near and long term, and our utter lack of preparedness for them. But my biggest criticism is that the pause simply isn’t going to happen; tech companies like OpenAI won’t agree to it, and politicians won’t mandate it. The letter is little more than a symbolic gesture.
But like I said, I signed. It’s better than doing nothing. As Marcus, who signed what he admits is an “imperfect” letter that he thinks is full of AI hype, puts it on his Substack,
doing nothing is truly the most foolish option. …
None of the top labs actually is remotely transparent, governance basically doesn’t exist yet, and there are basically no safeguards actually in place.
This doesn’t mean we are doomed, but we do need to think hard, and quickly, about what proper measures look like, just as we have done for medicine, aviation, cars, and so on. The idea (which I have actually heard expressed) that AI should be exempt from regulation is absurd.
I think the risks of AI go well beyond the risks associated with “medicine, aviation, cars, and so on.” All the more reason to regulate. All the more reason to pause.
Art by Midjourney
I would be happy with a law that says any potentially malevolent supercomputers must be powered by a single plug on a 100-yard extension cable that's plugged in in a room with no CCTV or robotic arms.
I think of it this way: we're going to end up "replaced" sooner or later (possibly sooner, given the way things are going right now) because there's no special force of the universe that stops us from going extinct, so why not ensure that we have a say over what exactly our successor might be? As for the natural ecosystem, there's an excellent chance that we pushed it past the point of recovery decades ago and just haven't realized it yet.
Hearings and legislation won't do jack squat, for the simple reason that the government will only intervene if it wants a piece of the pie. Addressing the issues Bender brings up will require nothing short of people taking power into their own hands through direct action. I can't say what that'll look like; that's up to the people undertaking said direct action to decide.
In any case, there's very little a ChatGPT-type AI could do to us that we haven't already done to ourselves dozens of times. Not saying that it should go completely unregulated, but everyone should pause for a moment to take stock of what AI as it is right now can and cannot do (to say nothing about how most of the malicious things it could do are less the fault of the AI and more the fault of its users).
Regarding Yudkowsky: he's never actually studied AI. All of his knowledge there is self-taught, and more often than not whoever teaches themselves has a fool for a teacher. Need I remind you of the tempest in a teapot that was Roko's basilisk?