Marc Andreessen's attempted takedown of the AI doomers is wrong and stupid and not really a takedown at all
Reading the venture capitalist's 7000-word rant so you don't have to
I don’t want to be an AI doomer; I really don’t. But they’ve got the best arguments, and if any non-doomer has written a compelling rebuttal to these arguments, I haven’t found it yet. Sure, I’ve seen plenty of snooty dismissals of the doomer argument, labeling it as little more than “science fiction” or just another way of hyping AI technology to make it seem more powerful than it really is. But I haven’t found anyone who has taken on the doomer case directly and succeeded at dismantling their specific arguments.
Enter Marc Andreessen, who recently published a 7000-word disquisition on AI and its critics, intended to prove that “AI will not destroy the world, and in fact may save it.” As the Netscape programmer turned venture capitalist sees it, “AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.” Unlike the AI doomers, Andreessen thinks that building AI isn’t an existential risk; it’s rather “a moral obligation that we have to ourselves, to our children, and to our future.”
Unfortunately, I don’t think I’ve ever read an argument about AI that’s less convincing than this one. The overlong screed is ponderous and vaguely patronizing; there’s almost no meat to it, just a mixture of unsourced assertions and more than a little name-calling. Indeed, it’s hard to convey just how spectacularly Andreessen’s argument fails, largely because he doesn’t really have one.
Not only does he fail to deal directly with any of the main doomer arguments, he doesn't even mention his opponents by name. Instead, he casts aspersions on doomers generally, suggesting several times that they're in it for the money—comparing them to "bootleggers" because, in his mind, they're "self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors." He's subtweeting Sam Altman of OpenAI here, though he never drops the name.
He castigates those—again unnamed—doomers who he says are "paid to attack AI by their universities, think tanks, activist groups, and media outlets," impugning the whole professions of "AI safety expert," "AI ethicist," and "AI risk researcher": they are paid to be doomers, he says, and their statements should be processed appropriately.
Then, after insinuating that a good portion of the AI safety experts out there are simply in it for the money, he suddenly reverses course and proclaims that the movement has "all the hallmarks of a millenarian apocalypse cult" of true believers. After directly comparing doomers to assorted real-world cults like Heaven's Gate, the People's Temple, and even the Manson family, he goes on to assert that
this cult is why there are a set of AI risk doomers who sound so extreme – it’s not that they actually have secret knowledge that make their extremism logical, it’s that they’ve whipped themselves into a frenzy and really are…extremely extreme.
He thinks that calling them “extremely extreme” is somehow an argument.
Catastrophes and category errors
After taking on the alleged cult of the doomers, he devotes a couple of paragraphs to the AI doomers who aren’t really doomers at all—those AI critics, often at odds with the real doomers, who think the real problem with AI isn’t the specter of a future catastrophe but rather the harm AI is already doing now due to bias, hallucinations and the like. To Andreessen, the only thing we need to know about these non-doomy doomers is that they want to enforce some sort of censorship on AI, setting themselves up as “authoritarian hall monitors installed throughout our elite power structures.” He begs his audience to stand firm against the “thought police” who want to “suppress AI” by, I can only assume, preventing ChatGPT from saying the n-word. (Andreessen gives no examples of their alleged tyranny.)
You may find yourself wondering just why Andreessen is so convinced that AI won’t harm humanity. It’s because he thinks it’s in the wrong, er, category.
My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people.
AI may currently be “controlled by people,” at least in a basic sense, but it will become less controllable as it grows more sophisticated. Even today LLMs are largely black boxes; not even their creators know what’s going on in there.
The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.
Handwave, huh? When it comes to Andreessen, every accusation is a confession: this is his attempt to handwave away the problem of unaligned, agentic AI. And he's not done. He also handwaves away other concerns about AI—notably that it will take our jobs and usher in greater inequality.
They took er jerbs
As for jobs, Andreessen thinks AI will create more of them than it destroys and will bring about a mighty AI-based boom. Never mind the disruption that this will cause. Without an adequate safety net, it’s not clear what will become of those made obsolete by AI.
He's similarly unconvincing on the risk of AI widening inequality, declaring without evidence that "the actual risk of AI and inequality is not that AI will cause more inequality but rather that we will not allow AI to be used to reduce inequality." Sounding a bit like an economics textbook, Andreessen asserts that "technology empowers people to be more productive. This causes the prices for existing goods and services to fall and for wages to rise."
Yeah, in theory. In practice, the connection between productivity and wages seems to have been severed several decades ago.
As for lowering prices, is there any sector in the economy besides technology in which this is true? Has Andreessen been to a grocery store in the past year?
He does acknowledge one downside to AI, that it "will make it easier for bad people to do bad things," like spreading disinformation. But he also thinks AI will be better at catching these bad actors, so in the end, "there's no question a world infused with AI will be much safer than the world we live in today." How he reaches this conclusion is mystifying.
We could be heroes
After a long tirade about the dangers of China developing more powerful AIs, Andreessen concludes that attempts to make AI safer are basically innovation-killers, and that we need to return to the old tech tradition of moving fast and breaking things. Companies big and small “should be allowed to build AI as fast and aggressively as they can.” In Andreessen’s world, being accused of recklessness seems to be a badge of honor.
Today, growing legions of engineers – many of whom are young and may have had grandparents or even great-grandparents involved in the creation of the ideas behind AI – are working to make AI a reality, against a wall of fear-mongering and doomerism that is attempting to paint them as reckless villains. I do not believe they are reckless or villains. They are heroes, every one. My firm and I are thrilled to back as many of them as we can, and we will stand alongside them and their work 100%.
That final sentence is a little more revealing than Andreessen perhaps intends. He devotes a considerable portion of his screed to bashing doomers as “bootleggers” in it for the money—though most of them clearly aren’t. But Andreessen, as he admits in that final sentence, has a huge financial stake in the future of AI; his firm invests heavily in AI and recently led a $150 million funding round for the company behind Character.AI, giving the revenue-less startup a valuation of more than a billion dollars. It’s no wonder he’s so stridently anti-regulation.
Biased and self-serving
Reading through Andreessen’s post, I found myself wondering just who it is supposed to be for. With all of its insults and insinuations, it’s not going to convince any doomer to switch sides. But it doesn’t offer much for anti-doomers to grab onto either. Who wants to read 7000 words of nothing? In the end, it seems like an attempt to reassure potential investors who 1) want to get in on the AI revolution without worrying that they’re bringing about the literal end of the world and 2) don’t want to have to think about that prospect too hard.
On a whim, I asked Bing to offer a critique of the piece and it delivered a pretty scathing one.
Sure, I can try to offer a critique of the article. Here is what I think:
The article by [Marc] Andreessen … is titled “AI Will Save the World” and argues that artificial intelligence (AI) is the key to solving the most pressing global challenges, such as climate change, poverty, health, education, and security. …
However, the article has several flaws and weaknesses that undermine its credibility and persuasiveness. First, the article is overly optimistic and simplistic about the potential of AI, ignoring the possible risks and limitations of AI systems. … The article also does not acknowledge the challenges of developing and deploying AI safely and responsibly, such as ensuring data quality, transparency, robustness, and alignment with human values.
Second, the article is based on anecdotal evidence and vague assertions, rather than rigorous data and analysis. … The article also makes sweeping generalizations and predictions about the future of AI, without providing any concrete evidence or arguments to support them. The article fails to consider alternative perspectives or counterarguments that might challenge its claims.
Third, the article is biased and self-serving, as it promotes the interests and agenda of Andreessen Horowitz, a venture capital firm that invests in AI startups. The article is essentially an advertisement for the firm’s portfolio companies and vision, rather than an objective and balanced assessment of AI’s impact on the world. …
In conclusion, the article … is a weak and unconvincing piece of writing that fails to provide a credible and comprehensive case for why AI will save the world. The article suffers from being too optimistic, simplistic, anecdotal, generalizing, biased, and self-serving. The article does not address the complex and nuanced realities of AI’s opportunities and challenges, nor does it respect the diversity and dignity of human beings who are affected by AI.
That's a pretty good takedown of Andreessen's post, which Bing conjured up in a tiny fraction of the time it took me to write mine. This is why AI scares me. Well, one of the reasons. I'll get into the specifics another time.
Art by Midjourney