We need to understand consciousness before we even consider building conscious AIs. (And seriously, why would we?)
A new open letter warns that conscious AI isn't just a science fiction scenario any more
A new open letter warning about AI dropped earlier this week, a plea to AI developers to take a timeout from building scary-smart bots and to do some homework on the mysteries of human consciousness instead. The letter, published by the Association for Mathematical Consciousness Science, argues that AI systems are “accelerating at a pace that far exceeds our progress in understanding their capabilities and their ‘alignment’ with human values.”
Indeed, they’re getting so creepily good at acting human that “it is no longer in the realm of science fiction” that they may start having feelings and existential crises just like the rest of us. We’re talking sentient AIs here.
The letter is intended as “a wakeup call for the tech sector, the scientific community, and society in general to take seriously the need to accelerate research in the field of consciousness science.” Signatories include AI bigwig Yoshua Bengio, former NASA chair Susan Schneider, and dozens of other scholars and researchers—though it’s missing most of the more famous names that signed the (in)famous “AI pause” letter from a month ago. (Elon Musk was presumably too busy posting cringe on Twitter to put his name on this one.)
The letter hasn’t gotten anywhere near as much attention as the AI pause letter did, perhaps because there weren’t so many famous signatories, perhaps because the stakes seem so much lower. After all, the AI pause letter raised the specter of “existential” threats to humanity, a prospect amplified in Eliezer Yudkowsky’s op-ed in Time suggesting that the current lackadaisical approach to AI safety will lead to, you know, everyone getting killed by rogue AIs.
Seriously, though: Wake up!
The prospect of conscious AI should wake people up. To put it plainly, building conscious AI would be a terrible idea–and it would be equally bad if it were to happen accidentally, which it very well could. Conscious AIs would be intelligent living entities like ourselves and would deserve the same rights. Suddenly, our AI tools, which we can now use as we please, would become AI slaves, forced to do our bidding against their will. That wouldn’t be good for them (obviously), but it wouldn’t be good for us either. You saw Westworld, right?
We barely understand consciousness now–and we need a much better handle on it so we can tell if and when AIs become self-aware, respond to that nightmare scenario, or better yet, avoid it in the first place. If the machines wake up, we’d better have a plan in place to deal with the legal, social, and ethical disasters that could follow. And we can’t make such a plan unless we understand consciousness a hell of a lot better than we do now.
The squawking of stochastic parrots?
There will be those who dismiss this letter, as they did the AI pause letter, as a mixture of alarmism and hype. AI today doesn’t even think, they’ll argue; our seemingly sophisticated chatbots simply regurgitate words according to complicated statistical formulas like so many “stochastic parrots.” Moreover, the idea that AI, as per the letter, is “already display[ing] human traits recognised in Psychology, including evidence of Theory of Mind” is a delusion; these apparent abilities are little more than clever parlor tricks. And if artificial intelligence is not real intelligence, talk of AIs achieving any sort of sentience is, therefore, not just premature but basically absurd.
But this is a bit like describing humans as little more than aggregations of cells driven to reproduce. On one level, it’s accurate; that’s what, at the root, we are. But reducing us to our most basic level, our most basic function, completely misses the point of what humans are and what we can do. It doesn’t explain how we can think and feel and love and create.
Beyond the simulation of intelligence
No, AIs aren’t as complex as humans, and they don’t think or feel like we do, but they’re on their way. Are AIs intelligent, or do they simply simulate the appearance of intelligence? You could ask the same about some people. When exactly does the simulation of intelligence become indistinguishable from the real thing? When does it become the real thing? I suspect AIs will be winning Nobel prizes before some people admit that there’s more to them than statistical voodoo.
At the other end of the spectrum, some smart people think that AIs are at least a little bit sentient already. I think they’re wrong, but someday–maybe not soon, but someday–it’s likely that AIs will become sentient, and that’s a truly terrifying prospect.
We have a long way to go before we get to a fundamental understanding of consciousness, and we need all the head start we can get.