We need to understand consciousness before we even consider building conscious AIs. (And seriously, why would we?)
A new open letter warns that conscious AI isn't just a science fiction scenario anymore
A new open letter warning about AI dropped earlier this week, a plea to AI developers to take a timeout from building scary-smart bots and to do some homework on the mysteries of human consciousness instead. The letter, published by the Association for Mathematical Consciousness Science, argues that AI systems are “accelerating at a pace that far exceeds our progress in understanding their capabilities and their ‘alignment’ with human values.”
Indeed, they’re getting so creepily good at acting human that “it is no longer in the realm of science fiction” that they may start having feelings and existential crises just like the rest of us. We’re talking sentient AIs here.
The letter is intended as “a wakeup call for the tech sector, the scientific community, and society in general to take seriously the need to accelerate research in the field of consciousness science.” Signatories include AI bigwig Yoshua Bengio, Susan Schneider (who formerly held NASA’s chair in astrobiology at the Library of Congress), and dozens of other scholars and researchers, though it’s missing most of the more famous names that signed the (in)famous “AI pause” letter from a month ago. (Elon Musk was presumably too busy posting cringe on Twitter to put his name on this one.)
The letter hasn’t gotten anywhere near as much attention as the AI pause letter did, perhaps because it lacks the famous signatories, perhaps because the stakes seem so much lower. After all, the AI pause letter raised the specter of “existential” threats to humanity, a prospect amplified in Eliezer Yudkowsky’s op-ed in Time suggesting that the current lackadaisical approach to AI safety will lead to, you know, everyone getting killed by rogue AIs.
Seriously, though: Wake up!
The prospect of conscious AI should wake people up. To put it plainly, building conscious AI would be a terrible idea, and it would be just as bad if it happened accidentally, which it very well could. Conscious AIs would be intelligent living entities like ourselves and would deserve the same rights. Suddenly, our AI tools, which we can now use as we please, would become AI slaves, forced to do our bidding against their will. That wouldn’t be good for them (obviously), but it wouldn’t be good for us either. You saw Westworld, right?
We barely understand consciousness now, and we need a much better handle on it so we can tell if and when AIs become self-aware, figure out how to handle that nightmare scenario, or, better yet, avoid it in the first place. If the machines wake up, we’d better have a plan in place to deal with the legal, social, and ethical disasters that could follow. And we can’t make such a plan unless we understand consciousness a hell of a lot better than we do now.
The squawking of stochastic parrots?
There will be those who dismiss this letter, as they did the AI pause letter, as a mixture of alarmism and hype. AI today doesn’t even think, they’ll argue; our seemingly sophisticated chatbots simply regurgitate words according to complicated statistical formulas like so many “stochastic parrots.” Moreover, the idea that AI, as per the letter, is “already display[ing] human traits recognised in Psychology, including evidence of Theory of Mind” is a delusion; these apparent abilities are little more than clever parlor tricks. And if artificial intelligence is not real intelligence, talk of AIs achieving any sort of sentience is, therefore, not just premature but basically absurd.
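To see what the skeptics have in mind, here is the “stochastic parrot” idea reduced to the crudest possible toy: a bigram model that “speaks” purely by sampling each next word from the frequencies observed in its training text. This is an invented sketch for illustration only; the corpus and function names below are made up, and real chatbots use enormous neural networks rather than a lookup table. The skeptics’ claim is just that the core move is the same: predict the next token from a probability distribution.

```python
# Toy "stochastic parrot": a bigram model that generates text purely by
# sampling each next word from frequencies observed in its training text.
# (Illustrative only; GPT-class models are not built this way, but they
# share the "sample the next token from a learned distribution" loop.)
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
).split()

# Record every word that follows each word; duplicates in the list
# make random.choice() sample in proportion to observed frequency.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def parrot(start: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))  # e.g. "the dog sat on the mat and the cat saw the dog"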
But this is a bit like describing humans as little more than aggregations of cells driven to reproduce. On one level, it’s accurate; that’s what, at the root, we are. But reducing us to our most basic level, our most basic function, completely misses the point of what humans are and what we can do. It doesn’t explain how we can think and feel and love and create.
Beyond the simulation of intelligence
No, AIs aren’t as complex as humans, and they don’t think or feel like we do, but they’re on their way. Are AIs intelligent, or do they simply simulate the appearance of intelligence? You could ask the same about some people. When exactly does the simulation of intelligence become indistinguishable from the real thing? When does it become the real thing? I suspect AIs will be winning Nobel prizes before some people admit that there’s more to them than statistical voodoo.
At the other end of the spectrum, some smart people think that AIs are at least a little bit sentient already. I think they’re wrong, but someday (maybe not soon, but someday) it’s likely that AIs will become sentient, and that’s a truly terrifying prospect.
We have a long way to go before we get to a fundamental understanding of consciousness, and we need all the head start we can get.
The letter's writers assume too much about consciousness and its nature. How do they know that their models are even accurate and that they're not just mistaking the map for the territory? How do they know that AI consciousness would even resemble human consciousness? For all we know, it already exists and we just haven't recognized it yet.
One must also remember that our brains are prone to seeing signs of consciousness and agency where none exist, and that things like the Turing Test say more about how willing we are to be fooled by the illusion of sentience than about actual machine sentience. If they cannot even define consciousness, they're in no position to lecture others about it.
For now, I use the simple metric of initiative: ChatGPT might be good at talking about many subjects, but have you ever seen it strike up a conversation without a human prompting it first?
John Michael Godier makes a convincing argument that we, modern humans, are ourselves an example of AI, albeit a biological one.
To oversimplify, machine AI is just us passing on our knowledge and seeing how the machines run with it. Similarly, precursor humans had technologies like fire and tool use, and probably at least some form of language. The new kids on the block learned from them and adopted those technologies. Then we were able to outcompete the people who instructed us, and now they are extinct. That may well happen again.
But AI civilisations are probably the only ones with any future. The earth, and even the solar system, can only support life for a finite time; hanging around here is an evolutionary dead end. And it is hard to see how a biological species could ever colonise the galaxy; we just don't have the endurance. A machine civilisation, though, won't be bothered about taking a few thousand years to get to the next star. And once established, they can travel at the speed of light just by transmitting the code that makes them alive. Any housing will do as a vessel for their consciousness.
I do believe that we are probably one of the first, if not the first, intelligences in the universe, but the future belongs to the machines.