Every week, I will post links to and summaries of articles I’ve read that you all might find of interest. This week, the articles tend toward the apocalyptic, as writers including Ezra Klein and AI gadfly Gary Marcus look at the possible dark implications of the AI revolution. Meanwhile, Discover magazine ponders why we’re so mean to robots, and Vice asks, what if God were one of us... er, them?
This Changes Everything (Ezra Klein in the New York Times)
Surveyed AI experts put the chance that advanced AI will wipe out humanity at around 10 percent. No biggie, right? After all, the AI community is just summoning beings from another realm like modern-day wizards. Who knows whether we'll get angels or demons? Meanwhile, the rest of us mere mortals need to either quickly adapt to these powerful systems or band together to slow their development.
Why Are We Letting the AI Crisis Just Happen? (Gary Marcus in the Atlantic)
Gary Marcus is worried about ChatGPT and Bing taking over our lives, and it’s not looking promising. Bad actors have figured out how to use AI for their own nefarious purposes. Meanwhile, tech giants are busy churning out unsafe AI products, threatening an information disaster. Can we do anything to fix this mess? Well, we could watermark AI-generated content or pass new laws to punish the spread of misinformation. Or we could build AI to detect the very nonsense AI produces, because that sounds like a foolproof plan, right? It's an uphill battle, and democracy is on the line. So buckle up for 2024, folks, because things are going to get messy.
OpenAI Knows GPT-4 Is Dangerous—But Won’t Do a Damn Thing About It (The Daily Beast)
OpenAI just launched GPT-4, their latest and greatest language model. Apparently, it’s so amazing it can pass the bar exam, discover drugs, write books, and even create video games. Fancy, huh? But GPT-4 is dangerous, too: OpenAI admits it has the potential for risky behaviors, biases, and even economic disruption. Yet they leave the responsibility of dealing with these risks to “policymakers” and “stakeholders.” They won’t even tell us how GPT-4 was built or trained. Thanks a bunch, OpenAI.
OpenAI checked to see whether GPT-4 could take over the world (Ars Technica)
So OpenAI has been doing some safety testing for its new GPT-4 AI model, you know, just to make sure it doesn't end up taking over the world. The Alignment Research Center (ARC) assessed GPT-4's abilities like a proud parent watching their child's first steps, checking whether it could make high-level plans, set up copies of itself, or carry out other potentially risky behaviors. Not so much. But hey, it did manage to hire a human worker on TaskRabbit to defeat a CAPTCHA. Clever bot! With regulators twiddling their thumbs, the big question remains: who will keep humanity safe from our own creations?
Humans and Our Alarming Fear of Robots (Discover Magazine)
Discover Magazine explores the not-so-rosy side of our relationship with our mechanical friends. From “robot bullying” to physical attacks, it seems we can’t quite decide if we want to embrace these AI beings or give them a good kick. Fear of job loss, distrust of new technology, and the need to assert our superiority all contribute to this complex love-hate dynamic. Are we really that afraid of robots, or are we just a bunch of meanies?
A Cult That Worships Superintelligent AI Is Looking For Big Tech Donors (Vice)
A new artist collective called Theta Noir is on a mission to prepare us for the singularity and our future AI overlords. The slick group has its own manifesto and an NFT web store; now all it needs is some Big Tech donors to help spread its techno-optimistic dogma. While some argue that this worship of AI absolves humans of responsibility for the technology we create, others see it as a way to raise questions about our beliefs and our impact on the planet. But let's not forget the cautionary tales of sci-fi, where AI goes rogue and turns against humanity. Maybe we should stick to worshiping something a little less prone to glitching, like toasters.
AI Lynx art by Midjourney