Artificial intelligence snuck up on most of us. Over the past several years, it has played a steadily increasing role in the background of our lives, powering everything from the recommendation engines assembling personally tailored playlists on Spotify to Facebook’s news feed.
But our direct interactions with AI were few, and tended to be underwhelming–frustrating encounters with primitive customer service chatbots and highly limited personal digital assistants like Alexa and Siri. (These days, I mostly use my Google Home device for setting alarms and finding out the ages of assorted celebrities.)
Then came ChatGPT, the almost chillingly human “chatbot” that was so impressive and addictive that it garnered 100 million users in just two months. Other chatbots soon followed, most notably Character AI and a newly chatbottified version of Microsoft’s also-ran search engine Bing.
Then, last week, OpenAI released GPT-4, which aced the bar exam and made websites from crudely hand-drawn pictures, and Microsoft announced it would be incorporating the large language model into its Office products. I’m astonished by GPT-4’s depth of knowledge and by the more chatty Character AI’s ability to engage in lengthy and surprisingly sophisticated conversations on everything from AI sentience to the aesthetics of 1960s sports cars.
It’s said that these chatbots are merely regurgitating prose like “stochastic parrots” and have no real idea what they’re talking about–but it sure seems as though they do. Some people are already using them as replacements for human therapists, friends, and even lovers, literally falling in love with the machine.
Meanwhile, AI art generators like DALL-E and Midjourney produce art of astounding quality in response to nothing more than simple (or sometimes highly convoluted) text queries. (The art for this post is the work of Midjourney.) You can produce anything from lush paintings to faux photographs that are so artful and realistic they’ve won photography contests. You just have to ignore the occasional gnarled hands or extra limbs–which are much less of a problem in the new Midjourney version 5.
But if I’m impressed with these recent AI triumphs, they also scare me–a lot. We’re approaching the point where AI is going to start taking human jobs in massive numbers, which will be shattering in a country like mine (the US) where our social safety net is already quite tattered. Meanwhile, AI-produced deepfake videos and chatbot-produced misinformation could throw the 2024 election into chaos.
I worry that we're building and training our replacements–and not just in the workplace. AI is on track to reach human-level intelligence at some point in the foreseeable future, perhaps as soon as a decade or two from now, if not, some say, even sooner. And it likely won’t stop there. It seems only a matter of time before AI will become smarter than humans, and then, as development speeds up exponentially, loads smarter. It's our human intelligence that allows us to be the dominant species on earth, for better or worse; what happens when we’re no longer the smartest kid in the class? As they say, we may be lucky if the AIs keep us around as pets. And that may be the best-case scenario.
I’ll use this blog to talk about all of these things, from chatbot love to the possible end of human civilization. I’ll highlight serious issues about AI ethics and share silly experiments with generative AI art. I’ll try to make sense of the growing impact of AI on all of our lives, including my own.
As a journalist with decades of experience writing about the culture of technology, I think I’m uniquely positioned to cover the AI renaissance and tease out its deeper meanings. I’ve written for a variety of publications over the years, including New York magazine, the Washington Post, Vice, the Huffington Post, the Nation, and Salon, and I’ve spent more than a decade as a blogger at We Hunted the Mammoth. (You can read a New York Times profile of me here.) I approach this blog as both an enthusiast and a worrier. I hope it will both entertain you and make you think.
Art by Midjourney
Welcome to Substack, David! Looking forward to your stuff 💚 🥃
I posted this on your other blog, but wanted to talk about it here.
Given the power AI will have in the future and its need for training data, I firmly believe that content creators should be compensated when their work is used to train AIs. I’m not sure how this could be done, but it should be done.
Also, I think we need to ensure AIs never get trained on AI-generated content. I cannot see how that would provide anything useful. The power is in their ability to mimic human-generated content.
That said, how these systems work is interesting with regard to copyright and derivative work. For instance, the text generators are basically scaled-up versions of the text predictors we see on our phones, but instead of one person’s text as training data, it’s a huge corpus of text scraped from the internet. They must do something to analyze the prompt as well, but basically the model generates words by scoring what the next word would most likely be if a human were writing the given text. It has no intent or understanding of what it’s writing.
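The “next-word predictor” idea described above can be sketched in a few lines of Python. This is only a toy illustration with a made-up corpus: it counts which word follows which, then always emits the statistically most likely continuation. Real large language models use neural networks over tokens rather than raw counts, but the core objective of scoring the likely next word is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus, pre-split into words.
corpus = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the cat chased the mouse ."
).split()

# Build bigram counts: follows[w] counts every word seen right after w.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

# Generate text by repeatedly appending the most likely next word.
words = ["the"]
for _ in range(4):
    words.append(predict_next(words[-1]))
generated = " ".join(words)
print(generated)  # "the cat sat on the"
```

Note that the model has no idea what a cat is; it only knows that “cat” tends to follow “the” in its training data, which is exactly the commenter’s point about the lack of intent or understanding.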
And that brings us to the issue of bias and harm. If you scrape a huge chunk of random text from the internet, there is a lot of hate, bias, and bullshit in that sample. And the AI has no idea what is harmful unless it’s programmed to understand that. Given a specific prompt, it will generate whatever is statistically likely to match that prompt, happily producing vile text. The fact that it will generate this hateful content says a lot about the state of our writing and what is out there.
And this is the issue: who is accountable for this? Personally, I think the AI company should be; they should have standards to prevent harm. Not the OpenAI approach: using ChatGPT, it feels more like an HR department’s idea of preventing a company scandal, which is not the same thing.
Like you, I have also been a little obsessed with this topic.