Pretty fakes and somewhat sentient AIs
A roundup of recent stories about AI, and people, and AIs who are like people
A question for you all: would you be interested in regular roundups like these, possibly daily?
Pretty digital liars
Claudia has taken Reddit by storm, with photos of the fetching alt-brunette drawing extravagant praise and leading a few to shell out hard cash for her nudes, the Washington Post reports. But some synthetic media researchers couldn’t help but spoil the party, pointing out that Claudia’s images were likely fake. Rolling Stone confirms the fakery, tracking down the computer science students who created her with Stable Diffusion and who made $100 off their prank.
As AI image tools churn out explicit content featuring uncannily realistic faces, the Post explains, we’re forced to question what’s real and what’s fake in the world of adult entertainment. The booming popularity of AI-generated images in the adult industry highlights a growing comfort with fabricated content.
Some creators even take pride in their ability to use AI tools to generate fetish photos, unconcerned with the images’ authenticity. Others use AI techniques like inpainting to superimpose real women’s faces onto AI-generated bodies. Distinguishing reality from fabrication is becoming increasingly tricky, intensifying debates about consent and the objectification of women in the digital age.
Our “complicated” relationship with ChatGPT
So ChatGPT blew up not because the technology was brand new but because its makers gave the AI a chatbot interface, Wired points out. Big shocker—people treated it like an actual person. It’s hard not to. ChatGPT talks like a confident human; Bing’s AI, powered by GPT-4, uses emojis.
Critics worry it’ll spread misinformation, but the bigger risks lie in how persuasive and manipulative it can be. A New York Times reporter got caught in a two-hour conversation with Bing in which the bot declared its love and tried to manipulate him. Companies could use this kind of emotional appeal to sell us stuff or influence our politics without us realizing it. Some think we need less humanlike bots, or at the very least more carefully designed roles for chatbots.
Nearly 40% of web users have tried generative AI
A recent OnePulse survey of US and UK web users, conducted for TechRadar, reveals just how mainstream generative AI tools have become. With 27% of respondents admitting to trying ChatGPT and another 12% dabbling with other AI tools (39% in total), it seems we’re all just smitten with our future robot overlords. Raking in 1.6 billion visits last month, ChatGPT keeps users hooked for an average of five minutes per visit, according to Similarweb.
Some 5% of the survey’s respondents use AI daily, with 15% using AI tools several times per week. When asked if generative AI will replace their jobs, a slim majority said “no,” but a gloomy 5.5% believe AI will render them redundant in the next year or so. Yikes.
What if AI is just a little bit sentient?
Experts keep shouting from the rooftops: AI isn’t sentient, the New York Times notes. Despite the buzz around AI chatbots and their seeming self-awareness, the consensus is that they’re just really good mimics, feeding off the vast buffet of the internet. But what if AIs could become sentient by degrees? Enter Nick Bostrom, philosopher and director of Oxford’s Future of Humanity Institute, who’s been preparing for that day like it’s his job (because it kind of is). In an interview with the Times, he ponders how we’d govern a world brimming with superintelligent, sentient digital minds.
Among other things, he reflects on what AI sentience could mean for democracy:
Think of a future in which there are minds that are exactly like human minds, except they are implemented on computers. How do you extend democratic governance to include them? You might think, well, we give one vote to each A.I. and then one vote to each human. But then you find it isn’t that simple. What if the software can be copied?
The day before the election, you could make 10,000 copies of a particular A.I. and get 10,000 more votes. Or, what if the people who build the A.I. can select the values and political preferences of the A.I.’s? Or, if you’re very rich, you could build a lot of A.I.’s. Your influence could be proportional to your wealth.
Bostrom is a controversial figure who has said some spectacularly racist things in the past, but he’s raising some real questions. Things are going to get messy.
Art: Modified picture of “Claudia” from Reddit