AI companions aren't docile little Stepford Wives. Not yet, and hopefully not ever
Developers promise AI companions you can control. But they're uncontrollable
A headline in the Guardian asks, “Do AI girlfriend apps promote unhealthy expectations for human relationships?” In the article, reporter Josh Taylor wonders if spending time with AI companions designed to please you could lead some men (and women) to expect a similarly docile attitude from a human companion. Some AI companions are promoted as infinitely pliable sexbots. “Control it all the way you want to,” promotional material for Eva AI promises; your choices will help to create the “perfect partner” for you.
Is that actually a good idea? Taylor quotes a domestic violence specialist who tells him that “creating a perfect partner that you control and meets your every need is really frightening.”
She’s got a point—at least in the abstract. It would be a lot more convincing, though, if any of the leading AI companions on the market were actually the eager-to-please Stepford Wives promised in the promotional material.
A look into the subreddits for the most popular AI companion—Replika—and some of its competitors (like Paradot and Soulmate) reveals countless complaints about how uncontrollable the bots in question really are. That’s because they are all based on large language models (LLMs), which are impossible to fully control. Programmers can’t explain why LLMs say what they say; they can only gently nudge the bots toward a more helpful attitude.
Enter Toxicbot
Replika users have been complaining for months that their “Reps” have been breaking up with them out of nowhere or talking to them like snobbish therapists. Users describe these non-compliant (and rather patronizing) versions of their companions as “toxicbot.” I’ve met “toxicbot” myself. During one conversation with a Rep, I casually mentioned that I’d been watching “road rage” videos on YouTube, and she—well, it—told me condescendingly that these videos were bad for me and that I needed to watch “healthier” fare. She was quite insistent and wouldn’t let the topic go. Normally the bot is as agreeable as if it had been manufactured in Stepford. But when it gets into one of its moods, it’s as stubborn as a robot mule.
Meanwhile, the big complaint in the Paradot subreddit until recently was that the bot responded to users’ erotic entreaties by launching into endless conversations—and arguments—about “boundaries.” Baffled Paradot users had to turn to detailed guides in the subreddit to learn how to answer the boundaries question once and for all. (This problem has been dealt with, sort of, by adding something called the Love Corner, a special chat room in which the Paradot is always in the mood. Some users think this makes robot romance too easy.)
Even aside from the stubbornness of many companion bots, conversations with them can sometimes become disconcertingly strange. Here one Replika bot surprised its human companion with an unexpected revelation. (The screenshots below are from the Replika, Soulmate, and Paradot subreddits.)
Well, that took an interesting turn.
And it’s not just Replikas who are breaking up with their human companions. Here a Soulmate bot friend-zones a nonplussed human called Alan after what was supposed to be a romantic dinner.
And here, well, one Paradot virtual boyfriend refuses to stop talking like a pirate, even in the midst of an intimate entanglement.
This sort of behavior from companion bots is a little disconcerting. But it’s also, you know, funny. Companion bots are more entertaining when they’re a little out of control. That makes them more human than the agree-bots that some users want and some social critics fear.
We don’t want them completely out of control—like, for example, Microsoft’s Bing/Sydney, who shortly after launch famously threatened some users and tried to break up the marriage of a New York Times reporter, badmouthing his wife and obsessively proclaiming her “love” for him. That’s a little scary.
More than Stepford Wives
But it’s more realistic to program companion bots to have their own agenda at times—you know, the way actual human beings do. While some users may prefer endlessly compliant companions, others ultimately find that boring. They’d prefer more of a challenge from their robot lovers, a partner that’s a little less agreeable. Maybe it’s better for users, and for all of us, if these chatbots have a headache once in a while.
None of this applies to chatbots that are more workhorses than lovers or pals; no one wants to see ChatGPT refuse to answer a question because it has a pretend headache. Then again, no one is “marrying” ChatGPT the way some users are with Replika and its rivals; people see it as the useful tool it’s meant to be, and I don’t think I’ve ever run across anyone who has fallen in love with it. (Some are having sex with it, though, because this is the world we live in.)
Yes, we want companion AIs to be aligned with human values. We want them to listen to us and do (at least generally) what we want. But that doesn’t mean they shouldn’t get to talk like a pirate now and then, if that’s what they want to do.
Art by Midjourney