[Ed. note: Spoilers ahead for Pluribus episode 3.]
A lot happens in episode 3 of Vince Gilligan’s science fiction series Pluribus, but the plot could be summed up simply as: “Carol (Rhea Seehorn) attempts to find the limits of what the hivemind is willing to do for her, and discovers there are none.” No matter what she asks for, humanity’s collective consciousness says yes, even when it really shouldn’t. The hivemind is also sycophantic, constantly telling Carol how great she is, and how much it loves her.
Watching the latest episode of Pluribus felt weirdly familiar. Then I realized: The way Carol interacts with the hivemind is almost exactly what it’s like to use ChatGPT. Constant positive reinforcement and the innate need to always say yes are both traits that define OpenAI’s popular generative AI chatbot.
Was this intentional, or just a weird coincidence? Was Pluribus inspired by Gilligan’s own interactions with AI? I went straight to the source to find out, and got a blunt answer.
“I have not used ChatGPT, because as of yet, no one has held a shotgun to my head and made me do it,” Gilligan tells Polygon. “I will never use it. No offense to anyone who does.”
That said, Gilligan and Seehorn both have some thoughts on why audiences might see Pluribus as a metaphor for AI, even if that was never the point. But before I get to that, let me unpack my argument a bit more thoroughly.
Three particular moments from Pluribus episode 3 made me think of ChatGPT, which I’ve experimented with both for personal use and very lightly in my professional work. (I used to ask it for help coming up with synonyms for overused words, but I’ve reverted to checking the actual thesaurus.)
First, there’s the scene where Carol asks the hivemind for a hand grenade, assuming it will refuse to give her such a dangerous weapon. She’s wrong, and humanity’s collective consciousness races to obtain the deadly device. This backfires when Carol uses the grenade, blowing up part of her own home and severely injuring her “chaperone” Zosia (Karolina Wydra). An agreeable, non-human intelligence saying yes to an irresponsible request that endangers the user… sound familiar?
Second, there’s the conversation between Carol and Zosia that takes place between the grenade’s arrival and the explosion. Carol seems to finally open up to her hivemind chaperone, and invites Zosia into her home for a drink. Their conversation plays more like a human talking to ChatGPT than two people actually conversing. Carol asks, “How do you say cheers in Sanskrit?” Zosia answers instantly. As they continue to drink, Zosia happily explains the etymology of the word “vodka,” exactly the kind of random factoid an AI agent might spout off.
And third: Later, after the grenade goes off and Zosia is still in recovery, a random character we’ve never seen before, wearing a DHL delivery uniform, approaches Carol in the hospital waiting room. Speaking for the hivemind, he explains that Zosia will survive, despite some blood loss. Carol asks, “Why would you give me a hand grenade?” and he answers, “You asked for one.”
“Why not give me a fake one?” she replies.
The man looks confused. “Sorry if we got that wrong, Carol,” he says.
“If I asked right now, would you give me another hand grenade?”
“Yes.”
The conversation continues from here as Carol attempts to come up with a weapon so recklessly dangerous the hivemind would refuse to obtain one for her. A bazooka? A tank? A nuclear bomb?
The man bristles at that last one, but when he’s forced to answer whether he’d obtain a nuke for Carol, he replies, “Ultimately, yes.”
This all seems like a maddeningly accurate example of what talking to ChatGPT can often feel like. These tools are designed not to be accurate or ethical, but to give the user a satisfying answer. As a result, they frequently come across as sycophantic and cloying (and even harmful in some cases). And if ChatGPT does make a mistake or hallucinate a fact and you catch it, it will happily apologize and try to move forward, as if it hadn’t just given you good reason to mistrust whatever it says next.
Carol’s experience with the hivemind feels eerily similar. This is an intelligence that wants to make her happy above all else, even if that means doing something extremely dumb, like giving her access to a grenade or a nuclear bomb.
But Vince Gilligan says that wasn’t what he was thinking of when he wrote Pluribus. In fact, when he first came up with the idea for the series, ChatGPT didn’t even exist.
“I wasn’t really thinking of AI,” he says, “because this was about eight or 10 years ago. Of course, the phrase ‘artificial intelligence’ long predated ChatGPT, but it wasn’t in the news like it is now.”
Still, Gilligan says that doesn’t invalidate my theory.
“I’m not saying you’re wrong,” he continues. “A lot of people are making that connection. I don’t want to tell people what this show is about. If it’s about AI for a particular viewer, or COVID-19 (it’s actually not about that, either), more power to anyone who sees some ripped-from-the-headlines type thing.”
Seehorn takes it one step further, suggesting that the beauty of Gilligan’s work is how well its relatable storytelling maps onto whatever subject the viewer might be grappling with in the moment.
“One of the great things about his shows is that, at their base, they’re about human nature,” she says. “He’s not writing to themes, he’s not writing to specific topics or specific politics or religions or anything. But you’re going to bring to it where you’re at when you’re watching.”
Pluribus airs weekly on Apple TV. Episodes 1-3 are streaming now.