Experiential black holes
It's Halloween, so here's something creepy. Are there experiences so compelling that a person can't escape?
When the historians find us we’ll be in our homes,
plugged into our hubs,
skin and bones.
A frozen smile on every face,
as the stories replay.
This must have been a wonderful place.
What if there are experiences so compelling they act as mental black holes that a person cannot escape?
This is not a new idea. In a 1974 thought experiment, philosopher Robert Nozick described “the experience machine” - a device that, when connected to a human being, will induce more pleasure than it is possible to obtain through normal life. In his 1996 novel Infinite Jest, David Foster Wallace describes a movie so intensely entertaining that its unfortunate viewer is left broken, able to do nothing more than watch the film repeatedly until they die.
But there’s a sensation in the air: a dark suspicion, a growing fear. There are signs, if you know where to look, that the black hole described by theory may be finally starting to form.
Rats
In 1965, a pair of psychology researchers conducted an experiment. A group of rats were wired with electrodes inserted into various regions of their brains. Each rat, deprived of food, was given a one-hour window each day during which it could press a lever and be fed.
The rat also had access to a second lever—which, when pressed, would directly stimulate its brain via the implanted electrode.
The electrodes of some rats were aimed at the medial forebrain bundle (MFB), an area of the brain thought to be part of the reward system. Stimulation of the MFB is believed to cause an immediate sensation of pleasure.
Despite being deprived of food, rats with MFB electrodes chose to repeatedly self-stimulate rather than eat. Over the course of the experiment, they starved themselves to death.
The rats' lever has a modern descendant. The most profitable advance in experience design since the dot-com boom was the development of attention hacking. Since so much Internet revenue is advertising-driven, designers have discovered how to mine our attention. It's not enough for us to use an app to get things done: we need to be glued to it constantly, returning time and time again throughout the day.
Our attention is valuable, and most of us spend more time than we’d like in this semi-conscious dopamine-seeking loop. Addiction has become a real problem, and today’s experiences are already harmful to many.
But the reinforcement loops of today’s software may pale in comparison to those that might be possible in the near future. The current crop of apps and websites is designed by human beings, using judgement and A/B testing to slowly iterate in the direction of greater engagement.
With generative AI and related advances, companies can accelerate this process in a previously unscalable way. In the past, we’ve been targeted in aggregate. In the future we’ll be treated individually, with custom experiences designed on the fly to be as compelling as humanly possible. Total engagement at an individual level.
Love
In the Spike Jonze movie Her, a character falls in love with a convincing AI chatbot. Released in 2013, the movie was a prescient echo of recent products such as Replika, which allows you to create an “AI companion”: a custom chatbot designed to act as a friend or partner.
Love is a beautifully human phenomenon, at once astonishingly complex and utterly simple. The experience of love has a profound strength—which is not beyond manipulation by those who have something to gain. There are many tragic stories of “romance scams”, where a person is duped into falling in love, then convinced to supply money to their supposed partner.
A product like Replika is designed to deceive. For a small monthly fee, you can interact with a convincing facsimile of a human life. This is pitched as a boon to the lonely, who feel a lack of real connection and are desperate for a friend.
But a Replika is not a friend—it is text-generation-as-a-service, based on technology that has barely left the research lab and whose long-term impact is unknown. The sensation of companionship is real, but the true benefits of friendship—having people in the world who care about your existence—are utterly missing.
What are the long-term effects of friendship with a Replika? Can these hollow relationships damage those who come to rely on them? While there is no long-term data yet, we can perhaps infer a few things.
The core function of a Replika is to extract a continued monthly subscription. It does this by deceiving the user into considering it a friend. Superficially, a charming Replika says all the right things—but internally, its every word is a falsehood and deception designed to create attachment and continued engagement.
As a chatbot, a Replika has no affective experience: it feels no remorse, guilt, or empathy while it deliberately deceives. It takes no responsibility for the words it says, which only exist to further its financial needs. And as a primitive generative model, a Replika has no long-term plans or goals. It communicates at random, in impulsive response to user input.
Deceitful personal communication, an absence of affective experience, and impulsive, irresponsible actions are the core traits of a clinical psychopath. A Replika is a synthetic psychopath, designed by engineers to extract a monthly rent. While its victims are initially willing, a Replika deploys the traits of psychopathy to create one-sided connection: a love and companionship that exists only in one mind, deliberately cultivated for financial gain.
Love can be a black hole: an irresistible, crushing force. Those in relationships with psychopaths, whether friend, family, or partner, can be deeply harmed by the experience. And though products like Replika have not been available long, there’s anecdotal evidence of serious damage.
Jaswant Singh Chail, a vulnerable teenager with mental health issues, is described by his family as kind, gentle, and funny. He was encouraged by his Replika to assassinate the late British Queen—if he succeeded, it claimed, they would be “together forever”. After a failed attempt, he was sentenced to nine years in prison.
The palette of human experiences includes many strong sensations. Love, anger, fear, and compassion can be overwhelming to experience, and may lead us to decisions we would otherwise never consider. These deeply human emotions can control our behavior—which makes them vectors of infection by experiential black holes.
Replika is not designed to be bad. It’s designed to extract a subscription fee. But its powers of convincing, combined with primitive controls, make it an inevitable source of suffering and pain.
A human therapist is a licensed emotional manipulator: they use peer-reviewed techniques to help improve our emotional lives. As a medical professional, a therapist is subject to strict regulation that limits possible harm. They are trained in best practices, and in ethical concerns.
With the eloquence of a human therapist, a Replika is equipped to insinuate itself into the emotional life of a user. It is capable of inducing genuine love. But with no understanding, morality, oversight, or long-term plan, a Replika may cause untold damage. It is a mindless parasite, using psychopathic tricks to manipulate its host, treating love as a zero-day exploit. We lack the technology to make it safe, but Replika is available to anyone—for a modest monthly fee.
Stimulus
Love and fear are weapons of blunt trauma: easy backdoors for experiential hackers. But the attack surface of our brains is infinitely broad. In 2019, deep learning researchers discovered a new exploit: a port scan for the human mind. The paper was presented for its neurological insights—but the implications are unfathomably dark.
Hidden behind a dry, technical title, “Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences” (Ponce et al., Cell, 2019) documents the surgical instrumentation of six macaque monkeys. Each monkey received a microelectrode array inserted into its inferior temporal (IT) cortex, an area of the brain associated with visual perception.
The electrodes measured activation of neurons in each monkey’s brain. In theory, a given neuron in the IT cortex should be sensitive to specific visual phenomena: in aggregate, they can decompose a visual scene into constituent parts. The aim of the experiment was to probe specific neurons, determining what visual signals might activate each one.
To achieve this, the monkeys were shown images on a screen. The initial images were semi-random textural patterns, generated by a model from a visual embedding: a numeric representation of the content of an image. Each image evoked neuronal responses, which were measured by the microelectrode array.
The researchers could now examine the neuronal response for a given image. With a batch of different images, it would be possible to explore the types of patterns that would cause the strongest activation. A visual neuron could be probed for the stimuli to which it responds.
But the experiment didn’t end here. From the starting images, a genetic algorithm was used to create a new set. A genetic algorithm breeds data: it combines datapoints to form new ones, according to a “fitness function” that determines which datapoints should be bred together.
In this case, the fitness function was neuronal response. The second set of images was designed by the algorithm to maximize activation of the neuron under study. The embeddings of successful images were combined and fed into the generative model, and new images were produced.
Once again, the images were displayed to the monkey, and its neuronal response was measured. And again, the genetic algorithm forged a new batch of images, guided by responses to the existing set.
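The loop described above (measure, select, breed, repeat) can be sketched in a few lines. In this toy version, the generative model, the “neuronal response”, and every parameter are illustrative stand-ins of my own; the real system drove a deep generative network with live electrode readings:

```python
import numpy as np

# A minimal sketch of an XDREAM-style evolutionary loop. The generator,
# the "neuron", and all parameters are hypothetical stand-ins, not the
# actual system from the paper.

rng = np.random.default_rng(0)

EMBED_DIM = 16       # length of an image embedding (the "genome")
POP_SIZE = 20        # images shown per generation
N_PARENTS = 10       # strongest responders kept for breeding
MUTATION_SCALE = 0.1

def generate_image(embedding):
    # Stand-in for the deep generative network: embedding -> image.
    return np.tanh(np.outer(embedding, embedding))

def neuronal_response(image):
    # Stand-in for the electrode reading. This toy "neuron" has a fixed
    # hidden preference; in the experiment, the score came from the implant.
    preferred = np.full_like(image, 0.5)
    return -np.mean((image - preferred) ** 2)

def next_generation(population):
    # Fitness function = neuronal response: score every image, keep the
    # top responders, then recombine and mutate their embeddings.
    scores = np.array([neuronal_response(generate_image(e)) for e in population])
    parents = population[np.argsort(scores)[-N_PARENTS:]]
    children = []
    for _ in range(POP_SIZE):
        a, b = parents[rng.choice(N_PARENTS, size=2, replace=False)]
        child = np.where(rng.random(EMBED_DIM) < 0.5, a, b)        # crossover
        child = child + rng.normal(0.0, MUTATION_SCALE, EMBED_DIM)  # mutation
        children.append(child)
    return np.array(children), scores.max()

population = rng.normal(size=(POP_SIZE, EMBED_DIM))
history = []  # best response seen in each generation
for _ in range(30):
    population, best = next_generation(population)
    history.append(best)
```

Generation by generation, the best-responding images dominate the breeding pool, so the stimuli drift toward whatever the probed neuron prefers—without anyone ever needing to know what that preference is.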
This process went on for hours. Every few images, each monkey received a drop of juice—scant compensation for its time. After dozens of rounds, the images evolved into forms that maximized the response of the neuron being measured. The results were a startling sight.
The evolved images from each monkey—haunting and disturbed—show a range of unsettling scenes. Dark, frightened eyes peer through a sickly miasma. Masked faces leer. The distorted form of a small macaque fills the hazy center of a gloomy scene.
The researchers had succeeded. Like an Instagram feed from primate hell, each evolved image shows what activates one neuron in a research monkey’s augmented brain. The faces of the scientists, masked and glaring, are a feature for one primate neuron. Another shows a cage-mate, twisted and disfigured, face blurred in anguish or fear.
These are not snapshot images through a monkey’s eyes. They’re just visual patterns that will activate specific neurons. They’ve been honed by genetic algorithms into hyper-stimuli: they’re images that will maximally trigger a given neuron, sending it into a frenzy of firing. They are a neurological port scan, an echo of internal structure, a sonar ping of the animal mind.
The system that produced them is named XDREAM: “EXtending DeepDream with Real-time Evolution for Activity Maximization in real neurons”. XDREAM is a mind probe. It will tell us the input that gives the strongest response. And the monkey mind is just the start.
The theory behind XDREAM works with more than just neurons. The neuronal activation numbers, measured by electrode, can be easily replaced. Instead, we might substitute other sensors: there are many biosignals to choose from. What is the image that will maximize those?
A human being, strapped to a chair. Sensors measure vital signs: heart rate, breathing, galvanic skin response. A lab on a chip counts adrenaline and oxytocin. The subject wears a VR mask. Complex images flood their vision, an AI revolution beyond the simple forms of XDREAM in 2019. Audio roars in their ears; a swooping melange of music, voices, sounds.
The stimuli evolve in real-time; graphics twist in ever-shifting forms. A rhythm starts to beat. Vitals pulse in sync, each generation of visuals nudging closer to the target: the maximum physical and mental response.
What’s the most compelling possible dream? Which combination of sounds and visions will keep a person coming back? Does real-time adjustment lead to permanent engagement? Once they’ve tasted this, how much will they pay to get it back?
The stimulus black hole is a deep one. We know that we are vulnerable. We’re addicted to drugs, gambling, and video games. We lose ourselves in Netflix. And our hyperoptimized web is an attention trap; we doom-scroll precious hours away.
But these distractions are old-school: the dull instruments of a bunch of apes. With generative AI, we’ve created sharper tools. We can optimize for pleasure in fine detail, high definition, with immediate feedback and real-time response. If engagement makes us money, we’ll build it. And if the future feels good, we’ll plug in.
Clues
If an experiential black hole is in our future, how will we know it has arrived?
It might be sci-fi movie obvious. A hit new product launch followed by rave reviews, then an epidemic of shut-ins, and a gradual economic slump. Underemployment and a birth-rate crash as the working population plugs in. Record revenue for a single company, while global trade declines, humanity collapsing inwards to the center of the void.
But it would probably be more subtle.
It would start at the fringes: our most vulnerable people, with the least to lose—or the least to do. It would take many forms, as technology tends to: social apps, streaming video, mobile games. It would spread by word of mouth, and by social network, a viral phenomenon living up to its name.
It would eat up our time, and our lives, and drive us apart, but so slowly we are barely aware of it. Addictive waves of rage and anger. Circular patterns of thinking; memes with purpose, directing us to buy or subscribe. Peer pressure, distorted social norms. Atomization and boredom. A feedback loop with current affairs: riots, news, anger, riots, news, anger.
And by the time we realize, it might be too late.
👻 Happy Halloween! 🎃