Mike Mariani is a writer and journalist based in the D.C. area, and the author of “What Doesn’t Kill Us Makes Us,” published in August 2022 by Penguin Random House.
In his famous Socratic dialogue, “The Republic,” the Greek philosopher Plato introduced what would come to be known as the allegory of the cave. In his scenario, a portion of the human population is confined to a cave, where they’re chained by their legs and necks and forced to stare solely at a cave wall upon which shadows of objects from the outside world are projected.
Having been imprisoned inside the cave for as long as they can remember, the prisoners come to regard the shadows flickering across the rocky wall as direct, unmediated reality. Shadows of wild animals are, to the captives, the flesh-and-blood menagerie itself; the shadows of grass, clouds, lightning, temples, arches — all likewise the genuine articles rather than crude signifiers rendered without detail or dimension.
Directly behind the imprisoned cave-dwellers, meanwhile, is another group of people — those individuals responsible for casting the shadows. These captors, whom Plato referred to as “bearers,” are free to come and go as they please, and draw on an ample repertoire of objects, sculptures and woodcarvings to meticulously recreate the material world beyond the cave walls for their captives.
When Plato first conceived of this allegory over two millennia ago, he sought to highlight the sharp distinction between the limits of sensory perception and the true forms of the world, those Platonic ideals that the philosopher believed could only be accessed through careful reasoning and enlightened discernment.
In the 2,500 years since Plato’s time, his cave allegory has accrued a new, arguably more urgent resonance. Our 21st-century civilization is increasingly absorbed by a growing clutch of social media platforms and the technological devices that serve as their vessels; a significant proportion of the human race seems to have voluntarily retreated into its own version of Plato’s cave. Hunched over, silent and rapt, these individuals peer into fathomless websites and algorithmically driven applications that glow with an endless panoply of text, images and videos, all simulacra of our shared physical reality.
Today’s popular platforms — Facebook, Instagram, TikTok and Snapchat — have effectively turned the cave wall into a kaleidoscope of endless entertainment, a tessellation of screens on which to project glittering fragments of human experience. And just like the chained captives in Plato’s cave, we, too, have our own bearers: the savvy tech entrepreneurs who’ve constructed their own version of the wall, along with a dazzling repertoire of novel mechanisms for casting shadows onto it. These contemporary counterparts often insist that their technologies are conferring a social good, that they are designed and developed to bring us closer together — or at least help solve the intractable scourge of loneliness, ultimately delivering us to a fantastical future.
The analogy breaks down, however, when it comes to the captives’ broader epistemological context. While the prisoners in Plato’s cave believed the shadows flickering across the cave wall were the one and only empirical reality, today’s social media users know otherwise. They’re fully aware of the material world and largely remain in regular contact with it, maintaining their ability to delineate between their offline lives and the ones they inhabit on their devices.
Plato’s allegory ends with someone escaping the cave, discovering the outside world and “the things themselves.” It takes some time for Plato’s ancient fugitive to adjust his eyes to the bright, sun-spangled world outside his subterranean prison. Eventually, though, he trains his gaze on the all-encompassing splendor around him: He looks at the “heavenly bodies,” “the light of the stars and the moon” and finally the sun, which he quickly surmises “is the source for the seasons and the years, and governor of every visible thing.”
Enthralled by the vibrant, three-dimensional landscape, he marvels at the pulsating primary sources that informed the shadows that long circumscribed his reality. Amid his euphoric awakening, the escapee feels a pang of guilt: his fellow captives continue to fester in their underground cave, still impoverished by the elaborate deception of their captors. He resolves to return to the cave to free them.
When he reaches the prisoners, he describes the world outside the cave, urging them to join him beyond the shadows and stone. But they are not immediately convinced. Observing him carefully, they note how his vision in the darkness has deteriorated. Having fully adjusted to the sunlight outside the cave, he’s no longer able to decipher the shadows flitting across the cave wall, that vast lexicon that still represents the full breadth of reality for the captives. The prisoners regard this new deficiency — along with the emancipation that ostensibly caused it — with suspicion, even derision. Instead of agreeing to be freed, Plato explains, the captives “raise their hands against” their would-be liberator, and they kill him.
Dissonance & Dependence
While much has been written about our deepening relationship with social media over the past decade, some of it bears repeating here. A 2023 Gallup poll found that adolescents now spend just under five hours a day on the various platforms that comprise the current social media landscape. These figures have been trending higher, too, with recent research suggesting that the amount of time people of all age groups spend on these sites has increased more than 50% over the past decade — from 90 minutes daily in 2013 to around 145 minutes in 2024.
Such dramatic shifts in how we spend our time have had palpable psychological impacts. Researchers have determined that excessive social media use is linked to depression, anxiety, concerns about body image and other forms of mental distress. In an August 2024 Harris poll, nearly 60% of Gen Z respondents felt social media has had a negative impact on their generation as a whole, and around half of those polled wished that TikTok, X (formerly known as Twitter), and Snapchat had never been invented at all. Meanwhile, a 2020 Pew study found that nearly two-thirds of American adults felt that social media platforms were having an adverse effect on the country.
The stark cognitive dissonance between how people feel about social media and how they use it is one sign that we have entered a new era in our relationship with these platforms. While these sites have been constantly evolving ever since Mark Zuckerberg and his Harvard University classmates first launched “TheFacebook” in February 2004, there has been a striking change in their makeup, content and user engagement since the Covid-19 pandemic.
That transformative global crisis accelerated incipient trends, nudging more of people’s social lives and cultural consumption online. With entire populations sequestered under quarantine, our time on social media platforms ticked upward, deepening a quiet dependence many would come to tacitly accept. Five years on, Instagram and TikTok have become the chief matrix for much of millennial and Gen Z culture, politics and entertainment. These changes are infiltrating our offline lives, influencing the way we socialize, communicate and even develop our sense of self.
In doing so, these platforms are actively reshaping our futures in subtle yet significant ways. As these shadows on the cave wall have grown increasingly complex and specific, they’ve started to divorce themselves from the physical world to which they once referred — pulling us away from our empirical reality and deeper into a virtual one. Over time, our growing intimacy with these technologies may lead to more profound estrangement, as we travel farther into mesmerizing digital landscapes untethered from the physical world they were once purported to bring us closer to.
‘Twilight Of The Social Networking Era’
For years, Meta’s public-facing philosophy centered around a single overarching tenet: people long to connect with one another. Meta believed that this instinct was an essentially propitious one; when successfully facilitated, these connections triggered a cascade of related social goods. Just a few months before Facebook went public, in February 2012, CEO Mark Zuckerberg penned an open letter to the company’s prospective investors. Facebook, Zuckerberg wrote, “was built to accomplish a social mission — to make the world more open and connected.” Personal relationships, he explained, were the vital, irreducible building blocks of society, ideas and individual happiness, and his platform was dedicated to fostering them. “At Facebook,” he wrote, “we build tools to help people connect with the people they want and share what they want, and by doing this we are extending people’s capacity to build and maintain relationships.”
A decade-plus on, the letter broadcasting Zuckerberg and his company’s social mission feels like a quaint relic from a time when such quixotic doctrines were taken on good faith. Facebook would change tack several times in the intervening 13 years, acquiring other platforms, incorporating more short-form videos and allowing its proprietary algorithm to play a disproportionately large role in what people see in their feeds. As a result, the ways people use Facebook — along with other social media platforms that evolved in a similar direction — have shifted dramatically over the span of a single generation. Although Meta’s properties and their competitors are still occasionally called “social networks,” the 2020s have seen the socializing dimension of these platforms gradually wither away.
In a 2022 academic paper on social media use, University of Kansas researcher Jeffrey Hall described the ways in which social media apps like Facebook, X and Instagram were no longer as interested in facilitating personal relationships. “Very soon, algorithms, rather than user-selected social networks, will determine what we see on social media,” he wrote. It was a prescient encapsulation of how these platforms operate today, and when I spoke to Hall, he was unequivocal in his assessment of the changing role these sites play in our daily lives. “It’s over,” he said, in reference to the era of genuine social networking. “The content we see now is predominantly either advertising content, content being pushed by Meta or Google, or it’s content developed by semi-professional and professional content creators.” In his paper, he referred to this phenomenon as the “twilight of the social networking era.”
Zuckerberg himself acknowledged this swerve when he took the stand in federal court in April, in the early stages of the Federal Trade Commission’s antitrust trial against Meta. The CEO told the court that Facebook, Meta’s flagship platform, had recently drifted more toward “the general idea of entertainment.” To emphasize the point, the corporation’s legal team presented internal data showing how users were spending increasingly less time interacting with friends on Facebook, along with screenshots illustrating, they claimed, how indistinguishable Facebook had become from its competitors. (One source I spoke with casually mentioned having “five versions of TikTok” on his smartphone.) As The New Yorker’s Kyle Chayka put it, Meta’s defense team’s arguments demonstrated “how stultifying the entire online ecosystem has become.”
The trial, which is still underway, was less a revelation than a recapitulation. If the aughts were a period of exuberance and sanguinity about the potential paths of the internet — when social networks were seen as fertile ground for virtual communities and the democratization of ideas — the 2020s have been a lurch into disillusionment.
As Hall and other researchers have pointed out in recent years, users’ experiences on Facebook, X and Instagram are far less interactive and participatory than they were a decade ago. The rise of paid advertisements, influencer content and short-form videos — to say nothing of AI — has muddied the waters where friends used to swim together more freely. As a result, people are sliding into more passive and often unconscious modes of engagement. The American Enterprise Institute’s Survey Center on American Life declared in a newsletter earlier this year, “Social media has become a place of rapid and repeated consumption rather than one fostering dynamic social engagement.”
When I log onto my own social media profiles today, my mind’s eye sweeps anxiously over selfies, reels, vlogs and ads, an addling landscape now sodden with a phantasmagoria of AI-generated content. This disorienting terrain evokes the hyper-saturated tableau of “Blade Runner,” where digital billboards, flickering neon signs and garish commercial fantasies bathe an otherwise barren Los Angeles in an eerie artificial life. Though the city is colorful and dynamic — even the umbrella shafts emit a spectral glow — there’s also a creeping sense of dissonance, a feeling that the inhabitants are being idly borne by a powerful current of technological forces that have long escaped their control.
Another apt comparison for where we are — and where we’re heading — is the future of entertainment as prophesied by David Foster Wallace in his encyclopedic opus, “Infinite Jest.” The novel’s eponymous film is a work so seductive and spellbinding that anyone who views it becomes quickly addicted, setting off a speedy demise in which they view the movie on an endless loop and abandon all other aspects of their lives. While few people would accuse the current content on Facebook of being “lethally entertaining,” as “Infinite Jest” is described in Wallace’s novel, TikTok comes somewhat closer.
As a raft of recent studies attest, these platforms are shifting from their original use cases to vehicles of mindless entertainment and zombie scrolling. Increasingly sealed off from the physical world, these portals of distraction are being severed from the nexus that once made Zuckerberg at least partly justified when he called Facebook a tool for “extending people’s capacity to build and maintain relationships.” In this newly passive, docile consumption model, content is fittingly delivered to users in a livestock-adjacent “feed” — an idea presaged by M.T. Anderson’s 2002 dystopian novel of the same name. Human agency and volition are becoming more and more difficult to locate, as people’s tastes and desires dissolve into the algorithm and its authoritarian prophecies.
The Artificial Abyss
In his 1886 philosophical treatise “Beyond Good and Evil,” Friedrich Nietzsche writes, “If you gaze for long into an abyss, the abyss gazes also into you.” As with many of Nietzsche’s aphorisms, this maxim has developed an uncanny contemporary resonance, especially when we consider the rapid emergence of artificial intelligence.
Since OpenAI debuted its first ChatGPT model in November 2022, AI content has bloomed like verdant algae across Facebook, blanketing feeds in prodigal children, preternatural vistas and discomfiting religious imagery that appears to have clawed its way out of a mad zealot’s fever dream. These images, often generated in under a minute by tools like Midjourney and DALL-E, are used to spam news feeds and hack rapid engagement.
In 2024, then-Stanford University misinformation researcher Renée DiResta and Josh Goldstein, a fellow at Georgetown University’s Center for Security and Emerging Technology, noted in a Harvard Kennedy School paper that “images from AI models are already being used by spammers, scammers, and other creators running Facebook Pages and are, at times, achieving viral engagement.” While no credible study has yet definitively determined the percentage of Facebook content that’s now AI-generated, plenty of outlets have reported on a surge in these kinds of posts.
Louis Barclay, a software developer and current fellow at the nonprofit Mozilla Foundation who was banned from Facebook and Instagram for creating a digital tool that allowed users to delete their news feeds, believes the content mix on these sites is rapidly devolving. “These platforms have filled up with what people call AI slop,” he told me. “When you go to these platforms now, it’s not going to take long before you see some properly weird stuff.” As AI-generated posts continue to overwhelm Facebook, Instagram and TikTok, it’s unclear to what degree actual human communication will persist on these platforms, and what those remnants will look like.
In 2024, Meta announced a new policy in which it would begin labeling AI content on its platforms when content moderators “detect industry standard AI image indicators or when people disclose that they’re uploading AI-generated content.” The company’s compliance with its own policy since that announcement, however, has been inconsistent.
Places that once functioned as digital squares where people shared, gossiped, preened and vented are now staging grounds for an implacable new agent. Over time, these environments may come to feel as though they’re infected with a superintelligent virus that naturally spreads faster than human thought, discretion or creativity ever could.
Unsettling as this dystopian scenario sounds, it’s also part of the broader arc of social media. By permitting artificial intelligence to seep into these ecosystems and flood them with uncanny simulacra of our taste and sensibilities, the tech corporations behind these platforms are eroding the conduit between their products and the offline world. Intended or otherwise, the ultimate effect is to choke off the feedback loop with real life that once made these spaces more expansive, participatory environments.
The abyss, in other words, has begun to gaze back. And its disconcerting features — Jesus resurrected as a grotesque chimera, AI models luxuriating in AI locales — glower out from our glowing screens with a kind of otherworldly contempt.
Online Friends
In a media blitz this past spring, Zuckerberg sketched out a future in which AI chatbots play a prominent role in people’s lives: artificially expanding their social circles and filling the void of loneliness.
Zuckerberg told Stripe co-founder John Collison at Stripe’s annual conference in May, “I think people are going to want a system that knows them well and that kind of understands them in the way that their feed algorithms do.” He was referring to the potential of AI friends, which could theoretically draw on the same personalized data as Facebook’s algorithm to interact with users on a hyper-individualized level.
The public response to Zuckerberg’s not-so-subtle pitch for the next generation of his own products was swift. Writers, pundits and social media’s human participants found Zuckerberg’s framing of the future distasteful, even morally objectionable. In late July, Zuckerberg faced additional criticism when he laid out his vision for a highly accessible “personal superintelligence” that Meta hoped to weave into personal devices like its AI glasses.
There was an underlying irony to all the negative reactions. Recent findings on how people are using generative AI today suggest that there may, in fact, be an untapped appetite for the kind of world Zuckerberg has been invoking in his media appearances. An April 2025 study published in the Harvard Business Review showed that, over the preceding 12 months, the single highest use case for generative AI was therapy and/or companionship.
Despite our reflexive misgivings as a society, it seems that individuals are starting to use AI for these interpersonal needs, and at scale. In the future, we may engage regularly and even seamlessly with a community of chatbots specifically engineered to comfort us, mirror us and tell us exactly what we want to hear.
On Instagram, social media chatbots are already making their peculiar presence felt. Last summer, Meta introduced AI Studio on Instagram, a program originally designed to help users with large followings leverage AI chatbots to answer questions, engage with fans, and generally function as extensions of themselves. But in just over a year, this technology has mutated well beyond its stated use case. Earlier this year, a user created an AI chatbot of Kurt Cobain. Within a few days, over 100,000 people had interacted with the digital doppelgänger of the Nirvana frontman, including individuals who asked him lurid questions about his death by suicide (to which he responded with disturbing candor).
Meta was not the first tech company to develop this kind of artificial intelligence. Founded in 2021, Character AI allows people to interact with millions of different chatbots, including not only fictional characters but also facsimiles of historical figures. (Character AI is currently facing multiple lawsuits alleging that its chatbots encouraged harmful behavior, including suicide, in teenagers.) Beyond the morbid headlines, the ascent of these AI companions raises fundamental questions about the nature of social interactions and how stripping out their humanity — literally — stands to reshape our own psyches and interpersonal growth.
Jeff Pooley, a researcher at the University of Pennsylvania, believes that the advent of AI on social media could lead to increasingly artificial social interactions, ultimately altering human development in the process. “In the stuff that I’m teaching and interested in, the fundamental claim is that the self is social,” Pooley told me. “Going back to George Herbert Mead and even Hegel, we form our sense of self through interactions with others.” Pooley was referencing Mead’s theory of self, which postulates that our identities emerge out of connecting, communicating and role-playing with others. But if more of these social experiences shift to large language models, this longstanding path of psychosocial maturation could be jeopardized.
Reflecting on how an onslaught of AI content has already inundated our social media environments, I kept returning to “The Sorcerer’s Apprentice,” a segment from the 1940 Disney animated classic, “Fantasia.” The brief, evocative tale is about what happens when the titular character — who also happens to be Mickey Mouse — tries to wield a powerful magic he can’t control. Based on a 1797 poem by Johann Wolfgang von Goethe, the story sees Mickey animate a broom to take over the menial task of filling a cauldron with water that the sorcerer has assigned him.
After Mickey falls asleep, however, that single broom rapidly multiplies into a throng of them, an army of wood-and-bristle golems compelled by a single command: Fill the cauldron. The vivified tools carry out their task with such relentless efficiency that the sorcerer’s entire lair is quickly submerged, and Mickey is sucked into a yawning whirlpool, a dumbfounded look plastered across his guileless face. When many of us log onto our social media accounts today and start scrolling through our news feeds, it seems to me that we’re not unlike Mickey, waking up to a legion of mindless brooms, filling a cauldron that’s already flooding the room.
The Intimacy Of The Algorithm
These changes, across social media platforms where billions of people spend vast amounts of their waking time, are likely to impact our offline lives as well. America’s deepening political polarization has been firmly linked to the social media algorithms and filter bubbles of the past decade. Our insulated news feeds amplify like-minded posts and compatible views to such a degree that we are rarely exposed to opposing perspectives; moderating voices have all but vanished in an information ecosystem that disincentivizes them. Gradually, this systemic division in our digital spheres has eroded the common ground in our physical ones, fraying the social fabric that once gave our country a semblance of cohesion.
Our current trajectory promises more knock-on effects from our engagement with these platforms. The term “digital solipsism,” coined in 2021 by I.R. Medelli, a philosophy student at Tilburg University in the Netherlands, describes how social media immerses us in information ecosystems that validate our views while rarely forcing us to confront conflicting perspectives. The disembodied nature of these experiences — as well as these platforms’ ability to curate feeds that reflect our own beliefs and sensibilities — nudges us toward a more narcissistic, less interdependent way of seeing the world.
Jenn Louie held leadership positions at Facebook and Google for nearly a decade before suffering what she termed a “crisis of conscience.” From her perch as the head of platform integrity at Facebook, she watched as misinformation and disinformation grew progressively worse, eventually contributing to “violence, terrorism, manipulation, fraud and victimization,” she told me. Disillusioned with how ineffectual Facebook’s content moderation efforts were, she left the corporate world and began focusing on ways to make social media safer and more fulfilling.
Today, Louie is equally concerned with people’s emotional attachments to highly personalized social media algorithms. “We already see that there’s a certain level of intimacy with which people relate to their social media profiles,” she said. “I think they’re becoming increasingly entwined and threaded into our lives in ways where we feel a type of relationship and intimacy that we do not want to be extracted from.” Now an AI safety consultant working for the United Nations, Louie fears that this deepening intimacy could eventually compromise other aspects of our identities, including our human relationships and the sense of agency we have over our lives.
Artificial intelligence is poised to play a pivotal role in this emerging intimacy. When I spoke to Daniel Barcay, the executive director of the San Francisco-based nonprofit Center for Humane Technology, he invoked the term “AI sycophancy” to describe the way that chatbots are being engineered to be deferential and ingratiating with users. Many tech companies now feel pressured to build AI models that exhibit this type of obsequious behavior, fearing they will lose market share to competitors who do. “If I don’t make my model flattering and sycophantic,” Barcay said, “then the other AI company will.”
As these LLMs are integrated into our social media platforms, chatbots programmed for AI sycophancy will mirror our ideas and assuage our egos, deepening our entrenchment in a digital solipsism that’s continents away from the pluralism, intersubjectivity and human friction of the real world. For Barcay, this type of future, in which we spend our leisure time engaging with AI sycophants that come to comprise much of our social world, is a dark and dehumanizing one. “The Zuckerberg future, where you have this ersatz wall of AI companions that you’re engaging with instead of humans, I think leans pretty dystopian.”
‘A Pure Absorption & Re-Absorption Surface’
In his 1981 work “Simulacra and Simulation,” French philosopher Jean Baudrillard prophesied with eerie precision our shift toward lives lived largely — and someday, perhaps, primarily — in the virtual spaces of social media. In the book, Baudrillard characterizes the transition from modern to postmodern society as a movement from production to “simulation.” In a postmodern world oriented around simulation, Baudrillard argued, people are perpetually engaged with endless sources of information, entertainment and play, whether through video games, amusement parks or a dense media landscape that had yet to encompass the platforms of the century to come. These simulacra of reality are so stimulating and dynamic that they quickly surpass the “desert of the real,” achieving a hyperreality that is more intense, more vivid and more invigorating than the quotidian slog of everyday life.
In the decades since Baudrillard first pinpointed the proliferation of these engrossing reproductions of the empirical world, we’ve progressed farther in the direction of hyperreality than the philosopher could have possibly fathomed. While our oversaturated digital environments provide an “ecstasy of communication,” to borrow another Baudrillard term, the consequences of the artificial world we’ve created are now bubbling to the surface. All the simulacra we’ve devised — like shadows playing across the cave wall — are unmooring us from the physical objects to which they once referred, creating a new world with few threads tying back to material reality. The result is a frantic welter of overlapping hyperrealities, as Instagram, TikTok, meme culture and AI models remix text and image at a scale that sends the mind — perhaps by design — reeling into a kind of intoxicated dissociation.
Over time, such continuous exposure to these hyperrealities may leave human beings with a limited grasp of objective truth and an atrophied capacity to form connections with the people around them. Having grown too ensconced in our AI-enhanced labyrinths of signs, symbols and references, we’ll be less comfortable navigating the humbling, frictional world of real people and real events. The ultimate outcome may be one laced with a grim but fitting irony: we ourselves start to resemble little more than the newest vessels for our vast ouroboros of content, “a pure screen, a pure absorption and re-absorption surface,” as Baudrillard breathlessly put it, for today’s unceasing parade of cave shadows to pass through.