Zeve Sanderson is the executive director of NYU’s Center for Social Media & Politics, and a research affiliate at the NYU Center on Technology Policy.
Scott Babwah Brennen is the director of the Center on Technology Policy at NYU and a consultant on technology policy issues.
The early months of Donald Trump’s second administration have, much like his first four years, been defined by lies: strange lies, self-serving lies and inhumane lies. “A deluge of falsehoods,” as Democratic Sen. Chuck Schumer described the president’s March 2025 address before a joint session of Congress.
We often call these lies misinformation. While the term has been around since at least the 16th century, it really entered the public vernacular in 2016, when the outcome of the Brexit referendum and Trump’s first election win were widely ascribed to lies circulating on social media.
It’s hard to overstate how much the term has become part of the zeitgeist over the last decade. In 2016, Oxford Dictionaries selected “post-truth” as its word of the year; in 2017, Collins Dictionary selected “fake news”; and in 2018, Dictionary.com picked “misinformation.”
For the second year in a row, surveyed world leaders in academia, business, government and civil society ranked misinformation and disinformation as the highest short-term risks, above weather events, inflation and war, according to the World Economic Forum’s Global Risks Perception Survey.
An entire field of research is now dedicated to mis- and disinformation, alongside journalism and white papers, advisory boards and symposia, and even laws passed to stop its spread.
But after nearly a decade of concerted effort to combat misinformation, we must ask: to what effect? It’s unsettling to realize that, at least in the U.S., we have made little, if any, discernible progress. While the American public has never been particularly well-informed, it certainly isn’t today. Perceptions of what constitutes truth and who can credibly claim it have polarized. Trust in institutions, which was already dropping, has further decayed. Many platforms have, to varying degrees, shifted away from moderating misinformation.
Given all this, it’s hard to feel confident that the work of the last decade has made measurable progress in curing our so-called “information disorder.” Of course, it’s impossible to prove a negative; perhaps we would have been even worse off without this work. Some may feel that this means we haven’t done enough. But the lack of meaningful results raises the question: Did we even understand the problem to begin with?
The Emergence Of A Paradigm
While misinformation has existed since the dawn of human communication, or so the story goes, technology has changed the dynamic. With the advent of information technologies like social media, search engines and generative AI, misinformation can now travel at unprecedented speed and scale.
To make sense of the new online environment and its impact on democracy, a dominant paradigm emerged across journalism, academia and civil society that was largely built around a single axis: true or false, information or misinformation.
By focusing on facticity, the paradigm emphasized a specific danger: persuasion. Misinformation threatens society specifically because it can mislead the public, persuading people that Trump won the 2020 election, that they should exit the European Union to promote economic growth or that the Earth isn’t warming.
The paradigm also implied a solution: If something is false, it should be corrected. And correct we did. Fact-checking, once a cottage industry, has become a mainstay of political coverage, with newspapers and television stations offering corrections in real time. Researchers (including us) focused considerable energy on measuring the efficacy of these efforts to correct misinformation. Characteristic scholarly disagreements followed. Should we prebunk or debunk? Does repeating a lie help it spread, even when the aim is to disprove it? Do fact-checks “backfire” by further entrenching beliefs? Do simple nudges toward accuracy really have outsized effects?
Alongside this work, pressure was placed on technology companies to slow the spread of misinformation. While their actions were often insufficient, the task was also daunting. As described by CEOs in congressional testimony, platforms worked to identify misinformation at an unimaginable scale through a combination of expert evaluations, user signals and automated systems. Content found or predicted to be false was labeled and downranked, or sometimes removed entirely. Before the recent divestment from such efforts, Facebook reported spending $13 billion over a five-year period on safety and security, with entire teams dedicated to curbing the spread of false information.
Misinformation, often conceptualized as a virus infecting the minds of the public, certainly wasn’t going to be cured, but a line of defense had formed to protect against the insidious force spreading across the body politic.
Critiques Coalesce
Today, we have the benefit of knowing how the story ends. After years of going all in on the misinformation paradigm, we’re arguably worse off. The “Stop the Steal,” climate denial and vaccine skepticism movements are all still alive. According to a 2021 survey by the Cato Institute and YouGov, most Americans distrust social media platforms to moderate content. Companies have largely stepped back from their misinformation policies and enforcement, while news outlets continue to die off. Fact-checking the administration’s misinformation — about Ukraine, government spending, public health, tariffs — seems to do little.
What we know, from decades of research across psychology, political science and other disciplines, is that the public is hard to persuade and behaviors are difficult to alter. Recent empirical studies suggest misinformation is no different, calling into question the dominant paradigm.
“Misinformation on Misinformation,” reads the title of a well-cited academic article, covering six misconceptions about the topic. A news feature published last year in Science explored the “field’s dilemmas,” highlighting various challenges to misinformation research. This academic conversation has even emerged from behind the often paywalled pages of scholarly journals. Competing essays in The Chronicle of Higher Education debate whether misinformation should be studied. “Is the misinformation crisis overblown?” a recent podcast asked two guest researchers.
At a moment of real-world change, disagreement among scholars can seem like academic navel gazing. But the dominant misinformation paradigm was, in large part, shaped and legitimized by academics whose research helped define the problem, influence journalism and policy, and guide platform interventions. Now, scholars are among its sharpest critics.
In our research for this essay, we found three interconnected critiques — the definitional, prevalence and causal critiques — of the dominant misinformation paradigm that may help illuminate a path forward.
The definitional critique points to the challenge of categorizing the world’s information as true or false. In many high-stakes contexts — such as elections, wars or public health crises — information is dynamic, with truth not only uncertain but being discovered (and re-discovered) in real time.
In this quickly changing world, can a researcher authoritatively identify misinformation? An archetypal example of this is the lab leak theory of Covid-19’s origins: The claim was initially dismissed as misinformation and, for months, moderated across many social media platforms. Now, intelligence officials consider it a credible theory.
The prevalence critique both builds on and reinforces the definitional critique. Social media datasets are large and optimized for search, enabling anecdata to be produced for almost any phenomenon of interest. But such anecdata suffer from the denominator problem: a count of troubling posts means little without the total volume of posts against which to compare it. Take, for example, the fact that the Russian-backed Internet Research Agency posted roughly 80,000 pieces of content on Facebook pages between 2015 and 2017, reaching an estimated 126 million users, according to a New York Times report. During roughly the same period, however, U.S. users saw more than 11 trillion posts from pages on the platform overall.
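To put those two figures on the same scale (a back-of-the-envelope calculation using only the numbers above, not a figure from the original reporting), the IRA content amounts to roughly:

\[
\frac{80{,}000}{11{,}000{,}000{,}000{,}000} \approx 7 \times 10^{-9}, \quad \text{or about } 0.0000007\% \text{ of the posts seen.}
\]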
Recent work, which measures misinformation as a proportion of overall information exposure, similarly finds the prevalence of misinformation to be small, bordering on insignificant. Fake news accounts for a mere “0.15% of Americans’ daily media diet,” according to one study published in Science Advances. When the definition of misinformation is expanded — for example, to articles published by what experts have identified as low-quality news domains — or the focus is narrowed to a specific platform or media type, the prevalence of misinformation rises. But the proportion remains relatively small (roughly 5-10%, depending on the study), and interpreting these results runs into the same definitional challenges. Moreover, while a small percentage of internet users do consume a much higher proportion of verifiably false information, they tend to be concentrated in the “long tails” of the distribution: hyper-partisans who opt into extreme information networks and are often already predisposed to the beliefs being espoused.
This dynamic — the concentration of misinformation among hyper-partisans — leads to the causal critique. Studies that measure the impact of online misinformation on political attitudes and behaviors suggest its effects are limited, if they exist at all. Beliefs tend to be entrenched, evolving over years of diverse social, experiential and informational inputs. Simply put, the public is not easily moved by new pieces of information; rather, people are often motivated to interpret the information they encounter through the lens of their established worldview.
Social media users who do encounter misinformation largely follow accounts they are likely to agree with and consume outlets that reflect their perspectives. As a result, digital misinformation generally preaches to the choir, potentially making attitudes or behaviors more extreme but not acting as a vector of mass influence or persuasion. If anything, the causal arrow may point in the opposite direction: beliefs may explain digital misinformation consumption more than the other way around.
Beyond True (& False)
These critiques have sparked scholarly disagreement regarding how we should define misinformation and what the literature truly teaches us. It’s easy to go further down the academic rabbit hole and come out the other side with uncertainty or, worse, intellectual tribalism. So we won’t.
Rather than endlessly refining definitions or debating study methodologies, we believe there’s a deeper issue embedded in the very word. One reason our collective efforts to combat misinformation have failed is that in rushing to group together falsehoods under the same analytical lens, we have jettisoned any understanding of how communication actually functions. We now have deep knowledge about how false information spreads and who may be more likely to believe it. But we have failed to fully account for how communication, culture, identity and politics are deeply entwined in the present moment.
Take, for example, Trump’s amplification of a false claim that Haitian immigrants were eating cats and dogs in Springfield, Ohio. Following the playbook of the misinformation paradigm, the claim was thoroughly evaluated. News articles and fact-checks proliferated, correcting the record.
In the week that followed, it became clear that the false claim was not really about the facts. As JD Vance said in defense of the pet-eating claims: “If I have to create stories so that the American media actually pays attention to the suffering of the American people, then that’s what I’m going to do.” The false claim aimed to build salience around immigration in general and Biden’s policies in particular — in this case, many of the Haitian immigrants in Springfield had arrived legally through the Humanitarian Parole Program. The point, it appears, was to communicate a visceral disgust for immigrants and for Biden’s immigration policy.
Similar dynamics were at play with Elon Musk’s frequent lies about DOGE. Fact-checking the “wall of receipts” does little if the actual communications goal is to keep people talking about government spending or to wage a thinly veiled war on perceived sources of liberal power.
In this way, misinformation can sidestep our attempts to protect healthy discourse when a speaker’s aims are more about agenda setting or mobilization, for example, than transmitting factual content. Information can be communicated to shape identities, influence culture, strategically impact the media environment and more. This dynamic can be especially pernicious when false information is used toward these ends. The truth, of course, matters, but it also clearly does not define the myriad effects of information.
This dynamic may also explain why doomsday fears about AI-powered misinformation haven’t come to pass, especially with regard to the 2024 election, leading some commentators to claim that we were “deepfaked by election deepfakes.” The framing of AI’s destructive impact on the public was built on the same faulty assumptions as the misinformation paradigm. Most analyses accepted a straightforward model of persuasion in which synthetic content could dupe the masses, altering beliefs and behaviors at scale. And yet the AI-generated content that circulated in the latest U.S. presidential election was mostly “cartoons and agitprop,” as Matteo Wong put it in The Atlantic, such as an AI-generated image of Trump in a prison jumpsuit, and it largely played to people’s pre-existing beliefs. This content can communicate emotions and mobilize the public, while shaping the aesthetic language of contemporary politics. But it hardly rises to the level of the democratic threat foretold by many experts.
As the technology improves and becomes more accessible, AI-generated content could certainly become more effective. But until then, what seems to most erode public trust in the information ecosystem is news coverage, especially on television, about AI-powered misinformation, along with the fact that public figures can rely on the liar’s dividend — calling into question even legitimate content given the perceived ubiquity of deepfakes — to evade accountability.
Into The Storm
With the misinformation paradigm facing criticism from all sides, the primary critique in recent months has been a political one. Many of the self-described protectors of speech have become our primary censors. A House subcommittee led by Republican Rep. Jim Jordan, investigating efforts to counter misinformation, issued letters requesting information and documents, chilling the speech of academics. Musk suspended or temporarily banned some journalists from X, threatened legal action against people who report on the identities of DOGE employees and filed a lawsuit against a research group engaged in constitutionally protected speech. Trump has authorized a list of words prohibited from federally funded science. The remainder of Trump’s term will certainly see more such twisted attempts to “bring back free speech” by controlling and shaping information flows.
Unlike the other critiques of the misinformation paradigm, this politicized critique leaves us with an attenuated view of democracy. Attempts by pro-democracy actors to protect against genuinely harmful misinformation, such as false claims about the integrity of elections, now face congressional investigations, legal action or online harassment.
Some policymakers pressure platforms to remove content they disagree with, perpetrating the same perceived censorship they once condemned. It seems these political actors were never intent on creating a fairer game; they were simply “working the refs” toward their own political victory. Unsurprisingly, many of these same actors are also undermining democratic institutions. President Trump still refuses to accept his loss of the 2020 election, as do many of his appointees. His vice president has suggested that the executive branch ignore adverse court rulings.
It can be appealing to assume that the misinformation paradigm is justified through a loose transitive logic: The most powerful critics of work to combat misinformation are also those who seek to erode democracy. So if we want to protect democracy, we must recommit to the ecosystem that has emerged over the past decade. This, we think, is the wrong approach.
We cannot continue to do things as we have, in hopes of better results. The prevailing paradigm of misinformation focuses on a statement’s truthfulness (over other features), emphasizes its potential for persuasion (over other harms) and demands corrections (over other strategies).
Amid political and legal attacks on misinformation research, it may seem like the wrong moment to question whether the field should continue on its current path. And yet we believe the moment for renewed thinking is not only ripe, but also urgent.
Renewed Thinking
To be clear, the dominant paradigm is not wrong about the democratic challenges of a public that cannot agree on basic facts or the unique dynamics introduced by digital platforms. It would be foolish not to heed Hannah Arendt’s warnings of how authoritarian leaders thrive on epistemic uncertainty, which allows them to not only consolidate control over the truth but also cast aside the independent foundations of a shared reality.
But the dominant paradigm has largely framed the relationship between misinformation and democracy as a mechanical problem with mechanical solutions. Efforts to combat misinformation have left us defensive, reacting to the strategies and narratives of those who spread it. The last decade has made clear that we aren’t going to fact-check or inoculate our way toward a healthier civic culture. It’s not enough to observe that democracy suffers from falsehoods.
Instead, we must begin with a more holistic understanding of how communication functions, and move beyond straightforward harms like persuasion toward more diffuse and more pernicious challenges around trust, identity and polarization. Although this presents a less clear-cut path forward, many new intellectual currents and programmatic efforts already point in the right direction.
Rather than correcting misinformation, some journalists and civil society organizations are working to meet the informational needs of communities directly — needs that cannot be reduced to sorting truth from lies. Newsrooms big and small, from USA Today to Chicago’s City Bureau, have been experimenting with more direct communication between journalists and readers. For example, in 2023 the Information Futures Lab at Brown University School of Public Health partnered with a Spanish-language fact-checking site, Factchequeado, and a Miami-based communications agency, We Are Más, to respond to questions from Hispanic diaspora communities in South Florida via a bilingual WhatsApp group. Using posts provided by a research team, community members answered questions ranging from “How do I get a mammogram when I am underinsured?” to “How are people handling the severe side effects of getting a fourth Covid shot?”
These efforts can’t be boiled down to fact-checks, which generally respond to claims already in circulation. Instead, these efforts empower communities to ask questions themselves and place experts in the position of meeting those needs.
This strategy has the potential for broad effects. Helping a community member obtain health care can both fill a critical informational need and function as a bulwark against health misinformation, which often preys on uncertainty. Getting valuable information directly to those who need it also works to build foundational trust in community institutions.
Technologists and policymakers are also working on “middleware”: third-party software that sits between platforms and users and can help facilitate individual choice and, possibly, more democratic platform experiences. Researchers hope that middleware, broadly speaking, will address two critical areas — how information is selected and organized, and how harmful content is moderated — by providing users with a range of options to determine their own online environments.
Recent work by academics and practitioners has described middleware’s transformative potential as “an alternative to both centrally controlled, opaque platforms and an unmoderated, uncurated internet.” Internet scholar and activist Ethan Zuckerman has been at the forefront of this movement, recently filing a lawsuit against Meta to establish legal protections for third-party tools that give users more control over their social media feeds. His legal challenge argued that users should be able to utilize externally developed software like Unfollow Everything to, for example, delete their newsfeeds on Meta’s platforms.
This approach represents a fundamental shift from company-controlled platform environments to user-driven architectures; individuals are empowered to choose from competing algorithms and filtering systems rather than being subject to platform-determined information diets. Middleware solutions could, for example, prioritize high-quality news outlets or increase the ideological diversity of sources to combat filter bubbles.
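To make the idea concrete, here is a minimal, purely illustrative sketch (not any existing middleware product or platform API; the Post fields, quality and ideology scores, and policy functions are all hypothetical) of how a user-chosen ranking policy might be applied to a feed the platform exposes:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    source_quality: float   # hypothetical 0-1 score from a ratings provider the user trusts
    source_ideology: float  # hypothetical placement from -1 (left) to +1 (right)

def rank_by_quality(feed: list[Post]) -> list[Post]:
    """One user-selected policy: prioritize posts from high-quality outlets."""
    return sorted(feed, key=lambda p: p.source_quality, reverse=True)

def rank_for_diversity(feed: list[Post]) -> list[Post]:
    """Another policy: alternate between sources on either side of the spectrum."""
    left = [p for p in feed if p.source_ideology < 0]
    right = [p for p in feed if p.source_ideology >= 0]
    mixed: list[Post] = []
    for pair in zip(left, right):
        mixed.extend(pair)
    # Append whatever remains from the longer side.
    mixed.extend(left[len(right):] + right[len(left):])
    return mixed

# The platform exposes the raw feed; the user, not the company, picks the policy.
feed = [
    Post("story A", source_quality=0.9, source_ideology=-0.4),
    Post("story B", source_quality=0.3, source_ideology=0.7),
    Post("story C", source_quality=0.7, source_ideology=0.2),
]
print([p.text for p in rank_by_quality(feed)])  # ['story A', 'story C', 'story B']
```

In a middleware ecosystem, a user could swap rank_by_quality for rank_for_diversity, or for a policy written by a third party they trust, without the platform’s permission; that is the shift from company-controlled to user-driven architectures described above.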
Technologists and scholars are also looking beyond individual tools to imagine an ecosystem where federated platforms like Mastodon and Bluesky integrate middleware as a core feature of the platform experience. This provides communities with the infrastructure and tools to proactively shape their own information environments, according to their values, preferences and needs — rather than fighting misinformation after it spreads.
Finally, scholars have already been working to expand our understanding of how information can impact us, recognizing that the most salient features may not necessarily be the content’s truth or falsity. Some scholars emphasize the financial motives of those behind disinformation campaigns, highlighting the troubling history between corporate power and scientific inquiry. Others have examined the “social roles” of fake news and explored how people make use of both true and false content in their communities to communicate, collaborate and make sense of the world. For example, in “Strangers in Their Own Land,” Arlie Hochschild examines how both high- and low-quality information helps Louisianians make sense of ecological collapse, government aid and growing economic precarity by providing “deep stories”: emotionally grounded narratives that impose order on the world regardless of veracity.
Researchers are also examining how identity determines who is exposed to what kinds of information, who shares and believes it, and how it is produced. Much of this work focuses on how the deep memetic frames embedded in misinformation, the socially shared lenses through which we interpret the world, shape our politics and influence how we relate to each other.
There are, of course, other laudable efforts that we aren’t able to explore here: efforts to bridge divides and reduce polarization, to rethink technologies for civic engagement and preserve our attention spans, to create localized social media environments. Academics have launched partnerships with practitioners — such as the University of Washington Center for an Informed Public’s partnership with local libraries and Stanford University researchers’ collaboration with school districts on civic online reasoning — to run large-scale program evaluations in the wild.
These strands of research have produced important insights. But they have rarely received the funding or media coverage that flows to studies more narrowly focused on tracking misinformation, notably the research behind the now-familiar adage that “false news travels faster than true news.”
Taken together, these ideas all point toward a different project: centering and empowering communities, and building healthier information environments before falsehoods take root. What they lack is the common vocabulary, infrastructure, and investment that once bound the misinformation field — and that is the challenge ahead.
Better Together
The philosopher Daniel Williams argues that the misinformation paradigm gained salience because it offers elites the mirage that the public can be controlled by pulling the right levers. As he wrote in the Boston Review in 2023, “Our political adversaries are simply ignorant dupes, and with enough education and critical thinking, they will come to agree with us; there is no need to reimagine other social institutions or build the political power necessary to do so.”
What Williams downplays is that the misinformation paradigm itself is a social institution that carries political power. Over the last decade, a diverse ecosystem has coalesced around the topic: journalists, civil society organizations, community groups, academics, policymakers, technology companies, funders and more. Members of this ecosystem charted a common direction with shared language, overlapping priorities and interconnected networks. Few subject areas have been able to coalesce such a broad set of actors so quickly, organized around a single problem.
Our challenge now is to expand the scope of how we defend democracy in the digital age while preserving the institutional momentum of the past decade. The misinformation paradigm, for all its limitations, has shown that rapid, large-scale coordination around democratic challenges is possible.
Coordination among such a diverse group is both nearly impossible and utterly essential if we are to avoid balkanization. It will not be easy to maintain that institutional energy as we figure out how our contemporary communication system can strengthen democracy, rather than myopically focusing on correcting falsehoods. In this moment of escalating attacks on the foundations of our democracy, we must eschew the simplified frameworks and old playbooks that have failed to make meaningful progress in improving our informational lives. Not to the detriment of democracy, but in defense of it.