Building A Prosocial Media Ecosystem

Credits

Glen Weyl is Founder of the RadicalxChange Foundation, the Microsoft Research Plural Technology Collaboratory and the Plurality Institute, and co-author of “Radical Markets: Uprooting Capitalism and Democracy for a Just Society” and “⿻ 數位 Plurality: The Future of Collaborative Technology and Democracy.”

Audrey Tang is Taiwan’s cyber ambassador and served as its first digital minister (2016-2024). Tang is also a senior fellow at the Project Liberty Institute and Omidyar Senior Advisor at Mozilla Foundation.

Jacob Mchangama is the executive director of The Future of Free Speech and a research professor at Vanderbilt University. Mchangama is the author of “Free Speech: A History From Socrates to Social Media.”

In 1942, shortly after the United States entered World War II, there was widespread concern among media leaders that the free press would be among the war’s casualties. The polarized yellow journalism of the interwar period was widely blamed for fueling isolationism and undermining national morale. To preempt the potential nationalization of media, Time magazine founder Henry Luce asked University of Chicago President Robert Hutchins to convene a commission on press freedom and responsibility. Its goal was to lay out standards of professionalism that would come to define the U.S. industry and its regulation in the following decades.

Their 1947 report laid out principles that can be briefly summarized as: 

  1. Drawing a distinction between “news” and “opinion”
  2. Defining news as information broadly agreed upon across a diverse society
  3. Ensuring that opinions dividing society are represented with balance

For much of the postwar period, implementing the Hutchins principles was the central responsibility of editorial staff in American media institutions and, through the “fairness doctrine,” of federal media regulators. Before the Hutchins Commission, U.S. media generally carried more misinformation and was more polarized than it is today. The “golden age” of journalism that many look back fondly on was, to a large extent, a result of the Hutchins Commission’s standards of journalistic integrity that separated “news” that bridges divides from “opinion” that reflects them.

But there is no going back to the simplistic and often exclusionary media landscape of the mid-20th century. For worse and often for better, 21st-century social media enables a cacophony of voices to be accessible to anyone with an internet connection, platforming a diversity of ideas and opinions and stories across the world. But over the years, it has become clear that social media faces a crisis perhaps even more dramatic than that which precipitated the Hutchins Commission — schools banning social media and even smartphones as evidence of their psychological harms mounts; Elon Musk turning X into a mouthpiece for his right-wing personal and policy priorities; Meta adjusting its content moderation policies in ways that appear to align with the priorities of the Trump administration; and TikTok potentially being banned outright. 

We believe, therefore, that it’s time to relearn some of the commission’s lessons and adapt them to our pluralistic, digital age.

With a deeply polarizing U.S. election fresh in our minds, the need to redesign platforms that bridge divides has never been more urgent. In a paper this essay is adapted from, we and our co-authors showed how the lessons of the Hutchins Commission can be adapted to the far more plural and digital world we live in today. There is no longer a monolithic “U.S. audience,” but we have advanced computational tools that allow us to implement the Hutchins principles across the diverse and intersecting audiences that social media reaches. We suggest three concrete strategies to achieve this:

  1. Provide social context, such as by inviting the communities in which a post is widely accepted or divisive to annotate it. The goal here is to facilitate the production of common understandings, or “meta-consensus,” that undergird the social fabric. The community-clustering algorithms that drive recommender systems already track this information; we need only make it transparent to users.
  2. Rank content to surface what relevant communities have in common and to ensure that all relevant communities receive a fair share of attention. This addresses social fragmentation: algorithms already being harnessed in systems like X’s Community Notes could scale the Hutchins principles across a diverse social network, as sketched below.
  3. Harness data to facilitate the formation of cross-cutting communities and a business model under which platforms can profit from encouraging the emergence and development of such communities.
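
To make the first two strategies concrete, here is a minimal sketch in Python of community-aware annotation and bridging-based ranking. It assumes each user’s community assignment is already available (in practice it would come from the clustering algorithms mentioned above); the function names and data are hypothetical illustrations, not any platform’s actual API.

```python
# Minimal sketch: annotate posts with per-community approval, then rank by a
# "bridging" score that rewards agreement across communities. Hypothetical
# data and names; community assignments are assumed given.
from collections import defaultdict

def approval_by_community(votes, community_of):
    """votes: (user, post, approved) triples -> {post: {community: approval rate}}."""
    tally = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # post -> community -> [ups, total]
    for user, post, approved in votes:
        community = community_of[user]
        tally[post][community][0] += int(approved)
        tally[post][community][1] += 1
    return {post: {c: ups / total for c, (ups, total) in comms.items()}
            for post, comms in tally.items()}

def bridging_score(rates):
    """Score a post by its *lowest* approval rate across communities, so content
    popular in only one camp ranks below content with broad agreement."""
    return min(rates.values()) if rates else 0.0

votes = [("ann", "p1", True), ("bob", "p1", True),   # p1: approved by both camps
         ("ann", "p2", True), ("bob", "p2", False)]  # p2: splits them
community_of = {"ann": "blue", "bob": "red"}

rates = approval_by_community(votes, community_of)
for post in sorted(rates, key=lambda p: bridging_score(rates[p]), reverse=True):
    print(post, rates[post])  # p1 first: the cross-community post surfaces on top
```

Ranking on the minimum approval rate is one simple way to encode the bridging idea; a real system would also need to guarantee each community a fair share of attention, for instance by interleaving the top posts of under-served communities into the feed.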

Thus, rather than amplifying content that spreads misinformation or fuels outrage, social media platforms would profit from prioritizing user-driven promotion of constructive content. Instead of relying on government mandates or the decisions of tech billionaires, platforms could integrate mechanisms that enable users to provide context, fostering greater understanding and strengthening community ties.

These aren’t far-fetched ideas or utopian fantasies. They are proven, tested strategies that helped transform Taiwan into one of the most civically engaged and least socially polarized places in the world. 

For too long, social media has been driven by a single-minded pursuit: maximizing engagement, often at the cost of a shared social fabric. People are accustomed to platforms serving engrossing, personalized content, often fueled by outrage and disgust, with little or no context about who else is seeing or approving it, which makes it hard to know who is consuming content and who agrees with what. The result? A fragmented, increasingly hostile and alienating digital landscape where genuine connection is sacrificed for clicks that leave us anxious and isolated.

The usual response — top-down speech restrictions — undermines freedom and further disempowers communities by concentrating power in technological and technocratic elites who, intentionally or not, silence voices that challenge their narrow perspectives.

But it doesn’t have to be this way. In 2014, concern about a trade agreement that would have allowed technology controlled by Beijing to pervade Taiwanese society led to a three-week-long (peaceful) occupation of the national legislature. As part of the g0v civic tech movement, one of us — Audrey — harnessed digital tools to help bring the conflicting sides together and surface a coherent set of demands that allowed this to become perhaps the only example of an Occupy movement whose demands were fully adopted by a sitting government. 

Audrey was then invited to serve as a cabinet minister and tasked with scaling up these tools to empower citizens, especially young people, to build support for legislation that bridges social divisions. This effort led to the enactment of dozens of new laws, established Taiwan as Asia’s leader in freedom of speech and online rights, fought Chinese disinformation without censorship, and fostered a youth population that is among the world’s least divided and most civically engaged, according to international evaluations. 

This experience inspired X’s Birdwatch (now Community Notes) system, which has become a gold standard for involving citizens in uplifting content that brings people together. In fact, despite his other radical changes to the platform and occasional grumbles, Musk has left this wildly successful and popular feature in place. X volunteers add contextualizing notes to content that they believe is misleading and vote on the notes proposed by others. Users are classified by the political leanings of their votes, and only notes that create consensus across political divides are prominently displayed.
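
The open-sourced scoring behind Community Notes implements this with a matrix-factorization model. The toy Python version below (using NumPy) captures the core idea: each rating is modeled as a note’s consensus intercept plus a user-leaning times note-polarity term, and notes are ranked only on the intercept, the helpfulness that remains after partisan alignment is accounted for. This is a simplified sketch of the principle, not X’s production algorithm.

```python
# Toy bridging model in the spirit of Community Notes' open-sourced scorer:
# rating ~ mu + b_note + f_user * f_note. Only b_note (consensus helpfulness)
# drives ranking; the f terms soak up partisan agreement. Simplified sketch.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[1, 1, 0],   # rows: raters 0-2 lean one way ...
              [1, 1, 0],
              [1, 1, 0],
              [1, 0, 1],   # ... raters 3-5 lean the other
              [1, 0, 1],
              [1, 0, 1]], dtype=float)  # columns: notes; note 0 helps both camps
n_users, n_notes = R.shape

mu = R.mean()
b_note = np.zeros(n_notes)               # consensus intercept (what we rank on)
f_user = rng.normal(0, 0.1, n_users)     # latent rater leaning
f_note = rng.normal(0, 0.1, n_notes)     # latent note polarity
lr, reg = 0.05, 0.1
for _ in range(2000):                    # plain gradient descent on squared error
    err = R - (mu + b_note[None, :] + np.outer(f_user, f_note))
    b_note += lr * (err.sum(axis=0) - reg * b_note)
    f_user += lr * (err @ f_note - reg * f_user)
    f_note += lr * (err.T @ f_user - reg * f_note)

print(np.round(b_note, 2))  # note 0 scores highest: cross-divide consensus wins
```

Even in this tiny example, the two polarizing notes are explained away by the leaning-times-polarity term, while the note both camps rated helpful keeps the highest intercept.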

This system has become the heart of X’s attempts to anchor conversation on the platform in shared reality and was particularly significant in maintaining a basis of fact during the polarizing aftermath of the October 7 attacks in Israel. Academic research has consistently shown that this community-driven approach creates greater trust, and thus is more effective in grounding online discourse, than approaches based on top-down content moderation and fact-checking by often-distrusted experts.

“The need to redesign platforms that bridge divides has never been more urgent.”

It should be no surprise, then, that many other platforms are looking to adopt this approach. YouTube announced its intention to build a similar system this past summer. In response to the election, Meta recently announced it plans to replace its existing content moderation system with one inspired by Community Notes. Yet while these programs hold some promise, they do not address the root issue: Community Notes adds context to posts that are brought to prominence by algorithms that often promote outrageous, misleading and decontextualized content. 

In this moment of uncertainty for social media, we should have higher ambitions and aim to reimagine the whole experience, recommendation system and business model of social media to better serve communities, culture and self-government.

What the Taiwan and Community Notes experiences highlight is that “problematic content” is not the main problem, and thus “content moderation” (i.e., censorship) is not the solution. The real problem is that platforms obscure the social context of what we see, leading us to mistakenly believe that “viral” content reflects a broad consensus when in reality it often reflects the views of a narrow community. This leaves us with an isolating uncertainty about who such content is for and who is engaging with it. Thus, political elites tend to significantly overestimate the share, reach and impact of misinformation on social media — even though, despite all its visibility, such content is primarily shared and believed by a small subset of hyper-partisan “supersharers.”

The problem, therefore, is too little speech and information, not too much.

This creates two problematic effects. First, the “false consensus effect” makes us overestimate agreement, believing that content we see — because it is popular in a narrow community we are part of or familiar with — is broadly agreed upon. Second, the “hidden agreement effect” (also known as “pluralistic ignorance”) leaves valuable shared beliefs unspoken and unseen. Together, these effects obscure the potential connections we might forge, deepening divisions and leaving us isolated from those who share our cultural and experiential commonalities.

These effects are bad for business! Platforms that do a better job of creating a sense of common audiences, shared experiences and mutual understanding are much more attractive sites for brand advertising. This could open entirely new revenue streams, charging communities (like local governments, companies and religious organizations) for highlighting content that brings their members to the common understanding those organizations have always striven to achieve.

Therefore, instead of removing or shadowbanning controversial content, platforms should empower participants to help offer meaningful context. This can include clearly showing which communities embrace certain views or watch certain videos and labeling content as “shared ground” when it reflects widely accepted views or “different perspectives” when it’s more controversial. In fact, these labels may differ across community contexts: One piece of content might be, for example, controversial in my church but a consensus in my political party, or vice versa.
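
A minimal sketch of such context-dependent labeling, assuming per-community approval rates are already computed (as in the earlier ranking sketch); the 75% threshold and the community names are purely illustrative:

```python
# Label the same content differently in each community's context.
# The threshold is an illustrative choice, not a proposed standard.
def context_label(approval_rate, threshold=0.75):
    return "shared ground" if approval_rate >= threshold else "different perspectives"

approval = {"my church": 0.35, "my political party": 0.90}  # one post, two contexts
print({community: context_label(rate) for community, rate in approval.items()})
# {'my church': 'different perspectives', 'my political party': 'shared ground'}
```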

Revealing the social provenance of information lets users gauge credibility and consensus for themselves, based on the reactions of other users rather than the word of a gatekeeper. In doing so, we protect free speech while ensuring people have the tools they need to navigate misinformation and heated debates.

“Problematic content is not the main problem and thus censorship is not the solution.”

Such journalistic ethics are obviously more challenging to uphold in the much more diverse environment of social media, with millions of audiences and communities around the world. But systems like Community Notes show how much richer input from users can be leveraged to extend those ethics and thereby empower self-government by users like never before. Thriving platforms like Reddit, Japan’s SmartNews or Ground News in the U.S. have long prized the value of context and community input. Reddit allows its myriad topic-specific “subreddits” to manage their own content moderation. SmartNews uses tools like those underlying Community Notes to allow readers to navigate a diversity of perspectives on any news area. Even LinkedIn showcases verified user affiliations (educational, professional, volunteer) and helps users discover the topics that are important to those intersecting communities.

Of course, arguments about ethics or responsibility will not sway most platform leaders, who may be more focused on profit or seek to drive divisive political agendas. But still, these approaches may prove useful for them too.

First, platforms are leaving an important revenue source on the table: charging communities for the value of social cohesion. Nonprofit, public and government media invest billions every year in promoting this much less effectively than platforms could, offering a revenue stream that is comparable to and potentially greater than advertising. And even advertising revenue might be significantly enhanced by creating spaces where people know others will see ads, allowing them to build a brand rather than just induce people to return to purchase from abandoned online shopping carts. After all, Super Bowl ads are far more valuable per viewer than online ads.

Second, even if platforms explicitly seek to tilt their content politically, this only affects one divide among many. They still have an interest in promoting cohesion within their partisan bloc and across non-political communities in their ecosystem. After all, the Hutchins Commission did little to promote balanced views of competing international systems like the Western and Soviet blocs. Instead, it promoted coherence within the U.S. Thus, algorithmic applications of its principles are relevant even within sub-communities that are themselves divided from others and for non-political communities.

Finally, the social media landscape is becoming increasingly fragmented and competitive, as our collaborator Renée DiResta observed in a recent Noema piece that helped inspire this one. Politics and business ructions are driving participants away from dominant platforms like X toward new entrants like Bluesky, Mastodon and Truth Social. The rise of AI and a broader diversity of content formats (like virtual reality) are disrupting more culturally oriented platforms and fueling the rise of TikTok, Roblox and more. And law, ranging from antitrust to interventions addressing external manipulation and disinformation, is disrupting many existing platforms.

All of these forces are making the “safe bets” of existing social media business models increasingly risky and creating spaces for a range of competing alternatives and innovation at previously entrenched platforms. This makes today the most realistic moment since social media was born in the mid-2000s to make it the tool it always should have been — for it to strengthen rather than strip-mine the social fabric. In doing so, it would both revive the ailing commitment to internet freedom and signal a return to the best parts of America’s postwar media golden age — combining the best of new and old media in a way made possible by the advent of cheap and powerful AI.

If social platforms can provide such bridging and balancing content to their users, it would give them a sense of their place in a rich and diverse social landscape, allowing them to form connections and understanding rather than to retreat into anxious extremism. Concretely, platforms should provide:

  1. Social provenance transparency: Label which communities are engaging with content. For instance, a viral post could display whether it resonates across diverse groups (shared ground) or primarily within a specific niche (different perspectives).
  2. Community-driven curation: Empower communities to promote content they value through mechanisms like financial contributions, democratic voting or subscriptions (one illustrative mechanism is sketched after this list). This shifts power from algorithms to people, emphasizing collective values over clickbait.
  3. User agency: Provide tools for individuals to prioritize content aligned with their values and communities, fostering a sense of control and belonging. Users could customize feeds based on interests or trusted sources, mitigating algorithmic manipulation.
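
As one possible mechanism for the community-driven curation in item 2, a promotion rule can weight breadth of support over the size of any single contribution. The square-root aggregation below, in the spirit of quadratic funding, is just one design among many, and the numbers are hypothetical:

```python
# Hypothetical curation rule: a community's boost for a post grows with how
# many distinct members back it, not how much any single backer spends.
from math import sqrt

def community_boost(contributions):
    """contributions: {member: amount} -> boost favoring broad support."""
    return sum(sqrt(amount) for amount in contributions.values()) ** 2

broad = {"a": 1, "b": 1, "c": 1, "d": 1}  # four members contribute $1 each
narrow = {"whale": 4}                      # one member contributes $4
print(community_boost(broad), community_boost(narrow))  # 16.0 vs 4.0
```

The same total spend earns four times the boost when it comes from four members rather than one, pushing promoted content toward what a community broadly values rather than what its wealthiest member prefers.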

By avoiding centralized and often opaque speech restrictions, this approach can build both freedom and trust that participants are not being targeted by platform administrators for personal or geopolitical reasons. Instead, it empowers individuals and communities to exercise their rights to free expression as responsible and informed citizens by providing the nuanced context needed to navigate our increasingly complex world.

Of course, such approaches will not, by themselves, create any kind of informational utopia. Many citizens will still choose to devote their time to the communities they most value, which in many cases may be narrower than one might hope. At the same time, large established communities will be better organized and financed, and thus better able to pay to boost content that brings them together, potentially limiting social dynamism and the strength of historically marginalized communities. Our design does little to directly address these challenges.

However, by making social connections, understanding and cohesion explicit and enabling investment in them, our design would clear pathways to address these challenges. Instead of complaints or calls for regulation and censorship, those concerned about division or social stasis would have obvious ways to invest. And platforms hoping to encourage new business would have a compelling reason to support emerging communities. Furthermore, while our design would do nothing to directly force citizens to broaden their associations, it would create a transparency that currently doesn’t exist about the choices citizens are implicitly making to associate narrowly — choices they might make differently if they saw them clearly.

“The problem is too little speech and information, not too much.”

Platforms like Bluesky and Skylight, with their emphasis on openness and experimentation, are built on the idea that most citizens do not want social media that isolates and divides them and that, given meaningful choice, they will embrace healthier options. Innovation there could offer a path to the rebirth of TikTok as an American platform free from opaque manipulation by foreign actors. By more broadly applying such principles, which they have already praised, platforms like Meta and X could demonstrate the genuineness of their commitment to both free speech and social cohesion as their social role dramatically evolves. And these approaches offer a natural way for platforms like SmartNews, LinkedIn and Reddit to expand their strengths and business models.

Equally important, a prosocial media ecosystem will encourage regulatory frameworks that align with democratic values — openness, transparency and free expression — rather than the current trajectory, where democracies find themselves on the defensive, resorting to measures long championed by closed societies to safeguard their systems of governance.

The stakes are high, but so is the potential for a brighter future. Through this vision, social media can become a force for unity through diversity rather than division and isolation. This transformation requires collaboration between policymakers, technologists and communities, but the outcome is worth it: a connected world where we engage with one another thoughtfully, building shared understanding and a stronger social fabric.