OpenAI Proposes A ‘Social Contract’ For The Intelligence Age

It fills the vacuum left by an unimaginative political class.

Artwork by Ibrahim Rayintakath for Noema Magazine.

Nathan Gardels is the editor-in-chief of Noema Magazine. He is also the co-founder of and a senior adviser to the Berggruen Institute.

It is a mark of the paucity of social imagination among America’s political class, whether a supine Congress beholden to the president’s personality cult or the moribund Democratic Party bereft of fresh ideas, that thinking through the big picture of a new social contract for the Age of AI has been left to the Big Tech disrupters themselves.

Obviously, one must take with a wary grain of silicon whatever Big Tech proposes on the warranted suspicion that it will primarily serve their self-interest. Yet when a company like Anthropic, for example, pushes back against the Pentagon over the use of its frontier models for mass surveillance and autonomous weapons, its principled stance is worthy of embrace.

It is in this context that OpenAI’s proposed “Industrial Policy For The Intelligence Age: Ideas To Keep People First,” released last week, should be taken seriously. It is more visionary and comprehensive than anything that’s emerged so far from the sluggish precincts of public policymakers.

The Case For A New Industrial Policy

“Society has navigated major technological transitions before, but not without real disruption and dislocation along the way,” the OpenAI appeal begins. “While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translates into broader opportunity and greater security.

“For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production. They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education.

“History shows that democratic societies can respond to technological upheaval with ambition: reimagining the social contract, mediating between capital and labor, and encouraging broad distribution of the benefits of technological progress while preserving pluralism, constitutional checks and balances, and freedom to innovate. The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone.”

For OpenAI, the scope and scale of the impact of superintelligence means society “is entering a new phase of economic and social organization that will fundamentally reshape work, knowledge, and production.”

It goes on: “In normal times, the case for letting markets work on their own is strong. Historically, competition, entrepreneurship, and open economic participation have lifted living standards and expanded opportunity. Capitalism, imperfect as it is, remains an effective system for translating human ingenuity into shared prosperity. But industrial policy can play an important role when market forces alone aren’t sufficient—when new technologies create opportunities and risks that existing institutions aren’t equipped to manage.”

Three Principles

In OpenAI’s framework, three principles should guide the social contract with superintelligence:

  • Share prosperity broadly. “The promise of advanced AI is not just technological progress, but a higher quality of life for all. Everyone should have the opportunity to participate in the new opportunities AI creates. Living standards should rise, and people should see material improvements through lower costs, better health and education, and more security and opportunity. If AI winds up controlled by, and benefiting only a few, while most people lack agency and access to AI-driven opportunity, we will have failed to deliver on its promise.”
  • Mitigate risks. “The transition toward superintelligence will come with serious risks—from economic disruption, to misuse in areas like cybersecurity and biology, to the loss of alignment or control over increasingly powerful systems. Without effective mitigation, people will be harmed. Avoiding these outcomes requires building new institutions, technical safeguards, and governance frameworks so that advanced systems remain safe, controllable, and aligned—reducing the risk of large-scale harm, protecting critical systems, and ensuring people can rely on AI in their daily lives. As capability scales, safety must scale with it.” In addition, “governments should implement common-sense AI regulation—not to entrench incumbents through regulatory capture but to protect children, mitigate national security risks, and encourage innovation.”
  • Democratize access and agency. “As capabilities advance, some systems may need to be controlled for safety. But broad participation in the AI economy should not depend on access to the most powerful models—it should depend on access to AI that is useful, affordable, preserves people’s privacy and expands their individual agency. Avoiding a concentration of wealth and control will require ensuring that people everywhere can use AI in ways that give them real influence at work, in markets, and through democratic processes.”

A Public Wealth Fund

One of the most critical structural shifts from the industrial era to the age of intelligence is that productivity growth and wealth creation are being divorced from jobs and income. As the value of labor in production diminishes while the value created by intelligent machines increases, wealth will mostly accrue to those who “own the robots.” Those at the top will get richer while those who labor for a living will grow poorer or be displaced altogether. Thus, sharing the wealth by broadening ownership across the AI economy is key to closing the growing gap of economic disparity.

OpenAI addresses this issue head-on by calling for the creation of a “Public Wealth Fund that provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth. A Public Wealth Fund is designed to ensure that people directly share in the upside of that growth. Policymakers and AI companies should work together to determine how to best seed the Fund, which could invest in diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI. Returns from the Fund could be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth, regardless of their starting wealth or access to capital.”

While the formal OpenAI proposal does not specify “how best to seed” such a public wealth fund, Sam Altman has proposed elsewhere that it be “capitalized by taxing companies above a certain valuation at 2.5% of their market value each year, payable in shares transferred to the fund.”

If universal savings and investment accounts were created in this way, all citizens could share in the compounding returns generated by the growth of AI companies, just as the top 10% of Americans, who own 93% of all equities, do today.
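To make the compounding intuition concrete, here is a back-of-the-envelope sketch in Python of how a 2.5% levy paid in shares could accumulate. All the specific figures (a single company worth $1 trillion, 7% annual growth, a 20-year horizon) are illustrative assumptions for the sake of arithmetic, not part of OpenAI's or Altman's proposal.

```python
def simulate_fund(market_cap: float, growth_rate: float,
                  levy: float, years: int) -> float:
    """Illustrative only: a fund that receives `levy` (a fraction) of a
    company's market value each year, paid in shares, with both the
    fund's holdings and the company appreciating at `growth_rate`."""
    fund = 0.0
    for _ in range(years):
        fund *= 1 + growth_rate        # existing holdings appreciate
        market_cap *= 1 + growth_rate  # the levied company also grows
        fund += levy * market_cap      # annual levy, paid in shares
    return fund

# Hypothetical numbers: one $1T company, 7% annual growth,
# the 2.5% levy Altman has floated, over 20 years.
total = simulate_fund(1e12, 0.07, 0.025, 20)
```

Under these assumed numbers, twenty annual levies of roughly $25 billion each compound to close to $2 trillion, several times the sum of the nominal levies, which is the mechanism by which such a fund would let non-investors capture equity-style returns.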

OpenAI released its proposed new social contract not so much as a finished blueprint but as a prompt and starting point for “a conversation” about what kind of future open societies should strive for in a world transformed by technology.

It is incumbent upon the political class to take up the challenge, appropriating and translating this vision into policies that encompass the general interest of society.