A transformation is underway that promises — or threatens — to disrupt virtually all of our long-standing conceptions of our place on the planet and our planet’s place in the cosmos.
The Earth is in the process of growing a planetary-scale technostructure of computation — an almost inconceivably vast and complex interlocking system (or system of systems) of sensors, satellites, cables, communications protocols and software. The development of this structure reveals and deepens our fundamental condition of planetarity — the techno-mediated self-awareness of the inescapability of our embeddedness in an Earth-spanning biogeochemical system that is departing severely from the relative stability of the previous ten millennia. This system is both an evolving physical and empirical fact and, perhaps even more importantly, a radical philosophical event — one that is at once forcing us to face up to how differently we will have to live, and enabling us, in practice, to live differently.
To help us understand the implications of this event, the Berggruen Institute is launching a new research program area, in partnership with the One Project foundation: Antikythera, a project to explore the speculative philosophy of computation, incubated under the direction of philosopher of technology Benjamin Bratton.
The purpose of Antikythera is to use the emergence of planetary-scale computation as an opportunity to rethink the fundamental categories that have long been used to make sense of the world: economics, politics, society, intelligence and even the very idea of the human as distinct from both machines and nature. Questioning these concepts has of course long been at the heart of the Berggruen Institute’s research agenda, from the Future of Capitalism and the Future of Democracy, to Planetary Governance, the Transformations of the Human, and Future Humans. The Antikythera program described here exists on its own, but also in dialogue with each of these other areas.
For Bratton and the Antikythera team, planetary-scale computation demands that we reconsider: geopolitics, which will increasingly be organized around parallel and often competing “hemispherical stacks” of computational infrastructure; the process of production, distribution and consumption, which will now take the form of “synthetic catallaxy;” the nature of computational cognition and sense-making, which is no longer attempting merely to artificially mimic human intelligence, but is instead producing radically new forms of “synthetic intelligence;” the collective capacity of such intelligences, which is not located only in individual sentient minds, but rather forms an organic and integrated whole we can better think of as an emergent form of “planetary sapience;” and finally, the use of modeling to make sense of the world, which is increasingly done through the computational “recursive simulation” of many possible futures.
Applications are now open to join the program’s fully funded five-month interdisciplinary research studio, based in Los Angeles, Mexico City and Seoul from February to June 2023. This studio will be joined by a cohort of over 70 leading philosophers, research scientists and designers.
To mark Antikythera’s launch, Noema Deputy Editor Nils Gilman spoke with Bratton about the key concepts motivating the program.
Nils Gilman: The Antikythera mechanism was discovered in 1901 in a shipwreck off the coast of a Greek island. Dated to roughly 200 BC, the mechanism was an astronomical device that not only calculated things, but was likely used to orient navigation across the surface of the globe in relation to the movements of planets and stars. Tell me why this object is an inspiration for the program.
Benjamin Bratton: For us, the Antikythera mechanism represents both the origin of computation and an inspiration for the potential future of computation. Antikythera locates the origin of computation in navigation, orientation and, indeed, in cosmology — in both the astronomic and anthropological senses of the term. Antikythera configures computation as a technology of the “planetary,” and the planetary as a figure of technological thought. It demonstrates, contrary to much of continental philosophical orthodoxy, that thinking through the computational mechanism allows not only “mere calculation” but also allows intelligence to orient itself in relation to its planetary condition. By thinking with the abstractions so afforded, intelligence has some inkling of its own possibility and agency.
The model of computation that we seek to develop isn’t limited to this particular mechanism, which happened to emerge in roughly the same time and place as the birth of Western philosophy. Connecting a philosophical trajectory to this mechanism suggests a genealogy of computation that includes, for example, the Event Horizon Telescope, which stretched across one side of the globe to produce an image of a black hole. Closer at hand, it also includes the emergence of planetary-scale computation in the middle of the 20th century, from which we have deduced other essential facts about the planetary effects of human agency, including climate change itself.
Gilman: How exactly is the concept of climate change a result of planetary-scale computation?
Bratton: The models that we have of climate change are ones that emerge from supercomputing simulations of Earth’s past, present and future. This is a self-disclosure of Earth’s intelligence and agency, accomplished by thinking through and with a computational model. The planetary condition is demystified and comes into view. The social, political, economic and cultural — and, of course, philosophical — implications of that demystification are not calculated or computed directly. They are qualitative as much as quantitative. But the condition itself, and thus the ground upon which philosophy can generate concepts, is only possible through what is abstracted in relation to such mechanisms.
Gilman: Does this imply that computation is as much about discovery of how the world works as it is about how it functions as a tool?
Bratton: Yes, but the two poles are necessarily combined. One might consider this in relation to what the great Polish science-fiction writer, Stanislaw Lem, called “existential technologies.” I draw a related distinction between instrumental and epistemological technologies: those, on the one hand, whose primary social impact is how they mechanically transform the world as tools, and those, on the other, that impact society more fundamentally, by revealing something otherwise inconceivable about how the universe works. The latter are rare and precious.
At the same time, planetary-scale computation is also instrumentally transforming the world, physically terraforming the planet in its image through fiber-optic cables linking continents and data centers bored into mountains, satellites encrusting the atmosphere, all linked to the glowing glass rectangles we hold in our hands. But computation is also an epistemological technology. As it drives astronomy, climate science, genomics, neuroscience, artificial intelligence, medicine, geology and so on, computation has revealed and demystified the world and ourselves and the interrelations between them.
Gilman: This agenda seems rather different from how philosophy and the humanities deal with the question concerning computation.
Bratton: The present orthodoxy is that what is most essential — philosophically, ethically, politically — is the uncomputable. It is the uncontrollable, the indescribable, the unmeasurable, the unrepresentable. It is that which exceeds signification or representation — the ineffable. For much of the Continental tradition, calculation has been understood as a degraded, tertiary, alienated, violently stupid form of thought. Can we count the number of times that Jacques Derrida, for example, uses the term “mere calculation” to differentiate it from the really deep, significant philosophical work?
The Antikythera program clearly takes a different approach. We know that thinking with the mechanism is a precondition for grasping what formal conceptualization and speculative thought must grapple with. What is at stake is not simply a better philosophical orientation, but the futures before us that must be conceived and built. Besides the noble projects I have described, many of the other purposes to which planetary-scale computation is applied are deeply destructive. We turned it into a giant slot machine that gives people what their lizard brain asks for. Computation is perhaps based on too much “human-centered design” in the conventional sense. This isn’t inevitable. It’s the result of the misorientation of the technology and a disorientation of our concepts for it.
The agenda of the program isn’t just to map computation but rather to redefine the question of what planetary-scale computation is for. How must computation be enrolled in the organization of a viable planetary condition? It’s a condition from which humans emerge, but for the foreseeable future, it will be composed in relation to the concepts that humans conceive.
Gilman: What makes the current emergent forms “planetary”? In other words, what do you mean by “planetary-scale” computation?
Bratton: First, it must be affirmed that computation was discovered as much as it was invented. The artificial computational appliances that we have developed to date pale in comparison to the computational efficiencies of matter itself. In this sense, computation is always planetary in scale; it’s something that biology does and arguably biospheres as a whole. However, what we’re really referring to is the emergence, in the middle of the 20th century, of planetary computational systems operating at continental and atmospheric scale. Railroads linked continents, as did telephone cables, but now we have infrastructures that are computational at their core.
There is continuity with this history and there are qualitative breaks. These infrastructures not only transmit information but also structure and rationalize it along the way. We have constructed, in essence, not a single giant computer, but a massively distributed accidental megastructure. This accidental megastructure is something that we all inhabit, that is above us and in front of us, in the sky and in the ground. It’s at once a technical and an institutional system; it both reflects our societies and comes to constitute them. It’s a figure of totality, both physically and symbolically.
Gilman: Computation is itself an enormous topic. How do you break it down into more specific areas for focused research?
Bratton: The Antikythera program has five areas of focused research: Synthetic Intelligence, the longer-term implications of machine intelligence, particularly through the lens of natural-language processing; Hemispherical Stacks, the multipolar geopolitics of planetary computation; Recursive Simulations, the emergence of simulation as an epistemological technology, from scientific simulation to VR/AR; Synthetic Catallaxy, the ongoing organization of artificial computational economics, pricing and planning; and Planetary Sapience, the evolutionary emergence of natural/artificial intelligence and how it must now conceive and compose a viable planetarity.
Let me quickly expand on each of them, though each could fill a discussion of its own. “Synthetic intelligence” refers to what is now often called “AI,” but takes a different approach to what is and isn’t “artificial.” We are working on the potential and problems of implementing Large Language Models at platform scale, a topic I have written on recently. The “recursive simulations” area looks at the role of computational simulations as epistemological technologies. By this I mean that while scientific simulations — of Earth’s climate, for example — provide abstractions that access some ground truth, virtual and augmented reality provide artificial phenomenological experiences that allow us to take leave of ground truth. In between is where we live and where a politics of simulations is to be developed.
Gilman: Both of these speak to how computation functions as a technology that reveals how things work and challenges us to understand our own thinking differently. What about the politics of this? What about computation as infrastructure?
Bratton: Two other research areas focus on this. “Hemispherical stacks” looks at the increasingly multipolar geopolitics of planetary-scale computation and the segmentation into enclosed quasi-sovereign domains. “The Stack” is the multilayered architecture of planetary computation, composed of earth, cloud, city, address, interface and user layers. Each of these layers is a new battlefield. The strategic mobilization around chip manufacturing is one aspect of this, but it extends all the way to blocked apps, proposals for new IP addressing systems, cloud platforms taking on roles once controlled by states and vice versa. For this, we are working with a number of science-fiction writers to develop scenarios that will help navigate these uncharted waters.
The area we call “synthetic catallaxy” deals with computational economics. It considers the macroeconomic effects of automation and the prospects of universal basic services, new forms of pricing and price signaling that include negative externalities and the return of planning as a form of economic intelligence cognizant of its own future.
Gilman: How does all this relate to the big-picture claims you make about computation and the evolution of intelligence? In other words, is there a framing of how everything from artificial intelligence to new economic platforms adds up to something?
Bratton: What we call “planetary sapience” is the fifth research area. It considers the role of computation in the revealing of the planetary as a condition, and the emergence of planetary intelligence in various forms (and, unfortunately, its prevention). We are asking: machine intelligence, for what? There is, without question, intrinsic value in learning to make rocks process information in ways once reserved only for primates. But in the conjunction of humans and machine intelligence, for example, what are the paths that would enable, not destroy, the prospect of a viable planetarity, a future worth the name? As I asked in a Noema essay last year, what forms of intelligence are preconditions to that accomplishment?
Gilman: Antikythera is a philosophical research program focused on computation, but also has a design studio aspect to it. How does that work?
Bratton: The studio component of Antikythera is based on the architectural studio model but focuses on software and systems, not buildings and cities. Society now asks of software things that it used to ask of architecture, namely the organization of people in space and time. Architecture as a discourse and discipline has for hundreds of years built a studio culture in which the speculative and experimental modes of research have a degree of autonomy from the professional application. This has allowed it to explore the city, habitation, the diagrammatic representation of nested perspectives and scales and so on, in ways that have produced a priceless legacy and archive of thinking with models. Software needs the same kind of experimental studio culture, one that focuses on foundational questions of what computational systems are and can be, what is necessary and what is not, and mapping lines of flight accordingly.
Gilman: Who are you involving in the Antikythera Studio?
Bratton: We are enrolling some of the most interesting and important thinkers working today not only in the philosophy of computation proper but also in planetary science, computer science, economics, international relations, science-fiction literature and more. We are accepting applications to join our fully funded research studio next spring.
The same interdisciplinary vision will inform how we admit resident researchers who apply to the program. The researchers we plan to bring into the program will include not only philosophers but designers, scientists, economists, computer scientists — many of whom are already involved in building the apparatuses that we are describing. They will work collaboratively with political scientists, artists, architects and filmmakers, all of whom have something important to contribute. To say that the program is highly interdisciplinary is an understatement.
Gilman: Given that the Studio will integrate such an interdisciplinary group, what methodologies are you planning on using to bring these researchers together? Are there specific mechanisms of anticipation, speculation and futurity that you intend to promote?
Bratton: One of the ways in which philosophy can get in trouble is when it becomes entirely “philosophy about philosophy” and bounded by this interiority. I don’t mean to disqualify this tradition whatsoever, but I would contrast it with the approach of the Antikythera program.
Arguably, reality has surpassed the concepts we have available at hand to map and model it, to make and steer it. If so, then the project isn’t simply to apply philosophy to questions concerning computation technology: What would Hegel think about Google? What would Plato say about virtual reality? Why do the concepts we’ve inherited from these traditions so often fail us today? These are surely interesting questions, but Antikythera is starting with a more direct encounter with the complexity of socio-technical forms and trying to generate new conceptual tools accordingly in relation to these, directly. The project is to invent “small p” philosophical concepts that might give shape to ideas and cohere positions of agency and interventions that wouldn’t have been otherwise possible.
Gilman: How does that level of interdisciplinarity work? How can people from these different backgrounds collaborate on projects if their approaches and skill sets are so different?
Bratton: All those disciplines have an analytical aspect and a projective or productive aspect. Some lean in one direction more than others, but they all both analyze and produce. Collaboration is based on the rotation between analytic and critical modes of thought, on the one hand, and propositional and speculative processes, on the other. The boundary between seminar space and studio space is porous and fluid. Seminar, charrette, scenario and project all inform one another. Design thus becomes a way of doing philosophy, just as philosophy becomes a way of doing design.
Gilman: What kinds of studio projects do you foresee? By that I mean not just forms and formats, but what approach will you take to this sort of analytical and speculative design? Is it utopian? Dystopian? Something else?
Bratton: Speculative philosophy and speculative design inform one another. We recognize that some genres of speculative design are superficial, anodyne or saccharine: they are meant as positive proclamations about ideal situations, which are, ultimately, performative utopian wishes. They may be therapeutic, but I don’t think we learn much from them.
At the same time, there is a complementary genre of speculative design that is symmetrically dystopian, based on critical posturing about collapse. It demonstrates its bona fides as a critical stance, but we also don’t really learn much from it: it mostly ends up repeating things that we already know, aspects of the status quo that are already clear, and ironically ends up reinforcing them almost as dogma. It codifies an “official dystopia.” For some, this can be simultaneously demoralizing and comforting, but for us that’s not particularly interesting.
What we’d like to do is develop projects about which we are, ourselves, critically ambivalent. The ideal project for us is one which leaves us unsure, in advance, whether its speculations coming true would be the best thing in the world or the worst. We like projects where the more we think through the project, the less sure we are. As some might say, it is a kind of pharmakon, a technology that is both remedy and poison, and we hope to suspend any resolution of that ambiguity for as long as we can. We believe that projects that we aren’t quite sure how to judge as good or evil are far more likely to end up generating durable and influential ideas.
Gilman: You’ve often argued that philosophy and technology evolve in relation to one another. Is that idea an important part of the method?
Bratton: Inevitably, yes. One generates machines which inspire thought experiments, which give rise to new machines, and so on, in a double-helix of conceptualization and engineering. The interplay between Alan Turing’s speculative and real designs most clearly exemplifies this, but the process extends beyond any one person or project. Real technologies can and should not only magnetize philosophical debates but alter their premises. For Antikythera, that is our sincere hope.
Gilman: Lastly, let me ask the question “why philosophy?” Why would something so abstract be important at a time when so much is at stake?
Bratton: In the past half century, but really since the beginning of the 21st century, there has been a rush to build planetary-scale computation as fast as possible and to monetize and capitalize this construction by whatever means are most expedient and optimizable (such as advertising and attention). As such, the planetary-scale computation we have isn’t the technological and infrastructural stack we really want or need. It’s not the one with which complex planetary civilizations can thrive.
The societies, economies and ecologies we require can’t emerge by simply extrapolating the present into the future. So what is the stack-to-come? The answers come down to navigation, orientation and how intelligence is reflected and extended by computation, and how, through the mechanism, it grasps its own predicament and planetary condition. This is why the Antikythera device is our guiding figure.