Benjamin Bratton
A Philosophy of Planetary Computation
Recorded live on Jan 29, 02025 at Cowell Theater in Fort Mason Center
We find ourselves in a pre-paradigmatic moment in which our technology has outpaced our theories of what to do with it.
The task of philosophy today is to catch up.
In his Long Now Talk, Philosopher of Technology Benjamin Bratton took us on a whirlwind philosophical journey into the concept of Planetary Computation — a journey that began in classical Greece with the story of the Antikythera mechanism, the analog computer that gave his think-tank Antikythera its name. But his inquiry stretched far beyond antiquity — back to the very origins of biological life itself and forward to a present and future where we must increasingly grapple with artificial life and intelligence on a planetary scale in time and space.
How might complex planetary intelligence thrive over the long now? To Bratton, that intelligence is an “emergent phenomenon of an ancient and deep biogeochemical flux” — not merely resident to the Earth but an outcropping from it. Our planet has evolved us, and we have in turn evolved a stack of technologies that can help us understand and govern that very same planet that produced us.
The preconditions for long-term adaptiveness, Bratton argues, will need to be artificially realized, and we won’t be able to control what happens as a result of bringing them into existence. This, Bratton says, is the Copernican trauma of our time.
In concluding his remarks, Bratton turns to James Lovelock, the pioneering environmental scientist who first proposed the Gaia Hypothesis. Referencing Lovelock’s final book, Novacene: The Coming Age of Hyperintelligence (02019), Bratton notes that for both Lovelock and himself, the potential coming of post-human intelligence is not cause for “grief.” Instead, within the frame of the planetary, finding ourselves in a grander story where “the evolution of intelligence does not peak with one terraforming species of nomadic primates” is, to Bratton, “the happiest news possible.”
primer
“The Earth has very recently evolved a smart exoskeleton,” posits Benjamin Bratton in an essay about what he terms “planetary sapience.” The existence of this exoskeleton — an orbiting web of satellites monitoring and internetworking every corner of this planet, a weave of undersea and underground cables transmitting data across continents at two-thirds the speed of light, and an uncountable mass of computing devices communicating across those networks — is indisputable. The more pressing question, however, is what we ought to do with it.
Bratton, as Director of the Antikythera program at Berggruen Institute, aims to answer that question. The advent of planetary sapience — the ability to understand our planet and compute its workings at scales far larger and smaller than the human — is not merely a scientific or technological advance but a “philosophical event.” It is a moment, Bratton argues, that demands we expand the frames we use to understand who and what we are.
Why This Talk Matters Now
The interconnected global crises of the past decade — from the pandemic and its accompanying economic disturbances to our ongoing reckoning with artificial intelligence to the ever-advancing reality of climate change — are all rooted in what Bratton identifies as our technological capacity to understand the world. What Bratton further proposes is that we must also use that same capacity to address those crises — to pair the inadvertent and perhaps damaging terraforming that humanity has conducted over the past centuries with a more intentional, thoughtful mode of planetary computation and governance.
The key to Bratton’s work is the concept of the planetary — distinct from the “international,” “global,” or “worldwide.” In his context, the planetary refers not just to the scope or scale of our intelligence and capacity to effect change, but to a deeper root. He sees human culture itself as an “emergent phenomenon of an ancient and deep biogeochemical flux” — not merely resident to the Earth but an outcropping from it. Our planet has evolved us, and we have in turn evolved a stack of technologies that can help us understand and govern that very same planet that produced us.
The Long View
The idea of the planetary has many roots, but one key moment in its history was the release of the first photos of the Earth from space. The outward flowering of culture and philosophy inspired by those photos in the late 01960s — from the dawn of the environmental movement to the publication of the Whole Earth Catalog to Martin Heidegger’s reaction of shock and uprooting — reflects a point of inflection for our capacity as a species to think about the big here and the long now.
Where to go next
- Read Bratton’s introduction on Antikythera and what it means to develop a “new speculative philosophy of computation.”
- In collaboration with MIT Press, Antikythera is launching a new journal.
- In conversation with Nils Gilman in Noema, Bratton discusses how Antikythera’s work is tied to the “futures before us that must be conceived and built.”
- In an essay in Noema, Bratton explores Planetary Sapience further, placing it in the setting of a historical moment that “feels long but may be fleeting.”
- Watch Bratton’s 02023 talk on Synthetic Intelligence in the context of the planetary model of computation.
transcript
Rebecca Lendl:
Welcome to The Long Now Podcast — thank you for being with us. I’m your host Rebecca Lendl, Executive Director here at The Long Now Foundation.
Today we’re going to get into something that many of us may not often think about, but that’s radically transforming us: planetary computation. The technologies and tools, sensors and satellites, that drive our data and discoveries. We can think of planetary computation as a process we shape that’s in turn shaping us.
We’re here with Benjamin Bratton, Director of the Antikythera program at Berggruen Institute. Benjamin is a philosopher of technology. He’s in the business of creating new language, new ways of making sense of who and what we are — an essential task for a time like this in which our technologies are outpacing our capacities. So we’re here to do a little catching up.
Buckle up because today’s talk is a bold one. Bratton invites us to step outside ourselves. Far outside. We find Bratton to be a kind voyager into conceptual unknowns and he’s here to share his discoveries with all of us.
To give you a sense of what’s to come, one of the examples of planetary computation that Bratton offers us is the journey from Copernicus to Blue Marble to Black Hole —
Now Copernicus as we know gave us the radical idea that the sun, not the earth, was at the center.
And Blue Marble gave us the famous photograph of earth from space, catalyzing a reconsideration of our relationship to each other and our planet.
And now just as radical is the image we call Black Hole, taken of a black hole 50 million light years away, created by a global network of telescopes working together and using the rotation of the earth to essentially turn our collective head and open our new eyes to peer out into the depths of the universe.
This is all totally new for us — planetary computation can now unlock so much more knowledge about the universe than we’re able to absorb as a culture. Bratton invites us to close that disconnect.
Is this idea of stepping outside ourselves a bit of a scary proposition? Maybe. But maybe finding ourselves in the middle of a story where the evolution of intelligence does not peak with one species of primates is actually, as Bratton says, the happiest news possible.
As you may discover, this material is dense. Take your time with it. Have fun. And if you’re interested in learning more, you’ll find a ton of great resources in our show notes.
Benjamin Bratton:
Well, thank you, everyone. I want to begin by saying what a pleasure it is to be able to share some of this work with you all at the Long Now Foundation, an institution that I've long admired, no pun intended.
And to take a moment, perhaps, to step back and away from the persistent weirdness of our times, one in which we get free AI from a hedge fund and $200-a-month AI from a non-profit; never would've called that one. But more broadly speaking, if you take one idea away from this, it's that I generally think we are in a kind of pre-paradigmatic moment: a lot of ideas are floating around that increasingly look similar to one another and are coming together into something that may constitute a frame of reference that may be more useful to us.
But this is a difficult process, one that we'll have to invent our way through. I'll put it this way: there are certain times in history when our ideas of what we would like to do are way ahead of the technological capacity to do it. And there are other times when the technology is, in essence, ahead of our concepts. I think this is probably more where we're at. In those moments, the job of what we call philosophy is to try to conjure those concepts and bring them into being so that they may be put to some use. Unfortunately, I don't know that that's entirely what's going on. A lot of our most important epistemic institutions, such as universities, I fear to report, are in a bit of a holding pattern: the sciences are focused on a kind of do-now, think-later mode, and the humanities, where I spend most of my time, are more in a critique-now, perhaps-act-later, or maybe-never mode.
In other words, this is exactly the wrong time to not be inventing those concepts and not be setting the initial conditions for the society to come, and yet here we are.
So, let's begin. Our topic is computation. And as you'll see, we mean this in a somewhat idiosyncratic way. It has less to do with the mathematics and algorithms, or with these kinds of little appliances that we've constructed, and more with computation, as Patrick intimated, as a planetary phenomenon: not just something that humans do, but indeed, through humans, something that the planet does. Our first proposition for the evening is this: that computation was discovered as much as it was invented. And so we may think of natural computation and artificial computation. Stephen Wolfram recently published a paper in which he argues that the entire universe is a computational hypergraph, that time is the rate of refresh, and that dark energy is the heat exhaust of the big computation that is the universe. And I don't know, could be. Sure.
Okay, but we think of computation in terms of planetary systems. This is not a new thing; it's really where computation comes from. Computation was born of cosmology, and I mean that in both senses of the term, which I'll talk about a little later. This is a picture of the Antikythera mechanism, which dates, as far as we know, from around 200 BC. It is often, perhaps apocryphally, described as the first computer, but it was not only a calculation device; it was also an astronomical device. It was used to orient its user through the stars in relationship to his or her situation, spatially but also temporally, allowing a kind of simulated movement backward and forward in time.
And so the idea that computation begins with this orientation of intelligence in relationship to its planetary condition seems to us a good starting point. But computation quickly became something more than this, and a bit more practical than this. It also became calculation as a kind of world order: as societies became more complex, the necessity to calculate and compose not only what was happening right now, but indeed what had happened in the past and what could happen in the future, gave rise to forms of computation like this, Sumerian cuneiform, the earliest form of writing we find. You try to imagine what could possibly be the first thoughts of humans put down in writing, and it turns out it's mostly receipts.
And so in a way, everything we've done since then, all written language, is a variation on accounting, which I like to remind my friends in the literature department. So when we talk about a school of thought or philosophy, where might we expect it to come from? Well, a few clues. One way of thinking of this is that, as I say, sciences are born when philosophy learns to ask the right questions. Most of the things we call sciences, at least historically, began as philosophy. And philosophies are born when technologies force the birth of new languages, which is hopefully what can happen in our present situation, one in which, as we might put it, the new things with which we are surrounded have outrun the available nouns that we have to contain them.
This is Stanislaw Lem, the Polish science fiction author, and a distinction that Lem makes between what he called existential technologies and instrumental technologies. We think of instrumental technologies in terms of tools: their main impact on society is what they do as tools. Bulldozers move dirt very well, and so you can make cities faster. There are other kinds of technologies, much more rarefied, that when used properly change how we understand how the universe works. Telescopes and microscopes are obvious examples, but as I will argue, and as a key thesis of our program and my work, computation very much is both, and preserving the space for computation as an existential technology matters.
Think about the role of the telescope for Galileo, and ultimately for the deduction of heliocentrism. Without this technological alienation of seeing the world, perceiving the world in a way that would be otherwise impossible, we really wouldn't know where we are.
And so there is then a fundamental relationship between technology and what Freud called Copernican traumas. Copernican traumas are those priceless moments, priceless accomplishments really, by which we deduce that the world, the universe, doesn't work quite the way it looks like it might work, and we decenter ourselves, or get outside ourselves, and figure out again who, what, and where we are in some way. Now the cycle of this is that because we have a complex model of the world and how the world works, we build technologies based on the implications of that model that allow us to measure something or see something or perceive something or calculate something based on the logic of that model. But when we use that technology properly, we figure out that the model that made that technology possible is wrong. And there needs to be then a kind of resolution of the implications with the model that ultimately gave rise to them.
This, in a more general framework, we might say, is the role of technology more broadly. Now, we've been speaking of this in historical terms, but the implications of this, of Antikythera and of forms of planetary computation as existential technologies, are actually quite pressing. I would make the argument that the scientific concept of climate change is itself an intellectual accomplishment of planetary computation. Without the satellites and oceanic temperature sensors and so forth, and ice core samples, and most importantly the supercomputing simulations of climate past, present, and future, we wouldn't have been able to perceive these temporal, dynamic transformations in the planetary systems in which we are embedded. This, in a rough sense, is really what planetary computation is for. Now this obviously has not only scientific import, but philosophical and ethical import as well. Because we understand climate change, Crutzen and others came to reckon with the concept of the Anthropocene, problematic as it may be.
But for sure the Anthropocene, to the extent that it arises from the climate sciences, which are predicated on planetary computation, is a kind of second-order concept derived from planetary computation. It's a good example of how computation as an existential technology can give rise to really fundamental shifts in our thinking and understanding of our agency as a species. We are transforming the planet, and only through the deduction of this, by understanding and measuring how much we had artificialized the planet, did the possibility of recognizing that agency become possible. And I think this is actually an important lesson. We tend to think in philosophy that first you train subjectivity, and that this will give rise to better, other forms of agency. In many ways, it often works the other way around: subjectivity, and the possibility of a subjectivity as a planetary subject, only becomes possible once agency is mapped.
All right, let me speak a little bit about planetary computation, this term that we use: both what it is and what it's for.
This is the Lunar Orbiter image from 1966, the first image of the Earth taken from the moon, and it was on the cover of every newspaper in 1966. It's a little bit forgotten now, but in 1966 it was quite a big deal, so much so that when Der Spiegel interviewed the notorious German philosopher Martin Heidegger, they presented him with a copy of this image and asked him to comment upon it. Heidegger said that he was horrified by what he saw, literally shaken, and that "we don't need nuclear weapons to destroy the world, because this image has destroyed the world already."
And what he meant was that an intuitive, phenomenological, egocentric, perspectival understanding of the world and of being, which for him was the proper manifestation of our relationship to it, has now been overwhelmed and overcome by this technologized, allocentric perspective, and we can never quite believe that the world is the way it was before, now that we understand a little bit more about where we are. Now, for us this is a feature. For him, it was the biggest bug, I suppose, of all.
Now, another little thought experiment: imagine the Blue Marble image not as an image but, in essence, as a movie, a movie that spans the entire four-point-some-billion-year career of the Earth, but thankfully on super fast-forward. What you would see is the Earth spinning, volcanoes, Pangaea over here. In the very last instant of this, you would see something extraordinary: this little organism would sprout an exoskeleton, this sensory, epidermal exoskeleton of satellites and various other mechanisms by which the surface of this organism can relay information from one point to another.
This is how I would think about the location of planetary computation: it is not only something that humans do in their industry, but indeed something that the planet has done. It needs to be understood as part of the evolution of the planet as a dynamic system.
Well, to dive right into AI in terms of our work: AI has co-evolved with the philosophy of AI quite closely. From Turing's and Searle's thought experiments onward, the thought experiments about AI have driven the technology, and of course in turn the technology drives the philosophy. There's a double helix in the conjunction of these that you don't find in other fields. So another way of putting it is: matter thinking about matter, making matter that thinks. That's what we're up to.
Now, our position on this is something like this: ultimately AI will teach us as much about what thinking is as we will teach it, because to artificialize something is indeed to discover what it is. The lesson of climate science is that to artificialize a climate is in fact how it eventually becomes possible even to know you have that agency.
A more recent piece called "The Five Stages of AI Grief" is a map of the different ways in which the discourses around AI are each, in their own way, inadequate to the task: AI denial, anger, bargaining, depression, and acceptance. You can read it and begin to think, "Oh yeah, I know who's in this category," and so on. It's one of those things that started as a bit of a joke, and then you realize those are usually the best ones. Another position we take, a somewhat contrarian one, has to do with the question of alignment: alignment to what, exactly? A lot of the discourse around AI alignment presumes a relatively naive image of what human needs, desires, values, and ethics are, and a sometimes spoken, sometimes unspoken presumption that by simply amplifying, unidirectionally, the manifestation of those needs and desires, everything will work out in the end.
This strikes me as an inversion of the Copernican implications of AI and a reversion to an unnecessary anthropocentrism, and even anthropomorphism. One way to think about this is in terms of Turing's famous thought experiment, where what Turing had initially proposed was a sufficient condition: if the computer can fool player A, then at a functional level we have to grant that there's something going on in there.
Unfortunately, in ways that I think are familiar, the Turing test as a metaphor has become more like a necessary condition. That is, unless the AI can perform thinking the way that humans think that humans think, it is disqualified. And this over-normativization of the human, the reflection of the human as the model, characterizes, I think, too much of the AI alignment conversation.
A lot of the alignment discussion here, and again, I'm not talking about the basics: yes, if you ask an AI to do something, you want it to do that thing, and you don't want it to make chemical weapons. These are unproblematic and I'm not arguing against them. It's the presumption that making AI reflect the exactness of human culture, human values, human dependencies, what humans are most likely to do and think, should be the North Star for the artificial evolution of AI. It's like, have you ever met humans? Are you sure that that's what you want?
Another basic thing is what we call reflectionism. There's a part of the discourse where, on the one hand, you hear people say, "The problem is that AI is not enough like humans and human society; we need to bend it towards that." You have others who basically say, "No, AI is only the manifestation of the socioeconomic systems of power of human society, and that's the problem." So it's either exactly like us and that's the problem, or it's not at all like us and that's the problem. And somehow people will say both at the same time, which is usually a sign that something is a little bit afoot.
Now, if you hold that AI as an existential technology is a value worth holding onto in these conversations, that there are ways in which it will teach us what thinking is, ways in which it will disclose the workings of the world, the universe, and our own dynamic processes and agencies to us in ways we cannot possibly have imagined yet, and that once we discover these things it will transform our cosmology, in the anthropological sense, in important ways, then it implies at the very least that alignment needs to be bidirectional: AI aligned to where we want to direct it, and us aligned to the outputs that it might have.
So another way of putting it is that the presumption that the greater societal risk comes from not regulating AI may be quite wrong. Just for the sake of argument, you could think of the area we are exploring like this. We're familiar with the quadrants of less alignment, more bad, and more alignment, more good. It's the quadrant of less alignment, more good that we would like to account for, at the very least, not only because it's probably underexplored, but also because I think we'd make the case that alignment overfitting, making AIs really do exactly what people want in every case, is actually kind of the real risk.
The pinnacle of human-centered design, arguably, is the slot machine: a mechanism that does exactly what the human wants it to do. This is something to be avoided at all costs. And so the cultivation of what we call productive dis-alignments is part of the agenda here as well.
And it also, I think, has to allow for otherwise unpredictable cascades of causality: the causal relationships between one thing and another, in terms of those productive dis-alignments, are not always what you would expect.
Now, thinking of this in relation to the DeepSeek news from last week, you might also think of some of these kinds of cascades. Cheap energy produces cheap complexity; this is a truism of the Santa Fe Institute, for example. Cheap complexity allows for cheap inference. Cheap inference allows for cheap intelligence. Cheap intelligence allows for cheap energy. These kinds of cascades are exactly what I mean by the kinds of productive dis-alignments to be protected. It's also one in which, in our understanding of AI and its relationship to governance and the composition of society, the question of control and agency is a little bit undecided.
As we say, AI is less a tool of industrial policy than industrial policy is a tool of AI. But also, in terms of consolidation and where power lies with this, these are serious issues of centralization and decentralization. Thinking about other kinds of means of production that have structured society, AI is one that's available for a monthly subscription. That fact may really be the democratizing factor.
A couple of other productive dis-alignments I want to put on the table. There's been an enormously interesting explosion of work and insights in non-human cognition, animal cognition, plant intelligence, and this is happening at the same time that we are artificializing intelligence in a mineral substrate through AI. We're beginning to see a similar comparative non-human cognition discourse around AI, and the two are just beginning to come together, sometimes rather explicitly. Sharing and mirroring between them will, I think, be increasingly important.
It's been hard to get away from, over the last few months, the claim that next year is going to be all about agential AI, that it's all about agents, and also from the consensus prediction that AGI will appear somewhere around 2027. So you put these two things together and you have a potentially very complicated scenario, in which, if you have an explosion of AI agents that are roughly AGI-level, whatever you take that to mean, you can imagine a scenario in which you may have 8 billion human-level minds that are human and 80 billion human-level minds that are not human. A ratio of 10 to 1, or later perhaps 100 to 1, 1,000 to 1. In those cases, what even constitutes a society goes back to first principles. For us, it is these [inaudible 00:53:21], what we call the weirdness right in front of us, from which we try to gain some sort of insight.
I want to make the case to you that technology literally evolves, and it does so in ways in which it's never really ever just a tool.
All technologies are built out of earlier technologies. There's a scaffolding process, where one technology becomes the scaffold by which a yet more complex technology develops. Even thinking in terms of anthropogeny, the beginning of humans, there has always been a coupling between biogenesis and technogenesis. Literally, the shape of our anatomy, opposable thumbs for example, is an imprint of earlier forms of making use of the world for directed purposes. This structural deepening becomes more complex over time, and ultimately becomes a way of mapping evolutionary time.
Again, when we say evolves, what do we mean? Evolves how? One, it's component scaffolding towards more complexity: a complex thing emerges, it becomes a component of something yet more complex, which becomes a component of something yet more complex. We see adaptation and exaptation: not only are there niches into which certain kinds of technologies fit, but technologies that were designed for one purpose become very useful for completely other kinds of purposes. They become components of things well beyond their original intention, just like biological adaptations. And looking at the maps, we see both convergent and divergent path dependencies in the directionality by which all of these things may be moving. Everything I've described in terms of technological evolution is also true of biological evolution, which is the interesting fact. Or as Bogna Konior puts it, "What if humans are a phase in the history of technology?"
Now, evolution of computation, or evolution as computation, to move this a little further along. Computation as a kind of technology is itself evolving. We can map forms of complex, cognitive intelligence in this way. This object... this is not my refrigerator, by the way, don't worry. This complex object, and the prefrontal cortex that wraps around it, is one of the most remarkable accomplishments of biological evolution: an object that is capable of these feats of predictive information processing. In other words, over long periods of time, the planet folded itself in such a way as to produce the object by which it ultimately came to deduce things about itself. But now the substrate of complex intelligence includes both the biosphere and the lithosphere. We, the fire apes, by folding bits of metal and rock and running electric currents through them, figured out how to make the rocks think. This is news. And it is then part not only of our evolutionary trajectory, but also of the evolutionary trajectory of the rocks.
Species that are good at artificialization do well; it's part of what evolution selects for. If a species is good at finding ways of building technologies that allow it to capture more energy, information, and matter by artificializing its environment, its population can grow. The capacity for artificialization is an adaptive process. Another way of thinking of this is the distinction between autopoiesis and allopoiesis. Autopoiesis, we learn from cybernetics, is how a system uses the external environment to reproduce itself. Allopoiesis is how an agent uses the external environment to produce something extrinsic to itself. Let me make the case here, then, and try to locate AI within this a little more explicitly, and give a bit of a timeline: AI is actually the artificialization of artificialization itself.
One of the arguments that Sara Walker is going to make to you when she comes to speak is that natural selection doesn't begin with biology; it actually begins with chemistry, in that certain kinds of molecules are stable and are able to reproduce each other. The capacity for selection and evolution ultimately stabilizes into certain forms of life, that is, entities that are capable of autopoiesis and the internalization of energy, information, and matter for the reproduction of themselves. To get really good at that, to get really good at life, a species is selected to get really good at artificialization. In order to do autopoiesis, you need to get really good at allopoiesis. And the better you get at allopoiesis, the better you will be at autopoiesis. By making things that allow you to capture more energy, information, and matter that is extrinsic, you can therefore take in more that is intrinsic.
To get really good at artificialization, you have to be smart about it, not only individually but collectively. You need to be able to imagine future possible states and mechanisms by which you can do allopoiesis. To get really good at artificialization, you need to evolve intelligence. I'm not really proposing this as a new scientific method; imagine I'm drawing you a napkin sketch. That's really how this is meant. To get really good at intelligence, you need to be able to communicate abstract ideas between the nodes of the intelligence system, that is, each of the individuals. To get really good at intelligence, something like symbolic language becomes very, very useful. And again, each of these things, the ability to do one, feeds back on the ability to do the other.
To get really good at symbolic language, and to use symbolic language as a way to accelerate and collectivize and make into a generative dynamic, the artificialization of intelligence itself becomes quite useful. So there's a cascading sequence again: each one of these things goes in a particular order, and the accomplishments of one in essence become scaffolds for the other. I would also say that there's a feedback loop here as well. Once you get good at one, it actually changes how the previous one works. Once artificialization becomes robust, it actually changes the dynamics of symbolic language. Inevitably, over the next few years, it will transform the kinds of symbolic languages that we speak and work with and that exist. And in turn, the kinds of symbolic languages that we have will transform the kinds of intelligence with which we work; we will think in languages and ways we might not have before. And this recursive feedback would likely continue, back around again: the ways in which AI changes language will change the forms of intelligence, which will ultimately change the capacities for artificialization itself. This is what I mean by cheap intelligence equals cheap energy, in this way as well. This is what I mean by AI being the artificialization of artificialization.
Locating this in past, present, and future, and situating this present point, it's important to remember that this is not the end of the cycle. Life becomes a scaffold for autopoiesis, autopoiesis becomes a scaffold for allopoiesis, which becomes a scaffold ultimately for symbolic language, and so forth and so on. What is all of this a scaffold for? And in turn, what is that a scaffold for? And in turn, what is that a scaffold for? Ten thousand years in the future we'll have some idea. But this is a way to map in advance a little of what this might mean.
When I mentioned earlier that I think we're in a pre-paradigmatic moment, let me give you one example of that. It has to do with the functional and intellectual definitions of life, intelligence, and technology. One of the things you notice if you spend a lot of time thinking about this is that increasingly, contemporary definitions of life, as an autopoietic-allopoietic phenomenon that uses predictive modeling of its environment in order to recurse, look a lot like our most contemporary definitions of intelligence, which look a lot like the more evolutionary theories of technology. They're beginning to look a lot like each other, and there's something to that.
Like technology and like intelligence, life is based on evolutionary scaffolds built on past scaffolds, and is itself a scaffold for forms to come. If both life and technology are not just kinds of matter, categories of matter in the Aristotelian sense, but rather processes that produce kinds of matter, then in what ways, and at what level of abstraction, are they in fact the same process? I don't know. We might presume, in a way, that any sufficiently advanced technology is indistinguishable from life, and perhaps vice versa. We'll find out.
Last two bits I'd like to share with you this evening. One is a very quick run-through of the Antikythera program. Its rationale is, as I've already suggested to you, that there's a potentially disastrous gap between our capabilities and our concepts, and that new forms of epistemic institutions are needed to try to develop what that vocabulary would be. We're incubated by the Berggruen Institute, which for a decade or more has supported cutting-edge thinking in philosophy and politics.
Some of the things that we do: we host conferences. Just a few weeks ago we were at the MIT Media Lab, where we brought together a lot of our key researchers over the course of a whole day. We've done several others. We host salons, which are shorter, one-day, very intensive discussions with like-minded confederates in different cities. The main area, though, arguably one of the more important areas of the program, is the Studio.
The key idea here is that architecture as a discipline has benefited from an exploratory, experimental studio culture. To the extent that society now asks of software things that it used to ask of architecture, the organization of people in space and time, software needs a similar kind of studio space to organize and do projects from first principles.
Our other major initiative is a new partnership with MIT Press. It includes a book series; What Is Intelligence? will be the first major title. We also have a peer-reviewed journal. The impetus behind the journal is to pair some of the most interesting thinkers with some of the most interesting designers, and to develop definitive versions of those ideas in a context where we can take some of this visual intelligence and include it as part of the way we tell the story.
And also, again, exhibitions. We will be at Palazzo Diedo in May with an exhibition showing a lot of this work. Antikythera is very much a collaboration: astrophysicists, zoologists, faculty from all of the major universities and companies are part of our network. If you want to see the full roster of the people we get to work with, you can visit our site.
Okay, last point. One of the [inaudible 01:16:37] is that the way in which the future was understood in the 20th century was as something to be accomplished: if things come together, we'll accomplish this idea of the future. My son is 16, and watching the ways in which the future has been part of the pedagogy he's been exposed to, it's presented more as something to be prevented. When we talk about the year 2050 and IPCC reports, it's: how do we make sure that future doesn't happen? There's some validity to that, but I also think it's important to think about the ways in which another path forward might be possible.
Now, lastly, I just want to leave you with this, in terms of the situations we hope to remedy. As I have intimated a few times, there is a kind of dire disconnect at present between cosmology in the astronomic sense and cosmology in the anthropological sense. When I talk to my friends in astrophysics about cosmology, it's about black holes and dark energy. When I talk to my friends in the anthropology department, it's about how cultures understand their position and significance.
Traditionally, as we imagine it, these go in lockstep. At this point, there's, as I say, a kind of dangerous disconnect. We know so much more about how the universe works than has been absorbed by the intelligence of our cultures. Now, one way to think of this is that the science needs to figure out how to bend to the dispositions of the culture. Another is, in essence, that we need to update the culture to what we actually know to be true. You might suspect I recommend the latter.
So where are we really, then? Well, you could put it this way. Lithosphere makes up biosphere, that makes up technosphere, that is now part of the noosphere. There you go: that's the history of the Earth in 15 words. That's the where and when of where we are. In terms of what agency constitutes going forward, the argument I'm making is not one of mastery. It's not one of total control. It's not that, if we maximize intelligence, the normative decisions about what needs to happen, or even the controllability of the outcomes, the cascades of those conditions, are something we can entirely control. But nevertheless, it's not optional, because humans are doomed to compose their own evolution and blessed to never truly understand or control that process.
So back to what I was saying: the issue we wrestle with is what are the conditions by which complex planetary intelligence could exist and grow and thrive for that 10,000-year span? What would make it adaptive? You could think of it this way. In the short term, complex intelligence, as I showed you on the timeline, is very, very evolutionarily adaptive. It allows species such as ourselves to do things that would otherwise be impossible.
But the situation we're facing now is that there may be a point at which the continuance of complex intelligence, in the form that it has taken, may be the very thing that undermines the possibility of the continuance of complex intelligence. It may be maladaptive in the long term, which is an idea that many, including Arthur C. Clarke, have posited. So the question to be posed is: what are the preconditions for that long-term adaptiveness? What would have to happen? What would need to be worked out? What would be the preconditions by which that long-term capability would be possible? This is where our work and the Long Now Foundation's work so profoundly overlap. One way to start, I think, is to understand that those preconditions are ones that will, for the most part, have to be artificially realized. They may be ones to be discovered, but they may also be ones that need to be brought into existence. And understanding both the necessity of bringing them into existence and the impossibility of controlling the implications of bringing them into existence is traumatic as an idea. But eventually, this may really be the most important Copernican trauma at hand.
Let me leave you with one piece that I'll read here as a way to tie this up, and then... I appreciate your patience. James Lovelock knew that he was dying when he wrote his last book, Novacene: The Coming Age of Hyperintelligence, and he concludes his own personal life's work with a chapter that must startle some of the more mystically-minded admirers of Gaia theory. He calmly reports that Earth life as we know it may be giving way to abiotic forms of life and intelligence, and that as far as he is concerned, that's just fine. He tells us quite directly that he's happy to sign off from this mortal coil knowing that the era of the human substrate for computational intelligence may be giving way to something else. Not as transcendence, not as magic, not as leveling up, but simply as a phase shift in the very same ongoing process of selection, complexification, and aggregation that is life, that is us.
So part of what made Lovelock at peace with this conclusion is, I think, that whatever the AI Copernican trauma means, it does not mean that humans are irrelevant, are replaceable, or are at war with their own creations. Advanced machine intelligence does not suggest our extinction, neither as noble abdication nor as bugs screaming into the void. It does mean, however, that human intelligence is not what human intelligence thought it was all this time. It is something we possess, but which possesses us even more. It exists not in individual brains, but even more so in the durable structures of communication between them.
For example, in the form of language. Like life, intelligence is modular, flexible, and scalar, extending to the ingenious work of subcellular living machines and through the depths of evolutionary time. It also extends to much larger aggregations of which each of us is a part, and of which each of us is an instance. There's no reason to believe that the story would or should end with us. Eschatology is useless. The evolution of intelligence does not peak with one terraforming species of nomadic primates. This, I think, is the happiest news possible. And so, like Lovelock, grief is not what I feel. So let me end there. Thank you.
Patrick Dowd:
Thank you for that incredible presentation.
Benjamin Bratton:
Thank you. My pleasure.
Patrick Dowd:
Benjamin, we were so excited to invite you to give the first talk of our first quarter century because your work building this new school of thought is truly a long-term project. And I want to ask you some questions about building a school of-
Benjamin Bratton:
Oh, thank you.
Patrick Dowd:
... building a school of thought in this era. And in particular, what do you see as the role of human thought and machine thought and how the interplay of that might evolve over the next 25 years as your project continues to grow and blossom?
Benjamin Bratton:
That's a big question. Let me answer it rather directly and honestly. In terms of the school of... This term school of thought is something... When people ask us what the Antikythera Program is about and what it tries to accomplish, the answer I will sometimes give is that we want to establish a new school of thought for this pre-paradigmatic moment.
One that would not necessarily have all the answers, but would change the kinds of questions that are asked, such that better answers are more likely to emerge. And a lot of that has to do, as I said, with generating the philosophy from the direct encounter with the technology, rather than projecting the philosophy onto the technology, which is what we get a lot of in certain forms, like AI ethics discourse, or "What would Kant think about driverless cars?"
Much more important is to essentially invent the concepts bottom up, from the technology itself. And in doing so, constitute this language. And I think in terms of a goal for the program, if we can get people speaking our language and using our language to define the problem space, then even if they disagree with us, we win.
Now, in terms of the second part of your question, about this alignment of human thought and machine thought: I don't really have a strongly dichotomous way of thinking about this. I take as a given that in anthropogenesis, how humans became human, and technogenesis, how technologies evolve, there's been a deep coupling going deep back in time. To think of human thought as something that's separate from the technologies of thought would be the wrong starting point. So it's not so much that you've got human thought here, machine thought there, and what happens when they come together.
It's more like: now that they are coming together in this very amazing and explicit way, first, how does this change our understanding of that long-term trajectory of how we got here? In essence, it forces us to rewrite history, but then perhaps gives a little bit of an arc going forward. And generally, I would say I learned enough structuralism and post-structuralism as a graduate student to presume that we speak language, but even more, language speaks us. Language turned out to be a repository of intelligence: if you can model language, you have a general-purpose capacity for intelligence derived from it. Why was it language that turned out to be the trick, and not games, for example? DeepMind had a big bet on games; it turned out to be language. It shouldn't be so surprising. In other words, we've constructed this language, and the language is the model by which the language models are working. It's all tied together.
Patrick Dowd:
It's interesting what you mentioned about language and language models because we're living in a moment where people all around the world are discovering the magic of generative AI for creating language. And in many instances, finding that they can outsource their thinking to these tools. So people are thinking less and less. And at the same time, you are making space and time with your school of thought to think and create new ideas, which seems novel in the context of these LLMs being able to see and think of everything. So what is the value of thinking now and how do you conceive of that as a leader?
Benjamin Bratton:
Am I a leader?
Patrick Dowd:
Of the school of thought. Yeah.
Benjamin Bratton:
I don't know that people are thinking less. I don't know if I agree with that. I'll speak to generative AI in a moment. But when you were talking about that, I was reminded of one of the dialogues of Plato, I think it's the third one, where Socrates is critiquing this newfangled technology called writing. And he was really dismayed by this, because he thought it would destroy our capacity for memory, because you're just outsourcing memory to this thing. You couldn't have a direct Socratic dialogue with the person who wrote it, and so you're susceptible to all kinds of deception. And even more, he was concerned that you could actually communicate with dead people, which was horrifying for him.
And so, the term he used to describe this new thing, which was both amazing and horrible at the same time, was pharmakon, from which the term pharmacy comes. And what pharmakon means is something that is both remedy and poison at the same time. It's not "we'll figure out whether it's remedy or poison at some later date"; it will always be both. For sure, AI is a pharmakon, and those types of [inaudible 01:32:41].
Now, for generative AI... Yeah. I mean, I think the discussion around generative AI, as you referred to, is a bit, I think, very short-term thinking. More generally, I think we should imagine that everything you do, from when you wake up in the morning to when you go to sleep, is training data for the future's model of the past. It's a big responsibility that you have. It's a bit of living in the third person, I suppose. But it demonstrates a way in which, I think, you can think about generative AI not as a kind of instrument or tool, but rather as a way in which collective intelligence is modeling itself over the long term.
Patrick Dowd:
One of the terms that you used that I found really compelling was extrinsic philosophy and this being related to-
Benjamin Bratton:
Allocentric.
Patrick Dowd:
Allocentric as opposed to anthropocentric. And what are some of the ways that you could see culture being updated, as you put it, to reflect allocentrism instead of anthropocentrism?
Benjamin Bratton:
Again, I'm thinking in terms of my son. I think the issue is that we know so much more, in a way that hasn't really percolated in. There are so many ways to think about how this would work, even just very obvious things like pedagogy. I find it completely amazing, bizarre but amazing, that, for the most part, philosophy departments don't teach any neuroscience. UCSD is a bit of an exception; Pittsburgh too. But why would you have a whole department about how it is that we think, and how it is we might think, and completely ignore what we know about how we think and how we might think, for example? You look around and you find more of these kinds of things... You see my point.
Patrick Dowd:
Mm-hmm.
Benjamin Bratton:
How we teach astronomy, how we teach neuroscience, how we teach genomics, how we teach... At my son's high school, they're forbidden to use AI. I remember when I went to high school in the 14th century, if you used a calculator in class, you got in trouble. And now, if you don't use a calculator in class, you get in trouble. But the end result of putting calculators in the calculus class is that students are taking calculus a year earlier than they were before. Because they don't do the busy work of the arithmetic, they can focus on the concepts. So I don't think they're thinking less.
Now, the question, I think, for AI is: okay, given this certain degree of inevitability, what's the analogy of that for all the things we teach the next generation? What's the putting-the-calculator-in-the-class version of this? I also happen to be on a University of California committee on what the policy for AI in the classroom at the UC should be. And this is a committee where, when I spend time there, I feel like pulling my hair out, because most of the discussion is, "What are the ways in which we can prevent and forbid this?" As if that's even possible.
The way I see it as a teacher, like when I teach my undergraduates, it's like it's up to me to presume they're using the large language models and to build better assignments. Like the types of things that people are capable of doing with these kinds of tools, we need to rethink this in this sort of way. So I don't know. There's bigger picture ways of answering your question and I think there's other ways in which a lot of it's just right in front of us.
Patrick Dowd:
I'm curious why you have constructed your program with Antikythera in the way you have instead of within a traditional university context, for instance, and how is this working for you?
Benjamin Bratton:
Yeah. I mean, it's not entirely outside. We're a kind of pirate ship that moves in and out of different ports in different ways and can shuttle things, hopefully not rats, from one port to another. But I am in academia, right? I'm a full professor at the University of California San Diego. It's a wonderful thing to be, but the main answer to your question is that it would be absolutely impossible. There's just no way. There's no economic format for doing the kind of work that we do within the university. I tried.
Patrick Dowd:
I'm curious, what would be your response to our community member, Jessie Kate's question, who asks, "As an archetypical pattern of relative association, how might you read institutions through the lens of planetary computation and what could be the future of institutions?"
Benjamin Bratton:
That's a good question. Yeah. I mean, the value of institutions, to a certain extent, is their durability: ways in which people can come together to construct a system for solving a particular kind of problem, for the cultivation of particular kinds of questions, that grows over time. And with every iteration by which that institution runs its cycle, it evolves. But it evolves in a certain way in that it also has scaffolds internal to itself. So it does something; that becomes a component of something it does later; that becomes a component of [inaudible 01:40:46] later and more complex. That is an evolution, and that takes time. That takes time.
And so, yeah, I don't think it's so complicated. In a world in which things seem to be liquefying at such a pace, pushing against things that have a bit more durability remains important. But there are different kinds of institutions. I would extend the notion of the institution beyond institutions that are built of humans and institutions that have boards of directors. There are technical institutions, forms of structure, that operate in a very similar kind of way. In the lobby before, we were talking about the metric system as a kind of platform. What it allows you to do is this: you don't have to decide how wide things are going to be. It becomes a way in which you don't have to do that work, so you can get on with it, one way or another. And I think the best kinds of institutions are ones that are not only doing the work, but doing the work so that everyone else can get on with it.
Patrick Dowd:
Well, Benjamin, thank you so much for being with us tonight to introduce Antikythera to our community. We look forward to collaborating with you and cheering you and the school of thought on over the next quarter-century. Thank you, Benjamin.
Benjamin Bratton:
Thank you. I appreciate it very much. Thank you.
Rebecca Lendl:
If you enjoyed this Long Now Talk, head over to longnow.org to check out more Long Now Talks and programs, and of course to become a member and get connected to a whole world of long-term thinking.
As always we’d like to thank our generous speaker, Benjamin Bratton, along with his team at the Antikythera program at Berggruen Institute — Haley Albert, Nicolay Boyadjiev, and others.
And you, our dear listeners, and our thousands of Long Now members and supporters around the globe. Also a big thanks to Anthropic, our lead sponsor for this year’s Long Now Talks. And appreciation to our podcast and video producers: Justin Oli-font and Shannon Breen and to our entire team at Long Now who bring Long Now Talks and programs to life.
Today’s music comes from Jason Wool, as well as Brian Eno’s “January 07003: Bell Studies for the Clock of the Long Now”.
Stay tuned and onward!