K Allado-McDowell
On Neural Media
Recorded live on Feb 25, 02025 at Cowell Theater in Fort Mason Center
How will AI shape our understanding of our creativity and ourselves?
In February, artist and technologist K Allado-McDowell delivered a fascinating Long Now Talk that explored the dimensions of Neural Media — their term for an emerging set of creative forms that use artificial neural networks inspired by the connective design of the human brain.
Their Long Now Talk is a journey through the strange valleys and outcroppings of this age of neural media. That journey began in 02015, in the wake of K Allado-McDowell’s encounter with an image known as “trippysquirrel.jpg.” That picture — a squirrel flowing into a dog into a slug, a hallucinogenic collection of misplaced eyes and waves of color — was generated by what was then a cutting-edge artificial intelligence system: a convolutional neural network.
What AI researchers did with the creation of images like “trippysquirrel.jpg” was to invert the traditional role of the neural network as classifier: transforming it into a tool for the generation of novel material. The captivating, uncanny potential of these AI-generated images inspired Allado-McDowell to form and lead the Artists + Machine Intelligence program at Google, and to begin their own explorations into co-creating art with artificial intelligence.
Now, after a decade spent composing novels, operas, and more alongside a variety of AI models, Allado-McDowell sees the mode of creativity offered by these non-human intelligences as not just a novelty but an entirely new, sometimes bizarre paradigm of media. Allado-McDowell tells a fascinating story involving statistical distributions, anti-aging influencers at war with death itself, and vast quantities of “AI Slop,” the low-quality, faintly surreal output of cheap, rapidly proliferating image models.
Yet even in this morass of slop Allado-McDowell sees reason for optimism. Referring to the title of their 02020 book Pharmako-AI, which was co-written with GPT-3, Allado-McDowell notes that the Greek word pharmakon could mean both drug and cure. What may seem poisonous or dangerous in this new paradigm of neural media could also unlock for us new and deeper ways of understanding ourselves, our planet, and all of the intelligent networks that live within it.
watch
primer
K Allado-McDowell sees our culture as one erupting into a new age of creative practice: one of “neural media.” In their view, we are in the early days of a great technological and artistic shift flowering out of prior modes of broadcast and network culture — a shift fueled by new tools and phenomena like AI and generative media.
As a writer, artist, and technologist, K’s work actively explores the bounds of Neural Media as a form, seeking to identify both what we are able to create using this new paradigm and how it recursively shapes our own thinking and creative processes. Drawing on histories of design, technology, and culture, Allado-McDowell will reveal how previous media regimes shaped culture and subjectivity, and how neural media like AI now shape our perception, self-conception, and knowledge of reality. Against the backdrop of climate change and mass extinction, neural media present unique challenges and opportunities, which Allado-McDowell explores through their own work.
Why This Talk Matters Now
Over the past decade, the technological and creative potentials of artificial intelligence have become increasingly clear. The text and images produced by generative AI models have gone from science fiction to novelties to commonplace parts of the creative landscape.
When Allado-McDowell refers to “neural media,” they refer not just to generative AI but all forms of media based around networks of neurons (or mathematical abstractions modeling them, in the case of AI). As we engage with neural media, we interface not just with each individual node or neuron within the net, but the layers of meaning embedded in the connections between those points. In our encounter with artificial neural networks, we find ourselves — for our brains, too, are neural media — the first of their kind, but not the last.
The Long View
Neural Media is one of the three core frames through which Long Now Talks seek to understand our world in the long view. For more on these frames, read Long Now Board President Patrick Dowd’s introductory essay on Reframing the Future.
Allado-McDowell places their thinking in the context of media theorist Fred Turner’s work. Turner, who has previously given a Long Now Talk about Technology & Counterculture from World War II to Today, has written extensively about how new media technologies reshaped American and global culture over the course of the twentieth century. In The Democratic Surround and From Counterculture to Cyberculture, Turner identifies the shifts from the broadcast media of the early twentieth century to the network media of the 01980s and onwards. Allado-McDowell sees neural media as the successor to network media, a further emergence that builds upon the technocultural infrastructure of the twentieth century but is a phenomenon all its own.
Where to go next
- Read K Allado-McDowell’s essays on Designing Neural Media and Neural Interpellation in Gropius Bau.
- In MoMA’s Magazine, K Allado-McDowell explores the potential of AI as a poison — one that can be used to harm or to heal.
- Watch K Allado-McDowell’s TED talk on how Our Creative Relationship With AI Is Just Beginning.
transcript
Rebecca Lendl:
Welcome to The Long Now Podcast — thank you for being with us. I’m your host Rebecca Lendl, Executive Director here at The Long Now Foundation.
Our journey today is a bit of psychedelic media theory, exploring the strange patterns and contours of the last century or so of media — from Broadcast media, to Immersive media, to Network media, landing us where we find ourselves today, in what K Allado-McDowell has coined “Neural media.”
What happens in a media ecosystem where our surroundings can sense and model us? Where we experience ourselves not just through how we feel, but through what our biometric data reflects back to us? Where we’re shaping and being shaped by new prismatic understandings of ourselves? What happens to our identity, to governance, to creativity?
K has been tackling these kinds of questions across all kinds of mediums — from establishing the Artists + Machine Intelligence program at Google to co-creating novels, operas, and more in collaboration with artificial intelligence models.
Their frameworks invite us to think about media and creativity in our age of AI, and how we might begin to fill in what K calls “the negative space of knowledge.”
Rather than training AI on human knowledge that reflects our own content back to us in an endless loop, how do we connect our AI with the broader forms of evolutionary intelligence and interspecies wisdom all around us? Imagine what we might learn from each other.
If the current moment leaves us feeling like we can’t always find solid ground beneath our feet, K’s work offers stepping stones that allow us to keep exploring out past the edge. If you’re interested in learning more, you’ll find a ton of great resources in our show notes.
Now, before we dive in, a quick note —
In our age of compounding crises, this work of imagining new possibilities may seem daunting. But challenges that feel impossible to tackle within a single human lifetime become conceivable when you have a longer timescale — and a community collaborating across generations.
Here at The Long Now Foundation, we are a counterweight — deepening our capacity to move wisely in these times of uncertainty. If you feel so inspired, we hope you’ll join us. Head over to longnow.org/donate to become a member and get connected to a whole world of long-term thinking.
With that, we’re excited to share with you — On Neural Media with K Allado-McDowell
K Allado-McDowell:
Hello. Hi, nice to meet you all. My name is K Allado-McDowell. That was a sufficient introduction, but I'll just flash a couple photos of my books and things so you can get a sense of what they're like and what matters about them. This is the first book I wrote in 2020 called Pharmako-AI, and I wrote it with an early version of the GPT-3 Playground when you could construct sentences with the model.
I also create operas. I did one in 2020, and what was interesting about that project, I think for the conversation that we're about to have, was that we were doing a neuroscience study at the same time, and we were using brain waves to control AI-generated visuals that were part of the opera. But what I'm going to talk to you about now is a sort of framework for thinking about types of media and the history of media in the moment that we are in with AI.
But AI is not the only interesting thing that's happening. For example, in this project, we were adding BCI to the AI, in other words, a brain-computer interface. And so I got to thinking what would you call a medium that includes not just AI, but sensing of human brains and other neural structures. And so I came up with this term, neural media. And so what I'm going to talk to you about today is a kind of artificial framework or a way of interpreting the history of media that adds some clarity to the picture and helps us think about what might be coming and how we can shape what's coming towards what we want.
So my journey, I guess you might say with generative AI and creativity begins in 2015. Has anybody seen this picture before? Okay. I like to ask this because this was a mind-blowing image that was leaked onto the internet in 2015. It was on a Reddit board. It got leaked from an internal Google social network, and it's called trippysquirrel.jpg, and obviously it's very weird looking, and no one had ever seen anything like that in 2015 and it went viral on the internet.
This is more or less the first AI generated image, or at least the first one that was widely seen and understood and went viral. That resulted in a program which I led and co-led for eight years. It's still running now. It's called Artists and Machine Intelligence. It's based in Google Research. We've worked with numerous artists and researchers and philosophers, and our mission was, and remains, to bring artists and philosophers in to work with researchers, to help have a bigger conversation about what it is that they're doing and what they could be doing, to fund art, and basically to encourage a new field. Now, this was all before DALL-E and Midjourney and these kinds of generative systems, so it was quite early.
One of the books that I relied on for my research was The Democratic Surround by Fred Turner. This book really blew my mind in terms of looking at the history of art and understanding it through a political and subjective lens, and looking at the ways that it was shaped, in particular the theme of the book, which is the origin of what Turner calls a surround media environment in something called the Committee for National Morale. This was formed by the Roosevelt administration in order to counter German propaganda during World War II. The idea was that each nation had a certain character, and German propaganda would be effective in a certain way, so the American government needed to find a way to improve the morale of the nation through propaganda that would have to function differently, based on the anti-authoritarian character of the American people. And this had a lot to do with media. The way they saw it was that broadcast media, as it entrained a mass of people to a single signal and a single message, was correlated to fascism. And they theorized what Fred Turner calls a surround, which they believed would encourage democracy through individuation and free choice within a media environment. And so I tried to build on Turner's framework.
In this framework, each media type goes through a 30-year process from birth to maturation, at which point it becomes the dominant media form. So broadcast media, I think the first broadcast, the first radio broadcast was in 1919 or so. This process for broadcast media goes from 1920 to 1950 at which point television is now the dominant medium.
What the Committee for National Morale ultimately created, and I'll show you how that happened, began in the '50s and came to a type of dominance in the '80s. What comes after is network media, which again begins in earnest in the '80s with the early internet and then matures into 2010. And in the last 15 years, we've seen the growth of AI with deep learning, with algorithms that are part of the platforms that we use online. According to this chronology, we're about halfway through that process. So the premise here is that by understanding the process we're in and understanding the effects of these different media historically, we can understand maybe what we could be doing with the medium that we're now dealing with. And I think for many of the people here, that's the medium that they have an influence over, or at least are very much participating in. So let's take a look at broadcast media.
The way I structured my investigation was to look for three different properties in each of these media types. So I was looking for a space, I was looking for a kind of content, and I was looking for a type of identity. So that pattern's going to show up for all these different types. With broadcast media, you have a centralized space with programmatic content that produces demographic identities. So what does that mean? Well, the centralized space is literally a physical space that is centralized because when you watch a TV, you have to gather around it. And so it's been shown through studies that the introduction of televisions into homes, into public environments actually did bring people together into homes and out of public spaces. Some of those are public spaces also where there were televisions, but people gathered and it centralized attention, but it also centralized power organizationally. So you had the big three media networks in the US.
So this would be a diagram that expresses the structure of a broadcaster transmitting one to many. So that's what I mean when I say that this is a centralized space. The content was programmatic, meaning you would literally watch a program on TV. Time blocks were created, and within that you had shows and stories and those had persona. So if you think about the wisecracking late night show host, the authoritative news anchor, the women and people of color that were portrayed on television, you can start to get a sense of the way that identities are produced and distributed amass from one to many producing norms and producing programs. They also produce relations. So the nuclear family, the dominance of that image during that time is definitely related to the programs that were being presented.
Now, this wasn't happening in a vacuum. There was, beginning in 1923, a method of sensing an audience. Nielsen would do other kinds of surveys too, but they began surveying media, and they produced a machine called the Audimeter, which sat inside a television or radio and could record when the device was in use and what channels it was tuned to. So this is really the very beginning of ad tech. You also had viewing diaries, which people would manually fill out to record what they watched. I remember my parents doing that.
What you get is demographics. And here you can see the demographics are fairly crude. You have men and women and children and you have their basic age groups and that's it. And 18 to 49, they have more money. So we want them.
So just to summarize broadcast media, you have a centralized space with programmatic content and demographic identity, and these identities are being sent back in a feedback loop.
Now, this immersive media concept, what Fred Turner calls a surround, I decided to call immersive media because there's a lot of buzz around immersive media and it feels like it's something maybe that's pretty new in the last 10 years, at least in terms of museums and entertainment. But the truth is that this was something that was conceived during World War II and rose to dominance from 1950 to 1980.
An immersive medium is a distributed space providing an experience for constructing identity. So the content is an experience and the identity that it produces is a constructed identity. So again, looking back to the Committee for National Morale, let's trace some of the roots of their thinking. Herbert Bayer, the graphic designer and exhibition designer, had in 1935 formulated this idea of a dynamic perspective. So he was looking at the history of art and seeing a sort of fixed perspective within the Renaissance and wanting to design an experience where the perspective of the viewer could be more dynamic. It could move around the space and construct the experience as they move through the space. You can see some of his experiments five years earlier in Paris that produced this idea. And this is the beginning of a type of immersion in exhibition design and in media.
So later, in 1943, he did an exhibition at MoMA called Airways to Peace that makes use of this dome structure, this globe structure. And this is a really important image that occurs throughout this history: the image of the dome. We have a lot of associations with it, but this is in many ways its first instance, in this modern incarnation, in exhibition design and as a kind of ideological or political structure. I'll just read the wall text from this exhibition, Airways to Peace.
“Peace must be planned on a world basis. Continents and oceans are plainly only parts of a whole seen from the air, and it is inescapable that there can be no peace for any part of the world unless the foundations of peace are made secure throughout all parts of the world. Our thinking in the future must be worldwide.”
So this is a really tense convergence of some different ideas. The idea that we could have global peace, which is in certain ways utopian, but the fact that it would need to be enforced from the air and that the globe or the dome structure that we could go inside of to see it.
Now, what was inside the dome was a map of all the airways that had been developed for military and industrial use. So you were going inside this globe and you were kind of seeing the world from the perspective of a pilot, but turned inside out. So this is a military and industrial global peace program that's being presented at MoMA. So let's just save that. And here's our little dome. We'll hang onto that for later. Two of the people on the Committee for National Morale were Gregory Bateson and Margaret Mead, anthropologists. And at the same time that Bayer was formulating this dynamic surround and this dynamic perspective, they were taking photographs in Bali and generally doing their worldwide anthropology practice, which involved looking at gestures, looking at language, looking at the ways that different societies were constructed, and trying to find a perspective that could be presented to Americans that would produce tolerance and individual choice and self-actualization through a global perspective that was fundamentally anthropological.
By the time you get to 1955, with Edward Steichen's Family of Man exhibition at MoMA, you're seeing photographers from around the world composing what he calls a forthright declaration of global solidarity, celebrating the universal aspects of the human experience. Right now, there's a lot of conversation happening about the world order and what kind of polarity or multipolarity it might become. This is a very clear example of a unipolar idea of what the world could be, represented in media through a very individualistic but also, I guess, evolutionary-psychological framework: the idea that we would step through these spaces, see what the world was like, and this would produce tolerance within us, it would produce interest in other possibilities of being, and it would produce a global solidarity.
Now, this paradigm for presenting media in a surround environment, which is what Turner's book is all about, gets exported in the late '50s through American pavilions like the American National Exhibition in Moscow. This one is called Glimpses of the USA. I would like to point out the dome structure again, as well as a sort of persistent emphasis on transportation. I think the American psyche has a certain idea that transportation is a form of self-actualization. I mean, I live in Los Angeles County, so I know that. Allan Kaprow, another artist who was pushing the bounds of what could be done in a gallery, produced an immersive experience, again with tires. This is called Yard. And this was a happening, if you're at all familiar with Fluxus or those kinds of mid-century proto-conceptual art movements where people would just kind of do weird stuff and make you figure it out. That's what this was. And it has the same pattern: there are things happening and you have to find your way through it. And this is a process for you to create your own identity and role within that.
More revolutionary transportation art with The Merry Pranksters, who we all know and love here in San Francisco, I'm sure. Again, the laser light shows. This is Stan VanDerBeek's more technocratic version of it. Stan VanDerBeek was at Bell Labs, but it also appeared in a more grassroots form in the Trips Festival and these laser light shows and things. So these are surround environments where images are everywhere. You're moving through it. There is an ideology of self-actualization, in particular when you start to add things like LSD into the mix, and it goes beyond just being a visual experience.
And then we see again this enclosure, this dome form appear as a utopian back to the land symbol via Buckminster Fuller and places like Drop City where these domes were built as a kind of self-reliant form of architecture that could be produced fairly easily. And Drop City famously didn't exchange money. It was quite utopian until it wasn't. But let's copy that into our dome mood board here. And this back to the land movement with the Fuller Dome and what-have-you also appears in the Whole Earth Catalog, as does the blue dot, which again, harkens back to that first dome where we were looking at the Earth from above trying to see past these national borders and identities, but also perhaps imposing a hegemonic single order on top of all of it. And that could be something that was military or it could be something that was done through soft power like those Charles and Ray Eames exhibitions.
But this focus on tools and the idea of bringing things back to self-reliance, back to the land, was the seedbed for home computing and the Homebrew Computer Club, just happening across the way. Of course, this is how we get Apple Computer, Silicon Valley as we know it now, home computing. In this case, this is a kind of precursor for the dome becoming something much bigger, the immersive environment, the surround, becoming something much bigger. I think the best embodiment of it now is probably Burning Man. The images, the experiences are everywhere; you go there to construct an identity. You might have a new name while you're there. But it's also indoors too, in immersive exhibitions like this one by teamLab or the van Gogh experience; numerous artists have been working in this way. And as somebody who is very involved in the intersection of art and technology, I was tickled to find out that this immersive idea had been around for quite a while. Right?
Let's get one more dome in here. This one's in Las Vegas. It's called The Sphere. And I think this is really the epitome of the dome structure as an immersive media environment. It's structured differently than the kind of wander around and the tires one, but it has a similar ethos. And obviously with the Grateful Dead being a significant part of its launch, we can see that motivation to be immersive. And that generational tendency to buy into the story that was created by the Committee for National Morale and make it their own is now playing out at this very, very large scale.
We can pull on that home computing idea a little bit and end up with virtual reality. The dome is now on your face and it's all around you in a simulation. So this is what we have for our little collection of domes for the immersive medium. And just to reiterate, it is a distributed space that produces an experience for constructing an identity, which brings us to network media.
Network media is a circulatory space for memetic content and fractal identities. So what does that mean? Well, let's go back to the very earliest network example. This is Doug Engelbart's Mother of All Demos, which happened down here in Silicon Valley, where he showed networked computers for the first time. And in a way we've been living in the echo of this demo for quite a long time. This is where the dome lives now. Instead of a sphere, it becomes a hypersphere, and it blows its spores all over everything. So now your wrist is talking to your hand is talking to your face, it's talking to everyone else's thing, and it's talking to your TV. And this is the dome. It's everywhere. The immersive media environment is everywhere. It's networked. And the important thing to remember about these phases is that these 30-year cycles are not just breaks where one thing stops and a new thing starts. They come out of each other. And in Marshall McLuhan's words, the content of every medium is the previous medium. And so they consume each other in a way.
The broadcast images, the film images get projected into the walls of the dome. The dome becomes everything via networking. So here's now our collection of domes. So what does it mean for it to be in a circulatory environment? Well, here's the internet mapping project visualization of the internet at a certain point. In this paradigm, when we are in network media, each of us is a node in this network or multiple nodes in this network. And so we pass information forward and back. We are responsible for circulating the information that's inside of this expanded network dome hypersphere. So if the feedback loop and the one-to-many transmission of broadcast media looked like what we have on the left here, then what we have on the right is the network structure where it branches and distributes and things can move in both directions.
Now, there are certain kinds of content that naturally thrive in a networked environment, and those would be viral content, or what I've called memetic here. So this is from a paper analyzing a few different social network structures for viral potential. The rightmost image is the one that produces the most viral content, and the reason it does that is because there's more branching within the structure. So the ones on the left and the center look a little more like broadcast media, and they're also less viral. The one on the right is the most networked in nature and is also the most viral. So there's a truth to this intuition that viral, memetic content exists inside of a network medium by nature. And in order to really understand what those things are, what that content is, we would have to ask Richard Dawkins, the evolutionary biologist and daddy of memes.
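The branching intuition can be sketched with a crude back-of-the-envelope model. This is not the model from the paper the speaker mentions; the forwarding rate `r` and hop count are invented purely for illustration. The point is that if each person who sees a piece of content forwards it to `r` others on average, reach compounds with every hop:

```python
# Crude branching model of "virality": total people reached after
# `depth` sharing hops, if each person forwards to `r` others on
# average. This is the geometric series 1 + r + r^2 + ... + r^depth.
def reach(r: int, depth: int) -> int:
    return sum(r ** k for k in range(depth + 1))

# A chain-like structure (r = 1) barely spreads; a branchier
# structure (r = 3) reaches orders of magnitude more people.
print(reach(1, 5))  # 6
print(reach(3, 5))  # 364
```

The qualitative point survives any choice of numbers: more branching per node means exponentially wider circulation, which is why the most networked structure is also the most viral.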
Now, when he invented the term meme, he didn't mean the kind of things that we share with each other on social media to get a laugh out of our friends. He was talking about anything that can be mimicked and reproduced. It could be a gene, it could be a behavior, it could be an idea. For us, within network media, which is highly visual and uses text just like immersive media and broadcast media before it, that content is a combination of images and text. And these evolve very quickly. Some of these memes are not the freshest now.
When you consider the horseshoe theory of politics, which is that the extremes converge, and you combine that with the political compass, you end up with something like this: a kind of lemniscate or figure eight, a political torus or something. But this is the kind of thing that is, in my opinion, innate to identity within this network environment. As somebody who's circulating media, who's circulating memes, who's perceiving politics through the evolution of ideas in that space, you can move around on the board, you can recirculate, you can find little pockets and whirlpools. And this is why I call it a fractal identity: I think this is the nature of political identity, and identity generally, online. We are moving through echo chambers, we're being reflected down rabbit holes. People are getting radicalized and then switching sides. These are all normal phenomena in movement through this type of media space.
So if we were to scale that up beyond just politics, into perhaps even cosmology, what would we call that? Conspiracy theories is what we call it. These are ontologies, epistemologies, structures of reality that get amplified and mutated within these circulatory echo chambers and pockets of extreme perspective and semiotic overload. And we're just seeing the tip of the iceberg here, but I promise you the stuff down below is even crazier.
So network media to reiterate is circulatory in space. Its content is memetic and its identity is fractal.
So this is the circulatory space from which emerges neural media and neural media is high dimensional. Its content is hallucinated, and the identities within it are embedded. And these might sound a little strange, so I'm going to walk through how we get to all that.
But let's go back to the very origins of neural imaging and the inspiration for AI, which is Santiago Ramón y Cajal's cell-stained neural tissue from 1890. These are really beautiful, but not only were they beautiful, they were revolutionary in terms of revealing the structure of our cognitive architecture, which inspired Warren McCulloch and Walter Pitts in 1943 to describe the mathematics of a computational neuron. The perceptron, which was the first neural net, was described by Frank Rosenblatt in the years that followed. And this is where we can get into an understanding of the high dimensional nature of these types of systems. So one of the things that really shocked me: I came into AI research as a UX engineer. I wrote code and did interfaces and things like that.
And I spent a lot of time talking to proper AI researchers and trying to understand how they understood neural nets to work. And one of the things they would constantly talk about is the high dimensional nature of them. It's a hard thing to grasp, because we live in 3D. We look at a piece of paper and we see things in 2D. Things have length, width, and depth. We move around in an XYZ type of Cartesian space. But it's not really that complicated to think of high dimensionality in the mathematics that are responsible for neural nets. If we think about, for example, this simple five-node network in the perceptron: if we were to take three of those and say one is X, one is Y, and one is Z, then we can just imagine that as 3D space. Simply add one or two more dimensions, and you can have combinations of numbers like you have down there below.
Here's a way of trying to visualize how to get outside of 3D. I think there's a famous quote, attributed to Geoff Hinton maybe, about AI researchers: they just understand 17-dimensional space by looking at a 3D cube and shouting 17 at it. So this is hard for us to grasp, but that's the important point: fundamentally, this is how it works, by having, for example, multiple layers of networks that are interconnected. You can have thousands or millions or billions of these nodes, and the location in your XYZ, et cetera, space is what's important here.
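That "just add more coordinates" idea can be made concrete: the distance formula we use in 3D extends unchanged to any number of dimensions. A minimal sketch, with arbitrary example points:

```python
import math

def euclidean_distance(a, b):
    """Distance between two points with any number of coordinates."""
    assert len(a) == len(b), "points must have the same dimensionality"
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# A point in familiar 3D (X, Y, Z) space:
p3 = [1.0, 2.0, 2.0]
print(euclidean_distance(p3, [0.0, 0.0, 0.0]))  # 3.0

# "Simply add one or two more dimensions": the same formula works in
# 5D, and a neural net does the same thing with thousands of coordinates.
p5 = [1.0, 2.0, 2.0, 4.0, 2.0]
print(euclidean_distance(p5, [0.0] * 5))  # sqrt(29), about 5.385
```

Nothing new is needed mathematically as dimensions are added; what changes is only that we can no longer picture the space directly.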
So what you see in this diagram is a computer vision system recognizing a picture of an orange and labeling it as an orange. It's been trained on pictures of different things, and one of the things it's able to recognize, through the knowledge embedded in these layers, is that it's looking at an orange. If you were to break down how some of that works, you would see that the different layers, moving from the bottom up to the top, are recognizing patterns at different levels of abstraction and finally landing on a label, which is "orange." And there are people who believe that our own ability to recognize an orange comes from a similar process, or perhaps our ability to recognize an AI-generated image, which is what this is.
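The bottom-up layering described here can be sketched minimally. The weights and the two toy categories below are invented values for illustration, nothing like a real vision model:

```python
import math

def layer(inputs, weights):
    # One layer: weighted sums passed through a nonlinearity.
    # Each row of weights produces one output node.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

# Toy "pixel" input and two stacked layers of made-up weights.
pixels = [0.9, 0.1, 0.8]
hidden = layer(pixels, [[0.5, -0.2, 0.3], [-0.4, 0.9, 0.1]])
scores = layer(hidden, [[1.0, -1.0], [-1.0, 1.0]])

# The larger score wins the label, as in the orange example.
labels = ["orange", "not_orange"]
print(labels[scores.index(max(scores))])  # orange
```

Each layer's outputs feed the next, so later layers respond to combinations of earlier patterns; that stacking is what "levels of abstraction" refers to.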
So let's go back to Trippy Squirrel.
Again, something that was generated by turning an image recognition system like this backwards. This image initiated the program that I started at Google and the very first neural net art show here in San Francisco at Gray Area, which was called DeepDream, the Art of Neural Networks. Here's a picture by my colleague Mike Tyka, who was one of the people that worked on the DeepDream paper. And since then there's been much advancement in AI generated imagery. So even between 2022 and 2023, you can see the quality of change in the images. Now, we're looking at the most advanced images, but I want to talk a bit about some of the less advanced images and what that might mean for our sense of identity to look at images in this way.
When I think about what it means to have an identity within this space, it's a little hard to explain, but the fundamental mathematics behind all these things is statistical. So I think it's very important to start looking at examples of statistical identification in culture. This is the bell curve, probably the most widely understood statistical distribution.
Early image-generation models like variational autoencoders would actually snap their networks to a bell curve, and what you would get is a kind of weird distortion of the images. This is what a lot of people are calling AI slop. In these images you can see how the prompts are coming through the picture. There's this weird overabundance of uniforms in the background, and shoes, and things like this, and that's the style of the image. This is probably because it's made with a cheap or free generative model that's not very sophisticated, but potentially it's even snapping things to this bell curve, using a certain kind of distribution in its internal structure to generate the image. But an important thing about this image is where it lives and who it's targeted at.
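The "snapping to a bell curve" mentioned here refers to how variational autoencoders regularize their latent space toward a standard normal distribution. A hedged sketch of that sampling step, with invented encoder outputs:

```python
import math
import random

random.seed(42)

def sample_latent(mu, log_var):
    # The VAE "reparameterization trick": the encoder outputs a mean and
    # a log-variance for each latent dimension, and a sample is drawn by
    # shifting and scaling standard-normal noise. Training pushes the
    # means toward 0 and the variances toward 1, which is the sense in
    # which the latent space gets "snapped" to a bell curve.
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# Made-up encoder outputs for a 4-dimensional latent space.
mu = [0.1, -0.2, 0.05, 0.0]
log_var = [0.0, 0.1, -0.1, 0.0]

z = sample_latent(mu, log_var)
print(z)  # a 4-number latent code a decoder would turn into an image
```

Because typical samples cluster near the peak of that bell curve, the "first thing that comes out" of such a model tends to be the most statistically likely image, which is the connection to slop drawn below.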
So these two images are from an account called Insane Facebook AI Slop, which is really fascinating, because these come from Facebook. They're posted on Facebook by bot accounts that are presumably trying to drive traffic or get attention, get likes, but also potentially to drive a certain political message. And the character of these tends to be pretty militaristic, pronatal to an absurd extent: there will just be huge piles of babies, and it doesn't make any sense. And it's also pretty family-oriented. It's reproducing a certain kind of identity relation and ideology that is essentially at the middle of the bell curve in terms of the American population. Again, it's surreal to me; I don't really understand who it's for. But it's doing something very interesting, which is bringing together a formal kind of mid-quality, or slop quality, with a mid-distribution, the largest peak of the distribution of ideology and identity. Some of it's animated. There are even physical objects being produced with this kind of aesthetic, I guess you might say.
But again, what you're seeing here is somebody who's gone to a model and said, let's make an image, and it's the first thing that comes out, which means it's kind of the most likely thing, the thing at the peak of the bell curve. And so this idea of inhabiting these statistical distributions and finding identity within them is, I think, the key to understanding what these things are going to do to us at a large scale. We're already seeing it with dating discourse and data visualizations of people's dating-app experiences, or surveys about the values different people have, or the perceptions they have of each other.
When you interact with companies that provide, say, fire insurance or credit, you are engaging with a statistical system. And now, more and more, these systems are being run by machine learning models. To back up a little bit: when the early internet platforms we know now first started, social media, for example, would have a linear, chronological timeline. At a certain point in the 2010s, at the very birth phase of neural media, the back end of those systems began to be run by algorithms. Early machine learning systems were used for content recommendations, shopping recommendations, streaming recommendations. And the idea of modeling you in that statistical space became an important part of how the system worked. These representations are called embeddings. Your location, your vector in this high-dimensional space, your XYZ, et cetera, coordinate, is a location in a statistical distribution that's like a landscape.
The term for it is embedding, but the way to think about it is just like those early Nielsen systems, or just like ad tech on the early internet: the system is sensing you, modeling you, and predicting what you're going to want, or trying to get you to want the certain things that it has. And so the story here is one of increasing fidelity and resolution of those systems, in detail and in time. The crude demographic blocks of broadcast media become web analytics, become the AI back end to a platform, become an agent that's modeling you and interacting with your personal data very closely. This is inevitably going to influence our sense of ourselves, because for every decision we make there will be these factors in the environment: can I get insured, can I get credit, am I considered a risk, am I considered desirable?
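The embedding-based prediction described here can be sketched in a few lines. The vectors and item names below are invented for illustration, not drawn from any real recommendation system:

```python
import math

def cosine_similarity(a, b):
    # Similarity of two embeddings: cosine of the angle between them.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# A made-up user embedding and a tiny catalog of item embeddings.
# Real systems use hundreds or thousands of dimensions, not three.
user = (0.9, 0.1, 0.3)
items = {
    "cat_video": (0.8, 0.2, 0.4),
    "news_clip": (0.1, 0.9, 0.2),
    "workout_app": (0.3, 0.2, 0.9),
}

# Recommend the item whose embedding points most nearly the same way
# as the user's: this is the "predicting what you're going to want" step.
best = max(items, key=lambda name: cosine_similarity(user, items[name]))
print(best)  # cat_video
```

The user's "location in the landscape" is just that vector, and being sensed and modeled means the system continually updates it from behavior.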
According to the statistical model, and according to the algorithm that mediates that statistical model, you might find that your identity is being radically changed socially, because you're not getting dates, because your profile doesn't do X, Y, or Z, et cetera. Or you might not be able to get a job, or live in a certain way, because of how you're perceived by the system. The system is sensing you. And with generative media, it begins to generate in relation to that embedding, so your media environment starts to reflect you in this way.
There are also new biometric tools like the Oura Ring, or BCIs, brain-computer interfaces for doing EEG measurement, or even just the motion sensors in your phone that track your steps, that are now producing a mathematical image of you as an active body in space, and of your health. And these are certainly going to be used as data for machine learning. But inasmuch as that happens, you exist within the field of all possible uses of all possible biometric situations, so I'm now positioned relative to somebody who's super active, or somebody who's not active at all. Now I can see myself not just through how I feel, but through the way that my biometric signals construct me in relation to other people. And I might take the advice that the system gives me; it could say, you need to do this, or you don't need to do that. Now I'm engaging with it, and it's reshaping me, and I'm taking on certain aspects of its understanding of me.
I think the person who probably embodies this the most is Bryan Johnson, famous for his Don't Die campaign, and he's really maxing out the biometric measurement of himself. This is a very self-oriented way of understanding this information.
One of the things I think is the most hopeful, let's say, or the most interesting possibility, where the most potential lies, is actually in training our neural perception systems, our AI systems, our technologies, on non-humans. This is a screenshot from the Project CETI website, the Cetacean Translation Initiative. They do research on whales to try to understand their expressions. The first phase of their project is to capture all this information, and the next is to apply machine learning to try to understand it. And they're doing some of that already. And a number of people in the general field of cetacean research are using algorithms to segment and organize their recordings of these animals and to get a better understanding of them.
Similarly, there are projects like SPUN, the Society for the Protection of Underground Networks, that study mycelia. They have an incredible geospatial data set of mycelium distributions around the Earth. And there are projects like More Than Human Life, which encourages legislation, and the translation of indigenous ontologies into legislation, allowing for things like the granting of legal personhood to forests, for example in Ecuador. Let me just add one more piece: geospatial models. These are from NASA; they map and understand the Earth from above using satellite imagery. There are many types of data that are geospatially located that can be brought together to get a different picture of reality and a different picture of Earth. So in showing you these things, I'm suggesting that where our attention is focused with AI systems is going to have a really profound effect on our sense of identity.
And so I think the important potential within these systems is to amplify our attention. And given that we're halfway through the maturation cycle, according to my little scheme, I think we have the possibility now of actually doing that. There are a number of factors involved in that.
But again, the term that I've put here is embedded identity. Our identity is embedded in these different systems that look at us and locate us in their own terms as an embedding in a high-dimensional space. They sense us, they try to understand us, they map us into the space of all possibilities that they understand, and that embedding space is highly human-centric, it's very product-centric. Could we have an embedding space that understood the Earth better and reflected us within that? Would that give us a different kind of embedded identity and more possibility for what AI could become? Could it unlock new, not just tech trees, but social trees or value trees or ecological understanding?
So to summarize: neural media is high-dimensional, its content is hallucinated and generated, and the identities it produces are embedded. And just to home in on that a little more, these are some of the elements that I believe are an important piece of this embedded identity. There's analytics, there's biometrics; I think we should add non-human sensing and geospatial data. These things already exist, but if we start to look at them together, I think we can get a sense of what kind of embedded identity would be nice to have. And then certain qualities of that: I think we are getting these from a lot of different directions at once, but in terms of AI, and in terms of looking at ourselves through embedding as a model, a prismatic one that includes non-human intelligence of different kinds, it would be relational, it would be perspectival, there would be interdependence, and it would be eco-centric.
Now, these sound like good things, but I just want to remind everyone, in case you caught the Pharmako/pharmakon reference in my first book, that I like to think about these things in terms of poisons. The pharmakon is the poison, the cure, and the scapegoat, but the poison can be a cure depending on the dosage.
And each of these qualities can be positive or negative. An eco-centric perspective could be one in which you fear predators, or one in which you worry about toxins. Interdependence can mean lack, when the species you depend on are no longer there. Relationality can include conflict. So I just want to drive home that there's a possibility here, and my motivation for doing this thinking is that I think we can thread the needle into the next part of the 21st century by looking for these opportunities to push neural media in potentially other directions. And I think we'll leave it right there. Thank you.
Patrick Dowd:
K, thank you so much for that amazing, wide-ranging talk. I want to steer our questions around two areas: one is the idea of neural media as ecology, and the other is neural media and ecology. On the former, neural media as ecology: we see these elements of neural media as something we can consume, something we can create with, and something that has varying kinds of composition. So if we view it as a sort of ecology, or a jungle, how would you encourage people to think about how they consume it, how they create with it, and what it's composed of?
K Allado-McDowell:
Yeah, I think when you say neural media as ecology, what I understand you to mean is that we are in an ecosystem of media, which is also itself in an ecosystem and has relations with the real ecosystem. But when it comes to a media ecosystem where you have entities that can sense you and model you, this is more like a natural ecosystem. And in a natural ecosystem, those entities could be your friends, or they could be predators. And often animals in a natural ecosystem will disguise themselves as other things. This is kind of the bootstrap of a lot of evolution, this idea of being perceived and being misperceived, or guiding misperception. So being in a state of awareness of how these things are perceiving you, and moving with awareness of that, is, I think, an important first step in starting to understand what's going on with these systems.
Another one would be paying attention to your own motivations and getting clear on them. Because if you spend a lot of time in a media environment, especially one that can adapt to you and nudge you in certain directions, you could be manipulated. In any ecosystem, your motivations exist alongside the motivations of others. So trying to get clear on what it is that you want from your engagement with these tools, whether it's to be creative or to get information, and then having a sense of when your motivations are being supported, or when they're being directed elsewhere, is, I think, a really important piece.
Patrick Dowd:
Something I believe many people are struggling with right now is the set of questions around authorship and authenticity vis-à-vis AI. In the past, before AI, it was the case that you'd have to spend a long time writing a book or making a work of art, and now, with the proper direction, that can be achieved very quickly. So how do you think about evolving norms of authorship and authenticity when creating in collaboration with AI?
K Allado-McDowell:
Yeah, it's a really good question. I think one of the hardest things to do with AI is to create a context for it to be meaningful. So I showed a lot of AI slop, just weird kind of melty hands and gross stuff. On its own, it could maybe be visually interesting, but the process of refining slop, or putting slop into the right container, or putting something that's better than slop into the right container, this to me feels like part of the job of an artist working with AI: to give meaning to it. Because essentially, when you have this statistical system and it's just churning out stuff, it doesn't inherently have meaning. Meaning comes from context; it comes from experience. And so that's what we impart to an artwork: the decisions that we make, the ways that we contextualize what's happening, are the things that are read in the artwork.
And so this is part of what works with Pharmako-AI: you can see the decisions being made on the page, and it also provides a context. The book itself is about the context of being in that moment and trying to grapple with this emerging technology, as well as time and ancestry and what's happening with the ecosystem. And COVID was happening then too. But I think this is one big piece of it: how do you put these things into context? Because it's about constructing meaning, which is always happening in collaboration anyway. So it's not a big shock to bring in another vector of meaning production, but you have to make meaning with it. It is collaborative in the sense that meaning-making in art is collaborative, with the viewer, with the context, and this is another piece of that. And if you're thinking in terms of media ecologies, it's always interrelated with other things, and this is another thing to be interrelated with, but the process is the making of meaning.
Patrick Dowd:
So much of our current neural media is trained on the artifacts of human-generated networked media, so it's very reflective of our own neuroses and ways of thinking and being. But as you mentioned, from the feedback people give you after talks, there's a much broader world of intelligence. And I believe this concept you've introduced, of evolving our identity to be seen in an embedded context, is really a sort of Copernican opportunity before us. Just as before we realized that the Earth revolved around the sun, one might have asked what the utility was of an idea on which none of our current reality seemed to depend; in fact, so many inventions and amazing things came from evolving our perspective to be more in tune with the reality of nature. And that's what you're calling for with AI, in the way that AI is trained.
K Allado-McDowell:
Yeah, that's exactly right. I am the most hopeful about things like geospatial models, interspecies understanding, and the idea that we could move our attention away from ourselves. And the parallel to the Copernican revolution would be displacing the human from the center of the universe and seeing it in this more prismatic, relational way. In certain ways this is maybe really obvious for some people, but it's interesting to me that AI forces that conversation. And the way I like to put it sometimes is: you can either get the infinite black-hole TikTok at the end of time, or you can get a better understanding of nature.
And when we talk about AI, there are resource constraints, and companies are actually setting aside their sustainability goals because they know they're not going to make those goals, because of the energy needs of AI. The market has taken something that was being done fairly conservatively even a few years ago and turned it into an arms race for market domination. And that just is what it is. But when we talk about these systems, because of the heavy resource use, we do have to prioritize what is used for what. And I think that there are some things that are... I like cat videos, and I like lots of animal videos. But how do we figure that out? How do we figure out what's the most valuable use of our computation? This is where it goes outside the realm of technology and into the realm of policy, into the realm of shared values, into the realm of guiding ourselves as a species and coordinating as a species. Because this is really the issue I see: it's not an information problem, it's a coordination problem.
Patrick Dowd:
Well, there's the issue of sustainability and then the issue of intelligence, and these things overlap, but you're making a point which relates to your interest in extinction, another major phenomenon occurring on our planet right now. How does your focus on extinction, through the artwork and monuments that you're developing, connect with your interest in making AI more reflective of different forms of non-human intelligence?
K Allado-McDowell:
Yeah, I mean, in a way it's quite simple: it's that same insight from when people were coming to me after a talk to ask about non-human intelligence. It made me reflect and realize that there's an incredible amount of intelligence that has evolved, through a quite costly process in terms of life and the biosphere, to hold, to embody in physical form, intelligence about a biome, about relationships with other species. I'm talking about the actual animal forms, the beings that exist and specialize to live in a certain way.
So when people fetishize the intelligence that's possible with machines, I would look at any animal and say: this thing is literally the physical embodiment of a kind of intelligence. There's lots of great writing about that idea. But if we wanted to maximize intelligence on Earth, we would definitely be focusing on biodiversity, because this is intelligence. It's taken millions of years to accumulate, to find its place, and to exist with other intelligences. And so if you think about the space of all possible intelligence, if you were to carve out all those forms of intelligence and just replace them with AI, it's vastly insufficient.
We're not just talking about the semiotic, in terms of language, but in terms of sensory ability: the way that birds perceive the electromagnetic spectrum, or the ways that spiders do, and can fly on it. I mean, these are miraculous feats of intelligence that can't be reproduced. And so to me, this has actually been a really debilitating thing to focus on as an artist, because it's very hard to even make art in that context. The newest opera project I'm working on: the libretto for the first act is 200,000 names of species, which, according to paleobiologists, are all the species that have ever lived and gone extinct, as far as our knowledge goes.
So I'm trying to create a story, I guess, of all these names that can be sung, and I'll be doing an art installation where people can sing and say those names in a mock-up of a monument that I would like to produce: all those names carved into stone for people thousands of years from now to see, to show that we were aware of what was here, so that even if it's all gone by then, there will be some record. The second act of the opera is the inscription of a new name when, inevitably, a species does go extinct. When I figured out that I could do that, I started feeling really enthusiastic about making art. Before that I was actually quite frustrated, because, not to be depressing about it, but to play with intelligence, to try to evolve this kind of intelligence, at the same time that we're losing all this other intelligence, was just, how can I? You know?
So this was an unlock for me: just to look at the information and say, what is it? I was really scared of extinction. I was scared to even understand it, and then I started looking at the information I had. I told my research assistant, Akim: let's get all the names, let's get all of it, everything. And then you look at it and you're like, well, that's a lot of names. And then after a while you're like, that's nothing. The Earth has been here for so long, and this is just the fossil record, and we just made up these stories. And then you realize the vastness of the biosphere, in time and space, and it is quite a profound liberation from your own belief in what you understand.
Patrick Dowd:
So you're saying that the greater intelligence we may be searching for is not necessarily to be found in the purchasing of more GPUs and energy to power them, but in actual living beings that are currently here on this Earth?
K Allado-McDowell:
I mean, that's the obvious answer. That's the obvious answer. The question is, what do you do knowing that? And that's why I think this idea of focus matters: we're in an attention economy. What is our attention being placed on? Can we place it on that form of intelligence? What would happen if we took the synthetic intelligence we're creating, placed it on the intelligence that's all around us, and then became a part of that? That, to me, is the most exciting possibility for these things. There's a much deeper question of whether we should make technology at all. But if it's going to happen, and I'm not the one who can really stop it, then this is what I'm advocating for. There's a really amazing possibility.
And I think there are some recent signs of applications like this, like in medicine. There's a tool called Co-Scientist that recently came out that accelerated the discovery of a hypothesis: something that took 10 years to do manually could be done in 48 hours. These are the kinds of things. There's the potential of a Renaissance waiting inside these tools, but they have to be focused on the right thing.
Patrick Dowd:
Well, I think this idea of a potential Renaissance waiting behind the kinds of questions that you've been raising for us tonight is a perfect note to end on, and also a great set of questions to be animating our community at the outset of our second quarter-century. K, thank you so much for being our guest tonight.
K Allado-McDowell:
Thank you. Thank you.
Rebecca Lendl:
If you enjoyed this Long Now Talk, head over to longnow.org to check out more Long Now Talks and programs, and of course to become a member and get connected to a whole world of long-term thinking.
Huge thanks to our generous speaker, K Allado-McDowell.
And, as always, thanks to you, our dear listeners, and our thousands of Long Now members and supporters around the globe.
Also a big thanks to Anthropic, our lead sponsor for this year’s Long Now Talks.
And appreciation to our podcast and video producers: Justin Oli-font and Shannon Breen and to our entire team at Long Now who bring Long Now Talks and programs to life.
Today’s music comes from Jason Wool, and Brian Eno’s “January 07003: Bell Studies for the Clock of the Long Now”.
Stay tuned and onward!