The Artangel Longplayer Letters: Stewart Brand writes to Esther Dyson

In July, Nassim Nicholas Taleb wrote a letter to Long Now co-founder Stewart Brand as part of the Artangel Longplayer Letters series. The series is a relay-style correspondence: the first letter was written by Brian Eno to Taleb, Taleb then wrote to Brand, and Brand’s response is now addressed to Esther Dyson, who will respond with a letter to a recipient of her choosing.

The discussion thus far has focused on how humanity can increase technological capacity to meet real global needs without incurring catastrophic unintended consequences.

Dear Esther,

Ghosts don’t exist, but ghost stories sure do. We love frightening ourselves with narratives built around a horrifying logic that emerges with the telling of the tale, ideally capped with a moral lesson.

W. W. Jacobs’s “The Monkey’s Paw” is a three-wishes fable in which innocent-seeming wishes go hideously astray. A mother mad with grief wishes her dead son alive again. When the knock comes at the door, the father realizes that the thing knocking is horribly mangled and rotted, and he uses the third wish to send it away. Powers that appear benign, we learn, can have unintended consequences.

One of the classics is Mary Shelley’s Frankenstein; or, The Modern Prometheus, born in part from her belief in “the Romantic ideal that misused power could destroy society.” (Wikipedia) The ambitious scientist Victor Frankenstein, trying to create life, creates a monster. “I saw the pale student of unhallowed arts,” Shelley recalled, “kneeling beside the thing he had put together. I saw the hideous phantasm of a man stretched out, and then, on the working of some powerful engine, show signs of life, and stir with an uneasy, half vital motion.” We shiver, and we learn the fruits of hubris. The monster kills Victor Frankenstein’s friends and family and blights his life.

What happens when we apply stories like these to thinking about complex issues such as how to deal with new technologies? Nassim Taleb advises that we watch out for what he calls the “narrative fallacy”:

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding. (The Black Swan, p. 43)

For illustration, Taleb notes how the bare facts “The king died and the queen died” become much more compelling, and perhaps misleading, when woven into a story: “The king died, and then the queen died of grief.”

Daniel Kahneman has dissected our cognitive bias for simple stories over complex facts. Elaborating on Taleb’s idea of the narrative fallacy, he writes:

The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen. Any recent salient event is a candidate to become the kernel of a causative narrative. Taleb suggests that we humans constantly fool ourselves by constructing flimsy accounts of the past and believing they are true. (Thinking, Fast and Slow, p. 199)

Such flimsy accounts and foolishness are the norm. We haunt ourselves with ghost stories, often in the service of bigger simplistic narratives such as “The Tragedy of Human History” or “The Greed of Business Titans” or “The Hubris of Arrogant Scientists.”

The habit of viewing with alarm and condemnation yields the satisfaction of a closed system, always self-coherent, but it has to suppress curiosity, because, as Kahneman points out, “it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” (Thinking, p. 201)

In this letter I want to make the case for an opposite approach, a non-narrative embrace of complexity through close observation of how skeins of related evidence-based arguments play out over time.

Conveniently, the previous correspondents in the series of letters you are joining (Brian Eno and Nassim Taleb) wrote about matters (nuclear power and GMOs) that I studied in some detail for my book, Whole Earth Discipline (02010). To make his general point about always defaulting to the longer-term frame of reference in order to think most usefully about something, Brian notes that “Fukushima” has become a classic instance of the “lurching from one panic to another” that characterizes short-term thinking, with the result that some nations are turning away from nuclear power, the technology that offers the best hope of averting the long-term calamity of climate change.

To my eye, the ghost in the Fukushima ghost story is the belief that all radiation is harmful. Horribly, we can’t see it or smell it, but it can kill us. (In fact, radioactivity is so easy to detect accurately with dosimeters that no deaths or illness resulted from exposure at Fukushima.) The ghost replies, “I’ll haunt you forever! Even low dosage will kill you with cancer sometime in the future.” (In fact, life evolved from the very beginning to manage the low dosage of background radiation, and none of the many long-term epidemiological studies of moderate increases in radiation have detected measurable cancer effects.)

Nassim’s letter takes on the important mission of helping develop antifragility at the societal level. He encourages widespread small-scale tinkering, where failures remain local but successes can be adopted widely, versus top-down large-scale tinkering that can unleash systemic threats that are very Black Swans indeed. He cites GMO crops as such a top-down threat.

I agree profoundly with his goal, and not a bit with his example. The mechanism behind GMOs is moving a gene from one species to another. This has been going on at a massive scale among all microbes (meaning, most of the biomass on Earth) for 3.8 billion years; it goes on hourly and massively in our own guts. It is one of the most bottom-up mechanisms in all of life. Human bioengineers have been doing modest versions of the same thing since the mid-seventies. The technique is now the engine for medicines such as insulin and artemisinin, foods such as cheese and beer, and crops such as corn, soybeans, and sugar beets. Countless tons have been consumed by billions of humans. Exactly no harmful effects have been proven, despite constant efforts to find some.

Nassim vaunts “mother nature” as the wise source of safe small-scale tinkering. There are indeed Black Swans—civilization-scale systemic threats—that have come from genetic tinkering. Every one of them was concocted by mother nature—bubonic plague, the 1918 flu, AIDS, malaria, smallpox, and dozens more. No new diseases whatever have come from human laboratories. Cures have, however. Smallpox is gone now, thanks to top-down efforts by science and government. Guinea worm is about to be eradicated permanently. Hopes are high to do the same with polio and even malaria. In the domain of disease, science is antifragile.

The same is true in agriculture. The science of genetic engineering is far more precise than blind selective breeding, and for that reason it is even safer.

I think that the ghost in the GMO ghost story is a misplaced idea of contagion. Any transferred gene, people imagine, might be like a loose plague virus. It might infect everything, or it might hide for years and then emerge catastrophically. But genes don’t work like that. They are nothing but extremely specific tools, operative in extremely specific organisms. A gene is not a germ and cannot act like a germ.

Nassim invokes what he calls a “non-naive Precautionary Principle” to warn about all manner of human innovation. Daniel Kahneman takes an opposing view:

As the jurist Cass Sunstein points out, the precautionary principle is costly, and when interpreted strictly it can be paralyzing. He mentions an impressive list of innovations that would not have passed the test, including “airplanes, air conditioning, antibiotics, automobiles, chlorine, the measles vaccine, open-heart surgery, radio, refrigeration, smallpox vaccine, and X-rays.” The strong version of the precautionary principle is obviously untenable. (Thinking, p. 351)

To achieve Nassim’s goal of an antifragile society, I think we can build on his core idea, which is that, over time, whatever is fragile inevitably breaks, while systems that are antifragile use time to grow stronger. The question is, how do we mix innovative boldness with caution in a way that gradually reduces fragile ideas and systems while promoting antifragile ideas and systems? How do we think ahead without paralyzing ourselves with ghost stories, or indeed with any simplistic narrative?

I’ve been proposing a process I call “Cautionary Vigilance.” It’s a form of issue mapping. Any new technology, any innovation, can be thought through by dissecting the full range of its complexity into an array of specific arguments whose outcomes are determined by evidence that emerges over time. The narrative fallacy is headed off with the open-minded embrace of complexity. Paralysis is headed off by focusing on arguments that can have outcomes. Policy can be guided toward antifragility by ongoing net assessment of the aggregate direction of the arguments over time. Are the worries proving out more than the hopes?

With nuclear power, radiation is just one of many issues to bear in mind while assessing benefits and harms. There are matters of air pollution; of greenhouse gases; of cost; of new designs; of fuel type; of waste storage; of weapons proliferation; et cetera. The list is large but finite. The same goes for GMOs. Along with the health questions are considerations of productivity and land sparing; of pesticide use; of herbicide use; of no-till agriculture; of medicinal foods; of more precise techniques (synthetic biology); of adaptive weeds and pests; of gene flow to other crops; et cetera.
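If I were to make that bookkeeping concrete, purely as an illustration with invented scores, an issue map might be kept as something like the sketch below. Each argument accumulates evidence over time (+1 when a hope proves out, -1 when a worry does), and the net assessment reads off the aggregate direction across all the arguments that have outcomes so far. The argument names are taken from the lists above; everything else here is hypothetical.

```python
# A toy "issue map" for Cautionary Vigilance: each argument about a
# technology is tracked separately, and its score moves as evidence arrives.
from dataclasses import dataclass, field

@dataclass
class Argument:
    name: str                                      # e.g. "waste storage"
    evidence: list = field(default_factory=list)   # +1 hope confirmed, -1 worry confirmed

    def direction(self) -> float:
        """Net direction of the evidence so far; 0.0 if none has arrived yet."""
        return sum(self.evidence) / len(self.evidence) if self.evidence else 0.0

def net_assessment(arguments: list) -> float:
    """Aggregate direction across all arguments that have outcomes:
    are the worries proving out more than the hopes?"""
    scored = [a for a in arguments if a.evidence]
    return sum(a.direction() for a in scored) / len(scored) if scored else 0.0

# Invented scores, for illustration only.
issue_map = [
    Argument("air pollution", [+1, +1]),
    Argument("waste storage", [-1, +1]),
    Argument("weapons proliferation", []),   # no outcome yet: leave it open
]
print(f"net direction: {net_assessment(issue_map):+.2f}")  # > 0: hopes are proving out
```

The point of keeping the arguments separate is exactly the anti-narrative discipline: no single story gets to stand in for the whole list, and an argument with no evidence yet stays open rather than being filled in by imagination.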

The arguments that interest nuclear engineers now are completely different from the arguments of the 1950s, when the technology was first developing, but the public discourse has not kept up. Likewise with GMOs. Agriculture professionals who had one set of worries and hopes in the 1990s have a quite different set now, but the public debate seems stuck in 1996. “Time,” Nassim noted, “is a bullshit detector.” As evidence accumulates, discussion moves on.

Ideology and ghost stories are timeless. What I’m proposing is the difference between fiction and nonfiction, between imagination and reporting. The questions about What-Might-Happen convert, over time, into answers about What-Happened. As that occurs, our hopes and worries about What-Might-Happen should shift, building on the new baseline of What-Happened.

Esther, you (and your father Freeman and brother George) have been exceptionally insightful about emerging technologies and ways to think ahead about them in a long-term framework. You’ve now read my thoughts, Nassim Taleb’s, and Brian Eno’s.

How do you think responsible foresight works best?

Fondly as ever,

Stewart

Future letters will be published on the Longplayer site, the Long Now blog, and Artangel’s site. Please leave comments, if you have them, on the Longplayer site.
