In April, Brian Eno wrote to Nassim Nicholas Taleb, asking, “how can we even think about designing for a future that we can’t imagine?”
The letter he sent was the inaugural Longplayer Letter, the first in a series of letters published by Artangel and Jem Finer’s Longplayer – a project to compose and perform a 1,000-year-long piece of music (running now for 13 years).
The letters are written in relay style: responding to Eno’s musings, Taleb addressed his letter to Long Now co-founder Stewart Brand. In it, he proposes a methodology for assessing risks to our planet:
Dear Stewart,
I would like to reply to Brian Eno’s important letter by proposing a methodology to deal with risks to our planet, and I chose you because of your Long Now mission.
First let us put forward the Principle of Fragility As Nonlinear (Concave) Response as a central idea that touches almost everything.
1. PRINCIPLE OF FRAGILITY AS NONLINEAR (CONCAVE) RESPONSE
If I fall from a height of 10 meters I am injured more than 10 times as badly as if I fell from a height of 1 meter, and more than 1,000 times as badly as if I fell from a height of 1 centimeter; hence I am fragile. Every additional meter, up to the point of my destruction, hurts me more than the previous one. This nonlinear response is central for everything on planet earth, from objects to ideas to companies to technologies.
Another example. If I am hit with a big stone I will be harmed a lot more than if I were pelted serially with pebbles of the same total weight.
If you plot this response with harm on the vertical axis and event size on the horizontal, you will notice the plot curving inward, hence the “concave” shape, which in the next figure I compare to a linear response. We can already see that the fragile is harmed disproportionately more by a large event (a Black Swan) than by a moderate one.
Figure 1 – The nonlinear response compared to the linear.
The general principle is as follows:
Everything that is fragile and still in existence (that is, unbroken) will be harmed more by a certain stressor of intensity X than by k times a stressor of intensity X/k, up to the point of breaking.
Why is it a general rule? It has to do with the statistical structure of stressors: small deviations are much, much more frequent than large ones. Look at the coffee cup on the table: millions of earthquakes are recorded every year. If the coffee cup were linearly sensitive to earthquakes, it would not exist at all, as it would long ago have been broken by the accumulation of small tremors.
Anything linear in harm is already gone, and what is left are things that are nonlinear.
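To make the principle concrete, here is a minimal numerical sketch in Python. The quadratic harm function is a hypothetical stand-in (the letter commits to no particular functional form); any accelerating response yields the same qualitative conclusion.

```python
# Minimal sketch of the fragility principle. The harm function h(x) = x**2
# is a hypothetical stand-in for any accelerating (convex-in-harm) response.

def harm(x: float) -> float:
    """Hypothetical harm caused by a single stressor of intensity x."""
    return x ** 2

X, k = 10.0, 10  # one large stressor vs. k small ones of the same total size

one_big = harm(X)             # harm from a single stressor of intensity X
many_small = k * harm(X / k)  # harm from k stressors of intensity X/k

print(f"one stressor of size {X}: harm = {one_big:.0f}")         # 100
print(f"{k} stressors of size {X / k}: harm = {many_small:.0f}")  # 10
# The single large event does 10x the damage of the same total dose
# delivered in small pieces: the fragile is harmed disproportionately
# by large deviations.
```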
Now that we have this principle, let us apply it to life on earth. This is the basis of a non-naive Precautionary Principle that the philosopher Rupert Read and I are in the process of elaborating, with precise policy implications for states and individuals.
Everything flows, by theorems, from the principle of nonlinear response.
2. PRECAUTIONARY RULES
Rule 1 – Size Effects. Everything you do to planet earth is disproportionately more harmful in large quantities than in small ones. Hence we need to split sources of harm as much as we can (provided they don’t interact). If we cut our carbon emissions by, say, 20%, we may reduce the harm by more than 50%. Conversely, we may double our risk with just a 10% increase. A sketch of this arithmetic appears below.
It is wrong to discuss “good” or “bad” without assigning a certain quantity to it. Most things are harmless in some small quantity and harmful in larger ones.
Because of globalization and the uniformization of tastes, we now concentrate our consumption on the same items, say, tuna and wheat, whereas ancient populations were more opportunistic and engaged in “cycling”, picking up whatever was overabundant, so to speak.
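Here is the arithmetic behind Rule 1, under an assumed power-law harm model harm(x) = x**a. The exponent is hypothetical and the letter’s 20%-and-50% figures are illustrative; the sketch simply shows how convex the response must be for those figures to hold.

```python
import math

# Rule 1 under an assumed power-law harm model, harm(x) = x**a.
# The exponent a is hypothetical; the point is that the more convex
# the response, the more a small cut in quantity buys in reduced harm.

def relative_harm(dose_multiplier: float, a: float) -> float:
    """Harm after scaling the dose by dose_multiplier, relative to before."""
    return dose_multiplier ** a

# Convexity at which a 20% cut in quantity halves the harm:
a_half = math.log(0.5) / math.log(0.8)    # ~3.1
# Convexity at which a 10% increase in quantity doubles the risk:
a_double = math.log(2.0) / math.log(1.1)  # ~7.3

print(f"a 20% cut halves the harm once a >= {a_half:.1f}")
print(f"a 10% rise doubles the harm once a >= {a_double:.1f}")
print(f"example, a = 4: a 20% cut leaves {relative_harm(0.8, 4):.0%} of the harm")
```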
Rule 2 – Errors. What is fragile dislikes, beyond a point, the “disorder cluster”: volatility, variability, error, time, randomness, and stressors (the “Fragility” Theorem).
This rule means that we can —and should— treat errors as random variables. And we can treat what we don’t know —including potential threats— as random variables as well. We live in a world of higher unpredictability than we tend to believe. We have never been able to predict our own errors, and things will not change any time soon. But we can consider types of errors within the framework presented here.
Now, for mathematical reasons (a mechanism called the “Lindy Effect”) linked to the relationship between time and fragility, Mother Nature is vastly “wiser”, so to speak, than humans, as time has a lot of value in detecting what is breakable and what is not. Time is also a bullshit detector. Nothing humans have introduced in modern times has made us unconditionally better without unpredictable side effects, ones that are usually detected with considerable delay (trans fats, steroids, tobacco, Thalidomide, etc.).
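A small Monte Carlo sketch of the Lindy Effect, under the assumption (hypothetical here) that lifetimes of nonperishables follow a power law: the expected remaining life then grows with age, unlike the memoryless exponential case.

```python
import numpy as np

# Lindy Effect under an assumed Pareto (power-law) lifetime distribution;
# the tail index alpha = 3 and the unit minimum life are hypothetical.
# For a Pareto, E[remaining life | survived to t] grows linearly with t,
# whereas an exponential (memoryless) lifetime would show no growth.

rng = np.random.default_rng(42)
lifetimes = 1.0 + rng.pareto(3.0, size=2_000_000)  # classical Pareto, x_min = 1

for age in (1, 2, 4, 8):
    remaining = lifetimes[lifetimes > age] - age
    print(f"survived to age {age}: mean remaining life ~ {remaining.mean():.1f}")
# Each doubling of age roughly doubles the expected remaining life:
# having lasted is itself evidence of robustness.
```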
Rule 3 – Decentralizing Variations (the 1/N rule). Mother Nature produces small, isolated, generally independent variations (technically belonging to the thin-tailed category, or “Mediocristan”), while humans produce fewer but larger ones (technically the fat-tailed category, or “Extremistan”). In other words, nature is a sum of micro variations (with, on occasion, larger ones), while human systems tend to create macro shocks.
By a statistical argument, had nature not produced thin-tailed variations, we would not be here today: a single variation out of the trillions, perhaps trillions of trillions, would have terminated life on the planet.
The next two figures show the difference between the two statistical structures.
Figure 2 – Tinkering, Bottom-Up, Broad Design. Mother Nature: no single variation represents a large share of the sum of the total variations. Even occasional mass extinctions are a blip in the total variations.
Figure 3 – Top-Down, Concentrated Design. Human-made clustering of variations, where a single deviation will eventually dominate the sum.
Now apply the Principle of Fragility As Nonlinear (Concave) Response to Figures 2 and 3. As you can see, a large deviation harms a lot more than the cumulative effect of small ones, because of concavity.
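The contrast in Figures 2 and 3 can be reproduced with a short simulation: the share of the single largest deviation in the total. The particular distributions (a Gaussian for Mediocristan, a Pareto with tail index 1.1 for Extremistan) are illustrative choices, not ones the letter specifies.

```python
import numpy as np

# Share of the single largest deviation in the sum of all deviations,
# thin-tailed ("Mediocristan") vs. fat-tailed ("Extremistan").
# The distributions below are illustrative stand-ins for the two regimes.

rng = np.random.default_rng(0)
n = 100_000

thin = np.abs(rng.normal(size=n))    # Gaussian magnitudes: thin tails
fat = 1.0 + rng.pareto(1.1, size=n)  # Pareto, tail index 1.1: fat tails

for name, x in (("thin-tailed", thin), ("fat-tailed ", fat)):
    print(f"{name}: largest single deviation = {x.max() / x.sum():.2%} of total")
# Thin-tailed: the maximum is a negligible blip in the total.
# Fat-tailed: one deviation can account for a sizeable share of the sum,
# and by concavity that single deviation does disproportionate harm.
```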
This in a nutshell explains why a decentralized system is more effective than one that is command-and-control and bureaucratic in style: errors remain decentralized and do not spread. It also explains why large corporations are problematic, particularly when powerful enough to lobby their way into state support.
This method is called the 1/N rule of maximal diversification of sources of problems, a general rule I apply when confronting decisions in fat-tailed domains.
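Under the same hypothetical quadratic harm function as above, the 1/N rule has a one-line justification: splitting a fixed exposure across N independent, non-interacting sources divides the total harm by N.

```python
# The 1/N rule with the hypothetical convex harm function h(x) = x**2:
# N * (E/N)**2 = E**2 / N, so total harm shrinks as sources multiply.

def total_harm(E: float, N: int) -> float:
    """Total harm when exposure E is split evenly across N sources."""
    return N * (E / N) ** 2

E = 100.0
for N in (1, 2, 10, 100):
    print(f"N = {N:>3} independent sources: total harm = {total_harm(E, N):,.0f}")
# N =   1: 10,000   N =   2: 5,000   N =  10: 1,000   N = 100: 100
```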
Rule 4 – Nature and Evidence. Nature is a better statistician than humans, having produced trillions of “errors” or variations without blowing up; it is a much better risk manager (thanks to the Lindy Effect). What people call the “naturalistic fallacy” applies in the moral domain, not in the statistical or risk domains. Nature is certainly not optimal, but it has trillions of times the sample evidence of humans, and it is still around. It is a matter of a long multidimensional track record versus a short low-dimensional one.
In a complex system it is impossible to see the consequences of a positive action (from the Bar-Yam theorem), so one needs, like nature, to keep errors isolated and thin-tailed.
Implication 1 (Burden of Evidence). The burden of evidence is not on nature but on the humans disrupting it top-down, who must prove that their errors don’t spread and don’t carry consequences. Absence of evidence is vastly more nonlinear than evidence of absence. So if someone asks “do you have evidence that I am harming the planet?”, ignore him: he should be the one producing evidence, not you. It is shocking how often people get the burden of evidence the wrong way around.
Implication 2 (Via Negativa). If we can’t predict the effects of a positive action (adding something new), we can predict the effect of removing a substance that has not historically been part of the system (smoking, carbon pollution, carbs in diets).
3. POLICY IMPLICATIONS
This tool of analysis is more robust than current climate modeling, as it is anticipatory rather than backward-fitting. The policy implications are:
Genetically Modified Organisms (GMOs). Top-down modifications to the system (through GMOs) are categorically and statistically different from bottom-up ones (regular farming, progressive tinkering with crops, etc.). To borrow from Rupert Read, there is no comparison between the tinkering of selective breeding and the top-down engineering of taking a gene from a fish and putting it into a tomato. Saying that such a product is natural misses the statistical process by which things become “natural”.
What people miss is that the modification of crops impacts everyone and exports the error from the local to the global. I do not wish to pay, or have my descendants pay, for errors by executives of Monsanto. We should apply the precautionary principle there (our non-naive version), simply because we would discover errors only after considerable damage.
Nuclear. In large quantities we should worry about an unseen risk from nuclear energy. In small quantities it may be OK. How small is something we should determine, making sure threats remain local. Keep in mind that small mistakes with the storage of nuclear waste are compounded by the length of time the material stays around. The same with fossil fuels. The same with other sources of pollution.
But certainly not GMOs, because their risk is not local. Invoking the risk of “famine” is a poor strategy, no different from urging people to play Russian roulette in order to get out of poverty. And calling the GMO approach “scientific” betrays a very poor, indeed warped, understanding of probabilistic payoffs and risk management.
The general idea is that we should limit pollution to small, very small, sources and multiply their number, even if the “scientists” promoting them deem any one of them safe.
**********
There is a class of irreversible systemic risks that show up too late and that I do not believe are worth bearing. Further, these tend to harm people other than those who profit from them. So here is my closing quandary.
The problem of execution: so far we’ve outlined a policy, not how to implement it. Now, as a localist fearful of the centralized top-down state, I wish to live in a society that functions with statistical properties similar to those of nature, with small, thin-tailed, non-spreading mistakes: an environment in which the so-called “wisdom of crowds” works well and state intervention is limited to the enforcement of laws (and of contracts).
Indeed, we should worry about the lobby-infested state, given the historical tendency of bureaucrats to produce macro harm (wars, disastrous farming policies, crop subsidies encouraging the spread of corn syrup, etc.). But there exists an environment that is not quite that of the “wisdom of crowds”, in which spontaneous corrections are not possible and legal liabilities are difficult to identify. I’ve discussed this in my book Antifragile: some people have an asymmetric payoff at the expense of society, keeping the profits and transferring the harm to others.
In general, the solution is to move from regulation to penalties, imposing skin-in-the-game-style methods to penalize those who play with our collective safety, no different from our treatment of terrorist threats and dangers to our security. But in the presence of systemic, branching-out consequences, the solution may be to rely on the state to ban harm to citizens (via negativa style) in areas where legal liabilities may not be obvious and easy to track, particularly harm hundreds of years into the future. For the place of the state is not to get distracted trying to promote things and thereby concentrate errors, but to protect our safety. It is hard to understand how we can live in a world where minor risks, say marijuana or other drugs, are banned by states, while systemic threats such as those represented by GMOs are encouraged by them. What is proposed here is a mechanism of subsidiarity: the only function of the state is to do things that cannot be solved otherwise. But then, it should do them well.
**********
I thank Brian Eno for the letter and for making me aware of all these difficulties. I hope that the principle of fragility helps you, Stewart, in your noble mission to ensure longevity for the planet and the human race. We are not that many Extremistan-style mistakes away from extinction. I therefore sign this letter by adopting your style of adding a 0 to the calendar date:
Nassim Nicholas Taleb,
July 3, 02013

References:
Bar-Yam, Y., 1997, Dynamics of Complex Systems, Westview Press, p. 752.
Taleb, N. N., 2012, Antifragile: Things that Gain From Disorder, Penguin and Random House.
Taleb, N. N., and Douady, R., 2012, Mathematical Definition, Mapping, and Detection of (Anti)Fragility, forthcoming in Quantitative Finance. Preprint: http://arxiv.org/abs/1208.1189
With thanks to William Goodlad.
© Nassim Nicholas Taleb, 2013
Future letters will be published on the Longplayer site, the Long Now blog and Artangel’s site. Please leave comments, if you have them, on the Longplayer site.