Much like the Centre for the Study of Existential Risk at Cambridge, the Future of Humanity Institute at Oxford devotes significant effort to grappling with scenarios that could lead to the demise of the human species.
The Institute is headed by Nick Bostrom, a scholar of philosophy, physics, computational neuroscience, and mathematical logic. Aeon Magazine’s Ross Andersen recently spoke with Bostrom and several other researchers at the Institute to ask what kinds of risks we should really be taking seriously:
The risks that keep Bostrom up at night are those for which there are no geological case studies, and no human track record of survival. These risks arise from human technology, a force capable of introducing entirely new phenomena into the world.
Studying risk of any kind leads inevitably to questions of statistics and probability – things human intuition is notoriously bad at comprehending. Fortunately, what nature did not give us, we can still nurture in ourselves. Bostrom is relentless in his mathematical and logical approach to the probability of different possibilities and the utility they afford the human race. Illustrating this utilitarian approach, Andersen paraphrases Bostrom’s explanation of why studying existential risk is so valuable:
We might be 7 billion strong, but we are also a fire hose of future lives, that extinction would choke off forever. The casualties of human extinction would include not only the corpses of the final generation, but also all of our potential descendants, a number that could reach into the trillions.
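One way to make that paraphrased reasoning concrete is as a simple expected-value estimate – an illustrative sketch, not a calculation from the article, and the symbols below are assumptions introduced here rather than Bostrom’s own notation:

$$
\mathbb{E}[\text{lives lost}] \;=\; p_{\text{extinction}} \times N_{\text{future}}
$$

Even a small probability of extinction, $p_{\text{extinction}}$, multiplied by a number of potential future lives $N_{\text{future}}$ running into the trillions, yields an expected loss that dwarfs the roughly 7 billion people alive today – which is why, on this view, even modest reductions in extinction risk carry enormous value.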
Read: Omens by Ross Andersen