Existential Risk
Our thoughts were provoked by a paper that recently appeared in Risk Analysis, the academic journal of the Society for Risk Analysis, to which we belong. The paper[1] was written by two New Zealanders who examined existential risks: events that, in their worst case, eliminate humanity entirely.
The paper presents a risk register of such events.
The paper goes on to briefly discuss each event and then asks why, other than nuclear war, these risks do not receive much attention.
They conclude that this lack of attention stems both from the relatively weak modelling of climate, ecological and societal systems and (thankfully) from the lack of historical data on existential events.
Finally, the authors explore mitigation of each existential risk. The challenges they note include the difficulty of reaching global political consensus, the high uncertainty about event probabilities (or even possibilities) and, importantly, the difficulty of measuring the benefits and costs of various mitigation measures in the absence of credible likelihood values and a measure of the impact of human extinction.
Given the primarily “sudden” nature of the existential risks (vis-à-vis the more gradual and non-existential nature of climate change risk), the authors conclude that “analysis should move beyond the statistics of rhetoric and look at what is actually being done”.
Boyd and Wilson’s logic and message are clearly meant to be both thought-provoking and action-provoking for national and international policy-makers.
The paper is also thought-provoking for us simple risk management folks who advise policy-makers with practical reasoning about risk and decisions. But we are used to having at least some clue about event likelihoods and impacts. To get these, fairly complex modelling of each event seems necessary, similar in complexity to, say, climate change models.
For high-impact events, one should measure the impact after adjusting for risk aversion (or “loss aversion”, if you will). Risk aversion is the omnipresent human cognitive bias of decreasing incremental utility of gain (or, on the downside, increasing incremental disutility of loss).
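To make the bias concrete, here is a minimal numerical sketch of our own (not the paper’s), assuming a logarithmic utility function purely for illustration. With concave utility, the sure amount a person would accept in place of a fair gamble (the certainty equivalent) falls below the gamble’s expected value; the gap is the risk premium.

```python
import math

# A minimal sketch of risk aversion, assuming a logarithmic utility
# function u(w) = ln(w) purely for illustration (not from the paper).
def utility(wealth):
    return math.log(wealth)

# A 50/50 gamble: end up with either 50 or 150 units of wealth.
outcomes = [50.0, 150.0]
probs = [0.5, 0.5]

expected_wealth = sum(p * w for p, w in zip(probs, outcomes))           # 100.0
expected_utility = sum(p * utility(w) for p, w in zip(probs, outcomes))

# Certainty equivalent: the sure amount with the same utility as the
# risky gamble. For concave (risk-averse) utility it falls below the
# expected wealth; the difference is the risk premium.
certainty_equivalent = math.exp(expected_utility)                       # ~86.6
risk_premium = expected_wealth - certainty_equivalent                   # ~13.4

print(f"Expected wealth:      {expected_wealth:.1f}")
print(f"Certainty equivalent: {certainty_equivalent:.1f}")
print(f"Risk premium:         {risk_premium:.1f}")
```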
When one cannot meet the fundamental Maslow need of survival of oneself and one’s family, or even one’s tribe, how does risk aversion apply to that person’s incremental disutility arising from the further extinction of one’s species? How could or should we measure the species’ aversion to extinction risk for the purposes of benefit-cost analyses of existential risk mitigation alternatives?
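As a thought experiment only, the calculation a policy analyst would face might look like the sketch below, where the baseline likelihood, the monetised impact of extinction and the aversion multiplier are all numbers we made up for illustration (none come from Boyd and Wilson). The ranking of the hypothetical alternatives swings entirely on values that, as the authors note, nobody can yet credibly supply.

```python
# A hedged sketch of a benefit-cost comparison for mitigation options,
# using entirely hypothetical numbers and an assumed "extinction aversion"
# multiplier; nothing here comes from Boyd and Wilson's paper.
ANNUAL_EXTINCTION_PROB = 1e-6      # assumed baseline annual likelihood (illustrative)
UNADJUSTED_IMPACT = 1e15           # assumed monetised impact of extinction (illustrative)
AVERSION_MULTIPLIER = 10.0         # assumed risk-aversion weighting on that impact

adjusted_impact = UNADJUSTED_IMPACT * AVERSION_MULTIPLIER

# Each hypothetical option: its annual cost and the fraction of baseline risk it removes.
options = {
    "do nothing":         {"cost": 0.0, "risk_reduction": 0.0},
    "monitoring network": {"cost": 5e8, "risk_reduction": 0.2},
    "global treaty":      {"cost": 5e9, "risk_reduction": 0.6},
}

for name, opt in options.items():
    expected_benefit = ANNUAL_EXTINCTION_PROB * opt["risk_reduction"] * adjusted_impact
    net_benefit = expected_benefit - opt["cost"]
    print(f"{name:20s} expected benefit {expected_benefit:12.3e}  net {net_benefit:12.3e}")
```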
Respectfully submitted, our brains are full.
[1] Matt Boyd and Nick Wilson; “Existential Risks to Humanity Should Concern International Policymakers and More Could Be Done in Considering Them at the International Governance Level”; Risk Analysis, Volume 40, Number 11; November 2020.