Seattle, WA, United States

Baum S.D.,Global Catastrophic Risk Institute | Baum S.D.,Pennsylvania State University | Baum S.D.,Columbia University | Baum S.D.,Blue Marble Space Institute of Science | And 4 more authors.
Environmentalist | Year: 2013

Perceived failure to reduce greenhouse gas emissions has prompted interest in avoiding the harms of climate change via geoengineering, that is, the intentional manipulation of Earth system processes. Perhaps the most promising geoengineering technique is stratospheric aerosol injection (SAI), which reflects incoming solar radiation, thereby lowering surface temperatures. This paper analyzes a scenario in which SAI brings great harm on its own. The scenario is based on the issue of SAI intermittency, in which aerosol injection is halted, sending temperatures rapidly back toward where they would have been without SAI. The rapid temperature increase could be quite damaging, which in turn creates a strong incentive to avoid intermittency. In the scenario, a catastrophic societal collapse eliminates society's ability to continue SAI, despite the incentive. The collapse could be caused by a pandemic, nuclear war, or other global catastrophe. The ensuing intermittency hits a population that is already vulnerable from the initial collapse, making for a double catastrophe. While the outcomes of the double catastrophe are difficult to predict, plausible worst-case scenarios include human extinction. The decision to implement SAI is found to depend on whether global catastrophe is more likely from double catastrophe or from climate change alone. The SAI double catastrophe scenario also strengthens arguments for greenhouse gas emissions reductions and against SAI, as well as for building communities that could be self-sufficient during global catastrophes. Finally, the paper demonstrates the value of integrative, systems-based global catastrophic risk analysis. © 2013 Springer Science+Business Media New York.


Maher Jr. T.M.,Global Catastrophic Risk Institute | Maher Jr. T.M.,Bard College | Baum S.D.,Global Catastrophic Risk Institute
Sustainability (Switzerland) | Year: 2013

Global catastrophes, such as nuclear war, pandemics and ecological collapse threaten the sustainability of human civilization. To date, most work on global catastrophes has focused on preventing the catastrophes, neglecting what happens to any catastrophe survivors. To address this gap in the literature, this paper discusses adaptation to and recovery from global catastrophe. The paper begins by discussing the importance of global catastrophe adaptation and recovery, noting that successful adaptation/recovery could have value even on astronomical scales. The paper then discusses how the adaptation/recovery could proceed and makes connections to several lines of research. Research on resilience theory is considered in detail and used to develop a new method for analyzing the environmental and social stressors that global catastrophe survivors would face. This method can help identify options for increasing survivor resilience and promoting successful adaptation and recovery. A key point is that survivors may exist in small isolated communities disconnected from global trade and, thus, must be able to survive and rebuild on their own. Understanding the conditions facing isolated survivors can help promote successful adaptation and recovery. That said, the processes of global catastrophe adaptation and recovery are highly complex and uncertain; further research would be of great value. © 2013 by the authors.


Denkenberger D.C.,Global Catastrophic Risk Institute | Denkenberger D.C.,Tennessee State University | Pearce J.M.,Michigan Technological University
International Journal of Disaster Risk Science | Year: 2016

The literature suggests there is about a 1 % risk per year of a 10 % global agricultural shortfall due to catastrophes such as a large volcanic eruption, a medium asteroid or comet impact, regional nuclear war, abrupt climate change, and extreme weather causing multiple breadbasket failures. This shortfall has an expected mortality of about 500 million people. To prevent such mass starvation, alternate foods can be deployed that utilize stored biomass. This study developed a model with literature values for variables and, where no values existed, used large error bounds to recognize uncertainty. Then Monte Carlo analysis was performed on three interventions: planning, research, and development. The results show that even the upper bound of USD 400 per life saved by these interventions is far lower than what is typically paid to save a life in a less-developed country. Furthermore, every day of delay on the implementation of these interventions costs 100–40,000 expected lives (number of lives saved multiplied by the probability that alternate foods would be required). These interventions plus training would save 1–300 million expected lives. In general, these solutions would reduce the possibility of civilization collapse, could assist in providing food outside of catastrophic situations, and would result in billions of dollars per year of return. © 2016, The Author(s).
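As a hedged illustration of the Monte Carlo approach the abstract describes, the sketch below samples wide parameter ranges and computes cost per expected life saved. All distributions and numbers are illustrative placeholders, not the paper's inputs.

```python
import numpy as np

# Hedged sketch of a Monte Carlo cost-effectiveness estimate for alternate-food
# interventions. Parameter ranges are illustrative placeholders, NOT the
# paper's values; they are sampled log-uniformly to reflect deep uncertainty.
rng = np.random.default_rng(0)
N = 100_000

def loguniform(lo, hi, size):
    return np.exp(rng.uniform(np.log(lo), np.log(hi), size))

p_shortfall_per_yr = loguniform(1e-3, 3e-2, N)   # annual P(10% agricultural shortfall)
deaths_if_unmitigated = loguniform(1e8, 1e9, N)  # mortality without alternate foods
fraction_saved = rng.uniform(0.1, 0.9, N)        # share of deaths the interventions avert
cost_usd = loguniform(1e7, 1e9, N)               # planning + research + development cost

expected_lives_saved = p_shortfall_per_yr * deaths_if_unmitigated * fraction_saved
cost_per_expected_life = cost_usd / expected_lives_saved

print(f"median USD per expected life saved: {np.median(cost_per_expected_life):,.0f}")
print("5th/95th percentiles:", np.percentile(cost_per_expected_life, [5, 95]))
```

Sampling log-uniformly over wide bounds mirrors the paper's stated use of large error bounds where no literature values exist.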


Baum S.D.,Global Catastrophic Risk Institute | Handoh I.C.,Research Institute for Humanity and Nature
Ecological Economics | Year: 2014

Planetary boundaries (PBs) and global catastrophic risk (GCR) have emerged in recent years as important paradigms for understanding and addressing global threats to humanity and the environment. This article compares the PBs and GCR paradigms and integrates them into a unified PBs-GCR conceptual framework, which we call Boundary Risk for Humanity and Nature (BRIHN). PBs emphasizes global environmental threats, whereas GCR emphasizes threats to human civilization. Both paradigms rate their global threats as top priorities for humanity but lack precision on key aspects of the impacts of the threats. Our integrated BRIHN framework combines elements from both paradigms' treatments of uncertainty and impacts. The BRIHN framework offers PBs a means of handling human impacts and offers GCR a theoretically precise definition of global catastrophe. The BRIHN framework also offers a concise stage for telling a stylized version of the story of humanity and nature co-evolving from the distant past to the present to multiple possible futures. The BRIHN framework is illustrated using the case of disruptions to the global phosphorus biogeochemical cycle. © 2014 Elsevier B.V.


Denkenberger D.C.,Global Catastrophic Risk Institute | Pearce J.M.,Michigan Technological University
Futures | Year: 2015

Mass human starvation is currently likely if global agricultural production is dramatically reduced for several years following a global catastrophe, e.g., super volcanic eruption, asteroid or comet impact, nuclear winter, abrupt climate change, super weed, extirpating crop pathogen, super bacterium, or super crop pest. This study summarizes the severity and probabilities of such scenarios, and provides an order-of-magnitude technical analysis comparing caloric requirements of all humans for 5 years with conversion of existing vegetation and fossil fuels to edible food. Here we present mechanisms for global-scale conversion including natural gas-digesting bacteria, extracting food from leaves, and conversion of fiber by enzymes, mushroom or bacterial growth, or a two-step process involving partial decomposition of fiber by fungi and/or bacteria and feeding the partially decomposed fiber to animals such as beetles, ruminants (cattle, sheep, etc.), rats, and chickens. We perform an analysis to determine the ramp rates for each option; the results show that careful planning and global cooperation could maintain humanity and the bulk of biodiversity. © 2014 Elsevier Ltd.
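The caloric comparison in the abstract is straightforward to reproduce at order-of-magnitude precision. The sketch below assumes a standard ~2,100 kcal/person/day requirement and a placeholder 10% end-to-end conversion efficiency; neither figure is taken from the paper.

```python
# Back-of-envelope global caloric budget for 5 years (assumed round numbers,
# not the paper's exact inputs).
population = 7e9            # people
kcal_per_person_day = 2100  # rough per-capita requirement
years = 5

total_kcal = population * kcal_per_person_day * 365 * years
print(f"5-year requirement: {total_kcal:.2e} kcal")  # ~2.7e16 kcal

# Dry biomass holds roughly 4 kcal/g, i.e. 4e6 kcal per tonne, but conversion
# chains (bacteria, fungi, ruminants) recover only a fraction of that energy.
kcal_per_tonne = 4e6
assumed_efficiency = 0.1    # illustrative end-to-end conversion efficiency

tonnes_biomass = total_kcal / (kcal_per_tonne * assumed_efficiency)
print(f"dry biomass needed at 10% efficiency: {tonnes_biomass:.2e} tonnes")
```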


Barrett A.M.,Global Catastrophic Risk Institute | Baum S.D.,Global Catastrophic Risk Institute | Hostetler K.,Global Catastrophic Risk Institute
Science and Global Security | Year: 2013

This article develops a mathematical modeling framework using fault trees and Poisson processes for analyzing the risks of inadvertent nuclear war from U.S. or Russian misinterpretation of false alarms in early warning systems, and for assessing the potential value of options to reduce the risks of inadvertent nuclear war. The model also uses publicly available information on early warning systems, near-miss incidents, and other factors to estimate probabilities of a U.S.-Russia crisis, the rates of false alarms, and the probabilities that leaders will launch missiles in response to a false alarm. The article discusses results, uncertainties, limitations, and policy implications. Supplemental materials are available for this article. Go to the publisher's online edition of Science & Global Security to view the free online appendix with additional tables and figures. Copyright © Taylor & Francis Group, LLC.
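To make the Poisson-process logic concrete: if false alarms arrive as a Poisson process with rate λ during a crisis of duration T, and each alarm independently escalates to a launch with probability p, then by Poisson thinning the probability of at least one inadvertent launch is 1 − exp(−λpT). The sketch below implements only this structural idea; the numbers are illustrative, not the article's estimates.

```python
import math

def p_inadvertent_launch(alarm_rate_per_year, p_launch_given_alarm, crisis_years):
    """P(at least one inadvertent launch) when false alarms form a Poisson
    process and each alarm independently escalates (Poisson thinning)."""
    effective_rate = alarm_rate_per_year * p_launch_given_alarm
    return 1.0 - math.exp(-effective_rate * crisis_years)

# Illustrative numbers only, not the article's estimates:
print(p_inadvertent_launch(alarm_rate_per_year=4.0,    # serious false alarms/yr
                           p_launch_given_alarm=1e-3,  # escalation probability
                           crisis_years=0.1))          # ~5-week crisis window
```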


Baum S.D.,Global Catastrophic Risk Institute
AI and Society | Year: 2016

This paper discusses means for promoting artificial intelligence (AI) that is designed to be safe and beneficial for society (or simply “beneficial AI”). The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard for social impacts. Two types of measures are available for encouraging the AI field to shift more toward building beneficial AI. Extrinsic measures impose constraints or incentives on AI researchers to induce them to pursue beneficial AI even if they do not want to. Intrinsic measures encourage AI researchers to want to pursue beneficial AI. Prior research focuses on extrinsic measures, but intrinsic measures are at least as important. Indeed, intrinsic factors can determine the success of extrinsic measures. Efforts to promote beneficial AI must consider intrinsic factors by studying the social psychology of AI research communities. © 2016 Springer-Verlag London


Barrett A.M.,Global Catastrophic Risk Institute | Baum S.D.,Global Catastrophic Risk Institute
Journal of Experimental and Theoretical Artificial Intelligence | Year: 2016

An artificial superintelligence (ASI) is an artificial intelligence that is significantly more intelligent than humans in all respects. Whilst ASI does not currently exist, some scholars propose that it could be created sometime in the future, and furthermore that its creation could cause a severe global catastrophe, possibly even resulting in human extinction. Given the high stakes, it is important to analyze ASI risk and factor the risk into decisions related to ASI research and development. This paper presents a graphical model of major pathways to ASI catastrophe, focusing on ASI created via recursive self-improvement. The model uses the established risk and decision analysis modelling paradigms of fault trees and influence diagrams in order to depict combinations of events and conditions that could lead to ASI catastrophe, as well as intervention options that could decrease risks. The events and conditions include select aspects of the ASI itself as well as the human process of ASI research, development and management. Model structure is derived from published literature on ASI risk. The model offers a foundation for rigorous quantitative evaluation and decision-making on the long-term risk of ASI catastrophe. © 2016 Informa UK Limited, trading as Taylor & Francis Group
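As a minimal illustration of the fault-tree arithmetic underlying such a model, assuming independent events; the event names and probabilities here are hypothetical, not values from the paper's model.

```python
# Minimal fault-tree gates under an independence assumption; event names and
# probabilities are hypothetical illustrations, not the paper's model.
def p_and(*ps):  # AND gate: all input events must occur
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):   # OR gate: at least one input event occurs
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

p_asi_created = 0.1        # ASI emerges via recursive self-improvement
p_goals_unsafe = 0.5       # its goals are unsafe for humanity
p_containment_fails = 0.3  # confinement/deterrence measures fail
p_misuse = 0.05            # or the ASI is deliberately misused

p_uncontrolled = p_and(p_goals_unsafe, p_containment_fails)
p_catastrophe = p_and(p_asi_created, p_or(p_uncontrolled, p_misuse))
print(f"illustrative top-event probability: {p_catastrophe:.4f}")
```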


Baum S.D.,Global Catastrophic Risk Institute
Environment Systems and Decisions | Year: 2015

Risk and resilience are important paradigms for analyzing and guiding decisions about uncertain threats. Resilience has sometimes been favored for threats that are unknown, unquantifiable, systemic, and unlikely/catastrophic. This paper addresses the suitability of each paradigm for such threats, finding that they are comparably suitable. Threats are rarely completely unknown or unquantifiable; what limited information is typically available enables the use of both paradigms. Either paradigm can in practice mishandle systemic or unlikely/catastrophic threats, but this is inadequate implementation of the paradigms, not inadequacy of the paradigms themselves. Three examples are described: (a) Venice during the Black Death plague, (b) artificial intelligence (AI), and (c) extraterrestrials. The Venice example suggests that each paradigm can be effective for certain unknown, unquantifiable, systemic, and unlikely/catastrophic threats. The AI and extraterrestrials examples suggest how increasing resilience may be less effective, and reducing threat probability may be more effective, for certain threats that are significantly unknown, unquantifiable, and unlikely/catastrophic. © 2015, Springer Science+Business Media New York.


Baum S.D.,Global Catastrophic Risk Institute
Physica Scripta | Year: 2014

Some emerging technologies promise to significantly improve the human condition, but come with a risk of failure so catastrophic that human civilization may not survive. This article discusses the great downside dilemma posed by the decision of whether or not to use these technologies. The dilemma is: use the technology, and risk the downside of catastrophic failure, or do not use the technology, and suffer through life without it. Historical precedents include the first nuclear weapon test and messaging to extraterrestrial intelligence. Contemporary examples include stratospheric geoengineering, a technology under development in response to global warming, and artificial general intelligence, a technology that could even take over the world. How the dilemma should be resolved depends on the details of each technology's downside risk and on what the human condition would otherwise be. Meanwhile, other technologies do not pose this dilemma, including sustainable design technologies, nuclear fusion power, and space colonization. Decisions on all of these technologies should be made with the long-term interests of human civilization in mind. This paper is part of a series of papers based on presentations at the Emerging Technologies and the Future of Humanity event held at the Royal Swedish Academy of Sciences on 17 March 2014. © 2014 The Royal Swedish Academy of Sciences.
