The Future of Humanity Institute (FHI) is an interdisciplinary research center at the University of Oxford investigating big-picture questions about humanity and its prospects. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School in Oxford, England, United Kingdom. [1] Its director is philosopher Nick Bostrom, and its research staff and associates include futurist Anders Sandberg, engineer K. Eric Drexler, economist Robin Hanson, and Giving What We Can founder Toby Ord. [2]
Working closely with the Centre for Effective Altruism, the Institute's stated objective is to focus research where it can make the greatest positive difference for humanity in the long term. [3] [4] It engages in a mix of academic and outreach activities with governments, businesses, universities, and other organizations.
History
Nick Bostrom established the Institute in November 2005 as part of the Oxford Martin School, then the James Martin 21st Century School. [1] Between 2008 and 2010, FHI hosted the Global Catastrophic Risks Conference, wrote 22 academic journal articles, and published 34 chapters in academic volumes. FHI researchers have been mentioned over 5,000 times in the media [5] and have given policy advice at the World Economic Forum, to the private and non-profit sector (such as the Macarthur Foundation and the World Health Organization), as well as to governmental bodies in Sweden, Singapore, Belgium, the United Kingdom, and the United States. Bostrom and bioethicist Julian Savulescu also published the book Human Enhancement in March 2009. [6] Most recently, FHI has focused on the dangers of advanced artificial intelligence (AI). In 2014, its researchers published several books on AI risk, including Stuart Armstrong's Smarter Than Us and Bostrom's Superintelligence: Paths, Dangers, Strategies. [7] [8]
Existential risk
The largest topic FHI has spent time exploring is global catastrophic risk, and in particular existential risk. In a 2002 paper, Bostrom defined an "existential risk" as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential". [9] This includes scenarios where humanity is not directly harmed but fails to colonize space and make use of the observable universe's available resources in humanly valuable projects, as discussed in Bostrom's 2003 paper, "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". [10]
Bostrom and Milan Ćirković's 2008 book Global Catastrophic Risks collects essays on a variety of such risks, both natural and anthropogenic. Possible catastrophic risks from nature include super-volcanism, impact events, and energetic astronomical events such as gamma-ray bursts, cosmic rays, solar flares, and supernovae. These dangers are characterized as relatively small and relatively well understood, though pandemics may be exceptions as a result of being more common, and of dovetailing with technological trends. [11] [4]
Synthetic pandemics via weaponized biological agents are given more attention by FHI. Technological outcomes the Institute is particularly interested in include anthropogenic climate change, nuclear warfare and nuclear terrorism, molecular nanotechnology, and artificial general intelligence. In expecting the largest risks to stem from future technologies, and from advanced artificial intelligence in particular, FHI agrees with other existential risk reduction organizations such as the Centre for the Study of Existential Risk and the Machine Intelligence Research Institute. [12] [13] FHI researchers have also studied the impact of technological progress on social and institutional risks, such as totalitarianism, automation-driven unemployment, and information hazards. [14]
Anthropic reasoning
FHI devotes much of its attention to exotic threats that have been little explored by other organizations, and to methodological considerations. The Institute has emphasized anthropic reasoning in its research, as an under-explored area with general epistemological implications.
Anthropic arguments FHI has studied include the doomsday argument, which claims that humanity is likely to go extinct soon because it is unlikely that one is observing a point in human history that is extremely early. Instead, present-day humans are likely to be near the middle of the distribution of humans that will ever live. [11] Bostrom has also popularized the simulation argument, which suggests that if we are likely to avoid existential risks, then humanity and the world around us are not unlikely to be a simulation. [15]
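The doomsday argument can be sketched in Bayesian form; the numbers below are purely illustrative and are not drawn from FHI's publications. Let N be the total number of humans who will ever live and n one's own birth rank; under the self-sampling assumption, the likelihood of observing rank n given N is P(n | N) = 1/N. Comparing two equally probable hypotheses, N_1 = 2×10^11 (extinction relatively soon) and N_2 = 2×10^14 (a long human future), with an observed rank of roughly 10^11, the posterior odds become

\[
\frac{P(N_1 \mid n)}{P(N_2 \mid n)}
  = \frac{P(n \mid N_1)}{P(n \mid N_2)} \cdot \frac{P(N_1)}{P(N_2)}
  = \frac{1/N_1}{1/N_2}
  = \frac{N_2}{N_1}
  = 1000,
\]

so observing an "early" birth rank shifts credence sharply toward the smaller total population, which is the sense in which the argument raises the probability of nearer-term extinction.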
A recurring theme in FHI's research is the Fermi paradox, the surprising absence of observable alien civilizations. Robin Hanson has argued that there must be a "Great Filter" preventing space colonization to account for the paradox. That filter may lie in the past, if intelligent life is much rarer than current biology would predict; or it may lie in the future, if existential risks are larger than is currently recognized.
Human enhancement and rationality
Closely linked to FHI’s work on risk assessment, astronomical waste, and the dangers of future technologies is its work on the promise and risks of human enhancement. The modifications in question may be biological, digital, or sociological, and an emphasis is placed on the most radical hypothesized changes, rather than on the likeliest short-term innovations. FHI’s bioethics research focuses on the potential consequences of gene therapy , life extension , brain implants and brain-computer interfaces , and mind uploading . [16]
FHI has also published work on methods of assessing and enhancing human intelligence and rationality. Its work on human irrationality, as exemplified in cognitive heuristics and biases, includes an ongoing collaboration with Amlin to study the systemic risk arising from biases in modeling. [17] [18]
Selected publications
- Nick Bostrom: Superintelligence: Paths, Dangers, Strategies ISBN 978-0-19-967811-2
- Nick Bostrom and Milan Ćirković: Global Catastrophic Risks ISBN 978-0-19-857050-9
- Anders Sandberg and Nick Bostrom: Whole Brain Emulation: A Roadmap
- Nick Bostrom and Julian Savulescu: Human Enhancement ISBN 0-19-929972-2
- Nick Bostrom: Anthropic Bias: Observation Selection Effects in Science and Philosophy ISBN 0-415-93858-9
See also
- Future of Life Institute
- Global catastrophic risks
- Human enhancement
- Leverhulme Centre for the Future of Intelligence
- Nick Bostrom
- Anders Sandberg
- K. Eric Drexler
- Robin Hanson
- Toby Ord
- Effective altruism
- Superintelligence: Paths, Dangers, Strategies
References
- [1] "Humanity's Future: Future of Humanity Institute". Oxford Martin School. Retrieved 28 March 2014.
- [2] "Staff". Future of Humanity Institute. Retrieved 28 March 2014.
- [3] "About FHI". Future of Humanity Institute. Retrieved 28 March 2014.
- [4] Ross Andersen (25 February 2013). "Omens". Aeon Magazine. Retrieved 28 March 2014.
- [5] "Google News". Google News. Retrieved 30 March 2015.
- [6] Nick Bostrom (18 July 2007). Achievements Report: 2008-2010 (PDF) (Report). Future of Humanity Institute. Archived from the original (PDF) on 21 December 2012. Retrieved 31 March 2014.
- [7] Mark Piesing (17 May 2012). "AI uprising: humans will be outsourced, not obliterated". Wired. Retrieved 31 March 2014.
- [8] Coughlan, Sean (24 April 2013). "How are humans going to become extinct?". BBC News. Retrieved 29 March 2014.
- [9] Nick Bostrom (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology. Retrieved 31 March 2014.
- [10] Nick Bostrom (November 2003). "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". Utilitas. 15 (3): 308-314. doi:10.1017/s0953820800004076. Retrieved 31 March 2014.
- [11] Ross Andersen (6 March 2012). "We're Underestimating the Risk of Human Extinction". The Atlantic. Retrieved 29 March 2014.
- [12] Kate Whitehead (16 March 2014). "Cambridge University study center focuses on risks that could annihilate mankind". South China Morning Post. Retrieved 29 March 2014.
- [13] Jenny Hollander (September 2012). "Oxford Future of Humanity Institute knows what will make us extinct". Bustle. Retrieved 31 March 2014.
- [14] Nick Bostrom. "Information Hazards: A Typology of Potential Harms from Knowledge" (PDF). Future of Humanity Institute. Retrieved 31 March 2014.
- [15] John Tierney (13 August 2007). "Even if Life Is a Computer Simulation...". The New York Times. Retrieved 31 March 2014.
- [16] Anders Sandberg and Nick Bostrom. "Whole Brain Emulation: A Roadmap" (PDF). Future of Humanity Institute. Retrieved 31 March 2014.
- [17] "Amlin and Oxford University launch major research project into the Systemic Risk of Modeling" (Press release). Amlin. 11 February 2014. Retrieved 31 March 2014.
- [18] "Amlin and Oxford University to collaborate on modeling risk study". Continuity, Insurance & Risk Magazine. 11 February 2014. Retrieved 31 March 2014.