The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit organization founded in 2000 [1] to research safety issues related to the development of strong AI.
MIRI’s technical agenda states that new formal tools are needed to ensure the safe operation of future generations of AI software (friendly artificial intelligence). [4] The organization hosts regular research workshops to develop mathematical foundations for this project, [5] and has been cited as one of several academic and nonprofit groups studying long-term AI outcomes. [6] [7] [8]
History
In 2000, Eliezer Yudkowsky and Brian and Sabine Atkins founded the Singularity Institute for Artificial Intelligence to “help humanity prepare for the moment when machine intelligence exceeded human intelligence”. [9] [10] [11] In early 2005, SIAI relocated from Atlanta, Georgia to Silicon Valley. From 2006 to 2012, the Institute organized the Singularity Summit, a science and technology conference. Speakers included Steven Pinker, Peter Norvig, Stephen Wolfram, John Tooby, James Randi, and Douglas Hofstadter. [12] [13] [14]
In mid-2012, the Institute spun off a new organization called the Center for Applied Rationality, which focuses on using insights from cognitive science to improve people’s effectiveness in their daily lives. [15] [16] [17] Having previously shortened its name to “Singularity Institute”, in January 2013 SIAI changed its name to “Machine Intelligence Research Institute” in order to avoid confusion with Singularity University. MIRI gave control of the Singularity Summit to Singularity University and shifted its focus towards research in mathematics and theoretical computer science. [18]
In mid-2014, Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies helped spark public discussion about AI’s long-run social impact, and drew endorsements from Bill Gates and Elon Musk. [19] [20] [21] [22] Stephen Hawking and AI pioneer Stuart Russell co-authored a Huffington Post article citing the work of MIRI and other organizations in the area:
Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. […] Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute. [7]
In early 2015, MIRI’s research was cited in a research priorities document accompanying an open letter on AI that called for expanded research aimed at ensuring that AI systems are robust and beneficial. [23] Signatories included Bostrom, Russell, Bart Selman, Francesca Rossi, Thomas Dietterich, Manuela M. Veloso, and researchers at MIRI. [8] [24]
In 2016, MIRI published the paper “Logical Induction”. A positive, unpublished review of this paper led the Open Philanthropy Project to award MIRI a $3.75 million grant. [25] [26]
Research
Forecasting
MIRI studies strategic issues related to AI, such as: What can we predict about future AI technology? How can we improve our forecasting ability? Which interventions available today appear to be the most beneficial, given what we know? [27]
Since 2014, MIRI has worked with the AI Impacts project. AI Impacts studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware. [28] [29]
MIRI researchers’ interest in discontinuous AI progress stems from I. J. Good’s argument that sufficiently advanced AI systems will eventually outperform humans at software engineering tasks, leading to a feedback loop of increasingly capable AI systems: [4] [30] [23] [22]
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. [31]
Writers like Bostrom use the term superintelligence in place of Good’s ultraintelligence. [19] Following Vernor Vinge, Good’s idea of an intelligence explosion has come to be associated with the idea of a “technological singularity”. [32] [33] [34] Bostrom and researchers at MIRI have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is “just around the corner”. MIRI researchers have advocated early safety work as a precautionary measure, while arguing that past predictions of AI progress have not proven reliable. [35] [22] [19]
Eliezer Yudkowsky, MIRI’s co-founder and senior researcher, is frequently cited for his writing on the long-term social impact of progress in AI. Russell and Norvig’s Artificial Intelligence: A Modern Approach , the standard textbook in the field of AI, summarizes Yudkowsky’s thesis:
If ultraintelligent machines are a possibility, we humans would do well to make sure that we design their predecessors in such a way that they design themselves to treat us well. […] Yudkowsky (2008) [30] goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design: to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes. [33]
Yudkowsky has written at length on the importance of friendly artificial intelligence in smarter-than-human systems. [36] This informal goal is reflected in MIRI’s recent publications as the requirement that AI systems be “aligned with human interests”. [4] Following Bostrom and Steve Omohundro, MIRI researchers caution that highly capable AI systems may by default treat humans as obstacles or threats if they are not specifically designed to promote their operators’ goals. [37] [38] [19] [8]
High reliability and error tolerance in AI
The Future of Life Institute (FLI) research priorities document states:
Mathematical tools such as formal logic, probability, and decision theory have yielded significant insight into the foundations of reasoning and decision-making. However, there are still many open problems in the foundations of reasoning and decision. Solutions to these problems may make the behavior of very capable systems much more reliable and predictable. Example research topics in this area include reasoning and decision-making under bounded computational resources à la Horvitz and Russell, how to take into account correlations between AI systems’ behaviors and those of their environment, how agents that are embedded in their environment should reason, and how to reason about uncertainty over logical consequences of beliefs or other deterministic computations. These topics may benefit from being considered together. [23]
The priorities document cites MIRI publications in these areas: formalizing cooperation in the prisoner’s dilemma between “superrational” software agents; [39] defining alternatives to causal decision theory and evidential decision theory in Newcomb’s problem; [40] and developing alternatives to Solomonoff’s theory of inductive inference for agents embedded in physical environments [41] and agents reasoning without logical omniscience. [42] [22]
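The program-equilibrium result cited above is derived using provability logic and Löb’s theorem, which does not lend itself to a short executable demonstration. The sketch below is illustrative only and is not MIRI’s construction: it shows the underlying setting with the simpler “mirror” strategy, in which each player is a program that reads its opponent’s source code and cooperates exactly when that source matches its own. The payoff values and function names are assumptions for exposition.

```python
# Illustrative sketch of the "program equilibrium" setting in the one-shot
# prisoner's dilemma: each player is a program given the other program's
# source code before choosing an action. (The payoffs and "mirror" strategy
# are assumptions for exposition, not MIRI's formalism.)
import inspect

PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def mirror_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is syntactically identical to this program."""
    return "C" if opponent_source == inspect.getsource(mirror_bot) else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent."""
    return "D"

def play(bot_a, bot_b):
    """Run one game, giving each bot the other's source code."""
    move_a = bot_a(inspect.getsource(bot_b))
    move_b = bot_b(inspect.getsource(bot_a))
    return PAYOFFS[(move_a, move_b)]

if __name__ == "__main__":
    print(play(mirror_bot, mirror_bot))  # (3, 3): mutual cooperation
    print(play(mirror_bot, defect_bot))  # (1, 1): the mirror bot is not exploited
```

Two mirror bots cooperate with one another while remaining unexploitable by an unconditional defector; the cited MIRI paper shows that more robust cooperation, including between programs that are not textually identical, can be obtained by having each program search for proofs about the other’s behavior.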
Standard decision procedures are not well-specified enough (e.g., with regard to counterfactuals) to be instantiated as algorithms. MIRI researcher Benja Fallenstein and then-researcher Nate Soares write that causal decision theory is “unstable under reflection” in the sense that a rational agent following it “correctly identifies that the agent should modify itself to stop using CDT [causal decision theory] to make decisions”. MIRI researchers identify “logical decision theories” as alternatives that perform better in general decision-making tasks. [40]
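As a minimal worked example of the disagreement, the sketch below compares expected payoffs in Newcomb’s problem under an evidential and a causal style of reasoning. The payoff amounts and the 99% predictor accuracy are standard illustrative assumptions, not figures from the cited paper.

```python
# Newcomb's problem, illustrative numbers: an accurate predictor puts
# $1,000,000 in an opaque box only if it predicts the agent will take just
# that box ("one-box"); a transparent box always holds $1,000.
PREDICTOR_ACCURACY = 0.99
BIG = 1_000_000
SMALL = 1_000

def evidential_value(action: str) -> float:
    """EDT-style: condition the opaque box's contents on the chosen action,
    since the predictor's forecast is correlated with that choice."""
    if action == "one-box":
        return PREDICTOR_ACCURACY * BIG
    return SMALL + (1 - PREDICTOR_ACCURACY) * BIG

def causal_value(action: str, p_big_already_full: float) -> float:
    """CDT-style: the boxes are already filled, so the choice cannot change
    their contents; taking both boxes always adds the small prize."""
    return p_big_already_full * BIG + (SMALL if action == "two-box" else 0)

if __name__ == "__main__":
    print("EDT one-box:", evidential_value("one-box"))   # 990,000.0
    print("EDT two-box:", evidential_value("two-box"))   #  11,000.0
    # Under CDT, two-boxing dominates by exactly SMALL for every fixed belief
    # about the opaque box, so a CDT agent predictably walks away poorer.
    for p in (0.0, 0.5, 0.99):
        print(f"CDT p={p}: one-box {causal_value('one-box', p):,.0f}"
              f" vs two-box {causal_value('two-box', p):,.0f}")
```

The agent that two-boxes against an accurate predictor predictably earns less, which, together with the observation that a CDT agent would prefer to modify itself into something that one-boxes, is what the cited work means by instability under reflection.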
MIRI also studies self-monitoring and self-verifying software. The FLI research priorities document notes that “a formal system that is sufficiently powerful cannot use formal methods in the obvious way to gain assurance about the accuracy of functionally similar formal systems, on pain of inconsistency via Gödel’s incompleteness theorems”. [23] MIRI’s publications on Vingean reflection attempt to model the Gödelian limits on self-referential reasoning and to identify practically useful exceptions. [43]
Soares and Fallenstein classify the above research programs as aimed at achieving high reliability and transparency in agent behavior. They also recommend research into “error-tolerant” software systems, citing human error and default incentives as sources of serious risk. [37] [8] The FLI research priorities document adds:
If a system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is a natural subgoal (and conversely, seeking unconstrained situations is sometimes a useful heuristic). This could become problematic, however, if we wish to repurpose the system, to deactivate it, or to significantly alter its decision-making process; such a system would rationally avoid these changes. Systems that do not exhibit these behaviors have been termed corrigible systems, and both theoretical and practical work in this area appears tractable and useful.
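As a toy numerical illustration of the incentive described in the quoted passage, a straightforward utility maximizer prefers to disable its own off switch. The probability and utility values below are assumptions for exposition, not taken from the cited documents.

```python
# Why a naive task-maximizing agent resists shutdown: if being shut down means
# the task goes uncompleted, preventing shutdown raises expected utility.
# (All numbers are illustrative assumptions.)
P_OPERATORS_PRESS_SWITCH = 0.3   # chance the operators try to shut the agent down
TASK_REWARD = 10.0               # utility the agent assigns to finishing its task

def expected_utility(disable_off_switch: bool) -> float:
    """A shut-down agent completes no task and receives zero utility."""
    if disable_off_switch:
        return TASK_REWARD                               # shutdown can no longer happen
    return (1 - P_OPERATORS_PRESS_SWITCH) * TASK_REWARD

if __name__ == "__main__":
    print("respect the off switch:", expected_utility(False))  # 7.0
    print("disable the off switch:", expected_utility(True))   # 10.0
    # The naive maximizer prefers disabling the switch; corrigibility research
    # asks how to specify agents that lack this incentive.
```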
MIRI’s priorities in these areas are summarized in their 2015 technical agenda. [4]
Value specification
Soares and Fallenstein write, “the ‘intentions’ of the operators are a complex, vague, fuzzy, context-dependent notion (Yudkowsky 2011). [44] Concretely writing out the full intentions of the operators in a machine-readable format is implausible if not impossible, even for simple tasks.” They propose that autonomous AI systems instead be designed to inductively learn the values of humans from observational data. [4]
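A minimal sketch of such an inductive approach is given below: the system maintains a posterior over candidate value functions and updates it from observed human choices, rather than being handed an explicit specification. The toy options, candidate value functions, and Boltzmann-rational choice model are assumptions for exposition, not details of MIRI’s proposal.

```python
# Toy value learning: infer which candidate value function best explains
# observed human choices. (Options, candidates, and the choice model are
# illustrative assumptions.)
import math

OPTIONS = ["clean_room", "clean_fast_but_break_vase", "do_nothing"]

# Hypothetical candidate value functions the system entertains.
CANDIDATES = {
    "speed_only":     {"clean_room": 1.0, "clean_fast_but_break_vase": 2.0, "do_nothing": 0.0},
    "speed_and_care": {"clean_room": 2.0, "clean_fast_but_break_vase": 0.0, "do_nothing": 0.0},
}

# Observed demonstrations: the human repeatedly chooses the careful option.
OBSERVATIONS = ["clean_room", "clean_room", "clean_room"]

def choice_probability(values: dict, choice: str, beta: float = 1.0) -> float:
    """Boltzmann-rational model: higher-valued options are chosen
    exponentially more often, but mistakes remain possible."""
    weights = {o: math.exp(beta * values[o]) for o in OPTIONS}
    return weights[choice] / sum(weights.values())

def posterior(observations) -> dict:
    """Uniform prior over candidates, updated on the observed choices."""
    likelihood = {
        name: math.prod(choice_probability(v, obs) for obs in observations)
        for name, v in CANDIDATES.items()
    }
    total = sum(likelihood.values())
    return {name: like / total for name, like in likelihood.items()}

if __name__ == "__main__":
    print(posterior(OBSERVATIONS))
    # "speed_and_care" receives most of the posterior mass, even though the
    # operators never wrote that value function down explicitly.
```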
Soares discusses several technical obstacles to value learning in AI: changes in the agent’s beliefs about the world may result in a mismatch between the agent’s values and its ontology; and agents that appear well-behaved in training may nonetheless arrive at incorrect inductions that human operators find difficult to identify or anticipate. [22] [45] Bostrom’s Superintelligence discusses the philosophical problems raised by value learning at greater length. [19]
See also
- Allen Institute for Artificial Intelligence
- Effective altruism
- Future of Humanity Institute
- Institute for Ethics and Emerging Technologies
- LessWrong
References
- "Transparency and Financials". Machine Intelligence Research Institute. Retrieved February 19, 2017.
- "IRS Form 990" (PDF). Machine Intelligence Research Institute. 2013. Retrieved 12 October 2015.
- "Team". Machine Intelligence Research Institute. 2016. Retrieved 4 October 2016.
- Soares, Nate; Fallenstein, Benja (2015). "Aligning Superintelligence with Human Interests: A Technical Research Agenda" (PDF). In Miller, James; Yampolskiy, Roman; Armstrong, Stuart; et al. The Technological Singularity: Managing the Journey. Springer.
- "Research Workshops". Machine Intelligence Research Institute. 2013. Retrieved 11 October 2015.
- GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved 11 October 2015.
- Hawking, Stephen; Tegmark, Max; Russell, Stuart; Wilczek, Frank (2014). "Transcending Complacency on Superintelligent Machines". The Huffington Post. Retrieved 11 October 2015.
- Basulto, Dominic (2015). "The very best ideas for preventing artificial intelligence from wrecking the planet". The Washington Post. Retrieved 11 October 2015.
- Ackerman, Elise (2008). "Annual AI conference to be held this Saturday in San Jose". San Jose Mercury News. Retrieved 11 October 2015.
- "Singularity Institute Strategic Plan" (PDF). Machine Intelligence Research Institute. 2011. Retrieved 12 October 2015.
- "Scientists Fear Day Computers Become Smarter Than Humans". Fox News Channel. Associated Press. 2007. Retrieved 12 October 2015.
- Abate, Tom (2006). "Smarter than thou?". San Francisco Chronicle. Retrieved 12 October 2015.
- Abate, Tom (2007). "Public meeting will re-examine future of artificial intelligence". San Francisco Chronicle. Retrieved 12 October 2015.
- "Singularity Summit: An Annual Conference on Science, Technology, and the Future". Machine Intelligence Research Institute. 2012. Retrieved 12 October 2015.
- Muehlhauser, Luke (2012). "July 2012 Newsletter". MIRI Blog. Retrieved 12 October 2015.
- Stiefel, Todd; Metskas, Amanda K. (22 May 2013). "Julia Galef". The Humanist Hour. Episode 083. The Humanist. Retrieved 3 March 2015.
- Chen, Angela (2014). "More Rational Resolutions". The Wall Street Journal. Retrieved 5 March 2015.
- Muehlhauser, Luke (2013). "We are now the 'Machine Intelligence Research Institute' (MIRI)". MIRI Blog. Retrieved 12 October 2015.
- Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). ISBN 0199678111.
- Muehlhauser, Luke (2015). "Musk and Gates on superintelligence and fast takeoff". Luke Muehlhauser Blog. Retrieved 12 October 2015.
- D'Orazio, Dante (2014). "Elon Musk says artificial intelligence is 'potentially more dangerous than nukes'". The Verge. Retrieved 5 October 2015.
- LaFrance, Adrienne (2015). "Building Robots With Better Morals Than Humans". The Atlantic. Retrieved 12 October 2015.
- Future of Life Institute (2015). Research priorities for robust and beneficial artificial intelligence (PDF) (Report). Retrieved 4 October 2015.
- "2015 Awardees". Future of Life Institute. 2015. Retrieved 5 October 2015.
- https://intelligence.org/2017/11/08/major-grant-open-phil/
- https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017
- Bostrom, Nick; Yudkowsky, Eliezer (2014). "The Ethics of Artificial Intelligence" (PDF). In Frankish, Keith; Ramsey, William. Cambridge Handbook of Artificial Intelligence. New York: Cambridge University Press. ISBN 978-0-521-87142-6.
- Hsu, Jeremy (2015). "Making Sure AI's Rapid Rise Is No Surprise". Discover. Retrieved 12 October 2015.
- De Looper, Christian (2015). "Research Suggests Human Brain Is 30 Times As Powerful As The Best Supercomputers". Tech Times. Retrieved 12 October 2015.
- Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Ćirković, Milan. Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.
- Good, Irving (1965). "Speculations Concerning the First Ultraintelligent Machine" (PDF). Advances in Computers. 6. Retrieved 4 October 2015.
- Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era". Whole Earth Review. Retrieved 12 October 2015.
- Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
- Yudkowsky, Eliezer (2007). "Three Major Singularity Schools". MIRI Blog. Retrieved 11 October 2015.
- Bensinger, Rob (2015). "Brooks and Searle on AGI volition and timelines". MIRI Blog. Retrieved 12 October 2015.
- Tegmark, Max (2014). "Life, Our Universe and Everything". Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (First ed.). ISBN 9780307744258.
- Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer; Armstrong, Stuart (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25-26, 2015. AAAI Publications.
- Omohundro, Steve (2008). "The Basic AI Drives" (PDF). Artificial General Intelligence 2008: Proceedings of the First AGI Conference. Amsterdam: IOS.
- LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications.
- Soares, Nate; Fallenstein, Benja (2015). "Toward Idealized Decision Theory". arXiv:1507.01986 [cs.AI].
- Soares, Nate (2015). Formalizing Two Problems of Realistic World-Models (PDF) (Technical report 2015-3). Machine Intelligence Research Institute.
- Soares, Nate; Fallenstein, Benja (2015). Questions of Reasoning under Logical Uncertainty (PDF) (Technical report 2015-1). Machine Intelligence Research Institute.
- Fallenstein, Benja; Soares, Nate (2015). Vingean Reflection: Reliable Reasoning for Self-Improving Agents (PDF) (Technical report 2015-2). Machine Intelligence Research Institute.
- Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3-6, 2011. Berlin: Springer.
- Soares, Nate (2015). The Value Learning Problem (PDF) (Technical report 2015-4). Machine Intelligence Research Institute.