The intelligence explosion is a possible outcome of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), the limits of which are unknown. An intelligence explosion would be associated with a singularity .
The notion of an “intelligence explosion” was first described by Good (1965), who speculated on the effects of superhuman machines, should they ever be invented:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. [1] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is more intelligent than humans. [2] If a superhuman intelligence were to be invented, either through the amplification of human intelligence or through artificial intelligence, it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as seed AI [3] [4] because, if it were created with engineering capabilities that matched or exceeded those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, and it is speculated that over many iterations such an AI would far surpass human cognitive abilities .
Plausibility
Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The means speculated to produce increased intelligence are numerous, and include bioengineering , genetic engineering , nootropic drugs, AI assistants, direct brain-computer interfaces and mind uploading . The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity not to occur, they would all have to fail. [5]
Hanson (1998) is skeptical of human intelligence augmentation, writing that once the “low-hanging fruit” of easy methods for increasing human intelligence has been exhausted, further improvements will become increasingly difficult to find. Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence is the most popular option among organizations attempting to advance the singularity. [citation needed]
Whether or not an intelligence explosion occurs depends on three factors. [6] The first, accelerating, factor is the new intelligence enhancements made possible by each previous improvement. Working against this, as intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement must be able to beget at least one more improvement, on average, for the singularity to continue. Finally, the laws of physics will eventually prevent any further improvements.
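The condition that each improvement must, on average, beget at least one further improvement is essentially the criticality condition of a branching process. The toy simulation below is only an illustrative sketch, not a model from the cited sources: the Poisson offspring assumption and the parameter `mean_offspring` are my own choices to show how the chain of improvements either fizzles out or compounds depending on whether that average is below or above one.

```python
import math
import random

def poisson(lam):
    """Sample a Poisson random variate (Knuth's method)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def improvement_chain(mean_offspring, generations=30, seed=0):
    """Toy branching process: each improvement enables, on average,
    `mean_offspring` further improvements in the next generation."""
    random.seed(seed)
    active = 1  # start from a single initial improvement
    history = [active]
    for _ in range(generations):
        active = sum(poisson(mean_offspring) for _ in range(active))
        history.append(active)
        if active == 0 or active > 10**6:  # died out or clearly compounding
            break
    return history

print("subcritical (0.9 further improvements each):", improvement_chain(0.9))
print("supercritical (1.2 further improvements each):", improvement_chain(1.2))
```

In expectation, the number of pending improvements after n generations is mean_offspring**n, which shrinks toward zero when the average is below one and grows geometrically when it is above one.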
There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to the algorithms used. [7] The former is predicted by Moore’s Law and forecast improvements in hardware, [8] and is comparatively similar to previous technological advances. On the other hand, most AI researchers [who?] believe that software is more important than hardware. [citation needed]
A 2017 email survey of authors with publications at the NIPS and ICML machine learning conferences asked them about the chance of an intelligence explosion. Of the respondents, 12% said it was “quite likely”, 17% said it was “likely”, 21% said it was “about even”, 24% said it was “unlikely” and 26% said it was “quite unlikely”. [9]
Speed improvements
Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. Oversimplified, [10] Moore’s Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months, followed by four months, two months, and so on towards a speed singularity. [11] An upper bound on speed may eventually be reached, although it is unclear how high this would be. Hawkins (2008), [citation needed] responding to Good, argued that the upper limit is relatively low:
Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can be. We would end up in the same place; we’d just get there a bit faster. There would be no singularity.
If, however, the upper limit were a lot higher than current human levels of intelligence, the effects of the singularity would be great enough to be indistinguishable (to humans) from a singularity with no upper limit. For example, if the speed of thought could be increased a million-fold, a subjective year would pass in 30 physical seconds. [5]
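The arithmetic behind both figures is easy to check. The short sketch below is just that arithmetic, not material from the cited sources: it sums the geometric series of ever-shorter doubling intervals and converts one subjective year at a million-fold speed-up into physical time.

```python
# Sum of the ever-shorter doubling intervals: 18 + 9 + 4.5 + ... external months.
first_doubling_months = 18.0
total_external_months = sum(first_doubling_months / 2**k for k in range(60))
print(f"External time to the 'speed singularity': ~{total_external_months:.1f} months")
# The infinite series converges to exactly 2 * 18 = 36 external months.

# A million-fold speed-up: how long one subjective year takes in physical time.
seconds_per_year = 365.25 * 24 * 3600          # ~31.6 million seconds
print(f"One subjective year: ~{seconds_per_year / 1_000_000:.1f} physical seconds")
# ~31.6 s, in line with the '30 physical seconds' quoted above.
```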
It is difficult to directly compare silicon-based hardware with neurons . But Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.
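As a back-of-the-envelope reading of Berglas’s 0.01% figure (my own arithmetic, not a claim from the source), the gap implied by that ratio is about four orders of magnitude:

```python
import math

brain_fraction = 0.0001        # 0.01% of brain volume, the figure cited above
gap = 1 / brain_fraction       # factor separating that slice from the whole brain
print(f"gap: {gap:.0f}x, about {math.log10(gap):.0f} orders of magnitude")
```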
Algorithm improvements
Some intelligence technologies, like seed AI, [3] [4] may also have the potential to make themselves more efficient, not just faster, by modifying their source code . These improvements would make further improvements possible, which would make further improvements possible, and so on.
The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately. [citation needed] An AI rewriting its own source code, however, could do so while contained in an AI box .
Second, as with Vernor Vinge’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again. [12]
There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might not be invariant under self-improvement, potentially causing the AI to optimize for something other than what was originally intended. [13] [14] Secondly, AIs could compete for the same scarce resources humankind uses to survive. [15] [16]
While not actively malicious, there is no reason to think that AIs would actively promote human goals unless they could be programmed as such, and if not, they might use the resources currently used to support mankind to promote their own goals instead, causing human extinction. [17] [18] [19]
Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovation is more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, an intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained. [20] An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called “computing overhang.” [21]
Impact
Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution . The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world’s economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis. [22]
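The doubling times quoted above can be restated as equivalent annual growth rates, which makes the size of each jump explicit. The sketch below is my own arithmetic on the figures given in the text, using the identity growth = 2^(1/T) - 1 for a doubling time of T years.

```python
def annual_growth_rate(doubling_time_years):
    """Annual growth rate implied by a doubling time of T years: 2**(1/T) - 1."""
    return 2 ** (1 / doubling_time_years) - 1

doubling_times = {
    "Hunter-gatherer economy (250,000-year doubling)": 250_000,
    "Agricultural economy (900-year doubling)": 900,
    "Industrial economy (15-year doubling)": 15,
    "Hanson's quarterly doubling": 0.25,
    "Hanson's weekly doubling": 7 / 365.25,
}

for era, years in doubling_times.items():
    print(f"{era}: {annual_growth_rate(years) * 100:.3g}% per year")
```

The output runs from a few ten-thousandths of a percent per year for the hunter-gatherer economy to roughly 5% per year for the industrial economy, and to astronomically larger rates for quarterly or weekly doubling.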
Superintelligence
A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to the form or degree of intelligence possessed by such an agent.
Technology forecasters and researchers disagree about when, or whether, human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine these two possibilities, suggesting that humans are likely to interface with computers , or upload their minds to computers , in a way that enables substantial intelligence amplification.
Existential risk
Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans, and that there is little reason to expect an arbitrary optimization process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. Nick Bostrom gives the whimsical example of an AI created with the sole goal of manufacturing paper clips, which, upon achieving superintelligence, decides to convert the entire planet into a paper clip manufacturing facility. [23] [24] [25] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments. [26] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources, [15] [27] and humans would be powerless to stop them. [28] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity. [19]
Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:
When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimization process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not lead it to destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. [29]
Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that the first real AI would have a head start on self-improvement and, if friendly, could prevent unfriendly AIs from developing, as well as providing enormous benefits to mankind. [18]
Bill Hibbard (2014) proposes an AI design that avoids several dangers including self-delusion, [30] unintended instrumental actions, [13] [31] and corruption of the reward generator. [31] He also discusses the social impacts of AI [32] and testing AI. [33] His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.
One hypothetical approach to attempting to control an artificial intelligence is an AI box , where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. However, a sufficiently intelligent AI may simply be able to escape by outsmarting its less intelligent human captors. [34] [35] [36]
Stephen Hawking said in 2014 that “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Hawking believed that in the coming decades, AI could offer “incalculable benefits and risks” such as “technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.” Hawking believed more should be done to prepare for the singularity: [37]
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here – we’ll leave the lights on”? Probably not – but this is more or less what is happening with AI.
Hard vs. soft takeoff
In a hard takeoff scenario, an AGI rapidly self-improves, “taking control of the world” (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI’s goals. In a soft takeoff scenario, the AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI’s development. [38] [39]
Ramez Naam argues against a hard takeoff by pointing out that we already see recursive self-improvement by superintelligences, such as corporations. For instance, Intel has “the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!” However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore’s law . [40] Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that “creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1.” [41]
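Naam’s point about super-linear complexity can be made concrete with a toy recursion. This is only an illustrative sketch of the argument, not Naam’s own model: the exponent `alpha` and the update rule are assumptions chosen for the example. If the effort needed for the next unit of intelligence grows faster than the intelligence doing the designing, gains per step shrink instead of compounding.

```python
def self_improvement(alpha, steps=15):
    """Toy model: each step, the AI spends design effort equal to its current
    intelligence I, while the cost of one unit of further intelligence scales
    like I**alpha. The gain per step is therefore I / I**alpha = I**(1 - alpha)."""
    intelligence = 1.0
    trajectory = [intelligence]
    for _ in range(steps):
        intelligence += intelligence ** (1 - alpha)
        trajectory.append(round(intelligence, 2))
    return trajectory

print("sub-linear difficulty   (alpha=0.5):", self_improvement(0.5))  # accelerating gains
print("linear difficulty       (alpha=1.0):", self_improvement(1.0))  # steady, non-explosive growth
print("super-linear difficulty (alpha=2.0):", self_improvement(2.0))  # diminishing returns
```

With alpha well below 1 the gains compound toward an explosion; with alpha above 1 the same recursive process produces a curve that flattens out, which is the soft-takeoff intuition.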
J. Storrs Hall believes that “many of the more commonly seen scenarios for overnight hard takeoff are circular – they seem to assume hyperhuman capabilities at the starting point of the self-improvement process” in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world. [42] Ben Goertzel agrees with Hall’s suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI’s talents might inspire companies and governments to spread its software throughout society. Goertzel is skeptical of a very hard, 5-minute takeoff but thinks a takeoff from human to superhuman level on the order of 5 years is reasonable. He calls this a “semihard takeoff”. [43]
Max More disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, because they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is not clear why they would do better in a discontinuous way than existing human cognitive scientists at producing superhuman intelligence, although the rate of progress would increase. More also argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. “The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years.” [44]
See also
- Accelerating change
- Artificial consciousness
- Flynn effect
- Human intelligence § Improving intelligence
- Neuroenhancement
- Outline of transhumanism
- Postbiological evolution
- Robot learning
References
- Ehrlich, Paul. The Dominant Animal: Human Evolution and the Environment.
- “Superbrains born of silicon will change everything.” Archived August 1, 2010, at the Wayback Machine.
- Yampolskiy, Roman V. “Analysis of Types of Self-Improving Software.” Artificial General Intelligence. Springer International Publishing, 2015. 384–393.
- Yudkowsky, Eliezer. General Intelligence and Seed AI: Creating Complete Minds Capable of Open-Ended Self-Improvement, 2001.
- “What is the Singularity? | Singularity Institute for Artificial Intelligence”. Singinst.org. Archived from the original on 2011-09-08. Retrieved 2011-09-09.
- Chalmers, David. John Locke Lecture, 10 May, Exam Schools, Oxford: a philosophical analysis of the possibility of a technological singularity or “intelligence explosion” resulting from recursively self-improving AI. Archived 2013-01-15 at the Wayback Machine.
- Chalmers, David J. “The Singularity: A Philosophical Analysis.”
- “ITRS” (PDF). Archived from the original (PDF) on 2011-09-29. Retrieved 2011-09-09.
- Grace, Katja; Salvatier, John; Dafoe, Allan; Zhang, Baobao; Evans, Owain (24 May 2017). “When Will AI Exceed Human Performance? Evidence from AI Experts”. arXiv:1705.08807 [cs.AI].
- Siracusa, John (2009-08-31). “Mac OS X 10.6 Snow Leopard: the Ars Technica review”. Arstechnica.com. Retrieved 2011-09-09.
- Yudkowsky, Eliezer (1996). “Staring into the Singularity”.
- Yudkowsky, Eliezer S. “Power of Intelligence”. Yudkowsky. Retrieved 2011-09-09.
- Omohundro, Stephen M. “The Basic AI Drives.” Artificial General Intelligence, 2008: Proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.
- “Artificial General Intelligence: Now Is the Time”. KurzweilAI. Retrieved 2011-09-09.
- Omohundro, Stephen M. “The Nature of Self-Improving Artificial Intelligence.” Self-Aware Systems. 21 Jan. 2008. Web. 7 Jan. 2010.
- Barrat, James (2013). “6, ‘Four Basic Drives’”. Our Final Invention (First ed.). New York: St. Martin’s Press. pp. 78–98. ISBN 978-0312622374.
- “Max More and Ray Kurzweil on the Singularity”. KurzweilAI. Retrieved 2011-09-09.
- “Concise Summary | Singularity Institute for Artificial Intelligence”. Singinst.org. Retrieved 2011-09-09.
- Bostrom, Nick. “The Future of Human Evolution.” Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy, pp. 339–371, 2004, Ria University Press.
- Shulman, Carl; Sandberg, Anders (2010). Mainzer, Klaus, ed. “Implications of a Software-Limited Singularity” (PDF). ECAP10: VIII European Conference on Computing and Philosophy. Retrieved 17 May 2014.
- Muehlhauser, Luke; Salamon, Anna (2012). “Intelligence Explosion: Evidence and Import”. In Amnon Eden; Johnny Søraker; James H. Moor; Eric Steinhart. Singularity Hypotheses: A Scientific and Philosophical Assessment (PDF). Springer.
- Hanson, Robin. “Economics of the Singularity”. IEEE Spectrum Special Report: The Singularity. Retrieved 2008-09-11. See also “Long-Term Growth as a Sequence of Exponential Modes”.
- Bostrom, Nick. “Ethical Issues in Advanced Artificial Intelligence”. In Cognitive, Emotional and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17.
- Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk”. Archived 2012-06-11 at the Wayback Machine. Draft for a publication in Global Catastrophic Risk from August 31, 2006, retrieved July 18, 2011 (PDF file).
- “The Stamp Collecting Device”, Nick Hay.
- “Why we should fear the Paperclipper”, 2011-02-14 entry of Sandberg’s blog “Andart”.
- Omohundro, Stephen M. “The Basic AI Drives.” Artificial General Intelligence, 2008: Proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.
- de Garis, Hugo. “The Coming Artilect War”, Forbes.com, 22 June 2009.
- Yudkowsky, Eliezer S. “Coherent Extrapolated Volition”, May 2004. Archived 2010-08-15 at the Wayback Machine.
- Hibbard, Bill (2012). “Model-Based Utility Functions”. Journal of Artificial General Intelligence. 3: 1. arXiv:1111.3934. Bibcode:2012JAGI....3....1H. doi:10.2478/v10229-011-0013-5.
- Hibbard, Bill. “Avoiding Unintended AI Behaviors.” 2012 Proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. This paper won the Machine Intelligence Research Institute’s 2012 Turing Prize for the Best AGI Safety Paper.
- Hibbard, Bill (2008). “The Technology of Mind and a New Social Contract”. Journal of Evolution and Technology. 17.
- Hibbard, Bill. “Decision Support for Safe AI Design.” 2012 Proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.
- Yudkowsky, Eliezer (2008). “Artificial Intelligence as a Positive and Negative Factor in Global Risk” (PDF). In Bostrom, Nick; Cirkovic, Milan, eds. Global Catastrophic Risks. Oxford University Press: 303. Bibcode:2008gcr..book..303Y. ISBN 978-0-19-857050-9. Archived from the original (PDF) on 2008-08-07.
- Berglas, Anthony. “Artificial Intelligence Will Kill Our Grandchildren (Singularity).”
- Chalmers, David J. “The Singularity: A Philosophical Analysis.”
- Hawking, Stephen (1 May 2014). “Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’”. The Independent. Retrieved May 5, 2014.
- Bugaj, Stephan Vladimir, and Ben Goertzel. “Five ethical imperatives and their implications for human-AGI interaction.” Dynamical Psychology (2007).
- Sotala, Kaj, and Roman V. Yampolskiy. “Responses to catastrophic AGI risk: a survey.” Physica Scripta 90.1 (2014): 018001.
- Naam, Ramez (2014). “The Singularity Is Further Than It Appears”. Retrieved 16 May 2014.
- Naam, Ramez (2014). “Why AIs Won’t Ascend in the Blink of an Eye - Some Math”. Retrieved 16 May 2014.
- Hall, J. Storrs (2008). “Engineering Utopia” (PDF). Artificial General Intelligence, 2008: Proceedings of the First AGI Conference: 460–467. Retrieved 16 May 2014.
- Goertzel, Ben (26 Sep 2014). “Superintelligence - Semi-hard Takeoff Scenarios”. h+ Magazine. Retrieved 25 October 2014.
- More, Max. “Singularity Meets Economy”. Retrieved 10 November 2014.