Superintelligence

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants), whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and be associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”. [1] The chess program Fritz falls short of superintelligence, even though it is much better than humans at chess, because Fritz cannot outperform humans in other tasks. [2] Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (see the Chinese room argument) or first-person consciousness (see the hard problem of consciousness).

Technological forecasters and researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to become much more powerful than humans, either as a single being or as a new species, and to displace them. [3]

A number of scientists and forecasters argue for prioritizing early research into the potential benefits and risks of human and machine cognitive enhancement , because of the potential social impact of such technologies. [4]

Feasibility of artificial superintelligence

[Figure: Progress in machine classification of images. The error rate of AI by year; the red line shows the error rate of a trained human.]

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks. [5]

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulable with synthetic materials. [6] He also notes that human intelligence was able to evolve biologically, making it more likely that human engineers will be able to recapitulate this invention; evolutionary algorithms in particular should be able to produce human-level AI. [7] Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies. [8]

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself – a feature called “recursive self-improvement”. The improved software would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and might be able to invent or discover almost anything.
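To make the feedback dynamics concrete, the following is an illustrative toy model (an assumption for exposition, not a model given in this article): let I(t) denote a system’s capability and suppose the rate of self-improvement grows with current capability.

```latex
% Toy model of recursive self-improvement (illustrative assumption).
% Linear returns: capability grows exponentially but never diverges.
\frac{dI}{dt} = kI \quad\Longrightarrow\quad I(t) = I_0 e^{kt}
% Superlinear returns: capability diverges in finite time, one way to
% formalize an "intelligence explosion".
\frac{dI}{dt} = kI^2 \quad\Longrightarrow\quad I(t) = \frac{I_0}{1 - I_0 k t},
\qquad I(t)\to\infty \ \text{as}\ t\to\tfrac{1}{I_0 k}
```

On this reading, whether recursive self-improvement yields steady exponential progress or a finite-time “explosion” turns on whether the returns to added intelligence are linear or superlinear.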

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” [9] Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, especially those that require long strings of actions.
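For concreteness, the quoted ratios can be checked directly (the value of 3 × 10⁸ m/s for the speed of light is a standard constant, not taken from the text above):

```latex
% Clock-speed ratio: modern microprocessor vs. peak neuron firing rate
\frac{2 \times 10^{9}\ \text{Hz}}{200\ \text{Hz}} = 10^{7}
\qquad \text{(seven orders of magnitude)}
% Signal-speed ratio: speed of light vs. fastest axonal conduction
\frac{3 \times 10^{8}\ \text{m/s}}{120\ \text{m/s}} \approx 2.5 \times 10^{6}
```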

Another advantage of computers is modularity: their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, as many supercomputers are. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed. [10] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees. [11]

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios. [12]

Feasibility of biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. [13] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. [14] A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence. [15]
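The quoted gains follow from a simple order-statistics model: selecting the best of n embryos yields an expected gain of roughly σ·E[max of n standard normal draws], where σ is the spread of genotypic IQ among sibling embryos. Below is a minimal Monte Carlo sketch assuming σ ≈ 7.5 IQ points, a value consistent with the figures quoted above but not stated in this article.

```python
import numpy as np

# Minimal Monte Carlo sketch of the order-statistics model behind the
# quoted embryo-selection figures. ASSUMPTION (not from this article):
# genotypic IQ among sibling embryos is normally distributed with a
# standard deviation of about 7.5 points; selecting the best of n
# embryos then gains sigma * E[max of n standard normal draws].

rng = np.random.default_rng(seed=0)
SIGMA_GENETIC = 7.5   # assumed SD (IQ points) among sibling embryos
TRIALS = 10_000       # Monte Carlo repetitions per batch size

for n in (2, 10, 100, 1000):
    batches = rng.standard_normal((TRIALS, n))   # n embryos per trial
    gain = SIGMA_GENETIC * batches.max(axis=1).mean()
    print(f"best of {n:>4} embryos: expected gain ~ {gain:.1f} IQ points")

# Prints roughly 4.2 and 24.3 points for n = 2 and n = 1000, matching
# the "4 points (1 of 2)" and "24.3 points (1 of 1000)" figures above.
```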

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capabilities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. [16]

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem. [17]

Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone. [18]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the respondents who said the milestone would never be reached at the given confidence level. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence. [19]

Design considerations

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals: [20]

  • The coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge.
  • The moral rightness (MR) proposal is that it should value moral rightness.
  • The moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).

Responding to Bostrom, Santos-Lang raised concerns that developers may attempt to start with a single kind of superintelligence. [21]

Danger to human survival and the AI control problem

Main articles: Existential risk from artificial general intelligence and AI control problem
Further information: Friendly artificial intelligence

Learning computers that rapidly become superintelligent may take unforeseen actions, or robots might out-compete humanity (one potential technological singularity scenario). [22] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans. [23]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. [24]

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” [25]

This presents the AI control problem: how to build an intelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right “the first time” is that a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it. Potential design strategies include “capability control” (limiting an AI’s ability to influence the world) and “motivational control” (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence. [26]

See also

  • AI takeover
  • Artificial brain
  • Arms race
  • Effective altruism
  • Ethics of artificial intelligence
  • Existential risk
  • Future of Humanity Institute
  • Future of robotics
  • Global catastrophic risk
  • Intelligent agent
  • Machine ethics
  • Machine Intelligence Research Institute
  • Machine learning
  • Outline of artificial intelligence
  • Posthumanism
  • Self-replication
  • Self-replicating machine
  • Superintelligence: Paths, Dangers, Strategies

References

  1. Bostrom, Nick (2006). “How long before superintelligence?”. Linguistic and Philosophical Investigations. 5 (1): 11–30.
  2. Bostrom 2014, p. 22.
  3. Warwick, Kevin (2004). March of the Machines: The Breakthrough in Artificial Intelligence. University of Illinois Press. ISBN 0-252-07223-5.
  4. Legg 2008, pp. 135–137.
  5. Chalmers 2010, p. 7.
  6. Chalmers 2010, pp. 7–9.
  7. Chalmers 2010, pp. 10–11.
  8. Chalmers 2010, pp. 11–13.
  9. Bostrom 2014, p. 59.
  10. Yudkowsky, Eliezer (2013). Intelligence Explosion Microeconomics (PDF). Technical report 2013-1. Machine Intelligence Research Institute. p. 35.
  11. Bostrom 2014, pp. 56–57.
  12. Bostrom 2014, pp. 52, 59–61.
  13. Sagan, Carl (1977). The Dragons of Eden. Random House.
  14. Bostrom 2014, pp. 37–39.
  15. Bostrom 2014, p. 39.
  16. Bostrom 2014, pp. 48–49.
  17. Bostrom 2014, pp. 36–37, 42, 47.
  18. Maker, Meg Houston (July 13, 2006). “AI@50: First Poll”. Archived from the original on 2014-05-13.
  19. Müller & Bostrom 2016, pp. 3–4, 6, 9–12.
  20. Bostrom 2014, pp. 209–221.
  21. Santos-Lang 2014, pp. 16–19.
  22. Joy, Bill. “Why the Future Doesn’t Need Us”. Wired. See also technological singularity; Bostrom, Nick (2002). “Ethical Issues in Advanced Artificial Intelligence”.
  23. Muehlhauser, Luke, and Louie Helm. 2012. “Intelligence Explosion and Machine Ethics.” In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
  24. Bostrom, Nick. 2003. “Ethical Issues in Advanced Artificial Intelligence.” In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.
  25. Yudkowsky, Eliezer (2008). “Artificial Intelligence as a Positive and Negative Factor in Global Risk”. In Global Catastrophic Risks.
  26. Hibbard 2002, pp. 155–163.
