Future of Life Institute

The Future of Life Institute (FLI) is a volunteer-run research and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its board of advisors includes cosmologist Stephen Hawking and entrepreneur Elon Musk.

Background

The FLI mission is to catalyze and support research and initiatives for the development of new technologies. [1] [2] FLI is particularly focused on the potential risks to humanity from the development of human-level artificial intelligence. [3]

The Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, Harvard graduate student and IMO medalist Viktoriya Krakovna, BU graduate student Meia Chita-Tegmark (Tegmark’s wife), and UCSC physicist Anthony Aguirre. As of 2017, the Institute’s 14-person Scientific Advisory Board includes computer scientist Stuart J. Russell, biologist George Church, cosmologists Stephen Hawking and Saul Perlmutter, physicist Frank Wilczek, entrepreneur Elon Musk, and actors Alan Alda and Morgan Freeman. [4] [5] [6] FLI operates in a grassroots style, recruiting volunteers and young scholars from the local community in the Boston area. [3]

Events

On May 24, 2014, FLI held a panel discussion on “The Future of Technology: Benefits and Risks” at MIT, moderated by Alan Alda. [3] [7] [8] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek, and Skype co-founder Jaan Tallinn. [9] [10] The discussion covered a broad range of topics, from the future of bioengineering and personal genetics to autonomous weapons, AI ethics, and the Singularity. [3] [11] [12]

From January 2 through January 5, 2015, the Future of Life Institute organized and hosted the “The Future of AI: Opportunities and Challenges” conference, which brought together the world’s leading AI builders from academia and industry to engage with each other and with experts in economics, law, and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI. [13] At the conference, the Institute circulated an open letter on AI safety, which was subsequently signed by Stephen Hawking, Elon Musk, and many artificial intelligence experts. [14]

Global research program

On January 15, 2015, the Future of Life Institute announced that Elon Musk had donated $10 million to fund a global AI research endeavor. [15] [16] [17] On January 22, 2015, the FLI released a request for proposals from researchers at non-profit institutions. [18] Unlike typical AI research, this program is focused on making AI safer or more beneficial to society, rather than just more powerful. [19] On July 1, 2015, a total of $7 million was awarded to 37 research projects. [20]

In the media

  • “United States and Allies Protest UN Talks to Ban Nuclear Weapons” in The New York Times [21]
  • “Is Artificial Intelligence a Threat?” in The Chronicle of Higher Education, including interviews with FLI founders Max Tegmark, Jaan Tallinn, and Viktoriya Krakovna. [3]
  • “What Would the End of Humanity Mean for Me?”, an interview with Max Tegmark on the FLI in The Atlantic. [4]
  • “Transcending Complacency on Superintelligent Machines”, an op-ed in the Huffington Post by Max Tegmark , Stephen Hawking , Frank Wilczek and Stuart J. Russell on the movie Transcendence . [1]
  • “Top 23 One-liners From a Panel Discussion That Gave Me a Crazy Idea” in Diana Crow Science. [11]
  • “An Open Letter to Everyone Tricked Into Fearing Artificial Intelligence”, includes “Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter” by the FLI [22]
  • Michael del Castillo (15 January 2015). “Startup branding does not hide the apology of Elon Musk” . Upstart Business Journal .
  • Edd Gent (21 January 2015). “Ex Machina movie asks: Is AI research in safe hands?” . Engineering & Technology .
  • “Creating Artificial Intelligence” on PBS [23]

See also

  • Future of Humanity Institute
  • Center for the Study of Existential Risk
  • Global catastrophic risk
  • Leverhulme Center for the Future of Intelligence
  • Machine Intelligence Research Institute
  • Vasili Arkhipov, “the man who saved the world”

References

  1. “Transcending Complacency on Superintelligent Machines”. Huffington Post. April 19, 2014. Retrieved 26 June 2014.
  2. “CSER News: ‘A new existential risk reduction organization has launched in Cambridge, Massachusetts’”. Center for the Study of Existential Risk. 31 May 2014. Retrieved 19 June 2014.
  3. “Is Artificial Intelligence a Threat?”. Chronicle of Higher Education. Retrieved 18 Sep 2014.
  4. “What Would the End of Humanity Mean for Me?”. The Atlantic. May 9, 2014. Retrieved 11 June 2014.
  5. “Who we are”. Future of Life Institute. Retrieved 11 June 2014.
  6. “Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world”. Salon. Retrieved 8 Oct 2014.
  7. “Events”. Future of Life Institute. Retrieved 11 June 2014.
  8. “Machine Intelligence Research Institute – June 2014 Newsletter”. Retrieved 19 June 2014.
  9. “FHI News: ‘Future of Life Institute hosts opening event at MIT’”. Future of Humanity Institute. May 20, 2014. Retrieved 19 June 2014.
  10. “The Future of Technology: Benefits and Risks”. Personal Genetics Education Project. May 9, 2014. Retrieved 19 June 2014.
  11. “Top 23 One-Liners From a Panel Discussion That Gave Me a Crazy Idea”. Diana Crow Science. May 29, 2014. Retrieved 11 June 2014.
  12. “The Future of Technology: Benefits and Risks”. MIT Tech TV. May 24, 2014. Retrieved 11 June 2014.
  13. “The Future of AI: Opportunities and Challenges”. Future of Life Institute. Retrieved 19 January 2015.
  14. “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”. Future of Life Institute.
  15. “Elon Musk donates $10M to keep AI beneficial”. Future of Life Institute. 15 January 2015. Archived from the original on 2015-04-24.
  16. “Elon Musk Donates $10M to Artificial Intelligence Research”. SlashGear. January 15, 2015.
  17. “Elon Musk is Donating $10M of His Own Money to Artificial Intelligence Research”. Fast Company. January 15, 2015.
  18. “An International Request for Proposals – Timeline”. Future of Life Institute. January 22, 2015.
  19. “2015 International Grants Competition”. Future of Life Institute.
  20. “New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial”. Future of Life Institute. Archived from the original on 2015-07-17.
  21. The New York Times. 27 March 2017. https://www.nytimes.com/2017/03/27/world/americas/un-nuclear-weapons-talks.html?_r=0