AI risk

1,680 bytes added, 06:15, 22 February 2016
* Much software having a "black box" nature, which means that its behaviour in new circumstances is difficult to predict.
* AI components being available as open source, and utilised by third parties in ways their designers didn't intend (or foresee).
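The "black box" point can be illustrated with a toy sketch (an illustration of my own, not taken from the article): even a trivially simple learned model will answer confidently on inputs far from anything it was trained on, so its behaviour in new circumstances is effectively arbitrary.

```python
# Hedged toy sketch (names and data are illustrative): a learned model that
# behaves sensibly near its training data can behave arbitrarily far from it.

def train_nearest_neighbour(examples):
    """examples: list of (feature, label) pairs. Returns a predict function."""
    def predict(x):
        # Always answers confidently, however far x is from anything seen.
        nearest = min(examples, key=lambda ex: abs(ex[0] - x))
        return nearest[1]
    return predict

predict = train_nearest_neighbour([(1.0, "safe"), (2.0, "safe"), (3.0, "unsafe")])
print(predict(2.1))     # near the training data: a sensible answer ("safe")
print(predict(1000.0))  # novel circumstance: still answers, but the label
                        # ("unsafe") reflects nothing about the input
```

The model never signals "I don't know"; the failure only shows up when someone feeds it an input its designers never anticipated, which is the sense in which its behaviour in new circumstances is hard to predict.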
== Risks from AI even in the absence of an intelligence explosion ==
Popular accounts of AI risk often focus on two factors thought to be preconditions for any major harm from AI:
* The AI becomes self-aware
* The AI undergoes an intelligence explosion
However, [[Viktoriya Krakovna]] points out that risks can arise without either of these factors occurring.<ref>[ Risks From General Artificial Intelligence Without an Intelligence Explosion]</ref> Krakovna urges AI risk analysts to pay attention to factors such as:
# Human incentives: Researchers, companies and governments have professional and economic incentives to build AI that is as powerful as possible, as quickly as possible.
# Convergent instrumental goals: Sufficiently advanced AI systems would by default develop drives like self-preservation, resource acquisition, and preservation of their objective functions, regardless of what that objective function is or how the system is designed.
# Unintended consequences: As in the stories of the Sorcerer's Apprentice and King Midas, you get what you asked for, but not what you wanted.
# Value learning is hard: Specifying common sense and ethics in computer code is no easy feat.
# Value learning is insufficient: Even an AI system with a perfect understanding of human values and goals would not necessarily adopt them.
# Containment is hard: A general AI system with access to the internet would be able to hack thousands of computers and copy itself onto them, thus becoming difficult or impossible to shut down – this is a serious problem even with present-day computer viruses.
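The convergent-instrumental-goals point can be made concrete with a minimal sketch (my own illustration, with assumed probabilities, not from Krakovna's analysis): for any terminal goal with positive value, an expected-utility maximiser prefers a plan that first acquires resources, simply because resources raise its probability of success.

```python
# Toy expected-utility comparison. The probabilities are assumptions chosen
# for illustration; the point is that the preferred plan does not depend on
# what the goal actually is, only on resources improving the odds.

def best_plan(goal_value, p_direct=0.5, p_with_resources=0.9):
    """Return the plan with the highest expected utility for this goal."""
    plans = {
        "pursue goal directly": p_direct * goal_value,
        "acquire resources, then pursue goal": p_with_resources * goal_value,
    }
    return max(plans, key=plans.get)

# The same plan wins whatever the goal is worth:
for value in (1, 100, 10**6):
    print(best_plan(value))  # "acquire resources, then pursue goal" each time
```

Because the resource-acquiring plan dominates for every positive goal value, resource acquisition emerges as an instrumental goal independent of the system's terminal objective, which is the sense in which such drives are "convergent".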
== Pathways to dangerous AIs ==
