AI risk
AI risk

==Risks from AI even in the absence of an intelligence explosion==
* Specification errors: the designers did not correctly anticipate all the circumstances the system would face (including unanticipated interactions between different modules)
* Security errors: the software is compromised and used for purposes other than its original design
* AI control problem: the AI pursues its objectives in ways its operators cannot correct or shut down
Popular accounts of AI risk often focus on two factors thought to be preconditions for any major harm from AI:
* The AI becomes [[self-aware]]
* The AI undergoes an intelligence explosion

The potential for harm is compounded by errors that can arise:
* Independently – pre-deployment
* Independently – post-deployment
 
==AI Risk Advocates==
One of the most notable advocates of taking AI risk seriously is [[Elon Musk]]; his concerns are said to be among the reasons behind his co-founding of [[OpenAI]].<ref>https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence</ref><ref>http://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html</ref>
 
[[Category:Existential risks]]