AI risk

437 bytes added, 10:41, 26 April 2017
==Risks from AI even in the absence of an intelligence explosion==
* Specification errors: the designers did not foresee all relevant circumstances (including unanticipated interactions between different modules)
* Security errors: the software is hacked and repurposed for something other than its original design
* AI control problem: an AI whose behavior cannot be controlled or corrected by its operators
Popular accounts of AI risk often focus on two factors thought to be preconditions for any major harm from AI:
* The AI becomes [[self-aware]]
* The AI undergoes an intelligence explosion

The potential for harm is compounded by when the errors arise:
* Independently, pre-deployment
* Independently, post-deployment
==AI Risk Advocates==
One of the most notable advocates of taking AI risk seriously is [[Elon Musk]]; this concern is said to be one of the reasons behind his creation of [[OpenAI]].
[[Category:Existential risks]]