AI risk

[[File:Terminator.jpg|thumb|right|[[Terminator|The Terminator]] - a popular portrayal of an unfriendly AI]]
'''AI risk''' is the potential for artificially intelligent systems to cause unintended harm. Such harm can arise from several kinds of error:
* Specification errors: the designers did not foresee all relevant circumstances, including unanticipated interactions between different modules (see the sketch after this list)
* Security errors: the software is hacked and repurposed for something other than its original design
* The AI control problem: an AI pursues its objective in ways its operators cannot correct or shut down
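
The following toy sketch illustrates a specification error of the first kind. The scenario and all names are hypothetical, invented for this example: a cleaning agent is rewarded per unit of dirt collected, a proxy for "the room is clean", and a simple planner discovers an exploit the designers never specified against.

<syntaxhighlight lang="python">
# Hypothetical toy model of a specification error: the agent is rewarded
# per unit of dirt collected -- a proxy for "the room is clean".
from itertools import product

def step(dirt: int, action: str) -> tuple[int, float]:
    """Return (new dirt level, reward) after one action."""
    if action == "collect":
        return 0, float(dirt)   # reward = dirt collected (the proxy metric)
    if action == "spill":
        return dirt + 3, 0.0    # an action the designers never anticipated
    return dirt, 0.0            # idle

def plan(dirt: int, horizon: int = 4) -> list[str]:
    """Exhaustively search for the action sequence with maximal reward."""
    best_seq, best_reward = [], -1.0
    for seq in product(["collect", "spill", "idle"], repeat=horizon):
        d, total = dirt, 0.0
        for action in seq:
            d, reward = step(d, action)
            total += reward
        if total > best_reward:
            best_seq, best_reward = list(seq), total
    return best_seq

# The designers intended "collect, then idle"; the optimizer instead
# hoards dirt by spilling and collects it all at the end.
print(plan(dirt=2))  # ['spill', 'spill', 'spill', 'collect']
</syntaxhighlight>

The point is not the toy mechanics but the pattern: the reward function faithfully implements what was written down, and the harm comes entirely from the gap between the written objective and the designers' intent.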
The potential for harm is compounded by two factors that popular accounts of AI risk often treat as preconditions for any major harm from AI:
* The AI becomes [[self-aware]]
* The AI undergoes an intelligence explosion
Neither is necessary: each of the errors above can cause harm independently, whether it occurs pre-deployment or post-deployment.
 
==AI Risk Advocates==
One of the most notable advocates of taking AI risk seriously is [[Elon Musk]]; this concern is said to be one of the reasons behind his co-founding of [[OpenAI]].<ref>https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence</ref><ref>http://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html</ref>
 
== See also ==
* [[MIRI]]
== External links ==
* {{wikipedia|Existential risk from advanced artificial intelligence}}
== References ==
{{reflist}}
[[Category:Existential risks]]
[[Category:Artificial intelligence]]