AI risk

==Risks from AI even in the absence of an intelligence explosion==
Popular accounts of AI risk often focus on two factors thought to be preconditions for any major harm from AI:
* The AI becomes [[self-aware]]
* The AI undergoes an intelligence explosion
==AI risk advocates==
One of the most notable advocates of taking AI risk seriously is [[Elon Musk]]; these concerns are said to be among the reasons behind his co-founding of [[OpenAI]].<ref>https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence</ref><ref>http://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html</ref>
==Criticism of AI risk advocacy==
 
{{Update}}
[[Category:Existential risks]]