AI risk

AI risk is the potential for artificially intelligent systems to cause unintended harm.

Sources of harm from AI

AI harm might arise from:

  • Bugs: the software behaves differently from its specification
  • Specification errors: the designers did not foresee all circumstances properly, including unanticipated interactions between different modules (see the sketch after this list)
  • Security errors: the software is hacked and put to purposes other than its original design
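
To make the first two categories concrete, here is a minimal Python sketch (the reward functions and the "cleaning agent" scenario are invented for illustration): a bug is code that fails its specification, while a specification error is code that meets a specification which is itself unsafe.

 # Specification: reward a cleaning agent one point per unit of dust removed.
 def reward_buggy(dust_removed: int) -> int:
     # Bug: the implementation deviates from the specification (off-by-one).
     return dust_removed - 1

 def reward_as_specified(dust_removed: int) -> int:
     # Faithful to the specification -- yet the specification itself is
     # flawed: an agent can maximise "dust removed" by dumping dust back
     # out and re-collecting it, a loophole the designers never foresaw.
     return dust_removed

 print(reward_buggy(10))         # 9  -- fails the specification: a bug
 print(reward_as_specified(10))  # 10 -- meets the specification: still unsafe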

The potential for harm is compounded by:

  • Fierce competitive pressures, which may lead some designers to cut corners
  • Much software having a "black box" nature, meaning that its behaviour in new circumstances is difficult to predict (illustrated below)
  • AI components being available as open source and utilised by third parties in ways their designers did not intend or foresee.
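
The "black box" point can be illustrated with a toy model (the data and the polynomial fit below are assumed for the example, not drawn from any real system): a model that fits its training range well can still behave erratically on inputs it has never seen.

 import numpy as np

 rng = np.random.default_rng(0)
 x_train = np.linspace(0.0, 1.0, 20)
 y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.05, 20)

 # A high-degree polynomial fit: accurate on the training range,
 # unpredictable outside it.
 model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

 print(model(0.5))  # close to sin(pi) = 0: a familiar input
 print(model(1.5))  # far outside the training range: the output is
                    # large and essentially arbitrary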

Pathways to dangerous AIs

As classified by Roman Yampolskiy, pathways to dangerous AIs include the following (encoded in the sketch after the list):[1]

  • On Purpose – Pre-Deployment
  • On Purpose – Post-Deployment
  • By Mistake – Pre-Deployment
  • By Mistake – Post-Deployment
  • Environment – Pre-Deployment
  • Environment – Post-Deployment
  • Independently – Pre-Deployment
  • Independently – Post-Deployment
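
The eight pathways are the product of two axes: what causes the danger, and whether it arises before or after deployment. One way to encode that structure in Python (a representation chosen here for illustration, not Yampolskiy's own code) is:

 from enum import Enum
 from itertools import product

 class Cause(Enum):
     ON_PURPOSE = "On Purpose"
     BY_MISTAKE = "By Mistake"
     ENVIRONMENT = "Environment"
     INDEPENDENTLY = "Independently"

 class Stage(Enum):
     PRE_DEPLOYMENT = "Pre-Deployment"
     POST_DEPLOYMENT = "Post-Deployment"

 # The eight pathways are the Cartesian product of the two axes.
 for cause, stage in product(Cause, Stage):
     print(f"{cause.value} – {stage.value}")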

References

  1. Roman V. Yampolskiy. "Taxonomy of Pathways to Dangerous Artificial Intelligence". AAAI Workshop on AI, Ethics, and Society, 2016.