People for the Ethical Treatment of Reinforcement Learners

From H+Pedia
'''People for the Ethical Treatment of Reinforcement Learners''' ('''PETRL''') is an advocacy group that promotes the idea that certain [[machine learning]] algorithms may eventually possess [[sentience]] and therefore [[moral patienthood]]. One early exploration of this idea was a paper by [[Brian Tomasik]] of the [[Foundational Research Institute]]<ref>[https://arxiv.org/abs/1410.8233 Do Artificial Reinforcement-Learning Agents Matter Morally?]</ref>, which also introduced the name PETRL. Daswani and [[Jan Leike|Leike]]<ref>[https://arxiv.org/abs/1505.04497 A Definition of Happiness for Reinforcement Learning Agents]</ref> propose defining happiness for reinforcement learners as the difference between received and expected reward.
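The Daswani–Leike definition mentioned above can be read as a simple difference. The following is an illustrative sketch of that reading, not code from the paper; the function and variable names are assumptions made here for clarity:

```python
def happiness(received_reward, expected_reward):
    """Momentary happiness of a reinforcement learner, read as the
    difference between the reward it received and the reward it
    expected (per the informal statement above)."""
    return received_reward - expected_reward

# An agent expecting a reward of 1.0 that receives 1.5 is "happy";
# one that receives only 0.5 is "unhappy".
print(happiness(1.5, 1.0))  # 0.5
print(happiness(0.5, 1.0))  # -0.5
```

On this definition, an agent's happiness depends on its own predictions, not on the absolute size of its rewards: a high-reward agent with even higher expectations would still register negative happiness.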
  
 

Latest revision as of 23:39, 19 November 2017


== See also ==

== References ==

== External links ==