Center on Long-Term Risk


The Center on Long-Term Risk (previously known as the Foundational Research Institute[1]) studies cooperative strategies for reducing risks of astronomical future suffering (s-risks). For instance, some of its research concerns suffering-focused AI safety, which, unlike other AI safety research, focuses on preventing the worst (dystopian) outcomes rather than on ensuring that the best possible (utopian) outcomes are realized.[2]

References