Utilitarianism
Utilitarianism is an ethical theory that holds that the best action is the one that maximizes the aggregate well-being of all sentient beings. Because it evaluates actions solely by their consequences, it is considered a consequentialist theory of ethics. Several varieties of utilitarianism exist, distinguished by how they answer the questions below.
Overview
- What is "well-being"? Hedonistic (or experiential) theories of well-being claim that the experience of positively-valenced states (such as pleasure) is good, and that the experience of negatively-valenced states (such as suffering) is bad. A frequently noted implication is that wireheading would be desirable. On the other hand, desire-based (or preference-based) theories of well-being argue that fulfilling desires is what matters. Depending on how this is specified, it can also lead to counterintuitive implications.[1]
- How should we aggregate well-being across sentient entities? The total and average views are the best known, but each has some counterintuitive implications. Further complications arise when considering cases that trade off between current and future generations.[2]
- Should we evaluate individual actions (act utilitarianism) or adhere to rules aimed at maximizing well-being in the long term (rule utilitarianism)?
- How do we weigh positive well-being (e.g., happiness) against negative well-being (e.g., suffering)? A minority position known as negative utilitarianism (or NU) holds that decreasing suffering is "more important" than increasing happiness; several specific varieties of NU exist.[3] David Pearce is a notable advocate of negative utilitarianism.
- And so on.
The Wikipedia[4] and SEP[5] articles contain further details, as well as numerous critiques of utilitarianism. This article focuses on the intersection of utilitarianism and transhumanism.