Utilitarianism is an ethical theory that holds that the best action is the one that maximizes the aggregate well-being of all sentient entities. Since it focuses solely on the consequences of an action for well-being, it is considered a consequentialist theory of ethics. Several different varieties of utilitarianism exist:
- What is "well-being"? Hedonistic (or experiential) theories of well-being claim that the experience of positively-valenced states (such as pleasure) is good, and that the experience of negatively-valenced states (such as suffering) is bad. This leads to the implication that wireheading is desirable. On the other hand, desire-based (or preference-based) theories of well-being argue that fulfilling desires is what matters. Depending on how this is specified, this can also lead to counterintuitive implications.
- How should we aggregate well-being across sentient entities? The total and average views are the best known, but each has some counterintuitive implications. Further complications arise when considering cases that trade off between current and future generations.
- Should we evaluate individual actions (act utilitarianism) or adhere to rules aimed at maximizing well-being in the long term (rule utilitarianism)?
- How do we weigh positive well-being (e.g., happiness) and negative well-being (e.g., suffering)? A minority position known as negative utilitarianism (or NU) holds that decreasing suffering is "more important" than increasing happiness; different specific varieties of NU exist.
- And so on.
The Wikipedia articles contain further details as well as numerous critiques of utilitarianism. This article will focus on the intersection of utilitarianism and transhumanism.