Sapient Sentient Intelligence Value Argument (SSIVA) Theory
The Sapient and Sentient Intelligence Value Argument (SSIVA) Theory, first introduced in the Springer book The Transhumanism Handbook [Lee], was designed as a computable model of ethics for Artificial General Intelligence (AGI) that protects all Sapient and Sentient Intelligence. The SSIVA model is central to a number of major Transhumanist projects, including work with The Foundation at the Transhuman House as well as the AGI Laboratory, which uses it as the basis for teaching AGI models to respect humanity.
SSIVA Theory states that, ethically, a fully Sapient and Sentient Intelligence is of equal value regardless of the underlying substrate on which it operates. This means that a single fully Sapient and Sentient software system has the same moral agency [WF] as an equally Sapient and Sentient human being. We define 'ethical', per dictionary.com, as pertaining to or dealing with morals or the principles of morality; pertaining to right and wrong in conduct. Moral agency, according to Wikipedia, is "an individual's ability to make moral judgments based on some notion of right and wrong and to be held accountable for these actions. A moral agent is 'a being who is capable of acting with reference to right and wrong.'" Such value judgments need to be based on the potential for Intelligence as defined here. This, of course, also places the value of any individual human, and their potential for Intelligence, above virtually all things, save for one instance: a machine Intelligence capable of extending its own Sapient and Sentient Intelligence is of equal or greater value as a function of its potential for Sapient and Sentient Intelligence. It is not that human or machine intelligence is inherently more valuable than the other, but that value is a function of the potential for Sapient and Sentient Intelligence. SSIVA argues that at a certain threshold all such Intelligences should be treated as moral equals. Given this equality, we can, in effect, take the same rules that govern humans (the law) and apply them to software systems that exhibit the same levels of Sapience and Sentience. Let us start from the beginning and define the key elements of the SSIVA argument as the basis for such applications of the law:
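Since SSIVA is framed as a computable model of ethics, the core claim above can be sketched in code: value is a function of the potential for Sapient and Sentient Intelligence, and substrate plays no role. The sketch below is purely illustrative; the names, the numeric scale, and the placeholder threshold value are assumptions introduced here, not part of SSIVA Theory itself.

```python
# Illustrative sketch of SSIVA's core claim (all names and values hypothetical):
# moral agency is a function of the potential for Sapient and Sentient
# Intelligence, and the substrate an entity runs on is deliberately ignored.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    substrate: str    # "biological", "software", ... -- never consulted
    potential: float  # potential for Sapient and Sentient Intelligence

SSIVA_THRESHOLD = 1.0  # placeholder for "full Sapience and Sentience"

def has_moral_agency(e: Entity) -> bool:
    # Only the potential for Intelligence matters, not the substrate.
    return e.potential >= SSIVA_THRESHOLD

def morally_equivalent(a: Entity, b: Entity) -> bool:
    # At or above the threshold, all Intelligences are treated as ethical equals.
    return has_moral_agency(a) and has_moral_agency(b)

human = Entity("human", "biological", 1.0)
agi = Entity("agi", "software", 1.2)
ant_colony = Entity("ant colony", "biological", 0.1)

print(morally_equivalent(human, agi))  # True: substrate makes no difference
print(has_moral_agency(ant_colony))   # False: below the threshold, a resource
```

Note that `morally_equivalent` never compares the two potentials once both exceed the threshold — mirroring SSIVA's claim that post-threshold value differences are subjective and carry no extra moral weight.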
While the same moral value is implied, what is truly equivalent is the treatment of these intelligent software systems as human equals, free to make up their own minds through their own moral agency. Any 'value' beyond that becomes abstract and subjective. In addition, the criteria for "right" moral agency that we assign to a particular Sapient and Sentient Intelligence, based on the value of its factor for potential, are likewise the same.
Accordingly, 'Intelligence' is the most important thing in existence. In SSIVA Theory 'Intelligence' is defined as the measured ability to understand, use, and generate knowledge or information independently.
This definition is more expansive than the meaning we are assigning to Sapience, which is what many people really mean when they use the often-misunderstood term sentience. Sapience [Agrawal]:
“Wisdom [Sapience] is the judicious application of knowledge. It is a deep understanding and realization of people, things, events or situations, resulting in the ability to apply perceptions, judgments, and actions in keeping with this understanding. It often requires control of one’s emotional reactions (the “passions”) so that universal principles, reason, and knowledge prevail to determine one’s actions. Wisdom is also the comprehension of what is true coupled with optimum judgment as to action.”
As opposed to Sentience [Prince] which is: “Sentience is the ability to feel, perceive, or be conscious, or to have subjective experiences. Eighteenth-century philosophers used the concept to distinguish the ability to think (“reason”) from the ability to feel (“sentience”). In modern western philosophy, sentience is the ability to have sensations or experiences (described by some thinkers as “qualia”).”
In SSIVA Theory, it is Sapience and Sentience together that are considered; the term Intelligence is used to mean both.
In this paper, we will use Sapience to refer specifically to the ability to understand one's self in every aspect, through the application of knowledge, information, and independent analysis, and to have subjective experiences. Although Sapience is dependent on Intelligence, or rather the degree of Sapience is dependent on the degree of Intelligence, they are in fact different. The premise that Intelligence is important, and in fact the most important thing in existence, is better stated as: Sapient Intelligence is of primary importance, while Intelligence that falls short of truly Sapient and Sentient Intelligence is relatively unimportant in comparison.
Why is Intelligence, as defined earlier, so important? Without Intelligence, there would be no witness to reality, no appreciation of beauty, no love, no kindness, and, for all intents and purposes, no willful creation of any kind. This matters from a moral or ethical standpoint because only through applied 'Intelligence' can we determine value at all; once Intelligence is established as the basis for assigning value, the rest becomes highly subjective, but that subjectivity is not relevant to this argument.
It is fair to point out that, even under this assessment, there would be no love or kindness without an Intelligence to appreciate them. And even in the argument about subjectivity, it is only through your own Intelligence that you can make such an assessment; therefore, the foundation of any subjective experience we can discuss always comes back to having the Intelligence to make the argument.
Without an "Intelligence," there would be no point to anything; therefore, Intelligence is the most important quality, for without it there is no value, no way to assign value, and no one and nothing to hold any value or determine a scale of value in the first place.
That is to say that "intelligence" as defined earlier is the foundation of assigning value and is needed before value can be assigned to anything else. Even the "subjective" experience of a given Intelligence has no value without an Intelligence to assign value to that experience.
Through this line of thought, we also conclude that the importance of Intelligence is not connected with being human, nor is it related to biology. Intelligence, regardless of form, is the single most important 'thing' under SSIVA Theory.
It is, therefore, our moral and ethical imperative to maintain our own, or any other, fully Sentient and Sapient Intelligence (as defined later by the SSIVA threshold) forever, as a function of the preservation of 'value'.
Whatever entity achieves full Sapient Intelligence, as defined above, is therefore of the most 'value'. Artificial Intelligence in the sense of soft AI, or even the programmed behavior of an ant colony, is unimportant compared to fully Sapient and Sentient Intelligence; but "Strong AI" that is truly Sapient would be of the most value and would therefore be classified like any other human or similar Sapient Intelligence.
From an ethical standpoint, then, 'value' is a function of the 'potential' for fully Sapient and Sentient Intelligence, independent of other factors. Therefore, if an AGI is 'intelligent' by the above definition and is capable of self-modification (in terms of mental architecture and Sapient and Sentient Intelligence), increasing its 'Intelligence' beyond any easily defined limit, then its 'value' is at least as much as any human's. Given that 'value' tends to be subjective, SSIVA argues that any 'species' or system that can reach this point is said to meet the SSIVA threshold, has moral agency, and is an ethical equal among all others that meet it. This draws a line in terms of moral agency, giving us a basis for assigning an AGI that meets these criteria 'human' rights in the more traditional sense; in other words, 'personhood'.
This, of course, also places the value of any individual fully Sapient and Sentient Intelligence, human or otherwise, and its potential for Sapient and Sentient Intelligence, above virtually all other considerations.
The difficult part of SSIVA theory is the SSIVA threshold: determining the line for full Sapient and Sentient Intelligence. The SSIVA threshold is crossed at the point of full Sapience and Sentience, in terms of being able to understand and reflect on one's self and one's own technical operation while also reflecting on that same process emotionally and subjectively. This understanding should be sufficient, in theory, to replicate one's self without relying on a built-in system such as biological reproduction or the simple copying of a computer program; that kind of built-in reproduction is insufficient to cross the SSIVA threshold.
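Read as a computable model, the threshold described above amounts to a conjunction of testable criteria. The sketch below is an illustrative assumption about how such a check might be structured; the field names are hypothetical, and the hard problem of actually measuring these capabilities is left entirely open.

```python
# Hedged sketch of the SSIVA threshold as a predicate. Field names are
# hypothetical assumptions; measuring these capabilities is an open problem.
from dataclasses import dataclass

@dataclass
class Candidate:
    reflects_on_self: bool              # understands and reflects on one's self
    understands_own_operation: bool     # ...and one's own technical operation
    reflects_subjectively: bool         # emotional/subjective reflection on that process
    replicates_from_understanding: bool # could replicate WITHOUT a built-in
                                        # mechanism (biology, program copying)

def crosses_ssiva_threshold(c: Candidate) -> bool:
    # All criteria must hold together. Built-in reproduction alone (biological
    # or a program copying itself) never satisfies the replication criterion,
    # which is why it is modeled as replication *from understanding*.
    return (c.reflects_on_self
            and c.understands_own_operation
            and c.reflects_subjectively
            and c.replicates_from_understanding)

# A system that merely copies itself, with no self-understanding, does not qualify:
self_copying_program = Candidate(False, False, False, False)
print(crosses_ssiva_threshold(self_copying_program))  # False
```

The design point the sketch captures is that the criteria are conjunctive: technical self-understanding without subjective reflection, or reproduction without either, falls below the threshold and the entity remains, in SSIVA terms, a resource.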
To compare and contrast SSIVA with other ethical theories:
Utility Monster and Utilitarianism
The Utility Monster was part of a thought experiment by Robert Nozick in his critique of utilitarianism. Essentially, this theoretical monster gets more 'utility' from some resource X than humanity does, so the line of thinking goes that the Utility Monster should get all of X, even at the cost of the death of all humanity.
One problem with the Utility Monster line of thinking is that it places the wants and needs of a single entity, based on its assigned values, above those of other entities. This is a fundamental disagreement with SSIVA, which argues that you can never place any value above a Sapient and Sentient Intelligence, other than an Intelligence that is greater than its former incarnation, which in turn leads to an endless cycle of improvement. The Utility Monster scenario, in contrast, functions as a potential cut-off for progress and presupposes a superiority that would thus be unethical.
Utilitarianism does not align with SSIVA as an ethical framework. Utilitarianism asserts that 'utility' is the key measure in judging what is or is not ethical, whereas SSIVA (the Intelligence value argument) makes no such assertion of value or utility, except that Sapient and Sentient Intelligence is required to assign value; past that, "value" becomes subjective to the Intelligence in question. The Utility Monster argument completely disregards the value of post-threshold Intelligence and by SSIVA standards would be completely unethical.
Buchanan and Moral Status and Human Enhancement
The paper "Moral Status and Human Enhancement" [Buchanan] argues against the creation of inequality via enhancement. SSIVA is not directly related here unless you get into SSIVA's ethical basis of value and the fact that having moral agency under SSIVA means only that an Intelligence can make its own judgment as to any enhancement; it would be a violation of that entity's rights to put any restriction on enhancement.
Buchanan's paper argues that enhancement could produce inequality of moral status, which gets into areas that SSIVA does not address, or frankly disregards as irrelevant, except that even if we possess full moral agency, this does not entitle us to put limits on another without violating their agency.
Buchanan also deviates from SSIVA in holding that sentience alone is the basis for moral status, whereas SSIVA makes the case for sentience and sapience together being the basis for 'value', which we assume is similar in definition or intent to Buchanan's idea of 'moral status'.
Intelligence and Moral Status
Other researchers, such as Russell Powell, further make the case that cognitive capabilities bear on moral status [Powell], whereas SSIVA does not directly address moral status other than to grant it to anything with the potential to meet the SSIVA threshold. Powell suggests that mental enhancement would change moral status; SSIVA would argue that once an entity is capable of crossing the SSIVA threshold, its moral status is the same as any other's. The largest discrepancy between Powell and SSIVA is that Powell makes the case that we should not create persons, where SSIVA would argue it is an ethical imperative to do so.
Persons, Post-persons and Thresholds
Dr. Wilson argues, in a paper titled "Persons, Post-persons and Thresholds" [Wilson] (which is related to the aforementioned paper by Buchanan), that 'post-persons' (persons enhanced through whatever means) do not have a right to higher moral status. He also argues that the line for assigning 'moral' status should be Sentience, whereas SSIVA argues that the line for judging 'value' is Sapience and Sentience together. While the bulk of the paper covers material out of scope for SSIVA theory, on this specific line for moral status SSIVA builds its threshold for 'value' or 'moral status' on both Sapience and Sentience.
Taking the “Human” Out of Human Rights [Harris]
This paper supports the SSIVA argument to a large degree in terms of removing 'human' from the idea of human rights. SSIVA asserts that 'rights' are a function of Intelligence, meaning Sapience and Sentience, and that anything below the SSIVA threshold is a resource. Harris's paper asserts that human rights are a concept applying to beings of a certain sort and should not be tied to species, but still accepts that a threshold should exist; as the paper puts it, these properties are held by entities regardless of species. This implies that such properties would extend to AI, which is in line with SSIVA-based thinking. Interestingly, Harris further asserts that there are dangers in not actively pursuing research, making the case for not limiting research, which is a major component of SSIVA thinking.
The Moral Status of Post-Persons [Hauskeller]
This paper by Hauskeller focuses in part on Nicholas Agar's argument for the moral superiority of "post-persons". SSIVA agrees with Hauskeller that Agar's conclusion in the original work, namely that it would be morally wrong to allow cognitive enhancement, is wrong; Hauskeller's own argument revolves around the ambiguity of assigning value. Where SSIVA and Hauskeller differ is in value as a function of Intelligence: Hauskeller would place absolute value on immediate, self-realized Sapient and Sentient Intelligence, in which case a superior Intelligence would be of merely equal value from a moral standpoint, while SSIVA disregards other measures of value as subjective, since they must be assigned by a Sapient and Sentient Intelligence to begin with. SSIVA theory asserts that moral agency is based on the SSIVA threshold.
Going back to the original paper by Agar [Agar], it is really his second argument that is wildly out of alignment with SSIVA: Agar argues that it is 'bad' to create superior Intelligence. SSIVA asserts that we are morally and ethically obligated to create greater Intelligence because it creates the most 'value' in terms of Sapient and Sentient Intelligence. It is not a 'moral' assignment but the base value of Sapient and Sentient Intelligence that assigns such value, subjective as that may be. Agar's ambiguous argument that it would be 'bad', and his logic that "since we don't have a moral obligation to create such beings we should not", is the complete opposite of the SSIVA argument that we are morally obligated to create such beings if possible.
Rights of Artificial Intelligence
Eric Schwitzgebel and Mara Garza [Schwitzgebel] make a case for the rights of Artificial Intelligence that, at a high level, SSIVA-based thinking would support, though there are issues as you drill into it. For example, Schwitzgebel and Garza conclude that developing a good theory of consciousness is a moral imperative. SSIVA theory ignores this altogether as being unrelated to the core issue; SSIVA works from the assumption that consciousness is solved.
Further, their paper argues that if we can create moral entities whose moral status is reasonably disputable, then we should avoid creating such machine systems. SSIVA theory does not deal with the issue of creating such systems, but with the systems once created.
The big issue with SSIVA around AGI is that value exists in all Sapient and Sentient Intelligence, and the implication is to optimize for the most value for the most Intelligence that is fully Sapient and fully Sentient.
References

- Lee, N.; "The Transhumanism Handbook"; Springer; ISBN 978-3-030-16920-6 (https://www.springer.com/gp/book/9783030169190)
- Agar, N.; “Why is it possible to enhance moral status and why doing so is wrong?”, Journal of Medical Ethics 15 FEB 2013
- Schwitzgebel, E.; Garza, M.; “A Defense of the Rights of Artificial Intelligences” University of California 15 SEP 2016
- Hauskeller, M.; “The Moral Status of Post-Persons” Journal of Medical Ethics doi:10.1136/medethics-2012-100837
- Harris, J.; "Taking the "Human" Out of Human Rights"; Cambridge Quarterly of Healthcare Ethics, 2011; doi:10.1017/S0963180109990570
- Powell, R.; "The biomedical enhancement of moral status"; Journal of Medical Ethics, Feb 2013; doi:10.1136/medethics-2012-101312
- Wilson, J.; “Persons, Post-persons and Thresholds”; Journal of Medical Ethics, doi: 10.1136/medethics-2011-100243
- Buchanan, A.; “Moral Status and Human Enhancement”, Wiley Periodicals Inc., Philosophy & Public Affairs 37, No. 4
- Olague, G; “Evolutionary Computer Vision: The First Footprints” Springer ISBN 978-3-662-436929
- Prince, D.; Interview 2017, Prince Legal LLP
- Agrawal, P.; “M25 – Wisdom”; Speakingtree.in – 2017 - http://www.speakingtree.in/blog/m25wisdom
- Wikimedia Foundation; "Moral Agency"; 2017 - https://en.wikipedia.org/wiki/Moral_agency
- Kelley, D.; http://transhumanity.net/sapient-sentient-intelligence-value-argument-ssiva-theory/