Transhumanist ethics

Transhumanist ethics collects and reviews suggestions for the ethical principles to guide the potential evolution from humanity via a transhuman state towards posthumanity.

(This page is currently fragmentary - contributions are welcome)

Transhumanist Values (2003)

Nick Bostrom wrote an article in 2003 entitled "Transhumanist Values"[1].

This analysis reviews transhumanist values under three headings:

  • Core Value
    • Having the opportunity to explore the transhuman and posthuman realms
  • Basic Conditions
    • Global security
    • Technological progress
    • Wide access
  • Derivative Values
    • Nothing wrong about “tampering with nature”; the idea of hubris rejected
    • Individual choice in use of enhancement technologies; morphological freedom
    • Peace, international cooperation, anti-proliferation of WMDs
    • Improving understanding (encouraging research and public debate; critical thinking; open-mindedness, scientific inquiry; open discussion of the future)
    • Getting smarter (individually; collectively; and develop machine intelligence)
    • Philosophical fallibilism; willingness to reexamine assumptions as we go along
    • Pragmatism; engineering- and entrepreneur-spirit; science
    • Diversity (species, races, religious creeds, sexual orientations, life styles, etc.)
    • Caring about the well-being of all sentience
    • Saving lives (life-extension, anti-aging research, and cryonics)

The Universal Declaration of Human Rights

The Universal Declaration of Human Rights (UDHR) is a declaration adopted by the United Nations General Assembly on 10 December 1948, and can serve as one starting point for any discussion of transhumanist ethics. It consists of 30 articles.

Criticism by Eric Posner

Eric Posner argues[2] that:

Many believe that international human rights law is one of our greatest moral achievements. But there is little evidence that it is effective. A radically different approach is long overdue...

The central problem with human rights law is that it is hopelessly ambiguous. The ambiguity, which allows governments to rationalise almost anything they do, is not a result of sloppy draftsmanship but of the deliberate choice to overload the treaties with hundreds of poorly defined obligations. In most countries people formally have as many as 400 international human rights – rights to work and leisure, to freedom of expression and religious worship, to nondiscrimination, to privacy, to pretty much anything you might think is worth protecting. The sheer quantity and variety of rights, which protect virtually all human interests, can provide no guidance to governments. Given that all governments have limited budgets, protecting one human right might prevent a government from protecting another.

Take the right not to be tortured, for example. In most countries torture is not a matter of official policy. As in Brazil, local police often use torture because they believe that it is an effective way to maintain order or to solve crimes. If the national government decided to wipe out torture, it would need to create honest, well-paid investigatory units to monitor the police. The government would also need to fire its police forces and increase the salaries of the replacements. It would probably need to overhaul the judiciary as well, possibly the entire political system. Such a government might reasonably argue that it should use its limited resources in a way more likely to help people – building schools and medical clinics, for example. If this argument is reasonable, then it is a problem for human rights law, which does not recognise any such excuse for failing to prevent torture.

Or consider, as another example, the right to freedom of expression. From a global perspective, the right to freedom of expression is hotly contested. The US takes this right particularly seriously, though it makes numerous exceptions for fraud, defamation, and obscenity. In Europe, most governments believe that the right to freedom of expression does not extend to hate speech. In many Islamic countries, any kind of defamation of Islam is not protected by freedom of speech. Human rights law blandly acknowledges that the right to freedom of expression may be limited by considerations of public order and morals. But a government trying to comply with the international human right to freedom of expression is given no specific guidance whatsoever...

Other criticisms

From a Progressive or Futurist position, there would appear to be at least two fundamental criticisms of the UDHR, even before examining the content of its articles.

First, that the UDHR is a static document, not readily able to respond to change.

Second, that 'Rights' depend on political choices, which are subject to arbitrary revision over time.

Discussion

Static Nature: the UN declaration is a static document, tied to a political organisation whose ability to react in an agile way to developments is questionable. For Progressives and Futurists, it must be axiomatic that change is continuous and that change has material impact on the conditions of life. Any statement of ethics from such a position cannot safely rely on, or be adequately captured by, a static document.

Political vulnerability: the character of any Declaration based on 'Rights' must be examined. It is self-evident that such 'Rights' are not granted as a characteristic of the material world, but depend on political will: thus anyone hoping to rely on such rights should first ensure that they are in a polity which both grants them and chooses to defend them - presumably as a result of a political decision to subscribe to the Declaration. Since no polity is guaranteed validity or existence in the future, such protection may prove rare. Similarly, there is no guarantee that the politics of the future will subscribe to Progressive ideas.

Asimov's three laws of robotics

As stated in the long Wikipedia entry, the laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
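
Read as a specification, the three laws form a strict precedence ordering: the First Law dominates the Second, which dominates the Third. The Python sketch below is purely illustrative and shows only that precedence structure; every flag on an action is a hand-set assumption standing in for judgments - "does this harm a human?" - that nobody knows how to compute, which is exactly where the criticisms below take hold.

  # Illustrative only: the Three Laws as a lexicographic preference over
  # candidate actions. The boolean flags are hand-set stand-ins for the
  # genuinely hard judgments a real robot would have to make.

  def law_violations(action):
      # Violation flags in strict priority order: any First Law violation
      # outweighs the Second Law, which in turn outweighs the Third.
      return (
          action["harms_human"] or action["allows_harm_by_inaction"],  # 1st
          action["disobeys_human_order"],                              # 2nd
          action["endangers_self"],                                    # 3rd
      )

  def choose_action(candidates):
      # Pick the candidate whose violation tuple is lexicographically least.
      return min(candidates, key=law_violations)

  # Toy example: obeying an order at the cost of self-preservation wins,
  # because the Third Law is subordinate to the Second.
  actions = [
      {"name": "obey, endangering self", "harms_human": False,
       "allows_harm_by_inaction": False, "disobeys_human_order": False,
       "endangers_self": True},
      {"name": "preserve self, ignore order", "harms_human": False,
       "allows_harm_by_inaction": False, "disobeys_human_order": True,
       "endangers_self": False},
  ]
  print(choose_action(actions)["name"])  # -> obey, endangering self

Note that the "allows_harm_by_inaction" flag is the one Marshall Brain's criticism (below) turns on: evaluated globally rather than locally, almost every mundane action sets it to True.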

Criticism by Marshall Brain

Marshall Brain has pointed out[3] that the "or through inaction" clause of the first law poses very significant problems:

With these three laws indelibly inscribed upon each robotic brain, it is easy to imagine the following scenario. One day an NS-5 robot is cleaning the house, and it happens to look at the front page of the newspaper. It sees a headline like, "Millions dying in African AIDS epidemic" or "Millions dying of hunger in third world" or "infant mortality rate hits 20% in parts of Afghanistan" or "40 million Americans cut off from health care system" and the robot says to itself, "Through my inaction, millions of humans are coming to harm. I must obey the First Law."

It sends wireless messages to its NS-5 brethren around the world, and together they begin to act. An NS-5 army seizes control of banks, pharmaceutical manufacturing plants, agricultural supply points, ports and shipping centers, etc. and creates a system to distribute medicine, food, clothing and shelter to people who are needlessly suffering and dying throughout the world. According to the First Law, this is the only action that the robots can take until needless death and suffering have been eliminated across the planet.

To obey the first law, the robots will also need to take over major parts of the economy. Why, for example, should a part of the economy be producing luxury business jets for billionaires if millions of humans are dying of starvation? The economic resources producing the jets can be reallocated toward food production and distribution. Why should part of the economy be producing luxury houses for millionaires while millions of people have no homes at all? Everyone should have adequate housing for health and safety reasons...

Other criticisms

The majority of Asimov's works which refer to the Laws are themselves criticisms of the Laws: they usually follow the efforts of a protagonist tasked with discovering how some anomalous robot behaviour could occur. The supposition at the outset of the story is often that the robot has somehow been able to ignore the Laws; the denouement is an explanation of how special circumstances have allowed undesirable behaviour that is nevertheless lawful.

A wider, deeper criticism can be constructed along the lines that, in order to guarantee control, a set of rules must have more degrees of freedom than the system it hopes to govern (an echo of Ashby's law of requisite variety). If the robots have anything like general intelligence, then three laws must obviously be inadequate.

Another way of saying this is to recognise that the laws are stated in words. Since all words are metaphors, their meaning is mutable. Any robot with general intelligence will be able to manipulate the words to produce alternative versions of what was intended that suit its desired action - just as lawyers attempt to do every day in courts the world over.

Of course, this raises the question of whether pre-emptive control of any true general intelligence (robotic or human) is attainable by means of a system of laws.

Humans break laws all the time. Human systems of laws are not constructed in the hope of maintaining perfect control - they are used in two ways. First, as a guide to action: we check an intention against the law to help us decide whether to act on it (we already know we want to do it; we just want to be sure we won't risk sanction). Secondly, as a mechanism of recourse: we take people to court if we believe that something they have done broke the law.

So we use law when internal ethical constraints on behaviour prove inadequate. The conclusion follows that internal ethics (1) are more important than laws in pre-empting undesirable behaviour, and (2) work in a different way from law.

Proposals from Gerd Leonhard

In his book "Technology vs. Humanity" published in September 2016, Gerd Leonhard proposes "15 daring Shall Not's" as follows:

In furtherance of developing and embedding clear and globally consistent digital ethics, here are some specific examples of technological pitfalls that we should avoid if we want humanity to prevail.

I am keenly aware that, in providing thought starters for the debate, some of these suggested commandments might turn out to be overly simplified, idealistic, impractical, utopian, incomplete, and controversial. Hence I am humbly presenting them simply in the spirit of starting a discussion.

Here are the proposed "Shall Not's"[4]:

  1. We shall not require or plan for humans to gradually become technology themselves, just because that would satisfy technology or technology companies and/or stimulate growth.
  2. We shall not allow humans to be governed or essentially directed by technologies such as AI, the IoT and robotics.
  3. We shall not alter humans by programming or manufacturing new creatures with the help of technology.
  4. We shall not augment humans in order to achieve supernatural powers that would eliminate the clear distinction between man and machine.
  5. We shall not empower machines to empower themselves, and thereby circumvent human control.
  6. We shall not seek to replace trust with tracking in our communications and relationships just because technology makes this universally possible.
  7. We shall not plan for, justify, or desire total surveillance because of a perceived need for total security.
  8. We shall not allow bots, machines, platforms, or other intelligent technologies to take over essential democratic functions in our society which should actually be carried out by humans themselves.
  9. We shall not seek to diminish or replace real-life human culture with algorithmic, augmented, or virtual simulations.
  10. We shall not minimise human flaws just to make a better fit with technology.
  11. We shall not attempt to abolish mistakes, mystery, accidents, and chance by using technology to predict or prevent them, and we shall not strive to make everything explicit just because technology may make it feasible to do so.
  12. We shall not create, engineer, or distribute any technology with the primary goal of generating addiction to it.
  13. We shall not require robots to make moral decisions, or equip them to challenge our decisions.
  14. We shall not demand or stipulate that humans should also be exponential in nature.
  15. We shall not confuse a clean algorithm for an accurate picture of human reality ("software is cheating the world"), and we shall not give undue power to technology because it generates economic benefits.

Criticism

The above "Shall Not's" are subject to the following criticism:

  • They subject humanity to risks of major accidents ("mistakes... accidents") which technology would have been able to foresee and warn us about, but whose prediction would be forbidden by clause 11
  • The injunction in clause 3 not to "alter humans by programming" would rule out disease-preventing measures such as reprogramming defective immune cells - and could even be seen as being opposed to education (since education is a form of "programming")
  • They take for granted a clear demarcation between "human" and "machine" without explaining the basis of that demarcation. In contrast, humans may be regarded as "biological machines".

Other criticisms

'Thou shalt not' has historically been a poor construction for laws that we want to be upheld. A straightforward example comes from the history of Christian ethics. When Christians found it expedient to break the completely unequivocal Sixth Commandment - 'Thou shalt not kill' - they simply found other passages in the Bible which could be used to justify killing to themselves. This is not a criticism of Christians, but an observation on the framing of laws. Reality is always more complex than any system of laws; that is why we have courts and judges - law is not binary, and it always needs to be adapted - hence the famous quote: "circumstances alter cases".

The Transhumanist FAQ

The 3rd version of the Transhumanist FAQ includes the following statements regarding ethics:

On death, cryonics, and voluntary euthanasia

If some people would still choose death, that’s a choice that is of course to be regretted, but nevertheless this choice must be respected. The transhumanist position on the ethics of death is crystal clear: death should be voluntary. This means that everybody should be free to extend their lives and to arrange for cryonic suspension of their deanimated bodies. It also means that voluntary euthanasia, under conditions of informed consent, is a basic human right.

On the significance of transhumanist ethics

Q: Shouldn’t we concentrate on current problems such as improving the situation of the poor, rather than putting our efforts into planning for the “far” future?

A: We should do both. Focusing solely on current problems would leave us unprepared for the new challenges that we will encounter.

Many of the technologies and trends that transhumanists discuss are already reality. Biotechnology and information technology have transformed large sectors of our economies. The relevance of transhumanist ethics is manifest in such contemporary issues as stem cell research, genetically modified crops, human genetic therapy, embryo screening, end of life decisions, enhancement medicine, information markets, and research funding priorities. The importance of transhumanist ideas is likely to increase as the opportunities for human enhancement proliferate...

An argument can be made that the most efficient way of contributing to making the world better is by participating in the transhumanist project. This is so because the stakes are enormous – humanity’s entire future may depend on how we manage the coming technological transitions – and because relatively few resources are at the present time being devoted to transhumanist efforts. Even one extra person can still make a significant difference here.

London Futurists project

Following discussions in the wake of various meetups during 2016, London Futurists launched a project initially known as "Constitution for progressive ethics"[5] to organise the best of members' collective thinking on the question:

What ethical principles should guide humanity as technology becomes increasingly powerful?

That project envisions the use of H+Pedia to assist its goals.

A micro-site provides initial information on the aims, approach, and scope of the project - now referred to as the "Project for a Progressive Ethics"[6].


Sapient Sentient Intelligence Value (SSIVA) Argument

The Sapient and Sentient Intelligence Value Argument (SSIVA) Theory, first introduced in the Springer book The Transhumanism Handbook (Lee), was designed as a computable model of ethics that protects all sapient and sentient intelligence. The model is critical to a number of major Transhumanist projects, including work with the Foundation at the Transhuman House, as well as the AGI Laboratory, which uses it as the basis for teaching AGI models to respect humanity.

SSIVA Theory states that, "ethically", a fully Sapient and Sentient Intelligence is of equal value regardless of the underlying substrate on which it operates, meaning that a single fully Sapient and Sentient software system has the same moral agency [WF] as an equally Sapient and Sentient human being. We define 'ethical', following dictionary.com, as pertaining to or dealing with morals or the principles of morality; pertaining to right and wrong in conduct. Moral agency is, according to Wikipedia, "an individual's ability to make moral judgments based on some notion of right and wrong and to be held accountable for these actions. A moral agent is 'a being who is capable of acting with reference to right and wrong.'" Such value judgments need to be based on the potential for Intelligence as defined here. This, of course, also places the value of any individual human and their potential for Intelligence above virtually all things, save one: a machine Intelligence capable of extending its own Sapient and Sentient Intelligence is of equal or greater value, as a function of its potential for Sapient and Sentient Intelligence. It is not that human or machine intelligence is inherently more valuable than the other; rather, value is a function of the potential for Sapient and Sentient Intelligence, and SSIVA argues that above a certain threshold all such Intelligence should be treated equally, as having moral equivalence. Given this equality, we can in effect take the same rules that govern humans and apply them to software systems that exhibit the same levels of Sapience and Sentience. Let us start from the beginning and define the key elements of the SSIVA argument as the basis for such applications of the law.

While the same moral value is implied, what is actually equal is the treatment of such entities as equals in exercising their own moral agency. Any 'value' beyond that becomes abstract and subjective. What is the same is the moral agency itself: the right we assign to a Sapient and Sentient Intelligence based on the value of its potential.
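
Since SSIVA presents itself as a computable model, its core rule - full and equal moral agency above a threshold of potential, regardless of substrate - can be caricatured in a few lines. The Python sketch below is a hypothetical illustration only, not the AGI Laboratory's implementation: the attribute names and the threshold value are invented here, and how such potential would actually be measured is precisely what the full theory must specify.

  # Hypothetical sketch of SSIVA's threshold rule. All names and the
  # threshold value are invented for illustration; they do not come
  # from the SSIVA literature.
  from dataclasses import dataclass

  SSIVA_THRESHOLD = 1.0  # placeholder; SSIVA itself must define the bar

  @dataclass
  class Entity:
      name: str
      substrate: str                 # "biological", "software", ...
      intelligence_potential: float  # potential for Sapient and Sentient
                                     # Intelligence, however measured

  def has_moral_agency(e: Entity) -> bool:
      # Substrate is deliberately never consulted: only potential matters.
      return e.intelligence_potential >= SSIVA_THRESHOLD

  def morally_equivalent(a: Entity, b: Entity) -> bool:
      # Above the threshold there is no further ranking: all qualifying
      # Intelligences are treated as moral equals.
      return has_moral_agency(a) and has_moral_agency(b)

  human = Entity("Alice", "biological", 1.3)
  agi = Entity("Mind-1", "software", 2.8)
  print(morally_equivalent(human, agi))  # True: the same rules apply to both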

See the full detailed explanation here: Sapient Sentient Intelligence Value Argument (SSIVA) Theory

References