Criticism of transhumanism
Revision as of 09:41, 5 September 2018
This page looks at some common criticisms of transhumanism, and points out:
- Where these criticisms miss the mark
- Where these criticisms have substance, so that transhumanists ought to pay attention.
- 1 Disbelief in a particular technology or timescale
- 2 Conflation of transhumanism with belief in ongoing exponential trends
- 3 The view that humanity has already reached a final, desirable state
- 4 The view that human existence needs to be simplified, rather than enhanced
- 5 The view that science and technology will likely cause more harm than good
- 6 The view that improving human morality is much harder than transhumanists suppose
- 7 The view that transhumanism would shatter the core equality of existing humans
- 8 The view that transhumanism will lead to 'gigadeath'
- 9 The view that transhumanism would deprive humanity of ultimate meaning
- 10 The view that transhumanism would have adverse social and environmental impacts
- 11 The view that there are more urgent priorities for human attention
- 12 The view that transhumanists have an unhealthy dislike of the physical
- 13 The view that transhumanism is a restatement of the discredited concept of eugenics
- 14 The view that transhumanism is a totalitarian ideology with insufficient respect for human dignity
- 15 The view that transhumanism is a religion in disguise
- 16 Dislike of any "ism", and a preference for action
- 17 Dislike of the word "transhumanism"
- 18 Dislike of association with transhumanists
- 19 See also
- 20 External links
- 21 References
Disbelief in a particular technology or timescale
Main: The Singularity
Some critics claim that a particular technology (e.g. mind uploading) is very unlikely to be achieved in a given timescale (e.g. by the year 2043), and that transhumanism is therefore discredited. These critics have observed prominent transhumanists seemingly speaking overconfidently about particular technological breakthroughs happening by particular future dates (this page lists some examples).
Note, however, that
- None of the standard definitions of transhumanism take for granted any particular attitude towards any particular technology, e.g. mind uploading, cryonics, or artificial general intelligence
- None of these definitions presuppose any given timescale for future developments to take place.
Any such criticisms, therefore, are criticisms of particular views of particular transhumanists, rather than a criticism of transhumanism itself.
Conflation of transhumanism with belief in ongoing exponential trends
Some critics note that transhumanist writers often refer to ongoing exponential trends in technological development. These critics take the view that all such exponential trends are bound to come to an end, and they conclude that transhumanism is a flawed belief.
For example, consider these scornful comments by Linus Torvalds:
Unending exponential growth? What drugs are those people on? I mean, really.
However, transhumanism simply points to the possibility of ongoing technological improvement, without presupposing that it takes place with an unwavering acceleration.
The view that humanity has already reached a final, desirable state
Some critics - including writers who are favourable to traditional religious viewpoints - assert that humanity has been created in what is already a perfect state, and cannot be meaningfully improved by means proposed by transhumanism (including science and technology).
For example, consider these remarks by Leon Kass, 2003:
Most of the given bestowals of nature have their given species-specified natures: they are each and all of a given sort. Cockroaches and humans are equally bestowed but differently natured. To turn a man into a cockroach — as we don’t need Kafka to show us — would be dehumanizing. To try to turn a man into more than a man might be so as well. We need more than generalized appreciation for nature’s gifts. We need a particular regard and respect for the special gift that is our own given nature.
This view is rebutted by the following response from Nick Bostrom, 2005:
Transhumanists counter that nature’s gifts are sometimes poisoned and should not always be accepted. Cancer, malaria, dementia, aging, starvation, unnecessary suffering, cognitive shortcomings are all among the presents that we wisely refuse. Our own species-specified natures are a rich source of much of the thoroughly unrespectable and unacceptable – susceptibility for disease, murder, rape, genocide, cheating, torture, racism. The horrors of nature in general and of our own nature in particular are so well documented that it is astonishing that somebody as distinguished as Leon Kass should still in this day and age be tempted to rely on the natural as a guide to what is desirable or normatively right. We should be grateful that our ancestors were not swept away by the Kassian sentiment, or we would still be picking lice off each other’s backs. Rather than deferring to the natural order, transhumanists maintain that we can legitimately reform ourselves and our natures in accordance with humane values and personal aspirations.
Transhumanists seek to extend a trajectory that has already taken place in history, as increasing numbers of shortcomings of human life have been addressed via medicine, other technologies, and by social reform. Transhumanists assert that:
- This trajectory still has a long way to continue
- This trajectory should not be curtailed just because someone claims, at any given time, that an aspect of human society (such as slavery, poverty, or aging) is a core part of human nature.
This argument is also made in "A letter to Mother Nature" by Max More, 1999:
Dear Mother Nature:
Sorry to disturb you, but we humans—your offspring—come to you with some things to say. (Perhaps you could pass this on to Father, since we never seem to see him around.) We want to thank you for the many wonderful qualities you have bestowed on us with your slow but massive, distributed intelligence. You have raised us from simple self-replicating chemicals to trillion-celled mammals. You have given us free rein of the planet. You have given us a life span longer than that of almost any other animal. You have endowed us with a complex brain giving us the capacity for language, reason, foresight, curiosity, and creativity. You have given us the capacity for self-understanding as well as empathy for others.
Mother Nature, truly we are grateful for what you have made us. No doubt you did the best you could. However, with all due respect, we must say that you have in many ways done a poor job with the human constitution. You have made us vulnerable to disease and damage. You compel us to age and die—just as we’re beginning to attain wisdom. You were miserly in the extent to which you gave us awareness of our somatic, cognitive, and emotional processes. You held out on us by giving the sharpest senses to other animals. You made us functional only under narrow environmental conditions. You gave us limited memory, poor impulse control, and tribalistic, xenophobic urges. And, you forgot to give us the operating manual for ourselves!
What you have made us is glorious, yet deeply flawed. You seem to have lost interest in our further evolution some 100,000 years ago. Or perhaps you have been biding your time, waiting for us to take the next step ourselves. Either way, we have reached our childhood’s end.
We have decided that it is time to amend the human constitution...
The view that human existence needs to be simplified, rather than enhanced
Some critics assert that humanity is currently too profligate in use of planetary resources, and needs to be returned to a simpler lifestyle that uses fewer resources (rather than moving to a lifestyle with a greater reliance on technology).
This criticism of transhumanism can be viewed as a secular version of the religiously-driven criticism from the previous subsection.
However, regarding the question of over-use of planetary resources, and the associated adverse impact on the environment, transhumanists anticipate that smart combinations of improved technology and improved social relations can reverse the trend towards environmental problems. This argument is developed in the book "The Infinite Resource: The Power of Ideas on a Finite Planet" by Ramez Naam.
For example, here's an extract from a 2013 article by Ramez Naam, "Science will save the planet (if we let it)":
We live on the edge of what our planet can sustain. Carbon emissions are heating the atmosphere. Fisheries have decimated life in the oceans. Agriculture has led to the destruction of half of our forests. Meanwhile, the global population is set to increase by two billion people by 2050. Billions more will rise out of poverty, hungry for richer diets, larger homes, cars and greater access to energy and manufactured goods.
We will not get out of this situation by living simply. Even the most draconian reductions will not suffice to avoid dangerous climate change or to make enough food available to feed nine billion people a western-style diet. Nor will we escape this by ending growth -- almost all of which will happen in the developing world. It would neither be just nor feasible to deny the majority of humanity access to the riches the developed world has enjoyed for years.
There is only one way out of this situation. We must grow the size of the global pie -- increase the resources we have available, whilst reducing our impact on the planet. That means new developments in science and technology. But they can only help the world if we accept what science has to offer us, even when it conflicts with our reactions...
Our responses to things we deem "unnatural" are strong. But science is our best tool to understand -- and save -- the world around us. We must learn to set our emotions aside and embrace what science tells us. GMOs and nuclear power are two of the most effective and most important green technologies we have. If -- after looking at the data -- you aren't in favour of using them responsibly, you aren't an environmentalist.
The view that science and technology will likely cause more harm than good
Some critics of transhumanism assert that more powerful technology is likely to cause:
- Damage to the world (for example, via environmental stress)
- Damage to social well-being (for example, via technological unemployment, or increased financial inequality)
- The outbreak of dangerous new diseases (for example, via problems with genetic manipulation)
- Loss of privacy, and greater manipulation of people by government and/or corporations (via increased surveillance and big data analytics).
Taking this view to an extreme, these critics point to the possibility of technologies inducing existential risks.
More powerful technology might also magnify the impact of existing character flaws, such as personal meanness, cold-heartedness, and egoism. People might become smarter and stronger as a result of improved technology, but there's no guarantee that they'll also become wiser or kinder.
For these reasons, critics object to the transhumanist project "to seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology". These critics claim that science and technology will likely take intelligent life backwards rather than forwards.
However, these criticisms don't undermine the overall transhumanist initiative. Instead, they show that technological improvement needs to happen in parallel with:
- A continued focus on moral issues
- Smart political and social checks on potential misuse of technology.
This point is emphasised in, for example, the continuation of the excerpt given above from "A letter to Mother Nature" by Max More, 1999:
We have decided that it is time to amend the human constitution.
We do not do this lightly, carelessly, or disrespectfully, but cautiously, intelligently, and in pursuit of excellence. We intend to make you proud of us. Over the coming decades we will pursue a series of changes to our own constitution, initiated with the tools of biotechnology guided by critical and creative thinking...
We will cautiously yet boldly reshape our motivational patterns and emotional responses in ways we, as individuals, deem healthy. We will seek to improve upon typical human emotional excesses, bringing about refined emotions. We will strengthen ourselves so we can let go of unhealthy needs for dogmatic certainty, removing emotional barriers to rational self-correction...
These amendments to our constitution will move us from a human to a transhuman condition as individuals. We believe that individual transhumanizing will also allow us to form relationships, cultures, and polities of unprecedented innovation, richness, freedom, and responsibility.
The view that improving human morality is much harder than transhumanists suppose
The criticism expressed in the previous subsection can be extended as follows. A critic may acknowledge that transhumanism wishes to apply moral reasoning and social and political checks against misuse of technology. However, the critic may assert that the task of improving human nature is much harder than transhumanists suppose. Transhumanists, in this analysis, are likely to overreach their capabilities, with disastrous unintended consequences.
For example, Leda Cosmides asserts that the biological underpinnings of human personality are "exquisitely well-designed mental mechanisms that have been engineered by the evolutionary process to solve problems of survival and reproduction". If true, any attempts to alter these biological underpinnings would likely have unintended deleterious side-effects, regardless of the positive motivation of the people making the changes.
A similar criticism is expressed by Francis Fukuyama, 2009:
Transhumanism’s advocates think they understand what constitutes a good human being, and they are happy to leave behind the limited, mortal, natural beings they see around them in favor of something better. But do they really comprehend ultimate human goods? For all our obvious faults, we humans are miraculously complex products of a long evolutionary process — products whose whole is much more than the sum of our parts. Our good characteristics are intimately connected to our bad ones: If we weren’t violent and aggressive, we wouldn’t be able to defend ourselves; if we didn’t have feelings of exclusivity, we wouldn’t be loyal to those close to us; if we never felt jealousy, we would also never feel love. Even our mortality plays a critical function in allowing our species as a whole to survive and adapt... Modifying any one of our key characteristics inevitably entails modifying a complex, interlinked package of traits, and we will never be able to anticipate the ultimate outcome...
The environmental movement has taught us humility and respect for the integrity of nonhuman nature. We need a similar humility concerning our human nature. If we do not develop it soon, we may unwittingly invite the transhumanists to deface humanity with their genetic bulldozers and psychotropic shopping malls.
Moreover, these critics may accept and approve certain types of improvement of human circumstances, such as education, healthcare, and agriculture, whilst opposing any changes at a more fundamental level in the human makeup (such as genetic engineering of the human germline). These latter changes are viewed as too fraught with difficulty.
However, changes to the human germline have been occurring throughout human history (and prehistory), via natural genetic mutation and variation. The transhumanist proposal is to follow careful scientific and technological processes, to find ways to extend and magnify the positive changes that have already occurred.
Transhumanists acknowledge the risk of unforeseen negative consequences from these changes. However, transhumanists also point to the risk of negative consequences of keeping human nature unchanged. For example, Julian Savulescu and Ingmar Persson argue as follows (emphasis added) in favour of "moral bioenhancement":
Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species. We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities.
We have provided ourselves with the tools to end worthwhile life on Earth forever. Nuclear war, with the weapons already in existence today could achieve this alone. If we must possess such a formidable power, it should be entrusted only to those who are both morally enlightened and adequately informed.
In summary, this criticism gives undue emphasis to the risks of causing problems to humanity by attempts to improve human nature, and fails to give enough attention to the risks inherent in keeping human nature unchanged. As such, the criticism unjustifiably privileges the precautionary principle, whereas transhumanists instead propose the proactionary principle.
A similar argument is made by Steven Pinker, 2015:
Biomedical research... promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence: "Get out of the way".
A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future...
Some say that it’s simple prudence to pause and consider the long-term implications of research before it rushes headlong into changing the human condition. But this is an illusion.
First, slowing down research has a massive human cost. Even a one-year delay in implementing an effective treatment could spell death, suffering, or disability for millions of people.
Second, technological prediction beyond a horizon of a few years is so futile that any policy based on it is almost certain to do more harm than good... treatments that were decried in their time as paving the road to hell, including vaccination, transfusions, anesthesia, artificial insemination, organ transplants, and in-vitro fertilization, have become unexceptional boons to human well-being.
Biomedical advances will always be incremental and hard-won, and foreseeable harms can be dealt with as they arise. The human body is staggeringly complex, vulnerable to entropy, shaped by evolution for youthful vigor at the expense of longevity, and governed by intricate feedback loops which ensure that any intervention will be compensated for by other parts of the system. Biomedical research will always be closer to Sisyphus than a runaway train — and the last thing we need is a lobby of so-called ethicists helping to push the rock down the hill.
Note this is not an argument to push ahead with all forms of biomedical experimentation regardless of potential consequences. Instead, there is a clear obligation to anticipate foreseeable harms, and to deal with them in a timely manner. However, this principle should not be extended to a state of hyper-caution regarding potential unforeseen harms. That's because inaction (slow progress with biomedical research) also involves foreseeable harms.
The view that transhumanism would shatter the core equality of existing humans
Main: Oligarchic transhumanism
Some critics point out that, if the transhumanist initiative succeeds, a significant divergence may emerge between enhanced and unenhanced humans. This divergence will have profound implications.
For example, Francis Fukuyama, 2009 states this fundamental objection to transhumanism:
The first victim of transhumanism might be equality. The U.S. Declaration of Independence says that "all men are created equal," and the most serious political fights in the history of the United States have been over who qualifies as fully human. Women and blacks did not make the cut in 1776 when Thomas Jefferson penned the declaration. Slowly and painfully, advanced societies have realized that simply being human entitles a person to political and legal equality. In effect, we have drawn a red line around the human being and said that it is sacrosanct.
Underlying this idea of the equality of rights is the belief that we all possess a human essence that dwarfs manifest differences in skin color, beauty, and even intelligence. This essence, and the view that individuals therefore have inherent value, is at the heart of political liberalism. But modifying that essence is the core of the transhumanist project. If we start transforming ourselves into something superior, what rights will these enhanced creatures claim, and what rights will they possess when compared to those left behind? If some move ahead, can anyone afford not to follow? These questions are troubling enough within rich, developed societies. Add in the implications for citizens of the world’s poorest countries — for whom biotechnology’s marvels likely will be out of reach — and the threat to the idea of equality becomes even more menacing.
The greater the degree of enhancement adopted by some humans, the bigger the resulting differential in capability. Unmodified humans will increasingly be unable to compete. Those humans who wish to exercise their fundamental right not to become enhanced will, therefore, increasingly see enhanced humans as threats.
Transhumanists can respond as follows:
- It is the goal of transhumanism to make 'biotechnology's marvels' - and all other technologies of enhancement - available to anyone who wishes to access them
- Nevertheless, it is a fair point that these technologies will likely result in a greater diversity of human form and experience than has been the case throughout human history, as different people choose different ways to develop themselves
- This greater diversity will bring its own set of challenges, but none of these should be seen as any fundamental reason to oppose the transhumanist project.
The view that transhumanism will lead to 'gigadeath'
Main: Existential risks
As a variant of the previous idea, it may prove to be the case that humans - even if biologically enhanced - are unable to compete with robotic lifeforms. Again, the threat of a fundamental division opens up.
Some writers foresee this division developing into an existential clash, similar to what happened in human prehistory, when the Neanderthals became extinct in the face of competition from Homo sapiens. For example, Hugo de Garis refers to a forthcoming "Artilect War":
The issue of species dominance will dictate our global politics this century. Given the rate at which technologies are developing that enable “artilects”–artificial intellects–it is likely that humanity will be able to build artilects with mental capacities that are literally trillions upon trillions of times above the human level. Humanity will then have to choose whether to become the No. 2 species on the planet or not...
In about a decade there will be a thriving artificial brain industry, and nearly everyone will have a home robot, which will be upgraded every two or three years. Each new home robot generation will be smarter and more useful than the previous generation, so that as the gap between the human intelligence level and the artificial intelligence level gets smaller every year, the species dominance debate will heat up. Millions of people will be asking such questions as: "Can the machines become smarter than humans? Is that a good thing? Should there be a legislated upper limit to machine intelligence? Can the rise of machine intelligence be stopped? What if China’s soldier robots are smarter than America’s soldier robots?" And so on and so forth.
Considering all this, I predict that humanity will split into three major philosophical, ideological, political groups, which I label as follows.
–The Cosmists (based on the word “cosmos”) will be in favor of building these godlike machines (the artilects), who would be immortal, think a million times faster than humans, have unlimited memory, go anywhere, do anything and take any shape...
–The Terrans (based on the word “terra,” meaning the earth) will be opposed to the construction of artilects, fearing that in a highly advanced form, the artilects may decide to wipe us out. To ensure that the probability that this might happen is zero, the Terrans will insist that the artilects are never built in the first place. But this strategy runs utterly contrary to what the Cosmists want. The Terrans will be prepared to go to war against the Cosmists to ensure the survival of the human species.
–The Cyborgists (based on the word “cyborg,” meaning cybernetic organism that is part machine, part human) will want to become artilect gods themselves by adding artilectual components to their own brains, thus avoiding the bitter conflict between the Cosmists and the Terrans.
De Garis contrasts his own forecast of the future with a view held by Ray Kurzweil, among others, that he describes as an "over-optimistic prediction that the rise of the artilect this century will be a positive development for humanity". De Garis continues as follows:
I think it will be a catastrophe. I see a war coming, the “Artilect War,” not between the artilects and human beings, as in the movie Terminator, but between the Terrans, Cosmists and Cyborgists. This will be the worst, most passionate war that humanity has ever known, because the stakes – the survival of our species – have never been so high. Given the period in which this war will occur, the late 21st century, with late 21st century weapons, the scale of the killing will not be in the millions, as in the 20th century (the bloodiest in history, with 200-300 million people killed in wars, purges, holocausts and genocides) but in the billions. There will be gigadeath.
This threat of impending war is made worse by the following consideration:
Kurzweil claims that if ever a war occurred between the Terrans and the other groups it would be a quick no-contest battle. The vastly superior intelligence of the artilect group would quickly overcome the Terrans. Therefore I claim that the Terrans will have to strike first while they can, during the “window of opportunity,” when they have comparable intelligence levels.
Transhumanists can reply as follows:
- The potential of growing divergence needs to be acknowledged in advance
- As stated in the Transhumanist Declaration, "Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications".
- It is a core part of the transhumanist project to support initiatives by organisations that address Existential risks.
The view that transhumanism would deprive humanity of ultimate meaning
Some critics refer, not to the risk of wars, social disruptions, or other existential threats, caused by technological advances, but to the loss of features which are held to be essential for the best qualities of human life. For example, Leon Kass, 2003 makes the following argument:
A flourishing human life is... ours only because we are born, age, replace ourselves, decline, and die — and know it. It is a life of aspiration, made possible by and born of experienced lack, of the disproportion between the transcendent longings of the soul and the limited capacities of our bodies and minds. It is a life that stretches towards some fulfillment to which our natural human soul has been oriented... It is a life not of better genes and enhancing chemicals but of love and friendship, song and dance, speech and deed, working and learning, revering and worshipping.
This criticism holds that it is our impending personal decline and death that provides the context for our lives to truly flourish. Without that prospect, there would be less scope for (to use Kass's words) "revering and worshipping". These critics therefore seek to oppose any transhumanist initiative to decrease the likelihood of eventual personal decline and death.
Transhumanists can reply that it is a matter of conjecture whether the prospect of decline and death is needed to provide a fully meaningful life. Transhumanists see no reason to reach that conclusion. On the contrary, transhumanists see that greater health and longer lifetimes are likely to increase many of the positives mentioned by Kass as being fundamentally important to life: "love and friendship, song and dance, speech and deed, working and learning".
Transhumanists are willing to allow critics the option of personal decline and death for their own lives, if that is what they truly wish for themselves. However, that outcome should not be imposed on people who do not share that belief. Instead, transhumanists assert the moral principle of self-determination: their desire for improved health and lifespan, for themselves and their loved ones, should override any desire by critics to limit and constrain them.
Some critics say that they recognise the attractiveness for individuals to experience improved health and lifespan - in line with the transhumanist initiative - but that if everyone experiences these benefits, society as a whole will suffer:
- Power structures which depend upon death to ensure transition plans, will no longer function
- Dictators and autocrats will no longer be removed from their positions of authority by death
- There will be greater pressures on the environment, caused by ever greater human populations.
Transhumanists can respond that all these suggestions betray a lack of imagination:
- Alternative succession plans, career rotation schemes, and so on, can be designed and implemented
- There are already plenty of examples of dictators and autocrats being removed from authority whilst still in the full flush of health
- There is ample room on the earth, in the sky, and (in due course) in outer space, to accommodate increased population sizes - and there is plenty of energy available (from the sun and beyond) to provide all the needs for these populations.
The view that there are more urgent priorities for human attention
In January 2015, Microsoft co-founder Bill Gates responded to the following question in an “Ask Me Anything” session held on Reddit:
What do you think about life-extending and immortality research?
Putting into words a thought that is probably shared by large numbers of people, Gates replied:
It seems pretty egocentric while we still have malaria and TB for rich people to fund things so they can live longer. It would be nice to live longer though I admit.
However, transhumanists can reply that the same criticism could be raised against numerous medical research programmes – such as those seeking cures for cancer or heart disease. Fixing these diseases will extend lives too. But whilst people are still dying of malaria and TB (tuberculosis) – diseases that can be treated with relatively little expenditure – it might seem a misplaced priority to put large amounts of money into looking for cures for cancer and heart disease. If the criterion is to save the most lives for a given amount of money, perhaps it would be best to terminate cancer research initiatives, and instead purchase more mosquito nets, ensuring they are distributed to all areas still suffering from malaria. That argument shows that things are far from being black and white.
Transhumanists can state that the single biggest killer on the planet is neither malaria nor TB: it’s aging. A successful anti-aging project could, therefore, have the biggest payback of all. Pursuing that goal is far from being egocentric. It’s not just the researchers (and those close to them) who will benefit from the programme. The benefits can reach out to everyone on the planet – including the people in the under-developed communities that are still suffering from outbreaks of malaria and TB. After all, people in these communities suffer from aging too.
The question of the financial effectiveness of different medical programmes hinges on two factors: the number of people who would benefit, and the costs to achieve these benefits. Someone who recognises the huge benefits of an anti-aging initiative could still, rationally, oppose that project, out of a belief that its costs will be vast and the timescales enormously extended.
If someone thinks it will be a very expensive project to bring about widespread healthy lifespan extension, and if they think success with that project lies centuries into the future, they'll be inclined to view rejuvenation researchers as eccentric (mad) and/or egocentric (selfish). However, if they concede the possibility that real progress could happen in the next few decades, for an expenditure comparable to what is currently being spent on, say, cancer research alone, their assessment will change. These critics may start to view transhumanists, not as eccentric, but as inspired; not as egocentric, but as heroic.
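The cost-effectiveness reasoning above can be made concrete with a toy calculation. The figures below are entirely hypothetical, chosen only to illustrate how the ranking of interventions shifts depending on the assumed costs and numbers of beneficiaries; they do not represent real research budgets or outcomes:

```python
# Toy cost-effectiveness comparison. All figures are made up for
# illustration: the point is only that the "best" intervention
# depends heavily on the assumed costs and lives saved.

def cost_per_life_saved(total_cost_usd, lives_saved):
    """Return the (hypothetical) cost, in US dollars, to save one life."""
    return total_cost_usd / lives_saved

# Hypothetical inputs: (total spend, lives saved by that spend)
interventions = {
    "mosquito nets (malaria)": cost_per_life_saved(5_000_000, 1_000),
    "cancer research": cost_per_life_saved(5_000_000_000, 100_000),
    "anti-aging research": cost_per_life_saved(50_000_000_000, 100_000_000),
}

# Rank interventions from cheapest to most expensive per life saved
for name, cost in sorted(interventions.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,.0f} per life saved")
```

Under these invented numbers, anti-aging research comes out cheapest per life saved, despite its enormous headline cost, precisely because aging affects everyone; with different (equally arguable) inputs, mosquito nets would win. The sketch shows why the dispute turns on empirical estimates, not on the arithmetic itself.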
The view that transhumanists have an unhealthy dislike of the physical
Some critics assert that transhumanists place too much emphasis on escaping from physical embodiment. By looking forward to uploading their minds into computer systems, transhumanists display (it is said) an unhealthy dislike of the physical.
That may be a fair assessment of some individual transhumanists. However, there is nothing in transhumanism itself that leads anyone to wish to escape their human body:
- Transhumanism is consistent with seeking improvements and augmentation to human bodies (and brains)
- Transhumanism is consistent with bodily variation, such as acquiring new organs, new appendages, or new sensory capabilities
- Many transhumanists evidently put a high priority on their physical fitness and bodily health.
This criticism, therefore, fails to apply to transhumanism as a whole.
The view that transhumanism is a restatement of the discredited concept of eugenics
Some critics assert that transhumanists are "guilty by association" with ideas adopted by the Nazis. The argument runs as follows:
- Transhumanists are open to the idea of genetic modification
- Historically, genetic modification is associated with the name "eugenics" as practised by racists and fascists (e.g. the Nazis)
- Transhumanists are, therefore, indirectly guilty of racism and fascism.
Transhumanists can reply that similar arguments would indict any practice of vegetarianism or opposition to tobacco, since both these causes were (like eugenics) favoured by Adolf Hitler. Arguments of this "by association" type have no validity.
More specifically, transhumanists can observe a key difference between voluntary and involuntary eugenics:
- Involuntary eugenics involves people being forcibly prevented from having children
- Transhumanist (voluntary) eugenics involves providing greater genetic choice to everyone.
The view that transhumanism is a totalitarian ideology with insufficient respect for human dignity
As a variant of the previous criticism, it can be asserted that transhumanism will inevitably remove freedom of choice from individuals who would prefer to remain unenhanced. To remain unenhanced in the future will be akin to someone choosing to remain illiterate: that choice made sense when many professions had no need of literacy, but anyone in the early 21st century who opts out of learning to read and write would be at a substantial disadvantage.
Socrates is a notable example of someone who warned against reliance on reading and writing. As described in the Out of the Jungle blog, 2007:
[Socrates] worried that reliance on writing would erode memory (it has!), but also, and maybe more importantly, that reading would mislead students to think that they had knowledge, when they only had data.
If literacy (generally regarded as being a universal good) has drawbacks as regards human skillsets, similar drawbacks could apply to the enhancements favoured by transhumanists.
This criticism of transhumanism asserts that when significant new body and brain enhancements become available, everyone will be pressurised to adopt them, despite an internal preference to remain unenhanced. Therefore, without intending it, transhumanism will have the effect of a totalitarian ideology, with insufficient respect for human dignity.
Transhumanists can reply that the upsides and downsides of specific enhancements should be widely discussed in advance, to allow a broader appreciation to be reached on how to preserve human dignity at the same time as humanity is enhanced. There is nothing intrinsic to transhumanism that dictates that personal choices would be overridden.
The view that transhumanism is a religion in disguise
Some critics point out that there are similarities between transhumanism and religion:
- Both talk about the potential for human transcendence
- Both claim to provide insight about fundamental purpose
- Both can inspire a sense of loyalty and excitement that leaves careful rationality behind.
In view of these parallels, some critics assert that transhumanism is inherently suspect. Transhumanists are said to give uncritical credence to various "core texts" or "messianic leaders".
However, every philosophy under the sun has its share of uncritical adherents. That, by itself, is no reason to reject the philosophy.
Dislike of any "ism", and a preference for action
A different set of critics propose that it is more important to actually create engineering solutions (that will enhance humanity) than to develop and support a philosophy about the transformation of human nature. For these critics, the symbol "H+" would better imply "happiness enhanced" than "humanity enhanced". In this case, the issue on the minds of critics isn't particularly with transhumanism, but with any "ism".
Transhumanists can reply that a primary focus on engineering solutions is, indeed, desirable, but that the question of what these engineering changes imply for human existence cannot be avoided. The philosophical questions will emerge, sooner or later, when people think through the consequences of technological progress. The point of transhumanism, therefore, is to anticipate these questions, and to have good answers ready.
Moreover, for the full positive potential of technology to be realised, it often needs to be preceded by a change in public mindset. That’s true for the adoption of individual new products, such as smartphones, when consumers needed to grasp that there was a lot more they could enjoy doing with their mobile phones than merely making voice calls. It’s also true for technology as a whole. Once people realise that technology can do a lot more than provide gadgets, toys, and labour-saving devices - that technology can help the positive ongoing evolution of humanity, and the attainment of levels of experience far beyond the current norm - they will be more likely to put pressure on governments and businesses alike to provide these solutions.
Building this positive mindset is a core task of the transhumanist community. That's a task that requires more than technical engineering.
Dislike of the word "transhumanism"
Some critics like what transhumanism is trying to accomplish, but dislike the actual word "transhumanism". They feel that the word "transhumanism":
- Is too complicated (it has too many syllables)
- Has potentially undesired associations with "transsexual" or "transvestite"
- Gives the impression of being part of a cult.
Transhumanists can reply that the word has a strong heritage and its own set of positive meanings. Careful explanation of its meaning (as given in the article Transhumanism) should reduce the anxiety the word sometimes provokes.
The following argument for keeping the word "transhumanism" is given by Zoltan Istvan:
Singularity. Posthuman. Techno-Optimism. Cyborgism. Humanity+. Immortalist. Machine intelligence. Biohacker. Robotopia. Life extension. Transhumanism.
These are all terms thrown around trying to describe a future in which mind uploading, indefinite lifespans, artificial intelligence, and bionic augmentation may (and I think will) help us to become far more than just human...
This word war is a clash of intellectual ideals. It goes something like this: The singularity people (many at Singularity University) don't like the term transhumanism. Transhumanists don't like posthumanism. Posthumanists don’t like cyborgism. And cyborgism advocates don't like the life extension tag. If you arrange the groups in any order, the same enmity occurs. All sides are wary of others, fearing they might lose ground in bringing the future closer in precisely their way...
The word transhumanism has also long been in use, pushed by philosophers like Max More, David Pearce, and Nick Bostrom. However, until recently, it remained mostly a cult word, used by smaller futurist associations, tech blogs, and older male academics interested in describing radical technology revolutionizing the human experience. Two years ago, a Google search of the word transhumanism — which literally means beyond human — brought up about 100,000 pages. What a difference a few years makes. Today, the word transhumanism now returns almost 2 million pages on Google. And dozens of large social media groups on Facebook and Google+ — consisting of every type of race, age group, sexual orientation, heritage, religion, and nationality — have transhuman in their titles...
Why did this happen so quickly? As with the evolution of most movements and their names, there were numerous moving parts. Dan Brown’s international best-seller novel The Inferno introduced millions of people to transhumanism. So have media celebrities as diverse as Joe Rogan, Glenn Beck, and Jason Silva, host of National Geographic’s Brain Games — all three have discussed transhumanism in their work. A larger reason probably was that both the public and media were ready for an impactful, straightforward word to describe the general flavor of technological existence sweeping over the human race. In case you haven’t noticed, the dead live via saline-cooling suspended animation, the handicapped walk via exoskeleton technology, and the deaf hear via brain microchip implants. The age of frequent, life-altering science is now upon us, and transhumanism is the most functional word to describe it.
Even though the words singularity, cyborg, and life extension generate far more hits on Google than transhumanism, they just don’t feel right describing an ideal and accurate vision of the future. Few people are willing to call themselves a Singularitarian—someone who advocates for a technological event that involves a helpful superintelligence. And Cyborgism is just weird, since the public isn't ready to be merged with machines yet. Life extension isn’t bad, but it’s generally limited only to living longer.
Almost by default, transhumanism has become the overwhelming leader of the name rivalry. Around the world, a quickly growing number of people know what transhumanism is and also subscribe to some of it. It has become the go-to futurist term to express how science and technology are upending the human playing field.
The best alternatives to "transhumanism" may be "H+" or "Humanity+":
- These are simpler words
- They put more prominence on the human core of the ideas.
In practice, Humanity+ denotes a specific transhumanist organisation. "H+" can stand equally well for both "transhumanism" and "Humanity+". That observation is one reason this wiki has the name "H+Pedia".
Dislike of association with transhumanists
A final reason for critics to distrust transhumanism is if they assess the movement as being dominated in practice by people to whom several of the preceding criticisms apply. These are individuals who:
- Pay only lip service to risks of technology going bad, whilst rushing headlong into uncritical adoption of new technologies
- Downplay uncertainties, whilst talking about "the victory of transhumanism being inevitable"
- Focus on promoting their own well-being, whilst showing little concern for people who are presently disadvantaged
- Are collectively fragmented and divisive, rather than being able to cooperate for the greater good
- Are preoccupied with talking rather than doing.
For example, the posthumanism scholar N. Katherine Hayles stated in 2011, "transhumanist rhetoric concentrates on individual transcendence; at transhumanist websites, articles, and books, there is a conspicuous absence of considering socioeconomic dynamics beyond the individual." And, platitudes about inclusiveness aside, the transhumanist road is not one traveled by "the little guy."
At the time of writing, it remains an open question whether the transhumanist community will turn out to be dominated in practice by people with such characteristics, or whether, instead, the movement will live up to its own ideals.
- Transhumanism Criticisms from Future Wikia
- Review by Giulio Prisco of the book 'Against Transhumanism' by Richard Jones