“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” – Vernor Vinge, “The Coming Technological Singularity,” 1993
Futurist and inventor Ray Kurzweil has a plan: he wants to never die.
To achieve this goal, he currently takes over 150 supplements per day, eats a calorie-restricted diet (a technique shown to prolong lifespan in animal studies), drinks ionized water (an alkalinized water that supposedly protects against free radicals in the body), and exercises daily, all to promote the healthy functioning of his body. At 60 years old, he reportedly has the physiology of a man 20 years younger.
But the human body, no matter how well you take care of it, is susceptible to illness, disease, and senescence – the process of cellular change that results in that little thing we all do called “aging.” (This cellular process is why humans are physiologically unable to live much past the age of 125.) Kurzweil is well aware of this, but he has a solution: he is trying to live long enough in his human body to reach the point where man can meld with machine, and he can survive as a cyborg with robotically enhanced features – survive, that is, until the day he can upload his consciousness onto a hard drive, enabling him to “live” forever as bits of information stored indefinitely: immortal, in a sense, as long as he keeps a copy of himself in case the computer fails.
What happens if these technological abilities don’t come soon enough? Kurzweil has a back-up plan. If this mind-machine blend doesn’t occur within his biological lifetime, Kurzweil is signed up with the Alcor Life Extension Foundation to be cryonically frozen and kept in Scottsdale, Arizona, amongst the roughly 900 others signed up or already stored there (including baseball great Ted Williams). At Alcor, he will “wait” until the day scientists discover how to reanimate him – and not too long a wait, as Kurzweil believes that day is about 50 years off.
Ray Kurzweil is a fascinating and controversial figure, both famous and infamous for his technological predictions. A respected scientist and inventor known for accurately forecasting a number of technological developments, he recently started “The Singularity University” here in Silicon Valley, an interdisciplinary program (funded in part by Google) that aims to “assemble, educate and inspire a cadre of leaders” around issues of accelerating technologies.
Kurzweil’s most well-known predictions are encapsulated in an event he forecasts called “The Singularity”: a period in the next few decades when artificial intelligence will exceed human intelligence, and technologies like genetic engineering, nanotechnology, and computing will radically transform human life, enabling mind, body, and machine to become one.
He is also a pioneer of a movement called “transhumanism,” defined by the belief that technology will ultimately replace biology and rid human beings of all the things that, well, make us human: disease, aging, and – you guessed it – death. Why be human when you can be something better? When artificial intelligence and nanotechnology arrive with the Singularity, Kurzweil thinks, being biologically human will become obsolete. With cyborg features and enhanced cognitive capacities, we will have fewer deficiencies and more capabilities; we will become more like machines, and we’ll be better for it.
Kurzweil outlines his vision of our technological future in his article “Reinventing Humanity: The Future of Machine-Human Intelligence” for Futurist Magazine, which raises some juicy points to consider from the perspective of ethics and technology. He explains the Singularity in his own words:
“We stand on the threshold of the most profound and transformative event in the history of humanity, the “singularity”.
What is the Singularity? From my perspective, the Singularity is a future period during which the pace of technological change will be so fast and far-reaching that human existence on this planet will be irreversibly altered. We will combine our brain power—the knowledge, skills, and personality quirks that make us human—with our computer power in order to think, reason, communicate, and create in ways we can scarcely even contemplate today.
This merger of man and machine, coupled with the sudden explosion in machine intelligence and rapid innovation in the fields of gene research as well as nanotechnology, will result in a world where there is no distinction between the biological and the mechanical, or between physical and virtual reality. These technological revolutions will allow us to transcend our frail bodies with all their limitations. Illness, as we know it, will be eradicated. Through the use of nanotechnology, we will be able to manufacture almost any physical product upon demand, world hunger and poverty will be solved, and pollution will vanish. Human existence will undergo a quantum leap in evolution. We will be able to live as long as we choose. The coming into being of such a world is, in essence, the Singularity.”
The coming Singularity, Kurzweil explains, will unfold in three areas: the genetic revolution, the nanotech revolution, and strong AI – which means, essentially, machines that are smarter than humans.
The first he describes is the nanotechnology revolution: technology that manipulates matter on an atomic and molecular scale, potentially allowing us to reassemble matter in a variety of ways. Kurzweil believes nanotechnology will give us the capability to create atom-sized “robots” that can clean our blood cells and eradicate disease; he also thinks it will allow us to create essentially anything by ‘assembling’ it with nanobots (for example, he thinks nanotechnology will enable us to e-mail physical things like clothing, much as we currently e-mail audio files). He explains:
The nanotechnology revolution will enable us to redesign and rebuild—molecule by molecule—our bodies and brains and the world with which we interact, going far beyond the limitations of biology.
In the future, nanoscale devices will run hundreds of tests simultaneously on tiny samples of a given substance. These devices will allow extensive tests to be conducted on nearly invisible samples of blood.
In the area of treatment, a particularly exciting application of this technology is the harnessing of nanoparticles to deliver medication to specific sites in the body. Nanoparticles can guide drugs into cell walls and through the blood-brain barrier. Nanoscale packages can be designed to hold drugs, protect them through the gastrointestinal tract, ferry them to specific locations, and then release them in sophisticated ways that can be influenced and controlled, wirelessly, from outside the body.
Regarding AI, Kurzweil envisions what will eventually become a post-human future, in which we upload our consciousness to computers and live forever as “stored information”:
The implementation of artificial intelligence in our biological systems will mark an evolutionary leap forward for humanity, but it also implies we will indeed become more “machine” than “human.” Billions of nanobots will travel through the bloodstream in our bodies and brains. In our bodies, they will destroy pathogens, correct DNA errors, eliminate toxins, and perform many other tasks to enhance our physical well-being. As a result, we will be able to live indefinitely without aging.
Despite the wonderful future potential of medicine, real human longevity will only be attained when we move away from our biological bodies entirely. As we move toward a software-based existence, we will gain the means of “backing ourselves up” (storing the key patterns underlying our knowledge, skills, and personality in a digital setting) thereby enabling a virtual immortality. Thanks to nanotechnology, we will have bodies that we can not just modify but change into new forms at will. We will be able to quickly change our bodies in full-immersion virtual-reality environments incorporating all of the senses during the 2020s and in real reality in the 2040s.
Now, the idea of becoming nanobot-driven robots is hard to wrap one’s head around, particularly living in a time when people struggle to get their Bluetooth headsets to work correctly. But even though these predictions seem extreme to most people, Kurzweil explains why he thinks these changes are coming fast, even if we can’t conceive of them now. In the vein of Moore’s law (the observation that the density of transistors on computer chips has doubled roughly every two years since the integrated circuit was invented), he argues that technology develops exponentially – and thus the rate of change is rapidly increasing in the modern day:
How is it possible we could be so close to this enormous change and not see it? The answer is the quickening nature of technological innovation. In thinking about the future, few people take into consideration the fact that human scientific progress is exponential…
In other words, the twentieth century was gradually speeding up to today’s rate of progress; its achievements, therefore, were equivalent to about 20 years of progress at the rate of 2000. We’ll make another “20 years” of progress in just 14 years (by 2014), and then do the same again in only seven years. To express this another way, we won’t experience 100 years of technological advance in the twenty-first century; we will witness on the order of 20,000 years of progress (again, when measured by today’s progress rate), or progress on a level of about 1,000 times greater than what was achieved in the twentieth century.
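Kurzweil’s back-of-the-envelope arithmetic can be sketched as a toy model. This is my own illustrative sketch, not his published calculation: assume the rate of progress doubles every decade, with the year-2000 rate normalized to one “year of progress” per calendar year. Summed over each century, the model lands in the same ballpark he describes – the twentieth century yields a couple dozen year-2000-equivalent “years” of progress, the twenty-first yields on the order of ten thousand, and the ratio between them is roughly a thousandfold.

```python
# Toy model (my assumption, not Kurzweil's exact math): the rate of
# progress doubles every 10 years, normalized so that in the year 2000
# one calendar year produces one "year" of progress.

def progress(start_year, end_year, doubling_decades=10):
    """Total progress between two years, in year-2000-equivalent years."""
    return sum(2 ** ((t - 2000) / doubling_decades)
               for t in range(start_year, end_year))

twentieth = progress(1900, 2000)    # roughly 14 "years" of progress
twentyfirst = progress(2000, 2100)  # roughly 14,000 "years" of progress
print(round(twentieth), round(twentyfirst), round(twentyfirst / twentieth))
```

Because the rate doubles ten times per century, the twenty-first century comes out exactly 2^10 = 1024 times more productive than the twentieth under this model – the same order of magnitude as Kurzweil’s “about 1,000 times greater,” even though his quoted figures (20 years, 20,000 years) differ somewhat from this simplified version.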
There are so many questions to ask, it’s hard to know where to start. Considering the Singularity, many questions arise (the first, which you’re probably thinking, is “Is this really possible?!”). But setting that question temporarily aside: what are the promises and perils of nanotechnology, and how can we approach them responsibly? What types of genetic engineering, if any, should we pursue, and what types should we avoid? If we really could live forever, should we – particularly if it meant living no longer as humans, but as machines? And what happens to who we are as human beings – our beliefs, our religions and faiths, our thoughts about our purpose – if we pursue this type of future?
Each of these topics is rife with ethical – and existential – questions, and discussing many of them requires scientific knowledge beyond what I can represent here. But contemplating these questions broadly, even without extensive knowledge of their specifics, brings into focus some fundamental questions about the human experience, and about our technological future and how to approach it. The more we envision a technologically saturated future, I think, the more our human values are revealed as we react, respond, flinch, or embrace the pictures of our future reflected in these predictions. They ask us to consider: what do we value about being human? What do we want to hold on to about being human, and what do we want to replace, augment, and transform with technology? Is living as ‘stored information’ really any life at all?
In addition to these questions, exploring these “futuristic” issues calls us to consider some of our fundamental principles about technology. A basic yet extremely complex question arises: Should all technology be pursued? In other words, should we ever restrict technological innovation, and say that some technologies, because of their risks — to humanity, or to certain human values– simply shouldn’t be developed?
Reflections on this question bring up the topic of techno-optimism and techno-pessimism, which I wrote about briefly here.
Kurzweil, it seems to go without saying, is a full-fledged techno-optimist, interested in letting technology run its full course, even if that means leaving everything recognizably human behind. He concedes that we need to be responsible in our use of nanotechnology – a technology some fear could bring about the end of the world (see the “grey goo” scenario) – but for the most part he is a proponent of full-fledged technological expansion. Reflection is important, in his view, but no amount of it should limit technology:
“We don’t have to look past today to see the intertwined promise and peril of technological advancement,” he says. “Imagine describing the dangers (atomic and hydrogen bombs for one thing) that exist today to people who lived a couple of hundred years ago. They would think it mad to take such risks. But how many people in 2006 would really want to go back to the short, brutish, disease-filled, poverty-stricken, disaster-prone lives that 99% of the human race struggled through two centuries ago?
We may romanticize the past, but up until fairly recently most of humanity lived extremely fragile lives in which one all-too-common misfortune could spell disaster. Two hundred years ago, life expectancy for females in the record-holding country (Sweden) was roughly 35 years, very brief compared with the longest life expectancy today – almost 85 years for Japanese women. Life expectancy for males was roughly 33 years, compared with the current 79 years. Half a day was often required to prepare an evening meal, and hard labor characterized most human activity. There were no social safety nets. Substantial portions of our species still live in this precarious way, which is at least one reason to continue technological progress and the economic improvement that accompanies it. Only technology, with its ability to provide orders of magnitude of advances in capability and affordability, has the scale to confront problems such as poverty, disease, pollution, and the other overriding concerns of society today. The benefits of applying ourselves to these challenges cannot be overstated.”
But another, more technologically conservative view is important to consider, one characterized by thinkers who question whether these technologies should be proliferated, or even pursued at all.
Bill Joy, co-founder of Sun Microsystems, famously countered Kurzweil’s predictions in his article “Why the Future Doesn’t Need Us.” He opens the article by describing his meeting with Kurzweil:
“From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.
I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray’s proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.”
Joy then argues that these technologies (namely robotics, genetic engineering, and nanotechnology) pose a new, unparalleled threat to humanity, and that as a result we shouldn’t pursue them; in fact, we should purposefully restrict them, on the principle that the harm they threaten to humanity itself outweighs whatever benefit they could bring.
“Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies – robotics, genetic engineering, and nanotechnology – pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once – but one bot can become many, and quickly get out of control.
…Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science’s quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.”
“We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don’t believe so, but we aren’t trying yet, and the last chance to assert control – the fail-safe point – is rapidly approaching. We have our first pet robots, as well as commercially available genetic engineering techniques, and our nanoscale techniques are advancing rapidly. While the development of these technologies proceeds through a number of steps, it isn’t necessarily the case – as happened in the Manhattan Project and the Trinity test – that the last step in proving a technology is large and hard. The breakthrough to wild self-replication in robotics, genetic engineering, or nanotechnology could come suddenly, reprising the surprise we felt when we learned of the cloning of a mammal.”
He closes his essay saying:
“Thoreau also said that we will be “rich in proportion to the number of things which we can afford to let alone.” We each seek to be happy, but it would seem worthwhile to question whether we need to take such a high risk of total destruction to gain yet more knowledge and yet more things; common sense says that there is a limit to our material needs – and that certain knowledge is too dangerous and is best forgone.
Neither should we pursue near immortality without considering the costs… A technological approach to Eternity – near immortality through robotics – may not be the most desirable utopia, and its pursuit brings clear dangers. Maybe we should rethink our utopian choices.”
Richard Eckersley presents another view that counters Kurzweil’s, one focused less on the scientific dangers and more on the threat to human values:
“Why pursue this (Kurzweil’s) future?…The future world that Ray Kurzweil describes bears almost no relationship to human well-being that I am aware of. In essence, human health and happiness comes from being connected and engaged, from being suspended in a web of relationships and interests—personal, social and spiritual— that give meaning to our lives. The intimacy and support provided by close personal relationships seem to matter most; isolation exacts the highest price. The need to belong is more important than the need to be rich. Meaning matters more than money and what it buys.
We are left with the matter of destiny: it is our preordained fate, Kurzweil suggests, to advance technologically “until the entire universe is at our fingertips.” The question then becomes, preordained by whom or what? Biological evolution has not set this course for us; is technology itself the planner? Perhaps it will eventually be, but not yet.
We are left to conclude that we will do this because it is we who have decided it is our destiny.”
Joy and Eckersley powerfully warn against pursuing a Kurzweil-type future. We may come to have the technical ability to achieve machine-like capacities; does that mean we should? This technological future, they argue, though perhaps possible, is not therefore preferable. The technologies Kurzweil speaks of are dangerous, presenting a type of threat we have never before faced as humans – and the risks of pursuing them far outweigh the benefits.
If we continue down Kurzweil’s path, we may achieve remarkable things so far conceived mostly in science fiction – a future where we are no longer humans at all, but artifacts of our own technological creations. But if we heed Joy’s and Eckersley’s views, we would practice saying enough is enough: we would say we have sufficient technology to live reasonably happy lives, and that by encouraging the development of these new technologies we might be opening a Pandora’s box that could leave humanity in ruins forever. We would say: yes, there is tremendous promise in these technologies, but there is even greater risk. We need to hold fast to the human values of restraint and temperance, lest we find ourselves equipped with the capacity to alter ourselves and the world, yet unable to handle or control that immense power.
So the camps seem to be these: Kurzweil believes technology reduces suffering, and that we should pursue it for that reason to any end – even until we are no longer human, but become technology ourselves. (Indeed, he feels we have a moral imperative to pursue these technologies for this reason.) Joy believes there are too many dangers in this type of future. And Eckersley asks: why would we want this future, anyway? I am left thinking about a number of things:
First, I am intrigued by Kurzweil’s unwavering love for technology, because it seems to me that technology has both strengths and weaknesses, and that such faith in a technological system greatly inflates technology’s capacity to cure all the world’s problems while overlooking its very real drawbacks. I wonder about putting so much faith in technology to solve all our ills and replace all our deficiencies. Is it really such a healing, improving force? Could we really achieve this technological utopia without some potentially disastrous consequences?
I also can’t help but wonder what role technology, as a force of its own, plays in this debate. People often worry about rebellious robots or artificially intelligent beings taking over; but is technology already, in a sense, guiding us – in control of us, rather than us controlling it? It seems harder and harder to resist technology’s grip, even as we face a future that, as Joy says, “no longer needs us.” Isn’t there something a bit strange about humans contemplating – and preferring – a post-human future? Does it indicate, in some sense, that technology has already overtaken man and is steering us down a path until it fully reigns supreme?
I am also left wondering, partly for that reason, whether it is possible to forgo the development of certain technologies, as Joy suggests, given our track record and inclinations toward the use of technology. With technology it always seems that if we have the capacity to do something, we inevitably will. Is it possible to stop the development of a technology, especially if that means also giving up some of its potential benefits? And if we don’t draw the line at genetic engineering, nanotechnology, and artificial intelligence, does that mean we will never actually draw a line? What does that say about human nature – that we forever seek this sort of “technological progress,” even when it robs us of what we currently conceive of as making us human? Are there core values to being human that will persevere, or are we really just a fleeting blip in the evolutionary climb toward becoming transhumans?
The ideas Kurzweil puts forth as his vision of our future really force one to consider what about being human seems worth holding onto (if anything). And even if his predictions don’t materialize in the way or time frame he anticipates, it seems undeniable that we are at a critical turning point in our species’ history. Indeed, the decisions we make now about these fundamentally “reshaping” technologies will profoundly affect generations to come – generations whose lives will be radically different depending on which roads we go down with genetic engineering, artificial intelligence, and nanotechnology.
But making these choices is not strictly a “technical” task, concerned merely with what we are able to, technologically speaking, accomplish; rather, it really requires us to decide our core beliefs about what makes a good life; to consider what is worth risking about being human beings, not only to alleviate suffering but also to engage in these “self-enhancing technologies” that will supposedly make us stronger, smarter, and less destructible; and to grapple with these fundamental questions of life and death that are not technological issues but rather metaphysical ones. Indeed, it’s no small philosophical feat to reshape and change the human genome; it’s no small feat to create artificial beings smarter than human beings; and it’s no small feat to eradicate what has, since the birth of mankind, defined our human experience: the fleeting nature of life, and the inevitability of death. Taking this power and control into our own hands requires not just the capability to achieve extended life from a technical standpoint, but a completely redefined scope of who we are, what we want, and what our purpose is on this planet.
There are questions, of course, about the moral decision of living forever. What would we do about overpopulation – would we stop procreating completely? Does a person living now have more of a right to be alive than a person who hasn’t been born yet? Where would we derive purpose in life if there were no end point? These would all be real questions to consider in this type of scenario, and they would require real reflection. With a reshaped experience of what it means to be human, we would be required to make decisions about our lives that we’ve never even had to consider making before.
But if Kurzweil is correct, then never have we had such power over our own destinies. In Kurzweil’s world, there is no higher power or God determining our life course, nor is there an afterlife or Heaven worth gaining entrance to. The biological and technical underpinnings of life are, in his view, manipulable at will; we can defy what some might call our “God-given” biology and become our own makers. We can even make our own rules. And along with that power would come the responsibility to answer some very weighty philosophical questions, for nothing else would be determining those answers for us.
My question is, do we really want that responsibility? Are we really equipped to handle that type of power? And furthermore, in getting caught up in all the ways these technologies could “enhance” our lives – in the idea that all technological innovation is definitively progress – are we becoming less and less able to step back and ask the philosophical and ethical question of whether this is really what a good life looks like?
When you envision our technological future, do you share Kurzweil’s dreams? Joy’s fears? Eckersley’s questions about our human values being lost?
Should we place limits on certain technologies, given the dangers they present? Are there any types of technologies we simply shouldn’t pursue?

Want to read more about Kurzweil and The Singularity? Here are some articles:
Reinventing Humanity: The Future of Machine-Human Intelligence
Profile on Kurzweil in Wired Magazine
The Singularity Is Near website
The Singularity, Explored: Q & A with Michael Vassar