“There is a sacred realm of privacy for every man and woman where he makes his choices and decisions – a realm of his own essential rights and liberties into which the law, generally speaking, must not intrude.” – Geoffrey Fisher
In the era of social networking, the Internet, and personal information made public everywhere, there is no question that we are losing privacy left and right. One might say that the last bastion of privacy – our own thoughts – is all we have left to hold onto (although some people, driven by the age of Twitter, have taken to publishing all of those, too).
But a segment on 60 Minutes last year brought to light that even these private thoughts are up for grabs, with brain scanning technologies “making it possible for the first time in human history to peer directly into the brain to read out the physical makeup of our thoughts, some would say, to read our minds.” Functional Magnetic Resonance Imaging (fMRI for short) enables us to scan and see the metabolic activity inside the brain: by measuring changes in blood flow and oxygenation and linking them with certain mental states, researchers can begin to identify where thoughts occur, and what they might look like. The implications – for the law, for our notions of privacy, for our conceptions of free will – are profound. “We all take as a given that we’ll never really know for sure, that the content of our thoughts is our own. Private, secret, unknowable by anyone else,” says 60 Minutes correspondent Lesley Stahl. “Until now, that is.”
“Every era has its own defining drug.” – Margaret Talbot
With the high availability of so-called “cognitive enhancing drugs” like Ritalin, Adderall, and Provigil on college campuses, students everywhere are facing the choice of whether or not to take non-prescribed medications to help them “perform better” in school. Studies show that anywhere between 20 and 35% of college students have used one of these medications without a prescription during their college careers, but an informal survey would likely reveal an even higher percentage, as the use of these medications is on the rise. Many claim these drugs help them concentrate, study longer, and juggle more tasks by creating more productive hours in the day. Others rely on them in a crunch – during midterms, finals, or the night before a big test – when the clock is ticking, assignments are due, and there doesn’t seem to be enough time, or brain power, to get everything that needs to get done, done.
The question of whether to use these “cognitive enhancing drugs” poses many ethical concerns – some rooted in the very immediate and direct impact of these drugs on the developing brains of young people, and some rooted more in what these drugs say philosophically about the direction in which our society is headed. And with the rate of use having tripled in the past ten years, and dozens of new cognitive stimulants currently in the pharmaceutical pipeline, it seems an important issue to examine. Should we embrace the use of these drugs, in the hope that they will make us smarter, more efficient, and more productive? Or should we be wary of using them, concerned with the risks they pose not only to our brains, but to our personal and societal values?
Who decides what’s right, what is socially appropriate, and what is societally acceptable when it comes to the use of things that alter your brain function?
It’s interesting to consider how we decide which drugs are deemed socially acceptable and which are not. We condone (not only condone, but actively rely on) certain substances like caffeine, guzzling down cups of coffee and cans of Red Bull without a second thought about their “ethical implications.” We condemn marijuana as illegal but allow a much more dangerous drug – alcohol – to be consumed at will after the age of 21. We think it’s permissible to use coffee and chain-smoked cigarettes to pull an all-nighter, but would gape at someone snorting a line of cocaine for the same reason. How are these lines we draw – the ones that call one brain-altering substance taboo and another completely acceptable – determined? Do they involve a careful assessment of effects on the brain? A standardized measure of risks? Or do they come from some subjective evaluation grandfathered in by social forces?
My previous post about radical life extension presented an extreme picture of the future, where humans are able to live longer and longer as a result of melding with machines, eventually even becoming machines themselves. It’s a fascinating future to consider, but also gets one thinking: are Kurzweil’s visions of immortality even close to being feasible, given the current state and direction of today’s technological advancements? When it comes, realistically, to life extension technologies, where do we really stand today?
There’s perhaps no group better suited to answer this question than the people of the Methuselah Foundation, a non-profit organization founded by David Gobel that supports Aubrey de Grey’s SENS research and is dedicated to enabling humans “to live longer, better and wiser, by defeating age-related disease and suffering.” I had the privilege of speaking with Roger Holzberg, the Chief Marketing Officer and Creative Director of the Methuselah Foundation, about the core philosophies of the foundation and the promising research it is involved with. I asked Mr. Holzberg: what areas of life extension are available now, and in our short-term future? And what fundamentally drives the foundation to seek these life extension solutions?
“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” – Vernor Vinge, The Coming Technological Singularity, 1993
Futurist and Inventor Ray Kurzweil has a plan: He wants to never die.
In order to achieve this goal, he currently takes over 150 supplements per day, eats a calorie-restricted diet (a technique shown to prolong lifespan in animal studies), drinks ionized water (a type of alkalinized water that supposedly protects against free radicals in the body), and exercises daily, all to promote the healthy functioning of his body. At 60 years old, he reportedly has the physiology of a man 20 years younger.
But the human body, no matter how well you take care of it, is susceptible to illness, disease, and senescence – the process of cellular change that results in that little thing we all do called “aging.” (This cellular process is why humans are physiologically unable to live past the age of around 125.) Kurzweil is well aware of this, but has a solution: he is trying to live long enough in his human body for technology to reach the point where man can meld with machine, so that he can survive as a cyborg with robotically enhanced features – survive, that is, until the day he can upload his consciousness onto a hard drive, enabling him to “live” forever as bits of stored information; immortal, in a sense, as long as he keeps a copy of himself in case the computer fails.
What happens if these technological abilities don’t come soon enough? Kurzweil has a back-up plan. If, for some reason, this mind-machine blend doesn’t occur in his biological lifetime, Kurzweil is signed up with the Alcor Life Extension Foundation to be cryonically frozen and kept in Scottsdale, Arizona, among the approximately 900 others (including famous baseball player Ted Williams) currently stored there. At Alcor, he will “wait” until the day scientists discover how to reanimate him – and not for long, Kurzweil believes: that day will come in about 50 years.
There is something primal about our need for nature — for time outdoors, for sunshine, for fresh air. Psychologist Paul Bloom writes, “Our hunger for the natural is everywhere…People like to be close to oceans, mountains, and trees. Even in the most urban environments, it is reflected in real estate prices: if you want a view of the trees of Central Park, it’ll cost you. Office buildings have atriums and plants; we give flowers to the sick and the beloved and return home to watch Animal Planet and the Discovery Channel…And many of us seek to escape our manufactured environments whenever we can — to hike, camp, canoe, or hunt.”
Yet on the heels of a study that just came out last week saying that teenagers spend up to 7.5 hours per day on digital devices — up an hour from the previous year — one wonders what is happening to our individual relationships to the natural world as a result of technology. My previous post explored some of the broad ethical relationships between technology, human behavior, and the environment; today, I’m featuring an article which raises an important and related question: Is nature important to our happiness? And if so, then why do we spend so much time attached to our technologies, and detached from nature?
In his article “Natural Happiness,” for The New York Times Magazine’s Green Issue, Paul Bloom, a psychologist at Yale University, asks us to consider these questions. Read Bloom’s article, ahead.
Each year, we lose over 38 million acres of rainforest to deforestation; rainforests used to cover 14% of the earth’s surface; now they cover less than 6%, and they shrink further each year. The 800 million-plus cars in the world emit carbon at such high levels that they degrade the atmosphere and contribute to drastic changes in our weather patterns. The trash we discard – including, of course, man-made non-biodegradable plastics – accumulates in landfills throughout the world and leaches toxic chemicals into the land and water, greatly affecting the survival of animal and plant life.
And in the pursuit of feeding the ever-growing world population, agricultural biotechnologists are altering the genetic make-up of food and plants – splicing genes from fish into the genes of tomatoes, for example – to increase the amount that we can grow and the “nutrient content” crops possess: a type of species cross-breeding that has heretofore never occurred, and never would occur, in nature.
Thinking about the modern technologies of the past 100 years, one can’t help but see how radically they have transformed our planet. The cars we drive, the massive amounts of waste we discard, the agricultural techniques we employ, among many other examples: each has led to environmental aftereffects, such as climate change and the depletion of natural resources, that have altered the biosphere in which we live in very significant ways.
“Reflections” is a new category of posts aimed at engaging discussion about broader issues in technology and ethics. This first “Reflections” post, on techno-optimism and techno-pessimism, asks you to consider: “What are your general views towards technology, and how did you arrive at those views?”
Many of us have opinions about technology that can be classified along the spectrum of being a “techno-optimist” or a “techno-pessimist” — categorizations that reflect our general attitude about our technological past, present, and future.
When you think about the way in which technology has impacted our world—from the environment, to our medical achievements, to human relationships — are you generally optimistic or pessimistic about its influence?
Are you a techno-optimist? Do you think technology has consistently improved our lives for the better, and that it will continue to do so into the future? When you consider problems in society, or even problems with current technology, do you think that the solution to those problems is more technology?
Or would you characterize yourself as a techno-pessimist? Are you generally concerned with the impact that modern technology has had on humanity, believing that it has created just as many problems as solutions? Do you think that seeking out more technology is likely to bring about new problems, because technology inevitably introduces unforeseen consequences and dangers? Do you think that since technology creates so many of its own problems, the answer to human progress often lies in a reduction of technological dependence, rather than an expansion of it?
In the 2004 film I, Robot, Will Smith’s character Detective Spooner harbors a deep grudge against all things technological — and turns out to be justified after a new generation of robots engages in a full-out, summer blockbuster-style revolt against their human creators.
Why was Detective Spooner such a Luddite – even before the robots’ vicious revolt? Much of his resentment stems from a car accident in which a robot saved his life instead of a little girl’s. The robot’s decision haunts Smith’s character throughout the movie; he feels the decision lacked emotion, and what one might call ‘humanity’.
“I was the logical choice,” he says. “(The robot) calculated that I had a 45% chance of survival. Sarah only had an 11% chance.” He continues, dramatically, “But that was somebody’s baby. 11% is more than enough. A human being would’ve known that.”
But what, exactly, is it that the human being would’ve known? And how would they have known it?
A time when, if you wanted to get in touch with someone, you had to leave a message and (gasp!) wait until they returned home to call you back?
A time before digital contact lists, when you memorized your friends’ phone numbers?
A time when, if you planned to meet someone at a specific time and they were late, you’d just have to hang around until they arrived?
A time when you might have sat for a moment in silence, read a book without interruption, or chatted with someone nearby, instead of constantly grabbing for your phone to send a text or check e-mail?
It’s hard to imagine, but just give it a try: can you remember life before you had a device with you, at all times, everywhere you go?
Today’s post is about the gadget that has wormed its way into the lives of over 80% of Americans, and explores what it’s like to live in a world where quiet, un-connected moments are few and far between, increasingly replaced by the twitter of texts and cell phone chatter. Guest poster and SCU student Chris Kelly explores this ever-present issue in his article Smartphones Distract From Reality, writing that cell phones are “changing the way we think about free time.” Chris’s article, ahead.
Did you know that with $399 and a tube of your saliva, you can find out your genetic predispositions for disease, personality traits, and what medications might work best for you? Or with $149, you can check out your genetic family heritage? How about that for less than $1,000, you will soon be able to get your entire genome mapped?
And what does this mean to you? It seems fair to say that currently, most people don’t concern themselves with their genetic profiles in their day-to-day lives. Sure, we read about genetics in the media: which genes are linked with which traits, what advancements are being made in medicine as genetic knowledge grows. But our society certainly doesn’t conduct itself like the science-fiction movie Gattaca, where each person is branded with his or her genetic likelihoods from birth and assigned societal roles accordingly. We are generally oblivious to our own genetic profiles, and pay selective attention to findings about genes mostly when faced with a pressing health problem. For the most part, we carry on our lives with little knowledge of our own genetic makeup and what that information might tell us about ourselves.
Here’s a deal for you: If you can figure out how to control the bubble size in carbonated beverages, or can find a novel approach to protecting corn from insect damage, the website Innocentive will broker a deal where your idea could be purchased for $20,000.
Or maybe chemical compounds aren’t your thing? Head over to Amazon.com’s Mechanical Turk program, and make $1 for identifying whether the person in each of 100 photos is male or female, or earn 5 cents for every city and country you match with the correct overseas postal codes.
Still need more work? If you successfully pass the interview process at LiveOps.com (also known as the “contact center in the cloud”), you could soon be a call-center employee taking someone’s drive-thru order for the Jack in the Box across town, all while sitting at home on your couch, connected to the drive-thru module via your laptop.
Each of these is an example of Ubiquitous Human Computing, a term coined by Harvard Law Professor Jonathan Zittrain to describe the trend of networking and distributing mindpower as a fungible resource on the web.
Last week, Zittrain came to Santa Clara University to speak on this topic in a lecture entitled “Minds For Sale”, where he dynamically discussed the myriad issues raised by this new wave of the internet.
Out of the many interesting topics Zittrain covered, a few ideas stood out to me:
“It’s not science fiction. Nowadays prospective parents can not only know the sex of their unborn child but also learn whether it can supply tissue-matched bone marrow to a dying sibling and whether it is predisposed to develop breast cancer or Huntington’s disease — all before the embryo gets implanted into the mother’s womb.” – Esther Landhuis
Have you heard of “designer babies”? Or perhaps you saw or read My Sister’s Keeper, a story about a young girl who was conceived through In Vitro Fertilization to be a genetically matched donor for her older sister with leukemia? The concept of selecting traits for one’s child comes from a technology called preimplantation genetic diagnosis (PGD), a technique used on embryos created during In Vitro Fertilization to screen for genetic diseases. PGD tests embryos for genetic abnormalities and, based on the information gleaned, gives prospective parents the opportunity to implant only the “healthy”, non-genetically-diseased embryos into the mother. But this genetic testing of the embryo opens the door to other uses as well, including selecting whether you have a male or female child, or even selecting specific features, like eye color. Thus, many ethicists wonder about the future of the technology, and whether it will lead to babies “designed” by their parents.
Today’s post is an exploration of the ethical issues raised by prenatal and preimplantation genetic diagnosis, written by Santa Clara Professor Dr. Lawrence Nelson, who has been writing about and teaching bioethics for over 30 years. Read on to examine the many ethical issues raised by this technology.
Be honest: how many other things are you doing right now?
Are you in the midst of responding to your e-mail, while casually browsing the web, scanning your friend’s most recent Facebook updates, chatting on Gchat, and mid-article on your favorite news site or blog?
Go ahead and count them: how many windows are open on your computer right now?
And what else are you doing? Are you listening to music, watching TV, or half-talking to a friend nearby? Is your cell phone within arm’s reach, ready to be answered the instant a text message or phone call arrives? Or perhaps you’re even reading this on your cell phone, on your way between classes or meetings, biding time while waiting for the next thing to require your attention?
No, this isn’t a post about Big Brother watching you; it’s about a term we all know too well: Multitasking. We have become, as writer Christine Rosen says, “mavens of multitasking,” glued to our technological gadgets, driven by our seemingly endless to-do lists of tasks. My post today asks, how have all the technologies we use – the cell phones, computers, PDAs, e-mails, and the like – accelerated the extent to which we multitask? And more importantly, what effect has it had on the way we live our lives?
“Help loving couples conceive a child! Seeking egg donors with a clear health history, GPA 3.6+ and above 1350 on SAT. Must play a musical instrument. $10,000 Compensation.”
Have you seen an ad like this in your local college newspaper? Chances are, if you leaf through the classifieds of any elite university’s newspaper, you’ll find one just like it. The advertisements, placed by couples or agencies looking for women to donate their eggs to help couples conceive through In Vitro Fertilization, appear in college classifieds across the country. They are notoriously featured at Ivy League schools, often targeting high-achieving women with superior grades and test scores, and offering anywhere from $5,000 to $50,000 for highly qualified donors. Many call for specific qualities in their donors: “Donor ideally has artistic skills, as intended mother is a talented oil painter and piano player,” reads one.
What happens when the pictures and content you post online for friends to see are also viewed by a potential employer?
The question has become particularly important in recent years, as photos, profiles, and online commentary are being factored into who gets hired – and fired – in the workforce.
Close to 50% of companies report doing background checks on candidates by searching through their online content, and report having rejected candidates after finding “provocative photographs,” “content about drinking or using drugs,” or even “poor communication skills” demonstrated on their online profiles. For recent college students joining the workplace, this is a particular problem, because they often have exactly this type of “unprofessional” content on their profiles from their time in school.
Here’s a challenge: can you read this whole post without getting distracted? Can you resist the urge to skim each paragraph for the “gist of it”, and instead read each sentence carefully, reflecting on its meaning, even thinking about how it might apply to your life?
Chances are this might take some work: if you are accustomed to reading on the web, you’ve likely also grown accustomed to the online reading style known as the “F-shaped pattern”: when you open a webpage, you read quickly from left to right across the top, then scan down the middle until you reach the bottom, absorbing a few main ideas but not truly engaging with any of them. It’s a quick and easy way to catch the major points, enabling you to get an overview of everything presented, perhaps giving you the sense of comprehension. But as the research shows, it’s likely that you are absorbing very little.
And when you’re websurfing, reading for entertainment, or perusing blogs, maybe it doesn’t matter if you’re just skimming. But as the internet is increasingly the source for all our content – the news we read, the research we do for work and school, the entertainment we enjoy– we must ask the question: how is the internet changing the way we read, and the depth with which we take in information? What are the implications for society if the deep, reflective thinking associated with reading is replaced by the “web-page graze”?
Nanotech Self-Assemblers. Genetically Engineered Offspring. Full Immersion Virtual Reality. Robots That Can Think.
It’s easy to dismiss many of these “future technologies” as the stuff of science fiction, existing only in the ‘advanced’ societies we’ve seen rendered in the movies. But Ray Kurzweil, famous futurist and author of “The Singularity Is Near,” believes we are on the precipice of a technological revolution in which nanotechnology, information technology, and artificial intelligence will, over the next few decades, develop at such a fast rate that the human race will soon be faced with a fundamentally restructured way of living. He declares that we are entering “an era in which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today – the dawning of a new civilization that will enable us to transcend our biological limitations and amplify our creativity.”
Welcome to The Technological Citizen! This blog is meant to promote awareness, reflection, and dialogue about ethical issues in modern technology. The discussions are ongoing, so please feel free to share your views on these topics in the comments sections of each post, regardless of when it was posted. Thanks for stopping by!