“There is a sacred realm of privacy for every man and woman where he makes his choices and decisions – a realm of his own essential rights and liberties into which the law, generally speaking, must not intrude.” – Geoffrey Fisher
In the age of social networking, the Internet, and personal information made public everywhere, there is no question that we are losing privacy left and right. One might say that the last bastion of privacy – our own thoughts – is all we have left to hold onto (although some people, driven by the age of Twitter, have taken to publishing all of those, too).
But a segment on 60 Minutes last year brought to light that even these private thoughts are up for grabs, with brain scanning technologies “making it possible for the first time in human history to peer directly into the brain to read out the physical make up of our thoughts, some would say, to read our minds.” Functional Magnetic Resonance Imaging (fMRI for short) enables us to scan and see the metabolic activity inside the brain, allowing researchers to begin to identify where thoughts occur, and what they might look like, by measuring changes in blood flow and oxygenation in the brain and linking them with certain mental states. The implications – for the law, for our notions of privacy, for our conceptions of free will – are profound. “We all take as a given that we’ll never really know for sure, that the content of our thoughts is our own. Private, secret, unknowable by anyone else,” says 60 Minutes correspondent Lesley Stahl. “Until now, that is.”
“Reading Your Mind,” the 60 Minutes segment that aired last March, walks us through just how these brain scans are being used for “thought identification,” and raises some interesting questions about how these new technologies might be used in the future. Below, I’ll bring up some of the thoughts it raised for me:
How does it work?
To summarize some of the video: Marcel Just’s work shows the capability of fMRI technology to identify the areas of the brain associated with thinking about certain objects. For example, you could show a subject a series of pictures – a screwdriver, an igloo – and have the subject think about those objects; then, when you present a pair of objects and ask the subject to think about one of them, the computer can identify which object the subject was thinking of by tracking which areas of the brain light up.
When you think about an object like a screwdriver, similar parts of the brain are likely to fire — the parts implicated in holding a tool, the parts associated with what you use a screwdriver for, the parts implicated in twisting an object, and so on. By piecing these bits of data together, the computer (and thus, the researcher) can identify which object you were thinking about by seeing which neurons fire, and where.
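As a rough illustration of the decoding idea, here is a toy sketch with synthetic data – not Just’s actual method or parameters, just a simple nearest-centroid classifier standing in for the real machine-learning pipeline. Each “trial” is a vector of voxel activations, and a new trial is labeled by the object whose average training pattern it most resembles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trial is a vector of voxel activations.
# "screwdriver" trials center on one characteristic pattern, "igloo" on another.
n_voxels = 50
screwdriver_pattern = rng.normal(0, 1, n_voxels)
igloo_pattern = rng.normal(0, 1, n_voxels)

def simulate_trials(pattern, n=20, noise=0.5):
    """Noisy repetitions of a characteristic activation pattern."""
    return pattern + rng.normal(0, noise, (n, n_voxels))

train = {"screwdriver": simulate_trials(screwdriver_pattern),
         "igloo": simulate_trials(igloo_pattern)}

# Nearest-centroid decoding: average the training trials per object,
# then label a new trial by whichever centroid it is closest to.
centroids = {label: trials.mean(axis=0) for label, trials in train.items()}

def decode(trial):
    return min(centroids, key=lambda lbl: np.linalg.norm(trial - centroids[lbl]))

new_trial = screwdriver_pattern + rng.normal(0, 0.5, n_voxels)
print(decode(new_trial))
```

The real studies used far more sophisticated classifiers and real voxel data, of course; the point of the sketch is only that “reading” a thought here means matching a noisy activation pattern against previously learned templates.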
These studies are indeed remarkable. Though the thoughts they can identify are exceedingly basic – showing that a person is picking “screwdriver” from the options of “screwdriver” and “igloo” is a far cry from reading a complex emotion like anger, motive, or jealousy – they certainly open the door to some interesting issues in the field of “reading minds.” Some implications of this technology are still theoretical, depending on how advanced our technologies get; others are much more immediate, implementable now or at least in the very near future.
So what are the current implications of this work? One of the parts of the segment I found most fascinating is considering the implications of thought recognition in the court of law. In his article “The Brain on the Stand,” Jeffrey Rosen elaborated on some potential applications as well:
-One of the ways this technology could be used is to identify “recognition” patterns that might implicate someone in committing a crime. For example, if you can prove that a person is familiar with the scene of a crime or with a murder weapon by tracking which parts of their brain fire when they are exposed to these things – because the area of the brain that lights up with “recognition” is different from the area that lights up in a novel situation – you might be able to prove that they were involved with the crime. As mentioned in the video, you might be able to tell if someone has been in an Al Qaeda training camp, perhaps by exposing them to photos of the camps and seeing what happens in their brains; or perhaps you could show them a list of names and see if their brain “lights up” with recognition. In fact, a case of this very technique was reported in India, where a woman was convicted and sentenced after an EEG allegedly showed she was familiar with the circumstances around the poisoning of her ex-fiancé.
-Another application of this technology is that it could be used in a line-up scenario, allowing a witness to scan the potential criminals and have the brain scan identify if he or she recognizes anyone, and use the brain recognition patterns to identify the criminal — even if the witness can’t consciously remember who the criminal is or what they look like. Rosen explains, “The brain stores memories both explicitly and implicitly. Assemble a standard police line up and a person may not be able to explicitly remember who was the attacker in question; but perhaps the brain “recognizes” the face on some implicit level, and lights up when looking at one of the attackers and none of the others. This method literally reads a person’s mind, gathering information that the victim may not have even been able to explicitly recall on his or her own.”
-Another potential use? Advanced versions of lie detection are a big area being pursued. “Current lie detectors use biological cues to assess if someone is lying: pupil dilation, stress signals, and the like,” Rosen explains. “The future of lie detection, some think, will be peering into the brain. It might light up differently in the brain if you committed the action than if you watched it happen.” Indeed, two companies outlined in the video, Cephos and No Lie MRI, have already capitalized on this trend. And who would stop at criminal defense? “I have two teenage daughters,” jokes Paul Root Wolpe, the Emory ethicist interviewed in the video. “I come home one day and my car is dented and both of them say they didn’t do it. Am I going to be able to drag them off to the local lie detection agency and get them put in a scanner?”
All of these techniques, Rosen says, could also lead to pre-emptive screening – if you could look into someone’s brain and see that they have “reduced glucose metabolisms, faulty amygdalas, disinhibition in the prefrontal cortex,” Rosen says, you might be able to better predict criminal behavior. “You could require counseling, surveillance, G.P.S. transmitters or warning the neighbors,” Henry Greely adds, in Rosen’s article. “None of these are necessarily benign, but they beat the heck out of preventative detention…Even with today’s knowledge, I think we can tell whether someone has a strong emotional reaction to seeing things, and I can certainly imagine a friend-versus-foe scanner. If you put everyone who reacts badly to an American flag in a concentration camp or Guantánamo, that would be bad, but in an occupation situation, to mark someone down for further surveillance, that might be appropriate.”
Sound a little too much like Big Brother yet? “I always tell my students there is no science fiction anymore,” Wolpe said. “All the science fiction I read in high school, we’re doing.”
Brain scans may be used in a variety of ways in the court of law; but more deeply, they raise some very important questions about the fundamental ways we understand ourselves. Will these brain scanning technologies enable us to see into the brain and predict, explain, and determine everyone’s behavior? If we are able to determine that we are just the biology of our brains–and not in control, in a sense, of what we do– then does that mean we don’t possess free will? If we are just the biological substrates of our thoughts, are we really, in any meaningful philosophical sense, responsible for our actions?
In their article “For the Law, Neuroscience Changes Nothing and Everything,” Joshua Greene and Jonathan Cohen from Princeton University take the view that as neuroscience uncovers more and more about the inner workings of the mind, these technologies will provide us with a biological explanation for all human behavior, and that our conceptions of ourselves will be redefined as a result:
“At some time in the future,” they write, “we may have extremely high-resolution scanners that can simultaneously track the neural activity and connectivity of every neuron in a human brain, along with computers and software that can analyze and organize these data. Imagine, for example, watching a film of your brain choosing between soup and salad. The analysis software highlights the neurons pushing for soup in red and the neurons pushing for salad in blue. You zoom in and slow down the film, allowing yourself to trace the cause-and-effect relationships between individual neurons – the mind’s clockwork revealed in arbitrary detail. You find the tipping-point moment at which the blue neurons in your prefrontal cortex out-fire the red neurons, seizing control of your pre-motor cortex and causing you to say, ‘I will have the salad, please.’”
Greene and Cohen continue:
“At some further point this sort of brainware may be very widespread, with a high-resolution brain scanner in every classroom. People may grow up completely used to the idea that every decision is a thoroughly mechanical process, the outcome of which is completely determined by the results of prior mechanical processes. What will such people think as they sit in their jury boxes? Suppose a man has killed his wife in a jealous rage. Will jurors of the future wonder whether the defendant acted in that moment of his own free will? Will they wonder if it was really him who killed his wife rather than his uncontrollable anger? Will they ask whether he could have done otherwise? Whether he really deserves to be punished, or if he is just a victim of unfortunate circumstances?
We submit that these questions, which seem so important today, will lose their grip in an age when the mechanical nature of human decision-making is fully appreciated. The law will continue to punish misdeeds, as it must for practical reasons, but the idea of distinguishing the truly, deeply guilty from those who are merely victims of neuronal circumstance will, we submit, seem pointless.”
Indeed, with advances in our understanding of neurobiology, and in our ability to explain certain thoughts and behaviors based on activity in the brain, some predict that a new type of defense argument will emerge: “It wasn’t me, ladies and gentlemen of the jury. My brain made me do it” – in which a person is no more responsible for his or her actions than a car with faulty brakes is for an accident, says Stanford neuroscientist Robert Sapolsky. And from this deterministic perspective, Cohen and Greene extrapolate a much broader philosophical shift:
“Free will, as we ordinarily understand it, is an illusion.”
Now, Greene and Cohen’s argument may appear to take neuroreductionism to its extreme – but it is an extreme that many neuroscientists, rationalists, and “science-can-explain-everything-ists” make the jump to as well. What would this mean for our society, and for how we view ourselves? It might mean that people could blame their behaviors on faulty brain wiring; that we could predict bad behavior from bad brains; that a person isn’t any more responsible for their actions than they are for having a defective heart or malfunctioning kidneys. This, of course, would radically change the way we treat criminal behavior and the type of punishment we put forth, which, as Cohen and Greene argue, would have to shift from a retributivist approach (punishing someone because they deserve it, from the point of view of justice) to a consequentialist one (punishing someone to prevent them from committing more crimes, from the point of view of the utilitarian tradition):
“We maintain that advances in neuroscience are likely to change the way people think about human action and criminal responsibility by vividly illustrating lessons that some people appreciated long ago. Free will, as we ordinarily understand it, is an illusion generated by our cognitive architecture… At this time, the law deals firmly but mercifully with individuals whose behavior is obviously the product of forces that are ultimately beyond their control.”
Still, many challenge this presumption, saying that Cohen and Greene’s argument, along with any other that presumes behavior is caused solely by a mere brain state, confuses the direction of causation. Emotions and decisions are not necessarily caused by the brain, resulting in behavior that “is obviously the product of forces ultimately beyond their control,” but rather may be created and then manifested in the brain; in other words, if an area of my brain lights up because of a decision I make, it is because I made that decision, not because my brain made it for me. Rosen offers an example: “If you are told your mother has died,” he explains, “your dismayed comprehension of the fact, which is a subjective mental event, will cause an objective physiological change in your brain.”
Similarly, if someone commits murder out of rage, it may not be the brain that caused the rage, but rather a person who experienced rage and then decided to act on it. In his article “Does Neuroscience Refute Free Will?” the blog writer Lucretius elaborates:
“To say that we are victims of neuronal circumstances is to say that we are victims of ourselves. The underlying assumption is that we have no control over “neuronal circumstances,” just as we have no control over “external circumstances.” But this assumption (a newly bottled behaviorist assumption) entirely contradicts our knowledge that the brain is a self-organizing and self-regulating biological system, not merely a step in the transformation of some external stimulus to behavioral output.” In other words, the assumption is that we are not in charge of our own brains; that “our brains commit crimes” but “we remain innocent.” This division is unfounded: our choices may elicit neuronal firing, not the other way around.
Indeed, this jump – saying that because brain scans show thoughts taking place, they explain where those thoughts come from – could be viewed as unfounded. Brain scans and associated technologies, Matthew Crawford argues in his article “The Limits of Neuro-Talk,” don’t provide evidence of how a thought is taking place, just that it is taking place: “With such signs (as fMRIs), we do not have a picture of a mechanism. We have a sign that there is a mechanism.” In other words, we are seeing that the brain works in a given way, not how or why. Declarations of the denial of free will, considered under this paradigm, naturally feel a bit overzealous.
But for a neuroreductionist who assumes that the brain is a force beyond our own control – that we are pawns and our brains the players – the notion of free will is indeed an illusion. Is this perspective reliable? Are we really not in control of our thoughts and actions – simply automatons acting out the neuronal messages of our brains?
Many recent findings in neuroscience would challenge this view, showing in fact that the brain is highly amenable to conscious influence – that people’s conscious decisions have an effect on the way the brain functions and wires itself. Indeed, the discovery of neuroplasticity – that the brain can be rewired based on experience, and that new neural pathways can be formed and old ones deconditioned based on one’s choices and practices – puts a big thorn in this “everything is determined” perspective. In his book, “The Brain That Changes Itself,” Norman Doidge shows how much influence people have over their own brain patterns, citing research in which people overcome what were once thought to be unchangeable biological constraints – traumatic brain injuries, mental illness – by literally changing the structure and function of the neural networks in their own brains. He chronicles stories of stroke victims who regain use of their limbs by using conditioning techniques to rewire their brains, learning to control their limbs with new areas that weren’t subjected to neuronal death; people who overcome learning disabilities; and even people missing entire portions of their brain as a result of injury or disease who are able to build new neural networks that allow them to function. All these examples raise the question: if we can shape the workings of our own brains – if our brains are plastic, and amenable to biological restructuring based on conscious effort – then how could it be said that we don’t bear responsibility for the actions we decide to take? If we can control our brains, then aren’t we exercising a measure of free will?
To be sure, “neuroplasticity” is not a panacea for all brain impairments, and it does not mean that all activity of the brain can be shifted with conscious attention. There’s no denying that certain brain pathologies – a damaged amygdala, or a tumor pressing on a part of one’s brain – can lead to irrational, unpredictable, and sometimes violent behavior, and that a person in this position may not be responsible for their actions in a strict sense, in the same way that someone who is criminally insane elicits different treatment by the law. But does this mean that we should make the leap that Cohen and Greene make – that all behavior can be attributed to similarly uncontrollable brain activity? To say that everyone who acts out of anger is free of blame because their amygdala is overactive seems to confuse terms. Many people feel anger, and their amygdalas are activated accordingly, and still they don’t commit acts of violence or crime. Stephen J. Morse, professor of law and psychiatry at UPenn, adds: “Even if (one’s) amygdala made him more angry and volatile, since when are anger and volatility excusing conditions? Some people are angry because they had bad mommies and daddies and others because their amygdalas are mucked up. The question is: When should anger be an excusing condition?” “Brains do not commit crimes,” Morse says, making an interesting distinction. “People commit crimes.”
And still, one wonders: even if we do identify the brain as “at fault” for causing criminal action, why would that justify the behavior? One view is that whether it was the brain, a person’s upbringing, or the Twinkies they ate in excess, it doesn’t matter when it comes to responsibility for one’s actions. Morse says,
“So what if there’s biological causation? Causation can’t be an excuse for someone who believes that responsibility is possible. Since all behavior is caused, this would mean all behavior has to be excused.”
And the question remains: do these pictures of the brain provide a biological excuse for behavior, or merely a biological explanation of it?
How much do we really know?
The questions of free will and consciousness are deep philosophical debates that have been taking place for millennia, and will likely not soon be resolved. But with new technologies, whether they be a telescope or a brain scan, often come new views, perspectives, and philosophies about the world. Trying to place what these technological findings tell us about the physical world into our philosophical frameworks is indeed a fascinating undertaking.
Are we in control of our own thoughts, actions, and lives, or is there some force that determines everything for us, regardless of our motives? Is every action we take biologically predetermined, or do we have a say in which paths we go down? These questions are somewhat timeless, and yet it’s interesting to see how technologies have reframed the debate. To be sure, determinists believe that advances in neuroscience give them strong ground to stand on – that everything there is to explain about human beings will be explained through our understanding of the brain. They see fMRIs as giving us access to what we currently conceive of as the central part of our being – our brains – and project that these images will likely be the key to answering many questions about why we are the way we are, and why we act the way we do. And why wouldn’t we, with the ability to see into our own brains, feel like we’ve finally gained access to the true answers of existence? We once thought the heart to be the center of the human experience, and perhaps considered someone like Shakespeare most likely to articulate our human purpose. Now, we live in the age of the brain, where we expect technology to show us, as Marcel Just says, “the essence of who we are.”
But new technologies often have a way of convincing us that we have finally figured out why things work the way they do, and we seem to cling to each technological development as if it is finally the one that will offer the answers. An important question I would pose would be: How much can we really learn about a person from these technologies? And what happens if we assume we can know more than we really do?
Limitations to fMRI Technology
Though these scans are certainly amazing technologies that enable much fruitful research, we currently seem to overstate their ability to reliably identify thoughts and patterns in the brain. The scans are limited in and of themselves, from a technical standpoint, writes Norman Doidge: “The current generations of brain scans…detect bursts of activity that last one second in thousands of neurons. But a neuron’s electrical signal often lasts a thousandth of a second, so brain scans miss an extraordinary amount of information.”
In his article “False Signals Cause Misleading Brain Scans,” NPR’s Jon Hamilton adds that there are a number of often undisclosed deficiencies in these scans. He interviews neuroscientist Chris Baker, who says, “The problem with functional imaging is that the signals we’re trying to get at are quite weak, and there’s a lot of noise.” And Hamilton adds: “The “noise” is in the form of false signals. These can come from the scanning equipment itself, but a lot of it comes from the person being scanned. Every heartbeat affects the flow of blood, which changes the signal. Every tiny head movement blurs the image.”
Bearing out this point, the scans can often pick up signals that are spurious. To showcase this, one study at Dartmouth put a dead salmon in an fMRI scanner, showed it pictures of emotional situations just as researchers would a human subject (for humor’s sake), and recorded the results. Interestingly, the fMRI “picked up on signals” from the dead salmon’s “brain activity” – when, of course, there was no activity at all. “By complete, random chance, we found some voxels that were significant that just happened to be in the fish’s brain,” the researcher Craig Bennett said. “And if I were a ridiculous researcher, I’d say, ‘A dead salmon perceiving humans can tell their emotional state.’”
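The salmon result is a textbook multiple-comparisons problem, and it is easy to reproduce with synthetic data: test enough voxels of pure noise at an uncorrected threshold, and some will look “significant” by chance. Here is a minimal sketch (all numbers are invented for illustration, not the actual study’s parameters):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical numbers: 10,000 voxels of pure noise, 20 measurements each
# (think of a dead salmon "viewing" emotional photographs). We t-test each
# voxel against zero and count how many clear an uncorrected p < 0.05 bar.
n_voxels, n_scans = 10_000, 20
noise = rng.normal(0, 1, (n_voxels, n_scans))

# One-sample t statistic per voxel.
t = noise.mean(axis=1) / (noise.std(axis=1, ddof=1) / np.sqrt(n_scans))

# Two-sided critical value for p < 0.05 with 19 degrees of freedom is ~2.093.
T_UNCORRECTED = 2.093
false_positives = int(np.sum(np.abs(t) > T_UNCORRECTED))
print(false_positives)  # roughly 5% of 10,000: hundreds of "active" voxels in pure noise

# A far stricter threshold (roughly Bonferroni-level for this many tests)
# wipes out nearly all of them.
T_CORRECTED = 5.1
print(int(np.sum(np.abs(t) > T_CORRECTED)))
```

The point Bennett was making is exactly this: run thousands of uncorrected statistical tests and “significant” voxels appear even where there is no brain activity at all, which is why corrected thresholds matter.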
So what does all of this mean? It means that there’s a lot of white noise, static, and unmeasurable (or perhaps even unknowable) activity in the brain that gets overlooked, dismissed, or unprocessed in the duration of a brain scan. It means that these pretty pictures we see in magazines and articles may not be as easily color coded as we think they are. It means that while we may be able to find the correlation for “screwdrivers” in the brain, we may be a long way from identifying that someone committed murder, that someone harbors terrorist-ideologies, or that someone “subconsciously” recognizes a burglar or rapist in a line-up.
But more importantly, it raises the question of whether these technologies – or any technologies – are really capable of answering all our questions. We seem to have a tendency to favor reductionist explanations, and once we hear that science has proven something to be true, to shut out other explanations for a phenomenon. This tendency can be a trap, particularly when our science is not as good as we claim it to be. And it raises a further question: are there aspects of being human that science simply cannot explain?
Judging from the trend of neuroscience, nothing is off-limits for a scientific explanation – anger, romantic love, and even belief in God have been chalked up to nothing more than neurons firing in certain parts of the brain. (Indeed, every month, dozens of studies are published claiming to have found the areas of the brain responsible for various phenomena – “‘The God Spot’ is found in the brain,” reads one article; “Watching New Love As It Sears In The Brain,” reads another.) A person falling in love is reduced to surging dopamine in the caudate nucleus, and belief in God is described as excessive firing of neurons in the temporal lobe. Excessive rage that leads to murder is attributed not to some intangible motive of retribution or anger but to an overactive emotional center in the brain. And we tend to be transfixed by these findings, even if they are, when examined closely, extremely broad generalizations based on a limited amount of data. We have a tendency to find explanations that invoke terminology about the brain more convincing than other explanations, simply because, as Rosen’s article points out, “we have prettier pictures and it appears more scientific.” In his article “The Brain on the Stand,” Rosen describes our tendency to treat brain findings as more meaningful than they are – a habit dubbed “brain overclaim syndrome.”
The bright lights and scientific-looking pictures appeal to our inner rationalists; we believe articles that say things have been proven by brain scans simply because it all seems so convincing, so solid, so technologically sound. A fascinating study published in the Journal of Cognitive Neuroscience showed that people are “seductively allured” by neuroscientific explanations, significantly more likely to believe data when it is preceded by the words “brain scans indicate,” even when the data or research findings are very obviously faulty or illogical. (In other words, present a person with two sets of data that say exactly the same thing, even if it’s not very believable, and they are much more likely to believe the data “proven” by brain scans.)
Crawford writes, “These findings suggest that we are culturally predisposed to surrender our own judgment in the face of brain scans. More generally, we defer to the mere trappings of ‘science.’” We automatically assume the words “brain scans” carry some measure of authority, even when the findings go against our better judgment. Essentially, we are seduced by the pretty pictures of brain scans, as we often are by new and exciting technologies.
What does this say about our tendency to see technology as the answer to all our questions, even at the expense of our own better judgment? From a practical standpoint, what implications does this have for jurors in the courtroom, who are likely to be influenced by these pictures, much like the subjects of the study mentioned above? And more generally, what happens in a future where these technologies may be used to try to prove you are thinking something – you insist you are not, but the brain scans insist you are? Will we come to trust technology more than we trust ourselves?
And more broadly, the question seems to be this: can we really make the jump from identifying basic item recognition to saying that these machines can read our “essences”? Will these types of neuroscientific discoveries that help explain biological mechanisms of the brain necessarily lead to a comprehensive picture of consciousness? Or are we overestimating our own technological capabilities, and our own abilities to use technology to really read something as complex as the mind?
And furthermore, what are the risks of thinking we can predict behavior when we can’t – of assuming our technologies will answer questions whose answers might be more complicated than we are giving them credit for? What would happen if we started condemning people for their “predispositions,” and not their actual actions? If we started basing convictions on “subconscious recognition”? The potential for harm, and for infringement on civil liberties, seems profound.
But perhaps the more interesting question is to consider why we trust science so much more than anything else. In his influential essay, “The Question Concerning Technology,” the philosopher Heidegger argued that technology, by revealing the world through a technological framework, will increasingly shut out other ways of seeing the world — ways of understanding the world and ourselves through art, for example, or through the humanities. Has this transition already taken place? Is it already impossible to see the world, and our place in it, through other ways than the technological and scientific? By presuming these scans provide more reliable evidence of who a person is than his or her actual actions — by thinking they show the “essence” of who someone is, as Just says — are we concealing other aspects of the human condition that may be accessible only through non-technological avenues?
Jonah Lehrer presents this idea articulately in his article, “The Future of Science is…Art?” where he writes about the limitations of viewing the world solely through a scientific lens, calling for a need for art to explain the things which science cannot:
“The standard response of science is that…art is too incoherent and imprecise for the scientific process. Beauty isn’t truth… If it can’t be plotted on a line graph or condensed into variables, then it’s not worth taking into account. But isn’t such incoherence an essential aspect of the human mind? Isn’t our inner experience full of gaps and non-sequiturs and inexplicable feelings? In this sense, the messiness of the novel and the abstraction of the painting is actually a mirror. As the poetry critic Randall Jarrell put it, “It is the contradictions in works of art which make them able to represent us—as logical and methodical generalizations cannot—our world and our selves, which are also full of contradictions.”
Great novelists like Virginia Woolf “have constructed elegant models of human consciousness that manage to express the texture of our experience, distilling the details of real life into prose and plot. That’s why their novels have endured: because they feel true. And they feel true because they capture a layer of reality that reductionism cannot.”
“The arts are an incredibly rich data set, providing science with a glimpse into its blind spots,” he adds. “No scientific model of the mind will be wholly complete unless it includes what can’t be reduced.”
Lehrer’s point seems critical: does measuring one’s rising serotonin and dopamine levels in the caudate nucleus, associated in many studies with the feeling of love in the brain, truly capture the feeling of love in any meaningful way? Don’t the volumes of Neruda’s poetry or a Shakespearean sonnet capture it better? Might a novel, a poem, a painting, even a simple conversation, provide more of a window into someone’s essence than a brain scan? (“It is quite possible—overwhelmingly probable, one might guess—that we will always learn more about human life and personality from novels than from scientific psychology,” Lehrer quotes Noam Chomsky as saying.) Surely pictures of our brains can provide us with important and interesting information about ourselves, but can they explain everything, eliminating the need for all other modes of understanding? Is there any room in this view of a person for the concepts of a soul, of a spirit, of a mind that is ethereal, and not purely biological?
So the very basic question underlying this whole debate might be summarized as follows: are we, or are we not, reducible to scientific premises? Is there room for any explanation of the human condition other than a scientific one? And though a neuro-reductionist would say it’s only a matter of time until everything can be explained through that scan, my question is: is this true only if we accept it to be? Are we assigning technology this power, and thereby deciding to shut off other ways of seeing human beings?
This issue, to me, has two layers: the first is the actual implications of these technologies – in the courtroom, and in society – and the second is what this issue says about our quest to understand who we are, and about what science can explain, and what it can’t.
Cohen and Greene represent a reductionist view of the brain, believing that neuroscience will eventually explain everything about how and why we behave the way we do, and that the law – and how we hold people accountable for their actions – should be adjusted accordingly.
However, I am less convinced that we are anywhere near mapping where all motives and behaviors reside in the brain. Instead, I identify with Matthew Crawford’s perspective in his article “The Limits of Neuro-Talk,” in which he calls for “respect for the machine,” saying: “The human brain, everyone agrees, presents complexity that is simply colossal by comparison—by one estimate, the number of possible neuronal pathways is larger than the number of particles in the universe.”
Technology, it seems, always presents itself as the answer to all our questions; however, we may be overinflating the ability of our technological tools to explain everything there is to know. Like genetic determinism, neuro-reductionism is enticing as a way to explain, down to a basic unit, what a human being is and why he or she acts a certain way. But many geneticists report that learning more about our own biology has revealed not its simplicity and reducibility, but rather its immense complexity. Though we once predicted that we would locate genes for all behavior, we have in fact found that, for the most part, single genes are not wholly predictive: rather, it is the complex interplay of many genes, along with the influence of environmental factors, that determines behavior — and even then it can be a total crapshoot as to how a person develops. Two twin sisters with precisely the same genome can have vastly different “epigenomes”: one can develop cancer while the other does not; one can be temperamental and angry, the other placid and calm. An attempt to “standardize” our predictions about the brain — to draw conclusions from what the lighting up of one person’s amygdala might mean, or what the recognition of an object might indicate — will likely fail to capture the brain’s immense complexity. And such an attempt, particularly in these early stages of the technology’s capabilities, would, I worry, probably result in more harm than good.
But more importantly, we might challenge the idea that science and technology can offer answers to all of our questions. Lehrer writes:
“The history of science is supposed to obey a simple equation: Time plus data equals understanding. One day, we believe, science will solve everything…But the trajectory of science has proven to be a little more complicated. The more we know about reality – about its quantum mechanics and neural origins – the more palpable its paradoxes become. As Vladimir Nabokov, the novelist and lepidopterist, once put it, “The greater one’s science, the deeper the sense of mystery.””
…“The fundamental point is that modern science has made little progress towards any unified understanding of everything. Our unknowns have not dramatically receded. In many instances, the opposite has happened, so that our most fundamental sciences are bracketed by utter mystery.
The epic questions that modern science must answer cannot be solved by science alone…The struggle for scientific truth is long and hard and never ending. If we want to get an answer to our deepest questions—the questions of who we are and what everything is—we will need to draw from both science and art, so that each completes the other.”
So the issue comes down to this: can we really reduce the brain to its component parts? A person to his or her biological substrates? Neuro-reductionists would say biology is everything; there is no free will. We are our biology: “all mental and behavioral activity is the causal product of physical events in the brain.” As Marcel Just says in the video, “we are biological creatures, you know, our limbs we accept are muscles and bone and our brain is a biological thinking machine.” These scans, therefore, “reveal the essence of who we are as a person.” Is this true — can science and technology ultimately explain everything there is to know about human beings? And if so, are we only a few technological advancements away from understanding the human condition – why we act the way we do, why we make the decisions we make – and from dispelling the notion of free will altogether?
I, for one, highly doubt it. I reject the idea that science is the only portal through which to understand the human condition – leaving no room for the arts, or for philosophy, or other modalities of understanding.
As Lehrer points out in his article, “The sciences must recognize that their truths are not the only truths. No single area of knowledge has a monopoly on knowledge.”
Technology and science seek to explain, to reduce, the human experience, down to the most basic unit of understanding, and often present themselves as the only ways of understanding the world. But perhaps there are aspects of the human experience science cannot grasp; perhaps we cannot be explained in ‘basic units.’ Perhaps, as poet Wendell Berry says, “We should not mislead ourselves. There is more to the world, and to our own work in it, than we are going to know.”
Consider the use of fMRI technologies in the courtroom: should we embrace these scans for lie detection? For confirming eyewitness testimony? For identifying “thoughts” when the person claims to be thinking, or is saying, something different? For “preemptively screening” who might commit criminal behavior? What do you think about more commercial uses for these scans – between parents and children, romantic partners, or employers and workers – for the sake of lie detection and thought identification?
What are some of the broader metaphysical implications of these technologies? Do you agree with Cohen and Greene that neuroscience refutes free will? Will these scans explain everything there is to know about human beings, and reveal our “essences”? Or are there aspects of human beings that cannot be explained through the scientific and technological? Do you agree with Jonah Lehrer that we need art and the humanities to have a comprehensive picture of the human condition? How do our views of technology influence this debate?

Want to read more? Check out these articles:
“The Brain on the Stand” by Jeffrey Rosen
“The Limits of Neuro-Talk” by Matthew Crawford
“Does Neuroscience Refute Free Will?” by Lucretius
“For the Law, Neuroscience Changes Everything and Nothing” by Joshua Greene and Jonathan Cohen
“The Future of Science Is…Art?” by Jonah Lehrer
“Neuroimaging and Capital Punishment” by Carter Snead
“False Signals Lead to Misleading Brain Scans” by Jon Hamilton