Reading Minds With fMRIs

“There is a sacred realm of privacy for every man and woman where he makes his choices and decisions–a realm of his own essential rights and liberties into which the law, generally speaking, must not intrude.” -Geoffrey Fisher

In the age of social networking and the Internet, with personal information everywhere being made public, there is no question that we are losing privacy left and right.  One might say that the last bastion of privacy – our own thoughts – is all we have left to hold onto (although some people, driven by the age of Twitter, have taken to publishing all of those, too).

But a segment on 60 Minutes last year brought to light that even these private thoughts are up for grabs, with brain scanning technologies “making it possible for the first time in human history to peer directly into the brain to read out the physical make up of our thoughts, some would say, to read our minds.”  Functional Magnetic Resonance Imaging (fMRI for short) lets us see metabolic activity inside the brain: by measuring changes in blood flow and oxygenation and linking them with certain mental states, researchers can begin to identify where thoughts occur and what they might look like.  The implications – for the law, for our notions of privacy, for our conceptions of free will – are profound.  “We all take as a given that we’ll never really know for sure, that the content of our thoughts is our own.  Private, secret, unknowable by anyone else,” says 60 Minutes correspondent Lesley Stahl.  “Until now, that is.”

“Reading Your Mind,” the 60 Minutes segment that aired last March, walks us through just how these brain scans are being used for “thought identification,” and raises some interesting questions about how these new technologies might be used in the future. Below are some of the thoughts it raised for me:

How does it work?

To summarize part of the video: Marcel Just’s work shows that fMRI technology can identify the areas of the brain associated with thinking about certain objects.  For example, you can show a subject a series of pictures – a screwdriver, an igloo – and have the subject think about those objects; then, when you present a pair of objects and ask the subject to think about one of them, the computer can identify which object the subject was thinking of by tracking which areas of the brain light up.


When you think about an object like a screwdriver, similar parts of the brain tend to activate – the parts implicated in holding a tool, the parts associated with what you use a screwdriver for, the parts implicated in twisting an object, and so on.  By piecing these bits of data together, the computer (and thus the researcher) can identify which object you were thinking about by seeing which regions activate, and where.
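The segment doesn’t describe Just’s actual analysis pipeline, but the general idea – train a pattern classifier on voxel activations, then predict which object a new scan corresponds to – can be sketched in a few lines. Everything below (the voxel count, the noise model, the nearest-centroid rule) is a simplified, hypothetical stand-in for the real methods, not a description of them:

```python
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 200  # hypothetical number of voxels in a region of interest

# Assume each object evokes a characteristic activation pattern.
prototypes = {
    "screwdriver": rng.normal(size=N_VOXELS),
    "igloo": rng.normal(size=N_VOXELS),
}

def simulate_trial(obj, noise=1.0):
    """One simulated fMRI 'trial': the object's pattern plus measurement noise."""
    return prototypes[obj] + rng.normal(scale=noise, size=N_VOXELS)

# Training: average several trials per object to estimate its pattern.
centroids = {
    obj: np.mean([simulate_trial(obj) for _ in range(20)], axis=0)
    for obj in prototypes
}

def classify(trial):
    """Pick the object whose learned pattern correlates best with the trial."""
    return max(centroids, key=lambda obj: np.corrcoef(trial, centroids[obj])[0, 1])

# A new, unseen trial of the subject thinking about a screwdriver:
print(classify(simulate_trial("screwdriver")))  # → "screwdriver"
```

Real studies use far more voxels, cross-validation, and more sophisticated classifiers, but the logic is the same: the “mind reading” is a statistical match between a new activation pattern and previously learned ones.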


These studies are indeed remarkable.  Though the thoughts they can identify are exceedingly basic – showing that a person is picking “screwdriver” from the options of “screwdriver” and “igloo” is a far cry from reading a complex emotion like anger, motive, or jealousy – they certainly open the door to some interesting issues in the field of “reading minds.” Some implications of this technology are still theoretical, and depend on how advanced our technologies get; others are much more immediate, and could be implemented now or in the very near future.

So what are the current implications of this work? One of the parts of the segment I found most fascinating considers the implications of thought recognition in the court of law.  In his article “The Brain on the Stand,” Jeffrey Rosen elaborated on some potential applications as well:


-One use of this technology could be to identify “recognition” patterns that might implicate someone in a crime.  The area of the brain that lights up with recognition is different than the area that lights up in a novel situation, so if you can show that a person is familiar with the scene of a crime or with a murder weapon by tracking which parts of their brain fire when they are exposed to these things, you might be able to prove that they were involved with the crime. For example, as mentioned in the video, you might be able to tell if someone has been in an Al Qaeda training camp before, perhaps by exposing them to photos of the camps and seeing what happens in their brain; or perhaps you could show them a list of names and see if their brain “lights up” with recognition.  In fact, a case of this very technique was reported in India, where a woman was convicted and sentenced after an EEG allegedly showed she was familiar with the circumstances around the poisoning of her ex-fiancé.

-Another application: this technology could be used in a line-up scenario, letting a witness view the potential suspects while a brain scan detects whether he or she recognizes anyone – even if the witness can’t consciously remember who the criminal is or what they look like. Rosen explains, “The brain stores memories both explicitly and implicitly.  Assemble a standard police line up and a person may not be able to explicitly remember who was the attacker in question; but perhaps the brain “recognizes” the face on some implicit level, and lights up when looking at one of the attackers and none of the others.  This method literally reads a person’s mind, gathering information that the victim may not have even been able to explicitly recall on his or her own.”


-Another potential use? Advanced versions of lie detection are a big area being pursued.  “Current lie detectors use biological cues to assess if someone is lying: pupil dilation, stress signals, and the like,” Rosen explains.  “The future of lie detection, some think, will be peering into the brain.  It might light up differently in the brain if you committed the action than if you watched it happen.” Indeed, two companies featured in the video, Cephos and No Lie MRI, have already capitalized on this trend. And who would stop at criminal defense? “I have two teenage daughters,” jokes Paul Root Wolpe, the Emory ethicist interviewed in the video. “I come home one day and my car is dented and both of them say they didn’t do it.  Am I going to be able to drag them off to the local lie detection agency and get them put in a scanner?”


All of these techniques, Rosen says, could also lead to pre-emptive screening – if you could look into someone’s brain and see that they have “reduced glucose metabolisms, faulty amygdalas, disinhibition in the prefrontal cortex,” you might be able to better predict criminal behavior.  “You could require counseling, surveillance, G.P.S. transmitters or warning the neighbors,” Henry Greely adds in Rosen’s article. “None of these are necessarily benign, but they beat the heck out of preventative detention…Even with today’s knowledge, I think we can tell whether someone has a strong emotional reaction to seeing things, and I can certainly imagine a friend-versus-foe scanner. If you put everyone who reacts badly to an American flag in a concentration camp or Guantánamo, that would be bad, but in an occupation situation, to mark someone down for further surveillance, that might be appropriate.”

Sound a little too much like Big Brother yet?  “I always tell my students there is no science fiction anymore,” Wolpe said. “All the science fiction I read in high school, we’re doing.”

Further Implications

Brain scans may be used in a variety of ways in the court of law; but more deeply, they raise some very important questions about the fundamental ways we understand ourselves. Will these brain scanning technologies enable us to see into the brain and predict, explain, and determine everyone’s behavior? If we are able to determine that we are just the biology of our brains – and not in control, in a sense, of what we do – then does that mean we don’t possess free will?  If we are just the biological substrates of our thoughts, are we really, in any meaningful philosophical sense, responsible for our actions?

In their article “For the Law, Neuroscience Changes Nothing and Everything,” Joshua Greene and Jonathan Cohen from Princeton University take the view that as neuroscience uncovers more and more about the inner workings of the mind, these technologies will provide us with a biological explanation for all human behavior, and that our conceptions of ourselves will be redefined as a result:

“At some time in the future,” they write, “we may have extremely high-resolution scanners that can simultaneously track the neural activity and connectivity of every neuron in a human brain, along with computers and software that can analyze and organize these data.  Imagine, for example, watching a film of your brain choosing between soup and salad.  The analysis software highlights the neurons pushing for soup in red and the neurons pushing for salad in blue.  You zoom in and slow down the film, allowing yourself to trace the cause-and-effect relationships between individual neurons – the mind’s clockwork revealed in arbitrary detail.  You find the tipping-point moment at which the blue neurons in your prefrontal cortex out-fire the red neurons, seizing control of your pre-motor cortex and causing you to say, “I will have the salad, please.”


Greene and Cohen continue:

“At some further point this sort of brainware may be very widespread, with a high-resolution brain scanner in every classroom.  People may grow up completely used to the idea that every decision is a thoroughly mechanical process, the outcome of which is completely determined by the results of prior mechanical processes. What will such people think as they sit in their jury boxes?  Suppose a man has killed his wife in a jealous rage.  Will jurors of the future wonder whether the defendant acted in that moment of his own free will?  Will they wonder if it was really him who killed his wife rather than his uncontrollable anger?  Will they ask whether he could have done otherwise? Whether he really deserves to be punished, or if he is just a victim of unfortunate circumstances?

“We submit that these questions, which seem so important today, will lose their grip in an age when the mechanical nature of human decision-making is fully appreciated.  The law will continue to punish misdeeds, as it must for practical reasons, but the idea of distinguishing the truly, deeply guilty from those who are merely victims of neuronal circumstance will, we submit, seem pointless.”


Indeed, with advances in our understanding of neurobiology, and our ability to explain certain thoughts and behaviors based on activity in the brain, some predict that a new type of defense argument will emerge: “It wasn’t me, ladies and gentlemen of the jury. My brain made me do it” – in which a person is no more responsible for his or her actions than a car with faulty brakes is for an accident, says Stanford neuroscientist Robert Sapolsky.  And from this deterministic perspective, Cohen and Greene extrapolate a much broader philosophical shift:

“Free will, as we ordinarily understand it, is an illusion.”

Now, Greene and Cohen’s argument may appear to take neuroreductionism to its extreme – but it is an extreme that many neuroscientists, rationalists, and “science-can-explain-everything-ists” jump to as well.  What would this mean for our society, and for how we view ourselves?  It might mean that people could blame their behaviors on faulty brain wiring; that we could predict bad behavior from bad brains; that a person isn’t any more responsible for their actions than they are for having a defective heart or malfunctioning kidneys.  This, of course, would radically change the way we treat criminal behavior and the type of punishment we put forth, which, as Cohen and Greene argue, would have to shift from a retributivist stance (punishing someone because they deserve it, from the point of view of justice) to a consequentialist one (punishing someone to prevent them from committing more crimes, in the utilitarian tradition):

“We maintain that advances in neuroscience are likely to change the way people think about human action and criminal responsibility by vividly illustrating lessons that some people appreciated long ago.  Free will, as we ordinarily understand it is an illusion generated by our cognitive architecture…  At this time, the law deals firmly but mercifully with individuals whose behavior is obviously the product of forces that are ultimately beyond their control.”

Another Perspective

Still, many challenge this presumption, saying that Cohen and Greene’s argument – along with any other that presumes behavior is caused solely by a mere brain state – confuses the direction of causation.  Emotions and decisions are not necessarily caused by the brain, resulting in behavior that “is obviously the product of forces ultimately beyond their control,” but rather may be created and then manifested in the brain; in other words, if an area of my brain lights up because of a decision I make, it is because I made that decision, not because my brain made it for me.  Rosen offers an example: “If you are told your mother has died,” he explains, “your dismayed comprehension of the fact, which is a subjective mental event, will cause an objective physiological change in your brain.”

Similarly, if someone commits murder out of rage, it may not be the brain that caused the rage, but rather a person who experienced rage and then decided to act on it.  In his article “Does Neuroscience Refute Free Will?” the blog writer Lucretius elaborates:

“To say that we are victims of neuronal circumstances is to say that we are victims of ourselves.  The underlying assumption is that we have no control over ‘neuronal circumstances,’ just as we have no control over ‘external circumstances.’ But this assumption (a newly bottled behaviorist assumption) entirely contradicts our knowledge that the brain is a self-organizing and self-regulating biological system, not merely a step in the transformation of some external stimulus to behavioral output.”  In other words, they assume we are not in charge of our own brains; that “our brains commit crimes,” but “we remain innocent.” This division is unfounded.  Our choices may elicit neuronal firing, not the other way around.

Indeed, this jump – from brain scans showing that thoughts are taking place to explaining where those thoughts come from – could be viewed as unfounded.  Brain scans and associated technologies, Matthew Crawford says in his article “The Limits of Neuro-Talk,” don’t provide evidence of how a thought is taking place, just that it is taking place: “With such signs (as fMRIs), we do not have a picture of a mechanism. We have a sign that there is a mechanism.” In other words, we are seeing that the brain works in a given way, not how or why.  Declarations of the denial of free will, when considered under this paradigm, naturally feel a bit overzealous.



But for a neuroreductionist who assumes that the brain is a force beyond our own control – that we are pawns and our brains the players – the notion of free will is indeed an illusion.  Is this perspective reliable?  Are we really not in control of our thoughts and actions – simply automatons acting out the neuronal messages of our brains?

Many recent findings in neuroscience challenge this view, showing that the brain is in fact highly amenable to conscious influence – that people’s conscious decisions have an effect on the way the brain functions and wires itself.  Indeed, the discovery of neuroplasticity – that the brain can be rewired based on experience, and that new neural pathways can be formed and old ones deconditioned based on one’s choices and practices – pokes a big hole in this “everything is determined” perspective.  In his book “The Brain That Changes Itself,” Norman Doidge shows how much influence people have over their own brain patterns, citing research in which people overcome what were once thought to be unchangeable biological constraints – traumatic brain injuries, mental illness – by literally changing the structure and function of the neural networks in their own brains.  He chronicles stories of stroke victims who regain use of their limbs by using conditioning techniques to rewire their brains, learning to control their limbs with new areas that weren’t subjected to neuronal death; people who overcome learning disabilities; and even people missing entire portions of their brain as a result of injury or disease who are able to build new neural networks that allow them to function.  All these examples raise the question: if we can control the outcome of our brains – if our brains are plastic, and amenable to biological restructuring based on conscious effort – then how could it be said that we don’t have responsibility for the actions we decide to take?  If we can control our brains, then aren’t we exercising a measure of free will?

To be sure, “neuroplasticity” is not a panacea for all brain impairments, and it does not mean that all activity of the brain can be shifted with conscious attention.  There’s no denying that certain brain pathologies – a damaged amygdala, or a tumor pressing on a part of one’s brain – can lead to irrational, unpredictable, and sometimes violent behavior, and that a person in this position may not be responsible for their actions in a strict sense, in the same way that someone who is criminally insane is treated differently by the law.  But does this mean that we should make the leap that Cohen and Greene make – that all behavior can be attributed to similarly uncontrollable brain activity?  To say that everyone who acts out of anger is free of blame because their amygdala is overactive seems to be confusing terms.  Many people feel anger, and their amygdalas are activated accordingly, and still they don’t commit acts of violence or crime.  Stephen J. Morse, professor of law and psychiatry at UPenn, adds: “Even if (one’s) amygdala made him more angry and volatile, since when are anger and volatility excusing conditions?  Some people are angry because they had bad mommies and daddies and others because their amygdalas are mucked up.  The question is: When should anger be an excusing condition?”  “Brains do not commit crimes,” Morse says, making an interesting distinction. “People commit crimes.”

And still, one wonders: even if we do identify the brain as “at fault” for causing criminal action, why would that justify the behavior?  One view is that whether it was the brain, or a person’s upbringing, or the Twinkies that they ate in excess, it doesn’t matter when it comes to responsibility for one’s actions. Morse says,

“So what if there’s biological causation? Causation can’t be an excuse for someone who believes that responsibility is possible. Since all behavior is caused, this would mean all behavior has to be excused.”

And the question remains: do these pictures into the brain provide a biological excuse for behavior, or merely a biological explanation of it?

How much do we really know?

The questions of free will and consciousness are deep philosophical debates that have been taking place for millennia, and will likely not soon be resolved. But with new technologies, whether they be a telescope or a brain scan, often come new views, perspectives, and philosophies about the world.  Trying to place what these technological findings tell us about the physical world into our philosophical frameworks is indeed a fascinating undertaking.

Are we in control of our own thoughts, actions, and lives, or is there some force that determines everything for us, regardless of our motives?  Is every action we take biologically predetermined, or do we have a say in which paths we go down? These questions are somewhat timeless, and yet it’s interesting to see how technologies have reframed the debate.  To be sure, determinists believe that advances in neuroscience give them strong ground to stand on – that everything to be explained about human beings will be explained through our understanding of the brain.  They see fMRIs as allowing us access into what we currently conceive of as the central part of our being – our brains – and project that these images will likely be the key to answering many questions about why we are the way we are, and why we act the way we do.  And why wouldn’t we, with the ability to see into our own brains, feel like we’ve finally gained access to the true answers of existence?  We once thought the heart to be the center of the human experience, and perhaps considered someone like Shakespeare most likely to articulate our human purpose.  Now, we live in the age of the brain, where we expect technology to show us, as Marcel Just says, “the essence of who we are.”

But new technologies often have a way of convincing us that we have finally figured out why things work the way they do, and we seem to cling to each technological development as if it will finally be the one that offers the answers.  Two questions I would pose: How much can we really learn about a person from these technologies? And what happens if we assume we can know more than we really do?

Limitations to fMRI Technology

Though these scans are certainly amazing technologies that enable much fruitful research, we currently seem to overstate their ability to reliably identify thoughts and patterns in the brain.  The scans are limited in and of themselves, from a technical standpoint, writes Norman Doidge: “The current generations of brain scans…detect bursts of activity that last one second in thousands of neurons.  But a neuron’s electrical signal often lasts a thousandth of a second, so brain scans miss an extraordinary amount of information.”

In his article “False Signals Cause Misleading Brain Scans,” NPR’s Jon Hamilton adds that there are a number of often undisclosed deficiencies with these scans. He interviews neuroscientist Chris Baker, who says, “The problem with functional imaging is that the signals we’re trying to get at are quite weak, and there’s a lot of noise.” And Hamilton adds: “The ‘noise’ is in the form of false signals. These can come from the scanning equipment itself, but a lot of it comes from the person being scanned. Every heartbeat affects the flow of blood, which changes the signal. Every tiny head movement blurs the image.”

Bearing out this point, the scans can pick up on signals that are spurious.  To showcase this, one study at Dartmouth put a dead salmon in an fMRI scanner, showed it pictures of emotional situations just as they would a human subject (for humor’s sake), and recorded the results. Interestingly, the fMRI “picked up on signals” from the dead salmon’s “brain activity” – when, of course, there was no activity at all.  “By complete, random chance, we found some voxels that were significant that just happened to be in the fish’s brain,” the researcher Craig Bennett said. “And if I were a ridiculous researcher, I’d say, ‘A dead salmon perceiving humans can tell their emotional state.’”
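The salmon result is a textbook illustration of the multiple-comparisons problem: test enough voxels at p < 0.05 and thousands will pass by chance alone. A back-of-the-envelope simulation makes the point (the voxel count and thresholds here are illustrative assumptions, not the study’s actual numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 130_000  # a rough whole-brain voxel count (illustrative assumption)

# Under the null hypothesis -- no brain activity anywhere, as in a dead
# salmon -- each voxel's test statistic is pure noise, ~ N(0, 1).
z = rng.normal(size=n_voxels)

# Uncorrected: |z| > 1.96 is the usual two-sided p < 0.05 threshold.
uncorrected_hits = int((np.abs(z) > 1.96).sum())

# Bonferroni-corrected: require p < 0.05 / n_voxels across the whole brain,
# which for 130,000 tests corresponds to roughly |z| > 5.08.
corrected_hits = int((np.abs(z) > 5.08).sum())

print(uncorrected_hits)  # ~5% of 130,000: thousands of 'active' voxels by chance
print(corrected_hits)    # almost certainly zero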

So what does all of this mean?  It means that there’s a lot of white noise, static, and unmeasurable (or perhaps even unknowable) activity in the brain that gets overlooked, dismissed, or left unprocessed during a brain scan.  It means that the pretty pictures we see in magazines and articles may not be as easily color-coded as we think they are.  It means that while we may be able to find the correlate of “screwdriver” in the brain, we may be a long way from identifying that someone committed murder, that someone harbors terrorist ideologies, or that someone “subconsciously” recognizes a burglar or rapist in a line-up.

But more importantly, it raises the question of whether these technologies – or any technologies – are really capable of answering all our questions. We seem to have a tendency to like reductionist explanations, to hear that science has proven something to be true and then to shut out other explanations for phenomena.  But this tendency can be a trap, particularly when our science is not as good as we claim it to be. And it raises a further question: are there aspects of being human that science simply cannot explain?


Judging from the trend of neuroscience, nothing is off-limits for a scientific explanation – anger, romantic love, and even belief in God have been chalked up to nothing more than neurons firing in certain parts of the brain. (Indeed, every month, dozens of studies are published claiming to have found the areas of the brain responsible for various phenomena – “‘The God Spot’ is found in the brain,” reads one article; “Watching New Love As It Sears In The Brain,” reads another.)  A person falling in love is reduced to surging dopamine in the caudate nucleus, and belief in God is described as excessive firing of neurons in the temporal lobe.  Excessive rage that leads to murder is attributed not to some intangible motive of retribution or anger but to an overactive emotional center in the brain. And we tend to be transfixed by these findings, even when they are, examined closely, extremely broad generalizations based on a limited amount of data.  We have a tendency to find explanations that invoke terminology about the brain more convincing than other explanations, simply because, as Rosen’s article points out, “we have prettier pictures and it appears more scientific.”  In “The Brain on the Stand,” Rosen writes about this tendency to treat brain findings as more meaningful than they are, citing the term “brain overclaim syndrome.”


The bright lights and scientific-looking pictures appeal to our inner rationalists; we believe articles that say things have been proven by brain scans simply because it all seems so convincing, so solid, so technologically sound.  A fascinating study published in the Journal of Cognitive Neuroscience showed that people are “seductively allured” by neuroscientific explanations, significantly more likely to believe data when it is preceded by the words “brain scans indicate,” even when the data or research findings are obviously faulty or illogical. (In other words, present a person with two sets of data that say exactly the same thing, even if it’s not very believable, and they are much more likely to believe the data “proven” by brain scans.)

Crawford writes, “These findings suggest that we are culturally predisposed to surrender our own judgment in the face of brain scans. More generally, we defer to the mere trappings of ‘science.’” We automatically assume the words “brain scans” carry some measure of authority, even if the findings go against our better judgment.  Essentially, we are seduced by the pretty pictures of brain scans, as we often are by new and exciting technologies.

What does this say about our tendency to see technology as the answer to all our questions, even at the expense of our own better judgment?  From a practical standpoint, what implications does this have for jurors in the courtroom, who are likely to be influenced by these pictures, much like the subjects of the study mentioned above? And more generally, what happens in a future where these technologies may be used to try to prove you are thinking something – you insist you are not, but the brain scans insist you are?  Will we come to trust technology more than we trust ourselves?

And more broadly, the question seems to be this: can we really make the jump from identifying basic item recognition to saying that these machines can read our “essences”?  Will these types of neuroscientific discoveries that help explain biological mechanisms of the brain necessarily lead to a comprehensive picture of consciousness?  Or are we overestimating our own technological capabilities, and our own abilities to use technology to really read something as complex as the mind?

And furthermore, what are the risks of thinking we can predict behavior when we can’t – of assuming that our technologies will answer questions whose answers might be more complicated than we are giving them credit for?  What would happen if we started condemning people for their “predispositions,” and not their actual actions?  If we started basing convictions on “subconscious recognition”?  The potential for harm, and for infringement on civil liberties, seems profound.

But perhaps the more interesting question is why we trust science so much more than anything else.  In his influential essay “The Question Concerning Technology,” the philosopher Heidegger argued that technology, by revealing the world through a technological framework, would increasingly shut out other ways of seeing the world – ways of understanding the world and ourselves through art, for example, or through the humanities.  Has this transition already taken place? Is it already impossible to see the world, and our place in it, in ways other than the technological and scientific?  By presuming these scans provide more reliable evidence of who a person is than his or her actual actions – by thinking they show the “essence” of who someone is, as Just says – are we concealing other aspects of the human condition that may be accessible only through non-technological avenues?

Jonah Lehrer presents this idea articulately in his article “The Future of Science is…Art?”, where he writes about the limitations of viewing the world solely through a scientific lens, calling for art to explain the things science cannot:

“The standard response of science is that…art is too incoherent and imprecise for the scientific process. Beauty isn’t truth… If it can’t be plotted on a line graph or condensed into variables, then it’s not worth taking into account. But isn’t such incoherence an essential aspect of the human mind? Isn’t our inner experience full of gaps and non-sequiturs and inexplicable feelings? In this sense, the messiness of the novel and the abstraction of the painting is actually a mirror. As the poetry critic Randall Jarrell put it, “It is the contradictions in works of art which make them able to represent us—as logical and methodical generalizations cannot—our world and our selves, which are also full of contradictions.”

Great novelists like Virginia Woolf “have constructed elegant models of human consciousness that manage to express the texture of our experience, distilling the details of real life into prose and plot.  That’s why their novels have endured: because they feel true.  And they feel true because they capture a layer of reality that reductionism cannot.”

“The arts are an incredibly rich data set, providing science with a glimpse into its blind spots,” he adds. “No scientific model of the mind will be wholly complete unless it includes what can’t be reduced.”

Lehrer’s point seems critical: does measuring rising serotonin and dopamine levels in the caudate nucleus, associated in many studies with the feeling of love, truly capture that feeling in any meaningful way?  Don’t the volumes of Neruda’s poetry or a Shakespearean sonnet capture it better? Might a novel, a poem, a painting, even a simple conversation, provide more of a window into someone’s essence than a brain scan?  (“It is quite possible—overwhelmingly probable, one might guess—that we will always learn more about human life and personality from novels than from scientific psychology,” Lehrer quotes Noam Chomsky as saying.)  Surely pictures of our brains can provide us with important and interesting information about ourselves, but can they explain everything, eliminating the need for all other modes of understanding?  Is there any room in this view of a person for the concepts of a soul, of a spirit, of a mind that is ethereal, and not purely biological?

So the very basic question underlying this whole debate might be summarized as follows: are we, or are we not, reducible to scientific premises? Is there room for any other explanations for the human condition than scientific ones? And though a neuro-reductionist would say it’s only a matter of time until everything can be explained through that scan, my question is: is this true only if we accept it to be? Are we assigning technology this power, and thereby deciding to shut off other ways of seeing human beings?

Conclusion

This issue, to me, has two layers: one is considering the actual implications of these technologies – in the courtroom, and in society – and the second is what this issue says about our quest to understand who we are, and about what science can explain, and what it can’t.

Cohen and Greene represent a reductionist view of the brain, thinking that we will be able to explain everything about how and why we behave the way we do through neuroscience, and that the law – and how we hold people accountable for their actions – should be adjusted accordingly.

However, I am less convinced that we are very far along the path of finding out where all motives and behaviors exist in the brain.  Instead, I identify with Matthew Crawford’s perspective in his article “The Limits of Neuro-Talk,” in which he calls for “respect for the machine,” saying, “The human brain, everyone agrees, presents complexity that is simply colossal by comparison—by one estimate, the number of possible neuronal pathways is larger than the number of particles in the universe.”


An attempt to “standardize” our predictions of the brain will likely fail in trying to capture the brain’s immense complexity

Technology, it seems, always presents itself as the answer to all our questions; however, we may overinflate the ability of our technological tools to explain everything there is to know. Much like genetic determinism, neuro-reductionism is enticing as a way to explain, down to a basic unit, what a human being is and why he or she acts a certain way.  But many geneticists explain that learning more about our own biology has brought awareness not of its simplicity and reducibility, but rather of its immense complexity.  Though we once predicted that we would locate genes for all behavior, we have in fact found that, for the most part, single genes are not wholly predictive: rather, it is the complex interplay of many genes, along with the influence of environmental factors, that determines behavior — and even then it can be a total crapshoot as to how a person develops.  Twin sisters with precisely the same genome can have vastly different “epigenomes” – one can develop cancer while the other does not; one can be temperamental and angry, the other placid and calm.  An attempt to “standardize” our predictions of the brain – to gain understanding based on what the lighting up of one person’s amygdala might mean, or what the recognition of something might indicate – will likely fail to capture the brain’s immense complexity.  And such an attempt, particularly in these early stages of technological capability, I worry, would probably result in more harm than good.

But more importantly, we might challenge the idea that science and technology can offer answers to all of our questions.  Lehrer writes:

“The history of science is supposed to obey a simple equation: Time plus data equals understanding.  One day, we believe, science will solve everything…But the trajectory of science has proven to be a little more complicated.  The more we know about reality – about its quantum mechanics and neural origins – the more palpable its paradoxes become.  As Vladimir Nabokov, the novelist and lepidopterist, once put it, “The greater one’s science, the deeper the sense of mystery.””

…”The fundamental point is that modern science has made little progress towards any unified understanding of everything. Our unknowns have not dramatically receded.  In many instances, the opposite has happened, so that our most fundamental sciences are bracketed by utter mystery.

He then concludes:

“The epic questions that modern science must answer cannot be solved by science alone…The struggle for scientific truth is long and hard and never ending. If we want to get an answer to our deepest questions—the questions of who we are and what everything is—we will need to draw from both science and art, so that each completes the other.”

So the issue comes down to this: can we really reduce the brain to its component parts?  A person to his or her biological substrates?  Neuroreductionists would say biology is everything; there is no free will.  We are our biology: “all mental and behavioral activity is the causal product of physical events in the brain.”  As Marcel Just says in the video, “we are biological creatures, you know, our limbs we accept are muscles and bone and our brain is a biological thinking machine.” These scans, therefore, “reveal the essence of who we are as a person.” Is this true — can science and technology ultimately explain everything there is to know about human beings? And if so, are we only a few technological advancements away from understanding the human condition – why we act the way we do, why we make the decisions we make – and from dispelling the notion of free will altogether?

I, for one, highly doubt it.  I reject the idea that science is the only portal through which to understand the human condition – leaving no room for the arts, or for philosophy, or other modalities of understanding.

As Lehrer points out in his article, “The sciences must recognize that their truths are not the only truths.  No single area of knowledge has a monopoly on knowledge.”

Technology and science seek to explain – to reduce – the human experience down to the most basic unit of understanding, and often present themselves as the only ways of understanding the world.  But perhaps there are aspects of the human experience science cannot grasp; perhaps we cannot be explained in ‘basic units.’ Perhaps, as the poet Wendell Berry says, “We should not mislead ourselves.  There is more to the world, and to our own work in it, than we are going to know.”

Questions

Consider the use of fMRI technologies in the courtroom: should we embrace these scans for lie detection? For confirming eyewitness testimony? For identifying “thoughts” when the person claims to be thinking, or is saying, something different?  For “preemptively screening” those who might commit criminal behavior? What do you think about more commercial uses for these scans – between parents and children, romantic partners, or employers and workers – for the sake of lie detection and thought identification?

What are some of the broader metaphysical implications of these technologies? Do you agree with Cohen and Greene that neuroscience refutes free will?  Will these scans explain everything there is to know about human beings, and reveal our “essences”? Or are there aspects of human beings that cannot be explained through the scientific and technological?  Do you agree with Jonah Lehrer that we need art and the humanities to have a comprehensive picture of the human condition?  How do our views of technology influence this debate?

Want To Read More? Check Out These Articles:
The Brain on the Stand by Jeffrey Rosen
The Limits of Neuro-Talk by Matthew Crawford
Does Neuroscience Refute Free Will? by Lucretius
For the Law, Neuroscience Changes Nothing and Everything by Joshua Greene and Jonathan Cohen
The Future of Science Is…Art? by Jonah Lehrer
Neuroimaging and Capital Punishment by Carter Snead
False Signals Lead to Misleading Brain Scans by Jon Hamilton

21 Responses to “Reading Minds With fMRIs”

  1. Alec says:

    I think the big problem with all of this is that it relies far too much on the cooperation of the subject being studied. As said in a previous article, a person could move their head or think a bunch of random thoughts instead of what they are told and ruin the entire study, rendering it useless. In addition to this, there is also the evidence that brain scans can simply be unreliable themselves, since the imaging is actually only a computer model, drawn up by incredibly complex algorithms.

    Playing the game, however, I see this as incredibly bad news for society. By actions simply being recognized as results of brain chemistry and neural networks, the responsibility is taken off of the person. We tend to do this already with the mentally handicapped and women at “that time of the month.” Regardless of whether these cases are justified or not, the acceptance of any action simply due to poor brain chemistry is not a good way to go. I also have a hard time believing it, since people exercise self-control and good responsibility all the time. If people are told that they are no longer responsible for their actions, I don’t see how society could continue to function.

    As it has been said, we are able to condition our brains, and we do have control over them. Because of this, any preemptive judicial action on individuals that appear to be inclined toward criminal behavior would be completely unjustified. It may make sense to watch them or advise treatment, ASSUMING that the screening has proven to be reliable, which is difficult to prove in itself.

    All the ways I look at it make it very apparent it is simply something that should be avoided. It’s very cool and impressive, but its inevitable uses in the real-world just do not look good.

    • Molly Quigley says:

      I agree with this point that the concept of using fMRIs for lie detection is flawed because it is so easy to skew the results. I am sure that criminals would find ways to systematically make results negative and negate the results completely. This makes me feel that the technology of fMRIs as a lie detection test will never be 100 percent accurate, and, therefore, never should be allowed.

  2. Alex G says:

    This article brings up a number of issues. It opens with an acknowledgment that the right to privacy is a pivotal part of our culture and brings up the fact that this technology will infringe on this right. If our current interpretation of the constitution holds firm, Alec is correct that it will be necessary to get consent from the subjects before the brain scans could take place. It would take an incredible, frightening constitutional shift away from a protection of the right to privacy to allow for this technology’s application in the courtroom or other areas.
    Furthermore, the argument that chemical interactions in our brain exclusively determine our actions is both harmful and inaccurate. It is harmful because it would become common for people to excuse their mistakes by saying that the chemicals in their brain forced them to act a certain way, and science would support their claims. Additionally, by negating responsibility for our actions, we eliminate the possibility of passing judgment on people or punishing them for their actions. If a person is not responsible for their actions, how can we punish them? This same argument is already used in court in cases where the defendants suffer mental illnesses. Therefore, it would be almost impossible to create a coherent criminal justice system with an acceptance of this argument. This argument is also inaccurate. We live our lives knowing that we have the capacity to make choices, making a great many choices every day. The only way we can accept this view is if we admit that our choices are solely defined by the interactions of chemicals within our brain. Otherwise, it is a combination of chemicals and our ability to separately make choices that goes into our decisions.
    The use of this technology outside of the courtroom is even more frightening. If potential employers were able to look inside the brain and understand the makeup and functioning of your brain, they could predict, possibly inaccurately, whether or not you would excel at your desired position. While this may give your employer a more accurate depiction of your actual value, it would have dramatic effects on personal freedom and the ability to seek a profession of your choice. Instead of choosing a profession, people would likely be forced to join the industry that their brain was most genetically disposed to handle. As a result, people would often work in areas that they do not enjoy as much and where they will not be as motivated. The loss of freedom is bad enough, but the lack of motivation in the workplace could actually adversely affect production, offsetting the initial advantage that was gained by a more accurate employee selection.

  3. Loren M says:

    We seem to have a tendency to overestimate new things. We saw this with Technological Determinism—there were a great number of technological advancements being made very quickly, and suddenly technology became the dictator of human destiny. This is simply the vogue version of technological determinism; call it neuroscience determinism if you will. Only now everything is explained and controlled by our brains instead of technology.

    There’s been an explosion of neuroscientific advancements since the 1990’s, and I think there’s a tendency in all the excitement (and apprehension) that inevitably accompanies new knowledge to exaggerate the importance of the information. For example, anthropologist Helen Fisher, among other researchers, has gathered fMRI data that suggests people in love have increased activity in the caudate nucleus and ventral tegmental area. That’s pretty cool, that we can see a physical manifestation of our emotional state, and attribute some of what we experience to certain brain processes. It’s exciting, because it betters our understanding of how the brain works and contributes to the experience of romantic love. And it’s very tempting to make the leap of, “Oh, now we know what the parts of the brain are and what the chemicals are that make us feel like we’re in love, so now we understand what love is.” But I don’t think that’s true.

    First, I don’t think these studies really teach us that much we didn’t already know or at least suspect. People have been writing and singing and talking about love for all of human history, and falling in love for much longer than that. It may be news that the caudate nucleus is partly responsible, but did we ever really doubt that our brains had something to do with falling in love? We’ve increased the specificity of our knowledge, but if we can take a step away from the big words and “pretty pictures”, I think we’d realize that this new knowledge isn’t exactly revolutionary.

    Second, understanding what the brain is doing is cool, but I think for most of us, knowledge about the biochemistry of love doesn’t really help us understand love. I’ve read a lot of books and articles and studies on this subject, because I personally find it really interesting, but at the end of the day my knowledge of romantic love doesn’t come from what I’ve read or my abstract knowledge about the role of dopamine. It comes from my experiences—feeling physical attraction, being in relationships, watching my roommate and her boyfriend stare into each other’s eyes with vomit inducing gooey-ness, etc. And, personally, knowing that dopamine is responsible for certain sensations doesn’t take any magic out of feeling those sensations. Love is still mysterious, still complicated, still compelling, even if we do know something about the biology of it.

    Anyway, I suppose my point is that, the same way we hoped we’d find a genetic explanation for everything about ourselves, we’re now hoping to find brain-based explanations. Undoubtedly, our genes and our brains are hugely important in determining who we are, and there is a lot of information we can gain from using technologies that illuminate more about our brains and how they work. This information has potential applications that are both exciting and worrisome, but I think it’s critical to examine any ethical issues from a holistic perspective at the outset. In my opinion, to reduce a man wholly to the physiology of his brain is (if I’m being polite) overzealous, and (if I’m being blunt) reckless, uninformed, and downright foolish. There is so much we don’t understand, so much that escapes our notice, so many variables that affect every aspect of our existence—to tease out one of these variables and believe that we have found the master key that will unlock every mystery is to delude ourselves. Looking at the big picture, realizing our limitations, we’re likely to make much more reasonable and responsible ethical decisions about these new technologies. But if, instead, we allow ourselves to be swept down a neuro-reductionist path, I think we begin to tread some very dangerous ground indeed.

    • Justin_Thomsen says:

      I have to agree with your conclusions wholeheartedly. When I read some of these articles, I often wonder what motivates people to become such radical neuroreductionists. Yes, we humans love to explain things. We’re curious creatures. We want to know why, as any parent or sibling of a 3-year-old can readily assure you.

      But in the end, knowing “why” is not going to get us very far. Even if we can pinpoint the neurological networks that ultimately cause our actions, where does that get us? Frankly, we do not experience life as a series of computer switches and binary options. We live and think on a macro level. We are not conscious of our neurological networks. Our concept of self resides at a higher level. Obviously, something happens physically beneath that level, and one would be delusional to deny that. But to people who live on a macro conscious level, saying the micro level manages everything has no meaning.

      Moreover, if we are going to go down the reductionist path, why do we stop at neural networks? Why not at particular neurons? Why not at particular cellular organelles? Why not at particular molecules in the organelles? Why not at the atoms in the molecules? Why not at the subatomic particles in the atom? We’re all mostly empty space anyway. The person who starts down the reductionist path is the one who can’t ever justifiably stop. It’s like the infinite regression of skepticism.

      Ludwig Wittgenstein, though not studied in this course, has something valuable to say on this subject through his work “On Certainty.” It is impossible for me to truly doubt that I am a free conscious being. It is a framework for my experience–a hinge proposition that the rest of my world is built upon. Ultimately, it is one of the things that frames my existence. When we sit down and invite ourselves to truly doubt that we experience a free existence, we cannot succeed. It is simply impossible. Whether or not we are objectively right in some neuro-reductionist sense is of no matter. Experience shows that we are conscious free beings, and that is all there is to it. That is our context. That is our world. As Wittgenstein says, our sense of freedom and our consciousness have no grounds–nor do they need any. “It is there–like our life” (On Certainty, §559).

      So yes, while we could engage in this extreme reductionism, what is the point? That will not change the way I experience the world. Just like Hume’s argument that the causal nexus is nothing more than a function of the brain associating two events will not change the way he lives his life. Hume is not going to go outside and stand in front of an oncoming train, defiantly denying the existence of cause-and-effect. We are not going to go outside and deny that we experience a free consciousness.

  4. Danny W says:

    In response to the issues confronting the justice system regarding fMRIs, I would agree that the treatment of criminals will likely be drastically changed by the new brain scanning technology. In my mind, however, this change is much less important than other changes the justice system may experience if these technologies are allowed into the courtroom.

    I would say that the chances of someone close to me being murdered are pretty low, and in the event of this happening, I doubt I would care very much about the fate of the murderer as long as he or she was off the streets and not allowed to kill any more people. Court cases that would have a much greater effect on my life are civil cases, which occur all too often in American society, and often have severely negative consequences. fMRI technology could have various impacts on civil cases, the most worrying to me being the scanning of jury members’ brains. Already, lawyers take great measures in selecting the perfect jury who will be sympathetic to their side of the case. In high profile cases, lawyer teams put a great deal of money and effort into researching potential jury members, and often the team with the most money behind it gains an upper hand before the case even hits the courtroom because they are able to select a jury sympathetic to their case.

    If potential jury members are scanned with fMRIs, the process of jury selection will be even more dramatically tipped in favor of the party that has the money to hire neuroscientists to interpret the results of the scans. Someday it may even be possible to decide the outcome of the case entirely based on the brains of the jury members. If this gets out of hand the American legal system will lean even more in favor of the wealthy.

  5. victorqz says:

    Courtney, I am sure you saw this in PNAS this week, but since I didn’t see “Cashmore” anywhere in your post, I’m just bringing it up for the benefit of others in your audience: http://www.pnas.org/content/107/10/4499.full; The Lucretian swerve: The biological basis of human behavior and the criminal justice system

    A lot of specific things can be answered by brain scans, and in a courtroom as well as in various commercial or other ‘life’ applications, this is as far as I’m concerned perfectly useful and valid. If the technology can tell us things and we do know things about associations between brain areas & emotions, etc … hey, if the information is there and we can get it, let’s use it! I should qualify this by saying that although I am a neuroscientist, this comment is not neuro-specific in any way, just me speaking as a lover of knowledge, science, and technology.

    That said, that’s all these brain scans provide: information. Let’s take other quantitative metrics, like those for lying: if you ask someone a question, you aren’t sure if you got an honest response, you look at a zoomed-in video, and sure enough, their pupil dilated, is that going to make you think “yup, I thought they might have lied and now I am SO sure they did. Thank God for instant replay”? I mean, ok, sometimes it might … but other times, it will just be weighted as evidence that they might have lied … so yeah, let’s not get too excited here, esp w/what Courtney said about the limits of fMRI.

    To address some of the deep questions that bear somewhat upon the criminal justice & otherwise applied stuff …
    First, what was said about higher feelings (love), beauty, etc … people who talk about these things as if they were beyond science are not taking into account the myriad studies that show, for example, that certain musical elements seem to have almost universal appeal, regardless of culture, upbringing, etc, and that reactions to these inputs have neural signatures … some things are to some extent hardwired by genetics. Functional localization, after all, is a product of the fact that genetic processes determine over 500 brain regions, none with identical local architectures, connectivity schemes to the rest of the brain, etc. This complexity, and some of the neural phenomena that it causes … let’s not overestimate how far beyond the grasp of science this is.

    That said, the brain is a chaotic system, because of learning and other interactions with the outside world the attractor manifold of this system is being changed (plasticity…) every day, there is stochasticity in reward and decision computations, the brain is highly nonlinear, etc, etc. To take an example from the blog post, if the external world is causing someone to have a bad day, sure, the likelihood is higher that there will be a neural avalanche of anger at some point that becomes a murderous rage … but this is *just* a shift in a probability distribution, and such an event is still possible on a routine day. As another example (one touched upon in my modeling work; a paper that inspired this particular project and is relevant to this sentence: Science 319, 1543), if your thoughts are wandering around in the evening, are you more likely to think about someone that you saw that day or someone that you haven’t seen in months (all other things being equal, i.e. how important they are to you or how salient their personality is)? Obviously, the person you saw that day—the circuitry associated w/them may be primed in one way or another. But this priming (or elevated activity, which a hypothetically good brain scanner could detect) *does not mean you will actually think about them consciously*. While the argument so far is a strike against neuroreductionism (which, Courtney, you were completely correct in comparing in absurdity to genetic determinism), it still isn’t an argument against the mechanistic brain per se … “the brain that changes itself” captures effectively the idea that it is not a mechanistic brain, but given all the processes that are mechanistic, one really is faced with that classic question of how a system with mechanistic parts exhibits emergent behavior. 
For now, I can say that few systems biologists would take a purely mechanistic view of cells, and instead acknowledge their emergent complexity, and surely this logic can be extended to the much more complicated brain.

    On a closing note, the brain is embedded in the world and in a body, but learning a new skill, living amongst a new culture for a month, living for the most recent year while the country is in economic doldrums vs. while the economy looks rosy … we all know, for example, that looting happened in Chile after the big earthquake, but that’s b/c the current situation was shaping the brain’s attractor manifold and decision dynamics … but were all Chileans looting? No. Is that tendency to loot after disaster criminal, opportunistic, or motivated by (understandable) fear & insecurity about the situation? This is a question that would have to be answered alongside a bunch of other questions about psychological tendencies, whose aggregate is probably not reducible. In short, b/c of the plasticity and huge feature space of the attractor manifold that does govern the course of the brain’s activity … yes, brain tech can answer a lot of questions, such as about musical tastes, whether their last statement was a lie, etc. BUT, there are many, many features about which these technologies would inherently have no capability to determine anything with any truth/relevance much beyond the present.

  6. Dmeyers says:

    I don’t see Greene and Cohen’s reductionist theory ever fully coming to fruition. Sure, as the fMRI (or other instances of ‘brain readers’) become more developed, I can see how they would be increasingly used in both society and the courtroom. By reducing or eliminating ‘noise’, and by ensuring the extraneous variables – like the patient moving her head – are controlled, the brain scanner could weed out the ‘truth’ more effectively and accurately than any persuasive lawyer could to a jury. With the assurance that the fMRI is more accurate than eye-witness testimony, or DNA samples, or even photographic evidence (these can be quite well doctored nowadays), the guilty will be convicted more frequently while the innocent similarly acquitted. That seems like a reality in great proximity to our current technological, and legal, abilities.

    However, when technology is used in the courtroom to diagnose, and effectively acquit, the accused with insanity, or to reveal an enlarged tumor pressing on the amygdala, or to link a fit of murderous rage with a traumatic near-drowning experience, now it is being taken too far. If the guilty begin to find solace in knowing they can escape conviction by taking a brain scan, our legal system will turn into a zoo, and society will see a sharp increase in criminal activity.

    Especially with the mention that the judicial system (both the courtroom and correctional facilities) will become characterized less by its intentions to punish, and more by its abilities to rehabilitate via brain surgery. If a suspect is acquitted because of a tumor pressing on a region of his brain, it would be reasonable to say that the tumor, with the advancement of medicine, will be swiftly and safely removed. Now, if the grounds on which the accused (solely due to brain tumor) was acquitted (removal thereof) can be held to his post-surgery life (one absent of a crime-inducing tumor) then this newly-rehabilitated, fully-reinstated citizen should not have the ABILITY to commit a crime. As the tumor was, and must be, the sole reason the crime was committed in the first place, then the removal of the tumor is effectively the removal of the criminal instinct. This is where I find fault in the application of Greene and Cohen’s reductionist argument as a method of diagnosing criminal behavior. Realistically, that post-tumor citizen is fully capable of committing a crime again; it is a simple act of desire; of choice. If Greene and Cohen’s theory were to hold true, then the removal of the tumor (the cause) would mean the removal of his criminal ability. If the functions of the brain determine how the person acts, then the removal of the broken parts must mean the brain is not broken and the person’s actions are determined by a crime-free brain. This situation does not seem feasible to me.

  7. Kendra Postell says:

    I do not think that our understanding of the workings of the brain, and of what the results of fMRI scans really mean, is sufficient to use this technology to make important legal judgments, such as permitting fMRI lie detection in court or using it to confirm eyewitness testimony. As researchers cited in this article stated, there is way too much noise in the scanning process to ever be 100% accurate. I feel that even 95% accuracy is not a feat that will be achieved anytime soon. I do not think these technologies will ever be accurate enough to be permissible in court, because the complex nature of the brain and the possibility of a nonphysical aspect of the self problematize any interpretation science attempts to draw from brain scans.

    I personally am very frightened by the idea of this kind of “mind reading” technology being used to preemptively address possible criminals. There are so many things that affect one’s willingness, or the possibility that they will commit a crime, that it would be unfair to individuals who have “unnatural” brains for us to put so much weight on this physical structure. So much of human experience and existence takes place beyond the physical world, and to approach the physical world as if it is the only world is a very limiting and deceiving approach. One may be physically predisposed to murder because they have an overactive part of the brain or such and such hormone production is out of whack, but these physical problems do not mean that this person is going to commit any crime. Because of the effects of environment and self-control, someone with this predisposition to violent crime could end up being among the most docile people on earth.

    It is unfair to instantly label people based on their brain chemistry because this does not give them any chance to define themselves as something besides their physical brain structure. If one is screened at birth and found to have this overactive violence area, and then put in a special school or program for people with this “condition” he or she will be known and always know themselves as one likely to be violent, whereas if they had been raised without this knowledge of their violence risk factor, it is possible that they could lead a totally peaceful life. Some believe that this preemptive screening and possible subsequent personality rehabilitation programs could “fix” this person, or at least help them know to avoid situations where their violent tendencies might break through. Though these may be positive aspects of prescreening I feel that the issues of labeling would have a negative effect too profound to make adopting this brain scan practice worthwhile.

    I do not believe that brain scans capture the human essence, or that knowing what chemical changes in the brain cause each of our mental states means that we do not have free will. As stated in this article, it is entirely possible (and I agree with this view) that though we may know what these chemical changes in our brains cause, we do not know what causes the chemical changes in the first place. In other words, it is entirely possible that we have some invisible and nonphysical free-will force that instigates the physical brain changes. For now, we simply do not have enough information to assume one view or the other.

    I also believe that the humanities should not be ignored in these kinds of debates, as they are now. In fact, I feel that science and the humanities should go hand in hand, so that the practical application of an invention and its ethical issues can be addressed simultaneously, and scientific progress is not just left to run rampant and gnaw away at our true values. With this joining of the humanities and science, we can be sure that humans will use technology to their advantage, and not be trapped in a pace of progress that is too demanding and detrimental to our own health and the health of our planet.

  8. StephieDav says:

    While I understand and acknowledge the amazing benefits this kind of technology could afford us, I can’t look past the drawbacks. Sure, it would be amazing to look at a person’s brain and tell, based on their neural firing, whether or not a murder weapon is familiar to them, and establish guilt that way, but on the other hand, what if they’re found guilty unjustly? If someone comes into my house and murders my mom with our kitchen knife, and they show me the knife and of course my neurons fire, would they blame me? Of course they wouldn’t be able to establish guilt solely off of this kind of reasoning, but what if this kind of reasoning established guilt in other ways? The science concerning the brain is not exact; it’s constantly being modified, and we discover things we didn’t know the brain was capable of all the time. Just because everyone else’s brain lights up in one area when they’re shown a cat, and mine doesn’t, doesn’t mean that there’s anything wrong with me; it just means I’m different. If we rely too heavily on this kind of technology for answers, I think it will inevitably prove to be deceptive. One would hope that, much like with polygraphs, brain scans would be used in moderation and not necessarily as the deciding factors in investigations.
    Once we decide to use this kind of technology, it will be hard to restrain its usage. Much as we see today, people use “temporary insanity” to justify crimes all the time. Although those pleas are now considered with more scrutiny, something tells me that just as patients can fool psychologists, candidates for fMRI scans may be able to fool doctors. We know the human brain can be conditioned; what if I force myself to think of an apple every time I see a pencil? Will I be able to fool scientists looking at my brain, trying to establish whether or not I’ve ever seen the murder weapon before?
    It’s not just the accuracy of the technology that concerns me; it’s the breach of rights that goes hand in hand with the scans. The Patriot Act requires anyone who possesses any kind of e-mail or medical/financial records to cede them to the authorities if asked (or not). Will the government use the Patriot Act to force me to give them access to my brain? How plausible is this? Some people have the notion that they “have nothing to hide,” and thus are okay with allowing access into their brains, but they are perhaps ill informed and unconscious of the effects this can have. Not only would this be a violation of privacy, it would unfortunately be a legal one. I would not like to see this sort of Orwellian “thoughtcrime.” I hope that our right to privacy, and our right against self-incrimination as stated in the 5th Amendment, will not someday become obsolete. I hope that, as Hobbes and others believed, humans will understand that the preservation of life is what we are trying to accomplish, and that allowing invasions of our mental privacy and self-incrimination goes against that goal.

  9. Molly Quigley says:

    Personally, I would be very uncomfortable with using fMRI scans in courtrooms until the technology is absolutely perfected. As demonstrated in the salmon experiment, this technology, although groundbreaking and promising, is nowhere near perfected, and there is still so much we must learn in order to master it. It is not worth the risk of wrongly convicting someone and putting them behind bars for life, or executing them, just to use this technology. It must also be perfected before use, because it has been proven by so many tests to be so powerful in shaping people’s decision making. If the scans came to an incorrect conclusion, the jury would most likely be swayed by this information and wrongly convict an innocent person. So, I am very hesitant about this technology in the courtroom until it can be proven 100% accurate.
    The question of free will that these brain scans pose is completely and utterly ridiculous. It seems obvious to me that our decisions are not merely mechanical functions in our brains. As Morse says, “Brains do not commit crimes, humans do.” Certainly, humans possess more than just a brain that makes us individuals. I believe all humans are made up of a soul, or an immaterial part, that makes us who we are and also plays a role in our morality and beliefs, which ultimately influence our decision making. While I do believe it is legitimate to think that a tumor pressing up against an amygdala can make a person less responsible for violent behavior, it is irresponsible to believe that this should apply to all humans. All healthy, functioning humans can control their actions. Every single one of us has experienced saying no to our brain’s temptations because we know it is not the right thing to do, such as saying no to that second piece of chocolate cake. I strongly hope that this technology does not influence people to believe that we have no responsibility for our actions and that our decisions are merely byproducts of mechanical firings in our brains, because this is simply not the case. We, as human souls and people, make our choices, which, in turn, decide who we are.

  10. kevin Laymoun says:

    The subject of neuroscience, as it relates to fMRI and brain reading technologies, carries some of tech’s greatest possibilities but also some of its greatest dangers. Neuroscience will lead to technological breakthroughs that will make life easier but at the same time threaten to betray our classical ethical values. Technologies like fMRI readers can scan the brain, providing answers to questions we may not want others to access. Imagine a society where the spoken word is disregarded in favor of a quick scan of an acquaintance’s brain. This idea is not implausible. Since the beginning of man, we humans have had an insatiable thirst for truth in the office, politics, advertisement, and even everyday conversation. Lying has been a tool used to avoid accountability, a staple of society, and the introduction of these new technologies could spell the end of the lie. Or that is what we are led to believe by researchers touting the real-life applications of these technologies. In actuality, such “brain-reading” creations will bring about the deterioration of society. The human being will become an object, to be read like a book, and our lives even more restricted.
    Technologies such as fMRIs threaten our free will and our right to free thought. By seeing brain images of how we react to certain products or scenes, companies and agencies can unlawfully exploit such information without our knowledge. Even writing the words “free thought” is difficult to stomach. In the near future, Americans might be fighting for “free thought” rather than the classic concept of “free speech,” because technology threatens to reach deep into the depths of our minds, into what we hold so dear. The reading of one’s mind might be useful but is ultimately unethical, because it betrays that individual’s innate privilege to his thoughts. Furthermore, humans have rights to their thoughts because thoughts do not define people. People define people; that is to say, many thoughts pop into our heads every day, but many of them we do not choose to share, whether verbally or through action. Those thoughts that we choose to share are the ones we are allowing others to judge us upon. So the neurotechnologies being developed would allow judgment upon the thoughts and feelings not yet expressed by individuals, and would be unethical in their portrayal of people.

  11. Anisha says:

    Though we have made great strides in the development of technology in the past couple of decades, these technologies have yet to be perfected in giving accurate results. Therefore I don’t believe that fMRI technology is ready to be used in the courtroom. One can argue that there is going to be a lack of accuracy in any form of technology; however, this fMRI technology is not familiar to us yet. Meaning, even if it were used in the courtroom, the average person on the jury may not be able to fully comprehend the data without a scientist’s analysis. I think the technology would be great for helping a witness recognize a suspect when the memory is stored in the subconscious, but beyond that example I’m worried the technology will be misused.

    As mentioned in the article, people will begin to blame their actions on the mechanical functions of their brain, not on their own choices. I don’t agree with Cohen and Greene’s viewpoints on free will. I think regardless of what goes on in our brain, there are other factors that influence the decisions we make. The example in the article was about whether to pick soup or salad. Maybe your brain, after weighing the pros and cons, tells you to pick the salad. But after you select it, you see your friend eating the soup. Smelling the soup, seeing your friend eat the soup, and maybe even tasting it makes you change your mind.

    If fMRI were allowed in the courtroom, it should be only once it could reach 96% accuracy (it’s unrealistic to shoot for 100%). Even then, I don’t think it should be used for extreme cases. It shouldn’t make up the entire defense, but rather support a view everyone already has. I don’t think fMRI scans can fully explain who a person is, which is why it would be important to combine this method with many others. I think that this technology should not be available to the general public. It would only create chaos, because people would no longer take responsibility for their actions.

  12. BonnieGiven says:

    Before answering the questions posed, I have to say that it is incredible to me that fMRI technologies hold such a prominent place in our society when they are only in their initial stage of development. Although it is well known that much still needs to be discovered about these technologies, there is already discussion of whether or not they should be used in a court of law. To begin, I definitely do not think that fMRI technologies should be used in lie detection. As shown above, these technologies are quite inaccurate. There is no way we could call them reliable when there are responses of activity in the brain of a dead salmon. Obviously, fMRI has many faulty qualities. Much more needs to be discovered before it is used in a court of law and before it becomes a determining factor in a major, or even a minor, trial. The same goes for identifying thoughts. My major problem with this is that there are people with cognitive disorders who may be telling the truth but have brain signals that say otherwise. Suppose that a person with OCD knows that she did not cheat on her boyfriend but obsesses about the possibility that she did, and her disorderly brain convinces her that the event actually occurred. Would an fMRI test prove that she is lying because parts of her brain light up when she is probed with the question? These sorts of situations need to be taken into account, and scientists must know how to deal with various kinds of brains before they start using fMRI technology more frequently. In terms of criminal behavior, I believe that people will start buying into the “my brain made me do it” scenario. If this occurs, how many guilty people will be let off simply because they use the excuse that they have some sort of deficiency in their brain that they can’t control? In these terms, I definitely disagree with the side that Greene and Cohen take.
    Free will is a fundamental aspect of human nature, and people are in control of their actions. Morse says that “Brains do not commit crimes, people commit crimes.” I agree with this statement. We don’t simply live day to day, hoping our brains won’t lead us to do something bad and blaming our own personal behavior on our brains. Punishment exists so that people can learn from THEIR ACTIONS. If we start making everything a function of our brain, would punishment even exist anymore? Would jails begin declining in population because more and more people attribute their behavior to non-existent brain deficiencies? These are important questions to consider when dealing with fMRI. In terms of commercial purposes, it would be difficult to completely limit the use of fMRI technologies among the public, because people always find some way to get what they want. From my personal standpoint, I think that if one feels the need to use fMRI technology on one’s spouse or employee, one should reconsider one’s involvement with that person. Although there are certain circumstances where people can be fooled, many times it is not difficult to distinguish an honest person from a dishonest one. We need to stick to our natural methods of determining whether or not a person is good to associate with (such as how they make us feel, how they affect our lives, and how their actions coincide with their statements). People aren’t foolish, and I don’t think we should depend on technologies to test honesty in public situations. The limits of such use are very hard to predict. What happens when parents start testing their children every day to see whether they really went to school or not? The possibilities are not only ridiculous but also endless.
    As stated above, I do not believe that neuroscience refutes free will. Free will exists constantly in everyday life. We have the free will to get up or sleep all day, to go to college or to drop out of school before getting a high school diploma, to throw a lamp at our sibling when we are mad or to control our anger. There have been many times when I have done things because I felt like I couldn’t control my anger. However, thinking back, I realize that if I had put more effort into controlling my emotions, I would have been able to restrain myself from performing a bad action. Of course, there are people with serious mental deficiencies who should be taken into account. Nevertheless, I don’t feel that our entire population can be put into this pool, and I don’t believe this means that the concept of free will is completely absurd. Finally, I agree with Jonah Lehrer that the arts and humanities introduce facets of the human condition that cannot be explained by technology. The arts and humanities introduce the sense of mystery of human life that cannot be explained in technical, scientific terms. For example, we read love stories that talk about human chemistry. Of course, these ideas can’t be fully explained, but we like them because they allow for individual interpretation of some of the more interesting ideas of life. What would happen if someone were to explain that people fall in love based on chemical reactions in the brain? This would completely rule out the idea of soul mates and romantic encounters. Even more, relationships take work, and sometimes the different ways certain people’s brains work create the most difficult problems for two people. Nevertheless, since the beginning of time people have either been able to work through it and be stronger in the end, or realize that their relationship isn’t for the best.
    Science can’t be used to explain everything in life, because there are so many circumstances that happen by chance and others that simply can’t be explained. Part of being human is letting some things just be and not having an answer to every circumstance. Literature and the arts help emphasize this idea. Today, as technology becomes more prominent in society, people are beginning to view it as the ultimate answer to every situation. However, if we are to subscribe to this sort of lifestyle, I believe that we will find ourselves living in a world that is extremely unsatisfying.

  13. christine says:

    When considering the use of fMRI technologies in the courtroom, it seems to me that it would be difficult to take anything but an “all or nothing” stance. As we read in “Seeing is Believing,” people as a whole tend to put a good amount of trust in technologies and things that are visually convincing. We have been fed the knowledge that technologies are an error-proof replacement for human work, and hence a technology’s verdict can be trusted just as much, if not more, than a human verdict. As comments such as Molly’s and Danny’s point out, if these technologies are introduced into the courtroom, they will have powerful effects in swaying the jury, whether the results are true or false. Our society as a whole must either decide to trust completely in these technologies in our court systems or disregard them altogether. The problem is that it is almost impossible to prove their accuracy 100%, because while we have the ability to test the use of fMRIs in laboratory settings, the results in a real-life court case would vary dramatically. While some suggest that even when the technology’s accuracy hasn’t been proven, we could still use it strictly as a sort of “food for thought” evidence, much like a witness, I disagree. Presenting this information could greatly sway the jury no matter the truth, because of society’s willingness to trust in technologies.

    However, outside of a court setting I believe many of these technologies could prove to be useful once they have been developed and tested at a more trustworthy level. I realize that it is most likely impossible to be 100% certain in these situations also, but outside of an uncontrolled court setting testing would be much more accurate. If we could use these technologies in criminal lineups and confirming witness testimony due to the way certain parts of the brain “light up”, it could provide major advancements in confirming or disputing verdicts.

  14. Joshua Dunn says:

    In my mind, the main problem with incorporating fMRIs into legal procedures is the fact that such evidence assumes that the “identity thesis” is true and applicable to all scenarios. (The identity thesis states, rather broadly, that thoughts and mental states are always accompanied by a corresponding physical state.) If brain scans ever become the new standard for evidence in the courtroom (as they increasingly are), it will mean that our society at large has generally accepted this thesis.

    Now there’s little doubt that certain thoughts, feelings, and emotions correspond to specific areas in the brain, and these areas can be studied by neurologists who can then make a statement as to what that physical manifestation might mean. Despite this, I still think the science at hand is too early in its development to be readily depended upon in legal cases.

    Even if neurologists were one day able to say without a doubt that certain mental states were essentially brain states, this thesis would seem to imply that we are all complex biological machines that fall victim to the causal world around us. If this is true, then all our notions of individual responsibility and free will must be thrown out and redefined. Ironically enough, the neurological images we use now to convict (or exonerate) people from certain accusations might one day be used to show that all humans lack the free will essential to hold someone accountable. While this possibility may seem unlikely, it’s something to keep in mind when we assume that fMRIs have legal significance.

    • msavage says:

      I agree with Josh that this debate centers on the question of identity and whether we are simply our physical selves, or if there is perhaps another part of the human essence that cannot be categorized in biological or physical terms. What is ironic about the case made in favor of fMRI technologies is that it in fact makes a distinction that undermines its own argument. If the claim is made “my brain made me do it” or “I am simply a victim of the biochemistry of neurons and synapses,” then what you have asserted is that there is another “you” in the scenario. To be a victim would imply that there is something outside of you that is acting upon you, which then implies that you exist independent of whatever it is that is causing the action. So even in the rationale supporting fMRI technologies, we still deal with this concept of ourselves as both a physical and a metaphysical being. There is something else to us that we cannot fully grasp, and I would argue that the evidence of brain plasticity supports the idea that it is this non-physical self that really dictates our lives and actions.

      The perspective of neuroplasticity is particularly compelling, because it speaks to Aristotle’s discussion of virtue ethics. What it comes down to is, human character is formed through habituation of action, and that process is entirely in the hands of the individual. While it becomes difficult to change these habits once they are ingrained, a rational human being has the ability through choice to develop new habits and what are referred to as virtues and vices which culminate ultimately in a person’s character. Brain plasticity supports this ethical theory in its assertion of the power of the human mind to form new connections and pathways that didn’t exist before. Ultimately we are in control of the person that we want to become, and the sooner people own up to that fact the better off society will be.

      Imagine if we accepted that we could not be held responsible for our character. What society would we be living in? It would actually create a weaker society in which there is no sense of moral responsibility. Children could not be disciplined, criminals could not be prosecuted, and no one would take responsibility for any wrong or harm that they do. Part of the maturation process is accepting the consequences of your actions, be they harmful or beneficial, and asserting these technologies would change that and leave humanity in a constant state of immaturity which would be ultimately harmful for us.

  15. rachel says:

    I have a difficult time accepting the claim that Greene makes—that “you are your brain; nothing causes your behavior other than the operations of your brain.”

    I mean, I think to accept the legitimacy or validity of these brain scanning technologies in determining our thoughts, motives, memories, or incentives is in a sense denying the human’s fundamental power of free will. If we reduce people’s capacities for action to their brain wiring, I agree that “a person isn’t any more responsible for their actions than they are for having a defective heart or malfunctioning kidneys.”

    I’m more inclined to agree with Morse’s view that “brains do not commit crimes; people commit crimes.” I suppose it’s because I have a significant amount of faith in a sort of metaphysical, spiritual ability that we all possess—that is, that our wills and our destinies are not predetermined or limited by the purely physical, neural workings in our brains. I think the human brain is far too complex to be accurately and completely dissected and analyzed with the limited means of neuroscientific data that we have right now. The human condition is so incredibly multifaceted. These technologies leave out any room for, like the article pointed out, “arts…philosophy, or other modalities of understanding.”

    I think a good argument for this is that our brain’s physical organization and chemical makeup can actually be altered by the repeated decisions we make and actions we take—like, for example, we can actively train our memories to be more acute, expansive, etc. And there is actually a significant amount of scientific research that supports this. Furthermore, not everyone’s brains are the same, so how could the legal or criminal justice system devise any sort of objective, universal criteria for determining what physical parts of the brain reveal what evidence?

    As far as allowing fMRI scans into the courtroom as evidence, I don’t think we’re nearly ready for this—there is way too much controversy over the matter to be able to make a decision any time soon. I think an interesting thing to point out, however, is whether or not these scans could be considered a violation of the 5th Amendment—the protection against self-incrimination—or if they could be considered evidence in the category of DNA, blood/semen samples, etc. If a defendant’s brain scan suggests one thing, but the defendant’s verbal claim asserts another, what should the jury believe? I guess what this issue boils down to is whether or not the scans reveal absolute, clear, indisputable scientific evidence, or if, as I mentioned before, they could misconstrue the prevailing fundamental human will and other dimensions of the human condition undetectable by any sort of technological device.

  16. Jorge Castrillo says:

    The technology behind fMRIs is very primitive. For me, there is not enough real science behind fMRIs to begin using brain scans as evidence in trials. As a society, we do not understand enough about the human brain to really decide whether someone will do X or Y, or whether they did such a thing because their brain is a certain way. Also, the question of current state vs. state when the incident occurred comes into play. We know that the basic brain does not change, but what if a person’s brain were to make slight changes that cannot yet be perceived by humans? This slight change would make it so that a scan made on October 11th could mean something completely different than a brain scan on October 12th. Only under the circumstance that there is a giant catalog of brain activity and actions, showing a relation between certain brain activity and certain actions, should courts be able to use fMRIs as conclusive evidence. However, fMRIs should be allowed as evidence inside the courtroom. Psychology, for example, is far from being a concrete science, and yet it is allowed to be presented in court cases. fMRIs can be evidence toward a particular case in their current state, but they could never be considered conclusive evidence by themselves. There are too many questions regarding the brain.
    With that said, I also oppose using fMRIs for preemptive screening, confirming eyewitness testimony, and identifying thoughts, because the science behind it is weak.
    However, I do support using fMRIs for commercial purposes. I do think that commercial use of fMRIs will lead to something similar to the CSI trend, where people began to think, “If I saw it on TV, it must be true.” With the rise of shows such as CSI, people began to think that DNA evidence was conclusive evidence. DNA evidence supports theories and physical evidence, but it cannot say that this or that happened.
    My main problem with fMRI technology is what it says. It says that people who show certain activity within regions X, Y, and Z have a higher chance of doing A in a given situation. Would it then be okay to treat someone differently because of their brain activity? Is it just to kill someone who has a 99.9% chance of killing one person over their entire lifetime? What if later-developed technology proves all of that wrong? Even if future technologies do not prove that people are less or more likely to do certain things in certain situations, what could be the effects of negative reinforcement? To clarify, I have an example. A brain scan shows that Joe is highly likely to be a murderer. Counseling would be advised, but what if constantly telling Joe that he is a killer ends up pushing him over the edge? That is a chance I am not willing to take. Reason and potential for a certain action do not prove that a person will commit that action.
    There are so many mysteries regarding humans. fMRI technology lacks the development to show anything other than a nice picture.

  17. Sara Phillips says:

    Chemical reactions that are present in our brains do not accurately define the choices we make as individuals. Although an fMRI may accurately scan the brain activity and chemical reactions that take place in our brains, we are drawing conclusions about the results of these tests and linking them with specific decisions that an individual may make. Although someone may be more inclined to make a certain kind of decision based on his or her brain activity, that does not necessarily mean that an individual made a certain decision as a direct result of the chemical reactions of their brain. There are several factors that go into making any decision and it is simply naïve to say that a complex decision is a result of one factor that we frankly don’t even fully understand.
    By allowing individuals to say that their brains determine the choices they make, we will ultimately allow society to pass off responsibility for its actions. People will constantly use the excuse that their “brain activity” forced them to make certain kinds of decisions. This is definitely not feasible or helpful in a courtroom, where something obviously went wrong and someone must be held accountable.
    In terms of bringing brain scans into the workplace, how could employers not want to perform these scans on their potential employees? They would be able to find the “perfect” schoolteacher, scientist, newscaster, or accountant. Individuals would then be limited to the jobs they were supposedly best suited for. Perhaps this would become more of an obligation, and individuals who refused to work where they were best suited would be seen as failing to contribute to the growth of society. Ultimately, individuals would become more like robots, forced into the working situation where they are best suited rather than where they would most enjoy themselves.

  18. Mike says:

    fMRI sounds very interesting, but this is a very slippery slope. As with any digital imaging, there will be some error. While the concept is unique and makes sense, it is probably not going to be 100% accurate when deciding whether or not to convict someone on trial.
    It sounds like a good tool to use in conjunction with other means of investigation.
    It should not be the primary determining factor.
