Thursday, January 15, 2004
Session 4: Neuroscience and Neuroethics:
Reward and Decision
Jonathan D. Cohen, M.D., Ph.D., Professor of Psychology,
and Director, Center for the Study of Brain, Mind and Behavior,
Princeton University
CHAIRMAN KASS: Let's proceed, if we might. I'm in a
different spot so I can see the PowerPoint presentation as
well.
Dr. Cohen, would you just simply begin, and we look forward
very much to what you have to say.
Mary Ann, why don't you take a seat over here so that you
can see the presentation. Thank you.
DR. COHEN: More importantly, so I don't shine the
laser pointer in your eyes.
So thank you very much for inviting me. It's quite a privilege
to be here. I have to say, the strongest impression I have
so far is how cordial the proceedings are. One from the outside
imagines that debates about such hot topics are intense and
vituperative, and they're just intense, which I found really
enjoyable so far. And I hope it continues that way.
So I was kind of — I was mandated to do something slightly
different from what Dr. Michels did, which is to give you kind of
a case in point. And I struggled long and hard over what
case I should give, in point, because I think there are many
that actually are not the obvious ones to discuss, but that
nevertheless you should be aware of.
And what I decided was to stick with the one that struck
me first I should do, largely because it's the most accessible
one. And it certainly — I was kind of reaffirmed in our
discussion in the last hour that that was so because many
of the topics and issues that I hope to address have already
come up. But what I want to do, just to guard against provincialism
as best I can, is to just put up a couple of issues that I
think are really important, maybe some cases — and I have
this one here starred because I think it's maybe more important — that over the course of this discussion, that is, your
discussion over the coming months and years, you will want to
be attentive to, because they will give you an eye on the
long picture and not just the short one.
And maybe the most important one is the fact that what is
really lacking in neuroscience, which is a theoretical framework,
is just starting to show the seeds of development, at least
within the domain of understanding high level human behaviors.
It seems inconceivable, as people have expressed already today,
that we could ever really come
up with an understanding of something as complex as the brain
in a way that will tell us how people will behave, but I would
submit that there are just the beginnings of that in the theoretical
world, and there are a couple of developments that I highlight
here.
One is the diffusion model, and it's something that would
take an hour to describe to you but I think is easily understood
when it's properly presented, that is an instance of where
we're really gaining an understanding in precise form of the
kernel of decision making, at least in simple cases. What
of it is deterministic or how deterministic it is, where stochasticity
enters into it, and in fact we're in the process — this
is something I'm actually involved in. We're in the process
of writing a paper about this called "The Physics of
Decision-Making" because literally the model that people
are beginning to converge on is one that comes out of physics.
It's just an extremely interesting and, I think, important
development that you'll be hearing more about and that I think
you need to keep your ear to the ground with regard to it
because I think this really sets the stage for answering some
of the questions of the sort that came up this morning or,
sorry, earlier today.
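The diffusion model he alludes to can be sketched in a few lines of code: noisy evidence accumulates toward one of two decision bounds, and whichever bound is hit first determines both the choice and the response time. This is an illustrative simulation only; the drift rate, noise level, and threshold below are arbitrary choices, not values from any study he mentions.

```python
import random

def diffusion_trial(drift=0.1, noise=1.0, threshold=10.0, dt=0.01, seed=None):
    """Simulate one trial of a drift-diffusion decision.

    Evidence x starts at 0 and accumulates with mean rate `drift`
    plus Gaussian noise until it crosses +threshold (choice "A")
    or -threshold (choice "B"). Returns (choice, decision_time).
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        # Euler step: deterministic drift plus a stochastic increment
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ("A" if x > 0 else "B", t)

# With a strongly positive drift, choice "A" should dominate over many trials
choices = [diffusion_trial(drift=2.0, seed=i)[0] for i in range(200)]
print(choices.count("A") / len(choices))
```

Lowering the drift toward zero makes choices slower and more random, which is exactly where the model locates the deterministic kernel of a decision versus the stochasticity he describes.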
Another area that is probably more developed than this one
is our understanding of reward-based reinforcement learning
and how that affects preference. Some of the studies that
you read about in The New York Times about fMRI being
able to, as I mentioned earlier, predict whether somebody
likes Coke or Pepsi were actually inspired by theoretical
work regarding the neural bases of reinforcement learning
and how that leads, over time, to people's preference behavior
or choice behavior. And this is something where there is
really mathematical knowledge emerging about how these systems
work and that are making very specific and precise predictions
about neurophysiology, about human behavior. So that's one
important domain, I would argue maybe the most important over
the longer term.
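The reinforcement-learning account he refers to is typically formalized with a reward-prediction error: the learned value of an option is nudged toward the reward it just delivered, and choices follow the learned values. A minimal sketch, in which the drink labels, payoffs, learning rate, and softmax temperature are all made-up illustrative numbers, not figures from any study he cites:

```python
import math
import random

def simulate_preference(trials=1000, alpha=0.1, temp=0.2, seed=0):
    """Learn a preference between two options via prediction-error updates."""
    rng = random.Random(seed)
    value = {"coke": 0.0, "pepsi": 0.0}          # learned values
    true_reward = {"coke": 1.0, "pepsi": 0.5}    # hypothetical payoffs
    for _ in range(trials):
        # Softmax choice: the higher-valued option is chosen more often
        z = sum(math.exp(v / temp) for v in value.values())
        p_coke = math.exp(value["coke"] / temp) / z
        choice = "coke" if rng.random() < p_coke else "pepsi"
        reward = true_reward[choice] + rng.gauss(0.0, 0.1)
        # Dopamine-like prediction error: actual minus expected reward
        delta = reward - value[choice]
        value[choice] += alpha * delta
    return value

vals = simulate_preference()
print(vals)
```

Over trials the learned values converge toward the true payoffs, so the better-rewarded option comes to dominate choice — the sense in which this kind of model links reinforcement learning to preference behavior.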
The other is "wet" neuroscience. I put "wet"
in quotes because it's not necessarily always involving electrodes,
but largely studies that involve neurotransmitters or direct
neuronal recordings in nonhuman species. And, again, it's
begun to paint a picture that is interacting pretty tightly
with at least some of the theoretical work. For example,
our understanding of reinforcement learning was originally
inspired by interest in dopamine and now provides a pretty
mathematically precise account of what dopamine is doing,
at least in certain circumstances.
Recordings from the prefrontal cortex, an area of the brain
that I'll be talking about from a different perspective, is
really the origin of much of the work that's being done in
humans now; that is, recordings from monkeys performing tasks
that are facsimiles, at least, rough facsimiles of the sorts
of things that we can have humans do in the laboratory, and
perhaps most recently, a renewed interest in orbitofrontal
cortex as an important area in the evaluation of actions or
stimuli and their consequence for subsequent action.
So these are all — this is obviously a highly kind of edited
list of developments, no doubt biased by my personal interest,
but just to point out that there's a lot going on in neuroscience
that we're not talking about today and that nevertheless is
as, if not more, important than the things we are talking
about. So I would really encourage you to find representative
experts in these areas and others to come and talk to you
about extremely important developments that I'm not going
to touch on.
Then there's a third area, neuroimaging studies, which we
did spend a lot of time talking about and which I will actually
focus on for the rest of my comments. And I want to start
by saying a word about some of the caveats and concerns that
one should have about these studies. Some are obvious. For
example, imaging studies are complex. I mean, you see this
beautiful pseudo-color map on the pages of Tuesday's
Science Times and you think you've actually learned
something about the brain, but actually what those pictures
don't reveal is the many, many — well, I would venture to
say hundreds of hours of analysis that go into — really
complicated analysis programs that go into producing those
images. And whenever you have a complicated analysis program,
you have to ask whether what you're looking at is what's really
going on or what the person who wrote the program was biased to
show you.
So there's a real danger in interpreting this data without
a keen understanding of how the data were generated and, similarly,
with how the studies are carried out. And I'm sure you all
are at least roughly attuned to some of these issues. I'd
be happy to say more about them later, if you wish, but it's
really important to keep in mind that all of the studies that
we do are subject to all kinds of assumptions, of limitations,
and it's very tempting to jump on the conclusions without
taking account of the uncertainties involved in many, if not
most of these data sets. And that factors directly into the
probabilities that Dr. Michels was discussing earlier.
I'll say a word or two about that later when I talk about
a couple of specific studies, but that's important to keep
in mind.
At the other end, they're really crude. As complicated as
they are, they're telling us very, very crude information.
And again I won't belabor this because this has come up already,
but just to drive home the point, we're not actually measuring
brain activity with any of these methods that are used in
humans. We're measuring blood flow, and blood flow is a really
poor approximation of brain activity. It's off by about four
or five seconds from when the neural activity occurred. It's
occurring on the order of centimeters, maybe millimeters,
when neural activity at the level of single neurons is occurring
on the order of microns. So you're off by two or three orders
of magnitude in the spatial resolution of at least some of
the relevant events.
Now our hope is that nevertheless by summarizing over many
kind of probabilistic or stochastic neural events we see kind
of the forest for the trees, as it were. So that in fact,
these methods are telling us something of meaning, if not
everything there is to know, about neural function. But that
said, they're crude relative to the unit of computation in
the brain which could arguably be said to be the single neuron.
So it's important to keep in mind that they're complex, that
they're crude, but maybe most importantly and at least most
importantly for ethical and moral concerns, is that they're
really sexy. And I mean that in a technical sense. I mean
people love this stuff. They eat it up. They want to know
about it. They're persuaded by it and they want more. And
that lowers the thresholds for keeping these considerations
in mind. And so just to make that point, let me show you
something that I got off the news I think it was a couple
of weeks ago.
I'm sorry. I've got to move the microphone. Tell me if
you can't hear.
(Video plays.)
"You've heard the saying an apple a day keeps the doctor
away, but a Japanese study finds just peeling the fruit might
be enough to do your body good or at least your brain. Fourteen
adults in the study peeled or just touched apples with a knife
showed stimulation to their brains. Those who actually peeled
the fruit showed stimulation to their frontal lobes which
is the most highly evolved section of the brain."
(Video ends.)
I mean I could just stop there, right, with regard to this
point. I mean maybe there's something interesting in that
study, but if it is, it's not what was communicated, right?
And this is scary because it means in courtrooms, on the floors
of Congress, in rooms like this, brain imaging data carries
a persuasive and deceptive ability that really has to be cautioned
against. Okay?
Maybe that's the most important thing I have to say to you
today because this is already on the minds of marketeers.
I've had unnamed, but very well known and reputable and influential
concerns, commercial concerns, approach me with an interest
in doing brain imaging studies. And when I told them that
I thought their behavioral measures were actually much better
than what the brain imaging data could tell them right now,
they said they didn't care. And I said, well, why don't you
care; and they said, well, can you just show us that Product
A is going to activate an area of the brain that Product B
doesn't. Well, sure, but Product B will activate a different
area. Well, we don't care. We're going to go into the pitch
with the picture that shows Product A.
And I was tempted to say, well, you know, give me my consulting
fee and promise you'll never publish it and sure. But, I
mean, that tells you the power of this method and the perverse
uses to which it could easily be put.
I'm sure I'm not telling you anything you don't know, but
I think it's really important to punctuate that. And I think
maybe the most immediate ethical concern you have is exactly
this: what to do with the status of these sorts of data, which
very easily overstep their legitimate bounds.
Okay. With those caveats, I am going to tell you about brain
imaging data, hopefully though with a little bit more responsibility
than that news report.
What I'm going to do is tell you about two studies that we
did. I hope I'll have time to get through both. I'll go
through the first one in a little bit more detail because
it will require a little bit more explanation with regard
to the methods and then maybe whisk through the second one
just to make the higher level point. But both are designed
or chosen to make a higher level point about what we're learning
about the nature of the brain and how that gives rise to people's
higher level behaviors. And in this case I picked moral decision-making
and economic decision-making as two examples of high-level
behavior that may begin to change our view of what economic
decision-making is about and what rational behavior really
is and what morals and ethics are about, at least as they're practiced.
So an issue that's already come up is does this sort of information
have any prescriptive value or is it just descriptive. And
at the moment I adhere strongly to the stand that it is just
descriptive. I'm certainly not licensed, nor do I feel qualified,
to tell you what prescriptions it should engender, nor do
I feel that the data themselves are yet reliable enough for
those of you who are licensed to do so, to do that with these
data.
But the point is that sooner or later I think they are going
to give us reliable data that are going to raise prescriptive
issues, some at the level of policy with regard to individuals
that Dr. Michels talked about, but, I would conjecture, some
at the very most fundamental level of what it means for something
to be moral or ethical.
And I know this is a contentious point. It's one that I
think begins to address the question that was asked earlier
about the kind of epistemological status of these data. Again,
I don't want to say that the data I have are at that point,
but I think that over the longer term, neuroscience can begin
to shape how we view ourselves as ethical and moral creatures
or as rational creatures and may even inform us about that
in ways that will lead us to change it. And that's the most
fundamental kind of impact that neuroscience could have, and
I think very well might have.
So the two examples I picked. One is about how people are
kind of inconsistent in their moral behavior and the other
is how they're kind of suboptimal in their economic behavior,
and I'll explain what I mean by that in a moment.
The question they pose for me is: Why do people behave in
these, loosely said, irrational ways? Why are they inconsistent?
Why are they suboptimal? And I'm going to offer some speculations
that have to do with evolution. I'm not an evolutionary biologist,
so I'm getting well beyond my expertise here too, but I find
it impossible not to consider this, and again I think it helps
illustrate the sorts of fundamental questions that these
data can raise.
And I'm going to offer a conjecture at the end that I will
describe, and in so doing, define for you what I mean by
vulcanization. I'll leave that as a little mystery until
I get to the end.
Before I go any further, I want to acknowledge the people
who really do the work. At this point, I'm just kind of the
mouthpiece for some really talented and devoted scientists
who are really not only technically gifted, but I think are
giving really hard thought to some of the questions that are
being discussed here. In particular, Josh Greene, who did
the study on moral reasoning that I'll talk about; Lee Nystrom,
who runs my lab and actually oversees all the imaging studies
and then Alan Sanfey who is not pictured here — this is
when we first got our scanner — but was responsible for
the study on economic decision-making.
Okay. So let's start with the moral decision-making test.
Here's a dilemma that moral philosophers have been toiling
with for the past couple of decades. It's called the Trolley
Dilemma. Some of you are probably familiar with it, but under
the assumption that not everybody is, I'll go through it quickly.
There are actually two scenarios that constitute the dilemma
as a whole, and I'll describe each one. In the trolley scenario,
there's a trolley or a train coming down a track and it's
actually — these pictures are not the best, I apologize.
But it should show that there's a switch point on the track,
and it's set so that the trolley is going to come down
and kill these five workers. But you're a switch
operator on the track far enough away from the junction itself
that you can't alert anybody, but you can act quickly enough
to flip this switch which would cause the trolley to go along
this track instead of this one, killing this workman instead
of these five. And the question is: is that an ethically or
morally acceptable action? I'm not asking how difficult it
is, whether you would like to be in this situation, whether
you actually would bring yourself to do it, but simply the
abstract question: Is it morally or ethically sound to flip
that switch? And let's just take a poll.
Those of you who think it's ethically sound to flip the switch,
please raise your hands.
PROF. GEORGE: Who are the guys?
DR. COHEN: They're workmen.
PROF. GEORGE: We need a little more information.
DR. COHEN: Actually, thank you for asking that.
Let me —
DR. KRAUTHAMMER: Republicans.
DR. COHEN: — do two things here. First of all,
let me caution you about the fact that these — we could
easily spend the next two days talking about some of these
issues, and so I'm going to try and curtail a discussion about
the ethics here, at least until the end so I can get through
the material. On the other hand, if anybody has questions — because I'm presenting semi-technical material. If people
have questions of clarification that require me to say a bit
more for you to understand what I'm talking about, please
don't hesitate to stop me. But what I don't want to do is
get sidetracked into the ethical conundrums because literally
this could take hours.
That said, that's a good question, and let's assume they're
workmen. And you'll see that the next slide is flawed in
that regard, and so I'll correct it. I'll describe what the
slide should show when I get to it. But assume in all cases
the people influenced are workmen. So they've kind of signed
on to this job. They know the risks, dot dot dot.
So most people, as I saw it, raised their hand, not everybody,
interestingly, but most people raised their hand and said
that they thought it was morally acceptable to flip that switch.
Now let's consider another scenario and instead of this being
a sumo wrestler, let's imagine it's a very large workman and
he's repointing this bridge here — you know, the bricks — and he's kind of leaning precariously over the edge and
again, you're a bystander. You notice that the train is coming
down the tracks, and it's going to kill these five workmen,
only this time, in order to save them, what you'd have to
do is push this large workman off the edge of the bridge.
He's large enough that he'll stop the train. You're small
enough that if you jumped and committed suicide, you wouldn't
stop the train. I mean, allow me all the contingencies here
to set up the dilemma in the way you know I want to set it
up. And as a matter of fact, we empirically tested most of
these, so all of these assumptions kind of bear out. The
finding is general to the issue and not to the specifics of
this particular circumstance as you'll see shortly, actually.
But the question now is: is it morally acceptable for you
to push this guy off the edge, on the assumption that you're
going to succeed as effectively as flipping the switch on
the other one, and have him take the hit and die but spare
these five. How many people feel that that's a morally or
ethically acceptable thing to do?
Fewer hands went up. A couple came up late. So it took
a little longer for people to say, for everybody to say whether
or not that was acceptable, but certainly many fewer. That
concords with the empirical data that we have when we test
these two scenarios in particular and a number of related
ones that we actually went on to use in our imaging study,
that 90 percent of people say that it's okay to flip the switch,
but we get exactly the opposite for pushing the person off
the bridge.
Now obviously what's interesting here is that the actuarial
statistics are the same. It's five for one, right? And so
why is it that people feel that in one case it's acceptable
to sacrifice a life, play God in effect, right, by influencing
the outcome of the events and spare five at the cost of one,
and in the other case, it's not acceptable?
Now I know spinning through your heads are 15 or 20 or 100
different conjectures. Let me offer you one, okay, to satisfy
you that we're not dolts and that we've thought about this
as intensely as we can — and, hopefully, as intensely as you
might — to come up with as many possible accounts as one could
imagine, attentive to the literature and all the possibilities,
and controlled for them in the various scenarios that we've
used, with the outcome being that there's
still a conundrum, that there's no simple account that really
explains it.
So let me give you one possible account and show you how
we can dismantle that one and then hope that you take it on
faith that we've done similarly with all the others.
Well, one can say that one important difference between these
two scenarios is that in the first case, the guy standing
on the sidetrack was dying an incidental death; that is, there
was nothing instrumental in your use of him. You would have
been much happier, in fact, if he wasn't there. Then it would
be a no-brainer. You'd just flip the switch, right? But
the fact that he's there is unfortunate, but not consequential
to your being able to save the five.
Whereas, in the second case, you had to have that guy there.
If he wasn't there, the five men are dead. There's nothing
you can do, right? So you're actually using that individual.
He's an instrument of your actions and that maybe we have
an aversion to using people as instruments and that that's
what leads to this moral intuition. I'm not saying that you
calculated this consciously, but at some level maybe you were
attuned to that possibility. And, in fact, Kant suggested
exactly this in his philosophy. He said never to use people
as means, only as ends, and that that was a fundamental moral
principle to which we should adhere. And maybe that's what
people are doing.
Well, at least in this case we know that's not what they're
doing because if we change the scenario just slightly by adding
this little bit of track here, such that if you flip the switch
and the train comes down here and this guy wasn't there, the
train would come around and kill these five — you still
don't have time to warn them, okay? — it doesn't change
people's intuitions and yet now that person has to be there.
If they're not there, the train will come around and kill
them. It's because they're there that you're using them,
you're using their presence there to save these five.
So it's a purely instrumental case, just as the footbridge
scenario is, and yet this doesn't change people's intuitions.
When you give people this dilemma, they're as likely to say
flip the switch as they are if this bit of track isn't there.
So that alone can't be the principle that people are using.
And now again, if you'll allow me that we've considered many
other possibilities, professional status, whether they're
part of your racial group or not, there are a lot of variables,
admittedly, in these particular scenarios, but we've tried
to control for them in versions of these scenarios and we've
used a bunch of other scenarios that vary in a whole bunch
of different ways. And we, at least, have not been able to
come up with any systematic rationalist account — rationalist
in the sense that Kant intended it, okay? — that causes
people to systematically go with the ones that they say are
ethical and not with the ones that they don't. And if you
don't trust me, then the materials are on a website and you
can go and look through all the materials and tell us whether
you can find some systematic principle that caused people
to go one way with one set and the other way with the other
set.
Our hypothesis — I should say Josh Greene's hypothesis.
He was really the impetus for this work. He was a graduate
student of philosophy when I arrived at Princeton and was
the first person to kind of knock on my door, when I got there
and was building a brain imaging facility, to say he wanted
to do a study. And I've got to say I really knew I was at
Princeton when it was a philosopher that walked through my
door and not a neuroscientist, you know, to want to do a brain
imaging experiment. But it didn't take but five minutes of
talking with him to convince me that this guy is an incredibly
gifted, thoughtful guy and had a really interesting program
of research to pursue, and so I really want to take a moment
to credit him with most of the thinking and kind of motivation
for the study.
And it was his hypothesis — and certainly I shared the
intuition with him — that what explains the difference is
an emotional response that you have in one case and not in
the other. Okay? That the thought of pushing a person off
a bridge when you're close at hand is emotionally more salient
than flipping a switch when you're several hundred feet or
several miles away from them, and that these sorts of emotional
reactions have an impact on people's moral decisions.
Now this is a descriptive claim, not a prescriptive claim.
We think this is what happens in the real world. It's what
explains behavioral data of the sort that we just collected,
not necessarily the way it ought to be. But, nevertheless,
it's a strong claim and philosophers have tried to reject
this claim, and the question is how do you test it. Right?
So you can ask people whether or not this concurs with their
intuitions and some may be forthcoming and insightful enough
to tell you, oh yeah, that makes sense, but others may not.
Either they may not realize they had an emotional reaction
that at some level of the unconscious they actually did have.
Psychodynamic theory and psychoanalysis are not entirely dead,
Mike.
Or perhaps they are aware, but, like many people in our society,
are unwilling to admit that emotions will impact their decision
and so won't tell you about it. And so how can we assess
this hypothesis that emotions are influencing people's moral
decision-making without asking them? And, you know, one
way is to use long-standing, relatively well characterized
measures like galvanic skin response. When you get emotionally
aroused, your skin conductance goes up and you can measure
that. And in point of fact, we've done that and it provides
results that are, in the end, not quite as clean as the imaging
study. It's a lot cheaper, you might think. We were in the
perverse circumstance of having a brain scanner sitting there
that was easier to use and that we knew how to use better
than a couple of electrodes and a small resistance box or
capacitor box. So I have to confess that expediency led us
to neuroimaging in our case as much as anything. But what
I hope to show you is that the results, actually, told us
much more than I think a simple GSR experiment would have
told us.
So our hypothesis was that emotions account for the variance
across the different categories of dilemmas that seem to pose
the sort of — elicit this inconsistent behavior and that
we could test this by putting people on a brain scanner and
measuring their brain activity when they considered these
sorts of dilemmas and were asked to make decisions about them.
So now this wasn't totally out of left field. There's certainly
a long tradition of human neuropsychological research that
suggests that there are parts of the brain that are involved
in processing emotion. I certainly concur with Dr. Michels'
view that the distinction between cognition and emotion is
nowhere near as clean as we would like it to be or as we often
treat it. But that said, there is a meaningful difference
there and it seems to be reified at least in part with what
brain areas seem to be computing emotionally charged or valenced
decisions and which ones don't.
And some of the earliest data come from brain damage, so
I'm sure you're all familiar with the case of Phineas Gage
who was coincidentally a railroad worker. I don't think he
was hit by a trolley coming down a track that somebody failed
to flip the switch on. In fact, I know he wasn't. He had
an accident with a tamping iron that sent this big rod through
his frontal cortex. And the remarkable part of this story
is that the guy lived to walk away three or four weeks later
in perfect physical health and lived many, many years after
that.
The sad part of the story is that his personality was forever
changed and it was changed in just the sorts of ways that
you might predict if it hit the part of the brain that was
responsible for integrating emotional evaluations or maybe
even moral and ethical evaluations with behavior. He was
a very responsible, kind of adroit citizen, a well-regarded
foreman on the team on which he worked, and after the injury
he became somewhat of a rascal. He got himself
into trouble with gambling, he couldn't manage his finances,
he became lascivious, he made lewd comments in public. His
whole, what we would call, his moral fiber seems to have changed.
And that led to early conjectures that this front part of
the brain, the prefrontal cortex, a particular part of the
prefrontal cortex, was an important, if not the sole determinant
of moral and ethical behavior.
So the idea that we might find brain areas that were specific
to these sorts of tasks was not totally uninformed. So we
went ahead and put people in the scanner, and we used MRI.
And I was asked to say a word about these methods. I won't
go into too much detail, but this, I think, gives you a graphic
depiction of the kind of physical detail that we can get from
a single brain. This is a half-an-hour scan — this was done
about ten years ago; nowadays this scan can be done in about
five minutes. You can get a picture — this is someone in my
laboratory, then a graduate student — a picture of their brain
that shows every last
little fold of their brain. So at least with regard to the
anatomy it's pretty remarkable how much detail we can see.
Up until about ten years ago you could only see this sort
of thing, so you could tell whether or not somebody had cancer
maybe by seeing whether or not there was a growth. You could
tell whether or not they had Alzheimer's by — well, you
actually couldn't tell whether they had Alzheimer's, but other
forms of degenerative disease you could diagnose by seeing
whether there was loss of tissue, but what you couldn't see
is what areas were functioning when people did particular
tests. That all changed about ten years ago when several
groups realized that using the very same machine and trying
to make the same measurements with that machine that are,
in effect, made with PET scanning — that is, measuring blood
flow — you could index brain activity with a remarkable
degree of precision, remarkable with respect to what you could
do with PET scanning at the time and with respect to the fact
that it's totally non- invasive.
Every once in a while I go down to the scanner still and
I watch what's going on and I still get the heebie-jeebies.
I mean it's really like an episode out of Star Trek.
Twenty or 30 years ago, Gene Roddenberry gave us an image of
one of these scans that somebody would lie in and Bones, the
doctor on the ship, would be able to tell what was going on.
Never a needle prick, never any radiation and that's effectively
what we're able to do. It may be in crude form, but in
a totally, as far as we know, non-invasive way. We can tell
what areas of the brain are activated when somebody is doing
a test.
So here's an example of people looking at a chess/checker
board. And areas that we know from other sorts of measurements
are involved in visual processing light up and other areas
that are not involved in visual processing don't light up.
So this is a particularly good case of our ability to be both
accurate and precise in our measurements.
There are perils to this method. I've already mentioned
a few. I want to mention one conceptual one because it too
comes up frequently in discussions of brain imaging, and it
raises concerns, but I think also sometimes overstates those
concerns, and that is the idea that all we're really doing
is reinventing something that was discredited a long time
ago, namely phrenology.
So Franz Gall, about 150, almost 200 years ago now, had the
idea that different parts of the brain represented different
functions. Now for my money that was a major insight, and
it took a long time for it actually to take hold. But he
was the leader, one of the leaders in the pack, in realizing
that the brain is not an undifferentiated mass of tissue,
but that different parts of the brain actually carry out distinguishable
functions. How distinguishable, we can debate. But there
are characteristic functions in different parts of the brain.
The back of your brain handles vision; and if you cut it
out, you're not going to see very well.
Now he reasoned from that that if different parts of the
brain had different functions and some people had those functions
better developed, maybe it was because they had more tissue
allocated to those functions and that, in turn, would occupy
more space which means that the cranium had to accommodate
it by getting larger and therefore he could diagnose who was
more reasonable and who was a better lover by feeling bumps
on the head.
Now that version of the story, that inference, is of course
wrong, and it's silly. And when people refer to neophrenology,
they're talking about brain imaging experiments as being just
a reinvention of reading bumps on the head, only now it's pseudocolors,
right?
And an important point made by that is that we have to
be careful not to be too simplistic in our idea of how
the brain works. Even though there's a part of the brain that's
responsive to visual stimuli, that doesn't mean that all of
vision, all of object recognition, your ability to
appreciate the smile on your wife's face when you wake up
in the morning, is all housed in the visual part of the brain
because it's a visual stimulus, right? The brain is a highly
interactive and intricate mechanism that's integrating all
kinds of information at every point in time. And so the fact
that there's some specialization of function doesn't mean
that the functions that each part of the brain is specialized
for map onto functions that we recognize at the surface.
They may be much more complex and intricate sorts of functions
that don't correspond to simple sorts of things like vision
and smell. Some may, but some may not.
So the idea that there's a reason area or a love area may
be right or it may be wrong, and the fact that we can see
areas of the brain activate when we give somebody a love test,
doesn't mean that that's the love area of the brain. That's
the right criticism of neophrenology.
This is just to say that even those ideas, bad ideas, die
hard. I got this off the web a year or so ago.
(Laughter.)
Some people still believe in old Franz Gall's diagnostic
techniques. But that said, it is important to realize that
the brain does have functional specialization, and we can
leverage that for scientific understanding and maybe even
for better understanding of who we are as individuals and
as a species. So, for example, if I know that there are some
areas of the brain that reliably activate when emotional stimuli
are presented, then I can leverage that observation, that
prior observation to ask whether those areas are activated
when I give somebody a footbridge type scenario as opposed
to a trolley type scenario.
So another way of saying this is that a map, as such, is
useless, but a map is extremely useful if you want to go somewhere.
So neophrenology is useless if that's where you stop, but
if you're going to use that map, with all the proper caveats
applied to it, to understand how things are happening in the
brain, then it may be actually a valid endeavor.
Okay. So the last bit before I tell you about the experiment
is to say how did we kind of systematically manipulate the
emotionality of the moral dilemma. We had to operationalize
this idea so we could do an experiment, so we could have some
that were emotional that were like the footbridge problem,
and others that weren't that were more like the trolley problem.
But we didn't want to use — we couldn't use the same dilemma
over and over, so we needed lots of dilemmas to be able to
test this because we have to do signal averaging. One of
the problems with these methods is that a single trial doesn't
tell you a lot. There's a lot of noise in the data. So you
have to perform the experiment 15, 20, sometimes 100 times
and then average over all of those to take out the kind of
noise and see the signal you're looking for. So for that,
we needed lots of dilemmas, and for that we needed a way of
characterizing them as either footbridge-like or trolley-like.
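The signal-averaging logic described above can be sketched in a few lines. This is a hypothetical illustration with simulated data, not the laboratory's actual analysis pipeline; the signal shape and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "true" response at 20 time points, standing in for the
# signal evoked by contemplating one dilemma.
true_signal = np.sin(np.linspace(0, np.pi, 20))

def run_trial():
    # Each single trial is the true signal buried in measurement noise.
    return true_signal + rng.normal(scale=2.0, size=20)

# Averaging n trials shrinks the noise by roughly sqrt(n), which is why
# the experiment must be repeated 15, 20, sometimes 100 times.
single_trial = run_trial()
average_of_100 = np.mean([run_trial() for _ in range(100)], axis=0)

err_single = np.abs(single_trial - true_signal).mean()
err_averaged = np.abs(average_of_100 - true_signal).mean()
print(err_single, err_averaged)  # averaging makes the error far smaller
```

This is the whole rationale for needing many dilemmas: one trial per dilemma, many trials per condition.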
And so we used these criteria. The dilemma was emotional
if it was up close and personal; that is, if it could be expected
to cause serious bodily harm — it wouldn't be immoral otherwise — to a particular individual, as opposed to a statistic,
to an undescribed body of people, and through proximal action.
And Josh and I actually differ as to how important this is.
I happen to believe this is the most important one as you'll
see shortly. He's not as convinced by this and we're doing
experiments to try and test that.
But in any event, these were the criteria that we used, and
it was meant to capture the kind of primitive notion
of me-hurt-you. And, again, this will factor into my comments
in just a few minutes. So "hurt" is the serious
bodily harm, "you" is a particular person, and "me"
is the direct proximal action.
And so the experiment involved the subjects going into the
scanner and we gave them 60 dilemmas, 20 that were moral/personal,
defined in this way — sorry, we generated a bunch of dilemmas
and then we had people rate them on these criteria. And then
we took the ones that were rated as satisfying these criteria
and called those "moral/personal" and presumably
thereby invoking emotional responses. We took the ones that
didn't satisfy these criteria, reliably didn't satisfy these
criteria as "impersonal" and then we included a
control set that were meant to just control for all the other
things that people have to do in these experiments: read the
materials, think about them, maybe agonize a little bit over
what they're going to do or what answer is right, be on the
spot, all the kind of incidental processes that we don't think
are relevant to what we're interested in that we try and control
for in our baseline condition.
And so for that condition we invented 20 kind of cognitive
puzzles that roughly took the same amount of time to solve
as people took to answer the moral dilemma ones so that we
were controlling for time on task.
And when we do the experiment, we get a bunch of areas that
are activated. So we compared these moral/ personal ones
against the nonmoral ones as a kind of baseline or control.
We compared the impersonal ones against the baseline or control.
And we asked what areas showed greater activity in one of
these two conditions as compared to this one, as evidence
that they were specifically involved in the processes involved
in solving this type of problem, this type of problem versus
this type of problem. Is that clear to everybody?
This subtractive methodology is kind of at the core of most
studies, and it too is subject to many assumptions and therefore
many potential problems. When done properly, it's extremely
powerful and has been validated to tell us information that
can be independently confirmed with other methods, but it's
also easily abused. So here's another place where if one
wants to evaluate a particular study, one really has to look
carefully at how these comparisons are made and what this
condition looks like.
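The subtractive method can be illustrated with a toy statistical map. Everything here is simulated for illustration, including which voxels are "active"; none of it is data from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 1000

# Simulated per-trial activity at each voxel for two conditions.
control = rng.normal(size=(n_trials, n_voxels))
experimental = rng.normal(size=(n_trials, n_voxels))
experimental[:, :50] += 1.0  # pretend voxels 0-49 respond to the task

# Subtraction: per-voxel two-sample t-statistic, experimental vs. control.
diff = experimental.mean(axis=0) - control.mean(axis=0)
se = np.sqrt(experimental.var(axis=0, ddof=1) / n_trials
             + control.var(axis=0, ddof=1) / n_trials)
t = diff / se

# Threshold the map: only voxels reliably MORE active "light up".
active = t > 3.0
hit_rate = active[:50].mean()    # truly responsive voxels detected
false_rate = active[50:].mean()  # background voxels falsely flagged
print(hit_rate, false_rate)
```

The thresholded `t` array is the kind of statistical map that gets overlaid, in pseudocolor, on a structural image; and the choice of control condition determines what the subtraction actually isolates, which is exactly where such studies can be abused.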
So when we do this, we found a series of areas, not all of
which are shown here, but some of the critical ones are shown
and I've colored them red or blue to roughly connote what
the prior literature in neuroimaging had suggested about the
function of these areas. The ones coded or colored or kind
of backgrounded in red are areas that in most previous studies
that have reported them have involved emotional stimuli or
emotional decisions or emotional circumstances that the subject
had to apprehend or interpret or whatever. None of them were
moral, but all of them involved emotions in some way. And
this one here and a few others that I'm not showing, the prefrontal
cortex, the parietal cortex that I've kind of backgrounded
in blue, are ones that typically are not associated with emotional
stimuli, but are associated with cogitation, as it were, with
kind of mental problem-solving, working memory.
DR. FOSTER: One question: Are those outlined areas
of change which you've just colored in, or are these stylized
diagrams of the area? Are these the raw data?
DR. COHEN: No, these are absolutely not the raw data.
Let me tell you what you're seeing. You're seeing three things.
First of all, kind of as a background, you're seeing a structural
image of the brain so that you can — for those who know
something about the brain, they would be oriented as to where
these areas are, okay? So that was not acquired at the same
time, but it was acquired in the same subject. So we're overlaying
these areas of activity on an image of the person's brain
or, in some cases, an average of all of the subjects' brains
that we studied.
The colored areas here are statistical maps. So where it's
red, there was a much greater signal associated with — well,
you'll see what the signals were associated with. But there
was a much greater signal, either in the moral personal or
the moral impersonal compared to the control, and the colors
code the degree of statistical reliability with which those
areas were more active in the experimental conditions than
the control conditions. So they're not raw data, but they're
statistical analyses of the raw data. They're quantitative
data. They're not graphic renderings. These are derived
from real statistical analysis of the actual data.
And then I've just drawn these circles here to draw your
attention to those areas. Does that answer your question?
Okay. Now we can ask, well, these are areas that, as I say
in the past have been associated with either emotion or not,
how do they activate in our moral/personal versus impersonal
conditions or non-moral. So actually here I've plotted the
activity in these areas, including in the moral, sorry, the
non-moral condition against baseline, against just plain
rest where the subject isn't doing anything. And what I want
you to see from the slide is that all the areas that have been
associated in the past with emotion activated
in the moral/personal condition, for moral/personal dilemmas,
when the subject was contemplating and deciding about those,
and not in the moral/impersonal or, in any event, much less
so in the moral/impersonal or non-moral conditions. And
the exact converse was true for the areas that are associated
with cogitation, the working memory or problem solving. They
tended to be more activated in the non-moral and the moral/impersonal
conditions and not in the moral/personal.
So emotional areas seem to be engaged when people were contemplating
dilemmas like the footbridge problem, but kind of
higher-level cognitive areas were engaged by the moral/impersonal,
trolley-like problems and then, as predicted, by the non-moral,
abstract problem-solving tasks.
What's even more interesting I think, and this is more the
result of actually an extension of the original study that — these results, by the way, have been replicated three
times now, twice in our laboratory and once in one other laboratory.
So I'm pretty confident that these are reliable effects.
What's even more startling is that when we look across many
different experiments and start to correlate the extent to
which individuals make a utilitarian decision — that is,
they say, look, even though it's emotionally kind of aversive
to me, to think about pushing that person off the bridge,
I'm going to do it anyway, okay? — you get more activity
in prefrontal cortex. And if you look at the people who are
most utilitarian, the correlation is really quite startling,
about .9. And you can see that that's not being driven by
outliers. When people make utilitarian
decisions, the prefrontal cortex seems more active than when
they don't. So this particular area of the brain is
not just overall correlated, but almost begins to have the
feeling of being predictive of when they're going to make
a utilitarian versus a non-utilitarian decision. And I'll
say more about that kind of data analysis when I talk about
the economic decision-making test.
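An across-subjects analysis of this kind amounts to correlating a behavioral measure with a regional activation measure. The per-subject numbers below are simulated purely for illustration; only the qualitative pattern, a strong positive correlation, echoes the result described:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects = 30

# Hypothetical per-subject measures: fraction of utilitarian choices and
# mean prefrontal activation (arbitrary units); simulated, not real data.
utilitarian_rate = rng.uniform(0.0, 1.0, n_subjects)
pfc_activity = 2.0 * utilitarian_rate + rng.normal(scale=0.3, size=n_subjects)

# Pearson correlation between behavior and brain activity across subjects.
r = np.corrcoef(utilitarian_rate, pfc_activity)[0, 1]
print(round(r, 2))
```

A correlation this strong is what lends the activation its almost predictive feel, though correlation alone does not establish cause.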
Okay. So some inferences from these data: emotional responses
can influence moral judgments. I'd like to infer that. There
are additional data that I don't have time to tell you about
that are behavioral data that suggest that this isn't just
correlation but actually is cause. That is, these areas aren't
kind of incidental activations associated perhaps with the
discomfort of having to make a decision, but rather precede
it and actually, as I said just a second ago, predict the
outcome of the behavior. I don't have time to go into that
in this experiment, but I'll say something about that in the
economic decision-making test.
Not all moral judgments elicit emotional responses. They
occur for proximal interactions, me-hurt-you. They don't occur
for more distal interactions, for example, flipping the switch.
And, furthermore, the competition between prefrontal areas
and emotional areas seems to be at the heart of or at least
an important component to what the outcome of this decision
is.
Now I want to offer in this context a little hypothesis,
and this is really armchair theorizing. It's no more than
that, but it's provocative, and I think it's usefully provocative.
Why do people have these emotional responses? Well, one
hypothesis would be that they reflect evolutionary adaptations
that favor prohibitions against inter-clan violence. That
is, to the extent that we were successful in evolving as a
social species and that we came genetically wired with mechanisms
for aggression that protected ourselves and what we had accrued,
we needed to somehow kind of stop that from happening among
those with whom we were starting to cooperate, or else the
threshold for cooperation would be too high and we wouldn't
succeed, right? That puts it teleologically, but I think
you get the drift of the argument.
So one can imagine that these emotional responses or the
brain systems that mediate them evolved as a way of controlling
our aggressive tendencies to those with whom it would benefit
us to cooperate. Now evolution is opportunistic, right?
And so it optimizes mechanisms for the circumstances in which
it finds itself, for the local prevailing circumstances and not
for all possible circumstances. When these mechanisms were
developing, the only way we could do damage was through proximal
cause, by hitting somebody or picking up a stick and bopping
them on the head, right, and not by flipping a switch and
causing some damage a few miles away, much less hitting a button
and causing many millions of deaths many thousands of miles
away. We just didn't evolve brain mechanisms to deal with
that. It just wasn't in our environment. It wasn't in our
circumstance, and so the brain just never developed mechanisms
to deal with that. And so our emotional responses are circumscribed
to the circumstances in which we found ourselves during evolution,
and maybe those are no longer the only ones that are relevant.
So hold that thought and I'll come back to a similar sort
of argument when we get to economic decision-making or the
end of economic decision-making. All right, so that's the
moral decision-making experiment.
Economists are as interested in rational behavior as philosophers
are, maybe even more so, and again, as I'm sure most
of you know, the standard economic model assumes that, in
fact, we are optimal; that is, rational decision-makers,
that we always choose the action or the good that is going
to maximize our utility, maximize our personal gain and that
we do that optimally.
And they make that assumption for a very reasonable reason,
and that is to be able to have traction in theory. That is,
it simplifies a lot of matters when you assume people are
optimal, because you can do proofs about optimality, right?
Otherwise you're stuck doing statistics on what people actually do.
So I think it was actually just a tactic that led to a stratagem
in economics. That's another story.
In any event, economists have long assumed that people act
rationally, and they got away with it for about 30 or 40 years,
but the development of behavioral economics has begun to catalog
a large number of instances in which people don't seem to
act anything like the way economists say they should. And
in fact, the Nobel Prize was given out this year for that
work, to Daniel Kahneman and Vernon Smith, among others, who
have kind of championed this area of behavioral economics.
And we were really interested in this, in part, from my perspective,
because of the parallels that it draws with the moral reasoning
work, but also because I think it has intrinsic interest in its
own right. Understanding the basis of economic decision-making
is just as interesting as the basis of moral decision-making.
And so we used kind of a similar strategy. We picked
a task that we thought highlighted, in this case, the suboptimality
of people's behavior and then scanned them while they performed
this task and looked at what the brain was doing when they
made what seemed to be optimal decisions versus non-optimal
decisions.
In this case, the task was a rather simple one. It was called
the Ultimatum Game. Subjects were paired with a partner whom
they met before the scan and were introduced to; actually there
were ten partners. They were going to play with all ten of them.
They were introduced to them, and while they were in the scanner,
they were shown pictures of who it was that they were supposed
to be playing with. And in each case, the partnership was
offered a sum of money, let's say $10. So let's say Dr. Kass
and I are offered, as partners, this $10, and I'm in the scanner.
So it's Kass' job to decide how we're going to split it.
So you can decide to give me $5 and you'll keep $5 and then
it's my option to either accept the offer in which case we
each get the allotted sum, or to reject it, in which case neither
of us gets anything.
So you offer me $5, you keep $5, I say sure, he's a fair
guy, that's a fair deal. I take it.
But what if you offer me $1 and you decide to keep $9? Or
what if you decide to give me a penny and keep $9.99, what
do I do? Now the economists say you'd take the penny, right?
You're not going to get anything else. If you reject it,
you don't get anything. Now you can get sophisticated about
this and say well, but you want to punish him so that next
time he'll give you a better offer. You want to establish
a bargaining position. But we set up the game so that subjects
know they're only playing it once with each individual. Well,
maybe they want to protect their reputation. They don't want
that guy to tell the next guy, right? We tell them that it's
totally confidential, the outcome of each individual interaction
is not going to be imparted to anybody else. Now you can
question whether or not they believe it, but there's a whole
line of work using this task behaviorally that shows that,
in fact, you can convince people that these conditions are,
in fact, so. And, nevertheless, people still reject the penny
or the dollar or even $2 up to about $3, okay, and get nothing,
in effect, just to kind of punish the other guy. And so the
question is why do they do this? I think that's summarized.
And this is just behavioral evidence that they do. This is
actually from our study, but it totally mimics what's observed
in the literature: subjects reject offers at around 20
to 30 percent of the total pot, in this case 20 percent,
and they accept when the offer exceeds that. So if Kass had
offered me $3, I would have taken it, but if he offered me $2,
I would say screw you, we're both going to get hurt here,
but I don't care because maybe it gives me pleasure to hurt
you for having tried to rob me.
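The rules of the game, and the empirically observed rejection threshold of roughly 20 to 30 percent of the pot, can be summarized in a small sketch; the 25 percent cutoff below is an illustrative assumption, not a parameter from the study:

```python
def play_round(pot, offer, rejection_threshold=0.25):
    """One-shot Ultimatum Game payoff.

    The proposer keeps `pot - offer` and the responder gets `offer` if the
    responder accepts; a rejection leaves both players with nothing.
    Human responders typically reject offers below ~20-30% of the pot.
    """
    if offer >= rejection_threshold * pot:
        return pot - offer, offer   # accepted: split as proposed
    return 0.0, 0.0                 # rejected: neither player gets anything

# A purely "rational" responder would accept any positive offer, yet
# real subjects forgo money to punish an unfair proposer.
print(play_round(10.00, 5.00))  # fair offer is accepted
print(play_round(10.00, 1.00))  # a $1 offer is spurned
```

The standard economic prediction corresponds to `rejection_threshold=0.0`; the behavioral data are what motivate the nonzero threshold.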
I won't get into this, but it's interesting that the response
times are longer for fair than for unfair offers. So, again,
the question is: What's happening? Why are they doing this?
And so I've already said a couple of reasons and rejected
them, bargaining position. Maybe they want to — well, bargaining
position or reputation. The situation is such that that doesn't
make sense. Maybe they want to punish their partners, but
that just raises the question of why they want to punish if
they're never going to interact with them again. So it really
doesn't give an answer.
Again, we're left with the answer that there's some irrepressible,
negative emotional response that they have that causes them
to do it. So, again, we can test this by putting them in
the scanner and looking at what happens when they accept offers
versus reject offers, and that's exactly what we did.
Again, we got areas of the brain that were activated in the
task as compared to the baseline. We got our same player,
the prefrontal cortex. This was also in the other study,
the anterior cingulate cortex, but I didn't say anything about
it, and then critically, in this case, the insular cortex.
And what's really interesting though is that if you look first
at the population level, people who accepted versus rejected
the offer, generally speaking — in people who accepted
the offer, their prefrontal cortex was activated about ten
percent more than in people who rejected the offer. And exactly
the opposite was true for the insular cortex. So it looks
like if you're going to accept the offer, your prefrontal
cortex is going to be active, and if you reject the offer,
your insular cortex is active.
Now the insular cortex is an interesting area. It's an area
that has repeatedly been associated with physical revulsion,
with interoceptive pain, with real aversion, in some cases
again, physical aversion to stimuli. The classic neuroimaging
experiment with the insula was one done at Harvard in which
they took people with obsessive-compulsive disorder — how
they got this past the Human Subjects Committee I'll never
know — put them in the scanner, then took rags which
they said had been soiled with human feces and threw them on
the subject, and the insular cortex lit up like a Christmas
tree.
Now the study itself raises ethical questions of its own
which we might debate, but it certainly points out that the
insular cortex is an area that is engaged in negative or aversive
emotional responses and here we're seeing it activated when
people find an offer, in effect, revolting.
But what's even more interesting is that if we go trial by
trial, so we take all the trials in which individual subjects,
sometimes they accepted the offer, sometimes they rejected
it, and we looked at what their brain was doing before they
made the decision. If the prefrontal cortex was more active
than the insular cortex, they accepted the offer. If the
prefrontal cortex was less active than the insula — the
insula, in effect, broke through the activity of prefrontal
cortex — they rejected the offer. So it's as if the outcome
of the behavior again was being defined by this competition
between this prefrontal area and a, I might say, more primitive
area of the brain that's coding the emotional response.
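The trial-by-trial "competition" readout just described amounts to a simple comparison; the activation numbers below are invented for illustration, not measured values:

```python
def predict_choice(pfc, insula):
    # Whichever region is more active before the decision "wins":
    # prefrontal dominance predicts acceptance, insular dominance rejection.
    return "accept" if pfc > insula else "reject"

trials = [
    {"pfc": 1.2, "insula": 0.4},  # fair offer: little revulsion
    {"pfc": 0.6, "insula": 1.1},  # unfair offer: the insula breaks through
]
print([predict_choice(t["pfc"], t["insula"]) for t in trials])
```

What makes the finding striking is that this comparison is made on activity preceding the choice, which is what gives it its predictive flavor.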
And I think that we can come up with a hypothesis here that's
very similar in character to the one that we came up with
in the case of moral reasoning, that maybe this emotional
response reflects an evolutionary adaptation that favored
protection of reputation. And this makes sense if you imagine
that, as we were evolving as a social species, we were much
more likely to come into contact again with people with whom
we had had previous dealings, right, than we are in modern
society. And so it behooved us to develop very quick, hardwired
responses to protect our reputation because it was going to
come back and haunt us much more than in modern circumstances
where, you know, you got some guy making an offer on a house,
and at first they come in at your asking price and then they
concoct some story when they do the inspection that the basement
is leaking, and you know it was because there was a little
bit of water that your kid left down there when he took off
his swimming trunks and you know that he knows it, but it's
a way for him to get an extra $1,000 off the price. And you
say, to hell with it, right?
Now there's no sense in which that makes sense. You're never
going to see that guy again. Your reputation isn't going
to be established, right? You might as well take the $1,000
hit and sell the $100,000 or $200,000 or $300,000 house, whatever
it is, right, and be done with it. And yet you can just imagine
yourself, I suspect — I don't know, I can imagine myself — getting hotheaded, right?
So once upon a time that made sense, but in modern society
it doesn't. So we call that irrational in modern context
or in the experimental situation where we told people explicitly,
no reputation, no bargaining, right? They still have this
hardwired response because of evolutionary circumstance.
So let me just end then with kind of a playing out of that
idea. I called the talk the "Vulcanization of the Human
Brain." The term "vulcanize," at least according
to Merriam-Webster's, means taking a crude or
synthetic material, rubber in the case of industry, and
giving it useful properties.
So what I would like to argue is that the development of
the prefrontal cortex has, in effect, vulcanized the human
brain. What it's done is that it's given us the ability to
surmount older evolutionary adaptations and consider what
we might recognize as more rational decisions, right? And
when the prefrontal cortex comes into play, people can actually
bring themselves to push somebody off of a bridge or accept
an unfair offer because they know they're not getting anything
else.
What's interesting though — so the development of the prefrontal
cortex is vulcanizing the human brain by giving it the capacity
for cognitive control. What's interesting is that that very
development, I would argue, has created exactly the contexts
in which those older evolutionary adaptations are no longer
adaptive. So, for example, in the case of the moral reasoning
study, it's because of the development
of the prefrontal cortex that we have the capacity to produce
technologies like switches on trains, or buttons that control
nuclear arsenals, right, that can do
damage at a distance.
Similarly, it's the development of the prefrontal cortex
that supports the complexities of modern society in which
social structure can occur on a much wider scale, in which
we don't have recurrent interactions with everybody with whom we've dealt in the past.
So at the same time the prefrontal cortex solves the problem,
it's solving, in some sense, a problem that it created.
And in so doing, insofar as there's not uniformity of prefrontal
development, whether because of circumstance or genetic structure,
I know not, but because there's not uniformity, and because
it only takes some prefrontal cortices to create a circumstance
that other brains without as strong prefrontal
cortices can exploit, we are in great peril, and I think this
raises really important sociological, as well as ethical and
moral, issues.
So the prefrontal cortex is precisely the part of the brain
that permits rational decision-making in the face of competing
evolutionarily older emotional responses. The vulcanization
of the human brain can save us from those circumstances it
created, but we're kind of in this delicate stage right now
where it's not as if we all have prefrontal cortices that
know how to deal with the responsibility for the
things that the prefrontal cortex has created.
This is the other sense in which — I thought it would be
perhaps a little too cute to call it "The Vulcanization
of the Human Brain" because science fiction often anticipates
the issues that science later has to deal with and I think
Gene Roddenberry anticipated exactly this issue when he kind
of designed his character Spock and the species that Spock
is a part of, the Vulcan species. For those of you who don't
know about Star Trek, the Vulcan species was a species
that literally had a more developed prefrontal cortex and
was totally rational and was able to come to social decisions
through rational choice and acknowledged that there were
emotional forces in the brain that influenced behavior, but
they somehow circumscribed those and expressed them one day
or one week a year or something.
But one has to wonder what the path is going to be for us
to get there. And I picked this as one example of an
alternative in the real world where there are traditions and
cultures that have looked for rationalist theories, and I
picked this to be again provocative. There are certainly
ones in Western culture as well that seek to kind of deal
with these issues of how rationality can be exploited in a
world in which not everybody is rational.
I've made this point already, that it takes relatively few
rational agents to create things that many more people can
put to irrational use, and so there's this race. This last
point is actually more for my scientific colleagues, as a kind
of a challenge for how we can deal with this knowledge, but
I present this issue to you because I think it's transparent
the sorts of ethical questions that now come into focus in
concrete and measurable ways that weren't possible without the sorts
of tools that we have available to us now. So I can stop
there.
(Applause.)
CHAIRMAN KASS: Thank you very much. Frank and then
Michael.
PROF. FUKUYAMA: Well, thank you very much for
that presentation. I guess in a certain sense you've answered
the question that I was trying to pose to Dr. Michels about
how precise the technology is. My general impression is you've
got to drill down a couple of orders of magnitude more before
you get to a lot of the things that have been speculated on.
But I guess in reaction to the presentation about the moral
reasoning and the way the emotions play into it, what you
say is your armchair speculation about the role of emotions
and how they were evolutionarily derived. I think it's standard
fare in evolutionary psychology, and they've got extremely
highly developed theories about how all of these social emotions
were the result of cooperation dilemmas in hunter- gatherer
societies and so forth.
What they don't provide — and the reason that people don't
like this field — is that no one can come up with a biological
correlate. And the big argument is not that there wouldn't
be a biological correlate to say that certain kinds of moral
decision- making are emotionally based, but the real question
is where does that emotional response come from? Is it hardwired
genetically or is it socially constructed as a result of experiences
that the individual has after birth? And it seems to me none
of the empirical evidence — I mean your kind of implicit
assumption is that it's hardwired —
DR. COHEN: No, no. Let me correct that right here.
I'm just measuring it. I have no idea how it got there, whether
phylogenetically or ontogenetically.
PROF. FUKUYAMA: Right.
DR. COHEN: But with the tools to measure it, we can
begin to try and ask those questions.
PROF. FUKUYAMA: Okay. But it does seem to me
you're awfully far from really having an answer to that because
even if it were socially learned, it would
be very surprising if it didn't light up.
DR. COHEN: I'm not so sure we're far from answering
at least some first-order questions. For example, not only
is it relatively straightforward, but we're actually in the
process of beginning cross-cultural studies to see whether
or not people have different emotional responses to the very
same dilemmas, and there's been psychology that addresses
this question, but it's been very hard to get hard evidence
for the reasons that I said at the beginning. We can produce
harder evidence about that question, and that certainly bears
on the question of whether it's learned or it's innate.
PROF. FUKUYAMA: That kind of cross-cultural study
could have been done and was done prior to the brain imaging,
but I also think that the conclusions you draw about the moral
peril we're in is — I would put it quite differently. What's
interesting about that ultimatum game is that it suggests
that there's something like an innate sense of human justice;
that is to say, people have a certain pride, and they will
not accept an unfair division of resources, and they'd rather
have nothing than have the division of resources be
unfair.
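The ultimatum-game behavior described here, rejecting a split rather than accept an unfair one, can be sketched as a toy simulation. The pot size and the "fairness threshold" below are illustrative assumptions, not figures from the studies under discussion.

```python
# Toy ultimatum game: a proposer offers a split of a fixed pot; the
# responder accepts or rejects. If the offer is rejected, both players
# get nothing. The threshold is a hypothetical stand-in for the sense
# of fairness discussed above, not an empirical estimate.

POT = 10

def respond(offer, fairness_threshold=3):
    """Responder accepts only offers at or above the threshold."""
    return offer >= fairness_threshold

def play(offer, fairness_threshold=3):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if respond(offer, fairness_threshold):
        return POT - offer, offer
    return 0, 0  # rejection: both walk away with nothing

# A purely "rational" responder (threshold 1) takes any positive offer;
# a responder with a fairness norm refuses a 9/1 split outright.
print(play(1, fairness_threshold=1))  # (9, 1): any money beats none
print(play(1, fairness_threshold=3))  # (0, 0): unfair offer refused
```

The point of the sketch is only that rejecting a low offer is irrational on a pure payoff calculus, which is exactly the puzzle the experiments probe.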
That's quite an interesting conclusion —
DR. COHEN: Why is that?
PROF. FUKUYAMA: What?
DR. COHEN: Why is that?
PROF. FUKUYAMA: Well, I don't know why it is,
but if it's a fact that that is actually the result of an
evolutionary process which is hardwired, that tells you something
interesting about human morality, which is that contrary to
the Lockean idea that the mind is a tabula rasa,
there actually are innate ideas about justice. They seem
to have come from our evolutionary experience as hunter-gatherers.
But I actually find that reassuring because it means that,
in fact, we're not these cold, calculating machines, but we
have certain innate principles on which our moral order rests — and the other one that's cited very typically is a sense
of reciprocity, tit for tat. There's this whole literature that comes
out of the iterated prisoner's dilemma that would tell you
why socially cooperating creatures should develop a principle
of reciprocity.
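The tit-for-tat strategy from the iterated prisoner's dilemma literature can be sketched in a few lines. The payoff matrix is the standard textbook one; the opponent move sequences are made-up examples.

```python
# Minimal iterated prisoner's dilemma with the standard payoff matrix:
# (my_payoff, their_payoff) indexed by (my_move, their_move),
# where "C" = cooperate and "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(opponent_moves):
    """Score tit for tat against a fixed sequence of opponent moves."""
    history, score = [], 0
    for their_move in opponent_moves:
        my_move = tit_for_tat(history)
        score += PAYOFFS[(my_move, their_move)][0]
        history.append(their_move)
    return score

# Against a steady cooperator, mutual cooperation is sustained;
# against a steady defector, tit for tat retaliates after one loss.
print(play(["C", "C", "C"]))  # 9: three rounds of mutual cooperation
print(play(["D", "D", "D"]))  # 2: exploited once, then mutual defection
```

This is the mechanical core of the reciprocity principle mentioned here: cooperation as the default, defection only in response to defection.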
Now the evolutionary psychologists have suggested that that
is also a moral principle that is hardwired. I think it's
actually a pretty good one and it is actually reassuring to
me that we arrive at that kind of moral reasoning, not through
a rational process, a cognitive process, the way the economists
posit, but in fact, there are emotions and subrational processes
in the human psyche that lead us almost instinctively to those
kinds of outcomes.
So I'm not sure that this idea that you've got these evolutionarily-
derived impulses that need to be overridden by the superego
that's created only in civilizational time, that that's the
fundamental problem. I think that, in fact, all of our moral
structures, including those in advanced civilizations, depend
very heavily, fortunately, on the fact that we are wired to
have these certain kinds of species-typical responses.
But again, it does seem to me that — well, okay, maybe I'll
just —
DR. COHEN: Obviously, inference is in the eye of
the beholder because I look at these data — and again they're
early in the game — but I look at them as providing the
potential to say look, we can contextualize these quote-unquote
moral responses that we have, these intuitions that people
seem to have, right, in an understanding of where they came
from that may allow us rationally to dismiss them. And so
rather than reify them because we found them, one can, I think,
just as reasonably say look, now that we understand where
they came from and we realize that they're not really what
we want, we're free to kind of propose something else, right?
So it's just the opposite perspective to take the same data,
at least as I understand what you were saying. But the key
point is maybe not the answer we come to, but the fact
that the debate is now much more informed. It's no longer
a matter — at some point we'll get to the point, I think,
where it's no longer a matter of conjecture as to whether
or not people's moral intuitions are informed by their emotional
responses which might have either genetically programmed or
deeply ingrained cultural roots. We'll be able to say that.
CHAIRMAN KASS: Could I jump in on this and jump my
place in the queue just because I think it's pertinent here.
It seems to me the issue — and Frank, if I were to come
to your aid I would do it this way. It seems to me the question
is not whether this has an evolutionary foundation of the
sort that you suggest. The further question is whether because
it is emotionally mediated, it is therefore irrational; or
whether or not those emotional responses are the embodiment
of a certain kind of reasonableness as opposed to a kind of
theoretical rationality. That would be one way to put it.
And the presumption in a way — I don't have any difficulty
with the findings. The findings are very exciting to me.
They support my own sense that the attempt to do moral philosophy,
as Kant does it, is wrong and that this kind of universalizability
is the only measure of rationality. It might be absolutely
reasonable to treat kin better than strangers and that the
universalizability of human beings is a construct of theory — by the way, something that we need to think about in the
global world, but that you somehow can't say that the abstraction
from proximity is going to be an advance if precisely the
care for those near and dear depends upon these kinds of rational,
these kinds of reasonable things that don't depend upon calculation,
but the reasonableness is somehow built into the passions
of emotion and love and anger when our own are hurt and things
of that sort.
Now that's not to say that those emotions don't cause difficulties
and sometimes get the better of what makes sense, but I'm not
sure I accept the — I don't think that the description of
the footbridge versus the trolley dilemma is a sign of moral
inconsistency at all.
DR. COHEN: We differ there.
CHAIRMAN KASS: But the difference has to do with
accepting the view of a kind of calculation of outcome as
the measure of reason, whereas the question is how do you
describe the moral situation in the first place such that —
DR. COHEN: Well —
CHAIRMAN KASS: I don't want to belabor this, but
I do think that there are certain kinds of theoretical things
that are built into the formulation of the question that produce
the dichotomy between what looks to you to be primitive and
what looks to you to be advanced and rational, whereas I'm
not sure that there isn't a kind of deep reason in what you're
calling mere primitive, but it's carried and mediated in a
different way.
DR. COHEN: Fair enough. So I was, of course, caricaturing
the arguments for the sake of clarity, but I think the fundamental
point still stands, and I think again it's the inversion of
the position that is being laid out between you and Professor
Fukuyama, and that is to say that okay, I don't know what
the right calculus is, but, at the same time, the fact that
we have these intuitions and that we could maybe come to understand
where they came from as circumstantially developed and not
in the circumstances that we now find ourselves, I agree it
doesn't mean that we just therefore dismiss them as primitive
and therefore irrelevant, but it does mean that we may have
some deeper insight into what their usefulness was and what
the limits of their usefulness may now be.
And so the interpretation of the finding is neither that
because it's primitive, it's irrelevant, nor that because
we have intuitions, as such we should go with them, that that's
kind of our moral compass. It allows us to say no, the compass
has to be something else. And this just contextualizes where
those are coming from and allows us, however we might do it — and godspeed to those who are more qualified than I to
actually do it — to come up with a theory that is, in some
sense rationalist, that takes account of all of our circumstances.
And it can't dismiss love or kin bonding. Of course not.
That would lead to consequences that would be as deleterious
as only caring about your kin, but some balance between the
circumstances we currently face and the ones that we've brought
with us because, Lord knows, evolution — and this is the
peril I'm referring to — evolution is not fast enough to
give us the answers. It got us this far, but something has
happened that the answer for which is not evolution, not at
the biological or genetic level.
Genetics was able to solve the problems that got us this
far, but we now have the capacity to pose problems that genetics
is not going to be fast enough to solve.
CHAIRMAN KASS: Fair enough. Michael Sandel, please,
and then Alfonso is next.
PROF. SANDEL: Even before we get to the evolutionary
biology, my question is prior to that about the logic of the
scientific project that you're engaged in and the general
question I have is how — what justifies your choice of coding
certain responses as emotional as against others? But in
order to get to that, I was puzzled by one thing you said
in the talk and then just in this exchange with Leon.
From your point of view, in order to characterize in the
trolley and footbridge case one of the responses, one set
of intuitions as emotional, and then you would then go on
to explore the neural correlates, it seemed to matter to you
that you had run through all the possible rational justifications
for answering the footbridge and trolley case differently.
And not having found a persuasive one, that seemed to license
calling the response in the footbridge case emotional. But
why, even from your point of view, would that be necessary?
Why can't you just directly, on your own account, characterize
the footbridge response as emotional because personal rather
than impersonal and so on and code it that way and go ahead
and then look for the correlations? Why was it important — you seemed to suggest it was important and you said we
could check on the website — that you had explored all these
possible moral justifications for distinguishing and found
that they weren't persuasive and therefore — why did you
have to go through that? What would be undermined in the
experiment that you did if it turned out that somebody came
along with a persuasive moral distinction between those two
cases? Why would that in any way damage the rest of what
you've done from your point of view?
DR. COHEN: Excellent question. I'm hesitating only
because I'm trying to figure out where to begin with the answer.
There are at least two or maybe three things I want to say
in response to that.
First of all, the term "emotion" is a code here.
I mean ultimately words are not the language of science.
It's mathematics and mechanistic understanding. But until
we have that we need some guidance and we need some way of
communicating with our colleagues to share the intuitions
of what we think the proper mechanism or mechanistic or formal
description of the mechanism is, right?
So that's all I think of the word "emotion" as.
PROF. SANDEL: But you have to pick out certain
responses that people give you in order to run the correlation.
DR. COHEN: I'm going to get to that. I understand.
In this case, I think what the term "emotion" kind
of connotes, if not denotes, is a set of systems that are
hardwired to produce rapid evaluations and rapid responses
given the exigencies of the feral world in which we grew up.
And so what family resemblance the different systems that
we were studying here have to one another is that: they're
rapid-interpretation, rapid-fire systems that lead to quick
action. That, to me, is what the mechanistic underpinnings
of emotion is and emotion, as we think of it kind of introspectively
is just a phenomenological projection or consequence or correlate
of the operation of those mechanisms.
Now to answer the kind of the methodological question, in
order to get at that as what's accounting for the variance — that is, accounting for the empirical phenomenology here,
right? — I have to be certain that there aren't confounds
that are alternative accounts. So for example, supposing
it turned out that — and as I've already said I just can't
give one problem. I couldn't give the trolley problem a hundred
times because the person would stop paying attention to it
after a while. They'd say, look, I know I hit the left button
last time. I'm just going to hit it again. They wouldn't
be thinking about the problem in the way that engaged the
mechanisms I want to study.
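The signal-averaging constraint alluded to here, that one trial per subject is not enough, is the standard one in imaging: single-trial measurements are noisy, and averaging many trials shrinks the noise roughly as the square root of the trial count. A hypothetical illustration (all numbers invented; this is not the analysis from the studies discussed):

```python
import random
import statistics

# Each "trial" measures a fixed true signal plus Gaussian noise; the
# average over many trials recovers the signal far more reliably than
# a handful of trials can. All parameters are illustrative.
random.seed(1)

TRUE_SIGNAL = 1.0
NOISE_SD = 2.0

def trial_average(n_trials):
    """Average n_trials noisy measurements of the true signal."""
    samples = [TRUE_SIGNAL + random.gauss(0, NOISE_SD)
               for _ in range(n_trials)]
    return sum(samples) / n_trials

# Repeat each "experiment" many times and compare the spread of the
# resulting estimates: noise shrinks roughly as sqrt(n_trials).
spread_few = statistics.stdev(trial_average(4) for _ in range(300))
spread_many = statistics.stdev(trial_average(400) for _ in range(300))
print(spread_many < spread_few)  # True: more trials, tighter estimate
```

Hence the dilemma Dr. Cohen describes: the statistics demand many trials, but repeating one dilemma verbatim would stop engaging the mechanisms of interest, so many structurally similar dilemmas are needed instead.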
PROF. SANDEL: But even on the trolley and the
footbridge problem, if you, tomorrow, discovered there is
good moral justification for not pushing and yet for switching,
would that cause you to code the behavior differently?
DR. COHEN: The answer is yes.
PROF. SANDEL: Why? Why would it?
DR. COHEN: I don't know what you mean by "code
the behavior differently." It would lead me to worry
about the interpretation that I've placed on the data so far,
not that I don't anyway, but I would worry more. That is
to say — okay. Forget about the signal averaging problem
and say we could do the experiment with just the trolley and
the footbridge problem.
PROF. SANDEL: Fair enough.
DR. COHEN: So supposing I hadn't thought of this
Kantian alternative, right, with the loop, and I did the experiment,
and then Kant came out of the grave and said ha, ha, ha, you
fool. Don't you realize that all that's accounting for that
difference is that in one case the person is used as an instrument,
and what we found were
areas of the brain that compute instrumentality. I think that's
a silly likelihood or a silly interpretation, but it's a logical
interpretation of the data.
Instrumentalism is confounded with emotionality, and what
I think subserves emotionality is different than the computation
of instrumentality. I think different mechanisms compute
those different functions. And so now if there's a confound,
I don't know how to interpret my data. So I have to try and
construct the experiment in such a way that no other confound,
no other reasonable account can be given for why those areas
activated in these conditions and not those.
PROF. SANDEL: If there were a reasonable account,
then —
DR. COHEN: Then that could provide an alternative
explanation and my interpretation of what those brain areas
were doing, at least in the context of this experiment, would
be only one of at least two. Now, I don't doubt that that's
probably true anyway, but at least that's the game we play
when we're playing science, right? We try and eliminate all
the confounds so that the only reasonable account that one
can give for the differences is the one that you postulated.
PROF. SANDEL: Just to test this, could I give — if I have one more minute, to take another set of dilemmas
of that kind, that play into this intuition you have about
the personal versus the impersonal.
In the one case you ask people whether in order to — there's
an intruder who comes threatening to their home and their
family members are there and it's a murderer, let's say.
And you ask people would you be justified in shooting the
murderer who is threatening your five children and your wife.
And then you ask them would you stab the intruder? And it
turned out that more people would shoot than would stab, maybe
because of the same kind of squeamishness that operates in
the footbridge. And then you offer them a different case
where it's not an intruder coming to kill your family, but
to steal the hubcaps from your car and you ask would you shoot
the intruder — the hubcap thief, and would you stab him?
And there too, more would probably shoot than would stab,
though the numbers would be less than in the first case.
Now would — in those cases, which would you code as emotional
responses that you would then expect would correlate with
the emotional neural activity in the brain?
Would it be the stabbing in both cases —
DR. COHEN: But more so in the intruder.
PROF. SANDEL: But whether stabbing the intruder
was justified in the case of protecting your family, as against
stabbing the hubcap thief, would that influence whether you
coded the two as emotional rather than rational or rather
than morally defensible?
Because in the one case, we assume we would agree that it's
possibly — that it's morally defensible to stab in the
first —
DR. COHEN: Right.
PROF. SANDEL: And not in the hubcap case. So
what I'm trying to get at is whether what counts as emotional
for purposes of running the correlations, depends on there being
no good moral justification for it or for some more primitive
thing that doesn't depend on figuring out whether there's
a good moral justification for it.
DR. COHEN: I don't know the answer; that would
actually be an interesting experiment, but if you're asking
for my guess, based on how I imagine these systems are operating,
it would be that in both cases the same areas would be engaged
by the notion of stabbing somebody more than the notion of
shooting.
PROF. SANDEL: So it wouldn't be tied to whether
the action is morally justified or not?
DR. COHEN: No, I don't think these areas are specific.
This is the sense in which —
PROF. SANDEL: But then it's back to my question.
Why would it worry you if it turned out, in the case you used,
that there was a good Kantian or otherwise moral justification —
DR. COHEN: Because I want to know that they're emotional,
that these areas are activating for an emotional — representing
an emotional process as opposed to an analytic one for lack
of a better term, a cognitive one.
PROF. SANDEL: That begs the question, doesn't
it, because we're trying to get at what counts as emotional.
And on one definition it has to do with this primitive idea
of proximate versus less proximate, but there's another overlapping
consideration that seems to be at work which is morally justified
on rational grounds or not, right?
DR. COHEN: Right. I'm sorry. I understand what
you're saying now.
We had to start somewhere, right? We wanted to give ourselves
the best shot at getting the results that we expected to get,
based on our theory, right? We couldn't just put people in
the scanner and just have them lie there until some event
occurred that we hoped would be moral and then see. We had
to create situations, right, that were likely to engage the
areas of interest in some way. We predicted that the way
they would be engaged would be according to emotional versus
non- emotional circumstances. We could have done the experiment
without moral circumstances at all. In fact, as I say, the
literature has done that. We wanted to know whether or not
emotional areas are engaged in moral circumstances and in
some way related to the outcome of moral decisions. But we
needed some way of probing that. So as a bootstrapping problem
we said all right, well, how can we construct dilemmas so
that they're going to be likely to show this difference, if
it exists. It wasn't guaranteed that it existed, but we wanted
to stack the deck in favor of seeing it, if it was there and
so it's for that reason that we went through and tried to
code these things as personal, quote-unquote emotional, or not.
Just to give ourselves the best chance of seeing those areas
activate if, in fact, it was an emotion that explained the
differences.
PROF. SANDEL: I don't want to take up other people's
time.
CHAIRMAN KASS: Alfonso, you want to go ahead?
DR. GÓMEZ-LOBO: I think, Michael, this is
going to go down your lane as well because actually in a way
I'm going back to the question I asked of the previous session.
And I'll ask it just again as a matter of perplexity and openness
in the following sense. The experiments to me sound fantastic.
I really enjoy reading all of this data because I think it's
very interesting to know the physiological correlates of our
emotions. That's not a new project, of course. There have been
efforts in the past, though not with this level of accuracy and
sophistication. So I find that very welcome.
And I'm referring to your published papers, where
I can be a bit more accurate. What you do
in the first paper is you show that there are neural correlates
to the moral judgments, but of course, this doesn't tell us
whether the moral judgments are true or false, right?
And I tend to — if you allow me, I would like to emphasize
that because, of course, the question whether they're true
or false, again, the answers to that question are not going
to come from more scans and more MRIs, et cetera. I think again
they depend on other considerations and if I may make a suggestion,
for instance, the traditional doctrine of the principle of
double effect will give you a lot for the trolley case because
the trolley case is set up by utilitarian philosophers. I
mean it's an example that appears over and over in people
who believe that the morality of actions has to be decided
by outcomes and that's why it sounds as if there were an inconsistency
in the footbridge and in the other example because the outcome
is the same, and yet there are remarkable differences, I think,
in the two actions.
Now with regard to the Ultimatum Game, again, for me there's
something similar. The Ultimatum Game, I'm not surprised
by the outcome because again it's a very old case. This is
described very neatly by Aristotle in The Rhetoric.
It's the case of anger. Anger is the reaction, the desire
of revenge for a perceived injustice. So for me, the question
is well, is it unjust? If Dr. Kass offers me $1 for every $9
he gets, I would have to concede that it is just and fair
because he does a lot more work than I do and has much better
insights, et cetera. So in that question, the question of
the fairness, the initial question of the fairness, again,
it seems to me, is not going to be a question we can answer
by observing this behavior.
So I'm delighted with the idea of the correlation. I'd be
a little bit more worried if you're talking about causation,
in other words, your brain fires up and then you refuse the
offer. It would seem to me that the sequence would have to
be that you perceive somehow, you grasp that there's an injustice
there which may or may not be true and then, of course, your
emotions get fired up. Isn't there something like that?
DR. COHEN: I can't tell you the moment at which the
injustice is perceived. Maybe some day we'll be able to do
that too, but that experiment hasn't been done. So I can't
tell you the answer to that last question.
I also agree that neither this study in and of itself nor 10 more
like it on their own will answer the question of whether a
9/1 split is just or unjust any more than kind of asking people
will. I think what it may do is reveal where our intuitions
about the justice or injustice or in any event our emotional
responses to the circumstances come from over the long term.
This experiment alone clearly doesn't do that. But over the
long term a deeper understanding of how these mechanisms work,
what are their trigger points, what are the circumstances
to which they seem tuned, which ones they are not tuned to, will
reveal for us where our kind of common sense comes from.
And then — this is kind of the point we've been dancing
around in this whole discussion: what do you make
of that? Do you decide well that's an important insight — the common sense is a good compass for what's right and
just or not. And that's not for the neuroscientists to decide,
but it's certainly going to be — that discussion I think
will be contributed to by our understanding of what common
sense is.
DR. GÓMEZ-LOBO: Just as a suggestion, do we
need to go back to evolution? Why not just appeal to a present-day
understanding, say, of fairness? In a democratic society, we
expect to get paid more or less the same.
DR. COHEN: I think there are many determining factors.
I pick evolution because it's a simple and easily described
one and because there may be parameters of the processing
mechanisms that really are determined by very long-standing
and old influences, but that doesn't preclude the sort of
influence that you describe which I don't doubt for a second
is there.
CHAIRMAN KASS: Folks, we're already a little over.
We'll go a little longer because there are people in the queue
and I don't want to shortchange them, but I have Rebecca,
Gil, Mike Gazzaniga and Mike Sandel for a very brief reprise
on this and then I'll just have to call it.
Rebecca Dresser, please.
PROF. DRESSER: I think you've both been provocative
in showing us how difficult it is even to sort through the issues
and try to figure out what to make of this information. In two
of these papers, your colleague Dr. Greene's and in a way your
first paper, you talk about, as you were just saying, the use
we can put this knowledge to; your colleague talks about moral
maturation. We will inevitably
change ourselves in the process and reject some values and
retain others and so forth.
I mean part of this whole examination has to be well how
good are human beings at putting self-knowledge to use in
a beneficial way and sometimes we do and many times we don't.
So I guess that would be something to study as well.
When I was reading a lot of philosophy of mind, I think it
was Donald Davidson who said something that stuck with me
which was yes, this mind, brain is a physical system and yes,
at some level we could reduce it, but it will be like the
weather, that is our ability to predict, our ability to control
will never be at the level where we ought to make important
decisions or construct our lives around that.
So I wonder what you think of that and then in relation to
that, some of the possible uses that Dr. Michels was referring
to, I was thinking about how we would study these predictive
approaches, especially the idea that you would be able to look at
a six-month-old infant's brain and say well, this
person looks like she's going to end up a juvenile delinquent,
so we better do X, which would always be probabilistic.
Now in order to make that judgment you would have had to
have a study where you're following all these kids with different
kinds of brains throughout their lives, really, if you want to
make a lot of these social judgments. And then what kind
of percentage would be enough to trigger some kind of intervention?
And then you'd have to do all the studies to show that the
interventions were effective, and you'd have to figure out
outcomes, such as: for your six-month-old granddaughter, would it
be better if she took ballet or played soccer? Well, what
do you mean better in what respect? She gets more prizes,
she's happier, you're happier.
So it does seem to me to be a very complex process to think
about well how would this then go into actual application
and use. So I just encourage thought about those kinds of
questions.
CHAIRMAN KASS: Thank you. Gil Meilaender.
PROF. MEILAENDER: I have a comment and a question.
They follow on a lot of other things that have been
said. Just a comment. It would take way too long to pursue
it, but I just can't resist saying, I think there are rational
reasons for distinguishing all your versions of the trolley
problem from the footbridge problem. I mean I don't actually
think it's hard to do, in fact. But the question has to do — I mean you — the structure of the way you move is that
you think that certain kinds of decisions that we make are
not necessarily good or wise ones. You suggest that once we come to understand their roots, their perhaps usefulness
at an earlier time, it will free us to get rid of them. And
it's that that I want to think about.
There's a story about a guy who was driving along, got a
flat tire, pulled along the side of the road and turned out
he pulled over right next to an institution where emotionally
disturbed people stayed. He gets out. He jacks up the car
and takes the tire off and a resident of the institution is
standing there watching him the whole time and then he's put
the lugs in the — what do we call it, the hubcap and he
accidentally kicks it over and they all roll down in the ditch
into the mud and the tall grass and he can't find them. He's
got his tire off and he's standing there looking at it and
he just can't figure out what in the world to do. And the
resident of the institution who has been standing there watching
all along pipes up and he says you know, I believe if you
take one lug off of each of the other three tires and use
it to put that tire on, it will serve you just fine until
you can get somewhere and get it taken care of. And the man
sort of looks at him with a really astonished look on his
face. He's amazed to get this answer to his problem from
this particular source and the resident of the institution
says well look, I may be mad, but I'm not crazy.
We don't know what causes moved him to offer that suggestion,
maybe he thinks he's the mechanic for the Queen of England
and an expert on these things, but it's a very wise piece
of advice. It's true to the situation.
It seems to me that there's a — the fundamental distinction
between causes and reasons needs to be paid attention to here
in this work. Whatever the causes that might lead to certain
kinds of behavior, that doesn't in itself tell us whether
the behavior is wise, whether it's good, whether it's in accord
with the truth. And it's that kind of fundamental distinction
between causes and reasons that matters, it seems to me, insofar as
you want philosophical payoff from the work. There are other
kinds of payoff, and that's fine, but if you want philosophical
payoff from the work, then one has to get clearer and cleaner
on that distinction than it seems to me so far one can find.
DR. COHEN: I don't really have anything to add.
I think I agree with you in principle, but — well, maybe — I don't know if I agree with you in principle or not.
I guess —
PROF. MEILAENDER: I don't think you do.
DR. COHEN: Maybe I don't. I would just say that
it's not guaranteed that an understanding of cause will lead
to reason, but I think it can inform. I think it's knowledge,
and knowledge will inform us when we have to make decisions;
an understanding of why we do things is one contributing factor,
I think, to our decisions about what it is that we do. It
may not be the only one, but it's a useful one.
CHAIRMAN KASS: Mike?
DR. GAZZANIGA: It's really a shame that Jim Wilson
couldn't be here today.
I just finished — I'm apparently the last one
on this Council who just finished his book from ten years ago,
The Moral Sense, which is a beautiful book. It slugs
through a ton of social science data to come up with a hypothesis
that there is a biologic sense of morality and I think the
work of Jonathan and his colleague, Josh Greene has really
opened up a fantastic opportunity to look at that.
I want to ask one question — I'm dying to know your answer
to it. There's a colleague of ours and mutual friend, Marcus
Raichle, who is talking these days about what brain
images mean and, to go back to your first point, to bring it
back to your own work, Marcus Raichle has said when we look at
these brain activations and Jonathan was very careful to always
use that word, you'll notice, we really don't know if the
activation is an excitatory event of the brain area or an
inhibitory event. And so when we start pulling together our
models where we're pulling different experiments together
and we're speaking of them as activations, when in fact, maybe
in one experiment it's an inhibition and in one experiment
it's an excitation, how do we actually come to think of these
data, and what is the task of neuroscientists trying to figure
out the underlying mechanisms?
DR. COHEN: I think that's another great question.
I guess I just have to restate what I said at the beginning.
These methods are still really crude and our understanding
of exactly what they're telling us about the brain, no less
about the mind is still in its infancy and I hope that you
take the data that we have published and what I talk about
today as kind of more illustrative examples rather than necessarily
indications of truth, partly in the spirit of the sorts of
uncertainties that Mike points to.
That said, we can guard ourselves against certain sorts of
silly interpretations. We know — our knowledge about
how these measurements reflect neural activity is growing and
as yet most of the assumptions that we've made as they've
been addressed by further study, seem to have been right.
That's not to say that at some point we'll find out something
really fundamentally wrong with those assumptions, but there's
no evidence yet, for example, that when you see a pattern
of activity that shows pretty striking similarity in one case
to similar patterns of activity in another case that something
fundamentally different has happened. Nobody — in the few
studies in which people have gone in and stuck electrodes
in and measured blood flow as well as neural activity, it all
kind of lines up.
There are other issues, you know. What is the rest condition
really telling us? Those are the things that Marcus
Raichle has been most concerned with recently. What does the
rest condition tell us, how stable is that? There's lots
more to be learned and no doubt it's going to shape and color
future work and our ability to interpret these results, but
I've got to say so far it's really pretty impressive how much
validation the findings that have come with these methods
have received, when it's been done properly, from convergent
methods. So that's not a particularly satisfactory answer
I know, but —
CHAIRMAN KASS: Michael Sandel, very briefly.
PROF. SANDEL: In the Ultimatum Game, I'm playing
with Leon. He offers me $2 and he'll keep $8. I have two
desires in trying to decide whether to accept. I don't want
to forego the $2. And I don't want to reward greed. How
do you know which of those desires is rational and which is
in need of explanation?
DR. COHEN: Given the circumstances, I think they're
both in need of explanation. I'd like to understand both.
PROF. SANDEL: Okay.
DR. COHEN: But what's intriguing about the circumstances
created in the laboratory, at least on surface consideration,
is that your desire to punish greed doesn't have any immediate
consequential value, right? Now you can say it's reflective
of a generic thing that you don't want to turn off in this
one case because it's not going to do any good, but I can
tell you in any event, for whatever this is worth to you,
that in this case it's not going to do you any good.
Now if you think that that's acceptable, then that's fine,
then I guess there's nothing more to be explained, but I find
that intriguing because I don't think people tend
to behave in ways that, on average, are not going to do them
good.
And so now we've created, at least immediately, a rarified
and contorted gap between what the actual presumably normative
goal is and what the behavior is and to me, that is in need
of explanation.
PROF. SANDEL: Suppose you're right about that,
then a further part of your claim, this is the ambition, is
to say that the emotional desire of the two, namely to punish
greed, rather than reap the $2, has certain features in common
with the desire not to push the man off the — with the view
that it would be immoral to push the guy off the foot bridge.
Now in virtue of what are they the same kind of thing or
is it just that they both happen to light up the same part
of the brain? Or is there something —
DR. COHEN: They don't. See, this is why I started
my response to your initial question with a concern about
the use of the word "emotion." I'm not meaning
to lump all things that I probably very sloppily designated
as emotional as being the same. They have a family resemblance,
okay? They're not the same. So in fact, in the data that
I showed you, the insula was the primary area that seemed
to predict behavior in the economics task and it was other
areas, the posterior cingulate, the angular gyrus, et cetera,
predicted behaviors in the moral reasoning test. So they're
not the same emotional response. And furthermore, I'll make
that claim even more extreme by saying that I think that the
desire to reap the $2 is also an emotional response in some
sense. It's evaluative.
So it's just that that is all — that also happens to be
more universally rational in that case because the punishment
is not reaping any definable goal.
PROF. MEILAENDER: Other than witnessing to the
good of justice in the world.
DR. COHEN: But that doesn't have any meaning to me,
I've got to say, other than that the world will be a better
place for that to happen.
PROF. SANDEL: Does your science depend on this
opinion of yours?
DR. COHEN: I should hope that the science doesn't
depend on this opinion. This hypothesis, I think, does, yes.
But the science doesn't, no. I mean the same experiment could
have been conducted by somebody who believed exactly the opposite,
presumably that's the beauty of science, right? And then
the answer will help inform our understanding.
PROF. GEORGE: But can you be playing what you
call the game of science if you presuppose that at least one
possibility is we have uncaused behavior, behavior that's
rationally motivated, just as such? It's motivated by his
grasp of the value of fairness as an objective intrinsic value.
DR. COHEN: Obviously, this is — we're back to the
same point that will take much longer to have a reasonable
discussion about and I hesitate to make a comment in response
because I just feel more and more the comments are going to
come across as glib and uninformed rather than considered,
but I'll do it anyway and say I don't think there is much
meaning in science where you can't measure, or at least in
principle be able to measure, what the outcomes are and what
the factors, the causal factors, are.
And so I guess the answer in a glib way, right, if I were
forced to give a single one-word answer, would be no.
PROF. GEORGE: To be clear, I'm not saying that
if you were analyzing, if you were doing a moral, philosophical
analysis that presupposed that there really were basic reasons
for action and rationally motivated action as a possibility,
I'm not saying that the philosophical analysis would be science.
It would be something different.
My question is can you play the game of science and believe
that there is also this other thing that's not science, that
is rational that deals with realities that science can't measure
and therefore another discipline has to do the work.
DR. COHEN: Absolutely. Anybody is welcome. It's
a totally ecumenical game. Anybody who believes anything
they want can play the game of science. Those beliefs just
don't have much place in the playing of the game of science.
So if you're asking whether you can be a mystic and still
be a scientist, sure. You know? As I said at the beginning,
I personally am agnostic or at least for the purposes of this
discussion. I won't reveal to you what my beliefs are about
whether or not there are transcendental realities, okay?
That's my own personal prerogative to believe or not believe.
I am here speaking as a scientist and that's why I won't tell
you what I believe about that, okay?
But as a scientist I can tell you that those claims and discussions
about those factors just don't factor in. During the break
I was saying, you know, you can't use the doubling cube in
Monopoly.
PROF. GEORGE: But that means there's a possibility
that it's not an emotional reaction per se that's motivating
the decision in the Ultimatum Game. It's a reason, but not
the kind of thing that's susceptible of scientific explication.
DR. COHEN: And by that I mean something very strong.
It will never be able to be measured, or used in predicting
measurable things. Now if you're willing to accept that that's
the stakes, then yes. But I want to be clear. You can't
have it both ways. You can't have that thing out there, influencing
measurable things in systematic ways, right and yet it not
be physical or somehow be explainable in physical terms, right?
So —
PROF. GEORGE: Physical meaning causal? Measurable.
CHAIRMAN KASS: Could I — the hour is really late
and there are two people who want small things, but look,
this is a philosophical point that has some scientific purchase.
It's not — I want to change slightly what Robby is saying.
Michael Sandel put very nicely to you the two choices, the
desire for the money and the desire to punish Leon, treating
them both as in a way equally capable of being formulated
cognitively, but both of them carry some kind of an appetitive
characteristic.
DR. COHEN: Absolutely correct.
CHAIRMAN KASS: Conceding that the line between cognition
and emotion is much more blurry than people have hitherto
thought and your own contributions to the thinking about morality
through this kind of study, I just am delighted to see, but
there is an old philosophical teaching which says thought
alone moves nothing and that means that any kind of choice
is animated not only by some kind of cognition calculation
of consequences, but by some kind of desire for the outcome
and therefore it would seem that if you went looking for — you might go looking for not just the cognitive aspects
of what you call the kind of clear choice because the result
is obvious, but you might find other elements of so-called
emotional life that are at least as deeply seated as the root
of anger at a slight or revulsion at being the one who causes
pain to a fellow human being in your face.
So I guess — I don't want to be tied to the particular
remarks, but it does seem to me that the exposure of the multifaceted
and perhaps always emotional character or always not simply
cognitive character of our choices would produce a much richer
kind of anthropology following the lines you've already started
and they could be absolutely separate from the kind of moral
theory that you and your friend Josh Greene seem to have sort
of bought into at the beginning.
I don't think you need that to show us all kinds of really
deep and rich things about the way the mind and brain works
when we make decisions. I guess this is partly what Michael
Sandel is interested in. I think it's partly what Robby is
interested in.
DR. COHEN: No, look, first of all, let me say that — let me reiterate what I think I've already said which
is I don't think anything about the personal interpretation — the inferences that I've drawn from our data or the theories
that led us to these experiments should circumscribe the importance
that we're really trying to communicate which is these tools
can raise these sorts of discussions and inform — raise
these sorts of questions and inform these sorts of discussions.
That said, I feel like I should have the right one last time
to defend my view of these, of our findings. And I regret that
I used the terms cognitive and emotional because I think that
paints way too kind of dichotomous a view of what I think
is going on and so in that sense I totally agree with you.
Emotions are coming into play in the sense that emotions reflect
motivations, right, and evaluative decisions; based on evaluation,
just about everything, if not everything, we do has got to
be traced to that.
So let me now restate. I used those terms because — you
never quite have the read on your audience, and you don't want
to speak too high, you don't want to speak too low, and those
are accessible terms that people have intuitions about, and
at least it got the conversation going.
That said, obviously, let me make the more technical
point the way I would be most comfortable making it, which is
that the decision to make — to go for the $2 or the $1,
the penny, whatever it is and the decision to punish have
essentially qualitatively different statuses in a structure
of processing or in the kind of a cognitive architecture as
I would imagine it. And those are describably different, okay,
and there's a family resemblance among the punishing kinds of
senses, inequity, don't hurt your brethren, as opposed to doing
utilitarian calculations of what's likely to come in the biggest
possible picture that you can calculate. Those are different, fundamentally
different sorts of calculations that require different operations
that benefit by different computational architectures or styles
of computational architectures that I think are what are going
to be reflected in these different brain areas when we really
understand what's going on. We're going to see that they're
suited to making different kinds of calculations. Some are
coarse and quick, others more deliberate, but accommodating many
more degrees of freedom. That's the more interesting way
that I think these things will parse out and the terms cognitive
and emotion are just loose descriptions of what those much
more detailed and I think formal accounts of what's driving
these processes will look like.
That said, I still think there are these family resemblances
among the things that evolved earlier, again, evolve is another
kind of heuristic or that develop early in life or that are
subject to strong cultural influence versus capacities that
are less so.
CHAIRMAN KASS: Paul wants a footnote and we're going
to break.
DR. MCHUGH: Just a footnote. I liked that last answer.
See if we are on the same wavelength. When I was taught by
my great teacher Walle Nauta, he said this is
the way to think about the brain. He said there's the sensory
neuron and there's the motor neuron and then there's the big
internuncial net between. Okay? And that internuncial net
has two elements. Actually, he said three, but for our purposes,
two. A low-fi and a high-fi system. The high-fi is the
lemniscal system that goes through the thalamus and up to
the cortex. The low-fi goes up the reticular system and into
the limbic system.
And he said the analytic — the great internuncial net is
for analytical purposes. Reason comes from both, okay? The
emotions and drives are in the low-fi. The perceptions and
the details are in the high-fi. But both are together in
trying to make a decision and to ultimately make the right
decisions. And so if you and I are talking, if what you're
saying is that you don't like the words emotional and cognitive,
then you are not trying to drive everything into the
high-fi system. You're prepared to let the low-fi system
give reason and purpose to our decision. Isn't that right?
DR. COHEN: Absolutely. I just want to understand
in what sense it's being reasonable. I'm not saying it's
not reasonable at all. It pays to protect the reputation
in some situations. It pays to not hurt your brethren in
some situations. Don't get me wrong. I'm not — several
people I think have tried to put in my mouth this, the things
I'm calling lower order or older are bad, no. They're just
circumscribed and that's what we can learn. We can learn
something about how they're formulated, what they're good
at and by inference thereby, what they're not good at and
that's useful and important information.
CHAIRMAN KASS: Bob Michels has asked for a closing
comment.
Mike, this is really the last.
DR. MICHELS: The last hour, I think, illustrated
something that was said between Leon and me jokingly at the
beginning about the nonexistence of neuroethics and later
in my comment about the value of neuroscience for your dialogue.
I think the value is that it enriches our understanding of
psychology and psychology is critical for your dialogue.
I don't think there's a direct relationship between neuroscience
and moral philosophy and I think skipping psychology leads
to the kinds of conversations you've had in the last hour.
PROF. SANDEL: Then we should skip it more often.
CHAIRMAN KASS: I want to express the Council's thanks
to Bob Michels and Jonathan Cohen for really a wonderful afternoon.
The presentations were illuminating, provocative and I think
this has been one of the most interesting conversations this
Council has had.
So thanks to both of you.
(Applause.)
Those who are staying for dinner we meet up in the usual
room at 6:30 for drinks. Dinner is at 7. Eight-thirty tomorrow
morning, 8:30, we have guests.
(Whereupon, at 5:48 p.m., the meeting
was concluded.)