Wednesday, September 24, 2008

Neuroscience: Where materialism misleads

In a recent Newsweek article (September 13, 2008), Michael Craig Miller, M.D., advises that

The brain is the mind is the brain. One hundred billion nerve cells, give or take, none of which individually has the capacity to feel or to reason, yet together generating consciousness. For about 400 years, following the ideas of French philosopher René Descartes, those who thought about its nature considered the mind related to the body, but separate from it. In this model—often called "dualism" or the mind-body problem—the mind was "immaterial," not anchored in anything physical. Today neuroscientists are finding abundant evidence of an idea that even Freud played with more than 100 years ago, that separating mind from brain makes no sense. Nobel Prize-winning psychiatrist-neuroscientist Eric Kandel stated it directly in a watershed paper published in 1998: "All mental processes, even the most complex psychological processes, derive from operations of the brain."
This is a marvellous article, in its way, because the author does not seem to realize that he has simply not demonstrated what he asserts.

The key word is "derive" as in "derive from operations of the brain." Do they derive from operations of the brain or are they instantiated in operations of the brain?

First, when someone tells me "One hundred billion nerve cells, give or take, none of which individually has the capacity to feel or to reason, yet together generating consciousness," my first thought is

1. "If true, that would be magic,"

and my second thought is

2. "The nerve cells are not generating consciousness, they are facilitating it."

Miller's idea is somewhat like assuming that there are little people inside the TV set enacting the programs. No. The set is receiving a signal.

Of course, if you don't believe that the signal exists, you must assume that the set is generating the little people. Yes, the set is an intricate device, but the task of generating the little people is actually beyond its powers. Receiving a signal, however, is not. So it is more reasonable to believe that the set is receiving a signal. The relationship between the mind and the brain can be understood somewhat like this.

That said, Miller's article is a valuable introduction to some recent neuroscience findings, chiefly these two:

1. States of being happy or sad, unlike fear or anger, are very complex, involving dozens of brain areas. For example:

The common emotions of sadness and happiness are a problem for researchers. Depression and mania are core areas of study for a neuroscientist. But everyday ups and downs are so broadly defined that researchers have a hard time figuring out what exactly to study. They note activity in virtually every part of the brain. Last year Drs. Peter J. Freed and J. John Mann, publishing in The American Journal of Psychiatry, reported on the literature of sadness and the brain. In 22 studies, brain scans were performed on nondepressed but sad volunteers. Sadness was mostly induced (subjects were shown sad pictures or films, asked to remember a sad event), although, in a couple of studies, subjects had recently experienced a loss. In the aggregate, sadness appeared to cause altered activity in more than 70 different brain regions. The amygdala and hippocampus both show up on this list, as do the front part of the brain (prefrontal cortex) and the anterior cingulate cortex. A structure called the insula (which means "island") also appears here—it is a small region of cortex beneath the temporal lobes that registers body perceptions and taste.
That's not really very surprising, because we all have our own definitions, coping methods, and tolerances for being either happy or sad. Eeyore the Donkey was famously happy at being sad - and I am quite sure that Eeyore is based on a human model, not an asinine one.

2. Sam Harris, one of Richard Dawkins's atheist Four Horsemen, tried to study "belief" vs. "unbelief." He and his colleagues discovered,

Earlier this year the Annals of Neurology published an article by Sam Harris and colleagues exploring what happens in the brain when people are in the act of either believing or disbelieving. In an accompanying editorial, Oliver Sacks and Joy Hirsch underscored the significance of what the researchers found. Belief and disbelief activated different regions of the brain. But in the brain, all belief reactions looked the same, whether the stimulus was relatively neutral: an equation like (2+6)+8=16, or emotionally charged: "A Personal God exists, just as the Bible describes."
I confess I do not know why anyone should be surprised unless they assume that belief in God is by definition a big emotional experience. But is it?

Look, here are the two propositions:

(2 + 6) + 8 = 16

A personal God exists, just as the Bible describes.

Why is the second proposition assumed to be emotionally charged? Most North Americans simply assume that a personal God exists. Large numbers, and perhaps a majority, consider the Bible a reliable source on the subject.

Even professional religious leaders cannot be emotionally charged about that most of the time. The human body, let alone the mind, can only sustain so much purely emotional excitement. Much actual excitement will be intellectual exploration and the rest will be good works and putting up with difficult people and circumstances.


Evolutionary psychology: Misunderstanding superstition

New Scientist has recently announced that we now know the origin of superstition. Ewen Callaway tells us (10 September 2008):
Darwin never warned against crossing black cats, walking under ladders or stepping on cracks in the pavement, but his theory of natural selection explains why people believe in such nonsense.

The tendency to falsely link cause to effect – a superstition – is occasionally beneficial, says Kevin Foster, an evolutionary biologist at Harvard University.

For instance, a prehistoric human might associate rustling grass with the approach of a predator and hide. Most of the time, the wind will have caused the sound, but "if a group of lions is coming there’s a huge benefit to not being around," Foster says.
Foster and a University of Helsinki colleague, Hanna Kokko, sought to model superstition in mathematical language, using a definition that could apply to animals and bacteria as well as humans, and found that "As long as the cost of believing a superstition is less than the cost of missing a real association, superstitious beliefs will be favoured."
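Their stated finding can be sketched as a simple expected-cost comparison. The sketch below is only an illustration of the quoted sentence, under assumed names (`p_real`, `cost_belief`, `cost_miss`) and a one-shot simplification - it is not Foster and Kokko's actual model:

```python
def superstition_favoured(p_real, cost_belief, cost_miss):
    """One-shot sketch of the quoted Foster-Kokko result (illustrative only).

    Acting on the suspected association (e.g. hiding when the grass
    rustles) always costs `cost_belief`; ignoring it costs `cost_miss`
    whenever the association turns out to be real, which happens with
    probability `p_real`. The "superstitious" response is favoured when
    its cost is lower than the expected cost of missing a real association.
    """
    return cost_belief < p_real * cost_miss

# Rustling grass: hiding is cheap, being caught by lions is ruinous,
# so acting on the association pays even when it is rarely real.
print(superstition_favoured(p_real=0.01, cost_belief=1, cost_miss=1000))  # True
```

Note how lopsided the costs have to be: with a 1-in-100 chance that the rustle is a lion, hiding pays only because being caught is a thousand times worse than ducking unnecessarily.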

The problem is that the quality described in the New Scientist article as "superstition" is more commonly called "prudence" (= avoiding foreseeable risks).

It would help if we began by understanding what superstition actually is.

Superstition is not a "false" link between cause and effect. If it were, many health fads would be superstition. But they are not; they are merely unsubstantiated or poorly substantiated claims.

Superstition is the belief that the connections between events are occult (hidden) and that bad events can be caused or prevented by understanding and working with these hidden causes. For example, here's a superstition: It's seven years' bad luck to break a mirror. Why? Well, I've heard people theorize that at one time mirrors were very expensive, and therefore it might take seven years to save enough to replace one. And later on, people just somehow continued to believe the idea even though mirrors had become cheap.

There is a name for that kind of thinking - euhemerism, in honour of Euhemerus, a 3rd-century BC Greek philosopher. Euhemerus argued that the Greek gods were originally just mortal heroes whose exploits were embellished. In other words, he sought a pragmatic explanation for belief in the gods, in the same way that Foster and Kokko seek a pragmatic explanation for superstition.

But Euhemerus missed the transcendent and numinous qualities that people sought in the Greek gods, the qualities that caused the 19th-century poet Wordsworth - trapped in industrial England - to exclaim,

Great God! I'd rather be
A Pagan suckled in a creed outworn;
So might I, standing on this pleasant lea,
Have glimpses that would make me less forlorn;
Have sight of Proteus rising from the sea;
Or hear old Triton blow his wreathèd horn.
In the same way, Foster and Kokko missed the point about superstition - what makes a belief a superstition is not that the supposed connections between events may be false but that they are occult. They are not, in any event, normal connections.

Now back to the mirror: The true reason that breaking a mirror was anciently considered bad luck is that one's reflection was thought to be an image of one's soul, one's life. So the shattered image was an omen of death:

The mirror crack'd from side to side;
"The curse is come upon me," cried
The Lady of Shalott.

- Tennyson
Is the belief false? That's difficult to say because, while it is false for the person who disregards it, it might be true for the person who believes it.

That is, if you break a mirror, nothing happens, of course. But the person who honestly believes that she will become very ill could trigger the flare-up of a chronic illness. That's called a nocebo effect.

The difficulty then is that the person who believes in superstitions and occult causes may see genuine confirmation of her belief. So one will not get very far in discussing the matter with her by simply informing her that her belief about the broken mirror is false.

It might be wiser to help her see that the power that she attributes to the image in the mirror actually resides in her own mind. It is quite real, but it is not what she thinks and she has power over it.

But, back to Foster and Kokko for a moment, we can now see why their "simple definition for superstition that includes animals and even bacteria" is not going to be very helpful for humans.

Note: Mario Beauregard and I discussed the nocebo effect in The Spiritual Brain.
Image note: The image is from Wiki Commons, a rendering of Tennyson's Lady of Shalott, doomed to die - or so she believed - if she looked out the window, so she had used a mirror instead.
