Robots that can feel? Really? No, not really.
I've been following with interest the attempts to develop human-like (and dog-like) robots, so it was good to hear from a scientist involved in the work. Mirko Petricevic of the Kitchener-Waterloo Record interviewed Rosalind Picard, director of affective computing research at the Media Lab at the Massachusetts Institute of Technology in Cambridge, Mass. A devout Christian (and former atheist), she had some interesting things to say, including,
She is not making machines with feelings, she emphasizes.
Human faces can make 10,000 different expressions, she says. In the course of a 10-minute conversation, a person's face makes between 300 and 400 of them.
Most of the people we talk to can tell, by our facial expressions, when we're frustrated.
In general, machines can't.
Her hope is to develop machines that act like they know our feelings. However,
"None of this technology actually knows your feelings," Picard notes.
That doesn't mean that scientists won't ever develop a machine that can read our feelings, she adds.
"But we're nowhere near there yet."
I suspect they never will be anywhere near there. The main problem is that, as Mario Beauregard and I reflected in The Spiritual Brain, the human mind is more like an ocean than a machine (and so is the brain it inhabits). We can only approximate interpretations of complex emotional states. Any interpretation definite enough to be considered definitive must soon give way to another interpretation. Think of all the great actors who have interpreted Hamlet, for example, or Queen Gertrude or Lady Macbeth. Trying to nail feelings down for good is not the way to go.
Petricevic's article is a great read.