Wednesday, April 02, 2014

Can body language be read more reliably by computers than by humans?

This post continues the thread started in my March 20 post "A debate on what faces can tell us." Enormous effort and expense have gone into training security screeners to read body language in the hope of detecting possible terrorists. John Tierney notes that there is no evidence that this effort at airports has accomplished much beyond inconveniencing tens of thousands of passengers a year. He points to more than 200 studies in which:
...people correctly identified liars only 47 percent of the time, less than chance. Their accuracy rate was higher, 61 percent, when it came to spotting truth tellers, but that still left their overall average, 54 percent, only slightly better than chance. Their accuracy was even lower in experiments when they couldn’t hear what was being said, and had to make a judgment based solely on watching the person’s body language.
A comment on the March 20 post noted work by UC San Diego researchers who have developed software that appears to decode facial movements more successfully than human observers, because it more effectively tracks the dynamics that mark whether a movement is driven by voluntary or involuntary underlying nerve mechanisms. Here are the highlights and summary from Bartlett et al.:

Highlights
-Untrained human observers cannot differentiate faked from genuine pain expressions
-With training, human performance is above chance but remains poor
-A computer vision system distinguishes faked from genuine pain better than humans
-The system detected distinctive dynamic features of expression missed by humans

Summary
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain. Two motor pathways control facial movement: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.
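
As a rough illustration of the kind of pipeline the summary describes (per-frame measurement of facial action units, followed by pattern recognition on their dynamics), here is a minimal Python sketch using numpy and scikit-learn. It is an assumption-laden toy, not the authors' system: the action unit signals below are random placeholders standing in for the output of a real facial-expression toolkit, and the helper names dynamic_features and features_for_clip are invented for this example.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def dynamic_features(au_series):
        """Summarize one action unit (AU) intensity time series by its
        dynamics: peak amplitude, relative time-to-peak, and the maximum
        rise and decay speeds between frames."""
        peak = au_series.max()
        time_to_peak = np.argmax(au_series) / len(au_series)
        velocity = np.diff(au_series)
        return np.array([peak, time_to_peak, velocity.max(), velocity.min()])

    def features_for_clip(clip):
        # clip: array of shape (n_aus, n_frames) of AU intensities
        return np.concatenate([dynamic_features(au) for au in clip])

    # Placeholder data: 40 clips, 10 AUs, 90 frames each; labels
    # 1 = genuine pain, 0 = faked pain.  In the actual study the labels
    # came from the experimental protocol, not from random numbers.
    clips = rng.random((40, 10, 90))
    labels = rng.integers(0, 2, size=40)

    X = np.array([features_for_clip(c) for c in clips])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    print("cross-validated accuracy:",
          cross_val_score(clf, X, labels, cv=5).mean())

The point of the toy is simply that the discriminative signal lives in how an expression unfolds over time, which is exactly the information the human observers in the study, even after training, failed to exploit.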

4 comments:

  1. So which would be more accurate, a lie detector test or a computer going on the differences in your facial expression?

  2. I don't really know, but I suspect it would be easier to deceive the lie detector test than the computer facial muscle recognition algorithm.

  3. Wow ... the places technology is taking us.

  4. Anonymous, 5:35 PM

    Deric, thanks for following my suggestion about this story. The obvious current use for this technology is that people in medical fields would gain a reasonable way to assess whether or not a patient is faking pain, perhaps in order to get narcotics. My question is not only how good this pain detection will become, but also to what extent other sensations might be assessed this way. With discreet facial monitoring, such technology could perhaps roughly quantify how good or bad a given life happens to be, though I would expect direct neurological sensors to ultimately be far more useful.

    Such technology does fall right into the lap of an extreme Utilitarian like myself, however. For the past couple of months I've been raising quite a storm on Peter's "Conscious Entities" blog, which is a great "pure philosophy" site for anyone interested. Conscious Entities is here.

    Philosopher Eric. My own site is here.
