
Behavior and Gesture Analysis

Dyadic Synchrony as a Measure of Trust and Veracity


 

We investigate how the degree of interactional synchrony can signal whether trust is present, absent, increasing, or declining. We propose an automated, data-driven, and unobtrusive framework for deception detection and analysis in interrogation interviews using visual cues only. The framework consists of four components: face tracking, gesture detection, expression recognition, and synchrony estimation. It automatically tracks the gestures and expressions of both the subject and the interviewer, extracts normalized, meaningful synchrony features, and learns classification models for deception recognition. To validate the proposed synchrony features, we conducted extensive experiments on a database of 242 video samples, which show that these features are highly effective at detecting deception.
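As a rough illustration of the pipeline's last two stages, the sketch below computes a simple windowed cross-correlation between two per-frame motion signals (one per interactant) and feeds the resulting summary statistics to an SVM. The correlation feature is only a stand-in for the normalized synchrony features described in the paper, and all function and parameter names are illustrative assumptions.

# Sketch: windowed circular cross-correlation as a simple stand-in for a
# dyadic synchrony feature, followed by an SVM deception classifier.
# Assumes two per-frame motion signals (e.g., head-motion energy for the
# interviewer and the interviewee) were already produced by the tracker.
import numpy as np
from sklearn.svm import SVC

def synchrony_features(sig_a, sig_b, win=60, step=30, max_lag=15):
    """Per-window peak correlation and the lag at which it occurs."""
    feats = []
    for start in range(0, len(sig_a) - win, step):
        a = np.asarray(sig_a[start:start + win], dtype=float)
        b = np.asarray(sig_b[start:start + win], dtype=float)
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        # Circular (wrap-around) correlation at each candidate lag.
        corrs = [np.mean(a * np.roll(b, lag))
                 for lag in range(-max_lag, max_lag + 1)]
        best = int(np.argmax(np.abs(corrs)))
        feats.append([corrs[best], best - max_lag])
    feats = np.asarray(feats)
    # Summarize the whole interview with a few statistics.
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# Hypothetical usage: X holds one feature vector per interview clip,
# y holds deceptive (1) / truthful (0) labels.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)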

Publications

  • X. Yu, S. Zhang, Z. Yan, F. Yang, J. Huang, N.E. Dunbar, M.L. Jensen, J.K. Burgoon and D.N. Metaxas, "Is Interactional Dissynchrony a Clue to Deception? Insights from Automated Analysis of Nonverbal Visual Cues", IEEE Transactions on Cybernetics, 2014.

Face Tracking

 

Accurate face tracking and 3D head pose prediction (shown at the top left as a 3D vector of pitch, yaw, and tilt) while the face makes various facial expressions as well as out-of-plane rotations. The 79 tracked landmarks corresponding to the eyes, eyebrows, nose, mouth, and face contour are shown as red dots.
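For readers who want to experiment, the sketch below shows one common way to recover pitch, yaw, and tilt from a handful of tracked 2D landmarks using a rigid generic face template and OpenCV's PnP solver. This is not the deformable-model tracker described in the publications below; the 3D template coordinates, focal-length guess, and landmark choice are assumptions made for illustration.

# Sketch: recover head pose (pitch, yaw, tilt) from a handful of tracked
# 2D landmarks using a rigid generic face template and OpenCV's PnP solver.
# Illustrative only; the papers fit a deformable 3D model rather than a
# rigid template, and the template coordinates below are assumed values.
import numpy as np
import cv2

# Approximate 3D positions (mm) of a few stable landmarks on a generic
# head: nose tip, chin, outer eye corners, mouth corners.
MODEL_POINTS = np.array([
    (0.0,    0.0,    0.0),    # nose tip
    (0.0,  -63.6,  -12.5),    # chin
    (-43.3,  32.7,  -26.0),   # left eye outer corner
    (43.3,   32.7,  -26.0),   # right eye outer corner
    (-28.9, -28.9,  -24.1),   # left mouth corner
    (28.9,  -28.9,  -24.1),   # right mouth corner
], dtype=np.float64)

def head_pose(image_points, frame_width, frame_height):
    """image_points: 6x2 array of the corresponding tracked 2D landmarks."""
    focal = frame_width  # crude focal-length guess
    camera_matrix = np.array([[focal, 0, frame_width / 2.0],
                              [0, focal, frame_height / 2.0],
                              [0, 0, 1]], dtype=np.float64)
    _, rvec, _ = cv2.solvePnP(MODEL_POINTS,
                              np.asarray(image_points, dtype=np.float64),
                              camera_matrix, np.zeros((4, 1)))
    R, _ = cv2.Rodrigues(rvec)
    # Standard rotation-matrix-to-Euler decomposition, in degrees.
    sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arctan2(-R[2, 0], sy))
    tilt = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return pitch, yaw, tilt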

Publications

  • Yuchi Huang, Qingshan Liu, Dimitris N. Metaxas: A Component-Based Framework for Generalized Face Alignment. IEEE Transactions on Systems, Man, and Cybernetics, Part B 41(1): 287-298 (2011)
  • Yuchi Huang, Qingshan Liu, Dimitris N. Metaxas: A Component Based Deformable Model for Generalized Face Alignment. ICCV 2007: 1-8
  • Douglas DeCarlo, Dimitris N. Metaxas: Optical Flow Constraints on Deformable Models with Applications to Face Tracking. International Journal of Computer Vision 38(2): 99-127 (2000)
  • Douglas DeCarlo, Dimitris N. Metaxas, Matthew Stone: An Anthropometric Face Model Using Variational Techniques. SIGGRAPH 1998: 67-74
  • Douglas DeCarlo, Dimitris N. Metaxas: The Integration of Optical Flow and Deformable Models with Applications to Human Face Shape and Motion Estimation. CVPR 1996: 231-238

Facial Expression Recognition

 

 

Publications

  • Lin Zhong, Qingshan Liu, Peng Yang, Bo Liu, Junzhou Huang, Dimitris N. Metaxas: Learning active facial patches for expression analysis. CVPR 2012: 2562-2569
  • Peng Yang, Qingshan Liu, Dimitris N. Metaxas: Dynamic soft encoded patterns for facial event analysis. Computer Vision and Image Understanding 115(3): 456-465 (2011)
  • Peng Yang, Qingshan Liu, Dimitris N. Metaxas: Exploring facial expressions with compositional features. CVPR 2010: 2638-2644
  • Peng Yang, Qingshan Liu, Dimitris N. Metaxas: RankBoost with l1 regularization for facial expression recognition and intensity estimation. ICCV 2009: 1018-1025
  • Peng Yang, Qingshan Liu, Xinyi Cui, Dimitris N. Metaxas: Facial expression recognition using encoded dynamic features. CVPR 2008
  • Peng Yang, Qingshan Liu, Dimitris N. Metaxas: Similarity Features for Facial Event Analysis. ECCV (1) 2008: 685-696
  • Peng Yang, Qingshan Liu, Dimitris N. Metaxas: Boosting Coded Dynamic Features for Facial Action Units and Facial Expression Recognition. CVPR 2007

ASL Recognition

The first video demonstrates detection of a wh-question non-manual marker. The tracked face and head are shown on the left, while the right image shows the extracted spatial pyramid features. Red bars indicate detection of the wh non-manual marker; blue bars indicate that the system detects no wh non-manual marker.
The second video demonstrates tracking of eyebrow height (top right) and head pitch angle (bottom right) in an isolated utterance of a wh-question. The red graph line identifies the segment of the sequence over which the non-manual marker is being produced; lowered eyebrows correspond to a wh-question marker being present.
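A minimal sketch of how the two signals shown in the second video (eyebrow height and head pitch) could be turned into per-frame wh-marker detections is given below. The thresholds and smoothing window are illustrative assumptions; the published system instead learns this decision from spatial pyramid features.

# Sketch: flag frames where a wh-question non-manual marker is likely,
# using only the two signals visualized in the second video: eyebrow
# height (lowered brows) and head pitch (head tilted forward).
# Thresholds are illustrative, not the learned classifier from the papers.
import numpy as np

def detect_wh_marker(brow_height, head_pitch, win=15):
    """Return a boolean array with one entry per frame.

    brow_height: eyebrow-to-eye distance per frame, normalized by face size.
    head_pitch:  head pitch angle per frame, in degrees.
    """
    brow = np.asarray(brow_height, dtype=float)
    pitch = np.asarray(head_pitch, dtype=float)
    # Compare each frame against the sequence's neutral statistics.
    brow_low = brow < brow.mean() - 0.5 * brow.std()
    head_forward = pitch < pitch.mean() - 0.5 * pitch.std()
    raw = brow_low & head_forward
    # Smooth with a sliding average so short flickers are ignored.
    kernel = np.ones(win) / win
    smoothed = np.convolve(raw.astype(float), kernel, mode="same")
    return smoothed > 0.5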

Collaboration with:

National Center for Sign Language and Gesture Resources

     

Publications

  • Dimitris Metaxas, Bo Liu, Fei Yang, Peng Yang, Nicholas Michael and Carol Neidle: "Recognition of Nonmanual Markers in American Sign Language (ASL) Using Non-Parametric Adaptive 2D-3D Face Tracking". LREC 2012.
  • Christian Vogler, Dimitris Metaxas: "ASL recognition based on a coupling between HMMs and 3D motion analysis". ICCV 1998.

Group Activity Analysis

This approach models group activities based on social behavior analysis. Unlike previous work that uses independent local features, this project explores the relationships between a subject's current behavior state and its actions. The method does not depend on human detection or segmentation, so it is robust to detection errors; instead, tracked spatio-temporal interest points provide a good basis for modeling group interaction. An SVM is used to find abnormal events. Experimental results show promising performance against state-of-the-art methods.
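The sketch below gives a rough flavor of this recipe, assuming interest-point trajectories have already been tracked. The pairwise distance/velocity feature is only a simplified proxy for the interaction energy potentials of the paper, and a one-class SVM trained on normal clips stands in for the SVM stage.

# Sketch: simplified proxy for interaction-based abnormal event detection.
# Tracked interest-point trajectories are assumed to be available as
# (x, y) positions per frame; the pairwise feature below is a stand-in
# for the paper's interaction energy potentials.
import numpy as np
from sklearn.svm import OneClassSVM

def interaction_features(tracks):
    """tracks: array of shape (num_points, num_frames, 2)."""
    pos = np.asarray(tracks, dtype=float)
    vel = np.diff(pos, axis=1)                      # per-frame velocity
    feats = []
    n = pos.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(pos[i, 1:] - pos[j, 1:], axis=-1)
            # Closing speed: how fast the pair approaches or separates.
            closing = np.diff(dist, prepend=dist[0])
            rel_motion = np.linalg.norm(vel[i] - vel[j], axis=-1)
            feats.append([dist.mean(), closing.mean(), rel_motion.mean()])
    feats = np.asarray(feats)
    # Summarize the clip with statistics over all point pairs.
    return np.concatenate([feats.mean(axis=0), feats.max(axis=0)])

# Hypothetical usage: fit on features from normal clips only, then score
# new clips (more negative decision values suggest abnormal events).
# model = OneClassSVM(nu=0.1, gamma="scale").fit(X_normal)
# scores = model.decision_function(X_test)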

Publications

  • Xinyi Cui, Qingshan Liu, Mingchen Gao, Dimitris N. Metaxas: "Abnormal Detection Using Interaction Energy Potentials". CVPR 2011.