
Latest Publications: Proceedings of the ... International Conference on Automatic Face and Gesture Recognition (IEEE International Conference on Automatic Face & Gesture Recognition)

Goals, Tasks, and Bonds: Toward the Computational Assessment of Therapist Versus Client Perception of Working Alliance.
Alexandria K Vail, Jeffrey Girard, Lauren Bylsma, Jeffrey Cohn, Jay Fournier, Holly Swartz, Louis-Philippe Morency

Early client dropout is one of the most significant challenges facing psychotherapy: recent studies suggest that at least one in five clients will leave treatment prematurely. Clients may terminate therapy for various reasons, but one of the most common causes is the lack of a strong working alliance. The concept of working alliance captures the collaborative relationship between a client and their therapist when working toward the progress and recovery of the client seeking treatment. Unfortunately, clients are often unwilling to directly express dissatisfaction with care until they have already decided to terminate therapy. On the other side, therapists may miss subtle signs of client discontent during treatment until it is too late. In this work, we demonstrate that nonverbal behavior analysis may aid in bridging this gap. The present study focuses primarily on the head gestures of both the client and therapist, contextualized within conversational turn-taking actions between the pair during psychotherapy sessions. We identify multiple behavior patterns suggestive of an individual's perspective on the working alliance; interestingly, these patterns also differ between the client and the therapist. These patterns inform the development of predictive models for self-reported ratings of working alliance, which demonstrate significant predictive power for both client and therapist ratings. Future applications of such models may stimulate preemptive intervention to strengthen a weak working alliance, whether explicitly attempting to repair the existing alliance or establishing a more suitable client-therapist pairing, to ensure that clients encounter fewer barriers to receiving the treatment they need.
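The modeling setup described here lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation: hypothetical per-session head-gesture features are regressed onto self-reported alliance ratings, with separate models for client and therapist since the abstract reports that the informative patterns differ between the two. All feature names and data are placeholders.

```python
# Hedged sketch of the prediction task: head-gesture features -> alliance rating.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sessions = 40
# Hypothetical per-session features: [nod rate while listening,
# shake rate while speaking, mean gesture duration, turn-taking latency]
X = rng.normal(size=(n_sessions, 4))
y_client = rng.uniform(1, 7, size=n_sessions)     # placeholder alliance ratings
y_therapist = rng.uniform(1, 7, size=n_sessions)

# Separate models per role, mirroring the client/therapist distinction above.
for role, y in [("client", y_client), ("therapist", y_therapist)]:
    scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5,
                             scoring="neg_mean_absolute_error")
    print(role, "MAE:", -scores.mean())
```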

{"title":"Goals, Tasks, and Bonds: Toward the Computational Assessment of Therapist Versus Client Perception of Working Alliance.","authors":"Alexandria K Vail, Jeffrey Girard, Lauren Bylsma, Jeffrey Cohn, Jay Fournier, Holly Swartz, Louis-Philippe Morency","doi":"10.1109/fg52635.2021.9667021","DOIUrl":"10.1109/fg52635.2021.9667021","url":null,"abstract":"<p><p>Early client dropout is one of the most significant challenges facing psychotherapy: recent studies suggest that at least one in five clients will leave treatment prematurely. Clients may terminate therapy for various reasons, but one of the most common causes is the lack of a strong <i>working alliance</i>. The concept of working alliance captures the collaborative relationship between a client and their therapist when working toward the progress and recovery of the client seeking treatment. Unfortunately, clients are often unwilling to directly express dissatisfaction in care until they have already decided to terminate therapy. On the other side, therapists may miss subtle signs of client discontent during treatment before it is too late. In this work, we demonstrate that nonverbal behavior analysis may aid in bridging this gap. The present study focuses primarily on the head gestures of both the client and therapist, contextualized within conversational turn-taking actions between the pair during psychotherapy sessions. We identify multiple behavior patterns suggestive of an individual's perspective on the working alliance; interestingly, these patterns also differ between the client and the therapist. These patterns inform the development of predictive models for self-reported ratings of working alliance, which demonstrate significant predictive power for both client and therapist ratings. Future applications of such models may stimulate preemptive intervention to strengthen a weak working alliance, whether explicitly attempting to repair the existing alliance or establishing a more suitable client-therapist pairing, to ensure that clients encounter fewer barriers to receiving the treatment they need.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9355426/pdf/nihms-1771359.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40700885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Simple and Effective Approaches for Uncertainty Prediction in Facial Action Unit Intensity Regression.
Torsten Wörtwein, Louis-Philippe Morency

Knowing how much to trust a prediction is important for many critical applications. We describe two simple approaches to estimate uncertainty in regression prediction tasks and compare their performance and complexity against popular approaches. We operationalize uncertainty in regression as the absolute error between a model's prediction and the ground truth. Our two proposed approaches use a secondary model to predict the uncertainty of a primary predictive model. Our first approach leverages the assumption that similar observations are likely to have similar uncertainty and predicts uncertainty with a non-parametric method. Our second approach trains a secondary model to directly predict the uncertainty of the primary predictive model. Both approaches outperform other established uncertainty estimation approaches on the MNIST, DISFA, and BP4D+ datasets. Furthermore, we observe that approaches that directly predict the uncertainty generally perform better than approaches that indirectly estimate uncertainty.
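Both proposed approaches are simple enough to sketch. The following is a minimal illustration under stated assumptions (scikit-learn models and synthetic data; the paper's features and architectures are not reproduced): a calibration split supplies out-of-sample absolute errors of the primary model, which are then either averaged over nearest neighbors (the first, non-parametric approach) or predicted directly by a secondary model (the second approach).

```python
# Hedged sketch of the two uncertainty estimators described above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 8))
y = X[:, 0] ** 2 + 0.3 * rng.normal(size=900)

X_fit, y_fit = X[:400], y[:400]        # train the primary model
X_cal, y_cal = X[400:800], y[400:800]  # measure its out-of-sample errors
X_test = X[800:]

primary = RandomForestRegressor(random_state=0).fit(X_fit, y_fit)
# Uncertainty is operationalized as the primary model's absolute error.
abs_err = np.abs(primary.predict(X_cal) - y_cal)

# Approach 1 (non-parametric): similar observations are assumed to carry
# similar uncertainty, so average the errors of the nearest calibration points.
nn = NearestNeighbors(n_neighbors=10).fit(X_cal)
_, idx = nn.kneighbors(X_test)
uncertainty_knn = abs_err[idx].mean(axis=1)

# Approach 2 (direct): a secondary model learns to predict the primary
# model's absolute error from the same inputs.
secondary = RandomForestRegressor(random_state=0).fit(X_cal, abs_err)
uncertainty_direct = secondary.predict(X_test)
print(uncertainty_knn[:3], uncertainty_direct[:3])
```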

{"title":"Simple and Effective Approaches for Uncertainty Prediction in Facial Action Unit Intensity Regression.","authors":"Torsten Wörtwein,&nbsp;Louis-Philippe Morency","doi":"10.1109/fg47880.2020.00045","DOIUrl":"https://doi.org/10.1109/fg47880.2020.00045","url":null,"abstract":"<p><p>Knowing how much to trust a prediction is important for many critical applications. We describe two simple approaches to estimate uncertainty in regression prediction tasks and compare their performance and complexity against popular approaches. We operationalize uncertainty in regression as the absolute error between a model's prediction and the ground truth. Our two proposed approaches use a secondary model to predict the uncertainty of a primary predictive model. Our first approach leverages the assumption that similar observations are likely to have similar uncertainty and predicts uncertainty with a non-parametric method. Our second approach trains a secondary model to directly predict the uncertainty of the primary predictive model. Both approaches outperform other established uncertainty estimation approaches on the MNIST, DISFA, and BP4D+ datasets. Furthermore, we observe that approaches that directly predict the uncertainty generally perform better than approaches that indirectly estimate uncertainty.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/fg47880.2020.00045","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25453101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Nonverbal Behavioral Patterns Predict Social Rejection Elicited Aggression.
Megan Quarmley, Zhibo Yang, Shahrukh Athar, Gregory Zelinsky, Dimitris Samaras, Johanna M Jarcho

Peer-based aggression following social rejection is a costly and prevalent problem for which existing treatments have had little success. This may be because aggression is a complex process influenced by current states of attention and arousal, which are difficult to measure on a moment-to-moment basis via self-report. It is therefore crucial to identify nonverbal behavioral indices of attention and arousal that predict subsequent aggression. We used Support Vector Machines (SVMs) with eye gaze duration and pupillary response features, measured during positive and negative peer-based social interactions, to predict subsequent aggressive behavior toward those same peers. We found that eye gaze and pupillary reactivity not only predicted aggressive behavior but performed better than models that included information about the participant's exposure to harsh parenting or trait aggression. Eye gaze and pupillary reactivity models also performed as well as those that included information about peer reputation (e.g., whether the peer was rejecting or accepting). This is the first study to decode nonverbal eye behavior during social interaction to predict social rejection-elicited aggression.
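A minimal sketch of this kind of classifier follows, assuming scikit-learn and synthetic stand-ins for the gaze and pupil features; the study's actual features, labels, and protocol are not reproduced.

```python
# Hedged sketch: SVM over gaze/pupil features predicting later aggression.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120
# Hypothetical columns: gaze duration on accepting peer, gaze duration on
# rejecting peer, mean pupil diameter, pupil dilation velocity
X = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)  # 1 = subsequently aggressed toward the peer

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```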

{"title":"Nonverbal Behavioral Patterns Predict Social Rejection Elicited Aggression.","authors":"Megan Quarmley,&nbsp;Zhibo Yang,&nbsp;Shahrukh Athar,&nbsp;Gregory Zelinksy,&nbsp;Dimitris Samaras,&nbsp;Johanna M Jarcho","doi":"10.1109/fg47880.2020.00111","DOIUrl":"https://doi.org/10.1109/fg47880.2020.00111","url":null,"abstract":"<p><p>Peer-based aggression following social rejection is a costly and prevalent problem for which existing treatments have had little success. This may be because aggression is a complex process influenced by current states of attention and arousal, which are difficult to measure on a moment to moment basis via self report. It is therefore crucial to identify nonverbal behavioral indices of attention and arousal that predict subsequent aggression. We used Support Vector Machines (SVMs) and eye gaze duration and pupillary response features, measured during positive and negative peer-based social interactions, to predict subsequent aggressive behavior towards those same peers. We found that eye gaze and pupillary reactivity not only predicted aggressive behavior, but performed better than models that included information about the participant's exposure to harsh parenting or trait aggression. Eye gaze and pupillary reactivity models also performed equally as well as those that included information about peer reputation (e.g. whether the peer was rejecting or accepting). This is the first study to decode nonverbal eye behavior during social interaction to predict social rejection-elicited aggression.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/fg47880.2020.00111","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39774870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detecting Depression Severity by Interpretable Representations of Motion Dynamics.
Anis Kacem, Zakia Hammal, Mohamed Daoudi, Jeffrey Cohn

Recent breakthroughs in deep learning using automated measurement of face and head motion have made possible the first objective measurement of depression severity. While powerful, deep learning approaches lack interpretability. We developed an interpretable method of automatically measuring depression severity that uses barycentric coordinates of facial landmarks and a Lie-algebra-based rotation matrix of 3D head motion. Using these representations, kinematic features are extracted, preprocessed, and encoded using Gaussian Mixture Models (GMM) and Fisher vector encoding. A multi-class SVM is used to classify the encoded facial and head movement dynamics into three levels of depression severity. The proposed approach was evaluated in adults with a history of chronic depression. The method approached the classification accuracy of state-of-the-art deep learning while enabling clinically and theoretically relevant findings. The velocity and acceleration of facial movement mapped strongly onto depression severity symptoms, consistent with clinical data and theory.
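The encoding pipeline named here (GMM, Fisher vectors, multi-class SVM) can be sketched compactly. The sketch below keeps only the mean-gradient part of the Fisher vector for brevity and uses synthetic placeholders for the kinematic descriptors and severity labels; it is not the authors' implementation.

```python
# Hedged sketch: GMM + (mean-gradient) Fisher vectors + multi-class SVM.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
K, D = 4, 6  # GMM components, descriptor dimension

def fisher_vector(descs, gmm):
    """Mean-gradient Fisher vector of a set of per-frame descriptors."""
    gamma = gmm.predict_proba(descs)  # (T, K) soft assignments
    T = descs.shape[0]
    fv = []
    for k in range(K):
        diff = (descs - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
        fv.append((gamma[:, [k]] * diff).sum(axis=0) /
                  (T * np.sqrt(gmm.weights_[k])))
    return np.concatenate(fv)  # shape (K * D,)

# Hypothetical kinematic descriptors (e.g., landmark velocities) per session.
sessions = [rng.normal(size=(rng.integers(80, 120), D)) for _ in range(30)]
labels = rng.integers(0, 3, size=30)  # three depression-severity levels

gmm = GaussianMixture(n_components=K, covariance_type="diag",
                      random_state=0).fit(np.vstack(sessions))
X = np.array([fisher_vector(s, gmm) for s in sessions])
clf = SVC(kernel="linear", decision_function_shape="ovr").fit(X, labels)
print(clf.predict(X[:5]))
```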

{"title":"Detecting Depression Severity by Interpretable Representations of Motion Dynamics.","authors":"Anis Kacem, Zakia Hammal, Mohamed Daoudi, Jeffrey Cohn","doi":"10.1109/FG.2018.00116","DOIUrl":"10.1109/FG.2018.00116","url":null,"abstract":"<p><p>Recent breakthroughs in deep learning using automated measurement of face and head motion have made possible the first objective measurement of depression severity. While powerful, deep learning approaches lack interpretability. We developed an interpretable method of automatically measuring depression severity that uses barycentric coordinates of facial landmarks and a Lie-algebra based rotation matrix of 3D head motion. Using these representations, kinematic features are extracted, preprocessed, and encoded using Gaussian Mixture Models (GMM) and Fisher vector encoding. A multi-class SVM is used to classify the encoded facial and head movement dynamics into three levels of depression severity. The proposed approach was evaluated in adults with history of chronic depression. The method approached the classification accuracy of state-of-the-art deep learning while enabling clinically and theoretically relevant findings. The velocity and acceleration of facial movement strongly mapped onto depression severity symptoms consistent with clinical data and theory.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6157749/pdf/nihms950419.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36538326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey
Maryam Asadi-Aghbolaghi, Albert Clapés, M. Bellantonio, H. Escalante, V. Ponce-López, Xavier Baró, Isabelle M Guyon, S. Kasaei, Sergio Escalera
{"title":"Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey","authors":"Maryam Asadi-Aghbolaghi, Albert Clapés, M. Bellantonio, H. Escalante, V. Ponce-López, Xavier Baró, Isabelle M Guyon, S. Kasaei, Sergio Escalera","doi":"10.1007/978-3-319-57021-1_19","DOIUrl":"https://doi.org/10.1007/978-3-319-57021-1_19","url":null,"abstract":"","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80009798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 48
FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge.
Michel F Valstar, Enrique Sánchez-Lozano, Jeffrey F Cohn, László A Jeni, Jeffrey M Girard, Zheng Zhang, Lijun Yin, Maja Pantic

The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle for assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Units occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE conference on Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges.
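The two sub-challenges map onto two standard scoring metrics: the FERA series has typically evaluated AU occurrence with F1 and AU intensity with intraclass correlation (ICC); the sketch below computes both under that assumption, on placeholder arrays rather than FERA 2017 data (the exact protocol is specified in the paper).

```python
# Hedged sketch of the typical sub-challenge metrics: F1 and ICC(3,1).
import numpy as np
from sklearn.metrics import f1_score

occ_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])  # placeholder AU occurrence
occ_pred = np.array([0, 1, 0, 0, 1, 1, 1, 1])
print("Occurrence F1:", f1_score(occ_true, occ_pred))

def icc_3_1(x, y):
    """Two-way mixed, single-measure consistency ICC(3,1) for two raters."""
    data = np.stack([x, y], axis=1).astype(float)
    n, k = data.shape
    mean_rows = data.mean(axis=1)
    mean_cols = data.mean(axis=0)
    grand = data.mean()
    ms_rows = k * ((mean_rows - grand) ** 2).sum() / (n - 1)
    ms_err = (((data - mean_rows[:, None] - mean_cols[None, :] + grand) ** 2)
              .sum() / ((n - 1) * (k - 1)))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

int_true = np.array([0, 1, 3, 2, 4, 5, 1, 0])  # placeholder AU intensities
int_pred = np.array([0, 2, 3, 2, 3, 5, 1, 1])
print("Intensity ICC(3,1):", icc_3_1(int_true, int_pred))
```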

{"title":"FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge.","authors":"Michel F Valstar, Enrique Sánchez-Lozano, Jeffrey F Cohn, László A Jeni, Jeffrey M Girard, Zheng Zhang, Lijun Yin, Maja Pantic","doi":"10.1109/FG.2017.107","DOIUrl":"10.1109/FG.2017.107","url":null,"abstract":"<p><p>The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle for assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Units occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE conference on Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2017.107","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35967120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 123
Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database.
Jeffrey M Girard, Wen-Sheng Chu, László A Jeni, Jeffrey F Cohn, Fernando De la Torre, Michael A Sayette

Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.
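As a small illustration of the evaluation summary mentioned above (baseline scores on identical partitions summarized as means with confidence intervals), here is a hedged sketch; the per-fold scores are invented placeholders, not GFT results.

```python
# Hedged sketch: summarize per-partition scores as mean with a 95% CI.
import numpy as np

def mean_ci(scores, z=1.96):
    scores = np.asarray(scores, dtype=float)
    m = scores.mean()
    half = z * scores.std(ddof=1) / np.sqrt(len(scores))
    return m, (m - half, m + half)

# Placeholder per-partition F1 scores for two baselines.
fold_f1 = {"linear SVM": [0.61, 0.58, 0.63], "deep learning": [0.70, 0.68, 0.72]}
for name, scores in fold_f1.items():
    m, ci = mean_ci(scores)
    print(f"{name}: mean={m:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
```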

{"title":"Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database.","authors":"Jeffrey M Girard, Wen-Sheng Chu, László A Jeni, Jeffrey F Cohn, Fernando De la Torre, Michael A Sayette","doi":"10.1109/FG.2017.144","DOIUrl":"10.1109/FG.2017.144","url":null,"abstract":"<p><p>Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2017.144","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35966631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 54
Challenges in Multi-modal Gesture Recognition
Sergio Escalera, V. Athitsos, Isabelle M Guyon
{"title":"Challenges in Multi-modal Gesture Recognition","authors":"Sergio Escalera, V. Athitsos, Isabelle M Guyon","doi":"10.1007/978-3-319-57021-1_1","DOIUrl":"https://doi.org/10.1007/978-3-319-57021-1_1","url":null,"abstract":"","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88215947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 75
Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis.
Jeffrey M Girard, Jeffrey F Cohn, Mohammad H Mahoor, Seyedmohammad Mavadati, Dean P Rosenwald

Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the "social risk hypothesis" of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science.

{"title":"Social Risk and Depression: Evidence from Manual and Automatic Facial Expression Analysis.","authors":"Jeffrey M Girard, Jeffrey F Cohn, Mohammad H Mahoor, Seyedmohammad Mavadati, Dean P Rosenwald","doi":"10.1109/FG.2013.6553748","DOIUrl":"10.1109/FG.2013.6553748","url":null,"abstract":"<p><p>Investigated the relationship between change over time in severity of depression symptoms and facial expression. Depressed participants were followed over the course of treatment and video recorded during a series of clinical interviews. Facial expressions were analyzed from the video using both manual and automatic systems. Automatic and manual coding were highly consistent for FACS action units, and showed similar effects for change over time in depression severity. For both systems, when symptom severity was high, participants made more facial expressions associated with contempt, smiled less, and those smiles that occurred were more likely to be accompanied by facial actions associated with contempt. These results are consistent with the \"social risk hypothesis\" of depression. According to this hypothesis, when symptoms are severe, depressed participants withdraw from other people in order to protect themselves from anticipated rejection, scorn, and social exclusion. As their symptoms fade, participants send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and produced the same pattern of depression effects suggests that automatic facial expression analysis may be ready for use in behavioral and clinical science.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3935843/pdf/nihms555449.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40286185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-time Avatar Animation from a Single Image.
Jason M Saragih, Simon Lucey, Jeffrey F Cohn

A real-time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames per second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person-specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.
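Expression transfer of this kind is often realized with a blendshape-style model; the following is a minimal sketch under that assumption, not the paper's generic-plus-synthetic expression model. All meshes, basis shapes, and weights are random placeholders.

```python
# Hedged sketch: blendshape-style expression transfer onto an avatar mesh.
import numpy as np

rng = np.random.default_rng(0)
n_verts, n_shapes = 500, 10

# Hypothetical avatar rig: neutral mesh plus an expression basis, here
# standing in for a rig built from a single avatar image.
avatar_neutral = rng.normal(size=(n_verts, 3))
avatar_basis = rng.normal(scale=0.01, size=(n_shapes, n_verts, 3))

def animate(neutral, basis, weights):
    """Deform the avatar with expression weights tracked from the user."""
    offsets = np.tensordot(weights, basis, axes=1)  # (n_verts, 3)
    return neutral + offsets

# Per-frame expression weights from the real-time 3D tracker (placeholder).
user_weights = np.clip(rng.normal(0.2, 0.1, size=n_shapes), 0, 1)
frame_mesh = animate(avatar_neutral, avatar_basis, user_weights)
print(frame_mesh.shape)
```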

{"title":"Real-time Avatar Animation from a Single Image.","authors":"Jason M Saragih, Simon Lucey, Jeffrey F Cohn","doi":"10.1109/FG.2011.5771383","DOIUrl":"10.1109/FG.2011.5771383","url":null,"abstract":"<p><p>A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3935737/pdf/nihms-554963.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40285898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0