
Latest publications: IEEE International Conference on Automatic Face & Gesture Recognition and Workshops

The Proper Treatment of Linguistic Ambiguity in Ordinary Algebra
Pub Date: 2015-08-01 DOI: 10.1007/978-3-662-53042-9_18
C. Wurm, Timm Lichte
{"title":"The Proper Treatment of Linguistic Ambiguity in Ordinary Algebra","authors":"C. Wurm, Timm Lichte","doi":"10.1007/978-3-662-53042-9_18","DOIUrl":"https://doi.org/10.1007/978-3-662-53042-9_18","url":null,"abstract":"","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"134 1","pages":"306-322"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78037931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
A Single Movement Normal Form for Minimalist Grammars
Pub Date: 2015-08-01 DOI: 10.1007/978-3-662-53042-9_12
T. Graf, Alëna Aksënova, Aniello De Santo
{"title":"A Single Movement Normal Form for Minimalist Grammars","authors":"T. Graf, Alëna Aksënova, Aniello De Santo","doi":"10.1007/978-3-662-53042-9_12","DOIUrl":"https://doi.org/10.1007/978-3-662-53042-9_12","url":null,"abstract":"","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"35 1","pages":"200-215"},"PeriodicalIF":0.0,"publicationDate":"2015-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90627464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Foreword - Biometrics in the Wild 2015
B. Bhanu, A. Hadid, Q. Ji, M. Nixon, V. Štruc
The first International Workshop on Biometrics in the Wild (B-Wild 2015) was held on May 8th, 2015 in conjunction with the 11th IEEE International Conference on Automatic Face and Gesture Recognition (IEEE FG-2015) in Ljubljana, Slovenia. The goal of the workshop was to present the most advanced work related to biometric recognition in the wild and to bring recent advances from this field to the attention of the broader FG community.
{"title":"Foreword - Biometrics in the Wild 2015","authors":"B. Bhanu, A. Hadid, Q. Ji, M. Nixon, V. Štruc","doi":"10.1109/FG.2015.7284809","DOIUrl":"https://doi.org/10.1109/FG.2015.7284809","url":null,"abstract":"The first International Workshop on Biometrics in the Wild (B-Wild 2015) was held on May 8th, 2015 in conjunction with the 11th IEEE International Conference on Automatic Face and Gesture Recognition (IEEE FG-2015) in Ljubljana, Slovenia. The goal of the workshop was to present the most advanced work related to biometric recognition in the wild and to bring recent advances from this field to the attention of the broader FG community.","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"10 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2015-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89944218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
FERA 2014 chairs' welcome
M. Valstar, G. McKeown, M. Mehu, L. Yin, M. Pantic, J. Cohn
It is our great pleasure to welcome you to the 2nd Facial Expression Recognition and Analysis challenge and workshop (FERA 2015), held in conjunction with the 11th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2015). It's been four years since the first facial expression recognition challenge (FERA 2011), and we're excited to come back to challenge researchers worldwide to go ever further in the automatic recognition of facial expressions. This year's challenge and associated workshop push the boundaries of expression recognition by focusing on the estimation of FACS Facial Action Unit intensity, as well as regular frame-based occurrence detection. The challenge is set on previously unreleased data of extensive duration (over 350,000 annotated frames) of relatively naturalistic scenarios taken from the BP4D and SEMAINE databases.
{"title":"FERA 2014 chairs' welcome","authors":"M. Valstar, G. McKeown, M. Mehu, L. Yin, M. Pantic, J. Cohn","doi":"10.1109/FG.2015.7284866","DOIUrl":"https://doi.org/10.1109/FG.2015.7284866","url":null,"abstract":"It is our great pleasure to welcome you to the 2d Facial Expression Recognition and Analysis challenge and workshop (FERA 2015), held in conjunction with the 11th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2015). It's been four years since the first facial expression recognition challenge (FERA 2011), and we're excited to come back to challenge researchers worldwide to go ever further in the automatic recognition of facial expressions. This year's challenge and associated workshop pushes the boundaries of expression recognition by focusing on the estimation of FACS Facial Action Unit intensity, as well as regular frame-based occurrence detection. The challenge is set on previously unreleased data of extensive duration (over 350,000 annotated frames) of relatively naturalistic scenarios taken from the BP4D and SEMAINE databases.","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 1","pages":"iii"},"PeriodicalIF":0.0,"publicationDate":"2015-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88936141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Three Dimensional Binary Edge Feature Representation for Pain Expression Analysis.
Pub Date: 2015-05-01 Epub Date: 2015-07-23 DOI: 10.1109/fg.2015.7163107
Xing Zhang, Lijun Yin, Jeffrey F Cohn

Automatic pain expression recognition is a challenging task for pain assessment and diagnosis. Conventional 2D-based approaches to automatic pain detection lack robustness to the moderate-to-large head pose variation and changes in illumination that are common in real-world settings and, with few exceptions, omit potentially informative temporal information. In this paper, we propose an innovative 3D binary edge feature (3D-BE) to represent high-resolution 3D dynamic facial expression. To exploit temporal information, we apply a latent-dynamic conditional random field approach with the 3D-BE. The resulting pain expression detection system shows that the 3D-BE represents pain-related facial features well, and illustrates the potential of noncontact pain detection from 3D facial expression data.

Citations: 10
Dense 3D Face Alignment from 2D Videos in Real-Time.
László A Jeni, Jeffrey F Cohn, Takeo Kanade

To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.

{"title":"Dense 3D Face Alignment from 2D Videos in Real-Time.","authors":"László A Jeni,&nbsp;Jeffrey F Cohn,&nbsp;Takeo Kanade","doi":"10.1109/FG.2015.7163142","DOIUrl":"https://doi.org/10.1109/FG.2015.7163142","url":null,"abstract":"<p><p>To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.</p>","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2015.7163142","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34570073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 173
How much training data for facial action unit detection?
Jeffrey M Girard, Jeffrey F Cohn, László A Jeni, Simon Lucey, Fernando De la Torre

By systematically varying the number of subjects and the number of frames per subject, we explored the influence of training set size on appearance and shape-based approaches to facial action unit (AU) detection. Digital video and expert coding of spontaneous facial activity from 80 subjects (over 350,000 frames) were used to train and test support vector machine classifiers. Appearance features were shape-normalized SIFT descriptors and shape features were 66 facial landmarks. Ten-fold cross-validation was used in all evaluations. The number of subjects and the number of frames per subject differentially affected appearance- and shape-based classifiers. For appearance features, which are high-dimensional, increasing the number of training subjects from 8 to 64 incrementally improved performance, regardless of the number of frames taken from each subject (ranging from 450 through 3600). In contrast, for shape features, increases in the number of training subjects and frames were associated with mixed results. In summary, maximal performance was attained using appearance features from large numbers of subjects with as few as 450 frames per subject. These findings suggest that varying the number of subjects rather than the number of frames per subject yields the most efficient performance.

{"title":"How much training data for facial action unit detection?","authors":"Jeffrey M Girard,&nbsp;Jeffrey F Cohn,&nbsp;László A Jeni,&nbsp;Simon Lucey,&nbsp;Fernando De la Torre","doi":"10.1109/FG.2015.7163106","DOIUrl":"https://doi.org/10.1109/FG.2015.7163106","url":null,"abstract":"<p><p>By systematically varying the number of subjects and the number of frames per subject, we explored the influence of training set size on appearance and shape-based approaches to facial action unit (AU) detection. Digital video and expert coding of spontaneous facial activity from 80 subjects (over 350,000 frames) were used to train and test support vector machine classifiers. Appearance features were shape-normalized SIFT descriptors and shape features were 66 facial landmarks. Ten-fold cross-validation was used in all evaluations. Number of subjects and number of frames per subject differentially affected appearance and shape-based classifiers. For appearance features, which are high-dimensional, increasing the number of training subjects from 8 to 64 incrementally improved performance, regardless of the number of frames taken from each subject (ranging from 450 through 3600). In contrast, for shape features, increases in the number of training subjects and frames were associated with mixed results. In summary, maximal performance was attained using appearance features from large numbers of subjects with as few as 450 frames per subject. These findings suggest that variation in the number of subjects rather than number of frames per subject yields most efficient performance.</p>","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2015.7163106","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34558406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
IntraFace.
Fernando De la Torre, Wen-Sheng Chu, Xuehan Xiong, Francisco Vicente, Xiaoyu Ding, Jeffrey Cohn

Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly-available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases (FERA, CK+ and RU-FACS); measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.

{"title":"IntraFace.","authors":"Fernando De la Torre, Wen-Sheng Chu, Xuehan Xiong, Francisco Vicente, Xiaoyu Ding, Jeffrey Cohn","doi":"10.1109/FG.2015.7163082","DOIUrl":"10.1109/FG.2015.7163082","url":null,"abstract":"<p><p>Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous-driving, surveillance, and facial editing among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly-available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IFincludes a newly develop technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.</p>","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4918819/pdf/nihms-751967.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34612877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-Cultural Detection of Depression from Nonverbal Behaviour.
Sharifa Alghowinem, Roland Goecke, Jeffrey F Cohn, Michael Wagner, Gordon Parker, Michael Breakspear

Millions of people worldwide suffer from depression. Do commonalities exist in their nonverbal behavior that would enable cross-culturally viable screening and assessment of severity? We investigated the generalisability of an approach to detect depression severity cross-culturally using video-recorded clinical interviews from Australia, the USA and Germany. The material varied in type of interview, subtype of depression, inclusion of healthy control subjects, cultural background, and recording environment. The analysis focussed on temporal features of participants' eye gaze and head pose. Several approaches to training and testing within and between datasets were evaluated. The strongest results were found for training across all datasets and testing across datasets using leave-one-subject-out cross-validation. In contrast, generalisability was attenuated when training on only one or two of the three datasets and testing on subjects from the dataset(s) not used in training. These findings highlight the importance of using training data exhibiting the expected range of variability.

{"title":"Cross-Cultural Detection of Depression from Nonverbal Behaviour.","authors":"Sharifa Alghowinem,&nbsp;Roland Goecke,&nbsp;Jeffrey F Cohn,&nbsp;Michael Wagner,&nbsp;Gordon Parker,&nbsp;Michael Breakspear","doi":"10.1109/FG.2015.7163113","DOIUrl":"https://doi.org/10.1109/FG.2015.7163113","url":null,"abstract":"<p><p>Millions of people worldwide suffer from depression. Do commonalities exist in their nonverbal behavior that would enable cross-culturally viable screening and assessment of severity? We investigated the generalisability of an approach to detect depression severity cross-culturally using video-recorded clinical interviews from Australia, the USA and Germany. The material varied in type of interview, subtypes of depression and inclusion healthy control subjects, cultural background, and recording environment. The analysis focussed on temporal features of participants' eye gaze and head pose. Several approaches to training and testing within and between datasets were evaluated. The strongest results were found for training across all datasets and testing across datasets using leave-one-subject-out cross-validation. In contrast, generalisability was attenuated when training on only one or two of the three datasets and testing on subjects from the dataset(s) not used in training. These findings highlight the importance of using training data exhibiting the expected range of variability.</p>","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"1 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2015-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2015.7163113","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34699799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 65
The Conjoinability Relation in Discontinuous Lambek Calculus
Pub Date: 2014-08-16 DOI: 10.1007/978-3-662-44121-3_11
A. Sorokin
{"title":"The Conjoinability Relation in Discontinuous Lambek Calculus","authors":"A. Sorokin","doi":"10.1007/978-3-662-44121-3_11","DOIUrl":"https://doi.org/10.1007/978-3-662-44121-3_11","url":null,"abstract":"","PeriodicalId":91494,"journal":{"name":"IEEE International Conference on Automatic Face & Gesture Recognition and Workshops","volume":"23 1","pages":"171-184"},"PeriodicalIF":0.0,"publicationDate":"2014-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81488662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0