Jeffrey M Girard, Wen-Sheng Chu, László A Jeni, Jeffrey F Cohn, Fernando De la Torre, Michael A Sayette
{"title":"Sayette Group Formation Task (GFT)自发面部表情数据库。","authors":"Jeffrey M Girard, Wen-Sheng Chu, László A Jeni, Jeffrey F Cohn, Fernando De la Torre, Michael A Sayette","doi":"10.1109/FG.2017.144","DOIUrl":null,"url":null,"abstract":"<p><p>Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.</p>","PeriodicalId":87341,"journal":{"name":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. 
IEEE International Conference on Automatic Face & Gesture Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2017-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/FG.2017.144","citationCount":"54","resultStr":"{\"title\":\"Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database.\",\"authors\":\"Jeffrey M Girard, Wen-Sheng Chu, László A Jeni, Jeffrey F Cohn, Fernando De la Torre, Michael A Sayette\",\"doi\":\"10.1109/FG.2017.144\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.</p>\",\"PeriodicalId\":87341,\"journal\":{\"name\":\"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. 
IEEE International Conference on Automatic Face & Gesture Recognition\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1109/FG.2017.144\",\"citationCount\":\"54\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/FG.2017.144\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2017/6/29 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... International Conference on Automatic Face and Gesture Recognition. IEEE International Conference on Automatic Face & Gesture Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FG.2017.144","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2017/6/29 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 54
Abstract
Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database.
Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.
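The abstract's evaluation protocol (identical partitioning, metrics reported as means with confidence intervals) can be illustrated with a minimal sketch. This is not the authors' code: the per-frame binary labels, the percentile-bootstrap procedure, and all function names below are assumptions chosen for illustration of frame-level AU occurrence scoring.

```python
import random


def f1_score(y_true, y_pred):
    """F1 for binary per-frame AU occurrence labels (1 = AU present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def bootstrap_ci(y_true, y_pred, n_boot=1000, alpha=0.05, seed=0):
    """Mean F1 and a (1 - alpha) percentile-bootstrap confidence
    interval, resampling frames with replacement."""
    rng = random.Random(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(f1_score([y_true[i] for i in idx],
                               [y_pred[i] for i in idx]))
    scores.sort()
    lo = scores[int((alpha / 2) * n_boot)]
    hi = scores[int((1 - alpha / 2) * n_boot) - 1]
    return sum(scores) / n_boot, (lo, hi)
```

With identical partitioning, each baseline classifier's predictions on the shared test frames would be passed through the same routine, so the resulting means and intervals are directly comparable across methods.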