Andrew H. Ryan, J. F. Cohn, S. Lucey, Jason M. Saragih, P. Lucey, F. de la Torre, Adam Rossi
Automated Facial Expression Recognition System
DOI: 10.1109/CCST.2009.5335546
Published in: 43rd Annual 2009 International Carnahan Conference on Security Technology (2009-11-13)
Citations: 111
Abstract
Heightened concerns about the treatment of individuals during interviews and interrogations have stimulated efforts to develop “non-intrusive” technologies for rapidly assessing the credibility of statements by individuals in a variety of sensitive environments. Methods or processes that have the potential to precisely focus investigative resources will advance operational excellence and improve investigative capabilities. Facial expressions have the ability to communicate emotion and regulate interpersonal behavior. Over the past 30 years, scientists have developed human-observer based methods that can be used to classify and correlate facial expressions with human emotion. However, these methods have proven to be labor intensive, qualitative, and difficult to standardize. The Facial Action Coding System (FACS) developed by Paul Ekman and Wallace V. Friesen is the most widely used and validated method for measuring and describing facial behaviors. The Automated Facial Expression Recognition System (AFERS) automates the manual practice of FACS, leveraging the research and technology behind the CMU/PITT Automated Facial Image Analysis System (AFA) system developed by Dr. Jeffery Cohn and his colleagues at the Robotics Institute of Carnegie Mellon University. This portable, near real-time system will detect the seven universal expressions of emotion (figure 1), providing investigators with indicators of the presence of deception during the interview process. In addition, the system will include features such as full video support, snapshot generation, and case management utilities, enabling users to re-evaluate interviews in detail at a later date.
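To make the FACS-to-emotion step concrete, the sketch below maps detected Action Units (AUs) to the seven universal expressions mentioned in the abstract. The AU prototype sets follow commonly cited EMFACS-style combinations and are an illustrative assumption, not the actual AFERS implementation; the matching rule (Jaccard similarity) is likewise chosen only for demonstration.

```python
# Illustrative sketch: mapping FACS Action Units (AUs) to the seven
# universal expressions. Prototype AU sets are assumed EMFACS-style
# combinations, not the AFERS system's actual rules.

PROTOTYPES = {
    "happiness": {6, 12},                    # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},                 # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},              # brow raisers, upper lid raiser, jaw drop
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},                    # nose wrinkler, lip corner depressor
    "contempt":  {14},                       # dimpler (unilateral in practice)
}

def classify_expression(active_aus):
    """Return the prototype expression whose AU set best overlaps the
    detected AUs (Jaccard similarity); 'neutral' if nothing matches."""
    active = set(active_aus)
    best, best_score = "neutral", 0.0
    for emotion, proto in PROTOTYPES.items():
        union = active | proto
        score = len(active & proto) / len(union) if union else 0.0
        if score > best_score:
            best, best_score = emotion, score
    return best
```

A usage example: `classify_expression([6, 12])` returns `"happiness"`, while an empty AU list returns `"neutral"`. A real system such as AFERS would first detect AUs from video frames (the hard part, handled by the AFA tracking and analysis pipeline) before any such symbolic mapping applies.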