Emotional Analysis of Candidates During Online Interviews

Alperen Sayar, Tuna Çakar, Tunahan Bozkan, Seyit Ertugrul, Mert Güvençli
{"title":"Emotional Analysis of Candidates During Online Interviews","authors":"Alperen Sayar, Tuna Çakar, Tunahan Bozkan, Seyit Ertugrul, Mert Güvençli","doi":"10.54941/ahfe1003278","DOIUrl":null,"url":null,"abstract":"The recent empirical findings from the related fields including psychology, behavioral sciences, and neuroscience indicate that both emotion and cognition are influential during the decision making processes and so on the final behavioral outcome. On the other hand, emotions are mostly reflected by facial expressions that could be accepted as a vital means of communication and critical for social cognition. This has been known as the facial activation coding in the related academic literature. There have been several different AI-based systems that produce analysis of facial expressions with respect to 7 basic emotions including happy, sad, angry, disgust, fear, surprise, and neutral through the photos captured by camera-based systems. The system we have designed is composed of the following stages: (1) face verification, (2) facial emotion analysis and reporting, (3) emotion recognition from speech. The users upload their online video in which the participants tell about themselves within 3 minutes duration. In this study, several classification methods were applied for model development processes, and the candidates' emotional analysis in online interviews was focused on, and inferences about the situation were attempted using the related face images and sounds. In terms of the face verification system obtained as a result of the model used, 98% success was achieved. The main target of this paper is related to the analysis of facial expressions. The distances between facial landmarks are made up of the starting and ending points of these points. 'Face frames' were obtained while the study was being conducted by extracting human faces from the video using the VideoCapture and Haar Cascade functions in the OpenCV library in the Python programming language with the image taken in the recorded video. The videos consist of 24 frames for 1000 milliseconds. During the whole video, the participant's emotion analysis with respect to facial expressions is provided for the durations of 500 milliseconds. Since there are more than one face in the video, face verification was done with the help of different algorithms: VGG-Face, Facenet, OpenFace, DeepFace, DeepID, Dlib and ArcFace. Emotion analysis via facial landmarks was performed on all photographs of the participant during the interview. DeepFace algorithm was used to analyze face frames through study that recognizes faces using convolutional neural networks, then analyzes age, gender, race, and emotions. The study classified emotions as basic emotions. Emotion analysis was performed on all of the photographs obtained as a result of the verification, and the average mood analysis was carried out throughout the interview, and the data with the highest values ​​on the basis of emotion were also recorded and the probability values have been extracted for further analyses. Besides the local analyses, there have also been global outputs with respect to the whole video session. 
The main target has been to introduce different potential features to the feature matrix that could be correlated with the other variables and labels tagged by the HR expert.","PeriodicalId":405313,"journal":{"name":"Artificial Intelligence and Social Computing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Artificial Intelligence and Social Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1003278","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Recent empirical findings from related fields, including psychology, the behavioral sciences, and neuroscience, indicate that both emotion and cognition influence decision-making processes and thus the final behavioral outcome. Emotions are largely conveyed by facial expressions, which serve as a vital means of communication and are critical for social cognition; in the academic literature this is known as facial action coding. Several AI-based systems analyze facial expressions with respect to seven basic emotions (happy, sad, angry, disgust, fear, surprise, and neutral) from photos captured by camera-based systems. The system we have designed consists of the following stages: (1) face verification, (2) facial emotion analysis and reporting, and (3) emotion recognition from speech. Candidates upload an online video in which they introduce themselves within a three-minute duration. In this study, several classification methods were applied during model development, the focus being the emotional analysis of candidates in online interviews, with inferences about the situation drawn from the associated face images and audio. The face verification system achieved 98% accuracy. The main target of this paper is the analysis of facial expressions. Distances between facial landmarks are computed between pairs of landmark points, taken as the start and end points of each distance. 'Face frames' were obtained by extracting human faces from the recorded video using the VideoCapture and Haar cascade functions of the OpenCV library in Python. The videos run at 24 frames per 1000 milliseconds, and the participant's facial-expression emotion analysis is produced for every 500-millisecond window across the whole video. Since more than one face can appear in a video, face verification was performed with the help of several algorithms: VGG-Face, Facenet, OpenFace, DeepFace, DeepID, Dlib, and ArcFace. Emotion analysis via facial landmarks was performed on all photographs of the participant taken during the interview. The DeepFace algorithm, which recognizes faces using convolutional neural networks, was used to analyze the face frames for age, gender, race, and emotion, classifying emotions into the basic-emotion categories above. Emotion analysis was performed on all photographs retained after verification; an average mood profile was computed over the whole interview, the highest per-emotion values were recorded, and the probability values were extracted for further analysis. Besides these local analyses, global outputs were produced with respect to the whole video session. The main target has been to introduce different potential features into the feature matrix that could be correlated with the other variables and with the labels tagged by the HR expert.
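
A minimal sketch of the frame-extraction step described in the abstract, assuming OpenCV's VideoCapture and a standard Haar cascade, a 24 fps recording, and one sampled frame per 500 ms; the function and variable names are illustrative, not the authors' code:

```python
# Sketch: extract face crops ("face frames") from a recorded interview video
# with OpenCV's VideoCapture and a Haar cascade, sampling one frame every
# 500 ms (12 frames at the stated 24 fps). Paths and names are illustrative.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def extract_face_frames(video_path, interval_ms=500):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0  # fall back to the stated 24 fps
    step = max(1, round(fps * interval_ms / 1000.0))
    faces, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
                faces.append(frame[y:y + h, x:x + w])
        index += 1
    cap.release()
    return faces
```

Sampling by frame index rather than wall-clock time keeps the 500 ms cadence aligned with the recording's frame rate even when the container reports a slightly different fps.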
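The list of verification backends (VGG-Face, Facenet, OpenFace, DeepFace, DeepID, Dlib, ArcFace) matches the models wrapped by the open-source `deepface` Python package, so a plausible, though unconfirmed, reading is that verification was run through it. A hedged sketch, with the reference image and model choice as assumptions:

```python
# Sketch: keep only the frames that match the candidate's reference photo,
# using the open-source deepface package (which wraps VGG-Face, Facenet,
# OpenFace, DeepFace, DeepID, Dlib and ArcFace). The reference image and
# the model choice are illustrative assumptions, not the paper's setup.
from deepface import DeepFace

def verified_frames(reference_img, candidate_frames, model_name="ArcFace"):
    kept = []
    for frame in candidate_frames:
        try:
            result = DeepFace.verify(
                img1_path=reference_img,
                img2_path=frame,  # deepface accepts paths or numpy arrays
                model_name=model_name,
                enforce_detection=False,
            )
            if result["verified"]:
                kept.append(frame)
        except ValueError:
            continue  # no detectable face in this frame; skip it
    return kept
```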
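The per-frame emotion probabilities, the session-wide average, and the per-emotion peaks described in the abstract could be aggregated roughly as below; the `DeepFace.analyze` call follows the package's documented API, while the aggregation logic is our reconstruction, not the authors' published method:

```python
# Sketch: per-frame emotion probabilities via DeepFace.analyze, then a
# session-level average and per-emotion peak values. The aggregation is
# an assumed reconstruction of the reporting described in the abstract.
from collections import defaultdict
from deepface import DeepFace

def emotion_profile(frames):
    totals, peaks, n = defaultdict(float), defaultdict(float), 0
    for frame in frames:
        results = DeepFace.analyze(
            img_path=frame, actions=["emotion"], enforce_detection=False
        )
        # Newer deepface versions return a list of dicts; older ones a dict.
        first = results[0] if isinstance(results, list) else results
        scores = first["emotion"]  # e.g. {'happy': 93.1, 'sad': 0.4, ...}
        n += 1
        for emotion, p in scores.items():
            totals[emotion] += p
            peaks[emotion] = max(peaks[emotion], p)
    average = {e: totals[e] / n for e in totals} if n else {}
    return average, dict(peaks)
```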