Understanding Interviewees' Perceptions and Behaviour towards Verbally and Non-verbally Expressive Virtual Interviewing Agents

Jinal D. Thakkar, Pooja S B. Rao, Kumar Shubham, Vaibhav Jain, D. Jayagopi

Companion Publication of the 2022 International Conference on Multimodal Interaction
DOI: 10.1145/3536220.3558802 · Published: 2022-11-07
Citations: 1
Abstract
Recent technological advancements have boosted the use of virtual interviewing platforms, in which candidates interact with a virtual interviewing agent or avatar that exhibits human-like behaviour instead of attending face-to-face interviews. It is therefore essential to understand how candidates perceive these virtual interviewing avatars and whether adding features that enhance the system's interactivity makes a difference. In this work, we present the results of two studies in which a virtual interviewing avatar with verbal and non-verbal interaction capabilities was used to conduct employment interviews. We add two interactive capabilities to the avatar, namely non-verbal gestures and verbal follow-up questioning, and compare it with a simple interviewing avatar. We analyse differences in perception using self-rated measures and differences in behaviour using automatically extracted audiovisual behavioural cues. The results show that candidates speak for longer, feel less stressed and have a better chance to perform well with verbally and non-verbally expressive virtual interviewing agents.