The frequency of facial muscles engaged in expressing emotions in people with visual disabilities via cloud-based video communication

H. N. Kim
{"title":"通过基于云的视频通信,视觉障碍人士面部肌肉表达情绪的频率","authors":"H. N. Kim","doi":"10.1080/1463922X.2022.2081374","DOIUrl":null,"url":null,"abstract":"Abstract As technology is advancing quickly, and various assistive technology applications are introduced to users with visual disabilities, many people with visual disabilities use smartphones and cloud-based video communication platforms such as Zoom. This study aims at advancing knowledge of how people with visual disabilities visualize voluntary emotions via facial expressions, especially in online contexts. A convenience sample of 28 participants with visual disabilities were observed as to how they show voluntary facial expressions via Zoom. The facial expressions were coded using the Facial Action Coding System (FACS) Action Units (AU). Individual differences were found in the frequency of facial action units, which were influenced by the participants’ visual acuity levels (i.e., visual impairment and blindness) and emotion characteristics (i.e., positive/negative valence and high/low arousal levels). The research findings are anticipated to be widely beneficial to many researchers and professionals in the field of facial expressions of emotions, such as facial recognition systems and emotion sensing technologies. Relevance to human factors/ergonomics theoryThis study advanced knowledge of facial muscle engagements while people with visual disabilities visualize their emotions via facial expressions, especially in online contexts. The advanced understanding would contribute to building a fundamental knowledge foundation, ultimately applicable to universal designs of emotion technology that can read users’ facial expressions to customize services with the focus on adequately accommodating the users’ emotional needs (e.g., ambient intelligence) regardless of users’ visual ability/disability.","PeriodicalId":22852,"journal":{"name":"Theoretical Issues in Ergonomics Science","volume":"24 1","pages":"267 - 280"},"PeriodicalIF":1.4000,"publicationDate":"2022-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The frequency of facial muscles engaged in expressing emotions in people with visual disabilities via cloud-based video communication\",\"authors\":\"H. N. Kim\",\"doi\":\"10.1080/1463922X.2022.2081374\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract As technology is advancing quickly, and various assistive technology applications are introduced to users with visual disabilities, many people with visual disabilities use smartphones and cloud-based video communication platforms such as Zoom. This study aims at advancing knowledge of how people with visual disabilities visualize voluntary emotions via facial expressions, especially in online contexts. A convenience sample of 28 participants with visual disabilities were observed as to how they show voluntary facial expressions via Zoom. The facial expressions were coded using the Facial Action Coding System (FACS) Action Units (AU). Individual differences were found in the frequency of facial action units, which were influenced by the participants’ visual acuity levels (i.e., visual impairment and blindness) and emotion characteristics (i.e., positive/negative valence and high/low arousal levels). 
The research findings are anticipated to be widely beneficial to many researchers and professionals in the field of facial expressions of emotions, such as facial recognition systems and emotion sensing technologies. Relevance to human factors/ergonomics theoryThis study advanced knowledge of facial muscle engagements while people with visual disabilities visualize their emotions via facial expressions, especially in online contexts. The advanced understanding would contribute to building a fundamental knowledge foundation, ultimately applicable to universal designs of emotion technology that can read users’ facial expressions to customize services with the focus on adequately accommodating the users’ emotional needs (e.g., ambient intelligence) regardless of users’ visual ability/disability.\",\"PeriodicalId\":22852,\"journal\":{\"name\":\"Theoretical Issues in Ergonomics Science\",\"volume\":\"24 1\",\"pages\":\"267 - 280\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2022-05-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Theoretical Issues in Ergonomics Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/1463922X.2022.2081374\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"ERGONOMICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Theoretical Issues in Ergonomics Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/1463922X.2022.2081374","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ERGONOMICS","Score":null,"Total":0}
Citations: 0

Abstract

As technology advances quickly and various assistive technology applications are introduced to users with visual disabilities, many people with visual disabilities use smartphones and cloud-based video communication platforms such as Zoom. This study aims at advancing knowledge of how people with visual disabilities visualize voluntary emotions via facial expressions, especially in online contexts. A convenience sample of 28 participants with visual disabilities was observed as to how they show voluntary facial expressions via Zoom. The facial expressions were coded using Facial Action Coding System (FACS) Action Units (AUs). Individual differences were found in the frequency of facial action units, which were influenced by the participants' visual acuity levels (i.e., visual impairment and blindness) and emotion characteristics (i.e., positive/negative valence and high/low arousal levels). The research findings are anticipated to be widely beneficial to researchers and professionals in the field of facial expressions of emotions, such as facial recognition systems and emotion-sensing technologies.

Relevance to human factors/ergonomics theory: This study advances knowledge of facial muscle engagement while people with visual disabilities visualize their emotions via facial expressions, especially in online contexts. This understanding contributes to a fundamental knowledge base, ultimately applicable to universal designs of emotion technology that read users' facial expressions to customize services, with the focus on adequately accommodating users' emotional needs (e.g., ambient intelligence) regardless of users' visual ability/disability.
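To make the analysis described above concrete, the following is a minimal sketch (not the author's analysis code) of how FACS-coded observations could be tallied into AU frequencies per acuity group and emotion type. The record layout, group labels, and the sample data are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch: tallying FACS Action Unit (AU) frequencies
# by acuity group and emotion characteristics. All field names and
# sample records below are assumptions for illustration only.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Observation:
    participant_id: str
    acuity_group: str        # e.g., "visual impairment" or "blindness"
    valence: str             # "positive" or "negative"
    arousal: str             # "high" or "low"
    action_units: list[str]  # AUs coded for one expression, e.g. ["AU6", "AU12"]


def au_frequencies(observations: list[Observation],
                   acuity_group: str, valence: str, arousal: str) -> Counter:
    """Count how often each AU appears for one acuity group and emotion type."""
    counts: Counter = Counter()
    for obs in observations:
        if (obs.acuity_group == acuity_group
                and obs.valence == valence
                and obs.arousal == arousal):
            counts.update(obs.action_units)
    return counts


# Example: AU counts for blind participants showing high-arousal positive emotions
data = [
    Observation("P01", "blindness", "positive", "high", ["AU6", "AU12", "AU25"]),
    Observation("P02", "blindness", "positive", "high", ["AU12"]),
    Observation("P03", "visual impairment", "negative", "low", ["AU4", "AU15"]),
]
print(au_frequencies(data, "blindness", "positive", "high"))
# Counter({'AU12': 2, 'AU6': 1, 'AU25': 1})
```

Comparing such per-group counters is one straightforward way the reported individual differences in AU frequency across acuity levels and valence/arousal categories could be examined.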
Source journal metrics: CiteScore 4.10; self-citation rate 6.20%; articles published: 38