A New Look at Breathing for Affective Studies

IF 9.8 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | IEEE Transactions on Affective Computing | Pub Date: 2024-06-12 | DOI: 10.1109/TAFFC.2024.3413053
Nanfei Sun, Ioannis Pavlidis
{"title":"A New Look at Breathing for Affective Studies","authors":"Nanfei Sun;Ioannis Pavlidis","doi":"10.1109/TAFFC.2024.3413053","DOIUrl":null,"url":null,"abstract":"In affective computing, breathing has seen lighter use than the heart and EDA channels. Several reasons have contributed to this, including difficulties in disambiguating affective from speech effects and perceived lack of generalizability. Here we report a framework that addresses these issues. The cornerstone of the framework is a comprehensive set of physiologically informed features, comprised of three groups: breathing depth, respiratory time quotient (RTQ), and breathing speed features. The breathing depth features capture either mental arousal or fear effects. The RTQ features capture speech production. The breathing speed features capture arousal effects due to emotional influences. The said framework appears to have broad applicability. In the naturalistic <i>Office Tasks 2019</i> dataset with speaking sessions, the said features used either in regression or random forest models led to robust classification of arousal (<inline-formula><tex-math>$\\overline{\\text{AUC}}$</tex-math></inline-formula> in [0.75, 0.96]) stemming from three different conditions: a) mental-emotional stressor effected through a time-pressured knowledge task; b) pure mental stressor effected through a long knowledge task; c) mental-social stressor effected through a public speech task. In the stylized <i>CASE</i> dataset with silent sessions, the same features and algorithms led to solid classification of arousal (<inline-formula><tex-math>$\\overline{\\text{AUC}}$</tex-math></inline-formula> in [0.71, 0.85]) stemming from scary vs. non-scary movie clips.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"16 1","pages":"98-115"},"PeriodicalIF":9.8000,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10555307/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In affective computing, breathing has seen lighter use than the cardiac and EDA channels. Several reasons have contributed to this, including difficulties in disambiguating affective from speech effects and a perceived lack of generalizability. Here we report a framework that addresses these issues. The cornerstone of the framework is a comprehensive set of physiologically informed features, comprising three groups: breathing depth, respiratory time quotient (RTQ), and breathing speed features. The breathing depth features capture either mental arousal or fear effects. The RTQ features capture speech production. The breathing speed features capture arousal effects due to emotional influences. The framework appears to have broad applicability. In the naturalistic Office Tasks 2019 dataset, which contains speaking sessions, these features, used in either regression or random forest models, led to robust classification of arousal ($\overline{\text{AUC}}$ in [0.75, 0.96]) stemming from three different conditions: a) a mental-emotional stressor effected through a time-pressured knowledge task; b) a pure mental stressor effected through a long knowledge task; c) a mental-social stressor effected through a public speech task. In the stylized CASE dataset, which contains silent sessions, the same features and algorithms led to solid classification of arousal ($\overline{\text{AUC}}$ in [0.71, 0.85]) stemming from scary vs. non-scary movie clips.
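To make the described pipeline concrete, below is a minimal sketch of the kind of analysis the abstract outlines: summarizing a respiration waveform into breathing depth, RTQ, and breathing speed features, then scoring a random forest classifier of arousal by AUC. Everything specific here is an assumption for illustration, not the authors' implementation: the paper's exact feature definitions are not reproduced on this page (in particular, RTQ is assumed below to be an inhalation-to-exhalation time ratio), and the peak-based cycle detection and synthetic toy data are hypothetical.

```python
# Hypothetical sketch: breathing features -> random forest -> AUC.
# Not the authors' implementation; feature definitions are assumptions:
#   depth: trough-to-peak amplitude of each breath
#   RTQ:   assumed here to be inhalation time / exhalation time
#   speed: instantaneous breathing rate in breaths per minute
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def breathing_features(resp, fs):
    """Summarize one session's respiration waveform into 6 features."""
    peaks, _ = find_peaks(resp, distance=fs)     # inhalation ends (>= 1 s apart)
    troughs, _ = find_peaks(-resp, distance=fs)  # exhalation ends
    depth, rtq = [], []
    for p in peaks:
        before = troughs[troughs < p]            # trough starting this breath
        after = troughs[troughs > p]             # trough ending this breath
        if len(before) and len(after):
            depth.append(resp[p] - resp[before[-1]])
            rtq.append((p - before[-1]) / (after[0] - p))  # assumed RTQ
    rate = 60.0 * fs / np.diff(peaks)            # breaths per minute
    return np.array([stat(group) for group in (depth, rtq, rate)
                     for stat in (np.mean, np.std)])

# Toy data: higher-arousal sessions simply breathe faster (illustrative only).
rng = np.random.default_rng(0)
fs, t = 25, np.arange(0, 60, 1 / 25)             # 25 Hz, 60 s sessions
y = rng.integers(0, 2, 80)                       # binary arousal labels
X = np.array([breathing_features(
        np.sin(2 * np.pi * (0.2 + 0.1 * label) * t)
        + 0.05 * rng.standard_normal(t.size), fs)
    for label in y])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```

In the paper's actual setting, X would hold one feature vector per task session and y the condition-derived arousal label; the $\overline{\text{AUC}}$ ranges quoted in the abstract come from the authors' models on real datasets, not from a toy pipeline like this one.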
Source Journal

IEEE Transactions on Affective Computing (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS)
CiteScore: 15.00
Self-citation rate: 6.20%
Annual publications: 174
Journal description: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest articles in this journal
F2FNet: An Efficient Affective State Analysis Network Reconstructing EEG From Few-Channel To Full-Channel
Evaluating and Correcting Human Annotation Bias in Dynamic Micro-Expression Recognition
Prototypical Contrastive Learning With Temporal Dynamic Graph Convolutional Network for EEG-Based Emotion Recognition
Transformer-Based Physiological Emotion Recognition for Autism Intervention Support
CMD$^{3}$: Cross-Modal Decoupled Deformable Distillation for EEG-fNIRS Fusion