{"title":"A New Look at Breathing for Affective Studies","authors":"Nanfei Sun;Ioannis Pavlidis","doi":"10.1109/TAFFC.2024.3413053","DOIUrl":null,"url":null,"abstract":"In affective computing, breathing has seen lighter use than the heart and EDA channels. Several reasons have contributed to this, including difficulties in disambiguating affective from speech effects and perceived lack of generalizability. Here we report a framework that addresses these issues. The cornerstone of the framework is a comprehensive set of physiologically informed features, comprised of three groups: breathing depth, respiratory time quotient (RTQ), and breathing speed features. The breathing depth features capture either mental arousal or fear effects. The RTQ features capture speech production. The breathing speed features capture arousal effects due to emotional influences. The said framework appears to have broad applicability. In the naturalistic <i>Office Tasks 2019</i> dataset with speaking sessions, the said features used either in regression or random forest models led to robust classification of arousal (<inline-formula><tex-math>$\\overline{\\text{AUC}}$</tex-math></inline-formula> in [0.75, 0.96]) stemming from three different conditions: a) mental-emotional stressor effected through a time-pressured knowledge task; b) pure mental stressor effected through a long knowledge task; c) mental-social stressor effected through a public speech task. In the stylized <i>CASE</i> dataset with silent sessions, the same features and algorithms led to solid classification of arousal (<inline-formula><tex-math>$\\overline{\\text{AUC}}$</tex-math></inline-formula> in [0.71, 0.85]) stemming from scary vs. 
non-scary movie clips.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"16 1","pages":"98-115"},"PeriodicalIF":9.8000,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10555307/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
In affective computing, breathing has seen lighter use than the heart and EDA channels. Several reasons have contributed to this, including difficulty in disambiguating affective from speech effects and a perceived lack of generalizability. Here we report a framework that addresses these issues. Its cornerstone is a comprehensive set of physiologically informed features comprising three groups: breathing depth, respiratory time quotient (RTQ), and breathing speed features. The breathing depth features capture mental arousal or fear effects. The RTQ features capture speech production. The breathing speed features capture arousal effects due to emotional influences. The framework appears to have broad applicability. In the naturalistic Office Tasks 2019 dataset, which includes speaking sessions, these features, used in either regression or random forest models, led to robust classification of arousal ($\overline{\text{AUC}}$ in [0.75, 0.96]) stemming from three different conditions: a) a mental-emotional stressor effected through a time-pressured knowledge task; b) a pure mental stressor effected through a long knowledge task; and c) a mental-social stressor effected through a public speech task. In the stylized CASE dataset, which includes silent sessions, the same features and algorithms led to solid classification of arousal ($\overline{\text{AUC}}$ in [0.71, 0.85]) stemming from scary vs. non-scary movie clips.
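The pipeline the abstract describes — extracting depth, RTQ, and speed features from a breathing waveform and feeding them to a random forest scored by AUC — can be sketched as follows. This is a minimal illustration, not the authors' code: the sampling rate, the definition of RTQ as the inhalation-to-exhalation time ratio, and the toy "high arousal = faster, shallower breathing" data are all assumptions; the paper's exact feature definitions may differ.

```python
# Hypothetical sketch of the three feature groups from the abstract
# (depth, RTQ, speed) plus a random forest evaluated by AUC.
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

FS = 25  # Hz, assumed sampling rate of the breathing waveform


def breathing_features(wave, fs=FS):
    """Depth, RTQ, and speed features from one breathing segment."""
    peaks, _ = find_peaks(wave, distance=fs)     # end-of-inhalation points
    troughs, _ = find_peaks(-wave, distance=fs)  # end-of-exhalation points
    if len(peaks) < 2 or len(troughs) < 2:
        return np.zeros(3)
    # Breathing depth: mean peak-to-trough amplitude
    depth = np.mean(wave[peaks]) - np.mean(wave[troughs])
    # RTQ assumed here as mean inhalation time / mean exhalation time
    t_in = np.mean([p - troughs[troughs < p][-1]
                    for p in peaks if (troughs < p).any()])
    t_out = np.mean([troughs[troughs > p][0] - p
                     for p in peaks if (troughs > p).any()])
    rtq = t_in / t_out if t_out else 0.0
    # Breathing speed in breaths per minute
    rate = 60.0 * (len(peaks) - 1) / ((peaks[-1] - peaks[0]) / fs)
    return np.array([depth, rtq, rate])


rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / FS)  # one-minute segments


def segment(rate_hz, depth):
    """Simulated breathing waveform: a noisy sinusoid."""
    return depth * np.sin(2 * np.pi * rate_hz * t) \
        + 0.05 * rng.standard_normal(t.size)


# Toy labels: class 1 ("high arousal") breathes faster and shallower
X = np.array([breathing_features(segment(0.25, 1.0)) for _ in range(40)]
             + [breathing_features(segment(0.45, 0.6)) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)

# Train on even-indexed segments, score AUC on odd-indexed ones
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[::2], y[::2])
auc = roc_auc_score(y[1::2], clf.predict_proba(X[1::2])[:, 1])
print(f"AUC = {auc:.2f}")
```

With cleanly separated toy classes the AUC is near-perfect; the abstract's reported ranges ([0.75, 0.96] and [0.71, 0.85]) reflect real, noisier data.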
Journal description:
The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. Surveys of existing work that provide new perspectives on the historical and future directions of this field are also welcome.