
Proceedings of the ... ACM International Conference on Multimodal Interaction. ICMI (Conference): Latest Publications

Detecting Autism from Head Movements using Kinesics.
Pub Date: 2024-11-01 Epub Date: 2024-11-04 DOI: 10.1145/3678957.3685711
Muhittin Gokmen, Evangelos Sariyanidi, Lisa Yankowitz, Casey J Zampella, Robert T Schultz, Birkan Tunç

Head movements play a crucial role in social interactions. The quantification of communicative movements such as nodding, shaking, orienting, and backchanneling is significant in behavioral and mental health research. However, automated localization of such head movements within videos remains challenging in computer vision due to their arbitrary start and end times, durations, and frequencies. In this work, we introduce a novel and efficient coding system for head movements, grounded in Birdwhistell's kinesics theory, to automatically identify basic head motion units such as nodding and shaking. Our approach first defines the smallest unit of head movement, termed kine, based on the anatomical constraints of the neck and head. We then quantify the location, magnitude, and duration of kines within each angular component of head movement. By defining possible combinations of identified kines, we define a higher-level construct, kineme, which corresponds to basic head motion units such as nodding and shaking. We validate the proposed framework by predicting autism spectrum disorder (ASD) diagnosis from video recordings of interacting partners. We show that the multi-scale property of the proposed framework provides a significant advantage, as collapsing behavior across temporal scales consistently reduces performance. Finally, we incorporate another fundamental behavioral modality, namely speech, and show that distinguishing between speaking- and listening-time head movements significantly improves ASD classification performance.
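
The coding idea above lends itself to a simple signal-processing view. The sketch below is a minimal illustration, not the paper's actual algorithm: it assumes per-frame head pitch angles are already available (e.g., from a face tracker), segments them into elementary sweeps standing in for kines, and applies a rough alternation rule standing in for the nod kineme; all thresholds and names (`extract_kines`, `is_nod`) are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's actual coding system): segment one
# head-angle component into elementary sweeps standing in for "kines", then
# apply a rough alternation rule standing in for the "nod" kineme.
import numpy as np
from scipy.signal import find_peaks

def extract_kines(angle, fps=30.0, min_amplitude=2.0):
    """Return (start, end, magnitude_deg, duration_s) for each candidate kine,
    taken here as a monotone sweep between successive local extrema whose
    amplitude exceeds a small threshold (an assumed anatomical floor)."""
    peaks, _ = find_peaks(angle)
    troughs, _ = find_peaks(-angle)
    extrema = np.sort(np.concatenate(([0], peaks, troughs, [len(angle) - 1])))
    kines = []
    for s, e in zip(extrema[:-1], extrema[1:]):
        magnitude = float(angle[e] - angle[s])
        if abs(magnitude) >= min_amplitude:
            kines.append((int(s), int(e), magnitude, (e - s) / fps))
    return kines

def is_nod(pitch_kines, fps=30.0, min_alternations=2, max_gap_s=0.5):
    """Rough kineme rule: a nod is at least `min_alternations` sign-alternating
    pitch kines occurring close together in time."""
    runs = 0
    for (_, e_prev, m_prev, _), (s_next, _, m_next, _) in zip(pitch_kines, pitch_kines[1:]):
        close = (s_next - e_prev) / fps <= max_gap_s
        if close and np.sign(m_prev) != np.sign(m_next):
            runs += 1
            if runs >= min_alternations:
                return True
        else:
            runs = 0
    return False

# A synthetic 2 Hz pitch oscillation (amplitude 8 degrees) registers as a nod.
t = np.linspace(0, 2, 60)
pitch = 8.0 * np.sin(2 * np.pi * 2 * t)
print(is_nod(extract_kines(pitch)))
```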

Citations: 0
Toward Causal Understanding of Therapist-Client Relationships: A Study of Language Modality and Social Entrainment.
Alexandria K Vail, Jeffrey M Girard, Lauren M Bylsma, Jeffrey F Cohn, Jay Fournier, Holly A Swartz, Louis-Philippe Morency

The relationship between a therapist and their client is one of the most critical determinants of successful therapy. The working alliance is a multifaceted concept capturing the collaborative aspect of the therapist-client relationship; a strong working alliance has been extensively linked to many positive therapeutic outcomes. Although therapy sessions are decidedly multimodal interactions, the language modality is of particular interest given its recognized relationship to similar dyadic concepts such as rapport, cooperation, and affiliation. Specifically, in this work we study language entrainment, which measures how much the therapist and client adapt toward each other's use of language over time. Despite the growing body of work in this area, however, relatively few studies examine causal relationships between human behavior and these relationship metrics: does an individual's perception of their partner affect how they speak, or does how they speak affect their perception? We explore these questions in this work through the use of structural equation modeling (SEM) techniques, which allow for both multilevel and temporal modeling of the relationship between the quality of the therapist-client working alliance and the participants' language entrainment. In our first experiment, we demonstrate that these techniques perform well in comparison to other common machine learning models, with the added benefits of interpretability and causal analysis. In our second analysis, we interpret the learned models to examine the relationship between working alliance and language entrainment and address our exploratory research questions. The results reveal that a therapist's language entrainment can have a significant impact on the client's perception of the working alliance, and that the client's language entrainment is a strong indicator of their perception of the working alliance. We discuss the implications of these results and consider several directions for future work in multimodality.
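
As a rough illustration of the causal question posed above (does perception drive language, or does language drive perception), the snippet below fits two cross-lagged regressions on simulated session-level variables. This is a deliberately simplified stand-in for the paper's SEM analysis; the variable names (`alliance_t1`, `entrain_t1`, ...) and the simulated effect sizes are assumptions.

```python
# Deliberately simplified stand-in for the paper's SEM analysis: two
# cross-lagged regressions on simulated session-level scores. Variable names
# and effect sizes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
alliance_t1 = rng.normal(size=n)
entrain_t1 = 0.3 * alliance_t1 + rng.normal(scale=0.9, size=n)
# Simulate a world in which earlier entrainment feeds later alliance ratings.
alliance_t2 = 0.5 * alliance_t1 + 0.4 * entrain_t1 + rng.normal(scale=0.7, size=n)
entrain_t2 = 0.6 * entrain_t1 + 0.05 * alliance_t1 + rng.normal(scale=0.7, size=n)

df = pd.DataFrame(dict(alliance_t1=alliance_t1, entrain_t1=entrain_t1,
                       alliance_t2=alliance_t2, entrain_t2=entrain_t2))

# Does earlier entrainment predict later alliance, controlling for earlier alliance?
m_forward = smf.ols("alliance_t2 ~ alliance_t1 + entrain_t1", data=df).fit()
# And the reverse direction.
m_reverse = smf.ols("entrain_t2 ~ entrain_t1 + alliance_t1", data=df).fit()
print(m_forward.params["entrain_t1"], m_reverse.params["alliance_t1"])
```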

Citations: 0
On the Transition of Social Interaction from In-Person to Online: Predicting Changes in Social Media Usage of College Students during the COVID-19 Pandemic based on Pre-COVID-19 On-Campus Colocation.
Weichen Wang, Jialing Wu, Subigya Nepal, Alex daSilva, Elin Hedlund, Eilis Murphy, Courtney Rogers, Jeremy Huckins

Pandemics significantly impact human daily life. People throughout the world adhere to safety protocols (e.g., social distancing and self-quarantining). As a result, they willingly keep their distance from the workplace, friends, and even family. In such circumstances, in-person social interactions may be substituted with virtual ones via online channels such as Instagram and Snapchat. To get insights into this phenomenon, we study a group of undergraduate students before and after the start of the COVID-19 pandemic. Specifically, we track N=102 undergraduate students on a small college campus prior to the pandemic using mobile sensing from phones and assign semantic labels to each location they visit on campus where they study, socialize, and live. By leveraging their colocation network at these various semantically labeled places on campus, we find that colocations at certain places that likely proxy higher in-person social interaction (e.g., dormitories, gyms, and Greek houses) show significant predictive capability in identifying individuals' changes in social media usage during the pandemic period. We show that we can predict students' change in social media usage during COVID-19 with an F1 score of 0.73 purely from the in-person colocation data generated prior to the pandemic.
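
The prediction setup described above (pre-pandemic colocation features per student, a binary change in social media usage, F1 evaluation) can be sketched as follows; the place categories, synthetic data, and classifier choice are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of the prediction setup: per-student colocation counts at
# semantically labeled campus places (features) versus a binary change in
# social media usage (label). Data, place categories, and the classifier are
# illustrative assumptions, not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_students = 102
places = ["dormitory", "gym", "greek_house", "library", "dining_hall"]
X = rng.poisson(lam=[20, 5, 3, 15, 25], size=(n_students, len(places)))

# Synthetic label: students with more colocation at "social" places are more
# likely to change their social media usage.
social_score = X[:, 0] + X[:, 1] + X[:, 2]
y = (social_score + rng.normal(scale=5, size=n_students) > np.median(social_score)).astype(int)

clf = LogisticRegression(max_iter=1000)
print("mean F1:", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
```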

Citations: 3
Human-Guided Modality Informativeness for Affective States.
Torsten Wörtwein, Lisa B Sheeber, Nicholas Allen, Jeffrey F Cohn, Louis-Philippe Morency

This paper studies the hypothesis that not all modalities are always needed to predict affective states. We explore this hypothesis in the context of recognizing three affective states that have shown a relation to a future onset of depression: positive, aggressive, and dysphoric. In particular, we investigate three important modalities for face-to-face conversations: vision, language, and acoustics. We first perform a human study to better understand which subset of modalities people find informative when recognizing three affective states. As a second contribution, we explore how these human annotations can guide automatic affect recognition systems to be more interpretable while not degrading their predictive performance. Our studies show that humans can reliably annotate modality informativeness. Further, we observe that guided models significantly improve interpretability, i.e., they attend to modalities similarly to how humans rate the modality informativeness, while at the same time showing a slight increase in predictive performance.
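
A guided model of the kind described above can be sketched as modality-level attention trained with an auxiliary loss that pulls the attention weights toward human informativeness ratings. The PyTorch sketch below is an assumption-laden illustration: the architecture, dimensions, and the 0.5 auxiliary weight are not from the paper.

```python
# Sketch of "human-guided" fusion: softmax attention over modality embeddings,
# trained with the task loss plus an auxiliary term pulling attention weights
# toward human informativeness ratings. Dimensions and the 0.5 weight are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedFusion(nn.Module):
    def __init__(self, dims=(128, 128, 128), hidden=64, n_classes=3):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, modalities):
        h = torch.stack([p(m) for p, m in zip(self.proj, modalities)], dim=1)  # (B, M, H)
        a = F.softmax(self.attn(torch.tanh(h)).squeeze(-1), dim=1)             # (B, M)
        fused = (a.unsqueeze(-1) * h).sum(dim=1)                               # (B, H)
        return self.head(fused), a

model = GuidedFusion()
vision, language, acoustic = (torch.randn(8, 128) for _ in range(3))
human_ratings = F.softmax(torch.rand(8, 3), dim=1)   # per-modality informativeness, normalized
labels = torch.randint(0, 3, (8,))                   # positive / aggressive / dysphoric

logits, attn = model([vision, language, acoustic])
loss = F.cross_entropy(logits, labels) + 0.5 * F.mse_loss(attn, human_ratings)
loss.backward()
```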

Citations: 4
Face and Gesture Analysis for Health Informatics.
Zakia Hammal, Di Huang, Kévin Bailly, Liming Chen, Mohamed Daoudi

The goal of the Face and Gesture Analysis for Health Informatics workshop is to share and discuss the achievements as well as the challenges in using computer vision and machine learning for automatic human behavior analysis and modeling for clinical research and healthcare applications. The workshop aims to promote current research and support the growth of multidisciplinary collaborations to advance this groundbreaking research. The meeting gathers scientists working in related areas of computer vision and machine learning, multi-modal signal processing and fusion, human-centered computing, behavioral sensing, assistive technologies, and medical tutoring systems for healthcare applications and medicine.

Citations: 0
Toward Multimodal Modeling of Emotional Expressiveness.
Victoria Lin, Jeffrey M Girard, Michael A Sayette, Louis-Philippe Morency

Emotional expressiveness captures the extent to which a person tends to outwardly display their emotions through behavior. Due to the close relationship between emotional expressiveness and behavioral health, as well as the crucial role that it plays in social interaction, the ability to automatically predict emotional expressiveness stands to spur advances in science, medicine, and industry. In this paper, we explore three related research questions. First, how well can emotional expressiveness be predicted from visual, linguistic, and multimodal behavioral signals? Second, how important is each behavioral modality to the prediction of emotional expressiveness? Third, which behavioral signals are reliably related to emotional expressiveness? To answer these questions, we add highly reliable transcripts and human ratings of perceived emotional expressiveness to an existing video database and use this data to train, validate, and test predictive models. Our best model shows promising predictive performance on this dataset (RMSE = 0.65, R² = 0.45, r = 0.74). Multimodal models tend to perform best overall, and models trained on the linguistic modality tend to outperform models trained on the visual modality. Finally, examination of our interpretable models' coefficients reveals a number of visual and linguistic behavioral signals, such as facial action unit intensity, overall word count, and use of words related to social processes, that reliably predict emotional expressiveness.
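
For reference, the three metrics reported above can be computed as follows on any pair of predicted and true expressiveness scores; the arrays here are placeholders, not the paper's data.

```python
# How the three reported metrics are computed, on placeholder predictions
# (not the paper's data).
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(2)
y_true = rng.normal(size=100)                      # "true" expressiveness ratings
y_pred = 0.8 * y_true + rng.normal(scale=0.5, size=100)

rmse = float(np.sqrt(mean_squared_error(y_true, y_pred)))
r2 = r2_score(y_true, y_pred)
r = float(np.corrcoef(y_true, y_pred)[0, 1])
print(f"RMSE={rmse:.2f}  R^2={r2:.2f}  r={r:.2f}")
```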

Citations: 0
Depression Severity Assessment for Adolescents at High Risk of Mental Disorders.
Michal Muszynski, Jamie Zelazny, Jeffrey M Girard, Louis-Philippe Morency

Recent progress in artificial intelligence has led to the development of automatic behavioral marker recognition, such as facial and vocal expressions. These automatic tools have enormous potential to support mental health assessment, clinical decision making, and treatment planning. In this paper, we investigate nonverbal behavioral markers of depression severity assessed during semi-structured medical interviews of adolescent patients. The main goal of our research is two-fold: studying a unique population of adolescents at high risk of mental disorders and differentiating mild depression from moderate or severe depression. We aim to explore computationally inferred facial and vocal behavioral responses elicited by three segments of the semi-structured medical interviews: Distress Assessment Questions, Ubiquitous Questions, and Concept Questions. Our experimental methodology reflects best practices for analyzing small sample sizes and unbalanced datasets of unique patients. Our results show a very interesting trend, with strongly discriminative behavioral markers from both the acoustic and visual modalities. These promising results are likely due to the unique classification task (mild depression vs. moderate and severe depression) and the three types of probing questions.
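
One common protocol for the small, unbalanced, single-sample-per-patient setting described above is stratified cross-validation with class-weighted training and a chance-adjusted metric; the sketch below illustrates that general recipe with synthetic data and is not the paper's exact methodology.

```python
# Sketch of one common protocol for small, unbalanced clinical samples:
# stratified cross-validation, class-weighted training, and a chance-adjusted
# metric. Features and labels are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
n_patients, n_features = 30, 16
X = rng.normal(size=(n_patients, n_features))                # e.g. facial/vocal markers
y = np.r_[np.ones(9, dtype=int), np.zeros(21, dtype=int)]    # unbalanced: 9 of 30 patients

clf = LogisticRegression(class_weight="balanced", max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print("balanced accuracy: %.2f" % scores.mean())
```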

Citations: 0
Enforcing Multilabel Consistency for Automatic Spatio-Temporal Assessment of Shoulder Pain Intensity.
Diyala Erekat, Zakia Hammal, Maimoon Siddiqui, Hamdi Dibeklioğlu

The standard clinical assessment of pain is limited primarily to self-reported pain or clinician impression. While the self-reported measurement of pain is useful, in some circumstances it cannot be obtained. Automatic facial expression analysis has emerged as a potential solution for an objective, reliable, and valid measurement of pain. In this study, we propose a video-based approach for the automatic measurement of self-reported pain and observer pain intensity, respectively. To this end, we explore the added value of three self-reported pain scales, i.e., the Visual Analog Scale (VAS), the Sensory Scale (SEN), and the Affective Motivational Scale (AFF), as well as the Observer Pain Intensity (OPI) rating, for a reliable assessment of pain intensity from facial expression. Using a spatio-temporal Convolutional Neural Network - Recurrent Neural Network (CNN-RNN) architecture, we propose to jointly minimize the mean absolute error of pain score estimation for each of these scales while maximizing the consistency between them. The reliability of the proposed method is evaluated on the benchmark database for pain measurement from videos, namely the UNBC-McMaster Pain Archive. Our results show that enforcing consistency between the different self-reported pain intensity scores collected using different pain scales enhances the quality of predictions and improves the state of the art in automatic self-reported pain estimation. The obtained results suggest that automatic assessment of self-reported pain intensity from videos is feasible and could be used as a complementary instrument to unburden caregivers, especially for vulnerable populations that need constant monitoring.
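
The joint objective described above (per-scale error plus cross-scale consistency) can be written compactly as a loss function. The sketch below is only in the spirit of the paper: it uses mean absolute error per scale and an illustrative pairwise-agreement penalty, with the scales assumed to be normalized to a common range and the 0.2 weight chosen arbitrarily.

```python
# In the spirit of the joint objective above: per-scale mean absolute error
# plus an illustrative pairwise-agreement penalty across the predicted scales
# (assumed normalized to a common range). The 0.2 weight is an assumption.
import torch
import torch.nn.functional as F

def multilabel_consistency_loss(preds, targets, consistency_weight=0.2):
    """preds, targets: (batch, n_scales) tensors, e.g. columns for VAS, SEN, AFF, OPI."""
    mae = F.l1_loss(preds, targets)                          # error term over all scales
    pairwise_gap = preds.unsqueeze(2) - preds.unsqueeze(1)   # (batch, n_scales, n_scales)
    consistency = pairwise_gap.abs().mean()                  # penalize disagreement between scales
    return mae + consistency_weight * consistency

preds = torch.rand(4, 4, requires_grad=True)
targets = torch.rand(4, 4)
loss = multilabel_consistency_loss(preds, targets)
loss.backward()
print(float(loss))
```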

Citations: 8
Multimodal Automatic Coding of Client Behavior in Motivational Interviewing.
Leili Tavabi, Brian Borsari, Kalin Stefanov, Joshua D Woolley, Mohammad Soleymani, Larry Zhang, Stefan Scherer

Motivational Interviewing (MI) is defined as a collaborative conversation style that evokes the client's own intrinsic reasons for behavioral change. In MI research, the clients' attitude (willingness or resistance) toward change, as expressed through language, has been identified as an important indicator of their subsequent behavior change. Automated coding of these indicators provides systematic and efficient means for the analysis and assessment of MI therapy sessions. In this paper, we study and analyze behavioral cues in client language and speech that bear indications of the client's behavior toward change during a therapy session, using a database of dyadic motivational interviews between therapists and clients with alcohol-related problems. Deep language and voice encoders, i.e., BERT and VGGish, trained on large amounts of data are used to extract features from each utterance. We develop a neural network to automatically detect the MI codes using both the clients' and therapists' language and the clients' voice, and demonstrate the importance of semantic context in such detection. Additionally, we develop machine learning models for predicting clients' alcohol-use behavioral outcomes through language and voice analysis. Our analysis demonstrates that we are able to estimate MI codes using clients' textual utterances along with preceding textual context from both the therapist and client, reaching an F1-score of 0.72 for a speaker-independent three-class classification. We also report initial results for using the clients' data to predict behavioral outcomes, which outlines the direction for future work.
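
The utterance-encoding step described above can be illustrated with an off-the-shelf BERT encoder; the sketch below mean-pools token embeddings for a single hypothetical client utterance and omits the VGGish acoustic branch and the MI-code classifier, and the specific checkpoint is an assumption.

```python
# Sketch of the utterance-encoding step: mean-pooled BERT features for one
# hypothetical client utterance. The VGGish acoustic branch and the MI-code
# classifier on top are omitted; the model choice is illustrative.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

utterance = "I guess I could try cutting back on the weekends."
inputs = tokenizer(utterance, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state         # (1, seq_len, 768)
mask = inputs["attention_mask"].unsqueeze(-1)            # (1, seq_len, 1)
utterance_embedding = (hidden * mask).sum(1) / mask.sum(1)
print(utterance_embedding.shape)                         # torch.Size([1, 768])
```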

Citations: 0
Automated Affect Detection in Deep Brain Stimulation for Obsessive-Compulsive Disorder: A Pilot Study.
Jeffrey F Cohn, Michael S Okun, Laszlo A Jeni, Itir Onal Ertugrul, David Borton, Donald Malone, Wayne K Goodman

Automated measurement of affective behavior in psychopathology has been limited primarily to screening and diagnosis. While useful, clinicians more often are concerned with whether patients are improving in response to treatment. Are symptoms abating, is affect becoming more positive, are unanticipated side effects emerging? When treatment includes neural implants, the need for objective, repeatable biometrics tied to neurophysiology becomes especially pressing. We used automated face analysis to assess treatment response to deep brain stimulation (DBS) in two patients with intractable obsessive-compulsive disorder (OCD). One was assessed intraoperatively following implantation and activation of the DBS device. The other was assessed three months post-implantation. Both were assessed during DBS on and off conditions. Positive and negative valence were quantified using a CNN trained on normative data from 160 non-OCD participants. Thus, a secondary goal was domain transfer of the classifiers. In both contexts, DBS-on resulted in marked positive affect. In response to DBS-off, affect flattened in both contexts and alternated with increased negative affect in the outpatient setting. The mean AUC for domain transfer was 0.87. These findings suggest that parametric variation of DBS is strongly related to affective behavior and may introduce vulnerability to negative affect in the event that DBS is discontinued.
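
The domain-transfer figure reported above is an ROC AUC; for reference, a minimal computation on placeholder valence labels and scores looks like this.

```python
# For reference, the reported domain-transfer figure is an ROC AUC; a minimal
# computation on placeholder valence labels and scores looks like this.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
y_true = rng.integers(0, 2, size=200)                     # negative vs. positive valence frames
scores = y_true * 0.8 + rng.normal(scale=0.5, size=200)   # classifier scores on the new domain
print("AUC:", roc_auc_score(y_true, scores))
```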

Citations: 17