It's Not Only What You Say, But Also How You Say It: Machine Learning Approach to Estimate Trust from Conversation.

Human Factors · IF 2.9 · Q1 (Behavioral Sciences) · CAS Region 3 (Psychology) · Pub Date: 2024-06-01 · Epub Date: 2023-04-28 · DOI: 10.1177/00187208231166624
Mengyao Li, Isabel M Erickson, Ernest V Cross, John D Lee

Abstract

Objective: The objective of this study was to estimate trust from conversations using both lexical and acoustic data.

Background: As NASA moves to long-duration space exploration operations, the increasing need for cooperation between humans and virtual agents requires real-time trust estimation by virtual agents. Measuring trust through conversation is a novel and unintrusive approach.

Method: A 2 (reliability) × 2 (cycles) × 3 (events) within-subjects study involving habitat system maintenance was designed to elicit various levels of trust in a conversational agent. Participants had trust-related conversations with the conversational agent at the end of each decision-making task. To estimate trust, subjective trust ratings were predicted using machine learning models trained on three types of conversational features (lexical, acoustic, and combined). After training, the models were explained using variable importance and partial dependence plots.
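The modeling pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' code: the data are synthetic stand-ins for the study's lexical and acoustic features, and the feature names are hypothetical.

```python
# Minimal sketch (not the study's code): train a random forest on lexical-only,
# acoustic-only, and combined feature sets, then compute permutation importance
# as a form of variable importance. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
lexical = rng.normal(size=(n, 3))   # hypothetical lexical features (e.g., sentiment)
acoustic = rng.normal(size=(n, 3))  # hypothetical acoustic features (e.g., formants)
trust = lexical[:, 0] + acoustic[:, 0] + rng.normal(scale=0.3, size=n)

feature_sets = {
    "lexical": lexical,
    "acoustic": acoustic,
    "combined": np.hstack([lexical, acoustic]),
}
scores = {}
for name, X in feature_sets.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, trust, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    scores[name] = model.score(X_te, y_te)  # held-out R^2 per feature set

# Model explanation: permutation importance over the combined feature set
best = RandomForestRegressor(n_estimators=200, random_state=0)
best.fit(feature_sets["combined"], trust)
imp = permutation_importance(best, feature_sets["combined"], trust, random_state=0)
```

Partial dependence plots (the study's other explanation tool) are available in the same module via `sklearn.inspection.PartialDependenceDisplay`.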

Results: Results showed that a random forest algorithm, trained using the combined lexical and acoustic features, predicted trust in the conversational agent most accurately (adjusted R² = 0.71). The most important predictors were a combination of lexical and acoustic cues: average sentiment considering valence shifters, the mean of formants, and Mel-frequency cepstral coefficients (MFCCs). These conversational features were identified as partial mediators predicting people's trust.
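The reported fit metric, adjusted R², penalizes plain R² for the number of predictors relative to the sample size, so a combined feature set only scores higher if the extra features genuinely improve the fit. A short sketch of the standard formula (the input values below are illustrative, not the study's):

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 for n observations and p predictors:
    1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
```

For example, with 100 observations and 10 predictors, a raw R² of 0.75 adjusts down to roughly 0.72, which is why adjusted R² is the fairer comparison across feature sets of different sizes.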

Conclusion: Precise trust estimation from conversation requires both lexical and acoustic cues.
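One of the top lexical predictors, average sentiment considering valence shifters, can be illustrated with a toy scorer in which a negating word flips the polarity of the word that follows it. This is a simplified illustration, not the study's sentiment method; the word lists are invented.

```python
# Toy illustration (not the study's method) of sentiment with a valence shifter:
# a negator ("not", "never", "hardly") flips the polarity of the next word.
SENTIMENT = {"trust": 1.0, "reliable": 1.0, "helpful": 1.0,
             "wrong": -1.0, "broken": -1.0, "confusing": -1.0}
NEGATORS = {"not", "never", "hardly"}

def average_sentiment(text: str) -> float:
    """Mean polarity over sentiment-bearing words, sign-flipped after a negator."""
    scores, flip = [], 1.0
    for tok in text.lower().split():
        if tok in NEGATORS:
            flip = -1.0
            continue
        if tok in SENTIMENT:
            scores.append(flip * SENTIMENT[tok])
        flip = 1.0  # the shifter applies only to the immediately following word
    return sum(scores) / len(scores) if scores else 0.0
```

Without the shifter, "the agent is not reliable" would score positive from "reliable" alone; accounting for "not" correctly makes it negative, which is why valence shifters matter for trust-bearing language.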

Application: These results showed the possibility of using conversational data to measure trust, and potentially other dynamic mental states, unobtrusively and dynamically.

Source journal: Human Factors (Management Science / Behavioral Sciences)
CiteScore: 10.60 · Self-citation rate: 6.10% · Articles per year: 99 · Review time: 6-12 weeks
About the journal: Human Factors: The Journal of the Human Factors and Ergonomics Society publishes peer-reviewed scientific studies in human factors/ergonomics that present theoretical and practical advances concerning the relationship between people and technologies, tools, environments, and systems. Papers published in Human Factors leverage fundamental knowledge of human capabilities and limitations, and the basic understanding of cognitive, physical, behavioral, physiological, social, developmental, affective, and motivational aspects of human performance, to yield design principles; enhance training, selection, and communication; and ultimately improve human-system interfaces and sociotechnical systems that lead to safer and more effective outcomes.