A machine competence based analytical model to study trust calibration in supervised autonomous systems

Kamran Shafi
{"title":"基于机器能力的有监督自治系统信任标定分析模型","authors":"Kamran Shafi","doi":"10.1109/ICACI.2017.7974516","DOIUrl":null,"url":null,"abstract":"Modern day autonomous systems are moving away form mere automation of manual tasks to true autonomy that require them to apply human-like judgment when dealing with uncertain situations in performing complex tasks. Trust in these systems is a key enabler to fully realize this dream. A lack of trust leads to inefficient use of these systems and increases the supervision workload for humans. Conversely, an over trust in these systems leads to increased risks and exposure to catastrophic events. This paper presents a high-level analytical model to study trust dynamics in supervised autonomous system environments. Trust, in this context, is defined as a function of machine competence and the level of human control required to achieve this competence. A parametric model of machine competence is presented that allows generating different machine competence behaviors based on the task difficulty, level of supervision and machine's learning ability. The notions of perceived and desired or optimal trust, computed based on perceived and observed machine competence respectively, are introduced. This allows treating trust calibration as an optimization or control problem. The presented models provide a formal framework for developing higher-fidelity simulation models to study trust dynamics in supervised autonomous systems and develop appropriate controllers for optimizing the trust between humans and machines in these systems.","PeriodicalId":260701,"journal":{"name":"2017 Ninth International Conference on Advanced Computational Intelligence (ICACI)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A machine competence based analytical model to study trust calibration in supervised autonomous systems\",\"authors\":\"Kamran Shafi\",\"doi\":\"10.1109/ICACI.2017.7974516\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Modern day autonomous systems are moving away form mere automation of manual tasks to true autonomy that require them to apply human-like judgment when dealing with uncertain situations in performing complex tasks. Trust in these systems is a key enabler to fully realize this dream. A lack of trust leads to inefficient use of these systems and increases the supervision workload for humans. Conversely, an over trust in these systems leads to increased risks and exposure to catastrophic events. This paper presents a high-level analytical model to study trust dynamics in supervised autonomous system environments. Trust, in this context, is defined as a function of machine competence and the level of human control required to achieve this competence. A parametric model of machine competence is presented that allows generating different machine competence behaviors based on the task difficulty, level of supervision and machine's learning ability. The notions of perceived and desired or optimal trust, computed based on perceived and observed machine competence respectively, are introduced. This allows treating trust calibration as an optimization or control problem. 
The presented models provide a formal framework for developing higher-fidelity simulation models to study trust dynamics in supervised autonomous systems and develop appropriate controllers for optimizing the trust between humans and machines in these systems.\",\"PeriodicalId\":260701,\"journal\":{\"name\":\"2017 Ninth International Conference on Advanced Computational Intelligence (ICACI)\",\"volume\":\"39 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 Ninth International Conference on Advanced Computational Intelligence (ICACI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICACI.2017.7974516\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Ninth International Conference on Advanced Computational Intelligence (ICACI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICACI.2017.7974516","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Modern-day autonomous systems are moving away from mere automation of manual tasks toward true autonomy, which requires them to apply human-like judgment when dealing with uncertain situations while performing complex tasks. Trust in these systems is a key enabler to fully realize this vision. A lack of trust leads to inefficient use of these systems and increases the supervision workload for humans. Conversely, overtrust in these systems leads to increased risk and exposure to catastrophic events. This paper presents a high-level analytical model to study trust dynamics in supervised autonomous system environments. Trust, in this context, is defined as a function of machine competence and the level of human control required to achieve this competence. A parametric model of machine competence is presented that allows generating different machine competence behaviors based on the task difficulty, the level of supervision, and the machine's learning ability. The notions of perceived and desired (or optimal) trust, computed from perceived and observed machine competence respectively, are introduced. This allows trust calibration to be treated as an optimization or control problem. The presented models provide a formal framework for developing higher-fidelity simulation models to study trust dynamics in supervised autonomous systems and to develop appropriate controllers for optimizing trust between humans and machines in these systems.
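The abstract does not spell out the model's equations, so the following Python snippet is only a minimal illustrative sketch of the kind of parametric competence model and calibration measure it describes. Everything here is an assumption made for illustration: the logistic learning curve, the particular way difficulty, supervision, and learning ability enter it, and all function and parameter names are hypothetical, not the paper's actual formulation.

import numpy as np

def competence(t, difficulty, supervision, learning_rate):
    # Hypothetical parametric competence curve (assumed, not from the paper).
    # Competence grows with experience t along a logistic learning curve:
    # higher task difficulty lowers the achievable ceiling, while supervision
    # and the machine's learning ability speed up convergence.
    ceiling = 1.0 - 0.5 * difficulty            # harder tasks cap competence
    rate = learning_rate * (1.0 + supervision)  # supervision accelerates learning
    return ceiling / (1.0 + np.exp(-rate * (t - 5.0)))

def calibration_error(perceived, observed):
    # Gap between trust warranted by perceived competence and by observed
    # competence; positive values indicate overtrust, negative undertrust.
    return perceived - observed

# Example: track the calibration gap over 20 task episodes, with perceived
# competence modeled as a noisy estimate of observed competence.
rng = np.random.default_rng(0)
t = np.arange(20)
observed = competence(t, difficulty=0.4, supervision=0.5, learning_rate=0.8)
perceived = np.clip(observed + rng.normal(0.0, 0.05, t.size), 0.0, 1.0)
print(calibration_error(perceived, observed))

Treating trust calibration as a control problem, as the paper proposes, would then amount to driving this error toward zero over time, for example by adjusting the level of supervision.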