{"title":"基于机器能力的有监督自治系统信任标定分析模型","authors":"Kamran Shafi","doi":"10.1109/ICACI.2017.7974516","DOIUrl":null,"url":null,"abstract":"Modern day autonomous systems are moving away form mere automation of manual tasks to true autonomy that require them to apply human-like judgment when dealing with uncertain situations in performing complex tasks. Trust in these systems is a key enabler to fully realize this dream. A lack of trust leads to inefficient use of these systems and increases the supervision workload for humans. Conversely, an over trust in these systems leads to increased risks and exposure to catastrophic events. This paper presents a high-level analytical model to study trust dynamics in supervised autonomous system environments. Trust, in this context, is defined as a function of machine competence and the level of human control required to achieve this competence. A parametric model of machine competence is presented that allows generating different machine competence behaviors based on the task difficulty, level of supervision and machine's learning ability. The notions of perceived and desired or optimal trust, computed based on perceived and observed machine competence respectively, are introduced. This allows treating trust calibration as an optimization or control problem. The presented models provide a formal framework for developing higher-fidelity simulation models to study trust dynamics in supervised autonomous systems and develop appropriate controllers for optimizing the trust between humans and machines in these systems.","PeriodicalId":260701,"journal":{"name":"2017 Ninth International Conference on Advanced Computational Intelligence (ICACI)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A machine competence based analytical model to study trust calibration in supervised autonomous systems\",\"authors\":\"Kamran Shafi\",\"doi\":\"10.1109/ICACI.2017.7974516\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Modern day autonomous systems are moving away form mere automation of manual tasks to true autonomy that require them to apply human-like judgment when dealing with uncertain situations in performing complex tasks. Trust in these systems is a key enabler to fully realize this dream. A lack of trust leads to inefficient use of these systems and increases the supervision workload for humans. Conversely, an over trust in these systems leads to increased risks and exposure to catastrophic events. This paper presents a high-level analytical model to study trust dynamics in supervised autonomous system environments. Trust, in this context, is defined as a function of machine competence and the level of human control required to achieve this competence. A parametric model of machine competence is presented that allows generating different machine competence behaviors based on the task difficulty, level of supervision and machine's learning ability. The notions of perceived and desired or optimal trust, computed based on perceived and observed machine competence respectively, are introduced. This allows treating trust calibration as an optimization or control problem. 
The presented models provide a formal framework for developing higher-fidelity simulation models to study trust dynamics in supervised autonomous systems and develop appropriate controllers for optimizing the trust between humans and machines in these systems.\",\"PeriodicalId\":260701,\"journal\":{\"name\":\"2017 Ninth International Conference on Advanced Computational Intelligence (ICACI)\",\"volume\":\"39 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 Ninth International Conference on Advanced Computational Intelligence (ICACI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICACI.2017.7974516\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 Ninth International Conference on Advanced Computational Intelligence (ICACI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICACI.2017.7974516","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A machine competence based analytical model to study trust calibration in supervised autonomous systems
Abstract: Modern-day autonomous systems are moving away from mere automation of manual tasks toward true autonomy, which requires them to apply human-like judgment when dealing with uncertain situations while performing complex tasks. Trust in these systems is a key enabler for fully realizing this vision. A lack of trust leads to inefficient use of these systems and increases the supervision workload for humans. Conversely, overtrust in these systems leads to increased risk and exposure to catastrophic events. This paper presents a high-level analytical model to study trust dynamics in supervised autonomous system environments. Trust, in this context, is defined as a function of machine competence and the level of human control required to achieve this competence. A parametric model of machine competence is presented that allows generating different machine competence behaviors based on task difficulty, level of supervision, and the machine's learning ability. The notions of perceived and desired (optimal) trust, computed from perceived and observed machine competence respectively, are introduced. This allows trust calibration to be treated as an optimization or control problem. The presented models provide a formal framework for developing higher-fidelity simulation models to study trust dynamics in supervised autonomous systems and for developing appropriate controllers to optimize trust between humans and machines in these systems.
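The abstract does not give the paper's exact functional forms. The short Python sketch below illustrates one plausible reading of the setup: a parametric competence curve driven by task difficulty, level of supervision, and learning ability; trust as a function of competence and required human control; and calibration error as the gap between perceived and desired trust. All function names, functional forms, and parameter values here are illustrative assumptions, not the author's model.

```python
import numpy as np

def machine_competence(t, difficulty, supervision, learning_rate):
    """Illustrative parametric competence curve (assumed form): competence
    grows with experience t, faster under higher supervision and learning
    rate, and saturates at a lower ceiling for harder tasks."""
    ceiling = 1.0 / (1.0 + difficulty)           # harder tasks cap competence lower
    rate = learning_rate * (0.5 + supervision)   # supervision accelerates learning
    return ceiling * (1.0 - np.exp(-rate * t))

def trust(competence, supervision):
    """Illustrative trust function: trust rises with machine competence and
    falls with the level of human control needed to achieve it."""
    return competence * (1.0 - supervision)

# Desired (optimal) trust is computed from the competence actually observed
# on the task; perceived trust uses the human's (possibly biased) estimate.
t = np.linspace(0.0, 10.0, 101)
observed = machine_competence(t, difficulty=0.5, supervision=0.3, learning_rate=0.8)
perceived = 0.7 * observed  # e.g. the human underestimates competence

desired_trust = trust(observed, supervision=0.3)
perceived_trust = trust(perceived, supervision=0.3)

# Trust calibration viewed as an optimization/control problem: drive the
# calibration error (perceived minus desired trust) toward zero over time.
calibration_error = perceived_trust - desired_trust
print(f"final calibration error: {calibration_error[-1]:+.3f}")
```

In this reading, a controller would act on the quantities the human can influence (e.g. the level of supervision or the information shaping perceived competence) so that perceived trust tracks desired trust as the machine's competence evolves.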