Investigating User Confidence for Uncertainty Presentation in Predictive Decision Making

Syed Arshad, Jianlong Zhou, Constant Bridon, Fang Chen, Yang Wang

Proceedings of the Annual Meeting of the Australian Special Interest Group for Computer Human Interaction, 7 December 2015. DOI: 10.1145/2838739.2838753
Citations: 24
Abstract
Machine Learning (ML) based decision support systems often appear as a black box to non-expert users, so users' confidence becomes critical for effective decision making and for maintaining trust in the system. We find that user confidence varies significantly depending on the supplementary material presented on screen. We investigate changes in user confidence in ML-based decision making by varying the level of uncertainty presented in an online water-pipe failure prediction case study, and find that all 26 subjects rated the higher-uncertainty task as the most difficult and reported the lowest confidence in its predictive decisions. This agrees with our expectation that increased uncertainty would reduce user confidence in predictive decision making. However, the ML-researchers subgroup reported being most confident when uncertainty with known probability was presented, whereas the other subgroups (general staff and non-ML researchers) appeared most confident when uncertainty was not presented at all. This original research improves understanding of users' decision-making confidence with respect to the uncertainty presented in a machine learning context.