{"title":"Accounting for Human Engagement Behavior to Enhance AI-Assisted Decision Making","authors":"Ming Yin","doi":"10.1609/aaaiss.v3i1.31184","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) technologies have been increasingly integrated into human workflows. For example, the usage of AI-based decision aids in human decision-making processes has resulted in a new paradigm of AI-assisted decision making---that is, the AI-based decision aid provides a decision recommendation to the human decision makers, while humans make the final decision. The increasing prevalence of human-AI collaborative decision making highlights the need to understand how humans engage with the AI-based decision aid in these decision-making processes, and how to promote the effectiveness of the human-AI team in decision making. In this talk, I'll discuss a few examples illustrating that when AI is used to assist humans---both an individual decision maker or a group of decision makers---in decision making, people's engagement with the AI assistance is largely subject to their heuristics and biases, rather than careful deliberation of the respective strengths and limitations of AI and themselves. I'll then describe how to enhance AI-assisted decision making by accounting for human engagement behavior in the designs of AI-based decision aids. For example, AI recommendations can be presented to decision makers in a way that promotes their appropriate trust and reliance on AI by leveraging or mitigating human biases, informed by the analysis of human competence in decision making. Alternatively, AI-assisted decision making can be improved by developing AI models that can anticipate and adapt to the engagement behavior of human decision makers.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"11 6","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI Symposium Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/aaaiss.v3i1.31184","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Artificial intelligence (AI) technologies have been increasingly integrated into human workflows. For example, the use of AI-based decision aids in human decision-making processes has given rise to a new paradigm of AI-assisted decision making, in which the AI-based decision aid provides a decision recommendation to the human decision makers, while humans make the final decision. The increasing prevalence of human-AI collaborative decision making highlights the need to understand how humans engage with the AI-based decision aid in these decision-making processes, and how to promote the effectiveness of the human-AI team in decision making. In this talk, I'll discuss a few examples illustrating that when AI is used to assist humans in decision making, whether an individual decision maker or a group of decision makers, people's engagement with the AI assistance is largely driven by their heuristics and biases rather than by careful deliberation on the respective strengths and limitations of AI and of themselves. I'll then describe how to enhance AI-assisted decision making by accounting for human engagement behavior in the design of AI-based decision aids. For example, AI recommendations can be presented to decision makers in a way that promotes appropriate trust in and reliance on AI by leveraging or mitigating human biases, informed by an analysis of human competence in decision making. Alternatively, AI-assisted decision making can be improved by developing AI models that can anticipate and adapt to the engagement behavior of human decision makers.
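As a loose illustration of that last idea (not a description of the specific models presented in the talk), the sketch below simulates a decision maker whose reliance on the AI follows a simple confidence heuristic, and an aid that adapts whether to show its recommendation based on an assumed estimate of the human's competence on each case. All names, accuracy numbers, and behavioral rules here are hypothetical and chosen only to make the "anticipate and adapt" pattern concrete.

```python
import random

random.seed(0)

# Hypothetical setup: the human and the AI each have some per-case probability
# of being correct; these numbers are illustrative, not taken from the talk.
N_CASES = 10_000
HUMAN_ACC_EASY, HUMAN_ACC_HARD = 0.90, 0.55   # human is strong on "easy" cases
AI_ACC = 0.75                                  # AI accuracy, uniform across cases
CONFIDENCE_THRESHOLD = 0.6                     # heuristic reliance cutoff

def simulate(adaptive_aid: bool) -> float:
    """Return team accuracy under a heuristic-reliance model of the human.

    The simulated human accepts the AI recommendation whenever the AI's stated
    confidence exceeds a fixed threshold (a heuristic), regardless of how
    competent the human is on that case. An 'adaptive' aid withholds its
    recommendation on cases where it estimates the human is likely stronger.
    """
    correct = 0
    for _ in range(N_CASES):
        easy = random.random() < 0.5                 # case difficulty for the human
        human_acc = HUMAN_ACC_EASY if easy else HUMAN_ACC_HARD
        human_correct = random.random() < human_acc
        ai_correct = random.random() < AI_ACC
        ai_confidence = random.uniform(0.5, 1.0)     # AI's (uncalibrated) confidence

        show_recommendation = True
        if adaptive_aid and human_acc > AI_ACC:
            # Adaptive aid: anticipate heuristic over-reliance and stay silent
            # when the (estimated) human competence exceeds the AI's accuracy.
            show_recommendation = False

        if show_recommendation and ai_confidence > CONFIDENCE_THRESHOLD:
            decision_correct = ai_correct            # human relies on the AI
        else:
            decision_correct = human_correct         # human decides alone
        correct += decision_correct
    return correct / N_CASES

print(f"static aid   : {simulate(adaptive_aid=False):.3f}")
print(f"adaptive aid : {simulate(adaptive_aid=True):.3f}")
```

Under these made-up numbers, the adaptive aid improves team accuracy by avoiding heuristic over-reliance on the cases where the human is already strong; the point is only to show the shape of an aid that accounts for human engagement behavior, not to reproduce any result from the paper.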