Nicolas Spatola
Computers in Human Behavior: Artificial Humans, vol. 2, no. 2, Article 100099, August 2024
DOI: 10.1016/j.chbah.2024.100099
https://www.sciencedirect.com/science/article/pii/S2949882124000598
The efficiency-accountability tradeoff in AI integration: Effects on human performance and over-reliance
As artificial intelligence proliferates across various sectors, it is crucial to explore the psychological impacts of over-reliance on these systems. This study examines how different formats of chatbot assistance (instruction-only, answer-only, and combined instruction and answer) influence user performance and reliance over time. In two experiments, participants completed reasoning tests with the aid of a chatbot, "Cogbot," which offered varying levels of explanatory detail and direct answers. In Experiment 1, participants receiving direct answers relied on the chatbot more than those receiving instructions, consistent with the practical hypothesis that users prioritize efficiency over explainability. Experiment 2 introduced transfer problems with incorrect AI guidance, revealing that initial reliance on direct answers impaired performance on subsequent tasks when the AI erred, supporting concerns about automation complacency. The findings indicate that while efficiency-focused AI solutions enhance immediate performance, they risk over-assimilation and reduced vigilance, leading to significant performance drops when AI accuracy falters. Conversely, explanatory guidance did not significantly improve outcomes in the absence of direct answers. These results highlight the complex dynamics between AI efficiency and accountability, suggesting that responsible AI adoption requires balancing streamlined functionality with safeguards against over-reliance.