Robert Thomson, Daniel N. Cassenti, Thom Hawkins
{"title":"好东西太多:不同程度的自动化如何影响用户在模拟入侵检测任务中的表现","authors":"Robert Thomson , Daniel N. Cassenti , Thom Hawkins","doi":"10.1016/j.chbr.2024.100511","DOIUrl":null,"url":null,"abstract":"<div><div>Cyber analysts face a demanding task when prioritizing alerts from intrusion detection systems, balancing the challenge of numerous false positives from rule-based methods with the critical need to detect genuine cyber threats, necessitating unwavering vigilance and imposing a significant cognitive burden. In this field, there exists pressure to incorporate artificial intelligence techniques to enhance the automation of analyst workflows, yet without a clear grasp of how elevating the <em>Level of Automation</em> impacts the allocation of attentional and cognitive resources among analysts. This paper describes a simulated AI-assisted intrusion detection task which varies five degrees of automation as well as the sensitivity of the assistant, evaluating performance-based (e.g., accuracy, response time, sensitivity, response bias) and subjective (e.g., surveys on workload and trust) measures. Participants white-listed a series of time-sensitive alerts in a simulated Snort® environment. Our findings indicate that elevating the level of automation altered participants’ behavior, evident in their tendency to display a response bias towards rejecting hits (reduced hit rate and false alarm rate) when overriding an AI’s decision. Additionally, participants subjectively reported experiencing a decreased cognitive workload with a more precise algorithm, irrespective of any variance in their actual performance. Our findings suggest the necessity for additional research before implementing further automation into analyst workflows, as the demands of tasks evolve with escalating levels of automation.</div></div>","PeriodicalId":72681,"journal":{"name":"Computers in human behavior reports","volume":"16 ","pages":"Article 100511"},"PeriodicalIF":4.9000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Too much of a good thing: How varying levels of automation impact user performance in a simulated intrusion detection task\",\"authors\":\"Robert Thomson , Daniel N. Cassenti , Thom Hawkins\",\"doi\":\"10.1016/j.chbr.2024.100511\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Cyber analysts face a demanding task when prioritizing alerts from intrusion detection systems, balancing the challenge of numerous false positives from rule-based methods with the critical need to detect genuine cyber threats, necessitating unwavering vigilance and imposing a significant cognitive burden. In this field, there exists pressure to incorporate artificial intelligence techniques to enhance the automation of analyst workflows, yet without a clear grasp of how elevating the <em>Level of Automation</em> impacts the allocation of attentional and cognitive resources among analysts. This paper describes a simulated AI-assisted intrusion detection task which varies five degrees of automation as well as the sensitivity of the assistant, evaluating performance-based (e.g., accuracy, response time, sensitivity, response bias) and subjective (e.g., surveys on workload and trust) measures. Participants white-listed a series of time-sensitive alerts in a simulated Snort® environment. 
Our findings indicate that elevating the level of automation altered participants’ behavior, evident in their tendency to display a response bias towards rejecting hits (reduced hit rate and false alarm rate) when overriding an AI’s decision. Additionally, participants subjectively reported experiencing a decreased cognitive workload with a more precise algorithm, irrespective of any variance in their actual performance. Our findings suggest the necessity for additional research before implementing further automation into analyst workflows, as the demands of tasks evolve with escalating levels of automation.</div></div>\",\"PeriodicalId\":72681,\"journal\":{\"name\":\"Computers in human behavior reports\",\"volume\":\"16 \",\"pages\":\"Article 100511\"},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2024-11-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in human behavior reports\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2451958824001441\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in human behavior reports","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2451958824001441","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Too much of a good thing: How varying levels of automation impact user performance in a simulated intrusion detection task
Cyber analysts face a demanding task when prioritizing alerts from intrusion detection systems: rule-based methods generate numerous false positives, yet genuine cyber threats must still be detected, which requires unwavering vigilance and imposes a significant cognitive burden. In this field there is pressure to incorporate artificial intelligence techniques to further automate analyst workflows, without a clear grasp of how raising the Level of Automation affects how analysts allocate attentional and cognitive resources. This paper describes a simulated AI-assisted intrusion detection task that varies the degree of automation across five levels as well as the sensitivity of the assistant, evaluating both performance-based measures (e.g., accuracy, response time, sensitivity, response bias) and subjective measures (e.g., surveys on workload and trust). Participants white-listed a series of time-sensitive alerts in a simulated Snort® environment. Our findings indicate that raising the level of automation altered participants' behavior, evident in a response bias toward rejection (reduced hit rate and false alarm rate) when overriding an AI's decision. Additionally, participants subjectively reported a lower cognitive workload with a more precise algorithm, irrespective of any difference in their actual performance. These findings suggest that additional research is needed before further automation is introduced into analyst workflows, because task demands change as the level of automation escalates.
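The performance measures named above (sensitivity, response bias, hit rate, false alarm rate) are standard signal detection theory quantities. As a rough illustration only, the sketch below computes sensitivity (d′) and the criterion (c) from raw counts; the function name, the example counts, and the log-linear correction are assumptions for illustration and are not taken from the paper's analysis.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal detection sensitivity (d') and response bias (criterion c).

    Uses a log-linear correction (add 0.5 to each cell) so that hit or
    false-alarm rates of exactly 0 or 1 do not produce infinite z-scores.
    This is a generic SDT sketch, not the paper's specific analysis.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF

    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)

    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity: separating real threats from benign alerts
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # bias: positive values indicate a conservative, reject-prone observer

    return d_prime, criterion

# Hypothetical example: an observer who rejects most alerts shows a positive criterion.
print(sdt_measures(hits=30, misses=20, false_alarms=5, correct_rejections=45))
```

A shift toward rejection, as reported in the abstract, would appear in this framing as a more positive criterion with both hit rate and false alarm rate reduced, while d′ may remain unchanged.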