{"title":"Sensemaking National Security: Applying Design Practice to Explore AI in Cybersecurity","authors":"Mariana Zafeirakopoulos","doi":"10.1109/MTS.2024.3457679","DOIUrl":null,"url":null,"abstract":"Intelligence analysis provides decision support in national security contexts. The current approach to supported decision-making tends toward reductivism and analysis, regardless of the type of national security issue. Currently, there is little research available on the alternative approaches and practices needed for intervening in national security contexts that are emerging and, therefore, not well understood. Consequently, this article explores the idea that a homogenous approach to national security problem-solving is insufficient, and we suggest that different national security issues require different approaches. In this article, we apply practices of exploration, relationality, and participation from the field of design as established approaches to sensemaking. We offer sensemaking as an alternative to reductive analytic thinking by applying it to a national security issue: the role of artificial intelligence (AI) in cybersecurity. To explore sensemaking, six workshops were conducted over six months in 2021. These workshops used design practices (thinking and tools) to explore AI in cybersecurity. From studying the workshop activities and analyzing interviews conducted by the core design team (CDT) (Project Steering Group), the study’s findings suggest new practices for Intelligence to support decision-making in future-oriented contexts. These practices include using design tools such as personas and scenarios to anchor the exploration of future harms, which also give legitimacy to lived experience alongside expert knowledge. This study also identifies possibilities for future engagement, participation, and dialog between government functions such as Intelligence and civil society to explore unknown and emerging issues together. Consequently, a relational approach gives legitimacy to seemingly unconventional ways of thinking, approaching, and knowing about future unknown contexts.","PeriodicalId":55016,"journal":{"name":"IEEE Technology and Society Magazine","volume":"43 4","pages":"72-82"},"PeriodicalIF":2.1000,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Technology and Society Magazine","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10694730/","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Sensemaking National Security: Applying Design Practice to Explore AI in Cybersecurity
Intelligence analysis provides decision support in national security contexts. The current approach to supporting decision-making tends toward reductivism and analysis, regardless of the type of national security issue. There is little research available on the alternative approaches and practices needed for intervening in national security contexts that are emerging and, therefore, not well understood. Consequently, this article explores the idea that a homogeneous approach to national security problem-solving is insufficient, and we suggest that different national security issues require different approaches. In this article, we apply practices of exploration, relationality, and participation from the field of design as established approaches to sensemaking. We offer sensemaking as an alternative to reductive analytic thinking by applying it to a national security issue: the role of artificial intelligence (AI) in cybersecurity. To explore sensemaking, six workshops were conducted over six months in 2021. These workshops used design practices (thinking and tools) to explore AI in cybersecurity. From studying the workshop activities and analyzing interviews conducted by the core design team (CDT), which served as the Project Steering Group, the study's findings suggest new practices for Intelligence to support decision-making in future-oriented contexts. These practices include using design tools such as personas and scenarios to anchor the exploration of future harms, tools that also give legitimacy to lived experience alongside expert knowledge. This study also identifies possibilities for future engagement, participation, and dialog between government functions such as Intelligence and civil society to explore unknown and emerging issues together. Consequently, a relational approach gives legitimacy to seemingly unconventional ways of thinking, approaching, and knowing about future unknown contexts.
Journal description:
IEEE Technology and Society Magazine invites feature articles (refereed), special articles, and commentaries on topics within the scope of the IEEE Society on Social Implications of Technology, in the broad areas of social implications of electrotechnology, history of electrotechnology, and engineering ethics.