Artificial intelligence (AI) systems, particularly generative AI systems, present numerous opportunities for organizations and society. As AI systems become more powerful, ensuring their safe and ethical use necessitates accountability, requiring actors to explain and justify any unintended behavior and outcomes. Recognizing the significance of accountability for AI systems, research from various disciplines, including information systems (IS), has started to investigate the topic. However, the concept of accountability for AI systems remains ambiguous across these disciplines. We therefore conduct a bibliometric analysis of 5,809 publications to aggregate and synthesize existing research and better understand accountability for AI systems. Our analysis distinguishes IS research, defined by the Web of Science “Computer Science, Information Systems” category, from related non-IS disciplines. This differentiation highlights IS research’s unique socio-technical contribution while integrating insights from across the broader academic landscape on accountability for AI systems. Building on these findings, we derive research propositions to direct future research on accountability for AI systems. Finally, we apply these propositions to the context of generative AI systems and derive a research agenda to guide future work on this emerging topic.
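To make the bibliometric step concrete, below is a minimal sketch of one common technique in such analyses: counting keyword co-occurrences across exported publication records. The file name and column name are hypothetical, and the abstract does not specify the paper's actual tooling or pipeline.

```python
# A minimal sketch of a keyword co-occurrence count over a hypothetical
# CSV export of Web of Science records with an "author_keywords" column
# of semicolon-separated keywords (file and column names are assumptions).
from collections import Counter
from itertools import combinations

import pandas as pd

records = pd.read_csv("wos_export.csv")  # hypothetical export file

pair_counts = Counter()
for raw in records["author_keywords"].dropna():
    # Normalize and deduplicate keywords within one publication.
    keywords = sorted({k.strip().lower() for k in raw.split(";") if k.strip()})
    # Count each unordered keyword pair once per publication.
    pair_counts.update(combinations(keywords, 2))

# The most frequent pairs approximate the edges of a co-occurrence
# network, e.g. ("accountability", "artificial intelligence").
for (a, b), n in pair_counts.most_common(10):
    print(f"{a} -- {b}: {n}")
```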
In the age of artificial intelligence (AI) and data-hungry applications, privacy-enhancing technologies (PETs) offer a way for individuals to limit data collection and processing by these applications. Yet, adoption of PETs remains low, and user-focused research on adoption predictors is limited. In this work, we use the unified theory of acceptance and use of technology 2 (UTAUT2) to study the adoption of five categories of personal PETs: private browsing, privacy-focused web browsers, privacy browser extensions, secure (encrypted) messaging, and secure (encrypted) email. Our results confirmed the significant role of social influence and habit as predictors of PET adoption but also showed that the adoption of these five categories was not driven by the same set of factors. These differences call for more contextualized research on PET adoption and raise questions about the limits of UTAUT2's generalizability. The lack of support for such an established theory also creates room for new adoption theories better suited to the emerging world of technologies, including PETs.
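As an illustration of how such predictors can be tested, here is a minimal sketch regressing a binary adoption indicator on the seven UTAUT2 constructs. All column names are hypothetical, and a plain logistic regression stands in for the structural model a study like this would typically estimate.

```python
# A minimal sketch of testing UTAUT2 predictors of PET adoption with a
# logistic regression, assuming a survey DataFrame with Likert-scale
# construct scores and a binary "adopted" indicator (all names are
# hypothetical; the study itself may use structural equation modeling).
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("pet_survey.csv")  # hypothetical survey data

model = smf.logit(
    "adopted ~ performance_expectancy + effort_expectancy + social_influence"
    " + facilitating_conditions + hedonic_motivation + price_value + habit",
    data=survey,
).fit()

# Significant coefficients for social_influence and habit would mirror
# the pattern of findings the abstract reports.
print(model.summary())
```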
As digital transformation progresses, research is broadening its perspective to focus not only on the positive effects but also on the adverse impacts of the increasing use of digital technologies. Against this backdrop, the issue of being responsibly digital is moving to the foreground in research and practice. The literature has called for further research on the conceptualization of digital responsibility and its role in digital transformation. In this paper, we empirically investigate the interplay of digital responsibility and digital transformation in the context of a large-scale digital transformation. Based on our results, we derive four core contributions for IS theory: we highlight the prime role of digital responsibility in contemporary digital transformation processes, refine the conceptualization of digital responsibility in the context of digital transformation, propose six dynamic interplays of digital responsibility and digital transformation, and present promising avenues for further research on this increasingly relevant phenomenon.
The notion of “Responsible Digital” emphasises the ethical and responsible design and use of digital technologies. Having the knowledge and skills to navigate the digital world safely, wisely and securely becomes critical when digital literacy and access to technologies are limited and livelihood possibilities are precarious, as in the context of vulnerable migrants. We use the Responsible Research and Innovation (RRI) framework, in its operationalised version called AREA Plus, as a lens to reflect on our research practice in two projects in sensitive contexts, designed with vulnerable groups to co-create digital interventions aimed at improving their lives. In so doing, we introduce a new ‘sustainability’ dimension to AREA Plus to develop what we term the AREAS framework. We contribute to knowledge by applying and further enhancing the AREA Plus framework in the context of migration across Africa, South East Asia and South America; to methodology by highlighting the procedures followed when working with vulnerable groups; and to practice through the promotion of responsible digital practices.
Cybersecurity incident response (CSIR) is paramount for organizational resilience. At its core, analysts undertake a cognitively demanding process of data analytics to correlate data points, identify patterns, and synthesize diverse information. Recently, artificial intelligence (AI)-based solutions have been used to streamline CSIR workflows, with an increasing focus on explainable AI (XAI) to ensure transparency. However, XAI also poses challenges, requiring analysts to allocate additional time to processing explanations. This study addresses the gap in understanding how AI and its explanations can be seamlessly integrated into CSIR workflows. Employing a multi-method approach, we first interviewed analysts to identify their cognitive challenges, interactions with AI, and expectations of XAI. In a subsequent case study, we investigated how analysts' needs for AI explanations evolve throughout the investigative process. Our findings yield several key propositions for addressing the cognitive impacts of XAI in CSIR, aiming to enhance cognitive fit and reduce analysts' cognitive load during investigations.
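To illustrate the kind of explanation output analysts must process, here is a minimal sketch attaching SHAP feature attributions to a synthetic alert-triage classifier. The model, features, and data are all hypothetical; the abstract does not prescribe any specific XAI technique.

```python
# A minimal sketch of per-alert XAI output: SHAP attributions for a
# synthetic alert-triage classifier (everything here is illustrative;
# real CSIR features and models would differ).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))                   # e.g. failed logins, bytes out, ...
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # 1 = malicious alert (synthetic)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Per-alert feature attributions: which signals drove this verdict?
# This is precisely the extra material an analyst must read and weigh.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)  # one attribution per feature for the first alert
```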
Because Taiwan's economy is dominated by small and medium-sized enterprises, the formation of virtual teams has become even more important for Taiwanese companies in the wake of COVID-19. This study therefore draws on virtual team climate and the theory of planned behaviour (TPB), adding pluralistic ignorance (a social-cognitive bias arising from perceived universal behavioural adherence to social norms) as a key inhibitor, to explore factors affecting the quantity and quality of knowledge-sharing behaviour through knowledge management systems (KMS) in virtual teams. A field survey of 528 employees working in virtual teams at 72 Taiwanese companies was analysed using partial least squares structural equation modelling (PLS-SEM) to evaluate the model empirically. The findings show that virtual team climate, perceived behavioural control, and subjective norm positively affect knowledge-sharing behaviour. More surprisingly, however, pluralistic ignorance dampens the positive relationships between (1) the intention to share knowledge and knowledge-sharing behaviour and (2) virtual team climate and knowledge-sharing behaviour within virtual teams. Finally, this study provides theoretical implications for academic researchers and practical implications for managers of knowledge-intensive companies keen to adopt virtual teams.
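For readers unfamiliar with moderation tests, a minimal sketch of the reported dampening effect as an interaction term follows. Column names are hypothetical, and plain OLS stands in here for the paper's PLS-SEM estimation.

```python
# A minimal sketch of a moderation test: pluralistic ignorance moderating
# the intention-to-behaviour link (hypothetical column names; the paper
# estimates this with PLS-SEM rather than OLS).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("virtual_team_survey.csv")  # hypothetical 528-respondent data

# 'a * b' expands to a + b + a:b; a negative a:b coefficient indicates
# that pluralistic ignorance dampens the intention-to-behaviour link.
model = smf.ols(
    "sharing_behaviour ~ intention * pluralistic_ignorance", data=df
).fit()
print(model.summary())
```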

