Complexity, understandability, and compatibility: A comparative study of AI advisory systems for National Security
Brecht Weerheijm, Sarah Giest, Bram Klievink
Government Information Quarterly, vol. 42, no. 4, Article 102088 | Pub Date: 2025-11-06 | DOI: 10.1016/j.giq.2025.102088
Artificial Intelligence (AI) advisory systems are being implemented in the public sector for more efficient and effective decision-making. Yet, there is a lack of in-depth qualitative and comparative research on how decision-makers in real-world settings use different types of AI advisory systems. By asking “How do different AI advisory systems affect use by national security decision-makers?”, this qualitative case study, using scenario-based interviews, reveals that decision-makers are more willing to use relatively simple AI systems than complex ‘black box’ systems. Additionally, factors such as accountability concerns and compatibility with existing decision-making processes influence their willingness to use AI advisory systems. Ultimately, a more technically advanced AI system is not necessarily perceived as more competent, as decision-makers view processes like data analysis as integral to nuanced and effective decision-making. This suggests that the fit between the perceived competences and compatibility of the AI system and the decision-making task at hand is critical to the successful implementation of AI advisory systems.
Recovery from AI government service failures: Is disclosing the identity of the AI agent an effective strategy?
Zepeng Gong, Xiao Han, Yueping Zheng
Government Information Quarterly, vol. 42, no. 4, Article 102087 | Pub Date: 2025-11-01 | DOI: 10.1016/j.giq.2025.102087
Effective recovery from service failures is critical to the sustainable development of artificial intelligence (AI) government services. However, little is known about this subject in the field of public administration. Three survey experiments (N = 2,368) are administered to investigate whether the identity disclosure (IDD) of AI agents could be used as a service recovery strategy to increase user tolerance for government service failures. In addition, the mechanisms and boundaries of IDD’s effects on tolerance and the most effective timing for such disclosures are examined. The findings indicate that: (1) IDD can improve tolerance, an effect fully mediated by a user’s performance expectancy of and perceived respect from a service; (2) the paths for perceived respect are not robust across different levels of service failure severity; and (3) the relatively more effective and economical timing for IDD is pre-failure disclosure. Overall, IDD is an effective strategy for AI government service recovery, and the user’s rational assessment (performance expectancy) plays a more extensive role than emotional assessment (perceived respect) in IDD’s effects on tolerance. This study provides new insights into AI service failure and recovery, thus enriching relevant theories.
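The fully mediated path in finding (1), disclosure raising tolerance through performance expectancy, can be illustrated with a product-of-coefficients sketch on simulated data. The variable names, effect sizes, and sample generation below are invented for illustration and are not the study's estimates; only the sample size N = 2,368 is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2368  # total N across the study's three survey experiments
idd = rng.integers(0, 2, n).astype(float)        # 1 = AI identity disclosed (hypothetical coding)
perf_exp = 0.5 * idd + rng.normal(size=n)        # mediator: performance expectancy
tolerance = 0.7 * perf_exp + rng.normal(size=n)  # outcome: tolerance, fully mediated by design

def ols(y, *xs):
    """Least-squares slope coefficients of y on predictors xs (intercept included, then dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(perf_exp, idd)[0]             # path a: disclosure -> performance expectancy
b = ols(tolerance, perf_exp, idd)[0]  # path b: expectancy -> tolerance, controlling for disclosure
indirect = a * b                      # product-of-coefficients indirect effect
print(f"indirect effect = {indirect:.2f}")  # recovers a value near the simulated 0.5 * 0.7
```

Full mediation here means the direct path (the `idd` coefficient in the second regression) is near zero by construction; in the study that is an empirical finding, not an assumption.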
How AI assistance enhances work adaptability in the public sector: A mixed-methods study from Southwest China
Wei Zhang, Yili Yu
Government Information Quarterly, vol. 42, no. 4, Article 102089 | Pub Date: 2025-10-29 | DOI: 10.1016/j.giq.2025.102089
As artificial intelligence (AI) becomes increasingly embedded in public organizations, a critical challenge is how employees adapt to technological change while maintaining effective job performance. The objective of this study is to examine how AI assistance influences public employees' work adaptability. Drawing on Sociotechnical Systems (STS) theory, we develop a moderated mediation model in which AI assistance enhances adaptability through employee creativity, while task complexity serves as a contextual moderator. To test this model, we conducted a 2-by-2 field experiment (AI-assisted vs. non-assisted; complex vs. simple tasks) in the traffic management division of a municipal public security bureau in southwestern China, involving 408 participants and complemented by semi-structured interviews with 20 employees. The results show that AI enhances employees' work adaptability indirectly by stimulating creativity. Moreover, the positive effects of AI are amplified under high task complexity, indicating that AI performs more effectively in cognitively demanding contexts. These findings advance the theoretical understanding of the “technology-task-human” triadic interaction, extend micro-level behavioral research on AI in public administration, and provide practical guidance for the conditional deployment of AI in public organizations.
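The amplification result corresponds to the interaction contrast of the 2-by-2 design. A minimal simulation with invented cell means (not the study's data) shows the quantity such a field experiment estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 102  # roughly 408 participants split across four cells
# Simulated adaptability scores per cell: AI helps, and helps more on complex tasks
cells = {
    ("no_ai", "simple"):  rng.normal(3.0, 0.5, n),
    ("no_ai", "complex"): rng.normal(2.8, 0.5, n),
    ("ai",    "simple"):  rng.normal(3.3, 0.5, n),
    ("ai",    "complex"): rng.normal(3.6, 0.5, n),
}
means = {k: v.mean() for k, v in cells.items()}

ai_gain_simple = means[("ai", "simple")] - means[("no_ai", "simple")]
ai_gain_complex = means[("ai", "complex")] - means[("no_ai", "complex")]
interaction = ai_gain_complex - ai_gain_simple  # > 0: AI's effect is amplified by complexity
print(f"AI gain on simple tasks: {ai_gain_simple:.2f}; on complex tasks: {ai_gain_complex:.2f}")
```

A positive interaction contrast is what "amplified under high task complexity" means operationally; the mediation through creativity would require a measured creativity variable on top of this design.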
Understanding public acceptance of data collection by intelligence services in the Netherlands: A factorial survey experiment
E.C. Oomens, R.S. van Wegberg, M.J.G. van Eeten, A.J. Klievink
Government Information Quarterly, vol. 42, no. 4, Article 102077 | Pub Date: 2025-10-29 | DOI: 10.1016/j.giq.2025.102077
Intelligence services must balance values such as national security and privacy when collecting data, with each scenario involving specific contextual trade-offs. While citizens benefit from effective intelligence operations, they also risk having their rights infringed upon. This makes citizen perspectives on acceptable data collection for intelligence and national security salient, as the services' legitimacy is also contingent upon public support. Yet, important aspects of citizen perspectives are understudied, such as the influence of contextual factors related to the use of intelligence collection methods. This study, inspired by Nissenbaum's contextual integrity framework, uses a factorial survey experiment with vignettes among a representative sample of 1423 Dutch citizens to examine the influence of threat type, duration, data subject, collection method, data type, and data retention on public acceptance of surveillance. Additionally, the study considers the impact of respondents' trust and privacy attitudes. The findings reveal significant influence of both contextual variables – particularly threat type, data subject, and data retention – and respondent predispositions – particularly trust in institutions, trust in intelligence services' competence, and privacy concerns for others. The findings imply that more in-depth contextual knowledge among the public may foster support for intelligence activities.
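A factorial survey experiment of this kind crosses the vignette dimensions and shows each respondent a random subset of the resulting vignettes. A minimal sketch of the design space, using the study's six dimensions but invented placeholder levels (the study's actual wording and level counts may differ):

```python
import itertools
import random

# Six vignette dimensions from the study; the levels are illustrative stand-ins
dimensions = {
    "threat_type": ["terrorism", "espionage", "cybercrime"],
    "duration": ["one month", "one year"],
    "data_subject": ["suspect", "suspect's contacts", "general public"],
    "collection_method": ["targeted wiretap", "bulk interception"],
    "data_type": ["metadata", "communication content"],
    "data_retention": ["deleted after use", "retained indefinitely"],
}

# Full vignette universe: every combination of one level per dimension
universe = [dict(zip(dimensions, combo))
            for combo in itertools.product(*dimensions.values())]
print(len(universe))  # 3 * 2 * 3 * 2 * 2 * 2 = 144 distinct vignettes

random.seed(42)
shown = random.sample(universe, 5)  # e.g. a random deck of five vignettes per respondent
```

Each respondent rates the acceptability of their sampled vignettes; because levels are assigned orthogonally, the effect of each dimension can then be estimated independently.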
Human-centric AI governance: what the EU public values, what it really, really values
Valentin Wittmann, Timo Meynhardt
Government Information Quarterly, vol. 42, no. 4, Article 102084 | Pub Date: 2025-10-24 | DOI: 10.1016/j.giq.2025.102084
The EU and other institutions worldwide have committed to aligning AI with human values to ensure that the technology contributes to the common good. Yet, criticism persists that debates over which values should guide this alignment are dominated by private and public organizations that prioritize technological considerations. Societal perspectives that emphasize broader, non-normative values are often marginalized. This exclusion generates a democratic deficit and risks forgoing the advantages of aligning AI with citizens' public values: trust, acceptance, and public value creation. To address this gap, we empirically examine EU citizens' regulatory and value preferences regarding AI and its regulation, drawing on Public Values theory and tools across two complementary studies: one mixed-methods study of the EU's Public Consultation and one study based on the quantitative assessment of a newly developed AI Public Value (PV) Landscape. Our findings show that EU citizens (i) prefer binding regulation of AI, (ii) regard both ethical and technological principles, as well as broader, non-normative societal values (especially along the moral-ethical value dimension), as important, and (iii) serve as a conciliatory force capable of balancing business interests against those of state and NGO stakeholders. These results underscore the importance of aligning AI with broader PVs, reinforcing ethical foundations, and enhancing public inclusion in AI governance to achieve truly human-centric and socially accepted AI.
From disclosure to discrepancy: How open government data alters ESG rating divergence
Jianhao Hu, Honghui Zou, Qian Wang
Government Information Quarterly, vol. 42, no. 4, Article 102085 | Pub Date: 2025-10-18 | DOI: 10.1016/j.giq.2025.102085
Given the impact of environmental, social, and governance (ESG) rating divergence on sustainable practices, its antecedents have garnered increasing attention. In the context of growing demands for transparency from investors and policymakers, the effect of open government data (OGD) policies on ESG rating divergence remains underexplored. To address this gap, this study examines the dynamic relationship between OGD policies and ESG rating divergence. Using panel data from Chinese listed firms and employing a difference-in-differences approach, the analysis reveals that OGD policies significantly exacerbate ESG rating divergence in the short term, with pronounced effects observed among firms subject to mandatory disclosure requirements and those with state ownership. Over time, however, OGD policies reduce ESG rating divergence. By offering a dynamic analysis, this research contributes to the literature on OGD policies and ESG assessment by underscoring the role of city-level policies in driving institutional change, thereby enhancing our understanding of ESG variability and public policy impacts.
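The difference-in-differences logic, comparing rating divergence for firms in OGD-policy cities before and after adoption against firms in untreated cities, can be sketched on simulated panel data. The 0.30 short-term effect and all other numbers below are invented for illustration, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500  # firm-year observations per group-by-period cell
treated = np.repeat([0, 0, 1, 1], n)     # 1 = firm headquartered in an OGD-policy city
post = np.tile(np.repeat([0, 1], n), 2)  # 1 = observation after the city's OGD policy
# Simulated ESG rating divergence with a short-term widening of 0.30 for treated firms post-policy
divergence = (1.0 + 0.1 * treated + 0.05 * post
              + 0.30 * treated * post + rng.normal(0, 0.2, 4 * n))

def cell_mean(t, p):
    """Average divergence in one treated-by-period cell."""
    return divergence[(treated == t) & (post == p)].mean()

did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(f"DiD estimate = {did:.2f}")  # recovers roughly the simulated 0.30
```

The paper's reversal over time would correspond to the interaction coefficient shrinking, or turning negative, in later post-policy periods; an event-study version with period-specific interactions would capture that dynamic.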
The trust trifecta: How transparency, ethics, and benefits shape public confidence in government AI
Xiangyu Bian, Bin Wang, Aobo Yang
Government Information Quarterly, vol. 42, no. 4, Article 102083 | Pub Date: 2025-10-14 | DOI: 10.1016/j.giq.2025.102083
As governments increasingly adopt artificial intelligence (AI) in public administration, public trust is critical for successful implementation. Using an adapted Technology Acceptance Model (TAM), this study examines how perceived transparency, ethical principles, and perceived benefits of government AI adoption affect public trust. This study is especially relevant in the context of China's rapid digital transformation and the government's push for AI-driven smart city initiatives, where unique cultural values and governance structures shape public perceptions differently from Western contexts. Data from 608 Chinese citizens were collected through a questionnaire survey to measure perceived AI transparency, ethical principles, perceived benefits, and public trust in government AI use. This research applied structural equation modeling (SEM) to explore the proposed relationships and mediating effects. The findings indicate that both perceived transparency and ethical principles positively affect the perceived benefits of AI technology, which significantly increases public trust in government AI use. Transparency and ethics also directly affect public trust. Notably, perceived benefits mediate the relationship between transparency, ethics, and public trust, suggesting that transparency and ethics indirectly affect trust by influencing perceived benefits. This study validates the extended TAM in the context of government AI applications and shows that improving transparency and ethical compliance in AI use can increase perceived gains and thus public trust in government AI technologies. These insights provide valuable guidance for policymakers to optimize AI application strategies and improve public acceptance, especially in the Chinese context, where balancing technological advances with public concerns is becoming increasingly important.
Parallel learning loops in collaborative innovation: Insights from digital government
Philipp Trein, Bastien Presset, Thenia Vagionaki
Government Information Quarterly, vol. 42, no. 4, Article 102080 | Pub Date: 2025-10-09 | DOI: 10.1016/j.giq.2025.102080
The implementation of digital innovations in the public sector—such as Electronic Health Records (EHRs)—requires decision-makers to engage in learning processes. This article investigates how collective learning processes unfold in collaborative innovation, focusing on the development of Switzerland's national Electronic Health Record (EHR) system. Building on the policy learning and collaborative governance literatures, we conceptualize learning as comprising two interdependent processes: policy-oriented learning (focused on technical effectiveness) and power-oriented learning (concerned with political feasibility). Drawing on 39 semi-structured interviews and extensive document analysis, we find that the EHR initiative followed a sequential learning pattern—technical solutions were developed before sufficient political support was secured—leading to a politically endorsed but technically flawed implementation. The study introduces the concept of parallel learning loops to explain how simultaneous engagement with technical and political dimensions can improve innovation outcomes. These findings advance theoretical understanding of collaborative learning in digital government and underscore the need for institutional designs that support concurrent technical and political deliberation in complex innovation processes.
An exploration of agile government in the public sector: A systematic literature review at macro, meso, and micro levels of analysis
Kuang-Ting Tai, Pallavi Awasthi
Government Information Quarterly, vol. 42, no. 4, Article 102082 | Pub Date: 2025-10-01 | DOI: 10.1016/j.giq.2025.102082
Originating from private sector software development, agile has permeated the public sector, fostering innovative reforms not just in project management but also in organizational management and collaborative governance. Despite its widespread adoption, research remains scarce on the intricacies of agile practices, particularly their potential conflicts and interactions with traditional waterfall-based approaches. Employing the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) method, this systematic review addresses three fundamental research questions concerning the conceptualization, implementation, and impacts of agile government. To deepen theoretical insight and practical application, our study classifies agile into three distinct levels: Micro (project management), Meso (organizational management), and Macro (governance structure). Our analysis uncovers substantial variations in agile practices across these levels, reflecting a deliberate strategy aimed at harmonizing with existing bureaucratic systems. This study concludes by offering policy implications and delineating avenues for future research.
Pub Date : 2025-09-27DOI: 10.1016/j.giq.2025.102081
Seongkyung Cho , Joon-Young Hur , Danee Kim
Amid rapid technological advancements, AI-chatbot integration into government workplaces represents a transformative shift to enhance communication, streamline administrative processes, and boost employee efficiency. Through a mixed-methods design combining survey data and employee interviews, this study analyzes how trust in AI chatbots influences employee utilization of chatbots in government organizations and examines organizational support's moderating role. By demonstrating the pivotal roles of trust and organizational support, this study emphasizes their combined effect on driving adoption and digital transformation within government agencies. Findings provide insights for government administrators and policymakers, guiding the development of trust-building strategies and organizational support mechanisms to promote effective chatbot adoption in public-sector workplaces. This research fills the empirical gap in understanding chatbot adoption from the perspective of government employees and illuminates opportunities and challenges as public-sector employees adapt to technological changes in their work environments.
{"title":"Bridging trust in AI and its adoption: The role of organizational support in AI chatbot implementation in Korean government agencies","authors":"Seongkyung Cho , Joon-Young Hur , Danee Kim","doi":"10.1016/j.giq.2025.102081","DOIUrl":"10.1016/j.giq.2025.102081","url":null,"abstract":"<div><div>Amid rapid technological advancements, AI-chatbot integration into government workplaces represents a transformative shift to enhance communication, streamline administrative processes, and boost employee efficiency. Through a mixed-methods design combining survey data and employee interviews, this study analyzes how trust in AI chatbots influences employee utilization of chatbots in government organizations and examines organizational support's moderating role. By demonstrating the pivotal roles of trust and organizational support, this study emphasizes their combined effect on driving adoption and digital transformation within government agencies. Findings provide insights for government administrators and policymakers, guiding the development of trust-building strategies and organizational support mechanisms to promote effective chatbot adoption in public-sector workplaces. 
This research fills the empirical gap in understanding chatbot adoption from the perspective of government employees and illuminates opportunities and challenges as public-sector employees adapt to technological changes in their work environments.</div></div>","PeriodicalId":48258,"journal":{"name":"Government Information Quarterly","volume":"42 4","pages":"Article 102081"},"PeriodicalIF":10.0,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}