Workplace security and privacy implications in the GenAI age: A survey

IF 3.7 · CAS Region 2 (Computer Science) · JCR Q2 (Computer Science, Information Systems) · Journal of Information Security and Applications · Pub Date: 2025-01-07 · DOI: 10.1016/j.jisa.2024.103960
Abebe Diro, Shahriar Kaisar, Akanksha Saini, Samar Fatima, Pham Cong Hiep, Fikadu Erba
Journal of Information Security and Applications, Volume 89, Article 103960.
Citations: 0

Abstract

Generative Artificial Intelligence (GenAI) is transforming the workplace, but its adoption introduces significant risks to data security and privacy. Recent incidents underscore the urgency of addressing these issues. This comprehensive survey investigates the implications of GenAI integration in workplaces, focusing on its impact on organizational operations and security. We analyze vulnerabilities within GenAI systems, threats they face, and repercussions of AI-driven workplace monitoring. By examining diverse attack vectors like model attacks and automated cyberattacks, we expose their potential to undermine data integrity and privacy. Unlike previous works, this survey specifically focuses on the security and privacy implications of GenAI within workplace settings, addressing issues like employee monitoring, deepfakes, and regulatory compliance. We delve into emerging threats during model training and usage phases, proposing countermeasures such as differential privacy for training data and robust authentication for access control. Additionally, we provide a comprehensive analysis of evolving regulatory frameworks governing AI tools globally. Based on our comprehensive analysis, we propose targeted recommendations for future research and policy-making to promote responsible and secure adoption of GenAI in the workplace, such as incentivizing the development of explainable AI (XAI) and establishing clear guidelines for ethical data usage. This survey equips stakeholders with a comprehensive understanding of GenAI’s complex workplace landscape, empowering them to harness its benefits responsibly while mitigating risks.
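The abstract names differential privacy for training data as a countermeasure without detailing the mechanism. The core idea is to clip each record's contribution to a released statistic and add noise calibrated to that bound. Below is a minimal, hypothetical sketch of the Laplace mechanism on a bounded sum (illustrative helper names, not the survey's own implementation; DP training of models uses the same clip-and-noise principle on gradients):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_sum(values, lower, upper, epsilon):
    """Release the sum of `values` with epsilon-differential privacy.

    Each record is clipped to [lower, upper], so adding or removing one
    record changes the sum by at most max(|lower|, |upper|) -- the query's
    L1 sensitivity. Laplace noise of scale sensitivity/epsilon masks that
    single-record influence.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = max(abs(lower), abs(upper))
    return sum(clipped) + laplace_noise(sensitivity / epsilon)

# Smaller epsilon gives stronger privacy but noisier answers.
random.seed(42)
noisy_total = dp_sum([0.2] * 100, 0.0, 1.0, epsilon=0.5)
```

The privacy budget epsilon is the trade-off knob: employers releasing aggregate workplace statistics would pick a small epsilon to bound what any monitored individual's data can reveal.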
Source Journal
Journal of Information Security and Applications (Computer Science: Computer Networks and Communications)
CiteScore: 10.90
Self-citation rate: 5.40%
Articles published: 206
Review time: 56 days
Aims and Scope: The Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications relevant to information security. JISA links a vibrant scientific and research community with industry professionals by offering a clear view of modern problems and challenges in information security and by identifying promising scientific and best-practice solutions. Each issue balances original research with innovative industrial approaches from internationally renowned information security experts and researchers.
Latest articles in this journal
Editorial Board
AMF-CFL: Anomaly model filtering based on clustering in federated learning
Zerovision: A privacy-preserving iris authentication framework using zero knowledge proofs and steganographic safeguards
Say the image: Auditory masking effect-driven invertible network for progressive image-in-audio steganography
FedCPP: A hybrid proactive-passive defense framework for backdoor attack mitigation in federated learning