AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business

Declan Humphreys, Abigail Koay, Dennis Desmond, Erica Mealy
{"title":"AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business","authors":"Declan Humphreys,&nbsp;Abigail Koay,&nbsp;Dennis Desmond,&nbsp;Erica Mealy","doi":"10.1007/s43681-024-00443-4","DOIUrl":null,"url":null,"abstract":"<div><p>This paper examines the ethical obligations companies have when implementing generative Artificial Intelligence (AI). We point to the potential cyber security risks companies are exposed to when rushing to adopt generative AI solutions or buying into “AI hype”. While the benefits of implementing generative AI solutions for business have been widely touted, the inherent risks associated have been less well publicised. There are growing concerns that the race to integrate generative AI is not being accompanied by adequate safety measures. The rush to buy into the hype of generative AI and not fall behind the competition is potentially exposing companies to broad and possibly catastrophic cyber-attacks or breaches. In this paper, we outline significant cyber security threats generative AI models pose, including potential ‘backdoors’ in AI models that could compromise user data or the risk of ‘poisoned’ AI models producing false results. In light of these the cyber security concerns, we discuss the moral obligations of implementing generative AI into business by considering the ethical principles of beneficence, non-maleficence, autonomy, justice, and explicability. We identify two examples of ethical concern, <i>overreliance</i> and <i>over-trust</i> in generative AI, both of which can negatively influence business decisions, leaving companies vulnerable to cyber security threats. This paper concludes by recommending a set of checklists for ethical implementation of generative AI in business environment to minimise cyber security risk based on the discussed moral responsibilities and ethical concern.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 3","pages":"791 - 804"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-024-00443-4.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-024-00443-4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper examines the ethical obligations companies have when implementing generative Artificial Intelligence (AI). We point to the potential cyber security risks companies are exposed to when rushing to adopt generative AI solutions or buying into “AI hype”. While the benefits of implementing generative AI solutions for business have been widely touted, the risks inherent in them have been less well publicised. There are growing concerns that the race to integrate generative AI is not being accompanied by adequate safety measures. The rush to buy into the hype of generative AI and not fall behind the competition is potentially exposing companies to broad and possibly catastrophic cyber-attacks or breaches. In this paper, we outline significant cyber security threats generative AI models pose, including potential ‘backdoors’ in AI models that could compromise user data, and the risk of ‘poisoned’ AI models producing false results. In light of these cyber security concerns, we discuss the moral obligations involved in implementing generative AI in business by considering the ethical principles of beneficence, non-maleficence, autonomy, justice, and explicability. We identify two examples of ethical concern, overreliance and over-trust in generative AI, both of which can negatively influence business decisions, leaving companies vulnerable to cyber security threats. This paper concludes by recommending a set of checklists for the ethical implementation of generative AI in business environments to minimise cyber security risk, based on the discussed moral responsibilities and ethical concerns.
