Artificial intelligence – Human intelligence conflict and its impact on process system safety

Digital Chemical Engineering | IF 3.0 | Q2 (Engineering, Chemical) | Pub Date: 2024-04-05 | DOI: 10.1016/j.dche.2024.100151
Rajeevan Arunthavanathan, Zaman Sajid, Faisal Khan, Efstratios Pistikopoulos
{"title":"人工智能与人类智能的冲突及其对工艺系统安全的影响","authors":"Rajeevan Arunthavanathan ,&nbsp;Zaman Sajid ,&nbsp;Faisal Khan ,&nbsp;Efstratios Pistikopoulos","doi":"10.1016/j.dche.2024.100151","DOIUrl":null,"url":null,"abstract":"<div><p>In the Industry 4.0 revolution, industries are advancing their operations by leveraging Artificial Intelligence (AI). AI-based systems enhance industries by automating repetitive tasks and improving overall efficiency. However, from a safety perspective, operating a system using AI without human interaction raises concerns regarding its reliability. Recent developments have made it imperative to establish a collaborative system between humans and AI, known as Intelligent Augmentation (IA). Industry 5.0 focuses on developing IA-based systems that facilitate collaboration between humans and AI. However, potential conflicts between humans and AI in controlling process plant operations pose a significant challenge in IA systems. Human-AI conflict in IA-based system operation can arise due to differences in observation, interpretation, and control action. Observation conflict may arise when humans and AI disagree with the observed data or information. Interpretation conflicts may occur due to differences in decision-making based on observed data, influenced by the learning ability of human intelligence (HI) and AI. Control action conflicts may arise when AI-driven control action differs from the human operator action. Conflicts between humans and AI may introduce additional risks to the IA-based system operation. Therefore, it is crucial to understand the concept of human-AI conflict and perform a detailed risk analysis before implementing a collaborative system. This paper aims to investigate the following: 1. Human and AI operations in process systems and the possible conflicts during the collaboration. 2. Formulate the concept of observation, interpretation, and action conflict in an IA-based system. 3. Provide a case study to identify the potential risk of human-AI conflict.</p></div>","PeriodicalId":72815,"journal":{"name":"Digital Chemical Engineering","volume":"11 ","pages":"Article 100151"},"PeriodicalIF":3.0000,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772508124000139/pdfft?md5=717b713a0304b1ad376553ead2d81709&pid=1-s2.0-S2772508124000139-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence – Human intelligence conflict and its impact on process system safety\",\"authors\":\"Rajeevan Arunthavanathan ,&nbsp;Zaman Sajid ,&nbsp;Faisal Khan ,&nbsp;Efstratios Pistikopoulos\",\"doi\":\"10.1016/j.dche.2024.100151\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In the Industry 4.0 revolution, industries are advancing their operations by leveraging Artificial Intelligence (AI). AI-based systems enhance industries by automating repetitive tasks and improving overall efficiency. However, from a safety perspective, operating a system using AI without human interaction raises concerns regarding its reliability. Recent developments have made it imperative to establish a collaborative system between humans and AI, known as Intelligent Augmentation (IA). Industry 5.0 focuses on developing IA-based systems that facilitate collaboration between humans and AI. However, potential conflicts between humans and AI in controlling process plant operations pose a significant challenge in IA systems. 
Human-AI conflict in IA-based system operation can arise due to differences in observation, interpretation, and control action. Observation conflict may arise when humans and AI disagree with the observed data or information. Interpretation conflicts may occur due to differences in decision-making based on observed data, influenced by the learning ability of human intelligence (HI) and AI. Control action conflicts may arise when AI-driven control action differs from the human operator action. Conflicts between humans and AI may introduce additional risks to the IA-based system operation. Therefore, it is crucial to understand the concept of human-AI conflict and perform a detailed risk analysis before implementing a collaborative system. This paper aims to investigate the following: 1. Human and AI operations in process systems and the possible conflicts during the collaboration. 2. Formulate the concept of observation, interpretation, and action conflict in an IA-based system. 3. Provide a case study to identify the potential risk of human-AI conflict.</p></div>\",\"PeriodicalId\":72815,\"journal\":{\"name\":\"Digital Chemical Engineering\",\"volume\":\"11 \",\"pages\":\"Article 100151\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-04-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2772508124000139/pdfft?md5=717b713a0304b1ad376553ead2d81709&pid=1-s2.0-S2772508124000139-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Chemical Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2772508124000139\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, CHEMICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Chemical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772508124000139","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, CHEMICAL","Score":null,"Total":0}
Citations: 0

Abstract

In the Industry 4.0 revolution, industries are advancing their operations by leveraging Artificial Intelligence (AI). AI-based systems enhance industrial operations by automating repetitive tasks and improving overall efficiency. However, from a safety perspective, operating a system with AI and without human interaction raises concerns regarding its reliability. Recent developments have made it imperative to establish a collaborative system between humans and AI, known as Intelligent Augmentation (IA). Industry 5.0 focuses on developing IA-based systems that facilitate collaboration between humans and AI. However, potential conflicts between humans and AI in controlling process plant operations pose a significant challenge for IA systems. Human-AI conflict in IA-based system operation can arise from differences in observation, interpretation, and control action. Observation conflict may arise when humans and AI disagree on the observed data or information. Interpretation conflict may occur when decisions based on the same observed data differ, influenced by the learning abilities of human intelligence (HI) and AI. Control action conflict may arise when the AI-driven control action differs from the human operator's action. Conflicts between humans and AI may introduce additional risks to IA-based system operation. Therefore, it is crucial to understand the concept of human-AI conflict and perform a detailed risk analysis before implementing a collaborative system. This paper aims to: 1. examine human and AI operations in process systems and the possible conflicts during collaboration; 2. formulate the concept of observation, interpretation, and action conflict in an IA-based system; and 3. provide a case study to identify the potential risks of human-AI conflict.
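The abstract's taxonomy of observation, interpretation, and control-action conflict can be made concrete with a small illustration. The sketch below is not from the paper: `AgentView`, `detect_conflicts`, the tolerance value, and the example readings are hypothetical names and parameters, used only to show how the three conflict types could be distinguished for a single control loop in an IA-based operation.

```python
"""Minimal sketch (assumptions, not the paper's method): flag the three
human-AI conflict types the abstract describes for one process control loop."""

from dataclasses import dataclass
from enum import Enum, auto


class ConflictType(Enum):
    OBSERVATION = auto()      # human and AI report different process data
    INTERPRETATION = auto()   # same data, different assessment of plant state
    CONTROL_ACTION = auto()   # different corrective action is proposed


@dataclass
class AgentView:
    """What one agent (human operator or AI controller) reports for the loop."""
    observed_value: float     # e.g. a pressure reading in bar (hypothetical unit)
    assessed_state: str       # e.g. "normal" or "abnormal"
    proposed_action: str      # e.g. "hold" or "open relief valve"


def detect_conflicts(human: AgentView, ai: AgentView,
                     obs_tolerance: float = 0.05) -> list[ConflictType]:
    """Return the conflict types present between the two views.

    obs_tolerance is a hypothetical relative tolerance below which two
    observed values are treated as agreeing.
    """
    conflicts = []
    ref = max(abs(human.observed_value), abs(ai.observed_value), 1e-9)
    if abs(human.observed_value - ai.observed_value) / ref > obs_tolerance:
        conflicts.append(ConflictType.OBSERVATION)
    if human.assessed_state != ai.assessed_state:
        conflicts.append(ConflictType.INTERPRETATION)
    if human.proposed_action != ai.proposed_action:
        conflicts.append(ConflictType.CONTROL_ACTION)
    return conflicts


if __name__ == "__main__":
    # Hypothetical scenario: both agents read nearly the same pressure, but
    # disagree on whether it is abnormal and on what action to take.
    human = AgentView(observed_value=12.1, assessed_state="normal",
                      proposed_action="hold")
    ai = AgentView(observed_value=12.2, assessed_state="abnormal",
                   proposed_action="open relief valve")
    print(detect_conflicts(human, ai))
    # -> [ConflictType.INTERPRETATION, ConflictType.CONTROL_ACTION]
```

In this reading, an observation conflict is a disagreement about the data itself, an interpretation conflict is a disagreement about what the (agreed) data means, and a control-action conflict is a disagreement about what to do; a risk analysis of the collaborative system would then consider how each type propagates into unsafe plant states.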
