Title: Artificial intelligence – Human intelligence conflict and its impact on process system safety
Authors: Rajeevan Arunthavanathan, Zaman Sajid, Faisal Khan, Efstratios Pistikopoulos
DOI: 10.1016/j.dche.2024.100151
Journal: Digital Chemical Engineering, Vol. 11, Article 100151 (JCR Q2, Engineering, Chemical)
Published: 2024-04-05
Article page: https://www.sciencedirect.com/science/article/pii/S2772508124000139
PDF: https://www.sciencedirect.com/science/article/pii/S2772508124000139/pdfft?md5=717b713a0304b1ad376553ead2d81709&pid=1-s2.0-S2772508124000139-main.pdf
Citations: 0
Abstract
In the Industry 4.0 revolution, industries are advancing their operations by leveraging Artificial Intelligence (AI). AI-based systems enhance industries by automating repetitive tasks and improving overall efficiency. However, from a safety perspective, operating a system using AI without human interaction raises concerns regarding its reliability. Recent developments have made it imperative to establish a collaborative system between humans and AI, known as Intelligent Augmentation (IA). Industry 5.0 focuses on developing IA-based systems that facilitate collaboration between humans and AI. However, potential conflicts between humans and AI in controlling process plant operations pose a significant challenge in IA systems. Human-AI conflict in IA-based system operation can arise from differences in observation, interpretation, and control action. Observation conflict may arise when humans and AI disagree about the observed data or information. Interpretation conflict may occur when decisions based on the same observed data differ, influenced by the respective learning abilities of human intelligence (HI) and AI. Control action conflict may arise when an AI-driven control action differs from the human operator's action. Conflicts between humans and AI may introduce additional risks to IA-based system operation. Therefore, it is crucial to understand the concept of human-AI conflict and perform a detailed risk analysis before implementing a collaborative system. This paper aims to: (1) investigate human and AI operations in process systems and the possible conflicts that arise during collaboration; (2) formulate the concepts of observation, interpretation, and action conflict in an IA-based system; and (3) provide a case study to identify the potential risks of human-AI conflict.
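The three conflict types the abstract names can be illustrated with a minimal sketch. The class names, fields, threshold, and labels below are illustrative assumptions for exposition only, not the paper's formulation or case study:

```python
from dataclasses import dataclass

@dataclass
class AgentView:
    """One agent's view of a process at a time step (fields are illustrative)."""
    observation: float    # e.g. a pressure reading the agent trusts
    interpretation: str   # e.g. "normal" or "abnormal"
    action: str           # e.g. "hold" or "shut_down"

def detect_conflicts(human: AgentView, ai: AgentView, obs_tol: float = 0.5):
    """Return which of the three conflict types are present.

    obs_tol is an assumed tolerance for calling two observations
    'in disagreement'; a real analysis would derive it from the process.
    """
    conflicts = []
    if abs(human.observation - ai.observation) > obs_tol:
        conflicts.append("observation")        # they disagree on the data itself
    if human.interpretation != ai.interpretation:
        conflicts.append("interpretation")     # same data, different diagnosis
    if human.action != ai.action:
        conflicts.append("action")             # different control decisions
    return conflicts

# Both agents see nearly the same data but diagnose and act differently:
human = AgentView(observation=101.2, interpretation="abnormal", action="shut_down")
ai = AgentView(observation=101.0, interpretation="normal", action="hold")
print(detect_conflicts(human, ai))  # ['interpretation', 'action']
```

The point of the sketch is that the three conflict types are separable: agents can agree on the raw observation yet still diverge at the interpretation or action stage, which is why the abstract treats each as a distinct risk source.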