An effective approach for early fuel leakage detection with enhanced explainability

Ruimin Chu, Li Chik, Yiliao Song, Jeffrey Chan, Xiaodong Li
Intelligent Systems with Applications, Volume 26, Article 200504. Published online 2025-03-17.
DOI: 10.1016/j.iswa.2025.200504
Article URL: https://www.sciencedirect.com/science/article/pii/S2667305325000304

Abstract

Leakage detection at service stations with underground storage tanks containing hazardous products, such as fuel, is a critical task. Early detection is important to halt the spread of leaks, which can have significant economic and ecological impacts on the surrounding community. Existing fuel leakage detection methods typically rely on statistical analysis of low-granularity inventory data, leading to delayed detection. Moreover, explainability, a crucial factor for practitioners to validate detection outcomes, remains unexplored in this domain. To address these limitations, we propose an EXplainable Fuel Leakage Detection approach called EXFLD, which performs online fuel leakage detection and provides intuitive explanations for detection validation. EXFLD incorporates a high-performance deep learning model for accurate online fuel leakage detection and an inherently interpretable model that generates intuitive textual explanations to assist practitioners in result validation. Unlike existing explainable artificial intelligence methods, which often rely on deep learning models that can be hard to interpret, EXFLD is a human-centric system designed to provide clear and understandable insights to support decision-making. Through case studies, we demonstrate that EXFLD can provide intuitive and meaningful textual explanations that humans can easily understand. Additionally, we show that incorporating semantic constraints during training of the ANFIS model enhances the semantic interpretability of these explanations by improving the coverage and distinguishability of membership functions. Experimental evaluations using a dataset collected from real-world sites in Australia, encompassing 167 tank instances, demonstrate that EXFLD achieves competitive performance compared to baseline methods, with an F2-score of 0.7969.
This dual focus on accuracy and human-centric explainability marks a significant advancement in fuel leakage detection, potentially facilitating broader adoption.
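The membership-function properties the abstract highlights, coverage (every input activates at least one linguistic term to a reasonable degree) and distinguishability (adjacent terms remain separable), can be illustrated with a small sketch. This is not the paper's ANFIS implementation; the Gaussian form, the three linguistic terms, and the 0.5 threshold are illustrative assumptions.

```python
import math

def gaussian_mf(x, center, sigma):
    """Degree to which x belongs to a fuzzy set centered at `center`."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

# Three illustrative linguistic terms over a feature normalized to [0, 1].
centers = {"low": 0.0, "medium": 0.5, "high": 1.0}
sigma = 0.25

def coverage_ok(xs, threshold=0.5):
    """Coverage check: every input fires at least one term above `threshold`."""
    return all(
        max(gaussian_mf(x, c, sigma) for c in centers.values()) >= threshold
        for x in xs
    )

grid = [i / 100 for i in range(101)]
print(coverage_ok(grid))  # → True: this partition leaves no uncovered gaps
```

Semantic constraints of this kind can be enforced during training by penalizing partitions whose worst-case coverage drops below the threshold, or whose neighboring centers drift too close together.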
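The reported F2-score is the F-beta measure with beta = 2, which weights recall twice as heavily as precision — appropriate here, where missing a real leak is costlier than raising a false alarm. A minimal sketch from confusion counts (the counts below are made up for illustration, not taken from the paper):

```python
def fbeta_score(tp, fp, fn, beta=2.0):
    """F-beta from confusion counts; beta > 1 favors recall over precision."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical counts: 7 leaks caught, 3 false alarms, 1 leak missed.
print(round(fbeta_score(7, 3, 1), 4))  # → 0.8333
```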