A Framework For Critical Infrastructure Monitoring Based On Deep Reinforcement Learning Approach

Kefas Yunana, I. O. Oyefolahan, S. Bashir
{"title":"基于深度强化学习方法的关键基础设施监测框架","authors":"Kefas Yunana, I. O. Oyefolahan, S. Bashir","doi":"10.1109/ITED56637.2022.10051520","DOIUrl":null,"url":null,"abstract":"Critical Infrastructure (CI) are nowadays linked with IOT devices that communicate data through networks to achieve significant collaboration. With the progress in internet connectivity, IOT has disrupt numerous aspects of CI comprising communication systems, power plants, power grid, gas pipeline, and transportation systems. As a disruptive paradigm, the IOT and Cloud computing utilizing Smart IOT devices equipped with numerous sensors and actuating capabilities play significant roles when deployed in CI surroundings with the aim of monitoring vital observable figures consisting of flow rate, temperature, pressure, and lighting situations. Over the years, oil pipeline infrastructure have been the main economic means for conveying refined oil to assembly and distribution outlets. Though damages to the pipelines in this area by exclusion have influence the normal transport of refined oil to the outlets across the country like Nigeria which has influence the stream of income and damages to the environment. Reinforcement Learning (RL) approach for infrastructure reliability monitoring have receive numerous consideration by researchers denoting that RL centered policy reveals superior operation than regular traditional control systems strategies. Many of the studies utilised mainly algorithms for environment with discrete action and observation spaces unlike others with infinite state space. This study proposed a framework for critical infrastructure monitoring based on Deep Reinforcement Learning (DRL) for oil pipeline network and also developed a pipeline network monitoring (PNM) architecture with expression of the environment dynamics as Markov Decision Process. The sample observation space data and strategy for evaluation of the framework was also presented.","PeriodicalId":246041,"journal":{"name":"2022 5th Information Technology for Education and Development (ITED)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Framework For Critical Infrastructure Monitoring Based On Deep Reinforcement Learning Approach\",\"authors\":\"Kefas Yunana, I. O. Oyefolahan, S. Bashir\",\"doi\":\"10.1109/ITED56637.2022.10051520\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Critical Infrastructure (CI) are nowadays linked with IOT devices that communicate data through networks to achieve significant collaboration. With the progress in internet connectivity, IOT has disrupt numerous aspects of CI comprising communication systems, power plants, power grid, gas pipeline, and transportation systems. As a disruptive paradigm, the IOT and Cloud computing utilizing Smart IOT devices equipped with numerous sensors and actuating capabilities play significant roles when deployed in CI surroundings with the aim of monitoring vital observable figures consisting of flow rate, temperature, pressure, and lighting situations. Over the years, oil pipeline infrastructure have been the main economic means for conveying refined oil to assembly and distribution outlets. 
Though damages to the pipelines in this area by exclusion have influence the normal transport of refined oil to the outlets across the country like Nigeria which has influence the stream of income and damages to the environment. Reinforcement Learning (RL) approach for infrastructure reliability monitoring have receive numerous consideration by researchers denoting that RL centered policy reveals superior operation than regular traditional control systems strategies. Many of the studies utilised mainly algorithms for environment with discrete action and observation spaces unlike others with infinite state space. This study proposed a framework for critical infrastructure monitoring based on Deep Reinforcement Learning (DRL) for oil pipeline network and also developed a pipeline network monitoring (PNM) architecture with expression of the environment dynamics as Markov Decision Process. The sample observation space data and strategy for evaluation of the framework was also presented.\",\"PeriodicalId\":246041,\"journal\":{\"name\":\"2022 5th Information Technology for Education and Development (ITED)\",\"volume\":\"55 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 5th Information Technology for Education and Development (ITED)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ITED56637.2022.10051520\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 5th Information Technology for Education and Development (ITED)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITED56637.2022.10051520","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Critical Infrastructure (CI) is nowadays linked with IoT devices that communicate data over networks to achieve meaningful collaboration. With the progress of internet connectivity, IoT has disrupted numerous aspects of CI, including communication systems, power plants, power grids, gas pipelines, and transportation systems. As a disruptive paradigm, IoT and cloud computing, using smart IoT devices equipped with numerous sensors and actuating capabilities, play significant roles when deployed in CI environments to monitor vital observable quantities such as flow rate, temperature, pressure, and lighting conditions. Over the years, oil pipeline infrastructure has been the main economic means of conveying refined oil to assembly and distribution outlets. However, damage to pipelines has disrupted the normal transport of refined oil to outlets across countries such as Nigeria, affecting the stream of income and harming the environment. Reinforcement Learning (RL) approaches to infrastructure reliability monitoring have received considerable attention from researchers, indicating that RL-based policies outperform conventional control system strategies. Many of these studies mainly used algorithms for environments with discrete action and observation spaces, unlike others with infinite (continuous) state spaces. This study proposes a framework for critical infrastructure monitoring based on Deep Reinforcement Learning (DRL) for an oil pipeline network and develops a pipeline network monitoring (PNM) architecture that expresses the environment dynamics as a Markov Decision Process. Sample observation space data and a strategy for evaluating the framework are also presented.
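The abstract's central technical idea is to cast pipeline network monitoring as a Markov Decision Process with a continuous observation space (e.g., flow rate, pressure, temperature) and a small set of monitoring actions. The sketch below illustrates one way such an environment could be expressed using the Gymnasium API; the variable names, operating ranges, reward values, and the toy leak dynamics are illustrative assumptions, not the authors' PNM implementation.

```python
# Minimal sketch of a pipeline-monitoring MDP with a continuous observation
# space, in the spirit of the framework described in the abstract.
# All numbers and dynamics here are assumptions for illustration only.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class PipelineMonitoringEnv(gym.Env):
    """Hypothetical pipeline-segment monitor: observe flow rate, pressure,
    and temperature; act by leaving the segment alone, flagging it for
    inspection, or shutting it down."""

    def __init__(self):
        # Continuous observation: [flow rate (m3/h), pressure (bar), temperature (C)]
        low = np.array([0.0, 0.0, -10.0], dtype=np.float32)
        high = np.array([500.0, 100.0, 80.0], dtype=np.float32)
        self.observation_space = spaces.Box(low=low, high=high, dtype=np.float32)
        # Discrete actions: 0 = no alert, 1 = inspect, 2 = shut segment
        self.action_space = spaces.Discrete(3)
        self._state = None

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # Assumed nominal operating point for the segment
        self._state = np.array([300.0, 60.0, 25.0], dtype=np.float32)
        return self._state.copy(), {}

    def step(self, action):
        # Toy dynamics: sensor noise plus a slow pressure loss that mimics
        # a developing leak in the segment.
        noise = self.np_random.normal(0.0, 1.0, size=3).astype(np.float32)
        self._state = self._state + noise
        self._state[1] -= 0.5  # gradual pressure drop
        leaking = self._state[1] < 50.0
        # Reward correct alerts, penalise false alarms and missed leaks
        if leaking:
            reward = 1.0 if action in (1, 2) else -5.0
        else:
            reward = -1.0 if action in (1, 2) else 0.1
        terminated = bool(self._state[1] < 30.0)  # segment has failed
        obs = np.clip(self._state, self.observation_space.low,
                      self.observation_space.high)
        return obs, reward, terminated, False, {}
```

A DRL agent suited to continuous observations and discrete actions (for example, a DQN- or PPO-style learner) could then be trained against such an environment, which matches the paper's point that value- or policy-based RL can handle the infinite state spaces that discrete-space algorithms do not.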