A Chinese named entity recognition method for landslide geological disasters based on deep learning
Pub Date: 2024-11-05 | DOI: 10.1016/j.engappai.2024.109537
Banghui Yang, Chunlei Zhou, Suju Li, Yuzhu Wang
Landslide Named Entity Recognition (LNER) involves extracting specific entities from unstructured Chinese landslide disaster texts, which is crucial for constructing a knowledge graph and supporting landslide prevention efforts. This study proposes a deep learning-based LNER model that uses Bidirectional Encoder Representations from Transformers (BERT) for word embeddings and integrates a Conditional Random Field (CRF) layer and projected gradient descent (PGD) adversarial training to enhance sequence labeling accuracy. The practical implications of this research lie in improving the efficiency and precision of disaster information extraction, aiding real-time decision-making and risk mitigation strategies. Experiments on the constructed dataset show that the model effectively identifies eight types of landslide entities, achieving a best F1-score of 89.7%.
{"title":"A Chinese named entity recognition method for landslide geological disasters based on deep learning","authors":"Banghui Yang , Chunlei Zhou , Suju Li , Yuzhu Wang","doi":"10.1016/j.engappai.2024.109537","DOIUrl":"10.1016/j.engappai.2024.109537","url":null,"abstract":"<div><div>Landslide Named Entity Recognition (LNER) involves extracting specific entities from Chinese unstructured landslide disaster texts, which is crucial for constructing a knowledge graph and supporting landslide prevention efforts. This study proposes a deep learning-based LNER model that utilizes Bidirectional Encoder Representations from Transformer (BERT) for word embeddings and integrates the Conditional Random Fields (CRF) algorithm and projected gradient descent (PGD) adversarial neural networks to enhance sequence labeling accuracy. The practical implications of this research lie in improving the efficiency and precision of disaster information extraction, aiding in real-time decision-making and risk mitigation strategies. Experiments on the constructed dataset show that the model effectively identifies eight types of landslide entities, achieving a highest F1 score of 89.7%.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109537"},"PeriodicalIF":7.5,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deep sequence-to-sequence model for power swing blocking of distance protection in power transmission lines
Pub Date: 2024-11-05 | DOI: 10.1016/j.engappai.2024.109538
Amin Mehdipour Birgani, Mohammadreza Shams, Mohsen Jannati, Farhad Hatami Aloghareh
As the primary protection for transmission lines, distance relays are prone to malfunction during power swings. The inability of distance relays to differentiate between power swings and short-circuit faults poses a significant risk to power system stability and can result in blackouts. In recent years, there has been increasing interest in leveraging machine learning techniques to identify various types of faults and power swings in electrical systems. However, previous works mainly focus on fault classification performed long after fault initiation, which is why they require extensive post-fault data for diagnosis. To address this challenge, this study proposes a predictive protection strategy utilizing deep learning, specifically a sequence-to-sequence model, to monitor electrical power systems continuously. The objective is to distinguish power swings from short-circuit faults with minimal reliance on post-fault data and to accurately identify short-circuit faults occurring during power swings. In the proposed approach, features are extracted from grid current signals using the Hilbert transform and empirical mode decomposition algorithms. These features are fed into the sequence-to-sequence model, which issues block/unblock commands upon confirming the presence of a power swing or of a fault during the power swing. Results from simulations conducted on an IEEE 39-bus grid in DIgSILENT and MATLAB environments demonstrate that the proposed scheme outperforms baseline methods in detecting short-circuit faults, power swings, and short-circuit faults occurring during power swings. The timely and correct operation of the proposed protection scheme contributes to the stability of transmission lines and power systems.
{"title":"A deep sequence-to-sequence model for power swing blocking of distance protection in power transmission lines","authors":"Amin Mehdipour Birgani , Mohammadreza Shams , Mohsen Jannati , Farhad Hatami Aloghareh","doi":"10.1016/j.engappai.2024.109538","DOIUrl":"10.1016/j.engappai.2024.109538","url":null,"abstract":"<div><div>As the primary protection method for transmission lines, distance relays are prone to malfunction during power swings. In fact, the inability of distance relays to differentiate between power swings and short-circuit faults imposes a significant risk to power system stability that can result in blackouts. In recent years, there has been increasing interest in leveraging machine learning techniques to identify various types of faults and power swings in electrical systems. However, previous works mainly focus on fault classification, which is mostly done after a long period from the moment of fault initiation. This is the reason for requiring extensive post-fault data for diagnosis. To address this challenge, this study proposes a predictive protection strategy utilizing deep learning methodologies, specifically a sequence-to-sequence model, to monitor electrical power systems continuously. The objective is to effectively detect power swings from short-circuit faults with minimal reliance on post-fault data and accurately identify short-circuit faults during power swings. In the proposed approach, features are extracted from grid current signals using the Hilbert transform and empirical mode decomposition algorithms. These features are then fed into the sequence-to-sequence model, which issues block/unblock commands upon confirming the presence of a power swing or fault during the power swing. Results from various simulations conducted on an IEEE 39-bus grid in DIgSILENT and MATLAB environments demonstrate that the proposed scheme outperforms baseline methods in the detection of short-circuit faults, power swings, and short-circuit faults occurring during the power swings. The timely and correct operation of the proposed protection scheme contributes to the stability of transmission lines and power systems.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109538"},"PeriodicalIF":7.5,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A distribution linguistic group decision-making method considering twin multiplicative data envelopment analysis regret-rejoice cross-efficiency
Pub Date: 2024-11-05 | DOI: 10.1016/j.engappai.2024.109592
Jinpei Liu, Tianqi Shui, Longlong Shao, Feifei Jin, Ligang Zhou
Traditional approaches to group decision-making (GDM) often assume that decision-makers (DMs) are perfectly rational, which neglects the psychological attitudes exhibited by DMs during the decision-making process. To address this issue, this paper designs a novel method for GDM with multiplicative distribution linguistic preference relations (MDLPRs), which incorporates twin multiplicative data envelopment analysis (TMDEA) regret-rejoice cross-efficiency models and a weighting model with individual tolerance. Firstly, we provide a clear definition of MDLPRs based on multiplicative linguistic terms that capture the asymmetrical qualitative recognition of DMs. Then, a weighting model is constructed to obtain the DMs' weights. This model assumes that DMs are boundedly rational and have a certain tolerance level for group consensus measurement. Subsequently, TMDEA cross-efficiency models are developed from different perspectives. On this basis, we further propose TMDEA regret-rejoice cross-efficiency models that consider the psychological risks of DMs. Furthermore, the method for GDM based on TMDEA regret-rejoice cross-efficiency models and individual tolerance with MDLPRs is given. Finally, we present a case study on the selection of the Ya'an ecological monitoring station in China to test the validity and applicability of the proposed method. The merits and robustness of the constructed method are highlighted by sensitivity analysis and comparative analysis.
{"title":"A distribution linguistic group decision-making method considering twin multiplicative data envelopment analysis regret-rejoice cross-efficiency","authors":"Jinpei Liu , Tianqi Shui , Longlong Shao , Feifei Jin , Ligang Zhou","doi":"10.1016/j.engappai.2024.109592","DOIUrl":"10.1016/j.engappai.2024.109592","url":null,"abstract":"<div><div>Traditional approaches to group decision-making (GDM) often assume that decision-makers (DMs) are perfectly rational, which neglects the psychological attitudes exhibited by DMs during the decision-making process. To address this issue, this paper designs a novel method for GDM with multiplicative distribution linguistic preference relations (MDLPRs), which incorporates twin multiplicative data envelopment analysis (TMDEA) regret-rejoice cross-efficiency models and a weighting model with individual tolerance. Firstly, we provide a clear definition of MDLPRs based on multiplicative linguistic terms that capture the asymmetrical qualitative recognition of DMs. Then, a weighting model is constructed to obtain DM's weights. This model assumes that DMs are boundedly rational and have a certain tolerance level for group consensus measurement. Subsequently, TMDEA cross-efficiency models are developed based on different perspectives. On this basis, we further propose TMDEA regret-rejoice cross-efficiency models that consider the psychological risks of DMs. Furthermore, the method for GDM based on TMDEA regret-rejoice cross-efficiency models and individual tolerance with MDLPRs is given. Finally, we present a case study of the selection of the Ya'an ecological monitoring station in China to test the validity and applicability of the proposed method. The merits and robustness of the constructed method are highlighted by sensitive analysis and comparative analysis.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109592"},"PeriodicalIF":7.5,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HK-MOEA/D: A historical knowledge-guided resource allocation for decomposition multiobjective optimization
Pub Date: 2024-11-05 | DOI: 10.1016/j.engappai.2024.109482
Wei Li, Xiaolong Zeng, Ying Huang, Yiu-ming Cheung
Decomposition-based multiobjective evolutionary algorithms constitute one of the prevailing algorithmic frameworks for multiobjective optimization. This framework distributes the same amount of evolutionary computing resources to each subproblem, ignoring the varying contributions of different subproblems to the population during evolution. Resource allocation (RA) strategies have been proposed to dynamically allocate appropriate evolutionary computational resources to different subproblems, with the aim of addressing this limitation. However, the majority of RA strategies assess subproblems inefficiently and inaccurately, thus generating unsuitable algorithmic results. To address this problem, this paper proposes a decomposition-based multiobjective evolutionary algorithm (HK-MOEA/D). The HK-MOEA/D algorithm uses a historical knowledge-guided RA strategy to evaluate each subproblem's evolvability, allocates evolutionary computational resources based on the evaluation value, and adaptively selects genetic operators based on the evaluation value to either help the subproblem converge or escape a local optimum. Additionally, a density-first individual selection mechanism for the external archive is utilized to improve the diversity of the algorithm. An external archive update mechanism based on θ-dominance is also used to store solutions that are truly worth keeping, guiding the evaluation of subproblem evolvability. The efficacy of the proposed algorithm is evaluated by comparing it with seven state-of-the-art algorithms on three types of benchmark functions and three types of real-world application problems. The experimental results show that HK-MOEA/D accurately evaluates the evolvability of the subproblems and displays reliable performance on a variety of complex Pareto front optimization problems.
{"title":"HK-MOEA/D: A historical knowledge-guided resource allocation for decomposition multiobjective optimization","authors":"Wei Li , Xiaolong Zeng , Ying Huang , Yiu-ming Cheung","doi":"10.1016/j.engappai.2024.109482","DOIUrl":"10.1016/j.engappai.2024.109482","url":null,"abstract":"<div><div>Decomposition-based multiobjective evolutionary algorithms is one of the prevailing algorithmic frameworks for multiobjective optimization. This framework distributes the same amount of evolutionary computing resources to each subproblems, but it ignores the variable contributions of different subproblems to population during the evolution. Resource allocation strategies (RAs) have been proposed to dynamically allocate appropriate evolutionary computational resources to different subproblems, with the aim of addressing this limitation. However, the majority of RA strategies result in inefficiencies and mistakes when performing subproblem assessment, thus generating unsuitable algorithmic results. To address this problem, this paper proposes a decomposition-based multiobjective evolutionary algorithm (HK-MOEA/D). The HK-MOEA/D algorithm uses a historical knowledge-guided RA strategy to evaluate the subproblem’s evolvability, allocate evolutionary computational resources based on the evaluation value, and adaptively select genetic operators based on the evaluation value to either help the subproblem converge or move away from a local optimum. Additionally, the density-first individual selection mechanism of the external archive is utilized to improve the diversity of the algorithm. An external archive update mechanism based on <span><math><mi>θ</mi></math></span>-dominance is also used to store solutions that are truly worth keeping to guide the evaluation of subproblem evolvability. The efficacy of the proposed algorithm is evaluated by comparing it with seven state-of-the-art algorithms on three types of benchmark functions and three types of real-world application problems. The experimental results show that HK-MOEA/D accurately evaluates the evolvability of the subproblems and displays reliable performance in a variety of complex Pareto front optimization problems.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109482"},"PeriodicalIF":7.5,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compact representation for memory-efficient storage of images using genetic algorithm-guided key pixel selection
Pub Date: 2024-11-04 | DOI: 10.1016/j.engappai.2024.109540
Samir Malakar, Nirwan Banerjee, Dilip K. Prasad
In the past few years, we have observed rapid growth in digital content. Even in the biological domain, the arrival of microscopic and nanoscopic images and videos captured for biological investigations increases the need for storage space. Hence, storing these data in a storage-efficient manner is a pressing need. In this work, we introduce a compact image representation technique, with an eye to preserving shape, that can shrink the memory required for storage. The compact image representation differs from image compression since it does not include any encoding mechanism. Rather, the idea is that this mechanism stores the positions of key pixels, and when required, the original image can be regenerated. A genetic algorithm is used to select key pixels, while a Gaussian kernel performs the reconstruction task with the help of the positions of the selected key pixels. The model is tested on four different datasets. The proposed technique shrinks the memory requirement by 87% to 98% when evaluated using the bit reduction rate. However, the quality of the reconstructed images is somewhat lower when evaluated using metrics such as the structural similarity index (ranging between 0.81 and 0.94) or root mean squared error (ranging between 0.06 and 0.08). To investigate the impact of this quality reduction on real-life applications, we performed image classification using reconstructed samples and found a 0.13% to 2.30% reduction in classification accuracy compared with classification using the original samples. The proposed model's performance is comparable to similar state-of-the-art solutions.
{"title":"Compact representation for memory-efficient storage of images using genetic algorithm-guided key pixel selection","authors":"Samir Malakar, Nirwan Banerjee, Dilip K. Prasad","doi":"10.1016/j.engappai.2024.109540","DOIUrl":"10.1016/j.engappai.2024.109540","url":null,"abstract":"<div><div>In the past few years, we have observed rapid growth in digital content. Even in the biological domain, the arrival of microscopic and nanoscopic images and videos captured for biological investigations increases the need for space to store them. Hence, storing these data in a storage-efficient manner is a pressing need. In this work, we have introduced a compact image representation technique with an eye on preserving the shape that can shrink the memory requirement to store. The compact image representation is different from image compression since it does not include any encoding mechanism. Rather, the idea is that this mechanism stores the positions of key pixels, and when required, the original image can be regenerated. The genetic algorithm is used to select key pixels, while the Gaussian kernel performs the reconstruction task with the help of the positions of the selected key pixels. The model is tested on four different datasets. The proposed technique shrinks the memory requirement by 87% to 98% while evaluated using the bit reduction rate. However, the reconstructed images’ quality is a bit low when evaluated using metrics like structural similarity index (ranges between 0.81 to 0.94), or root means squared error (ranges between 0.06 to 0.08). To investigate the impact of quality reduction in reconstructed images in real-life applications, we performed image classification using reconstructed samples and found 0.13% to 2.30% classification accuracy reduction compared to when classification is done using original samples. The proposed model’s performance is comparable to state-of-the-art’s similar solutions.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109540"},"PeriodicalIF":7.5,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Color-aware fusion of nighttime infrared and visible images
Pub Date: 2024-11-04 | DOI: 10.1016/j.engappai.2024.109521
Jiaxin Yao, Yongqiang Zhao, Yuanyang Bu, Seong G. Kong, Xun Zhang
Pixel-level fusion of visible and infrared images has demonstrated promise in enhancing information representation. However, nighttime image fusion remains challenging due to low and uneven lighting. Existing fusion methods neglect the preservation of color-related information at night, resulting in unsatisfactory outcomes with insufficient brightness. This paper presents a novel color image fusion framework to prevent color distortion, thus generating results more aligned with human perception. Firstly, we design an image fusion network to retain color information from visible images under low-light conditions. Secondly, we incorporate mature low-light enhancement technology into the network as a flexible component to produce fusion results under normal illumination. The training process is carefully designed to address potential issues of overexposure or noise amplification. Finally, we utilize knowledge distillation to create a lightweight end-to-end network that directly generates fusion results under normal lighting conditions from pairs of low-light images. Experimental results demonstrate that our proposed framework outperforms existing methods in nighttime scenarios.
{"title":"Color-aware fusion of nighttime infrared and visible images","authors":"Jiaxin Yao , Yongqiang Zhao , Yuanyang Bu , Seong G. Kong , Xun Zhang","doi":"10.1016/j.engappai.2024.109521","DOIUrl":"10.1016/j.engappai.2024.109521","url":null,"abstract":"<div><div>Pixel-level fusion of visible and infrared images has demonstrated promise in enhancing information representation. However, nighttime image fusion remains challenging due to low and uneven lighting. Existing fusion methods neglect the preservation of color-related information at night, resulting in unsatisfactory outcomes with insufficient brightness. This paper presents a novel color image fusion framework to prevent color distortion, thus generating results more aligned with human perception. Firstly, we design an image fusion network to retain color information from visible images under low-light conditions. Secondly, we incorporate mature low-light enhancement technology into the network as a flexible component to produce fusion results under normal illumination. The training process is carefully designed to address potential issues of overexposure or noise amplification. Finally, we utilize knowledge distillation to create a lightweight end-to-end network that directly generates fusion results under normal lighting conditions from pairs of low-light images. Experimental results demonstrate that our proposed framework outperforms existing methods in nighttime scenarios.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109521"},"PeriodicalIF":7.5,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combining model-based and learning-based anomaly detection schemes for increased performance and safety of aircraft braking controllers
Pub Date: 2024-11-04 | DOI: 10.1016/j.engappai.2024.109551
José Joaquín Mendoza Lopetegui, Mara Tanelli
In aircraft, the braking system is a safety-critical and heavily used component of the landing gear, prone to significant wear. Anomalies arising in the wear dynamics can degrade the performance of the braking system and compromise the safety of ground handling maneuvers. In this work, we tackle the problem of detecting incipient anomalies in aircraft brakes in a tightly coupled implementation with the Brake Control Unit (BCU). Two complementary approaches are presented. The first is an observer-based architecture designed on the longitudinal aircraft dynamics that returns physically interpretable outputs connected to the wear process and allows braking performance to be improved online. The second is an end-to-end convolutional autoencoder-based architecture that returns an anomaly score computed on data collected by the BCU, with an inherent robustness to modeling uncertainty that the model-based approach lacks. A combined architecture that exploits the features of both the model-based and learning-based approaches is proposed and shown to blend the two optimally. The approaches are evaluated in a MATLAB/Simulink multibody simulation environment able to replicate the braking actuator wear dynamics, demonstrating remarkable performance in anomaly detection, anti-skid control, and safety improvement.
{"title":"Combining model-based and learning-based anomaly detection schemes for increased performance and safety of aircraft braking controllers","authors":"José Joaquín Mendoza Lopetegui, Mara Tanelli","doi":"10.1016/j.engappai.2024.109551","DOIUrl":"10.1016/j.engappai.2024.109551","url":null,"abstract":"<div><div>In aircraft, the braking system is a safety-critical and heavily used component of the landing gear, prone to significant wear. Anomalies arising in the wear dynamics can degrade the performance of the braking system and compromise the safety of ground handling maneuvers. In this work, we tackle the problem of detecting incipient anomalies in aircraft brakes in a tightly coupled implementation with the Brake Control Unit (BCU). Two complementary approaches are presented. The first one is an observer-based architecture designed on the longitudinal aircraft dynamics that returns physically interpretable outputs connected to the wear process and allows us to improve braking performance online. The second one is an end-to-end convolutional autoencoder-based architecture that returns an anomaly score computed on data collected by the BCU with inherent robustness to modeling uncertainty, which the model-based one does not. A combined architecture that allows one to exploit the features of both model-based and learning-based approaches is proposed, which shows its capability of optimally blending the two. The approaches are evaluated in a MATLAB/Simulink multibody simulation environment that is able to replicate the braking actuator wear dynamics, demonstrating remarkable performances in anomaly detection, anti-skid control performance, and safety improvement.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109551"},"PeriodicalIF":7.5,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Picture fuzzy complex proportional assessment approach with step-wise weight assessment ratio analysis and criteria importance through intercriteria correlation
Pub Date: 2024-11-04 | DOI: 10.1016/j.engappai.2024.109554
Ayesha Razzaq, Zareen A. Khan, Khalid Naeem, Muhammad Riaz
The concept of the picture fuzzy set (PiFS) significantly enhances the multi-criteria decision-making (MCDM) process by incorporating membership value (MV), non-membership value (NMV), and a neutral component. PiFS extends the capabilities of traditional fuzzy sets (FSs), intuitionistic fuzzy sets (IFSs), and other fuzzy models. This paper introduces a novel MCDM approach, the picture fuzzy SWARA-CRITIC-COPRAS (PiF-SCC) method, specifically designed to assist decision-makers (DMs) in evaluating and selecting dynamic digital marketing (DDM) technologies within PiFS settings. The proposed method integrates the strengths of PiFS with step-wise weight assessment ratio analysis (SWARA), criteria importance through intercriteria correlation (CRITIC), and complex proportional assessment (COPRAS), aiming to improve the precision and effectiveness of technology evaluations. To validate the approach, a case study is conducted on DDM technology assessment within a specific business context. The PiF-SCC technique is applied to rank technological options using linguistic terms (LTs), PiFS numbers, an accuracy function (AF), and a score function (SF). Additionally, a comprehensive sensitivity analysis is performed to evaluate the robustness of the proposed method under different input scenarios and uncertainties. A thorough comparison with existing techniques is also provided, demonstrating the superior decision-making capability of the new approach, which leads to more accurate and dependable technology selection results. The manuscript also discusses marginal implications and limitations, along with potential future research directions to further enhance the applicability and effectiveness of the proposed approach.
{"title":"Picture fuzzy complex proportional assessment approach with step-wise weight assessment ratio analysis and criteria importance through intercriteria correlation","authors":"Ayesha Razzaq , Zareen A. Khan , Khalid Naeem , Muhammad Riaz","doi":"10.1016/j.engappai.2024.109554","DOIUrl":"10.1016/j.engappai.2024.109554","url":null,"abstract":"<div><div>The concept of the picture fuzzy set (PiFS) significantly enhances the multi-criteria decision-making (MCDM) process by incorporating membership value (MV), non-membership value (NMV), and a neutral component. PiFS extends the capabilities of traditional fuzzy sets (FSs), intuitionistic fuzzy sets (IFSs), and other fuzzy models. This paper introduces a novel MCDM approach, the picture fuzzy SWARA-CRITIC-COPRAS (PiF-SCC) method, specifically designed to assist decision-makers (DMs) in evaluating and selecting dynamic digital marketing (DDM) technologies within PiFS settings. The proposed method integrates the strengths of PiFS with step-wise weight assessment ratio analysis (SWARA), criteria importance through intercriteria correlation (CRITIC), and complex proportional assessment (COPRAS), aiming to improve the precision and effectiveness of technology evaluations. To validate the approach, a case study is conducted on DDM technology assessment within a specific business context. The PiF-SCC technique is applied to rank technological options using linguistic terms (LTs), PiFS numbers, an accuracy function (AF), and a score function (SF). Additionally, a comprehensive sensitivity analysis is performed to evaluate the robustness of the proposed method under different input scenarios and uncertainties. A thorough comparison with existing techniques is also provided, demonstrating the superior decision-making capability of the new approach, which leads to more accurate and dependable technology selection results. The manuscript also discusses marginal implications and limitations, along with potential future research directions to further enhance the applicability and effectiveness of the proposed approach.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109554"},"PeriodicalIF":7.5,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigation of hybrid modeling and its transferability in building load prediction used for district heating systems
Pub Date: 2024-11-04 | DOI: 10.1016/j.engappai.2024.109544
Ning Zhang, Wei Zhong, Xiaojie Lin, Liuliu Du-Ikonen, Tianyue Qiu
In district heating systems, the historical operation data of the buildings in the served areas may be partially or entirely missing. Traditional data-driven models struggle to predict the ground truth because no historical data are available for model training. Physics-based load calculation methods, however, are time-consuming and suffer from low accuracy. This paper investigates several hybrid models that integrate a data-driven model and physics-based models through different fusion methods. The physics-based models calculate the envelope load and the infiltration load based on Fourier's law and the grand canonical ensemble theory, respectively. After load processing, feature fusion, and residual connection, the best advanced hybrid models produce 21.35%, 16.35%, and 12.73% better prediction results than the data-driven model. Moreover, the advanced hybrid models exhibit strong transferability across all the data-quantity groups. In terms of practical application, the advanced hybrid models could be deployed with effective generalization in limited-data scenarios and robust transfer capabilities. The best hybrid model delivers the highest performance and reduces total training cost while retaining strong transferability.
{"title":"Investigation of hybrid modeling and its transferability in building load prediction used for district heating systems","authors":"Ning Zhang , Wei Zhong , Xiaojie Lin , Liuliu Du-Ikonen , Tianyue Qiu","doi":"10.1016/j.engappai.2024.109544","DOIUrl":"10.1016/j.engappai.2024.109544","url":null,"abstract":"<div><div>In the district heating systems, the historical operation data of the buildings in those areas would be partially or entirely missing. The traditional data-driven model is hard to predict the ground truth results because the historical data is not available for model training. However, utilizing the physics-based methods for load calculation takes a long time to process and encounters low accuracy issues. This paper investigates several hybrid models that integrate the data-driven model and the physics-based models with different fusion methods. The physics-based models calculate envelope load and infiltration load, based on Fourier's law and the grand canonical ensemble theory, respectively. After undergoing load processing, features fusion, and residual connection, the best advanced hybrid models generate 21.35%, 16.35%, and 12.73% better prediction results compared with the data-driven model. Moreover, the advanced hybride models also perform strong transferability across all the data quantity groups. In terms of practical application, the advanced hybrid models could be deployed with effective generalization in limited data scenarios and robust transfer capabilities. The selected best model constructed by hybrid modeling displays the highest performance and saves the total training costs with strong transferability.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109544"},"PeriodicalIF":7.5,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deep learning ensemble approach for malware detection in Internet of Things utilizing Explainable Artificial Intelligence
Pub Date: 2024-11-04 | DOI: 10.1016/j.engappai.2024.109560
Saksham Mittal, Mohammad Wazid, Devesh Pratap Singh, Ashok Kumar Das, M. Shamim Hossain
The Internet of Things (IoT) has become popular with the rise of digitization and automation. It is deployed in various applications, e.g., smart homes, smart agriculture, smart transportation, smart healthcare, and industrial monitoring. In an IoT network, many IoT devices communicate with servers, or users access IoT devices through an open channel via a certain exchange of messages. Besides providing many benefits such as efficiency, automation, and convenience, IoT presents significant security challenges due to the lack of proper standard security measures. Thus, malicious actors may be able to infect the network with malware. They may launch destructive attacks with the goal of stealing data or damaging the systems' resources. This can be mitigated by introducing intrusion detection and prevention mechanisms into the network. An intelligent intrusion detection system is required to put preventative measures in place for secure communication and a malware-free network. In this article, we propose a deep learning-based ensemble approach for IoT malware attack detection (in short, DLEX-IMD), trained and tested on benchmark datasets. Important measures, including accuracy, precision, recall, and F1-score, are used to evaluate the performance of the proposed DLEX-IMD. The behavior of the proposed scheme is explained using the benchmark Explainable Artificial Intelligence (AI) method LIME (Local Interpretable Model-Agnostic Explanations), which justifies the reliability of the model's training. DLEX-IMD is also compared with a range of closely related existing schemes and shows better performance, with 99.96% accuracy and an F1-score of 0.999.
{"title":"A deep learning ensemble approach for malware detection in Internet of Things utilizing Explainable Artificial Intelligence","authors":"Saksham Mittal , Mohammad Wazid , Devesh Pratap Singh , Ashok Kumar Das , M. Shamim Hossain","doi":"10.1016/j.engappai.2024.109560","DOIUrl":"10.1016/j.engappai.2024.109560","url":null,"abstract":"<div><div>The Internet of Things (IoT) has been popularized these days due to digitization and automation. It is deployed in various applications, i.e., smart homes, smart agriculture, smart transportation, smart healthcare, and industrial monitoring. In an IoT network, many IoT devices communicate with servers, or users access IoT devices through an open channel via a certain exchange of messages. Besides providing many benefits like efficiency, automation, and convenience, IoT presents significant security challenges due to a lack of proper standard security measures. Thus, malicious actors may be able to infect the network with malware. They may launch destructive attacks with the goal of stealing data or causing damage to the systems’ resources. This can be mitigated by introducing intrusion detection and prevention mechanisms in the network. An intelligent intrusion detection system is required to put preventative measures in place for secure communication and a malware-free network. In this article, we propose a deep learning based ensemble approach for IoT malware attack detection (in short, we call it as DLEX-IMD) trained and tested against benchmark datasets. The important measures, including accuracy, precision, recall, and F1-score, are used to evaluate the performance of the proposed DLEX-IMD. The performance of the proposed scheme is explained utilizing benchmark Explainable Artificial Intelligence (AI) method–LIME (Local Interpretable Model-Agnostic Explanations), which justifies the reliability of the proposed model training. The DLEX-IMD is also compared with a range of other closely related existing schemes and has shown better performance than those schemes with 99.96% accuracy and F1-score of 0.999.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109560"},"PeriodicalIF":7.5,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}