The performance of heterogeneous sensor networks is enhanced by high-energy heterogeneous nodes, and determining the number and deployment of such nodes is a significant research issue. This paper presents a heterogeneous node configuration algorithm that can be used for overall network planning before heterogeneous nodes are deployed. Factors such as network performance and economic cost are considered comprehensively and integrated into a single index using the entropy weighting method. The proportions of the different indicators are then determined, and a formula for the required number of heterogeneous nodes under various network conditions is derived from parameters such as the network area size, the node communication threshold distance, and the number of common nodes. Experimental results demonstrate that the proposed algorithm not only reduces network costs but also enhances overall network performance.
Qian Sun, Xiangyue Meng, Xiao Peng, Zhiyao Zhao, Jiping Xu, Huiyan Zhang, Li Wang, Jiabin Yu, and Xianglan Guo, "Node Configuration Algorithm of Energy Heterogeneous Sensor Networks," International Journal of Intelligent Systems, vol. 2025, no. 1, 2025-01-03, doi: 10.1155/int/3949923.
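The entropy weighting method the abstract relies on is a standard objective-weighting technique: indicators whose values are more dispersed across alternatives carry more information and receive larger weights. A minimal sketch of the textbook formulation follows; the three-configuration indicator matrix is invented for illustration and is not the paper's data.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: more dispersed indicators get larger weights.

    X: (n_samples, n_indicators) matrix of non-negative indicator values.
    Returns a weight vector that sums to 1.
    """
    # Column-normalise so each indicator column forms a probability distribution.
    P = X / X.sum(axis=0)
    n = X.shape[0]
    # Entropy of each indicator, with 0*log(0) treated as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(n)
    # Degree of diversification determines the weights.
    d = 1.0 - e
    return d / d.sum()

# Hypothetical example: three candidate network configurations scored on
# coverage ratio, network lifetime, and cost (made-up numbers).
X = np.array([[0.9, 200.0, 5.0],
              [0.7, 180.0, 3.0],
              [0.8, 260.0, 4.0]])
w = entropy_weights(X)
composite = X @ w   # single fused index per configuration
```

The fused `composite` score plays the role of the paper's single integrated index combining performance and cost.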
The sixth-generation (6G) wireless communication system envisions global coverage, all spectra, and full applications, which correspondingly creates many new communication scenarios. As the foundation of 6G communication system design, network planning, and optimization, more intelligent scenario identification algorithms are needed in wireless channel modeling to automatically match suitable parameters to various scenarios. Using channel statistics and the efficient channel attention (ECA) mechanism, we propose an improved residual network (ResNet) to identify scenarios in the 6G space–air–ground–sea framework. Datasets from both channel measurements and 6G pervasive channel model (6GPCM) simulations are collected to establish a scenario channel characteristic database, including the numbered scenarios and channel statistical properties such as root mean square (RMS) delay spread (DS), RMS angle spread (AS), and stationary distance/time/bandwidth. During training and verification, the proposed algorithm is optimized for 29 scenarios, and the identification accuracy of the proposed ECA–ResNet is higher than that of a convolutional neural network (CNN) and a recurrent neural network (RNN). Finally, the cumulative distribution functions (CDFs) of RMS AS and RMS DS for the interoffice main road, office outdoor, office, and industrial Internet of Things (IIoT) scenarios are verified against the measurement data.
Wenqi Zhou, Cheng-Xiang Wang, Chen Huang, Rui Feng, Zhen Lv, Zhongyu Qian, and Shuyi Ding, "An ECA–ResNet-Based Intelligent Communication Scenario Identification Algorithm for 6G Wireless Communications," International Journal of Intelligent Systems, vol. 2024, no. 1, 2024-12-30, doi: 10.1155/int/8860822.
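The ECA mechanism this work builds on computes per-channel attention from globally pooled channel descriptors using a small 1-D convolution, avoiding the dimensionality reduction of squeeze-and-excitation blocks. A minimal NumPy sketch of that idea follows; the uniform kernel stands in for the learned convolution weights and is not the paper's trained model.

```python
import numpy as np

def eca(feature_map, k=3):
    """Efficient channel attention (ECA) sketch on a (C, H, W) array.

    A 1-D convolution of size k over the channel-wise averages yields
    per-channel attention weights without dimensionality reduction.
    """
    C = feature_map.shape[0]
    # Squeeze: global average pooling per channel.
    y = feature_map.mean(axis=(1, 2))
    # 1-D conv across neighbouring channels (uniform stand-in kernel,
    # zero padding); a trained ECA layer would learn these k weights.
    kernel = np.ones(k) / k
    pad = k // 2
    y_pad = np.pad(y, pad)
    conv = np.array([np.dot(y_pad[i:i + k], kernel) for i in range(C)])
    # Excite: sigmoid gate, then rescale each channel.
    w = 1.0 / (1.0 + np.exp(-conv))
    return feature_map * w[:, None, None]

# Toy feature map: 2 channels of 3x3 activations.
x = np.arange(18, dtype=float).reshape(2, 3, 3)
out = eca(x)
```

In the paper's setting such a module would gate ResNet feature channels before scenario classification.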
Ultrasound imaging is a widely adopted method for noninvasive examination of internal structures, valued for its cost-effectiveness, real-time imaging capability, and absence of ionizing radiation. Its applications, including peripheral nerve blocking (PNB) procedures, benefit from the direct visualization of nerve structures. However, the inherent distortions in ultrasound images, arising from echo perturbations and speckle noise, make accurate localization of nerve structures challenging even for experienced practitioners. Computational techniques, particularly Bayesian inference, offer a promising solution by providing uncertainty estimates alongside model predictions. This article focuses on developing and implementing an optimal Bayesian U-Net for nerve segmentation in ultrasound images, delivered through a user-friendly application. Bayesian convolution layers and the Monte Carlo dropout method were the two Bayesian techniques explored and compared, with a specific emphasis on supporting medical professionals' decision-making. The research revealed that Monte Carlo dropout yields the best results for Bayesian inference: the Bayesian model achieves an average binary accuracy of 98.99%, an average Dice coefficient of 0.72, and an average IoU of 0.57 when benchmarked against a typical U-Net. The culmination of this work is an application designed for practical use by medical professionals, providing an intuitive interface for Bayesian nerve segmentation in ultrasound images. This research contributes to the broader understanding of Bayesian techniques in medical imaging models and offers a comprehensive solution that combines advanced methodology with user-friendly accessibility.
Taryn Michael and Ibidun Christiana Obagbuwa, "Nerve Segmentation of Ultrasound Images Bayesian U-Net Models," International Journal of Intelligent Systems, vol. 2024, no. 1, 2024-12-27, doi: 10.1155/int/6114741.
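Monte Carlo dropout, the technique the abstract identifies as best, keeps dropout active at inference and aggregates several stochastic forward passes: the mean is the prediction and the spread is an uncertainty estimate. A toy sketch follows; the single-layer `forward` is a made-up stand-in for a U-Net, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(model, x, T=50, p=0.5):
    """Monte Carlo dropout: run T stochastic passes with dropout left on;
    return the mean prediction and a per-output uncertainty (std)."""
    preds = np.stack([model(x, rng, p) for _ in range(T)])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy single-layer "model" standing in for a segmentation head.
W = rng.normal(size=(4, 4))

def forward(x, rng, p):
    h = np.maximum(x @ W, 0.0)
    mask = rng.random(h.shape) > p        # dropout stays active at test time
    return (h * mask / (1.0 - p)).sum(axis=-1)

mean, std = mc_dropout_predict(forward, rng.normal(size=(8, 4)))
```

High `std` regions are where a practitioner would treat the predicted nerve boundary with caution.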
Íñigo Elguea-Aguinaco, Ibai Inziarte-Hidalgo, Simon Bøgh, Nestor Arana-Arexolaleiba
Effective motion planning is an indispensable prerequisite for the optimal performance of robotic manipulators in any task. In this regard, research on and application of reinforcement learning for motion planning in robotic manipulators have gained great relevance in recent years. The ability of reinforcement learning agents to adapt to variable environments, especially those featuring dynamic obstacles, has propelled their increasing application in this domain. Nevertheless, a clear need remains for a resource that critically examines the progress, challenges, and future directions of this machine learning control technique in motion planning. This article undertakes a comprehensive review of the reinforcement learning landscape, offering a retrospective analysis of its application in motion planning from 2018 to the present. The exploration extends to trends in reinforcement learning for serial manipulators and motion planning, as well as the technological challenges this control technique currently presents. The overarching objective of this review is to serve as a valuable resource for the robotics community, facilitating the ongoing development of systems controlled by reinforcement learning. By delving into the primary challenges intrinsic to this technology, the review seeks to enhance understanding of reinforcement learning's role in motion planning and provides insights that may suggest future research directions in this domain.
Íñigo Elguea-Aguinaco, Ibai Inziarte-Hidalgo, Simon Bøgh, and Nestor Arana-Arexolaleiba, "A Review on Reinforcement Learning for Motion Planning of Robotic Manipulators," International Journal of Intelligent Systems, vol. 2024, no. 1, 2024-12-24, doi: 10.1155/int/1636497.
The emergence of digital twin technology offers a promising solution to the limitations of traditional methods in the early diagnosis and accurate propagation analysis of flight ground service delays. However, digital twin applications in civil aviation remain at the relatively low L2 maturity level, which covers physical assets, operational data, and maintenance planning at airports but fails to integrate the flight ground operation mechanism with real-time data, making timely delay diagnosis difficult. Existing simulation models are likewise limited to offline simulation and cannot ingest real-time data to simulate from intermediate process states. In this work, we developed an advanced L3-level airport digital twin system for delay diagnosis and propagation analysis of flight ground service processes, centred on real-time data-driven simulation models and machine learning to meet timeliness and precision requirements. First, we used the Unity3D platform to construct static three-dimensional models of flight ground service objects on the airport cloud server; by parsing their behavioral state interfaces and mapping real-time dynamic data from the airport sensing and business systems, we achieved accurate visualization of the airport's dynamic operational processes. Then, a vehicle delay tree-based Bayesian diagnostic model was built into the digital twin system to analyze the relationships between multiple flights and service processes, enabling proactive diagnosis of the operation status and early delay warnings. To improve the accuracy of propagation analysis, we proposed a "breakpoint" simulation method that starts real-time simulation from an intermediate moment, allowing flight ground service delays to be inferred from the warning moment onward. In addition, two delay tracing and propagation algorithms were proposed to identify delays and investigate propagation paths. Leveraging real-time operational information, our approach provides valuable feedback for decision-making, empowering airport managers to formulate precise optimization strategies. Experiments on real-world airport data validate the effectiveness of the proposed method and yield practical recommendations for reducing aircraft delays and improving airport operation efficiency.
Chang Liu, YuanYuan Zhang, YanRu Chen, ShiJia Liu, ShunFang Hu, Qian Luo, and LiangYin Chen, "Digital Twin-Enabled Delay Diagnosis Traceability and Propagation Process for Airport Flight Ground Service," International Journal of Intelligent Systems, vol. 2024, no. 1, 2024-12-24, doi: 10.1155/int/7458758.
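The Bayesian diagnostic step described in the abstract ultimately rests on Bayes' rule: updating the probability that a service process is delayed given an observed symptom. A minimal worked example with hypothetical numbers follows (the 10% base rate and the 80%/5% likelihoods are invented for illustration, not the paper's parameters).

```python
def bayes_delay_posterior(prior, p_obs_given_delay, p_obs_given_ok):
    """P(service delayed | observed symptom) via Bayes' rule."""
    num = p_obs_given_delay * prior
    return num / (num + p_obs_given_ok * (1.0 - prior))

# Hypothetical numbers: a 10% base turnaround delay rate; a late fuelling
# start is observed in 80% of delayed turnarounds but only 5% of normal ones.
post = bayes_delay_posterior(0.10, 0.80, 0.05)
```

The observation lifts the delay probability from 10% to 64%, which is the kind of early-warning signal a delay tree node would propagate to dependent service processes.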
Shuang Ran, Wei Zhong, Lin Ma, Danting Duan, Long Ye, Qin Zhang
Music is an important medium for emotional expression, yet traditional manual composition requires solid knowledge of music theory; a simple but accurate way to express personal emotions in music creation is therefore needed. In this paper, we propose and implement an EEG signal-driven real-time emotional music generation system for generating exclusive emotional music. To achieve real-time emotion recognition, the system adapts a model to a new user through short calibration. Both the recognized emotional state and music structure features are then fed into the network as conditional inputs to generate exclusive music consistent with the user's actual emotional expression. In the real-time emotion recognition module, we propose an optimized style transfer mapping algorithm based on simplified parameter optimization and introduce an instance selection strategy into the proposed method. The module obtains and calibrates a suitable model for a new user in a short time, achieving real-time emotion recognition: accuracies reach 86.78% and 77.68%, with computing times of only 7 s and 10 s, on the public SEED dataset and a self-collected dataset, respectively. In the music generation module, we propose an emotional music generation network based on structure features and embed it into our system; this removes the existing systems' reliance on third-party software and makes the emotional expression of the generated music controllably consistent with the user's actual emotion. The experimental results show that the proposed system can generate fluent, complete, and exclusive music consistent with the user's real-time emotion recognition results.
Shuang Ran, Wei Zhong, Lin Ma, Danting Duan, Long Ye, and Qin Zhang, "Mind to Music: An EEG Signal-Driven Real-Time Emotional Music Generation System," International Journal of Intelligent Systems, vol. 2024, no. 1, 2024-12-23, doi: 10.1155/int/9618884.
Juan F. Pérez-Pérez, Isis Bonet, María Solange Sánchez-Pinzón, Fabio Caraffini, Christian Lochmuller
Addressing climate change represents one of the most pressing challenges for organisations in developing nations. This is particularly relevant for companies navigating the shift towards a low-carbon economy. This research leverages artificial intelligence (AI) methodologies to evaluate the financial implications of climate transition risks, encompassing both direct and indirect energy usage, including expenditures on electricity and fossil fuels. Advanced machine learning (ML) and deep learning (DL) models are employed to predict electricity and diesel consumption trends along with their associated costs. Findings from this study indicate an average prediction accuracy of 90.36%, underscoring the value of these tools in supporting organisational decision making related to climate transition risks. The study lays a foundation for comprehending not only the added costs linked to climate risks but also the potential advantages of transitioning to a low-carbon economy, particularly from an energy-focused perspective. Additionally, the proposed climate transition risk adjustment factor offers a framework for visualising the financial impacts of scenarios outlined by the Network for Greening the Financial System.
Juan F. Pérez-Pérez, Isis Bonet, María Solange Sánchez-Pinzón, Fabio Caraffini, and Christian Lochmuller, "Using Artificial Intelligence to Predict the Financial Impact of Climate Transition Risks Within Organisations," International Journal of Intelligent Systems, vol. 2024, no. 1, 2024-12-23, doi: 10.1155/int/3334263.
Cloud computing continues to grow daily and has evolved into an efficient and flexible paradigm for addressing large-scale problems. It is an internet-based computing model in which cloud users share computing and virtual resources such as services, applications, storage, servers, and networks. In the present study, we propose an innovative strategy for enhancing the fault tolerance and load balancing capabilities of cloud computing environments by combining graph neural networks (GNNs) with dynamic multiqueue optimization scheduling (DMQOS). The DMQOS component adjusts to the dynamic nature of cloud workloads, improving response times and resource consumption and thereby load balancing and system effectiveness, while the GNN predicts and mitigates probable faults, increasing fault tolerance and safeguarding service availability. We evaluate the proposed method, GNN–DMQOS, through extensive experiments on real-world cloud computing datasets. The results demonstrate significant improvements over traditional methods: 95.66% fault tolerance, 97.13% adaptability, 1598.14 kbps throughput, 94.78% resource utilization, 96.77% reliability, 2.876 ms response time, 0.141 s network lifetime, 1.627 s end-to-end delay, and 129.34 ms time complexity. In addition, GNN–DMQOS adapts to varying workloads, making it suitable for dynamic cloud environments.
Chetankumar Kalaskar and Thangam S., "A Graph Neural Network-Based Approach With Dynamic Multiqueue Optimization Scheduling (DMQOS) for Efficient Fault Tolerance and Load Balancing in Cloud Computing," International Journal of Intelligent Systems, vol. 2024, no. 1, 2024-12-19, doi: 10.1155/int/6378720.
Traditional fuzzing approaches rely on static mutators and cannot dynamically adjust their test-case mutations for deeper testing, leaving them unable to generate targeted inputs that trigger vulnerabilities. In response to these limitations, this paper proposes a directed fuzzing methodology termed DocFuzz, which is built on a feedback-mechanism mutator. Initially, a sanitizer is used to target the source code of the tested program and instrument code blocks that may contain vulnerabilities. After this, a taint tracking module associates the target code blocks with the bytes in the test case, forming a high-value byte set. Then, the reinforcement learning mutator of DocFuzz mutates the high-value byte set, generating well-structured inputs that can cover the target code blocks. Finally, through the feedback mechanism of DocFuzz, when the reinforcement learning mutator converges and ceases to improve, the fuzzer is rebooted to continue mutating in directions that are more likely to trigger vulnerabilities. Comparative experiments on multiple test sets, including LAVA-M, demonstrate that the proposed DocFuzz methodology surpasses other fuzzing techniques, offering a more precise, rapid, and effective means of detecting vulnerabilities in source code.
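The core loop — taint tracking narrows mutation to a high-value byte set, and instrumentation feedback steers those mutations toward the target block — can be sketched in a few lines. All names here are hypothetical stand-ins: `TARGET` plays the role of a magic-byte comparison guarding the target code block (the kind of check LAVA-M is built from), `feedback` simulates the coverage signal DocFuzz would get from its instrumented binary, and the paper's reinforcement-learning mutator and reboot policy are simplified to a greedy search over the taint-selected bytes.

```python
import random

TARGET = b"MZ"  # hypothetical magic bytes guarding the target code block


def feedback(data: bytes) -> int:
    """Stand-in for instrumented execution: how many guard bytes match
    before execution diverges away from the target code block."""
    score = 0
    for got, want in zip(data, TARGET):
        if got != want:
            break
        score += 1
    return score


def fuzz(seed: bytes, high_value_bytes, rounds=50, rng_seed=0):
    """Feedback-driven mutation restricted to the high-value byte set:
    keep any mutation that moves execution closer to the target block."""
    rng = random.Random(rng_seed)
    best, best_score = seed, feedback(seed)
    for _ in range(rounds):
        pos = rng.choice(high_value_bytes)        # taint-selected position
        for value in range(256):                  # exhaust one byte per round
            data = best[:pos] + bytes([value]) + best[pos + 1:]
            score = feedback(data)
            if score > best_score:
                best, best_score = data, score
        if best_score == len(TARGET):             # target block reached
            break
    return best, best_score


inp, score = fuzz(b"\x00\x00rest-of-file", high_value_bytes=[0, 1])
```

Because mutation is confined to the two taint-identified bytes, the search space collapses from 256^len(input) to 256^2, which is the practical payoff of pairing taint tracking with a feedback-guided mutator.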
{"title":"DocFuzz: A Directed Fuzzing Method Based on a Feedback Mechanism Mutator","authors":"Lixia Xie, Yuheng Zhao, Hongyu Yang, Ziwen Zhao, Ze Hu, Liang Zhang, Xiang Cheng","doi":"10.1155/int/7931792","DOIUrl":"https://doi.org/10.1155/int/7931792","url":null,"abstract":"<div>\u0000 <p>In response to the limitations of traditional fuzzing approaches that rely on static mutators and fail to dynamically adjust their test case mutations for deeper testing, resulting in the inability to generate targeted inputs to trigger vulnerabilities, this paper proposes a directed fuzzing methodology termed DocFuzz, which is predicated on a feedback mechanism mutator. Initially, a sanitizer is used to target the source code of the tested program and stake in code blocks that may have vulnerabilities. After this, a taint tracking module is used to associate the target code block with the bytes in the test case, forming a high-value byte set. Then, the reinforcement learning mutator of DocFuzz is used to mutate the high-value byte set, generating well-structured inputs that can cover the target code blocks. Finally, utilizing the feedback mechanism of DocFuzz, when the reinforcement learning mutator converges and ceases to optimize, the fuzzer is rebooted to continue mutating toward directions that are more likely to trigger vulnerabilities. 
Comparative experiments are conducted on multiple test sets, including LAVA-M, and the experimental results demonstrate that the proposed DocFuzz methodology surpasses other fuzzing techniques, offering a more precise, rapid, and effective means of detecting vulnerabilities in source code.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2024 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/7931792","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142851359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the wide application of deep learning (DL) across various fields, deep joint source–channel coding (DeepJSCC) schemes have emerged as a new coding approach for image transmission. Compared with traditional separated source and channel coding (SSCC) schemes, DeepJSCC is more robust to the channel environment. To address the limited sensing capability of individual devices, distributed cooperative transmission is implemented among edge devices; however, this significantly increases communication overhead. In addition, existing distributed DeepJSCC schemes primarily focus on specific tasks, such as classification or data recovery. In this paper, we explore wireless semantic image cooperative nonorthogonal transmission for distributed edge networks, where edge devices extract features of the same target image from different viewpoints and transmit these features to an edge server. We propose a two-view distributed cooperative DeepJSCC scheme (two-view-DC-DeepJSCC), with and without information disentanglement. In particular, two-view-DC-DeepJSCC with information disentanglement (two-view-DC-DeepJSCC-D) balances performance across the multiple tasks of image semantic communication, while two-view-DC-DeepJSCC without disentanglement pursues only data recovery performance. Through curriculum learning (CL), the proposed two-view-DC-DeepJSCC-D effectively captures both common and private information from the two-view data. The edge server uses the received information to accomplish tasks such as image recovery, classification, and clustering. The experimental results demonstrate that the proposed two-view-DC-DeepJSCC-D scheme can perform image recovery, classification, and clustering simultaneously.
In addition, the proposed two-view-DC-DeepJSCC achieves better recovery performance than existing schemes, while the proposed two-view-DC-DeepJSCC-D not only remains competitive in image recovery but also achieves significant improvements in classification and clustering accuracy. However, two-view-DC-DeepJSCC-D sacrifices some image recovery performance to balance the multiple tasks. Furthermore, two-view-DC-DeepJSCC-D exhibits stronger robustness across various signal-to-noise ratios.
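The graceful degradation that makes JSCC robust to channel conditions can be seen even in the classical analog case, long before neural codecs. The sketch below is not the paper's scheme — there are no learned networks, no two views, and no disentanglement; it is the textbook setup where a Gaussian source is sent uncoded over an AWGN channel and recovered with a linear MMSE estimator, so reconstruction error shrinks smoothly as SNR rises instead of collapsing at a cliff as separated digital schemes do.

```python
import math
import random

rng = random.Random(0)


def transmit(x, snr_db):
    """Analog joint source-channel 'coding' of a Gaussian source over an
    AWGN channel: send each sample directly, then apply the linear MMSE
    estimator g = SNR / (SNR + 1) at the receiver."""
    snr = 10 ** (snr_db / 10)
    sig_pow = sum(v * v for v in x) / len(x)
    noise_sd = math.sqrt(sig_pow / snr)
    y = [v + rng.gauss(0.0, noise_sd) for v in x]
    g = snr / (snr + 1)
    return [g * v for v in y]


x = [rng.gauss(0.0, 1.0) for _ in range(64)]  # stand-in "image" feature vector


def avg_mse(snr_db, trials=200):
    """Mean squared reconstruction error, averaged over channel noise."""
    total = 0.0
    for _ in range(trials):
        xh = transmit(x, snr_db)
        total += sum((a - b) ** 2 for a, b in zip(xh, x)) / len(x)
    return total / trials


mse = {snr: avg_mse(snr) for snr in (0, 20)}
```

For a unit-variance source the expected distortion is roughly 1 / (1 + SNR), so the measured MSE drops from about 0.5 at 0 dB to about 0.01 at 20 dB — a smooth curve rather than a threshold, which is the robustness property the abstract attributes to DeepJSCC.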
{"title":"Two-View Image Semantic Cooperative Nonorthogonal Transmission in Distributed Edge Networks","authors":"Wei Wang, Donghong Cai, Zhicheng Dong, Lisu Yu, Yanqing Xu, Zhiquan Liu","doi":"10.1155/int/5081017","DOIUrl":"https://doi.org/10.1155/int/5081017","url":null,"abstract":"<div>\u0000 <p>With the wide application of deep learning (DL) across various fields, deep joint source–channel coding (DeepJSCC) schemes have emerged as a new coding approach for image transmission. Compared with traditional separated source and CC (SSCC) schemes, DeepJSCC is more robust to the channel environment. To address the limited sensing capability of individual devices, distributed cooperative transmission is implemented among edge devices. However, this approach significantly increases communication overhead. In addition, existing distributed DeepJSCC schemes primarily focus on specific tasks, such as classification or data recovery. In this paper, we explore the wireless semantic image collaborative nonorthogonal transmission for distributed edge networks, where edge devices distributed across the network extract features of the same target image from different viewpoints and transmit these features to an edge server. A two-view distributed cooperative DeepJSCC (two-view-DC-DeepJSCC) with or without information disentanglement scheme is proposed. In particular, the two-view-DC-DeepJSCC with information disentanglement (two-view-DC-DeepJSCC-D) is proposed for achieving balancing performance between multitasking of image semantic communication; while the two-view-DC-DeepJSCC without information disentanglement only pursues outstanding data recovery performance. Through curriculum learning (CL), the proposed two-view-DC-DeepJSCC-D effectively captures both common and private information from two-view data. The edge server uses the received information to accomplish tasks such as image recovery, classification, and clustering. 
The experimental results demonstrate that our proposed two-view-DC-DeepJSCC-D scheme is capable of simultaneously performing image recovery, classification, and clustering tasks. In addition, the proposed two-view-DC-DeepJSCC has better recovery performance compared to the existing schemes, while the proposed two-view-DC-DeepJSCC-D not only maintains a competitive advantage in image recovery but also has a significant improvement in classification and clustering accuracy. However, the proposed two-view-DC-DeepJSCC-D will sacrifice some image recovery performance to balance multiple tasks. Furthermore, two-view-DC-DeepJSCC-D exhibits stronger robustness across various signal-to-noise ratios.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2024 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/5081017","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142860439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}