An EEG signal-based music treatment system for autistic children using edge computing devices
Pub Date: 2024-09-10 | DOI: 10.1007/s11276-024-03826-x
Mingxu Sun, Lingfeng Xiao, Xiujin Zhu, Peng Zhang, Xianping Niu, Tao Shen, Bin Sun, Yuan Xu
This paper proposes a system that applies electroencephalogram (EEG) technology to music intervention therapy. The system identifies the emotions of autistic children in real time and plays music matched to those emotions as a musical treatment, assisting music therapists; the principle of playing homogenous music is to gradually calm the listener. The proposed method first collects EEG from autistic children using a 14-channel EMOTIV EPOC+ headset and preprocesses the signals through bandpass filtering and wavelet decomposition and reconstruction, then extracts frequency band-power characteristics from the reconstructed EEG signals. The data are then classified into one of three emotion types (positive, neutral, and negative) using a support vector machine (SVM). The system also displays the recognized emotion type on a user interface and gives real-time feedback on emotional changes, which helps music therapists evaluate the treatment and its results more conveniently and effectively. Real EEG data are used to verify the system's feasibility, reaching a classification accuracy of 88%. As the Internet of Things develops, combining edge computing with Wise Information Technology of 120 (WIT120) is becoming a new trend. In this work, we combine edge computing devices with cloud computing resources to form a music regulation system for autistic children that meets the timeliness and computational-performance requirements of EEG signal processing. In the designed system, EEG signals are preprocessed on edge nodes and then sent to the cloud, where frequency band-power characteristics are extracted as features for the SVM. Finally, the results are sent to a mobile app or desktop software for therapists to evaluate.
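The described pipeline (bandpass filter, wavelet decomposition and reconstruction, band-power features, SVM) is concrete enough to sketch. Below is a minimal Python illustration; the 128 Hz sampling rate, 1-45 Hz passband, db4 wavelet, and band definitions are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC

FS = 128  # assumed EMOTIV EPOC+ sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def preprocess(raw, fs=FS):
    """Bandpass-filter one channel, then denoise via wavelet decomposition/reconstruction."""
    b, a = butter(4, [1, 45], btype="band", fs=fs)
    filtered = filtfilt(b, a, raw)
    coeffs = pywt.wavedec(filtered, "db4", level=4)
    coeffs[-1][:] = 0  # drop the finest detail coefficients (high-frequency noise)
    return pywt.waverec(coeffs, "db4")

def band_power_features(signal, fs=FS):
    """Average PSD within each band: one feature per (channel, band) pair."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

def extract(epochs):
    """epochs: iterable of trials, each of shape (n_channels, n_samples)."""
    feats = [np.concatenate([band_power_features(preprocess(ch)) for ch in trial])
             for trial in epochs]
    return np.asarray(feats)

# Hypothetical usage: labels 0=negative, 1=neutral, 2=positive.
# clf = SVC(kernel="rbf").fit(extract(train_epochs), train_labels)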
{"title":"An EEG signal-based music treatment system for autistic children using edge computing devices","authors":"Mingxu Sun, Lingfeng Xiao, Xiujin Zhu, Peng Zhang, Xianping Niu, Tao Shen, Bin Sun, Yuan Xu","doi":"10.1007/s11276-024-03826-x","DOIUrl":"https://doi.org/10.1007/s11276-024-03826-x","url":null,"abstract":"<p>This paper proposes a system that applies electroencephalogram (EEG) technology to achieve music intervention therapy. The system can identify emotions of autistic children in real-time and play music considering their emotions as a musical treatment to assist the treatment of music therapists and the principle of playing homogenous music is to finally calm people down. The proposed method firstly collects EEG of autistic children using a 14-channel EMOTIV EPOC + and preprocesses signals through bandpass filtering, wavelet decomposition and reconstruction, then extracts frequency band-power characteristics of reconstructed EEG signals. Later, the data are classified as one of the three types of emotions (positive, middle and negative) using a support vector machine (SVM). The system also displays the recognized emotion type on a user interface and gives real-time emotional state feedback on emotional changes, which helps music therapists to evaluate the treatment and results more conveniently and effectively. Real EEG data are used to conduct the verification of system feasibility which reaches a classification accuracy of 88%. As the Internet of Things develops, the combination of edge computing with Wise Information Technology of 120 (WIT120) becomes a new trend. In this work, we propose a system to combine edge computing devices with cloud computing resources to form the music regulation system for autistic children to meet processing requirements for EEG signals in terms of timeliness and computational performance. In the designed system, preprocessing EEG signals is done in edge nodes then the preprocessed signals are sent to the cloud where frequency band-power characteristics can be extracted as features to be used in SVM. At last, the results are sent to a mobile app or computer software for therapists to evaluate.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"4 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A DV-Hop localization algorithm corrected based on multi-strategy sparrow algorithm in sea-surface wireless sensor networks
Pub Date: 2024-09-10 | DOI: 10.1007/s11276-024-03827-w
Lei Zhang, Yujing Deng, Jia Fu, Lei Li, Jinhua Hu, Kangjian Di
Sea-surface sensor node localization accuracy is often degraded by seawater flow, while sea storms affect the transmission of radio signals. To improve the localization accuracy of the Distance Vector-Hop (DV-Hop) algorithm in sea-surface wireless sensor networks, we propose a DV-Hop localization algorithm enhanced through a multi-strategy sparrow search algorithm. A sea-surface communication model is established with drones as sink nodes, and the hop counts between nodes are subdivided using non-uniform communication radii. Then, the average hop distance of each node is corrected by combining the weighted minimum mean square error with the cosine theorem. Finally, the calculated localization error is used as the fitness function. The positions of unknown nodes are initialized using an elite reversal strategy, and the Harris Hawk optimization method combined with the differential evolution algorithm updates the positions of the discoverers in the sparrow population to improve population diversity. In simulation experiments, the effectiveness of our algorithm is verified in anisotropic topologies. We then compare DV-Hop, the Sparrow Search Algorithm-optimized DV-Hop (SSA-DV-Hop), the Whale Optimization Algorithm-optimized DV-Hop (WOA-DV-Hop), and the Harris Hawk Optimization-optimized DV-Hop (HHO-DV-Hop) against our algorithm to verify its accuracy. The results show that, across various communication radii, the average localization error is 66.91% lower than DV-Hop's. In addition, across scenarios with different numbers of beacon nodes, the average localization error decreases by 66.78% compared to DV-Hop. The proposed algorithm therefore effectively improves localization accuracy.
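For reference, the classical DV-Hop baseline that the paper corrects can be sketched in a few lines. The version below implements only the three standard steps (hop counting, per-beacon average hop distance, least-squares multilateration); the paper's non-uniform radii, weighted MMSE correction, and sparrow-search refinement are omitted.

```python
import numpy as np
import networkx as nx

def dv_hop(G, positions, beacons):
    """G: connectivity graph (assumed connected); positions: beacon id -> np.ndarray
    coordinates; beacons: list of beacon node ids. Returns estimates for unknowns."""
    hops = {b: nx.single_source_shortest_path_length(G, b) for b in beacons}
    # Step 2: each beacon estimates its average per-hop distance from the others.
    hop_dist = {}
    for b in beacons:
        d = sum(np.linalg.norm(positions[b] - positions[o]) for o in beacons if o != b)
        h = sum(hops[b][o] for o in beacons if o != b)
        hop_dist[b] = d / h
    estimates = {}
    for node in G:
        if node in beacons:
            continue
        # Step 3: distance estimate to each beacon = hop count * avg hop distance.
        dists = np.array([hops[b][node] * hop_dist[b] for b in beacons])
        P = np.array([positions[b] for b in beacons])
        # Linearize ||x - p_i||^2 = d_i^2 against the last beacon and solve A x = c.
        A = 2 * (P[:-1] - P[-1])
        c = (dists[-1] ** 2 - dists[:-1] ** 2
             + np.sum(P[:-1] ** 2, axis=1) - np.sum(P[-1] ** 2))
        estimates[node] = np.linalg.lstsq(A, c, rcond=None)[0]
    return estimates
```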
{"title":"A DV-Hop localization algorithm corrected based on multi-strategy sparrow algorithm in sea-surface wireless sensor networks","authors":"Lei Zhang, Yujing Deng, Jia Fu, Lei Li, Jinhua Hu, Kangjian Di","doi":"10.1007/s11276-024-03827-w","DOIUrl":"https://doi.org/10.1007/s11276-024-03827-w","url":null,"abstract":"<p>Sea surface sensor node localization accuracy is often hindered by seawater flow, while sea storms affect the transmission of radio signals. To improve the localization accuracy of the Distance Vector-Hop (DV-Hop) algorithm in Sea surface wireless sensor networks, we propose a DV-Hop localization algorithm enhanced through a multi-strategy sparrow search algorithm. The sea surface communication model is established, with drones as sink nodes, and the number of hops between nodes in the Sea Surface network is subdivided using non-uniform communication radii. Then, the average hop distance of the node is corrected by combining the weighted minimum mean square error and the cosine theorem. Finally, the calculated localization error is used as the fitness function. The localization of unknown nodes is initialized using the elite reversal strategy, and the Harris Hawk optimization method combined with the differential evolution algorithm is used to update the localization of the sparrow population discoverer to improve the population diversity. In the simulation experiments, the effectiveness of our algorithm is verified in anisotropic topologies. After that, we compared DV-Hop, Sparrow Search Algorithm for Optimizing DV-Hop (SSA-DV-Hop), Whale Optimization Algorithm for Optimizing DV-Hop (WOA-DV-Hop), and Harris Hawk Optimization Algorithm for Optimizing DV-Hop (HHO-DV-Hop) with our algorithm to verify the accuracy of the algorithm. The results show that, across various communication radii, the average localization error exhibited a reduction of 66.91% in comparison to DV-Hop. In addition, in different scenarios with different numbers of beacon nodes, the average localization error decreased by 66.78% compared to DV-Hop. Therefore, the proposed algorithm can effectively improve localization accuracy.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"419 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Layer Collaborative Federated Learning architecture for 6G Open RAN
Pub Date: 2024-09-05 | DOI: 10.1007/s11276-024-03823-0
Borui Zhao, Qimei Cui, Wei Ni, Xueqi Li, Shengyuan Liang
The emerging sixth-generation (6G) systems aim to integrate machine learning (ML) capabilities into the network architecture. Open Radio Access Network (O-RAN) is a paradigm that supports this vision. However, deep integration of 6G edge intelligence and O-RAN faces challenges in the efficient execution of ML tasks due to finite link bandwidth and data privacy concerns. We propose a new Multi-Layer Collaborative Federated Learning (MLCFL) architecture for O-RAN, together with a workflow and deployment design, demonstrated through the important RAN use case of intelligent mobility management. Simulation results show that MLCFL effectively improves mobility prediction and reduces energy consumption and delay through flexible deployment adjustments. MLCFL has the potential to advance O-RAN architecture design and provides guidelines for the efficient deployment of edge intelligence in 6G.
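The multi-layer aggregation idea can be illustrated with a two-level FedAvg sketch: edge sites aggregate their clients first, then a top-level server aggregates the edge models. Weighting by sample count is the generic FedAvg convention, not MLCFL's exact O-RAN workflow.

```python
import numpy as np

def fedavg(models, weights):
    """Weighted average of parameter vectors (one np.ndarray per model)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

def multilayer_round(edge_groups):
    """edge_groups: list of [(client_params, n_samples), ...], one list per edge site."""
    edge_models, edge_sizes = [], []
    for clients in edge_groups:
        params, sizes = zip(*clients)
        edge_models.append(fedavg(params, sizes))   # layer 1: edge-level aggregation
        edge_sizes.append(sum(sizes))
    return fedavg(edge_models, edge_sizes)          # layer 2: global aggregation
```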
{"title":"Multi-Layer Collaborative Federated Learning architecture for 6G Open RAN","authors":"Borui Zhao, Qimei Cui, Wei Ni, Xueqi Li, Shengyuan Liang","doi":"10.1007/s11276-024-03823-0","DOIUrl":"https://doi.org/10.1007/s11276-024-03823-0","url":null,"abstract":"<p>The emerging sixth-generation (6G) systems aim to integrate machine learning (ML) capabilities into the network architecture. Open Radio Access Network (O-RAN) is a paradigm that supports this vision. However, deep integration of 6G edge intelligence and O-RAN can face challenges in efficient execution of ML tasks due to finite link bandwidth and data privacy concerns. We propose a new Multi-Layer Collaborative Federated Learning (MLCFL) architecture for O-RAN, as well as a workflow and deployment design, which are demonstrated through the important RAN use case of intelligent mobility management. Simulation results show that MLCFL effectively improves the mobility prediction and reduces energy consumption and delay through flexible deployment adjustments. MLCFL has the potential to advance the O-RAN architecture design and provides guidelines for efficient deployment of edge intelligence in 6G.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"4 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142226971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud-edge collaboration-based task offloading strategy in railway IoT for intelligent detection
Pub Date: 2024-09-04 | DOI: 10.1007/s11276-024-03824-z
Qichang Guo, Zhanyue Xu, Jiabin Yuan, Yifei Wei
Driven by technologies such as deep learning, online detection equipment can perform comprehensive and continuous monitoring of high-speed railways (HSR). However, these detection tasks in the railway Internet of Things (IoT) are typically computation-intensive and delay-sensitive, which makes task processing challenging. Meanwhile, the dynamic and resource-constrained nature of HSR scenarios poses significant challenges for effective resource allocation. In this paper, we propose a cloud-edge collaboration architecture for deep learning-based detection tasks in the railway IoT. Within this system model, we introduce a distributed inference mode that partitions each task into two parts, offloading part of the processing to the edge. We then jointly optimize the computation offloading strategy and the model partitioning strategy to minimize average delay while meeting accuracy requirements. This optimization problem is a complex mixed-integer nonlinear programming (MINLP) problem, so we divide it into two sub-problems: computation offloading decisions and model partitioning decisions. For model partitioning, we propose a Partition Point Selection (PPS) algorithm; for computation offloading, we formulate the decision as a Markov Decision Process (MDP) and solve it with the Deep Deterministic Policy Gradient (DDPG) algorithm. Simulation results demonstrate that PPS rapidly selects the globally optimal partition points and, combined with DDPG, adapts well to the offloading challenges of detection tasks in HSR scenarios.
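The delay trade-off behind partition-point selection can be made concrete with a small exhaustive search: for each candidate split of a layered model, the total delay is edge compute up to the split, plus transmission of the intermediate tensor, plus cloud compute for the rest. The cost model below (per-layer FLOPs, output sizes, a single link rate) is an illustrative assumption, not the paper's PPS algorithm.

```python
def select_partition(flops, out_bits, input_bits, link_rate, edge_speed, cloud_speed):
    """flops[i], out_bits[i]: compute cost and output size of layer i.
    Split k runs layers [0, k) on the edge and ships the intermediate to the cloud."""
    n = len(flops)
    best_k, best_delay = 0, float("inf")
    for k in range(n + 1):
        t_edge = sum(flops[:k]) / edge_speed            # edge-side inference time
        tx_bits = input_bits if k == 0 else out_bits[k - 1]
        t_tx = tx_bits / link_rate                      # uplink transmission time
        t_cloud = sum(flops[k:]) / cloud_speed          # cloud-side inference time
        delay = t_edge + t_tx + t_cloud
        if delay < best_delay:
            best_k, best_delay = k, delay
    return best_k, best_delay

# Hypothetical usage: 4-layer model, FLOPs and tensor sizes are made-up numbers.
# select_partition([1e9]*4, [8e6, 4e6, 2e6, 1e5], 5e7, 1e7, 1e10, 1e11)
```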
{"title":"Cloud-edge collaboration-based task offloading strategy in railway IoT for intelligent detection","authors":"Qichang Guo, Zhanyue Xu, Jiabin Yuan, Yifei Wei","doi":"10.1007/s11276-024-03824-z","DOIUrl":"https://doi.org/10.1007/s11276-024-03824-z","url":null,"abstract":"<p>Driven by technologies such as deep learning, online detection equipment can perform comprehensive and continuous monitoring of high-speed railways (HSR). However, these detection tasks in the railway Internet of Things (IoT) are typically computation-intensive and delay-sensitive, that makes task processing challenging. Meanwhile, the dynamic and resource-constrained nature of HSR scenarios poses significant challenges for effective resource allocation. In this paper, we propose a cloud-edge collaboration architecture for deep learning-based detection tasks in railway IoT. Within this system model, we introduce a distributed inference mode that partitions tasks into two parts, offloading task processing to the edge side. Then we jointly optimize the computing offloading strategy and model partitioning strategy to minimize the average delay while ensuring accuracy requirements. However, this optimization problem is a complex mixed-integer nonlinear programming (MINLP) issue. We divide it into two sub-problems: computing offloading decisions and model partitioning decisions. For model partitioning, we propose a Partition Point Selection (PPS) algorithm; for computing offloading decisions, we formulate it as a Markov Decision Process (MDP) and solve it using DDPG. Simulation results demonstrate that PPS can rapidly select the globally optimal partition points, and combined with DDPG, it can better adapt to the offloading challenges of detection tasks in HSR scenarios.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"17 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploiting data transmission for route discoveries in mobile ad hoc networks
Pub Date: 2024-08-31 | DOI: 10.1007/s11276-024-03796-0
Xin Yu
On-demand routing protocols discover routes through network-wide searches. Route requests are broadcast to a large number of nodes, and route replies may contain long routes. In this paper, we address the route discovery problem and aim to reduce route discovery overhead. We propose using data packets to discover routes. A source sets a boolean variable in a data packet to true when it has only one route to the destination; this variable is a new form of route request. The nodes forwarding the data packet send route replies containing cached routes. To prevent nodes from sending duplicate routes to the source, we define a forward list and a backward list in the data packet. A node sending a route reply records route diverging and converging information about the route in the reply, and subsequent nodes use the information in the data packet to decide whether to send a route reply of their own. Our algorithm reduces route discovery latency and discovers routes shorter than or equal in length to the active data path. Owing to these shorter routes, it significantly reduces the total size of route requests and route replies. Routing overhead grows slowly as mobility or network load increases, and the algorithm is independent of node movement. It improves the packet delivery ratio by 15% and reduces latency by 54% for 100-node networks at a mean node speed of 20 m/s.
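A rough sketch of the piggybacked route request and duplicate-reply suppression might look as follows. The forward/backward list semantics are simplified here, since the abstract does not fully specify the diverging/converging bookkeeping; all names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class DataPacket:
    src: str
    dst: str
    needs_route: bool = False          # set by the source when it has only one route left
    forward_list: set = field(default_factory=set)   # routes already reported (assumed role)
    backward_list: set = field(default_factory=set)  # routes already reported (assumed role)

def on_forward(pkt, route_cache):
    """Called at each node forwarding pkt along the active path; returns a
    route reply to send, or None if this node should stay silent."""
    if not pkt.needs_route:
        return None
    cached = route_cache.get(pkt.dst)
    if cached is None:
        return None
    key = tuple(cached)
    if key in pkt.forward_list or key in pkt.backward_list:
        return None                    # another node already reported this route
    pkt.forward_list.add(key)          # record it so downstream nodes skip it
    return {"to": pkt.src, "route": cached}
```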
{"title":"Exploiting data transmission for route discoveries in mobile ad hoc networks","authors":"Xin Yu","doi":"10.1007/s11276-024-03796-0","DOIUrl":"https://doi.org/10.1007/s11276-024-03796-0","url":null,"abstract":"<p>On-demand routing protocols discover routes through network-wide searches. Route requests are broadcast to a large number of nodes, and route replies may contain long routes. In this paper, we address the route discovery problem and aim to reduce route discovery overhead. We propose using data packets to discover routes. A source sets a boolean variable in a data packet to be true when it has only one route to the destination. This variable is a new form of a route request. The nodes forwarding the data packet send route replies containing cached routes. To prevent nodes from sending duplicate routes to the source, we define a <i>forward</i> list and a <i>backward</i> list in the data packet. The node sending a route reply records route diverging and converging information about the route in the route reply. Subsequent nodes use the information in the data packet to decide whether to send a route reply. Our algorithm reduces route discovery latency and discovers routes shorter than or having the same length as the <i>active</i> data path. Due to these shorter routes, it reduces the total size of route requests and route replies significantly. Routing overhead increases slowly as mobility or network load increases. Our algorithm is independent of node movement. It improves packet delivery ratio by 15% and reduces latency by 54% for the 100-node networks at node mean speed of 20 m/s.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"319 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142226972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coarse-to-fine label propagation with hybrid representation for deep semi-supervised bot detection
Pub Date: 2024-08-14 | DOI: 10.1007/s11276-024-03821-2
Huailiang Peng, Yujun Zhang, Xu Bai, Qiong Dai
Social bot detection is crucial for ensuring the active participation of digital twins and edge intelligence in future social media platforms. However, the performance of existing detection methods is impeded by the limited availability of labeled accounts. Deep semi-supervised learning with label propagation, which exploits unlabeled data to improve performance, has made notable progress in other fields, but its effectiveness in social bot detection is significantly hindered by the misdistribution of individuation users (MIU). To address these challenges, we propose a novel deep semi-supervised bot detection method that combines coarse-to-fine label propagation (LP-CF) with hybridized representation models over multi-relational graphs (HR-MRG) to improve the accuracy of label propagation, thereby making unlabeled data more effective in supporting the detection task. Specifically, considering the potential confusion among accounts under the MIU phenomenon, we use HR-MRG to obtain high-quality user representations. We then introduce a sample selection strategy to partition unlabeled samples into two subsets and apply LP-CF to generate pseudo labels for each subset. Finally, the predicted pseudo labels of unlabeled samples, combined with the labeled samples, are used to fine-tune the detection models. Comprehensive experiments on two widely used real datasets demonstrate that our method outperforms other semi-supervised approaches and achieves performance comparable to fully supervised social bot detection.
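The coarse-to-fine pseudo-labeling loop can be sketched with off-the-shelf components: scikit-learn's graph-based LabelPropagation stands in for LP-CF, a plain classifier stands in for the HR-MRG detector, and the 0.9 confidence threshold is an assumed value.

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation
from sklearn.linear_model import LogisticRegression

def coarse_to_fine(X_lab, y_lab, X_unlab, conf_thresh=0.9):
    """Propagate labels over a kNN graph, then fine-tune on confident pseudo labels."""
    X = np.vstack([X_lab, X_unlab])
    y = np.concatenate([y_lab, -np.ones(len(X_unlab), dtype=int)])  # -1 = unlabeled
    lp = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y)
    proba = lp.label_distributions_[len(X_lab):]     # posterior over classes, unlabeled part
    pseudo = lp.transduction_[len(X_lab):]           # propagated (pseudo) labels
    easy = proba.max(axis=1) >= conf_thresh          # coarse stage: keep confident samples
    # Fine stage: retrain on real labels plus the confident pseudo labels only.
    X_fine = np.vstack([X_lab, X_unlab[easy]])
    y_fine = np.concatenate([y_lab, pseudo[easy]])
    return LogisticRegression(max_iter=1000).fit(X_fine, y_fine)
```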
{"title":"Coarse-to-fine label propagation with hybrid representation for deep semi-supervised bot detection","authors":"Huailiang Peng, Yujun Zhang, Xu Bai, Qiong Dai","doi":"10.1007/s11276-024-03821-2","DOIUrl":"https://doi.org/10.1007/s11276-024-03821-2","url":null,"abstract":"<p>Social bot detection is crucial for ensuring the active participation of digital twins and edge intelligence in future social media platforms. Nevertheless, the performance of existing detection methods is impeded by the limited availability of labeled accounts. Despite the notable progress made in some fields by deep semi-supervised learning with label propagation, which utilizes unlabeled data to enhance method performance, its effectiveness is significantly hindered in social bot detection due to the misdistribution of individuation users (MIU). To address these challenges, we propose a novel deep semi-supervised bot detection method, which adopts a coarse-to-fine label propagation (LP-CF) with the hybridized representation models over multi-relational graphs (HR-MRG) to enhance the accuracy of label propagation, thereby improving the effectiveness of unlabeled data in supporting the detection task. Specifically, considering the potential confusion among accounts in the MIU phenomenon, we utilize HR-MRG to obtain high-quality user representations. Subsequently, we introduce a sample selection strategy to partition unlabeled samples into two subsets and apply LP-CF to generate pseudo labels for each subset. Finally, the predicted pseudo labels of unlabeled samples, combined with labeled samples, are used to fine-tune the detection models. Comprehensive experiments on two widely used real datasets demonstrate that our method outperforms other semi-supervised approaches and achieves comparable performance to the fully supervised social bot detection method.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"420 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142216684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EtherVote: a secure smart contract-based e-voting system
Pub Date: 2024-08-07 | DOI: 10.1007/s11276-024-03818-x
Achilleas Spanos, Ioanna Kantzavelou
Conventional election procedures cannot meet modern requirements. Researchers have long sought secure electronic voting systems to replace traditional practices. Decentralized approaches such as blockchain technology are essential for providing the guarantees a secure voting platform requires: transparency, immutability, and confidentiality. This paper presents EtherVote, a secure decentralized electronic voting system based on the Ethereum blockchain. EtherVote is a serverless e-voting model that relies solely on Ethereum and smart contracts, with no database, which enhances security and privacy. The model incorporates an effective method for voter registration and identification to strengthen security. Its main properties include encrypted votes, efficiency in handling elections with numerous participants, and simplicity. The system is tested and evaluated, vulnerabilities and possible attacks are examined through a security analysis, and anonymity, integrity, and unlinkability are shown to be preserved.
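For a sense of how a client interacts with such a contract, here is an illustrative web3.py call. The node URL, contract address, ABI, and vote() signature are hypothetical stand-ins, not EtherVote's actual interface.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local node, for illustration

# Hypothetical minimal ABI: a single vote(candidateId) function.
VOTING_ABI = [{"name": "vote", "type": "function", "stateMutability": "nonpayable",
               "inputs": [{"name": "candidateId", "type": "uint256"}], "outputs": []}]
contract = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),
    abi=VOTING_ABI)  # placeholder address

def cast_vote(voter_account, candidate_id):
    """Send a vote transaction; the chain, not a database, records the ballot."""
    tx_hash = contract.functions.vote(candidate_id).transact({"from": voter_account})
    return w3.eth.wait_for_transaction_receipt(tx_hash)
```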
{"title":"EtherVote: a secure smart contract-based e-voting system","authors":"Achilleas Spanos, Ioanna Kantzavelou","doi":"10.1007/s11276-024-03818-x","DOIUrl":"https://doi.org/10.1007/s11276-024-03818-x","url":null,"abstract":"<p>Conventional electing procedures cannot fulfill advanced requirements in modern times. Secure electronic voting systems have been a concern of many researchers for years to replace traditional practices. Decentralized approaches, such as Blockchain technology, are essential to provide compulsory guarantees for secure voting platforms, that hold the properties of transparency, immutability, and confidentiality. This paper presents EtherVote, a secure decentralized electronic voting system, which is based on the Ethereum Blockchain network. The EtherVote is a serverless e-voting model, relying solely on Ethereum and smart contracts, that does not include a database, and thus it enhances security and privacy. The model incorporates an effective method for voter registration and identification to strengthen security. The main properties of EtherVote include encrypted votes, efficiency in handling elections with numerous participants, and simplicity. The system is tested and evaluated, vulnerabilities and possible attacks are exposed through a security analysis, and anonymity, integrity, and unlinkability are retained.\u0000</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"10 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141943941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep intrusion net: an efficient framework for network intrusion detection using hybrid deep TCN and GRU with integral features
Pub Date: 2024-08-03 | DOI: 10.1007/s11276-024-03800-7
Y. Alekya Rani, E. Sreenivasa Reddy
In recent times, numerous cyber attacks have occurred on networks, so effective tools are needed to detect network intrusions. Network intrusion detection systems have become an important tool: they safeguard source data from malicious activities and threats and protect individual privacy. Many existing studies explore network intrusion detection models, but they fail to protect the target network efficiently using statistical features. A major issue in such models is robustness, or generalization: the ability to maintain performance when data come from various distributions. To handle these difficulties, a new meta-heuristic hybrid deep learning model is introduced to detect intrusions. First, the input data are gathered from standard data sources. They then undergo a preprocessing phase consisting of duplicate removal, replacement of NaN values, and normalization. From the preprocessed data, an autoencoder extracts the significant features. To further improve performance, the optimal features are chosen with the help of an Improved Chimp Optimization Algorithm (IChOA). The optimal features are then fed to the newly developed hybrid deep learning model. This hybrid model, termed DINet, is built by combining a deep temporal convolutional network with a gated recurrent unit, and its hyperparameters are tuned by the improved IChOA to obtain optimal solutions. Finally, the proposed detection model is evaluated and compared with previous detection approaches. The analysis shows that the developed model achieves 97% accuracy and precision, demonstrating that it detects malware effectively and improves the security of data transmission.
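A hybrid deep TCN + GRU classifier of the kind named DINet can be sketched in Keras. The layer sizes, dilation rates, and the use of causal Conv1D blocks as the temporal-convolution part are illustrative choices; the autoencoder front end and the IChOA hyperparameter tuning are not modeled.

```python
import tensorflow as tf

def build_dinet(n_features, n_classes, timesteps=10):
    """Minimal hybrid model: stacked dilated causal convolutions feeding a GRU head."""
    inputs = tf.keras.Input(shape=(timesteps, n_features))
    x = inputs
    for rate in (1, 2, 4):  # TCN-style dilated causal convolutions
        x = tf.keras.layers.Conv1D(64, 3, padding="causal",
                                   dilation_rate=rate, activation="relu")(x)
    x = tf.keras.layers.GRU(64)(x)  # gated recurrent unit over the conv features
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage: model = build_dinet(n_features=41, n_classes=5)
```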
{"title":"Deep intrusion net: an efficient framework for network intrusion detection using hybrid deep TCN and GRU with integral features","authors":"Y. Alekya Rani, E. Sreenivasa Reddy","doi":"10.1007/s11276-024-03800-7","DOIUrl":"https://doi.org/10.1007/s11276-024-03800-7","url":null,"abstract":"<p>In recent times, the several cyber attacks are occurred on the network and thus, essential tools are needed for detecting intrusion over the network. Moreover, the network intrusion detection systems become an important tool thus, it has the ability to safeguard the source data from all malicious activities or threats as well as protect the insecurity among individual privacy. Moreover, many existing research works are explored to detect the network intrusion model but it fails to protect the target network efficiently based on the statistical features. A major issue in the designed model is regarded as the robustness or generalization that has the capability to control the working performance when the data is attained from various distributions. To handle all the difficulties, a new meta-heuristic hybrid-based deep learning model is introduced to detect the intrusion. Initially, the input data is garnered from the standard data sources. It is then undergone the pre-processing phase, which is accomplished through duplicate removal, replacing the NAN values, and normalization. With the resultant of pre-processed data, the auto encoder is utilized for extracting the significant features. To further improve the performance, it requires choosing the optimal features with the help of an Improved chimp optimization algorithm known as IChOA. Subsequently, the optimal features are subjected to the newly developed hybrid deep learning model. The hybrid model is built by incorporating the deep temporal convolution network and gated recurrent unit, and it is termed as DINet, in which the hyper parameters are tuned by an improved IChOA algorithm for attaining optimal solutions. Finally, the proposed detection model is evaluated and compared with the former detection approaches. The analysis shows the developed model is suggested to provide 97% in terms of accuracy and precision. Thus, the enhanced model elucidates that to effectively detect malware, which tends to improve data transmission significantly and securely.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"55 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141885699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ets-ddpg: an energy-efficient and QoS-guaranteed edge task scheduling approach based on deep reinforcement learning
Pub Date: 2024-07-30 | DOI: 10.1007/s11276-024-03820-3
Jiale Zhao, Yunni Xia, Xiaoning Sun, Tingyan Long, Qinglan Peng, Shangzhi Guo, Fei Meng, Yumin Dong, Qing Xia
With the development of 5G communication and Internet of Things (IoT) technology, increasing volumes of data are generated by large numbers of IoT devices at edge networks. Enterprises therefore increasingly need distributed data centers (DCs), and building elastic applications upon DCs deployed over decentralized edge infrastructures is becoming popular. Nevertheless, it remains difficult to schedule computational tasks to appropriate DCs at the edge with low energy consumption and satisfactory user-perceived quality of service, especially when the DCs deployed over an edge environment are highly inhomogeneous in resource configurations and computing capabilities. To this end, we develop an edge task scheduling method that combines an M/G/1/PR queuing model, which characterizes the workload distribution, with a Deep Deterministic Policy Gradient algorithm that yields high-quality schedules at low energy cost. Extensive numerical analysis shows that our proposed method outperforms state-of-the-art methods in average task response time and energy consumption.
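The queuing side rests on M/G/1 analysis. As a worked building block, the Pollaczek-Khinchine formula below gives the mean response time of a plain M/G/1 queue; the paper's M/G/1/PR model adds preemptive-resume priorities on top of this.

```python
def mg1_mean_response(lam, es, es2):
    """Mean response time of an M/G/1 queue.
    lam: arrival rate; es: E[S] (mean service time); es2: E[S^2] (second moment)."""
    rho = lam * es
    assert rho < 1, "queue must be stable (rho < 1)"
    wait = lam * es2 / (2 * (1 - rho))  # P-K formula: E[W] = lam*E[S^2] / (2(1-rho))
    return es + wait                     # response time = service time + waiting time

# Example: lam = 0.5 tasks/s, exponential service with mean 1 s (so E[S^2] = 2):
# mg1_mean_response(0.5, 1.0, 2.0) -> 2.0 s, matching the M/M/1 result 1/(mu - lam).
```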
{"title":"Ets-ddpg: an energy-efficient and QoS-guaranteed edge task scheduling approach based on deep reinforcement learning","authors":"Jiale Zhao, Yunni Xia, Xiaoning Sun, Tingyan Long, Qinglan Peng, Shangzhi Guo, Fei Meng, Yumin Dong, Qing Xia","doi":"10.1007/s11276-024-03820-3","DOIUrl":"https://doi.org/10.1007/s11276-024-03820-3","url":null,"abstract":"<p>With the development of 5 G communication and Internet of Things (IoT) technology, increasing data is generated by a large number of IoT devices at edge networks. Therefore, increasing need for distributed Data Centers (DCs) are seen from enterprises and building elastic applications upon DCs deployed over decentralized edge infrastructures is becoming popular. Nevertheless, it remains a great difficulty to effectively schedule computational tasks to appropriate DCs at the edge end with low energy consumption and satisfactory user-perceived Quality of Service. It is especially true when DCs deployed over an edge environment, which can be highly inhomogeneous in terms of resource configurations and computing capabilities. To this end, we develop an edge task scheduling method by synthesizing a M/G/1/PR queuing model for characterizing the workload distribution and a Deep Deterministic Policy Gradient algorithm for yielding high-quality schedules with low energy cost. We conduct extensive numerical analysis as well and show that our proposed method outperforms state-of-the-art methods in terms of average task response time and energy consumption.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"14 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141869949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile computing power trading decision-making method for vehicle-mounted devices in multi-task edge federated learning
Pub Date: 2024-07-26 | DOI: 10.1007/s11276-024-03819-w
Huidan Zhang, Li Feng
With the development of edge computing and artificial intelligence, edge federated learning (EFL) has been widely applied in the Internet of Vehicles (IoV) owing to its distributed nature and privacy-protection advantages. In this paper, we study mobile computing power trading between edge servers (ESs) and mobile vehicle-mounted equipment (MVE) in IoV scenarios. To reduce the impact of MVE mobility, which can easily lead to single-point failures or offline nodes, we propose semi-synchronous FL aggregation. Considering that multiple federated learning (FL) tasks have different budgets and MVEs have different computing resources, we design an incentive mechanism that encourages selfish MVEs to participate actively in FL task training, yielding higher-quality FL models. Furthermore, we propose a fast association decision method based on a dynamic state space Markov decision process (DSS-MDP). Simulation data show that MVEs obtain higher-quality local models at the same energy consumption, thereby gaining higher utility. Semi-synchronous FL aggregation improves the accuracy of the FL global model by 0.764% on average and reduces MVE idle time by 90.44% compared with allocating aggregation weights according to data volume.
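The semi-synchronous aggregation rule can be sketched directly: wait until a deadline, average whatever vehicle updates arrived in time, and fall back to the previous global model if none did. The uniform weighting default and the absence of staleness handling are simplified illustrative choices.

```python
import numpy as np

def semi_sync_aggregate(global_params, updates, deadline, weights=None):
    """updates: list of (params, arrival_time). Late vehicles are skipped, so one
    offline MVE cannot stall the round (avoiding a single point of waiting)."""
    on_time = [(p, i) for i, (p, t) in enumerate(updates) if t <= deadline]
    if not on_time:
        return global_params                      # nobody made it: keep the old model
    params = [p for p, _ in on_time]
    if weights is None:
        w = np.ones(len(params))                  # uniform weighting by default
    else:
        w = np.asarray([weights[i] for _, i in on_time], dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, params))
```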
{"title":"Mobile computing power trading decision-making method for vehicle-mounted devices in multi-task edge federated learning","authors":"Huidan Zhang, Li Feng","doi":"10.1007/s11276-024-03819-w","DOIUrl":"https://doi.org/10.1007/s11276-024-03819-w","url":null,"abstract":"<p>With the development of edge computing and artificial intelligence technology, edge federated learning (EFL) has been widely applied in the Internet of Vehicles (IOV) due to its distributed characteristics and advantages in privacy protection. In this paper, we study the mobile computing power trading between edge servers (ES) and mobile vehicle-mounted equipment (MVE) in the IOV scene. In order to reduce the influence of MVEs’ flexibility, which can easily lead to single point failure or offline problem, we propose semi-synchronous FL aggregation. Considering that multiple federated learning (FL) tasks have different budgets and MVEs have different computing resources, we design an incentive mechanism to encourage selfish MVEs to actively participate in FL task training, so as to obtain higher quality FL models. Furthermore, we propose a fast association decision method based on dynamic state space Markov decision process (DSS-MDP). Simulation experiment data show that, MVEs can obtain higher quality local models at the same energy consumption, thus gaining higher utility. Semi-synchronous FL aggregation is able to improve the accuracy of FL global model by 0.764% on average and reduce the idle time of MVEs by 90.44% compared with the way of allocating aggregation weights according to the data volume.</p>","PeriodicalId":23750,"journal":{"name":"Wireless Networks","volume":"67 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141772781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}