
Journal of Network and Computer Applications — Latest Publications

Fatriot: Fault-tolerant MEC architecture for mission-critical systems using a SmartNIC
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-29 | DOI: 10.1016/j.jnca.2024.103978
Taejune Park, Myoungsung You, Jinwoo Kim, Seungsoo Lee

Multi-access edge computing (MEC), which deploys cloud infrastructure close to end-devices to reduce latency, plays a pivotal role in mission-critical services such as smart grids, self-driving cars, and healthcare. Ensuring fault tolerance is paramount for mission-critical services, as failures in these services can lead to fatal accidents and blackouts. However, the distributed nature of MEC architectures makes them more susceptible to failures than traditional cloud systems. Existing research in this field has focused on enhancing robustness to prevent failures in MEC systems rather than on restoring systems from failure conditions. To bridge this gap, we introduce Fatriot, a SmartNIC-based architecture designed to ensure fault tolerance in MEC systems. Fatriot actively monitors MEC hosts for anomalies and seamlessly redirects incoming service traffic to backup hosts upon detecting a failure. Operating as a stand-alone solution on a SmartNIC, Fatriot guarantees the continuous operation of its fault-tolerance mechanism even during severe errors (e.g., kernel failure) on the MEC host, maintaining uninterrupted service for mission-critical applications. Our prototype of Fatriot, implemented on the NetFPGA-SUME, demonstrates effective mitigation of various failure scenarios with minimal overhead to services (less than 1%).
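The failure-handling loop described above — monitor host liveness, then steer traffic to a standby on a miss — can be sketched in a few lines. The sketch below is a hypothetical host-side illustration only (the timeout value, names, and data structures are assumptions); the actual Fatriot logic runs in hardware on the SmartNIC.

```python
import time

# Hypothetical illustration of the monitor-and-failover idea; the real Fatriot
# implements this on a SmartNIC (NetFPGA-SUME), not in host-side Python.

HEARTBEAT_TIMEOUT = 0.5  # seconds without a heartbeat before a host is deemed failed

class FailoverTable:
    """Maps each service to an active MEC host and an ordered list of backups."""
    def __init__(self):
        self.routes = {}      # service -> active host
        self.backups = {}     # service -> list of standby hosts
        self.last_seen = {}   # host -> timestamp of last heartbeat

    def register(self, service, primary, backups):
        self.routes[service] = primary
        self.backups[service] = list(backups)
        self.last_seen[primary] = time.monotonic()

    def heartbeat(self, host):
        self.last_seen[host] = time.monotonic()

    def check_and_redirect(self, service):
        """If the active host stopped responding, redirect traffic to a backup."""
        host = self.routes[service]
        if time.monotonic() - self.last_seen.get(host, 0.0) > HEARTBEAT_TIMEOUT:
            if self.backups[service]:
                new_host = self.backups[service].pop(0)
                self.routes[service] = new_host
                self.last_seen[new_host] = time.monotonic()
        return self.routes[service]

# Usage: heartbeats arrive while "mec-host-a" is healthy; once they stop,
# check_and_redirect() returns the backup instead.
table = FailoverTable()
table.register("video-analytics", "mec-host-a", ["mec-host-b"])
table.heartbeat("mec-host-a")
print(table.check_and_redirect("video-analytics"))  # "mec-host-a" while healthy
```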

Citations: 0
Anomalous state detection in radio access networks: A proof-of-concept
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-26 | DOI: 10.1016/j.jnca.2024.103979
Michael Frey, Thomas Evans, Angela Folz, Mary Gregg, Jeanne Quimby, Jacob D. Rezac

Modern radio access networks (RANs) are both highly complex and potentially vulnerable to unauthorized security-setting changes. A RAN is studied in a proof-of-concept experiment to demonstrate that an unauthorized network state is detectable at layers of the RAN architecture away from the source of the state setting. Specifically, encryption state is set at the packet data convergence protocol (PDCP) layer in the Long-Term Evolution (LTE) network model, and an anomalous cipher-OFF state is shown to be detectable at the physical layer. Three tranches of experimental data, totaling 1,987 runs with each run involving 285 measurands, were collected and used to construct and demonstrate single-feature, multi-feature, and multi-run encryption state detectors. These detectors show a range of performance, with the single-feature detector based on reference signal received quality achieving near-0% false alarms and near-100% true detections. Multi-run averaging detectors show similarly low error rates, even when based only on marginally effective detector features. The detectors' performance is studied across the three tranches of experimental data and found, by multiple complementary measures, to be generalizable provided the testbed protocol is carefully controlled. Essential to these results was an automated, comprehensively instrumented experiment testbed in which measurands were treated as distributions.
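As an illustration of the single-feature idea, the sketch below thresholds a run's mean reference signal received quality (RSRQ) against a baseline fitted on known cipher-ON runs. The z-score rule, the constant k, and the synthetic data are assumptions for exposition, not the paper's exact detector.

```python
import numpy as np

def fit_baseline(rsrq_on_runs):
    """Estimate mean/std of the per-run mean RSRQ over known cipher-ON runs."""
    means = np.array([np.mean(run) for run in rsrq_on_runs])
    return means.mean(), means.std()

def detect(run, mu, sigma, k=3.0):
    """Flag a run as anomalous when its mean RSRQ lies k sigmas from baseline."""
    return abs(np.mean(run) - mu) > k * sigma

rng = np.random.default_rng(0)
baseline_runs = [rng.normal(-10.0, 0.5, 285) for _ in range(50)]  # cipher-ON runs
mu, sigma = fit_baseline(baseline_runs)
anomalous_run = rng.normal(-8.5, 0.5, 285)  # shifted RSRQ, e.g. cipher-OFF
print(detect(anomalous_run, mu, sigma))     # True
```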

Citations: 0
A low-storage synchronization framework for blockchain systems
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-25 | DOI: 10.1016/j.jnca.2024.103977
Yi-Xiang Wang, Yu-Ling Hsueh

The advent of blockchain technology has brought major changes to traditional centralized storage, and various fields have begun to study the application and development of blockchain. However, blockchain technology has a serious shortcoming: data bloat. Blockchain achieves decentralization by storing the complete ledger at every node, which incurs a significant amount of blockchain data; each node must therefore expend a significant amount of storage space and initialization-synchronization time. To solve these problems, we propose a secure and agile synchronization framework for low-storage blockchains. First, we design a K-extreme segment algorithm, which reduces synchronization time by returning only the first and last k blocks of each block segment to local storage. Next, we store the blockchain's block data in a decentralized manner via IPFS and establish a backup mechanism via IPFS-cluster. Finally, because distributed storage is used, nodes must request un-stored block data from IPFS, which increases the load on the blockchain network. To avoid network congestion, we propose a working-set algorithm that improves the hit ratio of local storage and reduces the number of requests. Our experiments demonstrate that the ratio of full nodes to low-storage nodes can be significantly lower for nodes with higher storage limits than for those with lower limits; in other words, a higher storage limit permits more low-storage nodes while keeping the blockchain network robust and reliable. The proposed framework can therefore provide reliable low-storage nodes for the blockchain: such nodes reduce local storage pressure while still maintaining the full functionality of the blockchain.
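A minimal sketch of the K-extreme segment selection described above: for each block segment, only the first and last k blocks are kept in local storage up front, with the middle deferred to on-demand retrieval (e.g., from IPFS). Names and data layout are illustrative assumptions.

```python
def k_extreme_blocks(segment, k):
    """Return the first k and last k blocks of a segment (all of it if short)."""
    if len(segment) <= 2 * k:
        return list(segment)
    return list(segment[:k]) + list(segment[-k:])

# Two toy segments of block heights; only the extremes land in local storage,
# cutting initial synchronization volume.
chain_segments = [list(range(0, 100)), list(range(100, 200))]
local_store = [blk for seg in chain_segments for blk in k_extreme_blocks(seg, k=3)]
print(local_store)  # [0, 1, 2, 97, 98, 99, 100, 101, 102, 197, 198, 199]
```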

Citations: 0
A lightweight SEL for attack detection in IoT/IIoT networks
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-25 | DOI: 10.1016/j.jnca.2024.103980
Sulyman Age Abdulkareem, Chuan Heng Foh, François Carrez, Klaus Moessner

Intrusion detection systems (IDSs) safeguard networks by continuously monitoring data flow and taking swift action when attacks are identified. Conventional IDSs exhibit limitations, such as reduced detection rates and increased computational complexity, attributable to the redundancy and substantial correlation of network data. Ensemble learning (EL) is effective for detecting network attacks; nonetheless, its network traffic data and memory space requirements are typically significant, so deploying the EL approach on Internet-of-Things (IoT) devices with limited memory is challenging. In this paper, we use feature importance (FI), a filter-based feature selection technique for feature dimensionality reduction, to reduce the feature dimensions of an IoT/IIoT network traffic dataset. We also employ lightweight stacking ensemble learning (SEL) to identify network traffic records and analyse the reduced features after applying FI to the dataset. Extensive experiments use the Edge-IIoTset dataset containing IoT and IIoT network records. We show that FI reduces the storage space needed to store comprehensive network traffic data by 86.9%, leading to a significant decrease in training and testing time. Regarding accuracy, precision, recall, and training and test time, our classifier utilising the eight best dataset features recorded 87.37%, 90.65%, 77.73%, 80.88%, 16.18 s, and 0.10 s for its overall performance. Despite the reduced features, our proposed SEL classifier shows an insignificant accuracy compromise. Finally, we pioneer the explanation of SEL by using a decision tree to analyse its performance gain over single learners.
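The FI-then-SEL pipeline can be sketched with scikit-learn as below: impurity-based feature importances act as the filter, the top eight features are retained, and a stacking ensemble is trained on the reduced set. The synthetic data and the choice of base and meta learners are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an IoT/IIoT traffic dataset.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Filter step (FI): rank features by impurity-based importance, keep the best 8.
ranker = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top8 = np.argsort(ranker.feature_importances_)[-8:]

# Stacking ensemble (SEL) over the reduced feature set.
stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=8)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr[:, top8], y_tr)
print("accuracy on 8 selected features:", stack.score(X_te[:, top8], y_te))
```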

Citations: 0
A bandwidth delay product based modified Veno for high-speed networks: BDP-Veno
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-25 | DOI: 10.1016/j.jnca.2024.103983
Subhra Priyadarshini Biswal, Sanjeev Patel

In recent years, we have seen a significant enhancement in the performance of standard Transmission Control Protocol (TCP) congestion control algorithms. Packet drops and high round-trip time (RTT) are indications of network congestion, and many congestion control mechanisms have been proposed to achieve increased throughput and reduced latency. We have reviewed many TCP congestion control algorithms discussed in the literature. The limitation of the existing work is a trade-off between throughput, loss ratio, and delay: no single algorithm can outperform all existing algorithms on every performance measure. We attempt to achieve the best performance while our proposed algorithm competes with CUBIC and Bottleneck Bandwidth and Round-trip propagation time (BBR). According to the results observed in the literature, TCP Veno dominates the other existing algorithms. We propose a bandwidth-delay product (BDP) based TCP congestion control algorithm (BDP-Veno) that modifies Veno to incorporate information about the bottleneck's BDP. The proposed algorithm is implemented in ns-2. Moreover, we analyze the performance of standard TCP congestion control algorithms under different network scenarios. Our proposed algorithm performs better than other existing TCP congestion control schemes, such as Reno, NewReno, BIC, CUBIC, Vegas, Veno, and Compound TCP, in terms of average throughput in most scenarios. In Scenario 1, our proposed algorithm improves throughput over Veno by 57%. Further, we have compared throughput with BBR using ns-3, where we achieve throughput comparable to BBR.
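To make the mechanism concrete, the sketch below combines Veno's queue-backlog estimate N = cwnd × (RTT − BaseRTT)/RTT with a BDP term (bandwidth estimate × BaseRTT), in the spirit of the modification described above. The constants and branch structure are assumptions for exposition; the exact BDP-Veno update rules are defined in the article.

```python
BETA = 3  # Veno's backlog threshold, in packets

def on_ack(cwnd, rtt, base_rtt, bw_est):
    """Additive increase, paced less aggressively once cwnd exceeds the BDP."""
    bdp = bw_est * base_rtt                  # bottleneck capacity in packets
    backlog = cwnd * (rtt - base_rtt) / rtt  # Veno's queue-backlog estimate N
    if backlog < BETA and cwnd < bdp:
        return cwnd + 1.0 / cwnd             # link under-utilized: grow ~1 MSS/RTT
    return cwnd + 1.0 / (2.0 * cwnd)         # near capacity: grow every other RTT

def on_loss(cwnd, rtt, base_rtt):
    """Random loss gets a gentler backoff than congestive loss, as in Veno."""
    backlog = cwnd * (rtt - base_rtt) / rtt
    return cwnd * 4 / 5 if backlog < BETA else cwnd / 2

# Usage: cwnd evolves per ACK and shrinks on loss events.
cwnd = 10.0
cwnd = on_ack(cwnd, rtt=0.055, base_rtt=0.050, bw_est=1000)
cwnd = on_loss(cwnd, rtt=0.080, base_rtt=0.050)
print(round(cwnd, 2))
```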

Citations: 0
CIBORG: CIrcuit-Based and ORiented Graph theory permutation routing protocol for single-hop IoT networks
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-25 | DOI: 10.1016/j.jnca.2024.103986
Alain Bertrand Bomgni, Garrik Brel Jagho Mdemaya, Miguel Landry Foko Sindjoung, Mthulisi Velempini, Celine Cabrelle Tchuenko Djoko, Jean Frederic Myoupo

The Internet of Things (IoT) has emerged as a promising paradigm that facilitates the seamless integration of physical devices and digital systems, thereby transforming multiple sectors such as healthcare, transportation, and urban planning. This paradigm is also known as ad-hoc networking. An IoT network is characterized by numerous pieces of equipment called objects, which have heterogeneous and limited capacities in terms of battery, memory, and computing power. These limited capabilities make it difficult to design routing protocols for IoT networks, given the high number of objects in a network. In IoT, objects often hold data that does not belong to them and should be sent to other objects, leading to what is known as the permutation routing problem; the problem is solved once every object has received its own items. In this paper, we propose a new approach to the permutation routing problem in single-hop IoT networks. To this end, we first represent an IoT network as an oriented graph and, based on a channel-reservation protocol, define a permutation routing protocol for an IoT over a single channel; we then generalize this protocol to work over multiple channels. Routing is done using graph-theoretic approaches. The obtained results show that the wake-up times and activities of IoT objects are greatly reduced, thus optimizing network lifetime, making this an effective solution to the permutation routing problem in IoT networks. The proposed approach considerably reduces energy consumption and computation time, saving 5.2% to 32.04% of residual energy depending on the number of items and channels used. The low energy and computational cost demonstrate that the circuit-based, oriented-graph approach performs better than the state-of-the-art protocol and is therefore a better candidate for solving the permutation routing problem in single-hop environments.
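One way to picture the oriented-graph view used above: treat "object i holds an item destined for object p(i)" as a directed edge, so the permutation decomposes into circuits, and routing around each circuit delivers every item. The sketch below shows only this abstraction, not the protocol's channel-reservation mechanics.

```python
def circuits(p):
    """Decompose permutation p (item held by i is destined for p[i]) into cycles."""
    seen, cycles = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        cycle, node = [], start
        while node not in seen:          # follow edges i -> p[i] until we loop back
            seen.add(node)
            cycle.append(node)
            node = p[node]
        cycles.append(cycle)
    return cycles

# Object 0 holds an item for 2, object 2 for 1, object 1 for 0: one circuit;
# objects 3 and 4 swap items: a second circuit.
print(circuits([2, 0, 1, 4, 3]))  # [[0, 2, 1], [3, 4]]
```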

Citations: 0
HRMF-DRP: A next-generation solution for overcoming provisioning challenges in cloud environments
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-24 | DOI: 10.1016/j.jnca.2024.103982
Devi D, Godfrey Winster S

The cloud computing infrastructure is a distributed environment, and existing research suffers from provisioning problems such as suboptimal resource utilization and high execution time. The Heterogeneity Resource Management Framework for Dynamic Resource Provisioning (HRMF-DRP) is proposed to address task scheduling and workload management. This framework incorporates advanced algorithms for dataset preprocessing, task clustering, workload prediction, and dynamic resource provisioning. Real-world workload traces captured from the Planet Lab dataset are taken as input for the preprocessing stage, which ensures data quality and reliability through missing-data handling, outlier detection and removal, and standardization and normalization. Tasks are grouped into clusters using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) model, which categorizes data points into core points, border points, and noise points based on their density. Temporal dependencies are captured for workload prediction using a Long Short-Term Memory (LSTM) neural network, and a Gaussian Mixture Model (GMM) estimates the number of virtual machines (VMs) in the workload prediction process. A Self-Adaptive Genetic Algorithm (SAGA) is implemented for task mapping; it adjusts its parameters to changing workload patterns, contributing adaptability and robustness. Experimental evaluations consider task completion time, workload balance index, resource utilization efficiency, and workload prediction accuracy. The proposed model achieved a workload prediction accuracy of 98.5%, a cost of $89.6, an execution time of 125 ms, a Task Completion Time (TCT) of 40 ms, a Workload Balance Index (WBI) of 0.96, and a Resource Utilization Efficiency (RUE) of 0.93. These quantitative results collectively position HRMF-DRP as a practical and efficient solution, promising advancements in dynamic resource provisioning for cloud computing, particularly within the Infrastructure as a Service (IaaS) cloud model.
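As an illustration of the task-clustering stage, the sketch below applies scikit-learn's DBSCAN to toy two-feature task descriptors; DBSCAN assigns cluster ids to core and border points and labels sparse stragglers as noise (-1). The features, eps, and min_samples here are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Toy task descriptors: (CPU demand, duration in seconds).
tasks = np.vstack([
    rng.normal([0.2, 30], [0.05, 5], (100, 2)),    # light, short tasks
    rng.normal([0.8, 300], [0.05, 30], (100, 2)),  # heavy, long tasks
    rng.uniform([0, 0], [1, 400], (10, 2)),        # scattered stragglers
])

# Scale features so density is comparable across dimensions, then cluster.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(
    StandardScaler().fit_transform(tasks))
print("clusters:", sorted(set(labels) - {-1}),
      "noise points:", int((labels == -1).sum()))
```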

Citations: 0
Blockchain applications in UAV industry: Review, opportunities, and challenges
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-23 | DOI: 10.1016/j.jnca.2024.103932
Diana Hawashin, Mohamed Nemer, Senay A. Gebreab, Khaled Salah, Raja Jayaraman, Muhammad Khurram Khan, Ernesto Damiani

In recent years, the application of blockchain technology in the Unmanned Aerial Vehicle (UAV) industry has shown promise in making a substantial impact on various aspects of the field. Blockchain can provide key solutions to several challenges related to security, data integrity, and operational efficiency within UAV systems. In this paper, we conduct an in-depth investigation of the transformative role of blockchain in the UAV industry. Through a comprehensive literature review, we examine the potential impact and applications of blockchain technology in this field, with a particular focus on its capacity to address challenges across the manufacturing, planning, and operational phases of UAV systems. We explore how blockchain implementation within UAV networks enhances secure data traceability within supply chain processes and facilitates more efficient flight operations management. Our findings reveal that blockchain technology significantly improves data traceability and operational efficiency in UAV systems, offering robust solutions to challenges related to trust, transparency, data integrity, and access control within UAV networks, thereby enhancing overall system reliability and performance. Furthermore, we highlight some of the future potential opportunities and use cases for blockchain in the UAV industry, including real-time data management and decentralized verification mechanisms. We discuss the primary challenges obstructing the widespread adoption of blockchain in this industry and also propose some future research directions.

Citations: 0
Exploiting web content semantic features to detect web robots from weblogs
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-22 | DOI: 10.1016/j.jnca.2024.103975
Rikhi Ram Jagat, Dilip Singh Sisodia, Pradeep Singh

Nowadays, web robots are predominantly used for auto-accessing web content, accounting for almost one-third of total web traffic and often posing threats to the security, privacy, and performance of various web applications. Detecting these robots is essential, and both online and offline methods are employed. One popular offline method is weblog feature-based automated learning. However, this method alone cannot accurately identify web robots that continuously evolve and camouflage themselves. Web content features combined with weblog features are used to detect such robots, based on the assumption that human users exhibit specific interests while robots navigate web pages randomly. State-of-the-art web content-based feature methods lack the ability to generate coherent topics, which can confound the performance of classification models. Therefore, we propose a new content semantic feature extraction method that uses the LDA2Vec topic model, combining the strengths of LDA and the Word2Vec model to produce more semantically coherent topics by exploiting the website content of a web session. To detect web robots effectively, web resource content semantic features are combined with log-based features in the proposed web robot detection approach. The approach is evaluated on an e-commerce website's access logs and content data. The F-score, balanced accuracy, G-mean, and Jaccard similarity are used as performance measures, and the coherence score metric is used to determine the number of topics for a session. Experimental results demonstrate that a combination of weblog and content semantic features is effective for web robot detection.
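The feature-fusion step can be sketched as below. Plain LDA from scikit-learn stands in for the paper's LDA2Vec (which couples LDA-style topics with word2vec embeddings); the session texts and the two log features are toy assumptions. The fused vector would feed any downstream classifier.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Concatenated page text per session: a focused (human-like) session and a
# scattered (robot-like) session.
session_pages = [
    "running shoes trail running shoes sale",
    "laptop blender shoes garden hose tv cable",
]
counts = CountVectorizer().fit_transform(session_pages)
topics = LatentDirichletAllocation(n_components=2,
                                   random_state=0).fit_transform(counts)

# Toy log-based features per session, e.g. mean inter-request time and HTML ratio.
log_features = np.array([[12.0, 0.9],
                         [0.3, 0.1]])

X = np.hstack([topics, log_features])  # one fused feature vector per session
print(X.shape)  # (2, 4) -- ready for any downstream classifier
```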

Citations: 0
CGSNet: Cross-consistency guiding semi-supervised semantic segmentation network for remote sensing of plateau lake
IF 7.7 | CAS Zone 2 (Computer Science) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2024-07-20 | DOI: 10.1016/j.jnca.2024.103974
Guangchen Chen, Benjie Shi, Yinhui Zhang, Zifen He, Pengcheng Zhang

Analyzing the geographical information of the Plateau Lake region with remote sensing images (RSI) is an emerging technology for monitoring changes in the ecological environment. To alleviate the requirement for abundant labels in supervised RSI segmentation, the Cross-consistency Guiding Semi-supervised Learning (SSL) Semantic Segmentation Network is proposed; it can perform high-quality multi-category semantic segmentation of complex remote sensing scenes with a limited quantity of labeled images. First, within the SSL semantic segmentation framework, a teacher model is trained via the cross-consistency method on few annotated images and plentiful unannotated images, and it then generates higher-quality pseudo-labels to guide the learning process of the student model. Second, a dense conditional random field and mask hole repair are used to patch and fill flawed areas of the pseudo-labels based on pixel features of position, color, and texture, further improving the granularity and reliability of the student model's training dataset. Additionally, to improve model accuracy, we designed a strong data augmentation (SDA) method based on a stochastic cascaded strategy, which chains multiple augmentation techniques in random order, each applied with its own probability, to generate new training samples. It mimics a variety of image transformations and noise conditions that occur in the real world to enhance robustness in complex scenarios. To validate the effectiveness of CGSNet in complex remote sensing scenes, extensive experiments are conducted on a self-built plateau-lake RSI dataset and two public multi-category RSI datasets. The experimental results demonstrate that, compared with other state-of-the-art SSL methods, the proposed CGSNet achieves the highest scores of 77.47% mIoU and 87.06% F1 with a limited quantity of annotated data.
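The stochastic cascaded augmentation strategy can be sketched as below: the available transforms are chained in a random order per sample, each firing with its own probability. The transform set and probabilities are illustrative assumptions, not the paper's exact SDA configuration.

```python
import random

def make_sda(transforms):
    """transforms: list of (fn, prob) pairs. Returns an augmentation callable."""
    def augment(image):
        ops = transforms[:]
        random.shuffle(ops)             # random cascade order for every sample
        for fn, prob in ops:
            if random.random() < prob:  # each op fires with its own probability
                image = fn(image)
        return image
    return augment

# A toy "image" (list of pixel values) keeps the sketch self-contained; real
# transforms would operate on arrays or tensors.
sda = make_sda([
    (lambda x: x[::-1], 0.5),                 # horizontal flip
    (lambda x: [v * 1.1 for v in x], 0.8),    # brightness jitter
    (lambda x: [round(v, 1) for v in x], 0.3) # quantization noise
])
print(sda([1.0, 2.0, 3.0]))
```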

Citations: 0