
Latest Articles in Transactions on Emerging Telecommunications Technologies

Enhanced Protection for IoT System With Intelligent Swift Scan Quantum-Resilient Intrusion Detection System (ISS-QR-IDS)
IF 2.5 | CAS Tier 4, Computer Science | Q3 TELECOMMUNICATIONS | Pub Date: 2026-01-21 | DOI: 10.1002/ett.70331
K. Vinay Bharadwaj, L. Vidya Shree

The Internet of Things (IoT) is particularly vulnerable in this new era, as IoT devices often rely on lightweight encryption and security measures due to their limited processing capabilities and power constraints. Existing signature-based intrusion detection strategies are inadequate against the advanced and adaptive nature of quantum attacks, which can exploit the polymorphic and metamorphic behavior of quantum-enhanced malware. This research addresses the challenges posed by a possible attack scenario named the “Quantum-Enhanced Cloak Malware (QECM) Attack” and aims to provide enhanced protection to IoT systems against this sophisticated threat. The proposed “Intelligent Swift Scan Quantum-Resilient Intrusion Detection System (ISS-QR-IDS)” integrates advanced techniques to enhance detection speed and accuracy, mitigate risks associated with polymorphic and metamorphic malware behaviors, and secure communication channels against quantum threats. The model incorporates the Parallel Vario-Isolation Detector, which combines Variational Autoencoders (VAEs) and Parallelized Isolation Forests to detect quantum-enhanced malware, and Hypergraph Attention Networks (HGA-Net), which leverage Hypergraph Neural Networks (HGNNs) and Graph Attention Networks (GATs) to detect critical interactions and improve anomaly detection accuracy. Additionally, post-quantum cryptographic algorithms such as NTRUEncrypt and FALCON ensure secure communication channels and data integrity. By combining these advanced techniques, ISS-QR-IDS aims to provide a robust defense mechanism against sophisticated cyber threats targeting IoT networks, ensuring their security and resilience in the face of quantum computing advancements.
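The abstract gives no implementation details; as a hedged illustration of the Parallel Vario-Isolation Detector idea (fusing reconstruction-error and isolation-based anomaly scores), here is a minimal Python sketch in which PCA stands in for the VAE branch and scikit-learn's parallelized `IsolationForest` for the forest branch. The function name and the equal-weight score fusion are assumptions, not the paper's method:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.decomposition import PCA

def hybrid_anomaly_scores(X_train, X_test, contamination=0.05, seed=0):
    """Fuse isolation-forest and reconstruction-error anomaly scores."""
    # Isolation-forest branch (trees built in parallel via n_jobs=-1)
    iso = IsolationForest(n_estimators=200, contamination=contamination,
                          n_jobs=-1, random_state=seed).fit(X_train)
    iso_score = -iso.score_samples(X_test)        # higher = more anomalous

    # Reconstruction-error branch (PCA used here as a stand-in for the VAE)
    pca = PCA(n_components=min(8, X_train.shape[1])).fit(X_train)
    recon = pca.inverse_transform(pca.transform(X_test))
    rec_score = np.mean((X_test - recon) ** 2, axis=1)

    # Normalize each branch to [0, 1] and average the two scores
    def norm(s):
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return 0.5 * norm(iso_score) + 0.5 * norm(rec_score)
```

In a real detector, the PCA branch would be replaced by a trained VAE's reconstruction likelihood, and the fusion weights would be tuned on validation traffic.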

Citations: 0
Hybrid Deep Learning and Classification Framework for Automatic Traffic Inspection Classification Based on Image Detection
IF 2.5 | CAS Tier 4, Computer Science | Q3 TELECOMMUNICATIONS | Pub Date: 2026-01-21 | DOI: 10.1002/ett.70325
Qingning Chen, Yong Zhao, Xiangsheng Luo, Wanjun He, Guanyu Shi

To address the challenges of false negatives and false positives for small objects and the difficulty of fine-grained behavior recognition in complex traffic scenarios, this paper constructs a hybrid deep learning framework based on image detection to synergistically improve multi-object localization accuracy and semantic understanding capabilities. The framework first uses a combination of Gaussian and bilateral filtering for denoising, enhancing input quality and improving detection sensitivity for small objects. In the detection phase, the YOLOv5s (You Only Look Once 5s) model is used as the baseline. The Convolutional Block Attention Module (CBAM) attention mechanism is applied to enhance the representation of key features. K-means clustering is used to adaptively generate prior anchor boxes that match the scale distribution of objects in traffic scenarios. The CIoU (Complete Intersection over Union) loss function is also used to optimize bounding box regression accuracy, improving small object detection performance while keeping the model lightweight. To achieve fine-grained semantic understanding, a two-branch classification network is designed. The attribute branch uses the ConvNeXt-Tiny (Convolutional Next-Generation Tiny) structure to extract static appearance features, while the event branch utilizes the nonlocal operations module to capture dynamic contextual dependencies. Weighted fusion of these two features enables joint recognition of attributes and behaviors. A GNN-CNN (Graph Neural Network-Convolutional Neural Network) hybrid classification module is also constructed. The GNN models the spatiotemporal interactions between vehicles, while a lightweight CNN extracts local texture features. These features are adaptively fused using the Squeeze-and-Excitation (SE) attention mechanism, and a softmax classifier performs the final traffic-behavior classification. Experiments show that the YOLOv5s-CBAM model achieves a mean average precision (mAP) of 0.55 for detecting extremely small objects (< 16 × 16). In the overloaded vehicle detection task, the GNN-CNN module achieves accuracy and recall of 0.92 and 0.90, respectively. This hybrid deep learning framework provides reliable technical support for automated traffic inspections. It improves the accuracy and stability of small object detection and fine-grained event recognition in complex traffic scenarios. Its modular design and strong scalability make it widely applicable and conducive to promoting intelligent transportation towards higher levels of automation.
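One concrete step named above, K-means clustering of ground-truth box sizes into prior anchors, can be sketched as follows. This is the generic YOLO-style anchor recipe, not the authors' exact code; the function name and area-based sorting convention are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_anchors(wh, k=9, seed=0):
    """Cluster ground-truth box (width, height) pairs into k prior anchors.

    wh : array of shape (n_boxes, 2) holding box widths and heights.
    Returns k anchor (w, h) pairs sorted by area, smallest first, so
    small-object anchors can be assigned to high-resolution heads.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors.prod(axis=1))]
```

YOLO implementations often cluster under an IoU-based distance rather than plain Euclidean distance; the sklearn version above is the simplest variant of the idea.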

Citations: 0
Efficient Intrusion Detection in Cloud Environments Using Optimized Sparse and Contractive Autoencoders
IF 2.5 | CAS Tier 4, Computer Science | Q3 TELECOMMUNICATIONS | Pub Date: 2026-01-21 | DOI: 10.1002/ett.70367
P. Sruthi Mol, N. Sathish Kumar

In recent years, cloud computing has seen enormous development in computing systems. The cloud computing environment provides diverse benefits to its users via the internet, including storage, applications, and on-demand services. Nowadays, it has been adopted by various companies for uploading massive amounts of data to a cloud platform. As a result, various cloud computing-based intrusion detection system (CIDS) techniques have been developed to protect a cloud network from attacks and to guard data against internal and external anomalous activities. Nevertheless, security and privacy concerns remain a significant challenge, which demands an effective methodology to ensure user confidentiality and integrity. Thus, we propose a Stacked Convolutional and Recurrent Contractive Sparse Autoencoder (SCRCS-AE) together with a Levy flight and reconstructed mathematical optimization acceleration-based Arithmetic Optimization Algorithm (LRMOA-AOA) for efficient intrusion detection in a cloud environment. In this paper, we stack three SCRCS-AE blocks to extract features. A single SCRCS-AE block comprises a convolutional encoder and a recurrent decoder to capture long-term dependencies and rebuild the input in forward and reverse directions, enabling effective feature extraction and classification of different intrusions. The integration of sparse and contractive losses is deployed to extract high-dimensional data features and boost the SCRCS-AE model's generalizability and robustness. The LRMOA-AOA optimization algorithm integrates a Levy flight distribution with the arithmetic optimization algorithm (AOA) to tune the hyperparameters and enhance the efficacy of the SCRCS-AE. The proposed SCRCS-AE achieved 98.73% accuracy, a 98.46% detection rate, a 1.27% false alarm rate, and 98.61% precision on the UNSW-NB15 dataset, and attained 97.93% accuracy, a 97.74% detection rate, a 2.07% false alarm rate, and 97.59% precision on the NSL-KDD dataset. These superior outcomes show that the proposed SCRCS-AE technique works well in CIDS, detecting diverse attacks with higher detection rates and lower false alarm rates to strengthen cloud network security and privacy.
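The combined sparse and contractive loss mentioned above can be written out concretely for a single tied-weight sigmoid autoencoder layer, where the contractive (encoder-Jacobian) term has a closed form. This is a generic sketch under those assumptions, not the SCRCS-AE implementation; the function name and penalty weights are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scae_loss(x, W, b_enc, b_dec, rho=0.05, beta=1.0, lam=1e-3):
    """Reconstruction + sparsity (KL) + contractive penalties for one
    tied-weight sigmoid autoencoder layer: x (n, d), W (d, k)."""
    h = sigmoid(x @ W + b_enc)              # encoder activations (n, k)
    x_hat = sigmoid(h @ W.T + b_dec)        # tied-weight decoder (n, d)
    recon = np.mean((x - x_hat) ** 2)

    # Sparsity: KL divergence between target rho and mean unit activation
    rho_hat = np.clip(h.mean(axis=0), 1e-6, 1 - 1e-6)
    kl = np.sum(rho * np.log(rho / rho_hat) +
                (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

    # Contractive: squared Frobenius norm of the encoder Jacobian;
    # for sigmoid units ||J||_F^2 = sum_j (h_j(1-h_j))^2 * sum_i W_ij^2
    contract = np.sum((h * (1 - h)) ** 2 @ (W ** 2).sum(axis=0)) / x.shape[0]

    return recon + beta * kl + lam * contract
```

In the full model this loss would be minimized by backpropagation through the stacked convolutional-recurrent blocks; here it only shows how the two regularizers combine.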

Citations: 0
OSAG-Net: Toward Accurate OSA Severity Classification Through Deep Recurrent Learning and Self-Feature Controllable-Black Window Optimization
IF 2.5 | CAS Tier 4, Computer Science | Q3 TELECOMMUNICATIONS | Pub Date: 2026-01-20 | DOI: 10.1002/ett.70361
M. Laxman Rao, K. Raja Kumar

Obstructive sleep apnea (OSA) is a sleep disorder caused by recurrent cessation of breathing during sleep, and it can lead to various health complications. Despite the availability of diagnostic methods, accurately identifying and classifying OSA severity remains challenging. This research addresses the need for an efficient and reliable automated OSA detection system using deep learning techniques. Existing problems include the complexity of OSA diagnosis, reliance on manual scoring, and variability in interpretation. The proposed OSA grading network (OSAG-Net) encompasses several steps: preprocessing of raw Electrocardiogram (ECG) data to extract relevant features, application of self-feature controllable-black window optimization (SFC-BWO) for feature selection to enhance classification performance, and utilization of a bidirectional gated recurrent neural network (BGRNN) architecture with recurrent neural networks (RNNs) and bidirectional gated recurrent units (Bi-GRU) for OSA severity classification. Preprocessing involves filtering noise and artifacts from ECG signals, followed by segmenting the data into smaller windows to extract informative features. The SFC-BWO technique selects features optimally by iteratively refining feature subsets based on classification performance, effectively reducing dimensionality and enhancing model interpretability. The RNN architecture with Bi-GRU units is employed to capture temporal dependencies in sequential data, such as ECG recordings, enabling more accurate classification of OSA severity levels. Finally, the performance of the system is validated with different metrics. The proposed OSAG-Net model improves accuracy by more than 4.77% over SVM, 3.67% over Grad-CAM, and 2.54% over CNN-LSTM, respectively. This improvement shows that the system can rapidly and effectively diagnose the disease and support treating patients accordingly.
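The windowing step described above (segmenting a filtered ECG trace into fixed-length windows and extracting summary features) might look like the following. The window length, step, sampling rate, and feature set are illustrative assumptions, not values from the paper:

```python
import numpy as np

def segment_windows(signal, fs=100, win_s=30, step_s=30):
    """Split a 1-D ECG trace into fixed-length windows and compute
    simple per-window statistics as candidate features.

    fs     : sampling rate in Hz (assumed value)
    win_s  : window length in seconds
    step_s : hop between window starts in seconds
    Returns an array of shape (n_windows, 5).
    """
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        seg = signal[start:start + win]
        feats.append([seg.mean(), seg.std(), seg.min(), seg.max(),
                      np.mean(np.abs(np.diff(seg)))])  # mean abs first diff
    return np.asarray(feats)
```

In OSAG-Net these windows would feed the SFC-BWO feature selector and then the Bi-GRU classifier; the statistics here are only placeholders for the richer features a real pipeline would extract.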

Citations: 0
Blockchain-Enhanced Hierarchical Federated Learning for Efficient and Scalable Communication in the Internet of Vehicles
IF 2.5 | CAS Tier 4, Computer Science | Q3 TELECOMMUNICATIONS | Pub Date: 2026-01-19 | DOI: 10.1002/ett.70359
Wenjie Long, Lejun Zhang, Juxia Li, Ran Guo

The Internet of Vehicles (IoV) collects real-time data on traffic, environmental conditions, and vehicle behavior through vehicle interconnection and interaction with infrastructure, providing support for the use of Machine Learning (ML) in intelligent decision-making. However, centralized learning approaches suffer from issues such as privacy leakage and high communication costs. Federated Learning (FL) addresses these issues by sharing local model updates, but in IoV environments, challenges such as data heterogeneity result in slow convergence, limited communication resources, and security threats like gradient leakage. To tackle these challenges, this paper proposes Adaptive Blockchain-based Hierarchical Federated Learning with Gradient Alignment (ABHFL). ABHFL groups vehicle nodes and roadside units (RSUs) into a hierarchical structure to perform local training, gradient alignment, and model aggregation at different levels. The proposed Adaptive Gradient Alignment (AGA) mechanism aligns the update directions of nodes toward the global optimal direction through multiple rounds of alignment after local gradient computation, accelerating model convergence and ensuring that the uploaded gradients contribute positively to global optimization. In addition, a lightweight Proof-of-Gradient-Alignment (PoGA) consensus mechanism is designed, which performs two-stage verification of the uploaded gradients and integrates reputation scores and blockchain storage to guarantee gradient reliability and protect against attacks. Extensive experiments demonstrate that ABHFL significantly improves model convergence, communication efficiency, and security reliability, providing an effective and robust solution for FL in IoV scenarios.
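As a rough sketch of the gradient-alignment idea, approximated here with a PCGrad-style projection that removes the component of a client update conflicting with a reference global direction (the abstract describes the AGA mechanism only at a high level, so this is an illustration, not the paper's algorithm; all names are hypothetical):

```python
import numpy as np

def align_gradients(local_grads, ref_grad):
    """Align each client's gradient with a reference (global) direction.

    A gradient whose dot product with ref_grad is negative is projected
    so its conflicting component along ref_grad is removed, then all
    aligned gradients are averaged into one aggregation-ready update.
    """
    aligned = []
    ref_norm2 = np.dot(ref_grad, ref_grad) + 1e-12
    for g in local_grads:
        dot = np.dot(g, ref_grad)
        if dot < 0:                             # conflicting update
            g = g - (dot / ref_norm2) * ref_grad
        aligned.append(g)
    return np.mean(aligned, axis=0)             # alignment-aware aggregate
```

By construction, the aggregated update can no longer point against the reference direction, which is one simple way to make every uploaded gradient "contribute positively" to the global objective.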

Citations: 0
Joint Power and Delay Optimization Based Resource Allocation in Mu-MIMO-OFDM System Using Optimized Enhanced Elman Spiking Sparse Graph Neural Network
IF 2.5 | CAS Tier 4, Computer Science | Q3 TELECOMMUNICATIONS | Pub Date: 2026-01-18 | DOI: 10.1002/ett.70337
Shivakumar Kagi, Kothapalli Ramesh Chandra, Sree krishnan Sreethar, Muthukumaran Dhakshnamoorthy

In general, multi-user multiple-input multiple-output orthogonal frequency division multiplexing (MU-MIMO-OFDM) allows multiple users to connect to a base station simultaneously using OFDM modulation and multiple antennas. However, managing resources such as energy and minimizing delays is difficult, requiring smart solutions for smooth operation and better performance. Thus, this manuscript proposes a joint power and delay optimization based resource allocation scheme for the MU-MIMO-OFDM system using an Enhanced Elman Spiking Sparse Graph Neural Network (EESS-GNet) with the Humboldt Squid Optimization Algorithm (HSOA) and Reuse-Based Online Joint Routing Scheduling Optimization (ROJR), abbreviated EESS-GNet-HSOA-ROJR. The proposed mechanism is performed in two stages: power allocation and delay optimization. The goal of the first stage is to maximize throughput by allocating network resources to user equipment (UEs) based on transmission rate and power through the EESS-GNet. To reduce the loss function, HSOA is used to optimize the layers of the EESS-GNet. In the second stage, ROJR is proposed for optimizing delay in the MU-MIMO-OFDM system. In the ROJR approach, the delay bound value is estimated by scheduling the transmission flows in the channel. Simulations of EESS-GNet-HSOA-ROJR were conducted using MATLAB. The suggested resource allocation algorithm's performance is assessed and compared with existing methods on different QoS metrics, including throughput, delay, fairness index, power consumption, spectrum capacity, and loss rate. The proposed approach attains 26.46%, 23.09%, and 21.98% higher throughput; 29.78%, 26.86%, and 20.25% improved energy efficiency; 17.45%, 15.98%, and 14.02% lower processing time; and 27.89%, 34.87%, and 23.56% lower loss rate than conventional approaches such as the PDO-URA, PCO-OBT, and ADNN-ALSTM-TRDA methods, respectively.
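Throughput-maximizing power allocation across OFDM subchannels is classically solved by water-filling; as background for the power-allocation stage (the textbook baseline, not the EESS-GNet method itself), a minimal sketch:

```python
import numpy as np

def water_filling(gains, p_total, noise=1.0):
    """Classic water-filling power allocation across subchannels.

    gains   : per-subchannel channel gains
    p_total : total power budget
    Pours power so better channels get more, subject to sum(p) == p_total
    and p >= 0, maximizing sum(log2(1 + g_i * p_i / noise)).
    """
    inv = noise / np.asarray(gains, dtype=float)   # effective noise floors
    inv_sorted = np.sort(inv)
    # Find the water level using the k best channels
    for k in range(len(inv), 0, -1):
        level = (p_total + inv_sorted[:k].sum()) / k
        if level > inv_sorted[k - 1]:
            break
    return np.maximum(level - inv, 0.0)
```

A learned allocator such as the proposed EESS-GNet would aim to approximate or improve on this kind of solution under additional delay and fairness constraints.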

{"title":"Joint Power and Delay Optimization Based Resource Allocation in Mu-MIMO-OFDM System Using Optimized Enhanced Elman Spiking Sparse Graph Neural Network","authors":"Shivakumar Kagi,&nbsp;Kothapalli Ramesh Chandra,&nbsp;Sree krishnan Sreethar,&nbsp;Muthukumaran Dhakshnamoorthy","doi":"10.1002/ett.70337","DOIUrl":"https://doi.org/10.1002/ett.70337","url":null,"abstract":"<div>\u0000 \u0000 <p>In general, multi-user multiple input multiple output orthogonal frequency division multiplexing (MU-MIMO-OFDM) allows multiple users to interconnect to a base station simultaneously using OFDM modulation and various antennas. However, managing resources like energy and minimizing delays is difficult, requiring smart solutions for smooth operation and better performance. Thus, a joint power and delay optimization based resource allocation using Enhanced Elman Spiking Sparse Graph Networks (EESS-Gnet) with Humboldt Squid Optimization Algorithm (HSOA) and Reuse-Based Online Joint Routing Scheduling Optimization (ROJR) (EESS-GNet-HSOA-ROJR) in MU-MIMO-OFDM system is proposed in this manuscript. The proposed mechanism is performed in two stages that are power allocation and delay optimization. The goal of the first phase is to maximize throughput by allocating network resources to user equipments (UEs) based on transmission rate and power through an EESS-Gnet. In order to reduce the loss function, HSOA is proposed to optimize the layers of EESS-Gnet. In the second stage, ROJR is proposed for optimizing delay in the MU-MIMO-OFDM system. In the ROJR approach, the delay bound value is estimated by scheduling the transmission flows in the channel. The simulations of EESS-GNet-HSOA-ROJR were conducted using MATLAB software. The suggested resource allocation algorithm's performance is assessed and contrasted with the current method of measuring different QoS metrics, including throughput, delay, fairness index, power consumption, spectrum capacity, and loss rate. 
Thus, the proposed approach has attained 26.46%, 23.09%, and 21.98% higher throughput, 29.78%, 26.86%, and 20.25% improved energy efficiency, 17.45%, 15.98%, and 14.02% lower processing time, and 27.89%, 34.87%, and 23.56% lower loss rate than other conventional approaches like PDO-URA, PCO-OBT, and ADNN-ALSTM-TRDA methods respectively.</p>\u0000 </div>","PeriodicalId":23282,"journal":{"name":"Transactions on Emerging Telecommunications Technologies","volume":"37 2","pages":""},"PeriodicalIF":2.5,"publicationDate":"2026-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146002412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
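The two-stage split described above can be illustrated with a minimal sketch. This is not the authors' EESS-GNet-HSOA-ROJR pipeline: classic water-filling stands in for the learned power allocator in stage 1, and earliest-deadline-first ordering stands in for ROJR's delay-bound scheduling in stage 2; the channel gains, power budget, and deadlines are invented numbers.

```python
import numpy as np

def water_filling(gains, p_total):
    """Allocate a total power budget across subcarriers to maximize sum-rate.

    Stand-in for the learned allocator: classic water-filling over channel
    gains g_i, maximizing sum log2(1 + g_i * p_i) subject to sum p_i = P.
    """
    g = np.sort(np.asarray(gains, dtype=float))[::-1]
    order = np.argsort(gains)[::-1]
    # Try progressively fewer active subcarriers until all powers are >= 0.
    for k in range(len(g), 0, -1):
        mu = (p_total + np.sum(1.0 / g[:k])) / k   # water level
        p = mu - 1.0 / g[:k]
        if p[-1] >= 0:
            powers = np.zeros(len(g))
            powers[order[:k]] = p                   # map back to original order
            return powers
    return np.zeros(len(g))

def edf_schedule(flows):
    """Stand-in for delay-bound scheduling: earliest-deadline-first ordering."""
    return sorted(flows, key=lambda f: f["deadline"])

gains = [2.0, 0.9, 0.3, 0.05]
p = water_filling(gains, p_total=4.0)          # weak subcarriers get no power
flows = [{"id": 1, "deadline": 9.0}, {"id": 2, "deadline": 4.0}]
ordered = edf_schedule(flows)                  # tightest deadline served first
```

In this toy run the two weakest subcarriers fall below the water level and receive zero power, which is the qualitative behavior any rate-maximizing allocator should reproduce.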
Citations: 0
Dependability Analysis of Cloud-Based VoIP Under an Advanced Persistent Threat Attack: A Semi-Markov Approach
IF 2.5 CAS Tier 4 (Computer Science) Q3 TELECOMMUNICATIONS Pub Date: 2026-01-18 DOI: 10.1002/ett.70353
Nikesh Choudhary, Vandana Khaitan

Voice over Internet Protocol (VoIP) has emerged as a game-changing communication technology, allowing low-cost long-distance conversations with plenty of additional benefits. In the era of cloud computing, VoIP can offer even cheaper calls and scalable services with the help of virtualized telephone infrastructure; the integration of virtualized telephone infrastructure with VoIP is known as "cloud-based VoIP." In this paper, we investigate a cloud-based VoIP system under an advanced persistent threat (APT) attack. An APT attack is a sophisticated cyberattack that tries to steal sensitive information by remaining in the infected system for an extended period of time, thereby degrading system dependability. "Dependability is a measure of a system's availability, reliability, maintainability, and in some cases, other characteristics such as durability, safety and security." Hence, we develop a robust mechanism for mitigating APT attacks in a cloud-based VoIP phone system and investigate its dependability to minimize the aftermath of an attack. We employ a semi-Markov process (SMP) model to study dependability because it accounts for the non-Markovian nature of the holding times of the various system states. The SMP model is then used to analyze both the time-dependent behavior and the long-term (stationary) performance characteristics of the cloud-based VoIP system, specifically in terms of availability, reliability, and confidentiality. Numerical results are displayed graphically, and the proposed dependability model is supported by stochastic simulation. The numerical results establish that the cloud-based VoIP system is most sensitive and critical when exploited by cyberattacks, and that the lifetime of the system can be extended if its weaknesses are discovered before attackers exploit them.
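The SMP analysis described above can be sketched numerically: compute the stationary distribution of the embedded transition matrix, then weight it by the mean holding times to obtain long-run state proportions and an availability figure. The four-state model, transition probabilities, and holding times below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Embedded DTMC over assumed states: 0 healthy, 1 infected (still serving),
# 2 detected/repair, 3 failed. Rows are transition probabilities at jumps.
P = np.array([
    [0.0, 1.0, 0.0, 0.0],   # healthy -> infected (an attack eventually lands)
    [0.2, 0.0, 0.6, 0.2],   # infected -> healthy / detected / failed
    [0.9, 0.1, 0.0, 0.0],   # repair -> healthy (or re-infected)
    [1.0, 0.0, 0.0, 0.0],   # failed -> healthy after rebuild
])
h = np.array([100.0, 8.0, 2.0, 6.0])   # assumed mean holding times (hours)

# Stationary distribution of the embedded chain: pi = pi P, sum(pi) = 1,
# taken from the eigenvector of P^T for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# SMP long-run proportions weight embedded-chain visits by holding times.
p = pi * h / (pi * h).sum()
availability = p[0] + p[1]   # service is delivered in states 0 and 1
```

With these toy numbers the system spends the overwhelming share of time in the healthy state, and availability lands just under 1; shrinking the infected-state holding time (faster detection) is what pushes it higher, which mirrors the qualitative conclusion of the abstract.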

Citations: 0
Energy and Deadline Aware Workflow Scheduling Based on Task Classification
IF 2.5 CAS Tier 4 (Computer Science) Q3 TELECOMMUNICATIONS Pub Date: 2026-01-18 DOI: 10.1002/ett.70369
Vidya Srivastava, Rakesh Kumar

The scalability, adaptability, and pay-per-use nature of cloud computing have driven its meteoric rise to prominence, enabling customers to access services regardless of their physical location. A major obstacle to effective resource management is the wide variety of services provided and of user needs. Owing to inadequate resource use and suboptimal scheduling tactics, cloud data centers, which consist of physical machines (PMs) hosting many virtual machines (VMs), frequently incur significant energy consumption. This study introduces a task scheduling technique to tackle energy efficiency in cloud environments. It integrates two meta-heuristic algorithms: a slack-based classification algorithm first clusters tasks and ranks them according to their criticality; critical tasks are then scheduled with the Remora Optimization Algorithm (ROA), while noncritical jobs are scheduled with Particle Swarm Optimization (PSO). Several configurations of VMs and job counts were tested in an experimental setting, and the outcomes were compared with more conventional approaches such as the Genetic Algorithm (GA) and baseline PSO. Thanks to its substantial reductions in execution time and energy consumption, the proposed approach shows promise as an efficient scheduling strategy for environmentally conscious cloud computing. Evaluations were carried out in a simulated cloud environment, incorporating different task counts and VM configurations, and the proposed mechanism underwent a comparative analysis with eight benchmark methods. The findings indicate that the proposed method is markedly superior to current techniques, achieving a 33.5% decrease in execution time (168.57 s compared to 253.47 s) and an 11%–52% reduction in energy consumption (0.653 kWh vs. a maximum of 0.852 kWh). The results validate the efficacy of the scheduling strategy in improving energy efficiency and performance within cloud computing environments.
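A minimal sketch of the slack-based split described above, assuming a simple slack threshold. The ROA/PSO optimizers themselves are omitted; the threshold, task runtimes, and deadlines are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Task:
    tid: int
    runtime: float   # estimated execution time on a reference VM
    deadline: float  # absolute deadline

def classify_by_slack(tasks, now=0.0, slack_threshold=5.0):
    """Split tasks into critical/noncritical pools by slack.

    Slack = deadline - now - runtime. Low-slack tasks are critical and
    would be handed to the heavier optimizer (ROA in the paper), the rest
    to the cheaper one (PSO). The threshold is an assumed parameter.
    """
    critical, noncritical = [], []
    for t in tasks:
        slack = t.deadline - now - t.runtime
        (critical if slack < slack_threshold else noncritical).append(t)
    # Rank critical tasks by criticality: least slack first.
    critical.sort(key=lambda t: t.deadline - now - t.runtime)
    return critical, noncritical

tasks = [Task(1, 10, 12), Task(2, 3, 40), Task(3, 8, 11)]
crit, noncrit = classify_by_slack(tasks)
```

Tasks 1 and 3 have only 2 and 3 time units of slack and land in the critical pool (least slack first), while task 2, with ample slack, goes to the noncritical pool.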

Citations: 0
An Intelligent Latency Aware DDoS Detection Framework for Secure Vehicular Ad Hoc Networks
IF 2.5 CAS Tier 4 (Computer Science) Q3 TELECOMMUNICATIONS Pub Date: 2026-01-14 DOI: 10.1002/ett.70348
Amnah Alshahrani, Nabil Almashfi, Mohammed H. Alghamdi, Ali Abdulaziz Alzubaidi, Mohammed Alahmadi, Adel Albshri, Hussain Alshahrani, Abdulbasit A. Darem

Vehicular ad hoc networks (VANETs) enable real-time communication but are vulnerable to security threats, particularly distributed denial of service (DDoS) attacks, which cause delays and network failures. Traditional static detection systems struggle to adapt to dynamic traffic conditions. To address this problem, we propose ELITE, a lightweight and intelligent DDoS detection framework designed for secure VANETs. ELITE employs a three-layer architecture featuring a random fuzzy tree (RFT) classifier, which combines the speed of decision trees with adaptive fuzzy reasoning for efficient anomaly detection. It also includes a latency-aware scheduling system that ensures urgent traffic is handled, while a few essential requests are sent to nearby edge servers or to the cloud. This work makes three distinct contributions: the integration of X and Y into a single intelligent smart-environment architecture; a delay-sensitive edge-cloud optimization model with 96% stability; and a lightweight threat detection module with improved accuracy and real-time capability. Experimental results demonstrate that ELITE achieves a high detection accuracy of 95.7%, effectively adapts to traffic changes, reduces false positives, and improves latency performance.
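The latency-aware scheduling idea can be sketched as a deadline-driven dispatcher: urgent requests are served first and kept local, the rest are offloaded. This is an illustrative toy, not the ELITE implementation, and the local and edge latency budgets are assumed values:

```python
import heapq

def dispatch(requests, local_budget_ms=5.0, edge_budget_ms=50.0):
    """Latency-aware dispatch sketch (illustrative only).

    Requests are drained in deadline order; those with tight deadlines
    are served locally, the rest go to a nearby edge server or the cloud
    depending on how much latency they can tolerate.
    """
    # Index i breaks ties so dicts are never compared by heapq.
    heap = [(r["deadline_ms"], i, r) for i, r in enumerate(requests)]
    heapq.heapify(heap)
    plan = {"local": [], "edge": [], "cloud": []}
    while heap:
        deadline, _, r = heapq.heappop(heap)
        if deadline <= local_budget_ms:
            plan["local"].append(r["id"])
        elif deadline <= edge_budget_ms:
            plan["edge"].append(r["id"])
        else:
            plan["cloud"].append(r["id"])
    return plan

reqs = [{"id": "a", "deadline_ms": 3}, {"id": "b", "deadline_ms": 30},
        {"id": "c", "deadline_ms": 400}]
plan = dispatch(reqs)
```

Here request "a" (3 ms deadline) stays local, "b" fits the edge budget, and "c" tolerates a cloud round trip, matching the three-tier split the abstract describes.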

Citations: 0
GCOA: An Effective Task Scheduling for Load Balancing in the Cloud Framework
IF 2.5 CAS Tier 4 (Computer Science) Q3 TELECOMMUNICATIONS Pub Date: 2026-01-14 DOI: 10.1002/ett.70343
Bathini Ravinder, D. Haritha, Vurukonda Naresh

Over the last few years, cloud computing has emerged as the best option for offering various applications. It can supply databases, processing, storage, development platforms, and web services to help businesses swiftly expand their infrastructure and service offerings. However, massive amounts of data severely burden the cloud computing environment. Load-balanced task scheduling has therefore remained a crucial aspect of resource distribution in a data center, ensuring that each virtual machine (VM) carries a balanced load so it can fulfill its full potential. Overloading or underloading a host or server can degrade processing speed or even cause a system crash; preventing this requires an intelligent way to schedule tasks. This paper therefore introduces a hybrid optimization algorithm, the gazelle coati optimization algorithm (GCOA), to schedule tasks in a cloud environment. The algorithm integrates the coati optimization algorithm (COA) and the gazelle optimization algorithm (GOA) to enhance the GOA's exploitation process. The main objective of this hybrid approach is to optimize scheduling, maximize VM throughput and resource utilization, and establish load balancing between VMs based on makespan, energy, and cost. The proposed approach is assessed on two real-world workloads, the Google Cloud Jobs (GoCJ) and heterogeneous computing scheduling problems (HCSP) datasets, using several performance metrics, and the results are compared with previous scheduling and load balancing methods. The experimental results show that the suggested strategy produced significant gains in makespan, energy, cost, resource utilization, and throughput (improvements of up to 10% and 60%, respectively), making it appropriate for real-world cloud infrastructures.
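The objective such a hybrid optimizer scores can be sketched as a fitness function over a task-to-VM assignment. This shows only the makespan and load-imbalance part (the paper's GCOA also weighs energy and cost); the task sizes and VM speeds are assumed values:

```python
def evaluate_assignment(task_lengths, vm_speeds, assignment):
    """Score a task-to-VM mapping the way a scheduler's fitness might.

    Returns the makespan (latest VM finish time) and a simple imbalance
    measure (busiest minus idlest VM); a load balancer wants both small.
    """
    finish = [0.0] * len(vm_speeds)
    for length, vm in zip(task_lengths, assignment):
        finish[vm] += length / vm_speeds[vm]   # execution time on that VM
    makespan = max(finish)
    imbalance = makespan - min(finish)
    return makespan, imbalance

lengths = [400, 200, 600, 300]    # task sizes (million instructions, assumed)
speeds = [100.0, 200.0]           # VM speeds (MIPS, assumed)
ms, imb = evaluate_assignment(lengths, speeds, [0, 1, 1, 0])
```

A metaheuristic such as GCOA would search over `assignment` vectors, keeping candidates whose fitness (here, makespan plus an imbalance penalty) improves across iterations.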

Citations: 0