
Latest publications in Transactions on Emerging Telecommunications Technologies

A-ASCENet: An Intelligent Adaptive and Attentional Serial Cascaded Ensemble Network With Optimization Strategy for Cybersecurity in WSN
IF 2.5, CAS Zone 4 (Computer Science), Q3 TELECOMMUNICATIONS, Pub Date: 2026-01-14, DOI: 10.1002/ett.70349
C. Sivasankar, U. Samson Ebenezar, S. Parthiban, Agoramoorthy Moorthy, V. Sarala

Cybersecurity is vital in Wireless Sensor Networks (WSNs) for securing file transfers against attackers. Cyber-physical systems (CPSs) are essential for monitoring and tracking the location of data in a WSN. Many researchers have implemented different mechanisms to improve cybersecurity in WSN-enabled CPSs; these mechanisms rely on a mobile anchor node or on the mobility of the head node, and they suffer from high computational complexity. Some traditional cybersecurity systems suffer from data loss, theft of important data, and information leakage, and CPSs additionally face service-interference issues. Black hole, gray hole, scheduling, and flooding attacks are common WSN attacks that can compromise the entire security system. WSNs also exhibit low identification rates, high computing overhead, and increased false-alarm rates, and conventional cybersecurity systems need to decrease data redundancy and increase data correlation for better data transformation. In this paper, a new cybersecurity system for WSNs is developed to detect intrusions effectively and to enhance adaptability and security. Normal and anomalous traffic is gathered from online resources and fed to the Adaptive and Attentional Serial Cascaded Ensemble Network (A-ASCENet) for detecting various intrusions. Here, a variational autoencoder, a Convolutional Neural Network (CNN), and an extreme learning machine are integrated in cascaded form to build the A-ASCENet model, and its parameters are optimized with the Revised Fitness-based Lyrebird Optimization Algorithm (RF-ILOA) to enhance detection performance. Finally, WSN attacks such as gray hole, scheduling, flooding, and black hole attacks are effectively detected, and the system's performance is compared against traditional methods on several performance metrics.
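Of the three cascaded stages, the extreme learning machine admits a compact closed-form sketch; the synthetic data, hidden-layer width, and ridge term below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class "traffic" data: 200 samples, 8 features.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Extreme learning machine: random fixed hidden layer, closed-form output weights.
n_hidden = 64
W = rng.normal(size=(8, n_hidden))       # fixed random input weights
b = rng.normal(size=n_hidden)            # fixed random biases
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden activations

# Output weights via regularized least squares (small ridge term for stability).
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ y)

# Threshold the regression output to get class labels; training accuracy is high
# on this separable toy data.
pred = (H @ beta > 0.5).astype(float)
train_acc = (pred == y).mean()
```

Only `beta` is learned; the random hidden layer is never trained, which is what makes the fit a single linear solve.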

Citations: 0
Cloud-Based Intrusion Detection With TFSEA: Utilizing Graylevel Radial Component Analysis and Threshold-Based Kernel Extreme Learning Machine
IF 2.5, CAS Zone 4 (Computer Science), Q3 TELECOMMUNICATIONS, Pub Date: 2026-01-14, DOI: 10.1002/ett.70350
Saravanan Selvaraj, K. Lalitha Devi, N. P. Ponnuviji, Santhi Subbaian

The emergence of cloud computing has revolutionized business operations by providing effective scalability and flexibility. Security concerns have intensified due to the vast amount of data processed and stored in the cloud; hence, protecting cloud infrastructure from cyber threats is crucial. Intrusion detection systems (IDS) play a pivotal role in continuously monitoring network traffic for unauthorized or malicious activity. Recent IDS research still faces issues such as low classification accuracy, high false-positive rates, and overfitting when processing heterogeneous network data. In this work, feature extraction uses graylevel radial component analysis (GRCA) to extract salient features, while dimensionality reduction is performed with radial basis function principal component analysis. A crossover-boosted dynamic cheetah optimization algorithm is employed for feature selection, integrating cheetah optimization with dynamic evolutionary strategies to improve overall search efficiency and avoid local optima. Detection and classification of intrusions are performed by a novel threshold-based kernel extreme learning machine, which uses different thresholds to enhance generalization capability. Extensive experimental and statistical analysis shows that the proposed framework achieves classification accuracy, precision, recall, F1 score, and security rate of 98.84%, 97.22%, 97%, 97.2%, and 98.85%, respectively, outperforming all other existing models. Finally, the classified data is stored in cloud infrastructure that allows third-party monitoring services to assess and analyze critical intrusions and provide threat analysis.
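A kernel extreme learning machine with a decision threshold, the core classifier named above, can be sketched in closed form; the toy data, kernel width, regularizer `C`, and threshold `tau` below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for network traffic: class 1 inside a ring, class 0 outside.
X = rng.uniform(-1, 1, size=(150, 2))
y = (np.linalg.norm(X, axis=1) < 0.6).astype(float)

def rbf(A, B, gamma=2.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ELM in closed form: alpha = (K + I / C)^-1 y.
K = rbf(X, X)
C = 10.0
alpha = np.linalg.solve(K + np.eye(len(X)) / C, y)

# Threshold-based decision: scores above tau flag the positive (intrusion) class.
tau = 0.5
scores = K @ alpha
pred = (scores > tau).astype(float)
train_acc = (pred == y).mean()
```

Sweeping `tau` trades precision against recall, which is the lever the threshold-based variant exposes.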

Citations: 0
A Novel Blockchain Framework for Digital Twin Security: Enhancing Privacy Through Optimized Key Generation
IF 2.5, CAS Zone 4 (Computer Science), Q3 TELECOMMUNICATIONS, Pub Date: 2026-01-14, DOI: 10.1002/ett.70328
Lakshmi B, Ameelia Roseline A

Digital twin technology has emerged as a key innovation in digitalization, gaining significant attention for its wide applicability across the space and manufacturing industries. Its primary goal is to enable efficient command execution and secure data access, empowering users within a virtual environment. Digital twins support functions such as real-time monitoring, data analysis, and synchronized operations. However, despite their growing adoption, critical issues of data privacy and security within digital twin systems remain underexplored. To address this, the article introduces an advanced optimization technique, Chronological_Fossa Optimization Algorithm_Secure Key Generation (CFOA_Seckeygen), for generating an optimal key that improves the security and privacy of data stored in a digital twin environment under a blockchain framework. Different entities, including the twin manager, data owner, database server, and data user, take part in an authentication process built from Exclusive OR (XOR) operations, cryptographic hashing, encryption, and keys. A secret key is then generated with CFOA_Seckeygen to strengthen both the security and the privacy of digital twin data. Furthermore, the CFOA_Seckeygen model demonstrates superior performance, achieving a communication cost of 3007.556, memory usage of 43.876 MB, a normalized variance of 0.885, and a conditional privacy score of 0.886.
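The XOR-and-hash ingredients named above can be illustrated with a minimal, hypothetical challenge step. This is not the paper's CFOA_Seckeygen protocol; the entity roles and variable names are placeholders:

```python
import hashlib
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Bytewise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical step: a data owner proves knowledge of a shared secret k
# to a twin manager without sending the nonce in the clear.
k = secrets.token_bytes(32)                  # shared secret
nonce = secrets.token_bytes(32)              # fresh challenge
masked = xor_bytes(nonce, k)                 # XOR-masked nonce sent over the wire
proof = hashlib.sha256(nonce + k).digest()   # owner's hash-based proof

# The verifier unmasks the nonce with k and recomputes the hash.
recovered = xor_bytes(masked, k)
ok = hashlib.sha256(recovered + k).digest() == proof
print(ok)  # True
```

The XOR mask hides the nonce from eavesdroppers while the hash binds it to the secret, the two roles the abstract assigns to these primitives.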

Citations: 0
Enabling A Better Learning Algorithm Compared With Machine Learning and Deep Learning Algorithms for Enhancing Security and Privacy in the Internet of Things Network
IF 2.5, CAS Zone 4 (Computer Science), Q3 TELECOMMUNICATIONS, Pub Date: 2026-01-12, DOI: 10.1002/ett.70341
Abdullah Saleh Alqahtani

The Internet of Things (IoT) is growing tremendously due to new technologies, continual advancements, and big data. With the digitization of data and continuous technological progress, network data traffic has increased significantly. This growth makes IoT networks more vulnerable to attack because of the rising number of devices and the massive amount of data they generate; IoT security is therefore an emerging research topic. The enormous volume of data poses significant challenges to privacy and cybersecurity, and the frequency of attacks grows in direct proportion to Internet usage. Intrusion Detection Systems (IDS) have proven effective at detecting attacks, malicious activities, and unauthorized access in IoT networks, helping to prevent intrusions. Furthermore, advanced AI techniques such as machine learning, deep learning, ensemble learning, and transfer learning have shown promising results in efficiently identifying intrusions, attacks, and malicious actions. This paper develops an effective IDS using machine learning and deep learning algorithms, compares their performance, and identifies the most effective algorithm for securing IoT data while preserving privacy. Random Forest, Convolutional Neural Networks, and Deep Neural Networks are implemented, tested, and compared with other machine learning algorithms, including Decision Trees, Gaussian Naïve Bayes, and XGBoost. The implementation is carried out in Python on the benchmark KDD dataset and covers data generation, preprocessing, analysis, and intrusion detection. The experimental results are compared with other state-of-the-art methods to evaluate overall performance, and metrics such as accuracy, precision, recall, and F1 score are computed for the deep learning and machine learning models on the given IoT network.
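Gaussian Naïve Bayes, one of the baselines listed, is simple enough to sketch from scratch; the two-class synthetic data below stands in for KDD-style traffic features and is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for KDD-style features: two Gaussian classes, 4 features each.
X0 = rng.normal(0.0, 1.0, size=(300, 4))   # "normal" traffic
X1 = rng.normal(2.0, 1.0, size=(300, 4))   # "attack" traffic
X = np.vstack([X0, X1])
y = np.array([0] * 300 + [1] * 300)

def fit(X, y):
    # Per-class mean, variance, and log prior.
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, np.log(len(Xc) / len(X)))
    return params

def predict(params, X):
    # Score each class by Gaussian log-likelihood plus log prior, take the max.
    scores = []
    for c, (mu, var, logp) in params.items():
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1) + logp
        scores.append(ll)
    return np.argmax(np.stack(scores, axis=1), axis=1)

params = fit(X, y)
accuracy = (predict(params, X) == y).mean()
```

The "naïve" independence assumption is what lets the per-feature variances factorize into a simple sum of log-likelihood terms.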

Citations: 0
IntelliMetro-Hybrid: A Machine Learning and Deep Learning Fusion Model for Economic Optimization in Smart Metro Systems
IF 2.5, CAS Zone 4 (Computer Science), Q3 TELECOMMUNICATIONS, Pub Date: 2026-01-09, DOI: 10.1002/ett.70334
Sijin Peng, Yongchang Wei, Zhigang Sun, Yong Chen, Jiang Huang, Hao Chen, Liuyi Chen

Accurate anomaly detection in metro systems is crucial for ensuring operational safety, minimizing costly equipment failures, and enhancing predictive maintenance strategies. Despite the promise of existing machine learning (ML) and deep learning (DL) techniques, their effectiveness is often constrained by imbalanced datasets, temporal dependencies, and heterogeneous sensor data. To overcome these challenges, we propose IntelliMetro, a novel hybrid ensemble framework that seamlessly integrates tree-based ML models with deep neural networks. IntelliMetro is rigorously evaluated against six classical ML models (XGBoost, Decision Tree, K-Nearest Neighbors, Linear Regression, Support Vector Machine, Random Forest) and three DL architectures (ANN, LSTM, CNN) on the MetroPT-3 dataset, a high-resolution multivariate time series dataset capturing sensor readings from metro air compressors. The proposed IntelliMetro system consists of two main phases: the first applies tree-based models, such as Random Forest and XGBoost, to extract salient patterns from the sensor data; the second combines these features and classifies anomalies with high accuracy using a lightweight deep neural network. Experimental results demonstrate that IntelliMetro achieves state-of-the-art performance with 98.7% accuracy, 98.3% precision, 99.3% recall, and a 99.0% F1-score, outperforming baseline models by 12%–18% in F1-score. Notably, the framework reduces training time by 37% compared to pure DL models, while preserving interpretability through feature importance analysis. Its robustness is further validated under real-world conditions, including sensor noise and temporal drift. These findings underscore IntelliMetro's potential to revolutionize predictive maintenance in transit systems by reducing unplanned downtime (a projected 22% cost saving) and enhancing passenger safety.
This work advances ensemble learning for industrial IoT applications and provides a scalable template for anomaly detection in critical infrastructure systems.
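The accuracy, precision, recall, and F1 figures quoted above all derive from the four confusion-matrix counts; a minimal helper makes the relationship explicit (the example counts are illustrative, not the paper's):

```python
def prf_metrics(tp: int, fp: int, fn: int, tn: int):
    # Accuracy, precision, recall, and F1 from confusion-matrix counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall, f1

# Illustrative counts: 99 true positives, 1 false positive,
# 1 false negative, 99 true negatives.
acc, prec, rec, f1 = prf_metrics(99, 1, 1, 99)
```

On imbalanced anomaly data, the abstract's point stands: F1 and recall are more informative than raw accuracy, which a trivial all-normal classifier can inflate.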

Citations: 0
Research on Optimal Travel Route Recommendation Algorithm Based on Time Sensitive Conditional Transition Graph Under Multiple Constraints
IF 2.5, CAS Zone 4 (Computer Science), Q3 TELECOMMUNICATIONS, Pub Date: 2026-01-08, DOI: 10.1002/ett.70313
Gangqing He, Chunyue Gao

Overcrowding at scenic spots can easily lead to safety accidents and a decline in tourists' travel experience, and designing or recommending tourist routes is an effective method of guiding passenger flow. A crowdedness measure is used to describe congestion at scenic spots, and a tourism experience utility function is proposed. On this basis, considering constraints on scenic-spot service time, travel time, and cost budget, a travel route optimization model that maximizes travel experience utility is established, and an ant colony algorithm is designed to solve it. A time-sensitive travel route recommendation method based on dynamic transition graphs is then proposed: a dynamic transition graph model is constructed via hierarchical clustering, a method for removing popular-sequence anomalies is designed, and a stable pattern law is established that accurately recommends the best tourist route for the user's travel time. Experimental verification on real data shows that, compared with existing work, the user's income increases by more than 10%, verifying the effectiveness of the proposed method.
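The ant colony step can be sketched on a toy instance of the budget-constrained route problem; the spot utilities, cost matrix, budget, and parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical instance: 5 scenic spots, a utility per spot, symmetric travel costs.
utility = np.array([4.0, 3.0, 5.0, 2.0, 3.5])
cost = rng.uniform(1, 4, size=(5, 5))
cost = (cost + cost.T) / 2
np.fill_diagonal(cost, 0.0)
budget = 8.0

def build_route(pheromone):
    # One ant: start at spot 0, probabilistically add spots while budget allows,
    # weighting each candidate by pheromone times its utility.
    route, spent, current = [0], 0.0, 0
    unvisited = set(range(1, 5))
    while unvisited:
        cand = [j for j in unvisited if spent + cost[current, j] <= budget]
        if not cand:
            break
        w = np.array([pheromone[current, j] * utility[j] for j in cand])
        j = cand[rng.choice(len(cand), p=w / w.sum())]
        spent += cost[current, j]
        route.append(j)
        unvisited.remove(j)
        current = j
    return route, utility[route].sum()

pheromone = np.ones((5, 5))
best_route, best_u = None, -1.0
for _ in range(50):                       # colony iterations
    for _ in range(10):                   # ants per iteration
        r, u = build_route(pheromone)
        if u > best_u:
            best_route, best_u = r, u
    pheromone *= 0.9                      # evaporation
    for a, b in zip(best_route, best_route[1:]):
        pheromone[a, b] += best_u         # reinforce edges of the best route
```

Evaporation plus reinforcement concentrates the search on high-utility routes without exhaustively enumerating all budget-feasible sequences.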

Citations: 0
Effective Performance Analysis of DCT OFDM-IM Using Deep Learning Detector Under Different Fading Channels
IF 2.5 Zone 4 Computer Science Q3 TELECOMMUNICATIONS Pub Date : 2026-01-07 DOI: 10.1002/ett.70346
Anusha Chilupuri, Anuradha Sundru

This work introduces a discrete cosine transform assisted index modulation scheme for orthogonal frequency division multiplexing, together with a novel signal identification technique. To exploit the design flexibility offered by twice the number of accessible subcarriers under the same bandwidth, it combines the concepts of IM and DCT-assisted Orthogonal Frequency Division Multiplexing (DCT-OFDM). The proposed study enhances the performance of DCT-OFDM-IM relative to OFDM-IM through a deep learning detector. The Deep Learning based Detector (DLD), in contrast to conventional detectors such as Maximum Likelihood (ML), Greedy Detector (GD), and Log-Likelihood Ratio (LLR), improves system performance and lowers system overhead. To detect data bits at the OFDM-IM receiver over Rayleigh, Rician, and Nakagami-m fading channels, the proposed DLD uses a Deep Neural Network with fully connected layers. First, the DLD is trained offline on datasets of simulated results to enhance BER performance; the model is then trained to recognize DCT-OFDM-IM signals at the receiver under the various fading channels. The results demonstrate that the DLD outperforms conventional approaches in BER over all multipath fading channels, and that the BER of DCT OFDM-IM improves on that of OFDM-IM.
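The subcarrier-activation idea behind DCT-OFDM-IM can be sketched as follows; the subblock size, the combination-based bit-to-index lookup, and BPSK on the active tones are illustrative assumptions rather than the paper's exact mapper, and an orthonormal DCT-II matrix stands in for the transform stage.

```python
import numpy as np
from itertools import combinations

def im_mapper(bits, n=4, k=2):
    """Map one subblock's bits onto n subcarriers, k of them activated by index bits."""
    patterns = list(combinations(range(n), k))        # legal activation patterns
    p_idx = int(np.floor(np.log2(len(patterns))))     # index bits carried per subblock
    idx_bits, sym_bits = bits[:p_idx], bits[p_idx:p_idx + k]
    active = patterns[int("".join(map(str, idx_bits)), 2)]
    x = np.zeros(n)
    for pos, b in zip(active, sym_bits):
        x[pos] = 1.0 if b else -1.0                   # BPSK symbols on active tones
    return x

def dct_matrix(N):
    # Orthonormal DCT-II matrix; because it is orthogonal, its transpose inverts it.
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5) * k / N)
    C[0, :] /= np.sqrt(2.0)
    return C

def to_time_domain(x):
    # Inverse DCT moves the real-valued subblock to the time domain.
    return dct_matrix(len(x)).T @ x
```

Because the DCT is real-valued, twice as many real subcarriers fit in the same bandwidth as with the complex DFT, which is the flexibility the index modulation exploits; a detector (ML, LLR, or the learned DLD) would then decide which tones were active and which symbols they carried.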

Citations: 0
Correction to “Cloud-Edge-End Collaborative Dependent Computing Schedule Strategy for Immersive Media”
IF 2.5 Zone 4 Computer Science Q3 TELECOMMUNICATIONS Pub Date : 2026-01-06 DOI: 10.1002/ett.70339

X. Wang, S. Yang, H. Tang, et al., “Cloud-Edge-End Collaborative Dependent Computing Schedule Strategy for Immersive Media,” Transactions on Emerging Telecommunications Technologies 36, no. 10 (2025): e70247, https://doi.org/10.1002/ett.70247.

The author list for this article has been updated. The complete author list is provided below:

“Xiaoxi Wang, Shujie Yang, Hong Tang, Xueying Li, Wei Wang, Hui Xiao, Yuxing Liu, and Jia Chen”

The online version of the article has also been updated.

Citations: 0
A Multi-Modal Healthcare Data Prediction Model With Fusion of Multi-Scale Dilated RAN With Adaptive Hybrid Deep Learning Using Improved Optimization Algorithm
IF 2.5 Zone 4 Computer Science Q3 TELECOMMUNICATIONS Pub Date : 2026-01-05 DOI: 10.1002/ett.70330
S. Kayalvizhi, S. Nagarajan, B. S. Liya, P. D. Sheba Kezia Malarchelvi

Advances in digital technologies enable more timely and effective healthcare services for patients. Multi-modal data encompasses far more information than single-modal data, and fusing and analyzing the various data types provides a more comprehensive understanding of a patient's condition. Fusing multi-modal data, however, poses several technical challenges because of data incompatibility. This research work therefore implements a deep learning-based disease prediction model using multi-modal data to generate precise prediction results for healthcare applications. Initially, the required multi-modal data, namely signals, text records, and images, are gathered from standardized benchmark data sources. The collected data are then fed to the implemented multi-modal data-based disease prediction network (MMPredNet). This network combines an adaptive hybrid deep learning network (AHDLN) with a multi-scale dilated residual attention network (MDRAN). MDRAN extracts features from the input data; the prediction is then carried out by the AHDLN model, a hybrid network formed by fusing a deep Bayesian network (DBN) with a deep shallow network (DSN). The parameters of the AHDLN are optimized with the adaptive learning rate-based dove swarm optimization (ALR-DSO) algorithm to reduce the FPR and enhance the precision, NPV, and accuracy of the prediction outcome. MMPredNet then provides the final disease prediction outcomes. The performance of the implemented multi-modal data processing model is compared against various conventional methods to showcase its effectiveness in healthcare. The developed model attains 97.31% accuracy on text data, 98.02% on images, and 97.23% on signals, improving on prior works.
Hence, the developed framework can accurately predict disease at an early stage, helping to improve patient outcomes and prevent disease progression.
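The feature-level fusion idea described above can be sketched as follows. This is a hedged stand-in only: each modality branch is reduced to one dense layer (in place of the MDRAN feature extractor), and a single softmax head replaces the AHDLN; all shapes, weights, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, w):
    # Stand-in modality branch: one dense layer with ReLU activation.
    return np.maximum(0.0, x @ w)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(signal, text, image, params):
    # Feature-level fusion: concatenate per-modality features, then classify.
    fused = np.concatenate([
        extract_features(signal, params["w_sig"]),
        extract_features(text, params["w_txt"]),
        extract_features(image, params["w_img"]),
    ], axis=-1)
    return softmax(fused @ params["w_head"])

# Hypothetical shapes: raw dims per modality -> 8 features each -> 2 classes.
params = {
    "w_sig": rng.normal(size=(16, 8)),   # signal branch
    "w_txt": rng.normal(size=(32, 8)),   # text branch
    "w_img": rng.normal(size=(64, 8)),   # image branch
    "w_head": rng.normal(size=(24, 2)),  # fused 24 features -> disease / healthy
}
```

The point of the sketch is the data flow: incompatible modalities are first mapped into a shared feature space, and only the fused representation is classified, which is what lets one head reason over signal, text, and image evidence jointly.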

Citations: 0
Optimizing Release Points for Precise Payload Delivery by UAVs Under Wind Uncertainty: A Knowledge-Based Approach Using Differential Evolution
IF 2.5 Zone 4 Computer Science Q3 TELECOMMUNICATIONS Pub Date : 2026-01-05 DOI: 10.1002/ett.70345
Ruchi Garg, Sumit Kumar

Windy conditions challenge precise payload delivery by an unmanned aerial vehicle (UAV), but wind variability bounded between lower and upper limits at any instant can assist in optimizing candidate release points. It is therefore crucial first to collect the candidate release points induced by wind variability. In this paper, knowledge of candidate release points is derived by applying the ballistic equation. That knowledge then initializes a differential evolution (DE) optimization that searches for an optimum payload release point; accordingly, the proposed method is named DE with knowledge-based initialization (DE-KI). Simulations demonstrate DE-KI's effectiveness by measuring landing error as root mean square error (RMSE), achieving an average RMSE reduction over existing methods. For instance, DE-KI outperforms two alternative approaches in average RMSE under both varying payload weight and varying wind speed.
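The two-stage idea, ballistic knowledge generating candidate release points that seed a DE search, can be sketched as follows. This is a simplified assumption-laden model: the drop is treated as drag-free, the landing error is scored against the mean in-bounds wind, and all constants and function names are illustrative rather than the paper's exact formulation.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def landing_point(release_xy, altitude, v_uav, wind):
    """Drag-free ballistic drop: fall time sets drift from UAV and wind velocity."""
    t_fall = np.sqrt(2.0 * altitude / G)
    return release_xy + (v_uav + wind) * t_fall

def candidate_release_points(target, altitude, v_uav, wind_lo, wind_hi, m=20, seed=0):
    # Stage 1 knowledge: invert the drop equation for winds sampled within bounds.
    rng = np.random.default_rng(seed)
    t_fall = np.sqrt(2.0 * altitude / G)
    winds = rng.uniform(wind_lo, wind_hi, size=(m, 2))
    return target - (v_uav + winds) * t_fall

def de_ki(target, altitude, v_uav, wind_lo, wind_hi, iters=60, F=0.7, CR=0.9, seed=0):
    # Stage 2: DE/rand/1/bin seeded with the knowledge-based candidates.
    rng = np.random.default_rng(seed)
    pop = candidate_release_points(target, altitude, v_uav, wind_lo, wind_hi, seed=seed)
    mean_wind = 0.5 * (np.asarray(wind_lo) + np.asarray(wind_hi))

    def rmse(p):  # landing error against the nominal (mean) wind
        return np.linalg.norm(landing_point(p, altitude, v_uav, mean_wind) - target)

    cost = np.array([rmse(p) for p in pop])
    for _ in range(iters):
        for i in range(len(pop)):
            a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
            trial = np.where(rng.random(2) < CR, a + F * (b - c), pop[i])
            if rmse(trial) < cost[i]:
                pop[i], cost[i] = trial, rmse(trial)
    best = pop[np.argmin(cost)]
    return best, float(cost.min())
```

Seeding the initial population from the ballistic inversion, rather than uniformly over the search area, means every first-generation individual is already a plausible release point for some admissible wind, which is the "knowledge-based initialization" the method's name refers to.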

Citations: 0