
Latest Publications in Transactions on Emerging Telecommunications Technologies

Dependability Analysis of Cloud-Based VoIP Under an Advanced Persistent Threat Attack: A Semi-Markov Approach
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-18 · DOI: 10.1002/ett.70353
Nikesh Choudhary, Vandana Khaitan

Voice over Internet Protocol (VoIP) has emerged as a game-changing communication technology, as it allows low-cost long-distance conversations with plenty of additional benefits. In this era of cloud computing, VoIP can offer even cheaper calls and scalable services with the help of virtualized telephone infrastructure. The integration of virtualized telephone infrastructure with VoIP is known as "cloud-based VoIP." In this paper, we investigate a cloud-based VoIP system under an advanced persistent threat (APT) attack. An APT attack is a sophisticated type of cyberattack that tries to steal personal information by staying in the infected system for an extended period of time, thereby impacting system dependability. "Dependability is a measure of a system's availability, reliability, maintainability, and in some cases, other characteristics such as durability, safety and security." Hence, we develop a robust mechanism for mitigating APT attacks in a cloud-based VoIP phone system and investigate its dependability to minimize the aftermath of an attack. We employ a semi-Markov process (SMP) model to study dependability, as it accounts for the non-Markovian nature of the holding times of the various system states. The SMP model is then used to analyze both the time-dependent behavior and the long-term (stationary) performance characteristics of the cloud-based VoIP system, specifically in terms of availability, reliability, and confidentiality. Numerical results are displayed graphically, and the proposed dependability model is supported by stochastic simulation. The numerical results establish that the cloud-based VoIP system is most sensitive and critical when it is exploited by cyberattacks, and that the lifetime of the system can be extended if its weaknesses are discovered before attackers exploit them.
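
As a concrete illustration of the SMP machinery the authors rely on, the sketch below computes long-run state probabilities from an embedded transition matrix and mean holding times. The five states, the transition probabilities, and the holding times are hypothetical placeholders, not the paper's model:

```python
# Illustrative sketch (not the paper's exact model): long-run state
# probabilities of a semi-Markov process from its embedded DTMC and
# mean holding times. States and all numbers are assumptions.
import numpy as np

states = ["Healthy", "Infected", "Detected", "Failed", "Recovered"]
# Embedded DTMC transition probabilities (rows sum to 1), assumed values.
P = np.array([
    [0.0, 0.7, 0.0, 0.1, 0.2],
    [0.0, 0.0, 0.8, 0.2, 0.0],
    [0.3, 0.0, 0.0, 0.1, 0.6],
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0],
])
h = np.array([40.0, 5.0, 2.0, 8.0, 3.0])  # mean holding time per state (h), assumed

# Stationary distribution pi of the embedded chain: pi = pi P, sum(pi) = 1.
A = np.vstack([P.T - np.eye(len(states)), np.ones(len(states))])
b = np.concatenate([np.zeros(len(states)), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# SMP long-run fraction of time in state i: pi_i * h_i / sum_j pi_j * h_j.
p = pi * h / np.dot(pi, h)
# Treating Healthy and Recovered as operational states (an assumption).
availability = p[states.index("Healthy")] + p[states.index("Recovered")]
for s, v in zip(states, p):
    print(f"{s:9s} {v:.3f}")
print("steady-state availability:", round(float(availability), 3))
```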

Citations: 0
Energy and Deadline Aware Workflow Scheduling Based on Task Classification
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-18 · DOI: 10.1002/ett.70369
Vidya Srivastava, Rakesh Kumar

The scalability, adaptability, and pay-per-use nature of cloud computing have contributed to its meteoric rise to prominence, enabling customers to access services regardless of their physical location. A major obstacle to effective resource management is the wide variety of services provided and the equally wide variety of user needs. Due to inadequate resource use and suboptimal scheduling tactics, cloud data centers, which consist of physical machines (PMs) hosting many virtual machines (VMs), frequently experience significant energy consumption. This study introduces a task scheduling technique to tackle energy efficiency in cloud environments. It integrates two meta-heuristic algorithms. A slack-based classification algorithm is used first to cluster tasks and then rank them according to their criticality. To schedule critical work, we use the Remora Optimization Algorithm (ROA); for noncritical jobs, we use Particle Swarm Optimization (PSO). Several configurations of VMs and job counts were tested in an experimental setting, and the outcomes were compared to those of more conventional approaches such as the Genetic Algorithm (GA) and baseline PSO. The proposed approach shows promise as an efficient scheduling method for environmentally conscious cloud computing, thanks to its substantial reductions in execution time and energy consumption. Evaluations were carried out in a simulated cloud environment, incorporating different task counts and VM configurations. The proposed mechanism underwent a comparative analysis with eight benchmark methods. The findings indicate that the proposed method shows a marked superiority over current techniques, realizing a 33.5% decrease in execution time (168.57 s compared to 253.47 s) and an 11%–52% enhancement in energy efficiency (0.653 kWh vs. a maximum of 0.852 kWh). The results validate the efficacy of the scheduling strategy in improving energy efficiency and performance within cloud computing environments.
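
The slack-based split between critical and noncritical tasks can be sketched in a few lines. The task fields, the slack threshold, and the two scheduler stubs standing in for ROA and PSO below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of the slack-based classification described above.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    runtime: float   # estimated execution time (s)
    deadline: float  # absolute deadline (s)
    ready: float     # earliest start time (s)

def slack(t: Task) -> float:
    # Slack = time to spare if the task starts as early as possible.
    return t.deadline - (t.ready + t.runtime)

def classify(tasks, threshold=10.0):
    critical = [t for t in tasks if slack(t) < threshold]
    normal = [t for t in tasks if slack(t) >= threshold]
    critical.sort(key=slack)   # rank critical tasks: least slack first
    return critical, normal

def schedule_with_roa(tasks):  # placeholder for the Remora optimizer
    return {t.name: "fast-vm" for t in tasks}

def schedule_with_pso(tasks):  # placeholder for Particle Swarm Optimization
    return {t.name: "eco-vm" for t in tasks}

tasks = [Task("t1", 5, 12, 0), Task("t2", 3, 60, 0), Task("t3", 8, 15, 2)]
critical, normal = classify(tasks)
plan = {**schedule_with_roa(critical), **schedule_with_pso(normal)}
print(plan)
```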

Citations: 0
An Intelligent Latency Aware DDoS Detection Framework for Secure Vehicular Ad Hoc Networks
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-14 · DOI: 10.1002/ett.70348
Amnah Alshahrani, Nabil Almashfi, Mohammed H. Alghamdi, Ali Abdulaziz Alzubaidi, Mohammed Alahmadi, Adel Albshri, Hussain Alshahrani, Abdulbasit A. Darem

Vehicular ad hoc networks (VANETs) enable real-time communication but are vulnerable to security threats, particularly distributed denial of service (DDoS) attacks, which cause delays and network failures. Traditional static detection systems struggle to adapt to dynamic traffic conditions. To address this problem, we propose ELITE, a lightweight and intelligent DDoS detection framework designed for secure VANETs. ELITE employs a three-layer architecture featuring a random fuzzy tree (RFT) classifier, which combines the speed of decision trees with adaptive fuzzy reasoning for efficient anomaly detection. It also includes a latency-aware scheduling system that ensures urgent traffic is handled locally, while a few essential requests are sent to nearby edge servers or to the cloud. This work makes three distinct contributions: the integration of X and Y into one intelligent smart-environment architecture, a delay-sensitive edge-cloud optimization model with 96% stability, and a lightweight threat detection module with improved accuracy and real-time capability. Experimental results demonstrate that ELITE achieves a high detection accuracy of 95.7%, effectively adapts to traffic changes, reduces false positives, and improves latency performance.
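
To make the latency-aware dispatch idea concrete, here is a minimal sketch in which suspicious traffic is filtered, urgent traffic stays local, and other requests go to an edge server or the cloud; the anomaly-score and latency thresholds are invented for illustration and are not ELITE's actual rules:

```python
# Hedged sketch of latency-aware dispatch; all thresholds are assumptions.
def dispatch(flow):
    # flow: dict with an anomaly score in [0, 1] and a latency budget (ms)
    if flow["anomaly_score"] > 0.8:
        return "drop"       # likely DDoS traffic, filtered at the roadside unit
    if flow["latency_budget_ms"] <= 20:
        return "local"      # safety-critical: handle on the vehicle/RSU
    if flow["latency_budget_ms"] <= 100:
        return "edge"       # near-real-time: nearby edge server
    return "cloud"          # delay-tolerant analytics

flows = [
    {"id": 1, "anomaly_score": 0.95, "latency_budget_ms": 10},
    {"id": 2, "anomaly_score": 0.10, "latency_budget_ms": 15},
    {"id": 3, "anomaly_score": 0.20, "latency_budget_ms": 250},
]
for f in flows:
    print(f["id"], dispatch(f))
```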

Citations: 0
GCOA: An Effective Task Scheduling for Load Balancing in the Cloud Framework
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-14 · DOI: 10.1002/ett.70343
Bathini Ravinder, D. Haritha, Vurukonda Naresh

Over the last few years, cloud computing has emerged as the best option for offering various applications. It can supply databases, web services, processing, storage, and development platforms to help businesses swiftly expand their infrastructure and service offerings. However, massive amounts of data severely burden the cloud computing environment. Because of this, load-balanced task scheduling has remained a crucial aspect of resource distribution in a data center, ensuring that each virtual machine (VM) carries a balanced load and can fulfill its full potential. Overloading or underloading a host or server can degrade processing speed or even cause a system crash. To prevent this, an intelligent way to schedule tasks is needed. Therefore, this paper introduces a hybrid optimization algorithm called the gazelle coati optimization algorithm (GCOA) to schedule tasks in a cloud environment. This algorithm integrates the coati optimization algorithm (COA) and the gazelle optimization algorithm (GOA) to enhance the GOA's exploitation process. The main objective of this hybrid approach is to optimize scheduling, maximize VM throughput and resource utilization, and establish load balancing between VMs based on makespan, energy, and cost. The performance of the proposed approach is assessed on two real-world workloads, the Google Cloud Jobs (GoCJ) and heterogeneous computing scheduling problems (HCSP) datasets, using several performance metrics, and the results are compared with previous scheduling and load-balancing methods. The experiment results show that the suggested strategy produced significant gains in makespan, energy, cost, resource utilization, and throughput (up to 10% and 60%, respectively), making it appropriate for real-world cloud infrastructures.
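
A scheduler like GCOA needs a fitness function spanning makespan, energy, and cost. The sketch below shows one plausible weighted formulation, evaluated by brute force on a toy instance; the weights, cost model, and VM parameters are assumptions rather than the paper's objective:

```python
# Sketch of a multi-objective fitness for load-balanced task scheduling,
# of the kind a metaheuristic such as GCOA would minimize. Assumed values.
import numpy as np

def fitness(assignment, runtimes, vm_speed, vm_power_w, vm_cost_per_s,
            w_makespan=0.5, w_energy=0.3, w_cost=0.2):
    loads = np.zeros(len(vm_speed))
    for task, vm in enumerate(assignment):
        loads[vm] += runtimes[task] / vm_speed[vm]       # seconds on that VM
    makespan = loads.max()
    energy = float(np.sum(loads * vm_power_w)) / 3600.0  # watt-hours
    cost = float(np.sum(loads * vm_cost_per_s))
    return w_makespan * makespan + w_energy * energy + w_cost * cost

runtimes = np.array([8.0, 3.0, 5.0, 2.0, 7.0])  # task lengths (s at speed 1)
vm_speed = np.array([1.0, 2.0])                 # relative VM speeds
vm_power_w = np.array([90.0, 150.0])            # power draw per VM (W)
vm_cost_per_s = np.array([0.001, 0.003])        # monetary cost per second

# Brute force over all assignments: fine for 5 tasks x 2 VMs, illustration only.
best = min(np.ndindex(*(2,) * len(runtimes)),
           key=lambda a: fitness(a, runtimes, vm_speed, vm_power_w, vm_cost_per_s))
print("best assignment:", best)
```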

Citations: 0
A-ASCENet: An Intelligent Adaptive and Attentional Serial Cascaded Ensemble Network With Optimization Strategy for Cybersecurity in WSN
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-14 · DOI: 10.1002/ett.70349
C. Sivasankar, U. Samson Ebenezar, S. Parthiban, Agoramoorthy Moorthy, V. Sarala

Cybersecurity is very important in Wireless Sensor Networks (WSNs) for securing file transfers against attackers. Cyber-physical systems (CPs) are essential for monitoring and tracking the location of data in a WSN. Many researchers have implemented different mechanisms to improve cybersecurity in WSN-enabled CPs; these mechanisms are typically based on a mobile anchor node or on the mobility of the head node, and they suffer from high computational complexity. Some traditional cybersecurity systems suffer from data loss, theft of important data, and information leakage. In addition, CPs also suffer from service interference issues. Black hole, scheduling, gray hole, and flooding attacks are examples of common WSN attacks that can damage the entire WSN security system. WSNs also suffer from low identification rates, high computing overhead, and increased false alarm rates. Conventional cybersecurity systems need to decrease data redundancy and increase data correlation for better data transformation. In this paper, a new cybersecurity system for WSNs is developed to detect intrusions effectively and enhance adaptability and security. Normal and anomalous information is gathered from online resources. Initially, the gathered information is given to the Adaptive and Attentional Serial Cascaded Ensemble Network (A-ASCENet) for detecting various intrusions. Here, a variational autoencoder, a Convolutional Neural Network (CNN), and an extreme learning machine are integrated in cascaded form to develop the A-ASCENet model. Its parameters are optimized using the Revised Fitness-based Lyrebird Optimization Algorithm (RF-ILOA) to enhance cybersecurity performance. Finally, various WSN attacks, such as gray hole, scheduling, flooding, and black hole attacks, are effectively detected. The cybersecurity performance in WSNs is compared against different traditional methods using several performance metrics.
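
Of the three cascaded components, the extreme learning machine is the simplest to show in isolation. Below is a minimal, self-contained ELM on synthetic data (a fixed random hidden layer, output weights solved via pseudo-inverse); the feature sizes and labels are stand-ins for the features the upstream autoencoder and CNN would produce:

```python
# Minimal extreme learning machine: only the last stage of the cascade,
# on synthetic stand-in features. Sizes and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))             # stand-in for upstream features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic binary labels

n_hidden = 64
W = rng.normal(size=(16, n_hidden))        # fixed random input weights
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)              # hidden-layer activations

H = hidden(X)
beta = np.linalg.pinv(H) @ y               # output weights via pseudo-inverse

preds = (hidden(X) @ beta > 0.5).astype(float)
print("train accuracy:", (preds == y).mean())
```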

Citations: 0
Cloud-Based Intrusion Detection With TFSEA: Utilizing Graylevel Radial Component Analysis and Threshold-Based Kernel Extreme Learning Machine
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-14 · DOI: 10.1002/ett.70350
Saravanan Selvaraj, K. Lalitha Devi, N. P. Ponnuviji, Santhi Subbaian

The emergence of cloud computing has revolutionized business operations by providing effective scalability and flexibility. Security concerns have intensified due to the vast amount of data processed and stored in the cloud; hence, protecting cloud infrastructure from cyber threats is crucial. Intrusion detection systems (IDS) play a pivotal role in the seamless monitoring of network traffic for unauthenticated or malicious attempts. Recent advancements in IDS still exhibit certain issues, such as low classification accuracy, a high false positive rate, and overfitting when processing various kinds of network data. Feature extraction uses graylevel radial component analysis (GRCA) to extract salient features, while dimensionality reduction is performed by introducing radial basis function principal component analysis. In this work, the crossover-boosted dynamic cheetah optimization algorithm is employed in the feature selection process; it integrates cheetah optimization with dynamic evolutionary strategies to improve overall search efficiency and tackle local-optimum issues. Intrusion detection and classification are performed by proposing a novel threshold-based kernel extreme learning machine, which uses different thresholds to enhance generalization capability. Extensive experimental and statistical analysis is carried out, and the results show that the proposed framework achieves a classification accuracy, precision, recall, F1 score, and security rate of 98.84%, 97.22%, 97%, 97.2%, and 98.85%, respectively, compared with all other existing models. Finally, the classified data is stored in cloud infrastructure, which allows third-party monitoring services to assess and analyze critical intrusions and also provide threat analysis.
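
The kernel extreme learning machine at the heart of the classifier admits a compact closed form, beta = (I/C + K)^(-1) y. The sketch below implements it with an RBF kernel and a tunable decision threshold; the kernel width, regularization constant, threshold, and data are assumed values, not the paper's settings:

```python
# Compact kernel ELM with a tunable decision threshold; all hyperparameters
# and data below are illustrative assumptions.
import numpy as np

def rbf(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # synthetic labels

C = 10.0                                    # regularization constant, assumed
K = rbf(X, X)
beta = np.linalg.solve(np.eye(len(X)) / C + K, y)

def score(Xnew):
    return rbf(Xnew, X) @ beta              # real-valued decision score

threshold = 0.5                             # the "threshold-based" knob, assumed
preds = (score(X) > threshold).astype(float)
print("train accuracy:", (preds == y).mean())
```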

Citations: 0
A Novel Blockchain Framework for Digital Twin Security: Enhancing Privacy Through Optimized Key Generation
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-14 · DOI: 10.1002/ett.70328
Lakshmi B, Ameelia Roseline A

Digital twin technology has emerged as a key innovation in digitalization, gaining significant attention for its wide applicability across the space and manufacturing industries. Its primary goal is to enable efficient command execution and secure data access, empowering users within a virtual environment. Digital twins support various functions, such as real-time monitoring, data analysis, and synchronized operations. However, despite their growing adoption, critical issues related to data privacy and security within digital twin systems remain underexplored. To address this, the article introduces an advanced optimization technique, Chronological_Fossa Optimization Algorithm_Secure Key Generation (CFOA_Seckeygen), for generating an optimal key to improve the security and privacy of data stored in a digital twin environment with a blockchain framework. To this end, different entities, such as the twin manager, data owner, database server, and data user, take part in the authentication process, which is executed using different functions, such as Exclusive OR (XOR) operations, cryptographic hashing, encryption, and keys. Following this, a secret key is generated using CFOA_Seckeygen to increase the security as well as the privacy of digital twin data. The CFOA_Seckeygen model demonstrates superior performance, achieving a communication cost of 3007.556, memory usage of 43.876 MB, a normalized variance of 0.885, and a conditional privacy score of 0.886.
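
The abstract names XOR operations and cryptographic hashing as building blocks of the authentication flow. The toy derivation below combines exactly those two primitives; it is purely illustrative, is not CFOA_Seckeygen, and should not be treated as a production key-generation scheme:

```python
# Illustrative only: a toy key-derivation step using XOR mixing plus a
# cryptographic hash. NOT the paper's CFOA_Seckeygen scheme.
import hashlib
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

owner_secret = secrets.token_bytes(32)   # data owner's secret (hypothetical)
twin_nonce = secrets.token_bytes(32)     # fresh nonce from the twin manager

mixed = xor_bytes(owner_secret, twin_nonce)            # XOR mixing step
session_key = hashlib.sha256(mixed + b"dt-session").digest()  # hash step

print("session key:", session_key.hex())
```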

Citations: 0
Enabling A Better Learning Algorithm Compared With Machine Learning and Deep Learning Algorithms for Enhancing Security and Privacy in the Internet of Things Network
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-12 · DOI: 10.1002/ett.70341
Abdullah Saleh Alqahtani

The Internet of Things (IoT) is growing tremendously due to new technologies, continual advancements, and big data. With the digitization of data and continuous technological progress, network data traffic has increased significantly. This growth makes IoT networks more vulnerable to attacks because of the rising number of devices and the massive amount of data they generate. Security in IoT is one of the emerging topics in the research field. The enormous volume of data poses significant challenges to privacy and cybersecurity, and the frequency of attacks is directly proportional to Internet usage. Intrusion Detection Systems (IDS) have proven effective in detecting various attacks, malicious activities, and unauthorized access in IoT networks, helping to prevent intrusions. Furthermore, advanced AI technologies such as machine learning, deep learning, ensemble learning, and transfer learning have shown promising results in efficiently identifying intrusions, attacks, and malicious actions. This paper presents the development of an effective Intrusion Detection System using machine learning and deep learning algorithms, compares their performance, and identifies the most effective algorithm for securing IoT data while preserving privacy. Random Forest, Convolutional Neural Networks, and Deep Neural Networks are implemented, tested, and compared with other machine learning algorithms, including Decision Trees, Gaussian Naïve Bayes, and XGBoost. The implementation is carried out in Python, using the benchmark KDD dataset. This paper covers the processes of data generation, preprocessing, analysis, and intrusion detection. The experimental results are compared with other state-of-the-art methods to evaluate overall performance. Performance metrics such as accuracy, precision, recall, and F1 score are computed for the deep learning and machine learning models on the given IoT network.
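
A comparison of this kind reduces to a small evaluation harness. The sketch below, assuming scikit-learn, scores a few of the named classifiers on synthetic data and reports the same four metrics; the paper itself uses the KDD benchmark rather than generated data:

```python
# Minimal model-comparison harness (accuracy, precision, recall, F1) on
# synthetic data; the paper's evaluation uses the KDD benchmark instead.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "GaussianNB": GaussianNB(),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(Xtr, ytr)
    pred = model.predict(Xte)
    p, r, f1, _ = precision_recall_fscore_support(yte, pred, average="binary")
    print(f"{name:12s} acc={accuracy_score(yte, pred):.3f} "
          f"prec={p:.3f} rec={r:.3f} f1={f1:.3f}")
```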

Citations: 0
IntelliMetro-Hybrid: A Machine Learning and Deep Learning Fusion Model for Economic Optimization in Smart Metro Systems
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-09 · DOI: 10.1002/ett.70334
Sijin Peng, Yongchang Wei, Zhigang Sun, Yong Chen, Jiang Huang, Hao Chen, Liuyi Chen

Accurate anomaly detection in metro systems is crucial for ensuring operational safety, minimizing costly equipment failures, and enhancing predictive maintenance strategies. Despite the promise of existing machine learning (ML) and deep learning (DL) techniques, their effectiveness is often constrained by imbalanced datasets, temporal dependencies, and heterogeneous sensor data. To overcome these challenges, we propose IntelliMetro, a novel hybrid ensemble framework that seamlessly integrates tree-based ML models with deep neural networks. IntelliMetro is rigorously evaluated against six classical ML models (XGBoost, Decision Tree, K-Nearest Neighbors, Linear Regression, Support Vector Machine, Random Forest) and three DL architectures (ANN, LSTM, CNN) using MetroPT-3, a high-resolution multivariate time series dataset capturing sensor readings from metro air compressors. The proposed IntelliMetro system consists of two main phases: the first applies tree-based models, such as Random Forest and XGBoost, to extract salient patterns from the sensor data; the second combines these features and classifies anomalies with high accuracy using a lightweight deep neural network. Experimental results demonstrate that IntelliMetro achieves state-of-the-art performance with 98.7% accuracy, 98.3% precision, 99.3% recall, and a 99.0% F1-score, outperforming baseline models by 12%–18% in F1-score. Notably, the framework reduces training time by 37% compared to pure DL models while preserving interpretability through feature importance analysis. Its robustness is further validated under real-world conditions, including sensor noise and temporal drift. These findings underscore IntelliMetro's potential to revolutionize predictive maintenance in transit systems by reducing unplanned downtime (a projected 22% cost saving) and enhancing passenger safety. This work advances ensemble learning for industrial IoT applications and provides a scalable template for anomaly detection in critical infrastructure systems.
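
The two-phase fusion pattern (trees first, a small network on top) can be sketched directly. Below, two tree ensembles emit class probabilities that are concatenated with the raw features and fed to a compact MLP; scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the model sizes and data are assumptions:

```python
# Sketch of tree-then-network fusion on synthetic data; the real system
# uses MetroPT-3 sensor readings and XGBoost.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=15, random_state=42)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)

rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(Xtr, ytr)
gb = GradientBoostingClassifier(random_state=42).fit(Xtr, ytr)

def fuse(X):
    # Concatenate raw features with both tree models' probability outputs.
    return np.hstack([X, rf.predict_proba(X), gb.predict_proba(X)])

head = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=42)
head.fit(fuse(Xtr), ytr)
print("hybrid test accuracy:", head.score(fuse(Xte), yte))
```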

Citations: 0
Research on Optimal Travel Route Recommendation Algorithm Based on Time Sensitive Conditional Transition Graph Under Multiple Constraints
IF 2.5 · CAS Zone 4 (Computer Science) · Q3 TELECOMMUNICATIONS · Pub Date: 2026-01-08 · DOI: 10.1002/ett.70313
Gangqing He, Chunyue Gao

Overcrowding at scenic spots can easily lead to safety accidents and a decline in tourists' travel experience. Designing or recommending tourist routes is an effective method of guiding passenger flow. The crowdedness of scenic spots is used to describe their congestion, and a tourism experience utility function is proposed. On this basis, considering the constraints of scenic-spot service time, travel time, and cost budget, a travel route optimization model that maximizes travel experience utility is established, and an ant colony algorithm is designed to solve it. Building on this, a time-sensitive travel route recommendation method based on dynamic transition graphs is proposed: a dynamic transition graph model based on hierarchical clustering is constructed, a method for removing popular-sequence anomalies is designed, and a stable pattern law is established. The pattern law accurately recommends the best tourist route for the user's travel time. Experimental verification on real data shows that, compared with existing work, user gains increased by more than 10%, which verifies the effectiveness of the proposed method.
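
For the ant colony step, a deliberately tiny sketch: ants sample spot sequences in proportion to pheromone times utility, routes that fit the time budget deposit pheromone, and the best route is kept. All utilities, times, budgets, and ACO parameters are made-up illustration values, not the paper's model:

```python
# Toy ant-colony search maximizing experience utility under a time budget.
import random

random.seed(0)
spots = ["A", "B", "C", "D"]
utility = {"A": 5.0, "B": 3.0, "C": 4.0, "D": 2.0}  # experience utility per spot
visit = {"A": 2.0, "B": 1.0, "C": 1.5, "D": 0.5}    # dwell time at each spot (h)
travel = 1.0                                         # uniform hop time (h), assumed
budget = 5.0                                         # total time budget (h)

pher = {s: 1.0 for s in spots}                       # pheromone level per spot

def build_route():
    route, t = [], 0.0
    remaining = spots[:]
    while remaining:
        weights = [pher[s] * utility[s] for s in remaining]
        s = random.choices(remaining, weights=weights)[0]
        cost = visit[s] + (travel if route else 0.0)
        if t + cost > budget:                        # would bust the budget: stop
            break
        route.append(s)
        t += cost
        remaining.remove(s)
    return route, sum(utility[s] for s in route)

best_route, best_u = [], 0.0
for _ in range(200):                                 # ants x iterations, flattened
    route, u = build_route()
    if u > best_u:
        best_route, best_u = route, u
    for s in route:                                  # deposit pheromone on used spots
        pher[s] += 0.1 * u
    for s in spots:                                  # evaporation
        pher[s] *= 0.95

print("best route:", best_route, "utility:", best_u)
```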

Citations: 0