
Internet Technology Letters: Latest Publications

Large Model-Based Experiential Landscape Design in Wireless Networks
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-21 DOI: 10.1002/itl2.70165
Tong Li

To address the problems of dynamic user experience optimization, adaptive resource allocation, and personalized service provisioning in modern wireless communication networks, this letter proposes a large model-based experiential landscape design method built on contrastive learning and pre-trained transformer architectures. Given the high complexity and heterogeneity of wireless environments, and the insufficient performance of traditional optimization methods on experiential landscape design tasks, wireless signal sequences are first transformed into token-like representations similar to those used in natural language processing. A pre-trained transformer model then converts these shallow representations into universal wireless experience representations suitable for various downstream landscape design tasks. By recasting the experiential landscape optimization problem as a similarity analysis problem, a diversity-sensitive transformer architecture is designed on contrastive learning principles: positive and negative sample pairs of wireless environments sharpen the model's sensitivity to experience differences, and information noise-contrastive estimation (InfoNCE) serves as the loss function for fine-tuning on downstream landscape design tasks. Experimental results demonstrate that the proposed method outperforms mainstream approaches in user satisfaction accuracy, quality-of-experience precision, service continuity recall, and F1 score, achieving 89.73% accuracy in wireless experiential landscape classification.
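The InfoNCE objective named above can be sketched classically. The following is a minimal illustration of a contrastive loss over one anchor with one positive and several negative samples; the function name, embedding vectors, and temperature value are illustrative, not taken from the paper:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: pull the positive sample close and
    push the negatives away, scoring pairs by cosine similarity."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # cross-entropy, positive at index 0
```

A well-aligned positive pair yields a loss near zero, while an anchor matched with a dissimilar "positive" yields a large loss, which is the signal used for fine-tuning.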

Citations: 0
Semantic Sensor Analysis in MOOC System via Deep Learning
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-21 DOI: 10.1002/itl2.70166
Bifeng Li

With the application of Internet of Things (IoT) technology, online education has developed rapidly: users can flexibly obtain learning resources on the MOOC learning platform and take courses online. The development of the IoT enables all kinds of systems with sensing functions to stay continuously connected to the Internet, and the interconnected data pushes education toward digitalization and intelligence. Semantic sensing associates semantic technology with the large number of sensors in the IoT, providing effective technical means for data representation, management, and sharing, and a theoretical basis for knowledge-based intelligent processing of semantic sensor data. This paper analyzes two conventional deep learning algorithms, the connectionist text proposal network (CTPN) and the convolutional recurrent neural network (CRNN), and combines them into an improved algorithm. The improved algorithm recognizes text more accurately and can be used in the IoT to extract spatial features from semantics; it resolves contextual relationships between texts, captures temporal features, and effectively handles the complexity and diversity of semantic sensor data in the MOOC system. Compared with the traditional algorithms, the improved algorithm achieves higher accuracy and faster recognition speed.
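As one concrete piece of a CTPN-plus-CRNN pipeline, a CRNN typically finishes with greedy CTC decoding, collapsing per-frame label predictions into a text sequence. A minimal sketch follows; the frame probabilities and label indices in the test are invented for illustration, not from the paper:

```python
import numpy as np

def ctc_greedy_decode(frame_probs, blank=0):
    """Greedy CTC decoding as used at a CRNN's output: take the argmax
    label per time frame, collapse consecutive repeats, drop blanks."""
    best = [int(p.argmax()) for p in frame_probs]  # per-frame argmax labels
    out, prev = [], None
    for label in best:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out
```

With per-frame argmaxes `[1, 1, 0, 2, 2]` (blank = 0), the decoder emits `[1, 2]`: the repeated 1s collapse, the blank separates the two runs.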

Citations: 0
Edge Name Routing Meets Core IP Routing for IoT Networks
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-18 DOI: 10.1002/itl2.70142
Zhiwei Yan, Sherali Zeadally, Hidenori Nakazato

The rapid advances of communication technologies have fueled significant IoT growth, with increasingly interconnected smart devices generating massive data. While various naming services and routing schemes have been proposed, existing solutions remain use-case specific: Information-Centric Networking (ICN)/Named Data Networking (NDN) faces scalability challenges in edge networks, hybrid designs sacrifice naming flexibility, and a holistic architecture for future IoT networks is still lacking. To address these challenges, we propose the Address Name Transfer Network (ANT-Net), a novel architecture that uniquely combines seamless integration of heterogeneous naming services with enhanced data retrieval efficiency. ANT-Net employs name-based routing at the edge network and IP-based routing within the core network, maintaining full TCP/IP compatibility for both naming and routing services. Furthermore, the distinct separation between edge and core networks allows different routing strategies to be implemented in each, which can significantly improve data sharing and overall communication efficiency.
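The edge/core split can be illustrated with a toy gateway that resolves hierarchical content names to IP next-hops by longest-prefix match, so the edge routes by name while the core sees only IP. Everything here (class name, name prefixes, addresses) is a hypothetical sketch, not ANT-Net's actual mechanism:

```python
class EdgeGateway:
    """Toy edge gateway: resolves '/'-separated content names to the IP
    address of a serving node, then hands the packet to the IP core."""

    def __init__(self):
        self.name_table = {}                     # name prefix -> IP address

    def register(self, name_prefix, ip):
        self.name_table[name_prefix] = ip

    def route(self, name):
        # Longest-prefix match over name components, most specific first.
        parts = name.split('/')
        for i in range(len(parts), 0, -1):
            prefix = '/'.join(parts[:i])
            if prefix in self.name_table:
                return self.name_table[prefix]   # hand off to IP-based core
        return None                              # name cannot be resolved
```

The longest-prefix lookup mirrors how IP forwarding tables work, which is one way a name-based edge can stay compatible with an unchanged TCP/IP core.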

Citations: 0
Research on the Digital Transformation of Private Enterprises Based on the Industrial Internet of Things
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-17 DOI: 10.1002/itl2.70150
Fan Yang

With the rapid development of the global digital economy, digital transformation has become an essential path for enterprises to strengthen their market position and ensure sustainable growth. As an important force in China's economic development, private enterprises face both rare opportunities and unique challenges in promoting digital transformation. This study focuses on the industrial internet of things (IIoT), a key digital technology, and conducts an in-depth analysis of case studies of private enterprises' IIoT-based transformation and upgrading through a "technology–organization–environment" framework. It finds that IIoT technology can effectively enhance the operational efficiency of private enterprises; however, successful digital transformation requires the coordinated advancement of technological support and organizational change. The study therefore proposes a step-by-step, modular implementation path tailored to the characteristics of private enterprises, providing theoretical guidance and development references for IIoT-based digital transformation in private enterprises.

Citations: 0
A Study on the Coupled Network of Quality Chain Risk Propagation Based on MHCM and Industry 5.0
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-17 DOI: 10.1002/itl2.70148
Peng Dong, Ge Han, Luwen Yuan

In response to the challenges of information fragmentation, ambiguous risk propagation mechanisms, and insufficient cross-stage collaboration in equipment full-lifecycle quality management, this study redefines the equipment quality chain in the context of Industrial Internet of Things (IIoT) technology enabling the interconnection of all equipment elements. By integrating real-time data collection technology from the IIoT, a quality risk propagation function is established to quantify single-factor sensitivity and multi-node coupling effects, revealing the laws of deviation accumulation and cascading amplification in risk propagation. An innovative fusion of the Multilayer Hypergraph Coupling Model (MHCM) and IoT technology is used to construct a risk propagation coupling network model, uncovering the general principles of risk propagation in the equipment quality chain. Monte Carlo simulation is used to identify key sensitive nodes and to validate the rationality of the quality risk propagation function. On this basis, quality management strategies adapted to the resilient manufacturing requirements of Industry 5.0 are proposed, forming a full-chain control system and providing a quantifiable, traceable, and interceptable quality management paradigm for complex equipment systems, ultimately enhancing equipment effectiveness and economy.
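The Monte Carlo identification of sensitive nodes can be illustrated with a small reachability simulation over a propagation network. The graph, transmission probability, and function below are hypothetical stand-ins for the paper's quality risk propagation function and MHCM network:

```python
import random

def propagate_risk(adj, p_transmit, seed_node, trials=2000, rng=None):
    """Monte Carlo estimate of how often each node is reached when risk
    starts at seed_node and spreads along directed edges, each edge
    transmitting independently with probability p_transmit."""
    rng = rng or random.Random(0)                # fixed seed for repeatability
    hits = {n: 0 for n in adj}
    for _ in range(trials):
        reached, frontier = {seed_node}, [seed_node]
        while frontier:
            node = frontier.pop()
            for nxt in adj[node]:
                if nxt not in reached and rng.random() < p_transmit:
                    reached.add(nxt)
                    frontier.append(nxt)
        for n in reached:
            hits[n] += 1
    return {n: hits[n] / trials for n in adj}    # empirical reach probability
```

Ranking nodes by their estimated reach probability (or by how much removing them lowers others' probabilities) is one simple way such a simulation can flag key sensitive nodes.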

Citations: 0
Leveraging AI in 5G Networks for Estimating Signal-to-Interference-and-Noise Ratio
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-17 DOI: 10.1002/itl2.70098
Gulzat Ziyatbekova, Saurabh Jain, Deepak Dasaratha Rao, Manpreet Singh, Tusha, Gunveen Ahluwalia

A key role for artificial intelligence (AI) is anticipated in the era of 5G networks, and effective radio resource management has become increasingly important for network operators. However, as new technologies, network topologies, and sophisticated equipment are integrated ever faster, it becomes harder to allocate enough radio resources for precise channel condition assessment in mobile networks; predicting channel conditions automatically helps to use resources effectively. This research presents the mutated gray wolf-driven spiking neural network (MGW-SNN) model, a machine learning method that estimates the signal-to-interference-and-noise ratio (SINR) from the location of a cyber-physical system (CPS): the model uses the present position of the CPS to forecast the SINR. Data are first collected to validate the suggested algorithms, then preprocessed with min–max normalization and noise reduction. The parameters of the spiking neural network (SNN) structure are optimized using the gray wolf optimization (GWO) method to improve the network's performance. The research was implemented in MATLAB. The proposed strategy is validated in terms of accuracy (93%), RMSE (1.35), R² (0.99), MAE (0.75), and average SINR (10 dB), showing that it is effective in predicting the SINR. The approach keeps the computational cost stable while increasing accuracy, making it suitable for real-time applications in dynamic network environments.
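The role of GWO as a black-box hyperparameter tuner can be sketched in a few lines. This is a generic grey wolf optimizer minimizing an arbitrary objective (a sphere function in the example), not the paper's MATLAB implementation or its SNN objective:

```python
import numpy as np

def gwo_minimize(f, dim=2, wolves=10, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimal grey wolf optimizer: the pack is pulled toward the three
    best wolves (alpha, beta, delta), with exploration shrinking over time."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (wolves, dim))       # initial wolf positions
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        a = 2 - 2 * t / iters                    # linearly decreasing coefficient
        for i in range(wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])    # distance to the leader
                new += leader - A * D
            X[i] = np.clip(new / 3, lb, ub)      # average pull of the three leaders
    fit = np.array([f(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())
```

In a setting like the paper's, `f` would wrap an SNN training run and return a validation error, so the wolves search the hyperparameter space instead of a toy function.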

Citations: 0
Quantum Computing Method for Prediction of the Cardiovascular Disease
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-14 DOI: 10.1002/itl2.70059
M. Gayathri, R. Thilagavathy, M. Pushpalatha, Raghvendra Kumar, Kusum Yadav, Lulwah M. Alkwai

Cardiovascular complications pertain to damage done to the myocardium, the arteries, and other parts of the circulatory system. Important ischemic heart disease (IHD) risk factors include certain health assessment metrics, age, and gender. In the prediction of cardiac diseases, and in other hard decisions involving analytics over large volumes of medical data, quantum computation techniques have become a staple of the healthcare field, and recent studies focus on applying such procedures in quantum learning (QL) algorithms. For heart disease diagnosis, this work proposes a set of QL algorithms, including optimized quantum qubit vector learning (OQQVL) and quantum stacked convolutional neural networks (QSCNN), which are computationally lightweight. By incorporating strong quantum learning-based preprocessing and feature clustering techniques such as quantum K-means, the proposed models' prediction reliability and accuracy are sharpened while robustness is preserved. The proposed models are compared against more sophisticated recently published models using available performance metrics. With the proposed techniques, the models achieved the highest accuracies of 99.1% and 99.5%, respectively, and their structure provided reasonable execution times, making them suitable for real-time healthcare use.
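A core primitive behind quantum K-means variants is estimating the overlap |⟨a|b⟩|² between amplitude-encoded data states, which a swap-test circuit measures and which serves as a similarity for cluster assignment. The sketch below simulates that overlap classically for illustration; it is not the paper's circuit, and the function names are invented:

```python
import numpy as np

def swap_test_overlap(a, b):
    """Classically compute |<a|b>|^2, the quantity a swap test estimates,
    after amplitude-encoding the vectors (i.e., normalizing them)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a / np.linalg.norm(a)                    # amplitude encoding step
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b) ** 2)

def quantum_kmeans_assign(x, centroids):
    """Assign x to the centroid whose state has the largest overlap with it
    (largest overlap = closest in the quantum K-means sense)."""
    return int(np.argmax([swap_test_overlap(x, c) for c in centroids]))
```

Identical states give overlap 1, orthogonal states give 0, so the assignment step is simply an argmax over overlaps, mirroring the distance argmin of classical K-means.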

Citations: 0
Lightweight Graph Networks for AI-Integrated Network Traffic Prediction: Towards Efficient Edge Computing Solutions
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-13 DOI: 10.1002/itl2.70152
Leilei Zhu, Xiuli Sun, Lili Huang

Network traffic prediction is a cornerstone of intelligent network management. Traditional cloud-centric solutions encounter significant challenges, including latency and bandwidth limitations, when tasked with real-time predictive analytics. This paper introduces an edge computing-based lightweight Graph Neural Network (GNN) model designed to facilitate AI-integrated traffic forecasting. The proposed methodology keeps the model lightweight through adaptive graph sampling, and traffic predictions are produced by seamlessly integrating the resulting GNN model. Comprehensive experimentation demonstrates that the proposed framework significantly outperforms existing state-of-the-art methodologies on multiple performance metrics: it reduces the mean absolute error by at least 7.9% and achieves the lowest root mean squared error, the inference latency is reduced by approximately 10% or more, and the prediction accuracy exceeds 93%, surpassing competing methods.
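One way graph sampling lightens a GNN is by keeping only each node's strongest edges before aggregation. The sketch below pairs a top-k edge sampler with a single mean-aggregation GNN layer in numpy; it is an illustrative stand-in under assumed details, not the paper's adaptive sampling rule or architecture:

```python
import numpy as np

def sample_neighbors(adj_weights, k):
    """Sparsify a weighted adjacency matrix: for each node, keep only its
    k highest-weight outgoing edges and zero out the rest."""
    A = np.zeros_like(adj_weights)
    for i, row in enumerate(adj_weights):
        keep = np.argsort(row)[-k:]              # indices of the k largest weights
        A[i, keep] = row[keep]
    return A

def gnn_layer(A, H, W):
    """One mean-aggregation GNN layer with ReLU: H' = relu(D^-1 A H W)."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                          # guard isolated nodes
    return np.maximum(0.0, (A / deg) @ H @ W)
```

Because aggregation cost scales with the number of retained edges, dropping weak edges before the matrix products is a direct source of the latency savings such edge deployments aim for.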

Citations: 0
WearPPG-Former: A Wearable-Optimized Transformer Using Dynamic Sparse Attention for Motion-Resilient HR Estimation Under Intense Exercise Scenarios
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-13 DOI: 10.1002/itl2.70161
Haibo Wang, Ningning Li, Bin Wu

Photoplethysmography (PPG) signals are widely used for heart rate monitoring via wearable devices, but motion artifact interference and the need for lightweight methods remain the major obstacles to practical implementation. To address two limitations of existing deep learning methods, insufficient artifact resilience in strenuous exercise scenarios and model complexity too high for wearable devices, a lightweight Transformer method called WearPPG-Former is proposed for heart rate (HR) estimation. The proposed method incorporates a sparse attention mechanism into transformer modules, selectively concentrating on heart rate-associated key feature regions. The design reduces computational complexity while enhancing suppression of motion artifacts, including baseline wander and high-frequency noise. Experimental results demonstrate that WearPPG-Former outperforms state-of-the-art methods in mean absolute error (MAE) on the PPG-DALIA dataset. Specifically, it achieves an average MAE of 3.18 beats per minute (bpm) under intense exercise scenarios. When deployed on a resource-constrained wearable embedded platform, the model attains an inference latency of 410.4 ms, fulfilling real-time monitoring requirements for wearable devices. This delivers an efficient solution for PPG-based heart rate estimation in dynamic scenarios, thereby advancing the practical implementation of wearable health monitoring technology.
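The sparse attention mechanism described above, which concentrates on a few key feature regions per query, can be sketched as top-k masked scaled dot-product attention. The function name, tensor shapes, and `top_k` value are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def sparse_attention(q, k, v, top_k):
    """Scaled dot-product attention that keeps only the top_k scores
    per query before the softmax; all other positions are masked out."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                          # (Tq, Tk)
    # per-row threshold: the top_k-th largest score
    thresh = np.sort(scores, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= thresh, scores, -np.inf)   # drop the rest
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over kept entries
    return weights @ v, weights

rng = np.random.default_rng(1)
T, d = 6, 4                                   # 6 PPG-window tokens, 4-dim features
q, k, v = (rng.normal(size=(T, d)) for _ in range(3))
out, w = sparse_attention(q, k, v, top_k=2)   # each query attends to at most 2 keys
```

Because masked positions contribute zero weight, both compute and the model's sensitivity to artifact-dominated regions drop, which is the trade-off the letter targets.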

Haibo Wang, Ningning Li, Bin Wu, "WearPPG-Former: A Wearable-Optimized Transformer Using Dynamic Sparse Attention for Motion-Resilient HR Estimation Under Intense Exercise Scenarios," Internet Technology Letters, vol. 8, no. 6, DOI: 10.1002/itl2.70161, published 2025-10-13.
Citations: 0
Energy-Aware Cross-Layer Routing Using Transformer Models in Wireless Sensor Networks
IF 0.5 Q4 TELECOMMUNICATIONS Pub Date : 2025-10-09 DOI: 10.1002/itl2.70146
Shashi Tanwar, Abdul Lateef Haroon Phulara Shaik, M. Vasantha Kumara, Afshan Kaleem, S. Ranganatha

Recently, wireless communication networks have played a vital role in environmental monitoring and other data-driven applications. However, these networks often struggle with limited energy and redundant data transmissions. Moreover, traditional routing protocols, such as the Cross-layer Opportunistic Routing Protocol (CORP), rely heavily on static routing decisions with fixed-cost functions, leading to a lack of adaptability. To address these issues, this study proposes a Mistral 7B-based Cross-layer Optimization (M7BCO), which integrates adaptive reasoning and prompt-based telemetry compression for energy-aware decisions. The proposed M7BCO model utilizes a Partially Informed Sparse Autoencoder (PISA) to select a minimal subset of informative nodes by learning spatial correlations while preserving data reconstructability. The M7BCO model then generates real-time decisions for next-hop selection and transmit power adjustment, replacing static optimization with adaptive reasoning. Unlike purely sequential models, the proposed model introduces a lightweight training loop between PISA telemetry selection and Mistral 7B adaptive reasoning. In the reported results, the proposed M7BCO model achieved better Energy Efficiency (EE) than the existing CORP model, with EE values of 22.5, 65.3, and 100.2 mJ for 150, 300, and 500 nodes, respectively.
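The PISA step described above, selecting a minimal subset of nodes whose readings let the rest be reconstructed, can be approximated by a greedy least-squares selection. This is an assumed stand-in for illustration, not the paper's autoencoder; the function name `greedy_select` and the synthetic two-latent-source data are hypothetical:

```python
import numpy as np

def greedy_select(X, m):
    """Greedily pick m sensor nodes (columns of X) so that a least-squares
    readout from the chosen columns best reconstructs all columns."""
    chosen = []
    for _ in range(m):
        best, best_err = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            A = X[:, chosen + [j]]
            # least-squares reconstruction of the full field from the subset
            W, *_ = np.linalg.lstsq(A, X, rcond=None)
            err = np.linalg.norm(X - A @ W)
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    return chosen

# synthetic readings: 8 nodes driven by 2 latent sources plus small noise,
# so 2 well-chosen nodes should explain nearly all the variance
rng = np.random.default_rng(2)
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 8))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 8))

subset = greedy_select(X, m=2)   # indices of the two most informative nodes
```

Transmitting only the selected nodes' telemetry, with the readout matrix reconstructing the rest at the sink, is the kind of compression that lets the remaining nodes stay silent and save energy.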

Shashi Tanwar, Abdul Lateef Haroon Phulara Shaik, M. Vasantha Kumara, Afshan Kaleem, S. Ranganatha, "Energy-Aware Cross-Layer Routing Using Transformer Models in Wireless Sensor Networks," Internet Technology Letters, vol. 8, no. 6, DOI: 10.1002/itl2.70146, published 2025-10-09.
Citations: 0