
Latest Publications in Internet Technology Letters

Enhanced RoBERTaSN Model for Industrial IoT Text Similarity Analysis in Smart Manufacturing Systems
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-10-02 | DOI: 10.1002/itl2.70155
Maochun Xu, Qiang Liu, Gang Li, Chengmeng Li, Lei Ma, Ke Lin

In Industrial Internet of Things (IIoT) environments, smart manufacturing systems generate massive textual data (equipment logs, maintenance reports, etc.) requiring accurate similarity analysis for fault diagnosis and predictive maintenance. Traditional methods underperform in Industry 5.0 scenarios due to technical vocabulary and domain-specific language. This paper presents RoBERTaSN, an enhanced model combining RoBERTa with a Siamese network, featuring self-attention and dual pooling optimized for industrial texts. It enables precise similarity calculations between fault descriptions and historical records. Experiments on industrial datasets (e.g., equipment fault logs, maintenance reports) yield 94.2% accuracy in fault diagnosis text matching—7.8% higher than traditional TF-IDF (86.4%) and 6.0% higher than mainstream pretrained models (BERT: 88.2% accuracy; BiMPM: 84.67% F1-score), addressing semantic challenges in smart factories and advancing Industry 5.0's human–machine collaboration and intelligent decision-making goals.
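The abstract names the building blocks but not their exact wiring, so the snippet below is only a minimal, hypothetical sketch of the general idea: a Siamese arrangement of a shared RoBERTa encoder whose token outputs are combined by dual (mean plus max) pooling before a cosine-similarity score. The model name "roberta-base", the pooling choice, and the example fault texts are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a RoBERTa-based Siamese similarity model (not the paper's code).
# Assumes PyTorch and the Hugging Face `transformers` library are installed.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer


class SiameseRoberta(torch.nn.Module):
    def __init__(self, model_name: str = "roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared weights for both branches

    def embed(self, batch):
        out = self.encoder(**batch).last_hidden_state           # (B, T, H) token embeddings
        mask = batch["attention_mask"].unsqueeze(-1).float()     # ignore padding positions
        mean_pool = (out * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        max_pool = out.masked_fill(mask == 0, -1e9).max(dim=1).values
        return torch.cat([mean_pool, max_pool], dim=-1)          # "dual pooling": mean + max

    def forward(self, batch_a, batch_b):
        return F.cosine_similarity(self.embed(batch_a), self.embed(batch_b))


if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("roberta-base")
    model = SiameseRoberta().eval()
    a = tok(["spindle motor overheating during startup"], return_tensors="pt", padding=True)
    b = tok(["motor temperature alarm at machine start"], return_tensors="pt", padding=True)
    with torch.no_grad():
        print(model(a, b))  # similarity score in [-1, 1]
```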

Citations: 0
Real-Time English Text Recognition Using Lightweight AI in Wireless Communication Networks
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-09-30 | DOI: 10.1002/itl2.70125
Baoying Sun, Yingwei Liu

In the era of pervasive wireless communication, the need for efficient and accurate text recognition systems is growing, especially for applications involving edge devices in resource-constrained environments. This paper proposes a lightweight AI-based approach for English text recognition, leveraging a hybrid model combining convolutional neural networks (CNN) and gated recurrent units (GRU). The model effectively handles noisy wireless signals by capturing both spatial and temporal features from modulated signals. We incorporate techniques such as pruning and depthwise separable convolution (DSC) to reduce the model's size, making it suitable for deployment in wireless communication systems. Experimental results demonstrate that the proposed model outperforms several state-of-the-art methods, including traditional modulation recognition and deep learning-based alternatives, in terms of both recognition accuracy and model efficiency, even in low signal-to-noise ratio (SNR) conditions. The proposed model offers a promising solution for real-time text recognition in wireless communication environments.
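As a rough illustration of the kind of lightweight CNN plus GRU hybrid the abstract describes, the sketch below stacks depthwise separable 1D convolutions in front of a GRU and a classification head. Channel sizes, the signal shape, the class count, and the omission of the pruning step are all assumptions; this is not the paper's architecture.

```python
# Hypothetical lightweight CNN + GRU recognizer (illustrative only; PyTorch assumed).
import torch
import torch.nn as nn


class DepthwiseSeparableConv1d(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch: int, out_ch: int, kernel: int = 5):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel, padding=kernel // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class LightweightRecognizer(nn.Module):
    def __init__(self, in_ch: int = 2, hidden: int = 64, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            DepthwiseSeparableConv1d(in_ch, 32), nn.ReLU(),
            DepthwiseSeparableConv1d(32, 64), nn.ReLU(),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True)   # temporal modelling of the signal
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):               # x: (batch, channels, time), e.g. noisy I/Q samples
        feats = self.features(x)        # (batch, 64, time)
        seq = feats.transpose(1, 2)     # (batch, time, 64) for the GRU
        _, h_n = self.gru(seq)
        return self.head(h_n[-1])       # class logits


if __name__ == "__main__":
    model = LightweightRecognizer()
    logits = model(torch.randn(4, 2, 256))   # dummy signal batch
    print(logits.shape)                      # torch.Size([4, 36])
```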

Citations: 0
IIoT-Enabled Apparel Demand Forecasting: A Random Forest Approach Mining E-Commerce Reviews
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-09-30 | DOI: 10.1002/itl2.70158
Zhihang Tang, Jinyang Shi, Zipei Tang

Within the industrial internet of things (IIoT) ecosystems, apparel manufacturers face the dual challenge of integrating high-velocity consumer feedback streams from e-commerce platforms and translating them into real-time, high-fidelity demand forecasts. This study presents an IIoT-native framework that employs random forest regression (RFR) to fuse multi-modal review features—sentiment polarity, key phrases, and aggregate ratings—collected via edge gateways from 1100 men's garments on JD.com. Innovatively, the proposed framework not only outperforms traditional linear models such as ordinary least squares (OLS) and multiple linear regression (MLR) in terms of predictive accuracy but also demonstrates robustness to noise and outliers across heterogeneous product categories. The cloud-hosted RFR model achieves an R2 of 0.9442 and root mean square error (RMSE) of 105.76, representing a 5.6% and 35.9% improvement over MLR and OLS in RMSE, respectively. This study provides the first multi-category empirical evidence that fusing review-level sentiment, key phrases, and ratings via RFR yields significant enhancements in IIoT-scale apparel demand forecasting.
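The abstract does not detail the feature engineering, so the following scikit-learn sketch only illustrates the general recipe: tabular stand-ins for sentiment polarity, key-phrase counts, and aggregate ratings feeding a random forest regressor, scored with R2 and RMSE. The data here are synthetic placeholders, not the JD.com reviews.

```python
# Illustrative random-forest demand-forecasting sketch (synthetic data; scikit-learn assumed).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1100  # one row per garment listing (placeholder scale)

# Placeholder review-derived features: sentiment polarity, key-phrase count, average rating.
X = np.column_stack([
    rng.uniform(-1, 1, n),        # sentiment polarity
    rng.poisson(5, n),            # number of extracted key phrases
    rng.uniform(1, 5, n),         # aggregate star rating
])
y = 200 + 150 * X[:, 0] + 20 * X[:, 1] + 60 * X[:, 2] + rng.normal(0, 30, n)  # synthetic demand

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R2  :", round(r2_score(y_te, pred), 4))
print("RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 2))
```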

Citations: 0
Large Model-Driven Digital Twin Networks for Transmission Optimization in 5G/6G Wireless Communication Systems
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-09-26 | DOI: 10.1002/itl2.70113
Ankita Sharma, Shalli Rani

The next generation of Wireless Communication Networks (WCNs), such as 5G and 6G, requires highly adaptive, intelligent, and efficient transmission mechanisms to meet the demands of low latency, high throughput, and robust Quality of Experience (QoE). This paper introduces a novel framework that integrates Large Models (LMs), particularly transformer-based deep learning architectures, with Digital Twin Networks (DTNs) for predictive and real-time optimization in WCNs. The proposed LM-enhanced DTN architecture enables advanced capabilities such as traffic classification, predictive scheduling, quality-aware transmission, and failure forecasting. Experimental evaluations using real-world telemetry datasets demonstrate the superiority of the LM-powered system, which achieves over 98% classification accuracy and a 12.4% improvement in QoE in congested scenarios. Additionally, a case study in industrial networks illustrates the effectiveness of this approach in predictive maintenance and adaptive traffic management. This work paves the way for self-optimizing, intelligent wireless networks by harnessing the cognitive power of large AI models in virtual network replicas.
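As a loose illustration of one capability listed above, traffic classification with a transformer encoder, the sketch below runs windows of telemetry features through a small `nn.TransformerEncoder` and a linear head. The feature count, window length, and class set are placeholders; it is not the LM-enhanced DTN architecture itself.

```python
# Hypothetical transformer-based traffic classifier for telemetry windows (PyTorch assumed).
import torch
import torch.nn as nn


class TrafficClassifier(nn.Module):
    def __init__(self, n_features: int = 8, d_model: int = 64, n_classes: int = 5):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)            # lift telemetry features to d_model
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, time, n_features) telemetry window
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1))    # mean-pool over time, then classify


if __name__ == "__main__":
    model = TrafficClassifier()
    window = torch.randn(16, 50, 8)        # 16 windows of 50 time steps, 8 KPIs each
    print(model(window).shape)             # torch.Size([16, 5]) class logits
```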

Citations: 0
Design and Implementation of Intelligent Construction Automation System Based on 6G Network
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-09-26 | DOI: 10.1002/itl2.70045
Xiujun Nie, Xiaolin Zhang, Xuguo Liu, Ran Wang

In modern intelligent construction automation systems, network delay hinders task synchronization between devices, leading to uncoordinated robot operations, collisions, and task conflicts. This paper builds an intelligent construction automation system based on a 6G network, using the low latency and high bandwidth of 6G to effectively solve the delay problem in task synchronization and collaborative work. Its innovative combination of network slicing and edge computing customizes network resources for different application scenarios to minimize latency. The fusion of convolutional neural network (CNN) and long short-term memory (LSTM) models improves prediction, and, combined with a deep reinforcement learning (DRL) model, a path plan can be formulated from the prediction results to avoid collisions during the robots' work. The experimental results show that with the 6G-optimized system, the robot task scheduling rate reaches 0.95, compared with only 0.90 under 5G optimization, and robot collisions are largely avoided. The collision rate after optimization approaches 0, ensuring smooth construction progress and safe, reliable task execution.
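The paper's exact CNN, LSTM, and DRL fusion is not spelled out in the abstract, so the sketch below covers only a hypothetical prediction half: a small CNN over recent sensor frames feeding an LSTM that forecasts the next robot position, which a separate DRL planner could then consume. All shapes and sizes are placeholders.

```python
# Illustrative CNN + LSTM motion predictor for robot coordination (placeholder shapes; PyTorch).
import torch
import torch.nn as nn


class CnnLstmPredictor(nn.Module):
    def __init__(self, channels: int = 1, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-frame spatial features
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                           # -> 32 * 4 * 4 = 512 features per frame
        )
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)            # predicted (x, y) of the robot at t+1

    def forward(self, frames):                      # frames: (batch, time, C, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])


if __name__ == "__main__":
    model = CnnLstmPredictor()
    print(model(torch.randn(2, 8, 1, 32, 32)).shape)   # torch.Size([2, 2])
```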

Citations: 0
A Hybrid Network Speech Recognition Method for English Short Passage Reading Emotion Analysis in Multi-Access Edge Intelligence Scenarios
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-09-26 | DOI: 10.1002/itl2.70108
Jun Liao

Speech emotion recognition based on edge computing and deep learning can effectively help improve the quality of English short passage reading instruction. Because different edge devices have limited computing resources, deploying existing deep models on mobile hardware is a major challenge. To alleviate this issue, this paper proposes a novel hybrid speech emotion recognition model for multi-access edge intelligence scenarios. First, we extract Log Mel features from the speech signal collected by different clients' microphone sensors. Then, on the cloud platform, we deploy an efficient feature extraction backbone built from 1D convolution operations, a minimal gated unit (MGU) module, and a Mamba module, which is introduced to exploit long-range dependencies with linear computational complexity. We conducted extensive comparative experiments on a public dataset and our own English reading sentiment dataset, and the proposed model achieved the highest recognition performance.
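Only the module names are given above, so the sketch below is a hypothetical reading of two of them: log-Mel feature extraction via torchaudio and a minimal gated unit using one common single-forget-gate formulation; the Mamba module is omitted. The sample rate, dimensions, and the four emotion classes are assumptions.

```python
# Hypothetical log-Mel + 1D-conv + MGU front end (illustrative; PyTorch and torchaudio assumed).
import torch
import torch.nn as nn
import torchaudio


class MGUCell(nn.Module):
    """Minimal gated unit: a GRU-like cell keeping only a forget gate (one common formulation)."""
    def __init__(self, in_dim: int, hidden: int):
        super().__init__()
        self.gate = nn.Linear(in_dim + hidden, hidden)
        self.cand = nn.Linear(in_dim + hidden, hidden)

    def forward(self, x, h):
        f = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))
        h_tilde = torch.tanh(self.cand(torch.cat([x, f * h], dim=-1)))
        return (1 - f) * h + f * h_tilde


class SpeechEmotionBackbone(nn.Module):
    def __init__(self, n_mels: int = 64, hidden: int = 128, n_classes: int = 4):
        super().__init__()
        self.logmel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=n_mels)
        self.conv = nn.Sequential(nn.Conv1d(n_mels, hidden, 5, padding=2), nn.ReLU())
        self.mgu = MGUCell(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, wav):                                # wav: (batch, samples)
        feats = torch.log(self.logmel(wav) + 1e-6)         # (batch, n_mels, frames)
        seq = self.conv(feats).transpose(1, 2)             # (batch, frames, hidden)
        h = torch.zeros(seq.size(0), seq.size(2), device=seq.device)
        for t in range(seq.size(1)):                       # unrolled MGU over frames
            h = self.mgu(seq[:, t], h)
        return self.head(h)                                # emotion logits


if __name__ == "__main__":
    model = SpeechEmotionBackbone()
    print(model(torch.randn(2, 16000)).shape)              # torch.Size([2, 4])
```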

Citations: 0
Queue-Aware Congestion Avoidance in IoHT: Enabling Future Integration With Large Models for Transmission Optimization
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-09-25 | DOI: 10.1002/itl2.70136
Muhammad Zafarullah, Ata Ullah, Fazli Subhan, Sajjad A. Ghauri, Mazliham Mohd Suud, M. Mansoor Alam

The internet of healthcare things (IoHT) has advanced considerably, improving healthcare operations and patient monitoring by continuously collecting data from health sensors attached to patients. Current congestion detection techniques are insufficient for early detection since senders often remain unaware of the size of the residual queue. The real-time transmission of critical health data is essential, yet frequent congestion at intermediate nodes can lead to increased packet loss, delays, and diminished system reliability. To tackle these challenges, we propose a robust and low-complexity QACA algorithm tailored specifically for patient-centric IoHT networks, which dynamically adjusts the frequency of acknowledgments based on real-time queue occupancy thresholds. By integrating interval-based acknowledgments with a priority-based queuing strategy, QACA ensures that high-priority medical data is transmitted promptly, even in the face of heavy network loads. Simulation results indicate that QACA significantly improves performance over the analytical model and DCCA regarding packet loss and packet delay reduction. Moreover, the current framework may be enhanced in future work with the use of LMs to add predictive estimation of queue status, traffic classification, and an intelligent transmission scheduling, thus paving the way toward scalable and intelligent congestion management in next-generation IoHT systems.
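The letter's concrete thresholds and message formats are not reproduced here; the toy sketch below just illustrates the two mechanisms the abstract names: a priority queue for health packets and an acknowledgment interval that shrinks as queue occupancy crosses configurable thresholds. All numeric values are placeholders.

```python
# Toy illustration of queue-aware, interval-based acknowledgments with priority queuing.
# Threshold values and packet fields are placeholders, not the paper's parameters.
import heapq
from dataclasses import dataclass, field
from itertools import count

_seq = count()

@dataclass(order=True)
class Packet:
    priority: int                 # 0 = critical vitals, larger = less urgent
    order: int = field(default_factory=lambda: next(_seq))
    payload: str = field(default="", compare=False)


class QueueAwareNode:
    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.queue: list[Packet] = []

    def enqueue(self, pkt: Packet) -> bool:
        if len(self.queue) >= self.capacity:
            return False                      # drop when the buffer is full
        heapq.heappush(self.queue, pkt)       # critical packets surface first
        return True

    def ack_interval(self) -> int:
        """Acknowledge less often when idle, every packet when nearly congested."""
        occupancy = len(self.queue) / self.capacity
        if occupancy < 0.5:
            return 8          # relaxed: ack every 8th packet
        if occupancy < 0.8:
            return 4          # warn the sender earlier
        return 1              # near congestion: ack (and signal) every packet


if __name__ == "__main__":
    node = QueueAwareNode(capacity=10)
    for i in range(7):
        node.enqueue(Packet(priority=1 if i % 3 else 0, payload=f"sample-{i}"))
    print("occupancy:", len(node.queue), "ack interval:", node.ack_interval())
    print("next to forward:", heapq.heappop(node.queue).payload)  # a priority-0 packet
```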

Citations: 0
Large Model-Based Multi-Community Virtual Interaction Scheme in Wireless Networks
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-09-23 | DOI: 10.1002/itl2.70147
Liping Zhang

The increasing ubiquity of 5G/6G wireless communication networks has created unprecedented opportunities for multi-community collaboration and virtual engagement. However, existing platforms often fail to support scalable, context-aware, and efficient interaction across decentralized social groups. In this paper, we propose LM-MCVIS (Large Model-Based Multi-Community Virtual Interaction Scheme), a novel framework designed to facilitate personalized content exchange and context-aware message routing in wireless community networks. LM-MCVIS integrates three key components: (1) an Edge-Aware Prompt Compression (EPC) module that semantically distills conversation inputs to reduce wireless transmission overhead; (2) a Community State Encoder (CSE) that models dynamic group structures and latent social contexts; and (3) a Federated Reinforcement Optimizer (FRO) that enables privacy-preserving, feedback-driven content routing. We evaluate LM-MCVIS on three public multi-community datasets using a high-fidelity 5G/6G emulator and benchmark its performance against five strong baselines. Results demonstrate significant gains in engagement depth, interaction diversity, response latency, and bandwidth savings. Ablation studies further validate the individual impact of each module. LM-MCVIS offers a scalable and modular paradigm for intelligent community interaction over wireless networks, with broad implications for collaborative learning, digital governance, and decentralized social ecosystems.
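The internals of the EPC module are not described in the abstract, so the snippet below is only a loose illustration of the idea of distilling a prompt before transmission: it keeps the highest-IDF tokens of a message, learned from a tiny placeholder corpus. The corpus, keep ratio, and scoring rule are arbitrary assumptions, not the paper's method.

```python
# Toy token-pruning "prompt compression" sketch (illustrative only; scikit-learn assumed).
from sklearn.feature_extraction.text import TfidfVectorizer

community_history = [           # placeholder message corpus used to learn token weights
    "anyone joining the study group call tonight",
    "the governance vote closes tomorrow evening",
    "please share the slides from the learning session",
]

vectorizer = TfidfVectorizer()
vectorizer.fit(community_history)
idf = dict(zip(vectorizer.get_feature_names_out(), vectorizer.idf_))


def compress(message: str, keep_ratio: float = 0.5) -> str:
    """Keep the highest-IDF (most distinctive) tokens, preserving word order."""
    tokens = message.lower().split()
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(tokens, key=lambda t: idf.get(t, 0.0), reverse=True)[:k]
    kept = set(ranked)
    return " ".join(t for t in tokens if t in kept)


if __name__ == "__main__":
    msg = "please share the governance slides before the vote closes tonight"
    print(compress(msg))   # shorter message to push over the wireless link
```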

Citations: 0
Review of 6G Wireless Communication System With Artificial Intelligence
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-09-22 | DOI: 10.1002/itl2.70127
Suman Turpati, B. Geetha Rani, A. V. Prabu, Amrit Mukherjee, Sudan Jha, K. C. T. Swamy

Wireless communication has been in high demand over the last decades. Soon, the globe will be equipped with fifth-generation (5G) communications, which provide an incredible number of additional capabilities compared to fourth-generation communications. An innovative paradigm has evolved with the combination of artificial intelligence (AI) and sixth-generation (6G) communication networks in response to the increasing need for intelligent communication and seamless connection. This integration enables optimum resource allocation and greater efficiency. It also enhances adaptive system performance by incorporating AI across multiple network layers. The next generation of wireless networks must address many fundamental issues, including improving system capacity, data throughput, latency, security, and quality of service in comparison to 5G. This article provides a thorough review of the vision of future 6G network AI and wireless communication architecture, touching on their conceptual foundations, inherent difficulties, and potential fields for further study. Some new technologies discussed in this article include AI, terahertz communications, free-space optical networks, blockchain, quantum communications, drones, mobile free communications, integrated sensing and communication, dynamic network slicing, big data analytics, and wireless optical technology. These could all be useful in ensuring quality of service in 6G architecture development. Furthermore, we provide a concise overview of the AI standardization process for wireless networks, focusing on essential achievements and current initiatives. We also examine the significant obstacles that the integration of AI and communication in 6G encounters. Lastly, we outline prospective future studies that could advance and improve AI and 6G communications by describing possible obstacles and opportunities.

Citations: 0
Joint Optimization of User Association and Power Allocation in Wireless Networks Using a Large Spatio-Temporal Graph Transformer Model
IF 0.5 | Q4 TELECOMMUNICATIONS | Pub Date: 2025-09-19 | DOI: 10.1002/itl2.70131
D. S. Keerthi, P. Vishwanath, Kothuri Parashu Ramulu, Gopinath Anjinappa, Hirald Dwaraka Praveena

In this era, Wireless Communication Networks (WCNs) need dynamic and adaptive resource allocation approaches to handle user association and power allocation, specifically under multi-connectivity and diverse traffic conditions. However, conventional approaches struggle due to high computational cost, poor adaptability, and limited generalization. Therefore, this research proposes a large Spatio-Temporal Graph Transformer-based Reinforcement Learning (STGT-RL) model to jointly optimize user association and power allocation in large-scale WCNs. Initially, the network topology is designed using graph representations and incorporates a hybrid encoder that integrates Graph Transformers for spatial user-Base Station (BS) relationships and Spatio-Temporal Transformers for capturing time-varying traffic and channel states. Further, to ensure adaptive decision-making, a Transformer-RL policy agent is trained through a multi-objective reward function that balances throughput maximization and power efficiency. Furthermore, to enable stable policy learning, the model is initially trained using high-quality supervision from CRFSMA-generated labels, followed by reinforcement-based policy refinement. Experimental results simulated in WCN environments demonstrate that the proposed STGT-RL significantly outperforms baseline deep learning and heuristic-based methods in terms of throughput, energy efficiency, and fairness.
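As a rough, hypothetical sketch of the kind of decision head such a model might expose (not the paper's STGT-RL architecture), the code below encodes per-user features with a transformer and emits, for each user, association logits over base stations plus a transmit-power fraction; a toy multi-objective reward then weighs throughput against power. Every dimension is a placeholder.

```python
# Hypothetical policy head for joint user association and power allocation (PyTorch assumed).
import torch
import torch.nn as nn


class AssociationPowerPolicy(nn.Module):
    def __init__(self, user_feat: int = 6, n_bs: int = 4, d_model: int = 64):
        super().__init__()
        self.proj = nn.Linear(user_feat, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)   # users attend to each other
        self.assoc_head = nn.Linear(d_model, n_bs)                  # which BS to associate with
        self.power_head = nn.Linear(d_model, 1)                     # fraction of max transmit power

    def forward(self, users):                  # users: (batch, n_users, user_feat)
        h = self.encoder(self.proj(users))
        assoc_logits = self.assoc_head(h)                        # (batch, n_users, n_bs)
        power = torch.sigmoid(self.power_head(h)).squeeze(-1)    # (batch, n_users) in (0, 1)
        return assoc_logits, power


def reward(throughput, power, weight: float = 0.1):
    """Toy multi-objective reward: summed throughput minus a transmit-power penalty."""
    return throughput.sum(dim=-1) - weight * power.sum(dim=-1)


if __name__ == "__main__":
    policy = AssociationPowerPolicy()
    users = torch.randn(1, 8, 6)               # 8 users with 6 placeholder features each
    logits, power = policy(users)
    print(logits.shape, power.shape)           # torch.Size([1, 8, 4]) torch.Size([1, 8])
```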

Citations: 0