Pub Date: 2025-06-01 | DOI: 10.13052/jicts2245-800X.1321
Wei Lu;Qinying Li
The control of networked vehicle platoons is a core challenge in automated highway systems, where communication delay and packet loss significantly degrade cooperative driving performance. This study constructs a leader-predecessor-following (LPF) model with linearized state feedback, modeling communication delays with a Bernoulli sequence distribution and quantifying packet loss using the real-time transport protocol (RTP) rate formula. MATLAB simulations under mixed urban arterial (60%) and highway (40%) scenarios reveal that platoon spacing errors increase from 0.1 m to 0.78 m as delays rise from 0 ms to 8 ms, with speed errors reaching 0.6 m/s and acceleration fluctuations widening to [−4.8, 2.2] m/s² at a 30% packet loss rate. Notably, the proposed Bernoulli-based delay model improves scenario fitting accuracy by 23% compared with static models, while an RTP-aware adaptive controller reduces acceleration fluctuations by 41% under high-loss conditions. These findings establish a critical threshold for platoon instability at 8 ms delay combined with a 30% packet loss rate, providing a theoretical foundation for robust V2X control strategies in intelligent transportation systems.
{"title":"Research on the Influence of Communication Delay and Packet Loss on the Platooning of Connected Vehicles","authors":"Wei Lu;Qinying Li","doi":"10.13052/jicts2245-800X.1321","DOIUrl":"https://doi.org/10.13052/jicts2245-800X.1321","url":null,"abstract":"The control of networked vehicle platoons is a core challenge in automated highway systems, where communication delay and packet loss significantly degrade cooperative driving performance. This study constructs a leader-predecessor-following (LPF) model with linearized state feedback, innovatively describing communication delays via Bernoulli sequence distribution and quantifying packet loss using the real-time transport protocol (RTP) rate formula. MATLAB simulations under mixed urban arterial (60%) and highway (40%) scenarios reveal that platoon spacing errors increase from 0.1 m to 0.78 m as delays rise from 0 ms to 8 ms, with speed errors reaching 0.6 m/s and acceleration fluctuations widening to [−4.8, 2.2] m/s<sup>2</sup> at a 30% packet loss rate. Notably, the proposed Bernoulli-based delay model improves scenario fitting accuracy by 23% compared to static models, while an RTP-aware adaptive controller reduces acceleration fluctuations by 41 % under high loss conditions. These findings establish an 8 ms delay + 30% packet loss critical threshold for platoon instability, providing a theoretical foundation for robust V2X control strategies in intelligent transportation systems.","PeriodicalId":36697,"journal":{"name":"Journal of ICT Standardization","volume":"13 2","pages":"93-110"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267162","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145584593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-01 | DOI: 10.13052/jicts2245-800X.1323
Giada Lalli
Interoperability is a cornerstone of modern scientific and technological progress, enabling seamless data exchange and collaboration across diverse domains such as e-health, logistics, and IT. However, the lack of a unified definition has led to significant fragmentation, with over 117 distinct definitions documented across various fields. This paper addresses the challenge of defining interoperability by tracing its historical evolution from its military origins to its current applications in sectors like healthcare and logistics. This work proposes a novel, universal definition encompassing multiple interoperability dimensions, including technical, semantic, syntactic, legal, and organisational aspects. This comprehensive definition aims to resolve the inconsistencies and gaps in current practices, providing a robust framework for enhancing global collaboration and driving innovation. The proposed definition is evaluated against key criteria such as flexibility, clarity, measurability, scalability, and the establishment of common standards, demonstrating its potential to unify efforts across different fields. This work highlights the profound impact a standardised interoperability approach can have on critical areas like healthcare, where streamlined patient data exchange and improved outcomes are urgently needed.
{"title":"Defining Interoperability: A Universal Standard","authors":"Giada Lalli","doi":"10.13052/jicts2245-800X.1323","DOIUrl":"https://doi.org/10.13052/jicts2245-800X.1323","url":null,"abstract":"Interoperability is a cornerstone of modern scientific and technological progress, enabling seamless data exchange and collaboration across diverse domains such as e-health, logistics, and IT. However, the lack of a unified definition has led to significant fragmentation, with over 117 distinct definitions documented across various fields. This paper addresses the challenge of defining interoperability by tracing its historical evolution from its military origins to its current applications in sectors like healthcare and logistics. This work proposes a novel, universal definition encompassing multiple interoperability dimensions, including technical, semantic, syntactic, legal, and organisational aspects. This comprehensive definition aims to resolve the inconsistencies and gaps in current practices, providing a robust framework for enhancing global collaboration and driving innovation. The proposed definition is evaluated against key criteria such as flexibility, clarity, measurability, scalability, and the establishment of common standards, demonstrating its potential to unify efforts across different fields. This work highlights the profound impact a standardised interoperability approach can have on critical areas like healthcare, where streamlined patient data exchange and improved outcomes are urgently needed.","PeriodicalId":36697,"journal":{"name":"Journal of ICT Standardization","volume":"13 2","pages":"139-156"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267158","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145584694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-01 | DOI: 10.13052/jicts2245-800X.1324
Jihua He
Ultra-dense networks (UDNs) face serious latency fluctuations and throughput degradation under high-concurrency access and resource competition, and traditional transmission protocols struggle to balance low latency with high stability in dynamic scenarios. In response to this challenge, this paper proposes a multilayer low-latency adaptive communication protocol (MLACP), which constructs a multilayer control system consisting of a physical access layer, a resource scheduling layer, and an adaptive decision layer. Through a cross-layer feedback mechanism combining RNN-based short-term state prediction with DQN-based strategy optimization, the protocol dynamically adjusts resource slicing, distributed collaboration, and path selection. The protocol is implemented in a system-level simulation environment using a 3GPP UMi SC channel model and a Poisson cluster process, integrated with ZeroMQ and PyTorch on the NS-3.36 platform. The experiments covered different user densities and link states, with each scenario run independently 10 times and the results averaged. The results show that under high-density conditions of 1500 UE/km², MLACP outperforms TCP Reno, QUIC, and a simplified URLLC scheme in terms of end-to-end latency, peak throughput, packet loss rate, path stability, and energy consumption. Moreover, it maintains controllable performance degradation in robustness tests such as link interruption, prediction bias, and base station failure. These results validate the feasibility and adaptability of the proposed protocol in dynamic, interference-heavy UDN environments, providing methodological references and an experimental basis for the design of low-latency, intelligent communication systems.
{"title":"Low-Latency Adaptive Communication Protocols for Ultra-Dense Network Environments","authors":"Jihua He","doi":"10.13052/jicts2245-800X.1324","DOIUrl":"https://doi.org/10.13052/jicts2245-800X.1324","url":null,"abstract":"Ultra-dense networks (UDNs) face serious latency fluctuations and throughput degradation issues under high concurrency access and resource competition conditions. Traditional transmission protocols struggle to balance low latency and high stability in dynamic scenarios. In response to this challenge, this paper proposes a low latency adaptive communication protocol (MLACP), which constructs a multilayer control system consisting of a physical access layer, a resource scheduling layer, and an adaptive decision layer. Through a cross layer feedback mechanism combined with RNN based short-term state prediction and DQN based strategy optimization, dynamic adjustment of resource slicing, distributed collaboration, and path selection is achieved. The protocol design is implemented in the system level simulation environment of a 3GPP UMi SC channel model and a Poisson cluster process, and integrated with ZeroMQ and PyTorch on the NS-3.36 platform. The experiment covered different user densities and link states, with each scenario running independently 10 times and taking the average. The results showed that under high-density conditions of 1500 UE/km<sup>2</sup>, MLACP outperformed TCP Reno, QUIC, and the URLLC simplification scheme in terms of end-to-end latency, peak throughput, packet loss rate, path stability, and energy consumption. Moreover, it maintained controllable performance degradation in robustness tests such as link interruption, prediction bias, and base station failure. This result validates the feasibility and adaptability of the proposed protocol in dynamic and interference complex UDN environments, providing methodological references and an experimental basis for the design of low latency and intelligent communication systems.","PeriodicalId":36697,"journal":{"name":"Journal of ICT Standardization","volume":"13 2","pages":"157-180"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267159","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145584679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-01 | DOI: 10.13052/jicts2245-800X.1325
Xiaonan Sun;Shuang Yang;Yuan Cao;Yaxin Zhao;Zhiyu Wang
The rise of intelligent financial platforms driven by innovations in embedded finance, real-time analytics, and API-based service delivery has fundamentally altered the landscape of digital financial ecosystems. However, this transformation has outpaced the development of interoperable and secure interface standards. Existing regulatory frameworks like PSD2 and Open Banking have initiated progress through data-sharing APIs, but practical deployments remain fragmented due to proprietary implementations, incompatible schemas, and insufficient governance across multi-actor environments. This paper addresses the critical gap in interface-level standardization by proposing a novel, layered architecture: the standardized interface framework for intelligent financial platforms (SIFFP). SIFFP integrates acquisition, knowledge, interoperability, intelligent service, and support layers, drawing inspiration from IoT architectural paradigms while tailoring them to the specific demands of financial systems. The framework is validated through a comprehensive proof-of-concept deployment in an e-commerce context, showcasing a working API suite (e.g., /loan/apply, /payment, /risk/analyze) with embedded metadata covering security (OAuth 2.0, mTLS), compliance (ISO 20022, Payment Card Industry Data Security Standard (PCI-DSS)), and schema formats (JSON/XML). Interoperability assessments demonstrate full compatibility with ISO/IEC 19941, and performance benchmarks confirm low-latency transaction processing under concurrent user loads. Moreover, the work introduces a stakeholder-standards heatmap and a standards lifecycle mapping, aligning the framework with pre-standardization best practices and demonstrating its readiness for engagement with formal standards bodies. By bridging theoretical architecture with implementation, SIFFP provides a scalable, extensible, and regulation-aligned foundation for next-generation financial platforms. This research contributes not only a blueprint for modular financial system design but also a concrete pathway to de facto and formal standards development, laying the groundwork for future interoperability in embedded lending, insurance, and open finance ecosystems.
{"title":"Standardized Interface Framework for Intelligent Financial Platforms: A Pre-Standardization Study","authors":"Xiaonan Sun;Shuang Yang;Yuan Cao;Yaxin Zhao;Zhiyu Wang","doi":"10.13052/jicts2245-800X.1325","DOIUrl":"https://doi.org/10.13052/jicts2245-800X.1325","url":null,"abstract":"The rise of intelligent financial platforms driven by innovations in embedded finance, real-time analytics, and API-based service delivery has fundamentally altered the landscape of digital financial ecosystems. However, this transformation has outpaced the development of interoperable and secure interface standards. Existing regulatory frameworks like PSD2 and Open Banking have initiated progress through data-sharing APIs, but practical deployments remain fragmented due to proprietary implementations, incompatible schemas, and insufficient governance across multi-actor environments. This paper addresses the critical gap in interface-level standardization by proposing a novel, layered architecture: the standardized interface frame-work for intelligent financial platforms (SIFFP). SIFFP integrates acquisition, knowledge, interoperability, intelligent service, and support layers, drawing inspiration from IoT architectural paradigms while tailoring them to the specific demands of financial systems. The framework is validated through a comprehensive proof-of-concept deployment in an e-commerce context, showcasing a working API suite (e.g., /loan/apply, /payment, /risk/analyze) with embedded metadata covering security (OAuth 2.0, mTLS), compliance (ISO 20022, Payment Card Industry Data Security Standard (PCI-DSS)), and schema formats (JSON/XML). Interoperability assessments demonstrate full compatibility with ISO/IEC 19941, and performance benchmarks confirm low-latency transaction processing under concurrent user conditions. Moreover, the work introduces a stakeholder-standards heatmap and standards lifecycle mapping, aligning the framework with pre-standardization best practices and demonstrating its readiness for engagement with formal standards bodies. By bridging theoretical architecture with implementation, SIFFP provides a scalable, extensible, and regulatorily aligned foundation for next-generation financial platforms. This research contributes not only a blueprint for modular financial system design but also a concrete pathway to de facto and formal standard development, laying the groundwork for future interoperability in embedded lending, insurance, and open finance ecosystems.","PeriodicalId":36697,"journal":{"name":"Journal of ICT Standardization","volume":"13 2","pages":"181-210"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267163","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145584636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-01 | DOI: 10.13052/jicts2245-800X.1322
JiaLi Zhou;Yuecen Liu
To address the high delay and low resource allocation efficiency of multi-source heterogeneous data task scheduling in 5G edge computing environments, this paper designs a multi-source data scheduling algorithm framework for low-latency optimization. An end-edge-cloud cooperative system model is constructed, and a set of dynamic priority scheduling strategies is proposed: the task's directed acyclic graph (DAG) expresses inter-task data dependencies, and the task scheduling order is adjusted in real time by fusing task urgency, resource pressure, and network state changes. To improve the stability of the system under high load, a multidimensional load evaluation mechanism and a granularity-adaptive task partitioning and merging method are introduced, and a cache-hit-aware resource allocation function and an edge node cache replacement strategy are designed. In addition, a QoS guarantee mechanism and a network state-aware feedback module are constructed to dynamically correct task scheduling accuracy under delay constraints. Multiple rounds of comparison experiments on a simulation platform show that the proposed algorithm keeps the average task completion delay within 45 ms under medium-to-high load, significantly reduces the critical path delay, stabilizes the QoS compliance rate above 94%, raises resource utilization to 87.5%, and achieves a scheduling hit rate of 92.4%. These results verify the algorithm's low-latency control capability and resource synergy in dynamic task environments, along with good engineering adaptability, making it suitable for deploying edge intelligence applications with stringent real-time requirements in 5G scenarios.
{"title":"Design of a Low-Latency Multi-Source Data Scheduling Algorithm for a 5G Environment","authors":"JiaLi Zhou;Yuecen Liu","doi":"10.13052/jicts2245-800X.1322","DOIUrl":"https://doi.org/10.13052/jicts2245-800X.1322","url":null,"abstract":"Aimed at the problems of high delay and low resource allocation efficiency of multi-source heterogeneous data task scheduling in 5G edge computing environment, this paper designs a multi-source data scheduling algorithm framework for low-latency optimization. An end-edge-cloud cooperative system model is constructed, and a set of dynamic priority scheduling strategies is proposed based on the task's directed acyclic graph (DAG) graph to express the inter-task data dependency relationships, and the task scheduling order is adjusted in real time by fusing the task tightness urgency, the resource pressure and the network state changes. In order to improve the stability of the system under high load, a multidimensional load evaluation mechanism and a granularity-adaptive task partitioning and merging method are introduced, and a cache hit-aware resource allocation function and an edge node cache replacement strategy are designed. In addition, a QoS guarantee mechanism and a network state-aware feedback module are constructed to realize dynamic correction of task scheduling accuracy under delay constraints. Multiple rounds of comparison experiments are carried out in the simulation platform, and the results show that this paper's algorithm can control the average task completion delay within 45 ms under medium-high load conditions, significantly reducing the critical path delay, stabilizing the QoS compliance rate to more than 94%, increasing the resource utilization rate to 87.5%, and achieving a scheduling hit rate of 92.4%. The above results verify the algorithm's low latency control capability and system resource synergy in dynamic task environments, with good engineering adaptability, suitable for edge intelligent application deployment with high real-time requirements in 5G scenarios.","PeriodicalId":36697,"journal":{"name":"Journal of ICT Standardization","volume":"13 2","pages":"111-138"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267161","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145584685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-06-01 | DOI: 10.13052/jicts2245-800X.1326
Wenyue Qu;Jinglong Wang;Yiming Zhang;Xinyan Pei;Zhuang Liang
Next-generation wireless networks are characterized by increasing demand for real-time video services, which sharpens the conflict between bandwidth-intensive applications and resource-constrained edge infrastructure. This study proposes an ML-driven co-optimization framework that integrates lightweight compression with adaptive bitrate allocation using distributed edge intelligence. The methodology employs a depthwise separable CNN encoder enhanced by channel pruning and quantization-aware training to minimize computational requirements, achieving model sizes of ≤500 KB and computational complexity of 0.8 GFLOPs per frame on resource-limited nodes. Concurrently, a proximal policy optimization controller dynamically adjusts the bitrate based on real-time channel state information and motion complexity features. A federated alternating optimization mechanism jointly reduces latency, energy consumption, and distortion while preserving data privacy. Experimental validation on edge IoT testbeds demonstrated substantial improvements over state-of-the-art baselines, achieving 42.7% lower encoding latency, 3.2 dB higher PSNR, and 38.5% reduced energy consumption with sub-100 ms processing times. By addressing the fundamental disconnect between compression and transmission optimization, this framework provides a scalable solution for 6G-enabled massive IoT video systems. It effectively bridges theoretical machine learning advances with practical deployment constraints in ultra-reliable low-latency communication environments.
"ML-Driven Co-Optimization of Lightweight Compression and Adaptive Bitrate Allocation for Edge IoT Distributed Video Coding," Journal of ICT Standardization, vol. 13, no. 2, pp. 211-242.
Pub Date: 2025-03-01 | DOI: 10.13052/jicts2245-800X.1313
Jialing Wang;Jun Zheng
Aimed at the multidimensional and nonlinear characteristics of user behavior in the media industry, this paper proposes an intelligent user modeling and recommendation framework (MUMA) based on hybrid machine learning. The system constructs a spatial-temporal dual-driven user characterization system by fusing heterogeneous data from multiple sources (clickstream, viewing duration, social graph, and eye-movement hotspots). The core technological contributions include: (1) designing a dynamic interest-aware network (DIN) that adopts a hybrid LSTM-Transformer architecture with a time decay factor to capture short-term and long-term behavioral patterns; (2) developing a cross-domain transfer learning module based on a heterogeneous information network (HIN) to realize collaborative recommendation across news, video, and advertising businesses; (3) combining reinforcement learning and causal inference to construct a bandit-propensity hybrid recommendation strategy that balances the exploration-exploitation trade-off. At the system implementation level, a Flink+Redis real-time feature engineering pipeline supports millisecond-level updates of thousands of feature dimensions, and an XGBoost-LightGBM dual-engine ranking model is deployed to provide interpretable recommendations via SHAP values. Experiments on 800 million behavioral logs from a leading video platform show that, compared with traditional collaborative filtering methods, this scheme improves CTR by 29.7%, viewing completion by 18.3%, and cold-start user recommendation satisfaction by 82.5% (A/B test $P < 0.005$). This study provides new ideas for user behavior modeling in the media industry, as well as theoretical and practical references for the design and implementation of personalized recommendation systems.
{"title":"Application of Machine Learning Algorithms in User Behavior Analysis and a Personalized Recommendation System in the Media Industry","authors":"Jialing Wang;Jun Zheng","doi":"10.13052/jicts2245-800X.1313","DOIUrl":"https://doi.org/10.13052/jicts2245-800X.1313","url":null,"abstract":"Aimed at the multidimensional and nonlinear characteristics of user behavior in the media industry, this paper proposes an intelligent user modeling and recommendation framework (MUMA) based on hybrid machine learning. The system constructs a spatial-temporal dual-driven user characterization system by fusing heterogeneous data from multiple sources (clickstream, viewing duration, social graph, and eye-movement hotspot). The core technological breakthroughs include: (1) designing a dynamic interest-aware network (DIN) and adopting a hybrid LSTM-Transformer architecture with a time decay factor to capture short-term/long-term behavioral patterns; (2) developing a cross-domain migratory learning module based on a heterogeneous information network (HIN) to realize collaborative recommendation of news/video/advertising business; (3) innovatively combining reinforcement learning and causal inference to construct a bandit-propensity hybrid recommendation strategy, balancing the contradiction between exploration and development. At the system realization level, build a Flink+Redis realtime feature engineering pipeline to support millisecond update of thousands of dimensional features; deploy an XGBoost-LightGBM dual-engine ranking model to realize an interpretable recommendation by SHAP value. Experiments show that in the 800 million behavioral logs test of the head video platform, compared with traditional collaborative filtering methods, this scheme improves CTR by 29.7%, viewing completion by 18.3%, and coldstart user recommendation satisfaction by 82.5% (A/B test <tex>$P < 0.005$</tex>). This study provides new ideas for user behavior modeling in the media industry, as well as theoretical and practical references for the design and implementation of personalized recommendation systems.","PeriodicalId":36697,"journal":{"name":"Journal of ICT Standardization","volume":"13 1","pages":"41-66"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11042905","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01 | DOI: 10.13052/jicts2245-800X.1312
Chenwen Zhang
With the increasing number of web texts, their classification has become an important task. In this paper, text word vector representation methods are first analyzed, and bidirectional encoder representations from transformers (BERT) is selected to extract word vectors. A bidirectional gated recurrent unit (BiGRU) and a convolutional neural network (CNN) are combined with an attention mechanism to capture the contextual and local features of the text, respectively. Experiments were carried out on the THUCNews dataset. The results showed that in the comparison among Word2vec, GloVe, and BERT, BERT obtained the best classification results. In the classification of different types of text, the average accuracy and F1 value of the BERT-BGCA method reached 0.9521 and 0.9436, respectively, which were superior to other deep learning methods such as TextCNN. The results suggest that the BERT-BGCA method is effective in classifying web texts and can be applied in practice.
{"title":"Natural Language Processing: Classification of Web Texts Combined with Deep Learning","authors":"Chenwen Zhang","doi":"10.13052/jicts2245-800X.1312","DOIUrl":"https://doi.org/10.13052/jicts2245-800X.1312","url":null,"abstract":"With the increasing number of web texts, the classification of web texts has become an important task. In this paper, the text word vector representation method is first analyzed, and bidirectional encoder representations from transformers (BERT) are selected to extract the word vector. The bidirectional gated recurrent unit (BiGRU), convolutional neural network (CNN), and attention mechanism are combined to obtain the context and local features of the text, respectively. Experiments were carried out using the THUCNews dataset. The results showed that in the comparison between word-to-vector (Word2vec), Glove, and BERT, the BERT obtained the best classification result. In the classification of different types of text, the average accuracy and F1value of the BERT-BGCA method reached 0.9521 and 0.9436, respectively, which were superior to other deep learning methods such as TextCNN. The results suggest that the BERT-BGCA method is effective in classifying web texts and can be applied in practice.","PeriodicalId":36697,"journal":{"name":"Journal of ICT Standardization","volume":"13 1","pages":"25-40"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11042907","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01 | DOI: 10.13052/jicts2245-800X.1314
Yuqin Dai;Xinjie Qian;Chunmei Yang
In recent years, intrusion detection systems (IDSs) have become a critical component of network security due to the growing number and complexity of cyber-attacks. Traditional IDS methods, including signature-based and anomaly-based detection, often struggle with the high-dimensional and imbalanced nature of network traffic, leading to suboptimal performance. Moreover, many existing models fail to efficiently handle diverse and complex attack types. In response to these challenges, we propose a novel deep learning-based IDS framework that leverages a deep asymmetric convolutional autoencoder (DACA) architecture. Our model combines advanced techniques for feature extraction, dimensionality reduction, and anomaly detection into a single cohesive framework. The DACA model is designed to effectively capture complex patterns and subtle anomalies in network traffic while significantly reducing computational complexity. By employing this architecture, we achieve superior detection accuracy across various types of attacks, even on imbalanced datasets. Experimental results demonstrate that our approach surpasses several state-of-the-art methods, including HCM-SVM, D1-IDDS, and GNN-IDS, achieving high accuracy, precision, recall, and F1-score on benchmark datasets such as NSL-KDD and UNSW-NB15. The results emphasize how effectively our model identifies complex and varied attack patterns. In conclusion, the proposed IDS model offers a promising solution to the limitations of current detection systems, with significant improvements in performance and efficiency. This approach contributes to advancing the development of robust and scalable network security solutions.
{"title":"Deep Reinforcement Learning-Based Asymmetric Convolutional Autoencoder for Intrusion Detection","authors":"Yuqin Dai;Xinjie Qian;Chunmei Yang","doi":"10.13052/jicts2245-800X.1314","DOIUrl":"https://doi.org/10.13052/jicts2245-800X.1314","url":null,"abstract":"In recent years, intrusion detection systems (IDSs) have become a critical component of network security, due to the growing number and complexity of cyber-attacks. Traditional IDS methods, including signature-based and anomaly-based detection, often struggle with the high-dimensional and imbalanced nature of network traffic, leading to suboptimal performance. Moreover, many existing models fail to efficiently handle the diverse and complex attack types. In response to these challenges, we propose a novel deep learning-based IDS framework that leverages a deep asymmetric convolutional autoencoder (DACA) architecture. Our model combines advanced techniques for feature extraction, dimensionality reduction, and anomaly detection into a single cohesive framework. The DACA model is designed to effectively capture complex patterns and subtle anomalies in network traffic while significantly reducing computational complexity. By employing this architecture, we achieve superior detection accuracy across various types of attacks even in imbalanced datasets. Experimental results demonstrate that our approach surpasses several state-of-the-art methods, including HCM-SVM, D1-IDDS, and GNN -IDS, achieving high accuracy, precision, recall, and F1-score on benchmark datasets such as NSL-KDD and UNSW-NB15. The results emphasize how effectively our model identifies complex and varied attack patterns. In conclusion, the proposed IDS model offers a promising solution to the limitations of current detection systems, with significant improvements in performance and efficiency. This approach contributes to advancing the development of robust and scalable network security solutions.","PeriodicalId":36697,"journal":{"name":"Journal of ICT Standardization","volume":"13 1","pages":"67-92"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11042904","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-03-01 | DOI: 10.13052/jicts2245-800X.1311
Roar E. Georgsen;Geir M. Køien
Municipal infrastructure in Norway is built primarily by small specialist companies acting as subcontractors, mostly with minimal experience working with information and communication technology (ICT). This combination of inexperience and lack of resources presents a unique challenge. This paper applies model-based systems engineering (MBSE) using the systems modelling language (SysML) to combine validation of reliability and security requirements within a mission-aware interdisciplinary context. The use case is a 6LoWPAN/CoAP-based system for urban spill water management.
{"title":"Validating Reliability and Security Requirements in Public Sector Infrastructure Built by Small Companies","authors":"Roar E. Georgsen;Geir M. Køien","doi":"10.13052/jicts2245-800X.1311","DOIUrl":"https://doi.org/10.13052/jicts2245-800X.1311","url":null,"abstract":"Municipal infrastructure in Norway is built primarily by small specialist companies acting as subcontractors, mostly with minimal experience working with information and communication technology (ICT). This combination of inexperience and lack of resources presents a unique challenge. This paper applies model-based systems engineering (MBSE) using the systems modelling language (SysML) to combine validation of reliability and security requirements within a mission-aware interdisciplinary context. The use case is a 6LoWPAN/CoAP-based system for urban spill water management.","PeriodicalId":36697,"journal":{"name":"Journal of ICT Standardization","volume":"13 1","pages":"1-24"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11042906","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144314677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}