
Latest publications in the Journal of ICT Standardization

Research on the Influence of Communication Delay and Packet Loss on the Platooning of Connected Vehicles
Q3 Decision Sciences Pub Date: 2025-06-01 DOI: 10.13052/jicts2245-800X.1321
Wei Lu;Qinying Li
The control of networked vehicle platoons is a core challenge in automated highway systems, where communication delay and packet loss significantly degrade cooperative driving performance. This study constructs a leader-predecessor-following (LPF) model with linearized state feedback, innovatively describing communication delays via a Bernoulli sequence distribution and quantifying packet loss using the real-time transport protocol (RTP) rate formula. MATLAB simulations under mixed urban arterial (60%) and highway (40%) scenarios reveal that platoon spacing errors increase from 0.1 m to 0.78 m as delays rise from 0 ms to 8 ms, with speed errors reaching 0.6 m/s and acceleration fluctuations widening to [−4.8, 2.2] m/s² at a 30% packet loss rate. Notably, the proposed Bernoulli-based delay model improves scenario fitting accuracy by 23% compared to static models, while an RTP-aware adaptive controller reduces acceleration fluctuations by 41% under high-loss conditions. These findings establish a critical instability threshold of 8 ms delay combined with 30% packet loss, providing a theoretical foundation for robust V2X control strategies in intelligent transportation systems.
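The degradation mechanism this abstract measures — a follower acting on stale leader state because packets drop with Bernoulli probability and arrive late — can be illustrated with a toy control loop. This is a minimal pure-Python sketch; the gains, gap, and leader speed profile are illustrative assumptions, not the paper's LPF model or MATLAB setup:

```python
import math
import random

def max_spacing_error(loss_prob, delay_steps, n_steps=2000, dt=0.01,
                      kp=1.5, kd=1.0, gap=10.0, seed=42):
    """Follower tracks a fixed gap behind a leader using the last leader
    state it actually received; each packet is dropped with Bernoulli
    probability loss_prob, and delivery lags by delay_steps packets."""
    rng = random.Random(seed)
    leader_pos, follower_pos = gap, 0.0
    follower_v = 15.0
    received = []                                    # states that got through
    worst = 0.0
    for step in range(n_steps):
        leader_v = 15.0 + 2.0 * math.sin(0.02 * step)  # varying leader speed
        leader_pos += leader_v * dt
        if rng.random() >= loss_prob:                # Bernoulli packet loss
            received.append((leader_pos, leader_v))
        idx = max(0, len(received) - 1 - delay_steps)  # fixed delivery delay
        seen_pos, seen_v = received[idx] if received else (leader_pos, leader_v)
        err = (seen_pos - follower_pos) - gap        # spacing error as seen
        follower_v += (kp * err + kd * (seen_v - follower_v)) * dt
        follower_pos += follower_v * dt
        worst = max(worst, abs((leader_pos - follower_pos) - gap))
    return worst

ideal = max_spacing_error(loss_prob=0.0, delay_steps=0)
lossy = max_spacing_error(loss_prob=0.3, delay_steps=30)
print(round(ideal, 3), round(lossy, 3))  # stale information inflates the error
```

Even this crude model reproduces the qualitative finding: spacing error grows once delay and loss make the follower's view of the leader stale.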
Journal of ICT Standardization, vol. 13, no. 2, pp. 93–110. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267162
Citations: 0
Defining Interoperability: A Universal Standard
Q3 Decision Sciences Pub Date: 2025-06-01 DOI: 10.13052/jicts2245-800X.1323
Giada Lalli
Interoperability is a cornerstone of modern scientific and technological progress, enabling seamless data exchange and collaboration across diverse domains such as e-health, logistics, and IT. However, the lack of a unified definition has led to significant fragmentation, with over 117 distinct definitions documented across various fields. This paper addresses the challenge of defining interoperability by tracing its historical evolution from its military origins to its current applications in sectors like healthcare and logistics. This work proposes a novel, universal definition encompassing multiple interoperability dimensions, including technical, semantic, syntactic, legal, and organisational aspects. This comprehensive definition aims to resolve the inconsistencies and gaps in current practices, providing a robust framework for enhancing global collaboration and driving innovation. The proposed definition is evaluated against key criteria such as flexibility, clarity, measurability, scalability, and the establishment of common standards, demonstrating its potential to unify efforts across different fields. This work highlights the profound impact a standardised interoperability approach can have on critical areas like healthcare, where streamlined patient data exchange and improved outcomes are urgently needed.
Journal of ICT Standardization, vol. 13, no. 2, pp. 139–156. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267158
Citations: 0
Low-Latency Adaptive Communication Protocols for Ultra-Dense Network Environments
Q3 Decision Sciences Pub Date: 2025-06-01 DOI: 10.13052/jicts2245-800X.1324
Jihua He
Ultra-dense networks (UDNs) face severe latency fluctuations and throughput degradation under high-concurrency access and resource competition. Traditional transmission protocols struggle to balance low latency and high stability in dynamic scenarios. To address this challenge, this paper proposes a low-latency adaptive communication protocol (MLACP) that constructs a multilayer control system consisting of a physical access layer, a resource scheduling layer, and an adaptive decision layer. Through a cross-layer feedback mechanism that combines RNN-based short-term state prediction with DQN-based strategy optimization, the protocol dynamically adjusts resource slicing, distributed collaboration, and path selection. The design is implemented in a system-level simulation environment using a 3GPP UMi SC channel model and a Poisson cluster process, and integrated with ZeroMQ and PyTorch on the NS-3.36 platform. Experiments covered different user densities and link states, with each scenario run independently 10 times and results averaged. Under high-density conditions of 1500 UE/km², MLACP outperformed TCP Reno, QUIC, and a simplified URLLC scheme in end-to-end latency, peak throughput, packet loss rate, path stability, and energy consumption. Moreover, it maintained controllable performance degradation in robustness tests covering link interruption, prediction bias, and base station failure. These results validate the feasibility and adaptability of the proposed protocol in dynamic, interference-heavy UDN environments, providing methodological references and an experimental basis for the design of low-latency intelligent communication systems.
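The adaptive-decision idea — learn which transmission strategy minimizes latency in each load regime — can be sketched with a tabular Q-learner as a lightweight stand-in for the paper's DQN. State names, actions, and the latency model below are illustrative assumptions, not MLACP's actual design:

```python
import random

STATES = ("low_load", "mid_load", "high_load")
ACTIONS = ("direct_path", "relay_path", "sliced_resource")

def simulated_latency(state, action, rng):
    """Stand-in environment: latency (ms) depends on load and strategy;
    resource slicing helps most under high load, direct paths win when idle."""
    base = {"low_load": 5, "mid_load": 15, "high_load": 40}[state]
    bonus = {"direct_path": 0, "relay_path": -5, "sliced_resource": -15}
    if state == "high_load":
        eff = base + bonus[action]
    elif state == "low_load":
        eff = base + (0 if action == "direct_path" else 8)
    else:
        eff = base + bonus[action] / 2
    return max(1.0, eff + rng.gauss(0, 1))

def train(episodes=3000, eps=0.1, lr=0.2, seed=1):
    """Epsilon-greedy bandit-style Q-learning: reward is negative latency."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < eps:
            a = rng.choice(ACTIONS)                      # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])    # exploit
        reward = -simulated_latency(s, a, rng)
        q[(s, a)] += lr * (reward - q[(s, a)])
    return q

q = train()
best_high = max(ACTIONS, key=lambda a: q[("high_load", a)])
best_low = max(ACTIONS, key=lambda a: q[("low_load", a)])
print(best_high, best_low)
```

The learned policy ends up load-dependent, which is the essence of the cross-layer feedback loop: the decision layer's strategy changes as the observed network state changes.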
Journal of ICT Standardization, vol. 13, no. 2, pp. 157–180. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267159
Citations: 0
Standardized Interface Framework for Intelligent Financial Platforms: A Pre-Standardization Study
Q3 Decision Sciences Pub Date: 2025-06-01 DOI: 10.13052/jicts2245-800X.1325
Xiaonan Sun;Shuang Yang;Yuan Cao;Yaxin Zhao;Zhiyu Wang
The rise of intelligent financial platforms driven by innovations in embedded finance, real-time analytics, and API-based service delivery has fundamentally altered the landscape of digital financial ecosystems. However, this transformation has outpaced the development of interoperable and secure interface standards. Existing regulatory frameworks like PSD2 and Open Banking have initiated progress through data-sharing APIs, but practical deployments remain fragmented due to proprietary implementations, incompatible schemas, and insufficient governance across multi-actor environments. This paper addresses the critical gap in interface-level standardization by proposing a novel, layered architecture: the standardized interface framework for intelligent financial platforms (SIFFP). SIFFP integrates acquisition, knowledge, interoperability, intelligent service, and support layers, drawing inspiration from IoT architectural paradigms while tailoring them to the specific demands of financial systems. The framework is validated through a comprehensive proof-of-concept deployment in an e-commerce context, showcasing a working API suite (e.g., /loan/apply, /payment, /risk/analyze) with embedded metadata covering security (OAuth 2.0, mTLS), compliance (ISO 20022, Payment Card Industry Data Security Standard (PCI-DSS)), and schema formats (JSON/XML). Interoperability assessments demonstrate full compatibility with ISO/IEC 19941, and performance benchmarks confirm low-latency transaction processing under concurrent user conditions. Moreover, the work introduces a stakeholder-standards heatmap and standards lifecycle mapping, aligning the framework with pre-standardization best practices and demonstrating its readiness for engagement with formal standards bodies. By bridging theoretical architecture with implementation, SIFFP provides a scalable, extensible, and regulatorily aligned foundation for next-generation financial platforms.
This research contributes not only a blueprint for modular financial system design but also a concrete pathway to de facto and formal standard development, laying the groundwork for future interoperability in embedded lending, insurance, and open finance ecosystems.
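The shape of the API suite described above can be pictured with a hedged request-construction sketch. The endpoint path /loan/apply comes from the abstract; the field names, the compliance header, and the payload schema are illustrative assumptions, since the actual SIFFP specification defines its own metadata:

```python
import json

def build_loan_apply_request(applicant_id, amount, currency, token):
    """Assemble a hypothetical POST to the /loan/apply endpoint with
    OAuth 2.0 bearer auth and an ISO 20022-flavored JSON body."""
    headers = {
        "Authorization": f"Bearer {token}",      # OAuth 2.0 access token
        "Content-Type": "application/json",
        "X-Compliance-Profile": "ISO20022",      # illustrative compliance tag
    }
    body = {
        "applicantId": applicant_id,
        "requestedAmount": {"value": amount, "currency": currency},
        "channel": "e-commerce",                 # the paper's validation context
    }
    return {"method": "POST", "path": "/loan/apply",
            "headers": headers, "body": json.dumps(body)}

req = build_loan_apply_request("A-1001", 2500.0, "EUR", "example-token")
print(req["method"], req["path"])
```

The point of interface-level standardization is precisely that any compliant platform could consume such a request without bilateral schema negotiation.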
Journal of ICT Standardization, vol. 13, no. 2, pp. 181–210. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267163
Citations: 0
Design of a Low-Latency Multi-Source Data Scheduling Algorithm for a 5G Environment
Q3 Decision Sciences Pub Date: 2025-06-01 DOI: 10.13052/jicts2245-800X.1322
JiaLi Zhou;Yuecen Liu
To address the high delay and low resource-allocation efficiency of multi-source heterogeneous data task scheduling in 5G edge computing environments, this paper designs a multi-source data scheduling algorithm framework optimized for low latency. An end-edge-cloud cooperative system model is constructed, and a set of dynamic priority scheduling strategies is proposed that uses the task's directed acyclic graph (DAG) to express inter-task data dependencies; the task scheduling order is adjusted in real time by fusing task urgency, resource pressure, and network state changes. To improve system stability under high load, a multidimensional load evaluation mechanism and a granularity-adaptive task partitioning and merging method are introduced, and a cache-hit-aware resource allocation function and an edge-node cache replacement strategy are designed. In addition, a QoS guarantee mechanism and a network state-aware feedback module are constructed to dynamically correct task scheduling accuracy under delay constraints. Multiple rounds of comparison experiments on a simulation platform show that the proposed algorithm keeps the average task completion delay within 45 ms under medium-to-high load, significantly reducing critical path delay, stabilizing the QoS compliance rate above 94%, raising resource utilization to 87.5%, and achieving a scheduling hit rate of 92.4%.
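The core scheduling idea — respect DAG dependencies, but among the currently runnable tasks pick the one with the highest fused score of urgency and resource pressure — can be sketched in a few lines. The weights and score formula here are illustrative assumptions, not the paper's exact strategy:

```python
def schedule(tasks, deps):
    """tasks: {name: {"deadline": t, "load": l}}; deps: (before, after) edges.
    Returns an execution order that respects the DAG while dynamically
    re-scoring the ready set at every step."""
    indeg = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for before, after in deps:
        indeg[after] += 1
        children[before].append(after)
    ready = [t for t in tasks if indeg[t] == 0]
    order, clock = [], 0.0
    while ready:
        def score(t):
            # deadline urgency grows as slack shrinks; load proxies pressure
            urgency = 1.0 / max(tasks[t]["deadline"] - clock, 0.1)
            return 0.7 * urgency + 0.3 * tasks[t]["load"]
        ready.sort(key=score, reverse=True)   # dynamic priority, re-fused each step
        t = ready.pop(0)
        order.append(t)
        clock += tasks[t]["load"]             # advance simulated time
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return order

tasks = {"ingest": {"deadline": 5, "load": 1.0},
         "fuse":   {"deadline": 4, "load": 0.5},
         "render": {"deadline": 9, "load": 2.0},
         "report": {"deadline": 12, "load": 0.5}}
order = schedule(tasks, [("ingest", "render"), ("fuse", "report")])
print(order)
```

Because the score is recomputed against the moving clock, a task's priority rises as its deadline approaches, which is the "real-time adjustment" the abstract describes.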
Journal of ICT Standardization, vol. 13, no. 2, pp. 111–138. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267161
Citations: 0
ML-Driven Co-Optimization of Lightweight Compression and Adaptive Bitrate Allocation for Edge IoT Distributed Video Coding
Q3 Decision Sciences Pub Date: 2025-06-01 DOI: 10.13052/jicts2245-800X.1326
Wenyue Qu;Jinglong Wang;Yiming Zhang;Xinyan Pei;Zhuang Liang
Next-generation wireless networks are characterized by increasing demand for real-time video services, which exacerbates the conflict between bandwidth-intensive applications and resource-constrained edge infrastructure. This study proposes an ML-driven co-optimization framework that integrates lightweight compression with adaptive bitrate allocation using distributed edge intelligence. The methodology employs a depthwise separable CNN encoder enhanced by channel pruning and quantization-aware training to minimize computational requirements, achieving model sizes of ≤500 KB and computational complexity of 0.8 GFLOPs per frame on resource-limited nodes. Concurrently, a proximal policy optimization controller dynamically adjusts bitrate based on real-time channel state information and motion complexity features. A federated alternating optimization mechanism jointly reduces latency, energy consumption, and distortion while preserving data privacy. Experimental validation on edge IoT testbeds demonstrated substantial improvements over state-of-the-art baselines: 42.7% lower encoding latency, 3.2 dB higher PSNR, and 38.5% lower energy consumption, with sub-100 ms processing times. By addressing the fundamental disconnect between compression and transmission optimization, this framework provides a scalable solution for 6G-enabled massive IoT video systems. It effectively bridges theoretical machine learning advances with practical deployment constraints in ultra-reliable low-latency communication environments.
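A quick back-of-envelope calculation shows why depthwise separable convolutions suit resource-limited encoder nodes: they replace one dense k×k convolution with a per-channel k×k filter plus a 1×1 channel-mixing convolution. The layer sizes below are illustrative, not the paper's architecture:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise (one k x k filter per input channel) + pointwise (1 x 1)."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 64, 128, 3
std = conv_params(c_in, c_out, k)                 # 64*128*9 = 73728
sep = depthwise_separable_params(c_in, c_out, k)  # 64*9 + 64*128 = 8768
print(std, sep, round(std / sep, 1))              # roughly an 8.4x reduction
```

The same ratio, 1/c_out + 1/k², applies to multiply-accumulate counts, which is how an encoder stays within a sub-1-GFLOP-per-frame budget before pruning and quantization shrink it further.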
Journal of ICT Standardization, vol. 13, no. 2, pp. 211–242. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11267160
Citations: 0
Application of Machine Learning Algorithms in User Behavior Analysis and a Personalized Recommendation System in the Media Industry
Q3 Decision Sciences Pub Date: 2025-03-01 DOI: 10.13052/jicts2245-800X.1313
Jialing Wang;Jun Zheng
To address the multidimensional and nonlinear characteristics of user behavior in the media industry, this paper proposes an intelligent user modeling and recommendation framework (MUMA) based on hybrid machine learning. The system constructs a spatial-temporal dual-driven user characterization system by fusing heterogeneous data from multiple sources (clickstream, viewing duration, social graph, and eye-movement hotspots). The core technical contributions are: (1) a dynamic interest-aware network (DIN) built on a hybrid LSTM-Transformer architecture with a time decay factor to capture short-term and long-term behavioral patterns; (2) a cross-domain transfer learning module based on a heterogeneous information network (HIN) to enable collaborative recommendation across news, video, and advertising businesses; (3) a bandit-propensity hybrid recommendation strategy that combines reinforcement learning with causal inference to balance exploration and exploitation. At the system level, a Flink+Redis real-time feature engineering pipeline supports millisecond-level updates of thousands of dimensional features, and an XGBoost-LightGBM dual-engine ranking model provides interpretable recommendations via SHAP values. Experiments on 800 million behavioral logs from a leading video platform show that, compared with traditional collaborative filtering methods, this scheme improves CTR by 29.7%, viewing completion by 18.3%, and cold-start user recommendation satisfaction by 82.5% (A/B test, P < 0.005).
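The time decay factor mentioned for the DIN can be illustrated with a toy exponentially decayed interest profile, where recent interactions outweigh old ones. The half-life and weighting formula below are illustrative assumptions, not the paper's parameterization:

```python
import math

def interest_profile(events, now, half_life=7.0):
    """events: list of (category, timestamp_in_days); returns a normalized
    profile where each event's weight halves every `half_life` days."""
    decay = math.log(2) / half_life
    profile = {}
    for category, t in events:
        weight = math.exp(-decay * (now - t))   # exponential time decay
        profile[category] = profile.get(category, 0.0) + weight
    total = sum(profile.values())
    return {c: w / total for c, w in profile.items()}

# Two old "news" clicks vs. two recent "sports" clicks: recency dominates,
# even though the raw counts are equal.
events = [("news", 0.0), ("news", 1.0), ("sports", 13.0), ("sports", 14.0)]
profile = interest_profile(events, now=14.0)
print({c: round(w, 3) for c, w in profile.items()})
```

This is the mechanism that lets a model distinguish a stable long-term interest from a burst of short-term attention: stretching the half-life recovers count-based profiles, shrinking it makes the profile track only the latest session.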
Jialing Wang; Jun Zheng, "Application of Machine Learning Algorithms in User Behavior Analysis and a Personalized Recommendation System in the Media Industry," Journal of ICT Standardization, vol. 13, no. 1, pp. 41-66, Mar. 2025, doi: 10.13052/jicts2245-800X.1313.
Natural Language Processing: Classification of Web Texts Combined with Deep Learning
Q3 Decision Sciences Pub Date : 2025-03-01 DOI: 10.13052/jicts2245-800X.1312
Chenwen Zhang
With the increasing number of web texts, their classification has become an important task. This paper first analyzes text word-vector representation methods and selects bidirectional encoder representations from transformers (BERT) to extract word vectors. A bidirectional gated recurrent unit (BiGRU) and a convolutional neural network (CNN) capture the contextual and local features of the text, respectively, and an attention mechanism weights the resulting representations. Experiments were carried out on the THUCNews dataset. The results showed that, among word-to-vector (Word2vec), GloVe, and BERT, BERT obtained the best classification result. In the classification of different types of text, the average accuracy and F1 value of the BERT-BGCA method reached 0.9521 and 0.9436, respectively, outperforming other deep learning methods such as TextCNN. The results suggest that the BERT-BGCA method is effective for classifying web texts and can be applied in practice.
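The attention step in such BiGRU/CNN pipelines reduces a sequence of hidden states to one pooled vector. A plain-Python sketch of softmax-attention pooling (the scoring `query` vector is fixed here for illustration; in the actual model it would be learned, and this is not the paper's exact architecture):

```python
import math

def attention_pool(states, query):
    """Softmax-attention pooling over a sequence of hidden states.

    states: list of equal-length float vectors (e.g. BiGRU outputs per
    token); query: a vector that scores each state by dot product.
    Returns (pooled_vector, attention_weights).
    """
    scores = [sum(s * q for s, q in zip(state, query)) for state in states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]      # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(states[0])
    pooled = [sum(w * state[i] for w, state in zip(weights, states))
              for i in range(dim)]
    return pooled, weights

# Three 2-dimensional "token states"; the last scores highest and dominates.
pooled, weights = attention_pool([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]],
                                 query=[1.0, 1.0])
```

The weights sum to 1 and concentrate on the highest-scoring state, which is how attention lets the classifier emphasize the most informative tokens.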
Chenwen Zhang, "Natural Language Processing: Classification of Web Texts Combined with Deep Learning," Journal of ICT Standardization, vol. 13, no. 1, pp. 25-40, Mar. 2025, doi: 10.13052/jicts2245-800X.1312.
Deep Reinforcement Learning-Based Asymmetric Convolutional Autoencoder for Intrusion Detection
Q3 Decision Sciences Pub Date : 2025-03-01 DOI: 10.13052/jicts2245-800X.1314
Yuqin Dai;Xinjie Qian;Chunmei Yang
In recent years, intrusion detection systems (IDSs) have become a critical component of network security due to the growing number and complexity of cyber-attacks. Traditional IDS methods, including signature-based and anomaly-based detection, often struggle with the high-dimensional and imbalanced nature of network traffic, leading to suboptimal performance. Moreover, many existing models fail to handle diverse and complex attack types efficiently. In response to these challenges, we propose a novel deep learning-based IDS framework built on a deep asymmetric convolutional autoencoder (DACA) architecture. Our model combines advanced techniques for feature extraction, dimensionality reduction, and anomaly detection into a single cohesive framework. The DACA model is designed to capture complex patterns and subtle anomalies in network traffic while significantly reducing computational complexity. With this architecture, we achieve superior detection accuracy across various attack types, even on imbalanced datasets. Experimental results demonstrate that our approach surpasses several state-of-the-art methods, including HCM-SVM, D1-IDDS, and GNN-IDS, achieving high accuracy, precision, recall, and F1-score on benchmark datasets such as NSL-KDD and UNSW-NB15. The results emphasize how effectively our model identifies complex and varied attack patterns. In conclusion, the proposed IDS model offers a promising solution to the limitations of current detection systems, with significant improvements in performance and efficiency. This approach contributes to advancing the development of robust and scalable network security solutions.
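The detection principle behind autoencoder-based IDSs is that a model fit to normal traffic reconstructs normal flows well and anomalous flows poorly, so a threshold on reconstruction error flags attacks. A minimal sketch of that principle, using a per-feature mean profile as a toy stand-in for the trained autoencoder (the DACA architecture itself is not reproduced here):

```python
def fit_profile(normal_flows):
    """'Train' on normal traffic: the per-feature mean serves as a toy
    stand-in for the autoencoder's reconstruction of a normal flow."""
    n, dim = len(normal_flows), len(normal_flows[0])
    return [sum(f[i] for f in normal_flows) / n for i in range(dim)]

def reconstruction_error(flow, profile):
    # Squared error between the flow and its "reconstruction".
    return sum((x - m) ** 2 for x, m in zip(flow, profile))

def detect(flows, profile, threshold):
    """Flag flows whose reconstruction error exceeds the threshold."""
    return [reconstruction_error(f, profile) > threshold for f in flows]

normal = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]   # normal feature vectors
profile = fit_profile(normal)
# One near-normal flow and one clearly anomalous flow.
flags = detect([[1.0, 1.05], [9.0, 0.0]], profile, threshold=0.5)
```

The threshold choice governs the precision/recall trade-off the abstract reports; in practice it is tuned on held-out data rather than set by hand as here.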
Yuqin Dai; Xinjie Qian; Chunmei Yang, "Deep Reinforcement Learning-Based Asymmetric Convolutional Autoencoder for Intrusion Detection," Journal of ICT Standardization, vol. 13, no. 1, pp. 67-92, Mar. 2025, doi: 10.13052/jicts2245-800X.1314.
Validating Reliability and Security Requirements in Public Sector Infrastructure Built by Small Companies
Q3 Decision Sciences Pub Date : 2025-03-01 DOI: 10.13052/jicts2245-800X.1311
Roar E. Georgsen;Geir M. Køien
Municipal infrastructure in Norway is built primarily by small specialist companies acting as subcontractors, most of which have minimal experience working with information and communication technology (ICT). This combination of inexperience and limited resources presents a unique challenge. This paper applies model-based systems engineering (MBSE) using the systems modelling language (SysML) to combine validation of reliability and security requirements within a mission-aware, interdisciplinary context. The use case is a 6LoWPAN/CoAP-based system for urban spill water management.
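One thing a SysML model enables is requirement traceability: every reliability or security requirement must be linked to at least one verification activity. A minimal sketch of such a coverage check (all requirement and check names here are invented for illustration, not taken from the paper's model):

```python
def coverage_report(requirements, verifications):
    """Map each requirement ID to the checks that verify it and flag
    requirements with no verification, mirroring the satisfy/verify
    traceability a SysML requirements diagram captures.

    requirements: dict of requirement ID -> requirement text.
    verifications: dict of check name -> list of requirement IDs it verifies.
    """
    covered = {}
    for check, req_ids in verifications.items():
        for rid in req_ids:
            covered.setdefault(rid, []).append(check)
    uncovered = [rid for rid in requirements if rid not in covered]
    return covered, uncovered

# Hypothetical reliability and security requirements for the spill-water system.
reqs = {"REL-1": "packet delivery ratio >= 99%",
        "SEC-1": "DTLS on all CoAP links"}
checks = {"test_delivery_ratio": ["REL-1"]}
covered, uncovered = coverage_report(reqs, checks)
```

Here the report exposes that `SEC-1` has no verification activity, which is exactly the kind of gap an interdisciplinary validation review is meant to catch early.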
Roar E. Georgsen; Geir M. Køien, "Validating Reliability and Security Requirements in Public Sector Infrastructure Built by Small Companies," Journal of ICT Standardization, vol. 13, no. 1, pp. 1-24, Mar. 2025, doi: 10.13052/jicts2245-800X.1311.