
Future Generation Computer Systems-The International Journal of Escience: Latest Articles

Digital twins in public bus transport: A systematic literature review of architectures, intelligence, and interaction
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-19 | DOI: 10.1016/j.future.2026.108379
Manuel Andruccioli, Giovanni Delnevo, Roberto Girau, Paola Salomoni
The adoption of Digital Twin (DT) technologies in public transport systems, particularly bus networks, is gaining momentum as cities seek smarter, more responsive, and efficient mobility solutions. Enabled by advances in IoT, AI, and Big Data Analytics, DTs offer real-time monitoring, simulation, and optimization of transit operations. However, despite their potential, the application of DTs in bus-based public transport remains relatively underexplored and fragmented across the literature. This study presents a Systematic Literature Review (SLR) aimed at synthesizing current research on DT technologies in this domain. Specifically, it investigates architectural models, technological frameworks, and platform designs; examines how AI and machine learning models are integrated to support operational tasks; and analyzes the role of Human-Computer Interaction (HCI) in the design and usability of such systems. By identifying key trends, challenges, and research gaps, this work provides a structured overview of the current landscape. Furthermore, it outlines directions for future research in DT-enabled public transportation systems.
Citations: 0
A coclustering and computational intelligence-based approach for internet-of-things services composition
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-19 | DOI: 10.1016/j.future.2026.108381
Nawel Atmani , Mohamed Essaid Khanouche , Ahror Belaid , Abdelghani Chibani
The Internet of Things (IoT) paradigm aims at interconnecting heterogeneous devices, called smart objects, and seamlessly offering a multitude of services tailored to user requirements. With the extremely rapid growth in the number of connected objects, the IoT services composition process becomes an NP-hard challenge due to the sharp increase in the number of services that offer similar functionalities but may differ in their Quality of Service (QoS) parameter values. Various approaches have been proposed in the literature to obtain compositions with suboptimal QoS in a reasonable computation time. However, when the number of services and QoS parameters increases, the performance of these approaches is limited in terms of composition time and/or the QoS utility of the composition. To address these limitations, a coclustering-based approach for QoS-constrained services composition (CoQSC) is proposed to reduce the composition space and improve both the composition time and the composition utility. Unlike existing services composition algorithms, where the composition space is reduced only in terms of the number of candidate services, the CoQSC approach exploits a coclustering method to reduce both the number of candidate services and the number of QoS parameters considered in the composition process. This reduction allows the composition process to find suboptimal compositions in less computation time, separately using eight of the most representative and recent computational intelligence (CI) techniques in the literature. The formulation of the CoQSC approach is complemented by a complexity analysis. Simulation scenarios show that the CoQSC approach significantly improves the QoS utility of the composition and substantially decreases the composition time compared to recent and representative state-of-the-art composition approaches, making it suitable for large-scale IoT service environments.
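The abstract's central idea, shrinking the composition space along both axes (candidate services and QoS parameters), can be illustrated with a small sketch. This is not the paper's CoQSC algorithm: the variance threshold and the rounding-based service grouping below are simplified stand-ins for its coclustering step.

```python
def reduce_composition_space(qos, var_threshold=1e-3, ndigits=1):
    """Drop near-constant QoS columns, then merge services whose remaining
    QoS vectors round to the same prototype (toy stand-in for coclustering)."""
    n_params = len(qos[0])
    keep = []
    for j in range(n_params):
        col = [row[j] for row in qos]
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)
        if var > var_threshold:        # parameter still discriminates services
            keep.append(j)
    clusters = {}
    for i, row in enumerate(qos):
        key = tuple(round(row[j], ndigits) for j in keep)
        clusters.setdefault(key, []).append(i)
    return keep, clusters

# three candidate services, three QoS parameters; parameter 0 is constant and
# parameter 2 is near-constant, so only parameter 1 survives the reduction
qos = [[0.50, 0.9, 0.10],
       [0.50, 0.1, 0.10],
       [0.50, 0.9, 0.12]]
keep, clusters = reduce_composition_space(qos)
```

A downstream search then only has to compare one representative per cluster over the surviving parameters, which is the source of the composition-time savings the abstract claims.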
Citations: 0
AFMIS: An approximate floating-point multiplier based on input segmentation
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-18 | DOI: 10.1016/j.future.2026.108377
Asma Naseri Rad , Shaghayegh Vahdat , Ali Afzali-Kusha , Massoud Pedram
This paper proposes an approximate floating-point (FP) multiplier, called AFMIS, which is based on input segmentation. The AFMIS multiplier statically divides the input mantissas into several segments and performs exact multiplication on the selected segments. This approach eliminates the need for a costly leading-one detector (LOD) circuit. The static segmentation and limited segment count in the proposed design reduce the number of required post-multiplication shift values. With only a few possible shifts, a simple multiplexer can replace a full shifter. This substitution improves speed compared with that of dynamic segmentation approaches. The proposed structure allows for adjustable accuracy levels by modifying the number of bits in each segment, making it suitable for a wide range of applications. To evaluate the efficiency of the AFMIS multiplier, its hardware parameters are compared to those of an exact FP multiplier and several other approximate FP multipliers. The comparison is performed using Synopsys Design Compiler in a 7 nm technology. The results show that the proposed multiplier achieves a mean relative error distance (MRED) of 0.27% to 18.6% while improving delay, area, and power consumption by up to 81.7%, 98%, and 99%, respectively, compared to the exact FP multiplier. Furthermore, the AFMIS multiplier outperforms other approximate FP multipliers in terms of speed, area, and energy consumption at similar accuracy levels. The utility of the AFMIS multiplier is demonstrated by its application in regression and classification tasks using neural networks (NNs) and JPEG compression. The results indicate that, in most cases, the output differences between the AFMIS multiplier and the exact multiplier are negligible.
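The static-segmentation idea can be illustrated with a toy integer sketch: multiply only the upper segments of two mantissas exactly, then re-align the partial product. This shows the general principle only, not the AFMIS circuit; the 16-bit width, 8-bit segment, and pure truncation are simplifying assumptions.

```python
def approx_mantissa_mult(a, b, width=16, seg=8):
    """Approximate the product of two width-bit mantissas by exactly
    multiplying only their upper `seg` bits and re-aligning the result
    (a toy static-segmentation sketch, not the AFMIS design)."""
    shift = width - seg
    a_hi = a >> shift                  # upper segment of a
    b_hi = b >> shift                  # upper segment of b
    return (a_hi * b_hi) << (2 * shift)  # re-align the partial product

exact = 0xBEEF * 0xCAFE
approx = approx_mantissa_mult(0xBEEF, 0xCAFE)
rel_err = (exact - approx) / exact     # truncation makes approx <= exact
```

Because the segment boundaries are fixed in advance, no leading-one detector is needed and only a handful of re-alignment shifts can occur, so a small multiplexer replaces a full shifter; here an 8x8 multiplier stands in for a 16x16 one at under 2% relative error.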
Citations: 0
On-device Artificial Intelligence solutions with applications to Smart Environments
IF 7.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-14 | DOI: 10.1016/j.future.2026.108373
Fabrizio De Vita, Dario Bruneo, Sajal K. Das
Citations: 0
Leveraging cutting-edge high performance computing for large-scale applications
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-13 | DOI: 10.1016/j.future.2026.108374
Claude Tadonki , Gabriele Mencagli , Leonel Sousa
High Performance Computing (HPC) recently entered the exascale era, an important milestone in its history. High-end supercomputers and clusters with remarkable levels of performance are now commonly available for general and specific computational needs, increasing the focus on HPC and related topics. Leveraging the potential of high-speed processing units is a skill-intensive task that requires in-depth knowledge of both hardware and software. In fact, the architecture of cutting-edge HPC processors is complex and involves several specialized features provided through specific units and mechanisms, whose processing constraints and overheads can become efficiency bottlenecks. Large-scale supercomputers present greater challenges due to the significant overhead associated with interprocessor communication and synchronization. The evolution of HPC appears closely tied to the growing demand for speed from large-scale applications such as complex combinatorial problems, big data applications, the training of large-scale AI models, and high-precision simulations, to name a few. As a result, the implementation of cutting-edge techniques should remain scalable on large-scale machines for the benefit of end-users.
Citations: 0
IRL-D3QN: An intelligent multi-agent learning framework for dynamic spectrum management in vehicular networks
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-10 | DOI: 10.1016/j.future.2026.108371
Jing Wang , Wenshi Dan , Ke Yang , Xing Tang , Lingyu Yan
The proliferation of vehicular networks within intelligent transportation systems (ITS) has significantly increased the demand for efficient and adaptive spectrum resource allocation. Spectrum coordination is challenging due to high vehicle traffic, intensive communication environments, and diversified service requirements. These challenges are of particular significance in Vehicle-to-Everything (V2X) communications, where adaptive conditions call for powerful solutions. Multi-agent reinforcement learning (MARL) techniques are promising and have been applied to the management of dynamic spectrum access, but limitations including overestimated value functions, unstable policy convergence, and dependence on manually designed rewards restrict their application in practice. This paper presents IRL-D3QN, a new spectrum-management framework that combines Inverse Reinforcement Learning (IRL) with a Dueling Double Deep Q-Network (D3QN). The algorithm uses a reward-prediction network that derives intrinsic motivation from the agent's interplay with the environment, eliminating the risks of designing rewards manually and enhancing generalization across situations. The dueling network design contributes to more stable learning because it keeps the state value and the action advantages apart, while double Q-learning minimizes overestimation bias. Simulations demonstrate that IRL-D3QN improves the Vehicle-to-Infrastructure (V2I) transmission rate by 7.94% and exhibits significantly less performance degradation under heavy communication loads than state-of-the-art RL algorithms. It therefore provides a scalable and self-sufficient solution for dynamic spectrum distribution in next-generation vehicular communication systems.
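The two D3QN ingredients named in the abstract, the dueling value/advantage decomposition and the double Q-learning target, can each be sketched in a few lines. These are the generic textbook forms, not the paper's IRL-D3QN implementation, and the numeric inputs are illustrative.

```python
def dueling_q(value, advantages):
    """Dueling aggregation Q(s,a) = V(s) + A(s,a) - mean_a A(s,a);
    subtracting the mean advantage keeps the V/A split identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_q_target(reward, gamma, q_online_next, q_target_next):
    """Double Q-learning target: the online network picks the next action,
    the target network evaluates it, which curbs overestimation bias."""
    best = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return reward + gamma * q_target_next[best]

q_values = dueling_q(2.0, [1.0, -1.0, 0.0])          # V(s)=2, three actions
target = double_q_target(1.0, 0.9, [0.2, 0.8], [0.5, 0.3])
```

In a plain DQN the max over the same network both selects and evaluates the action, so noise is systematically rewarded upward; splitting the two roles, as `double_q_target` does, is what the abstract refers to when it says overestimation bias is minimized.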
Citations: 0
Striking the balance between speed and compression ratio: A fast bit-grouping algorithm and adaptive compressor selection for scientific data
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-10 | DOI: 10.1016/j.future.2026.108370
Michael Middlezong
High-performance computing (HPC) systems have enabled unprecedented advancements in scientific simulation, producing larger and larger quantities of data to be analyzed. The resulting storage and I/O overheads present a significant bottleneck to scientific workflows. While many compression algorithms have been developed to address the issue, achieving the optimal balance between compression ratio and throughput remains a challenge. Furthermore, strict error bound requirements are inadequately addressed by current solutions. This paper introduces GRASP, a fast bit-grouping compressor that leverages the local smoothness of data to achieve high throughput while maintaining competitive compression ratios under tight error constraints. For the purposes of compressor selection, we also propose a novel efficiency metric that considers both compression and I/O performance, allowing the user to make an informed decision about which compressor to use. We also develop an adaptive compression selection framework based on this metric, using sampling to determine at runtime the optimal compressor for specific use cases. Experimental results across six diverse datasets demonstrate that GRASP outperforms traditional error-bounded compressors such as SZ3 and ZFP in speed while achieving similar compression ratios under tight error bounds. Additionally, we assess scenarios in which a naive compressor selection fails to select the optimal compressor, demonstrating the importance of an adaptive compressor selection framework. These contributions provide a practical approach to balancing speed and compression ratio in modern scientific data management.
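A minimal sketch of an efficiency metric that accounts for both compression and I/O, in the spirit of (but not identical to) the metric the paper proposes: treat effective end-to-end throughput as raw data size divided by compression time plus the time to write the compressed output. The sizes and bandwidths below are illustrative.

```python
def effective_throughput(raw_bytes, comp_bytes, comp_seconds, io_bw):
    """Raw bytes moved per second of (compress + write) wall time; a
    compressor only pays off when this exceeds the raw I/O bandwidth."""
    return raw_bytes / (comp_seconds + comp_bytes / io_bw)

# 1 GB of data, 4:1 ratio, 1 s to compress, 500 MB/s storage bandwidth:
# ~667 MB/s effective vs 500 MB/s writing uncompressed, so compression wins
eff = effective_throughput(1e9, 2.5e8, 1.0, 5e8)
```

An adaptive selector in the style the abstract describes would evaluate this metric on a small sample of each dataset at runtime and pick the compressor that maximizes it, since a slow high-ratio compressor can lose to a fast moderate-ratio one once I/O time is included.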
Citations: 0
BiD-Accel: Accelerated bidimensional input-aware SDC vulnerability assessment for GPU static instructions
IF 6.2 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, THEORY & METHODS | Pub Date: 2026-01-08 | DOI: 10.1016/j.future.2026.108372
Zhenyu Qian , Lianguo Wang , Pengfei Zhang , Jianing Rao
Graphics Processing Units (GPUs) are increasingly used in safety-critical systems where Silent Data Corruptions (SDCs) pose severe risks. Selective Instruction Duplication (SID) can mitigate these risks but relies on accurate static-instruction vulnerability assessment, which is complicated by variations in input values and sizes. This paper presents a comprehensive study of how input characteristics shape instruction-level SDC vulnerability, which we quantify using the Static Instruction Error Probability (SIEP) and the SDC Occurrence rate (SDCO). We extend gpuFI-4 to enable fault injection mapping at the static-instruction level. Across 14 benchmarks and more than ten million single-, double-, and triple-bit injections, we find that SIEP is largely value-insensitive, whereas SDCO is highly value-sensitive. For register instructions, SDCO remains stable for random and structured-sparse inputs but differs markedly for all-zero, NaN, or denormal inputs. Moreover, when SIEP is size-sensitive, SDCO also tends to exhibit size sensitivity (Pearson r = 0.609, p = 1.85×10⁻⁵). We further observe that invalid-injection rates decrease with input size and that shared-memory instructions, though few, can contribute disproportionately to SDCs. Leveraging these insights, we propose BiD-Accel, a bi-dimensional, input-aware framework for accelerated static-instruction SDC vulnerability assessment. Its SIEP-driven Descending Order Sort (DOS) method achieves stable SDCO rankings with injections on only 70.4% of instructions on average, compared with 86.2% for the Random Ordering (RO) method, substantially reducing assessment cost while preserving ranking fidelity and providing actionable guidance for robust SID under input-varying GPU workloads.
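The per-instruction SDC rate can be sketched generically from fault-injection outcomes. The outcome labels and instruction identifiers below are illustrative assumptions, not the paper's exact definitions; the only property carried over from the abstract is that invalid injections are excluded from the denominator, since their rate itself varies with input size.

```python
def sdco(outcomes):
    """Per-instruction SDC Occurrence rate: SDC outcomes divided by valid
    injections. `outcomes` maps a static-instruction id to the list of
    labels observed for it in a fault-injection campaign."""
    rates = {}
    for pc, labels in outcomes.items():
        valid = [x for x in labels if x != "invalid"]
        rates[pc] = sum(x == "sdc" for x in valid) / len(valid) if valid else 0.0
    return rates

# hypothetical campaign over two static instructions
campaign = {"FADD@0x120": ["sdc", "ok", "crash", "invalid"],
            "LDG@0x1a8":  ["ok", "ok", "sdc", "sdc"]}
rates = sdco(campaign)
```

A DOS-style assessment would then rank instructions by a cheap proxy score in descending order and stop injecting once the top of the ranking stabilizes, which is where the reported reduction in injection budget comes from.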
Dynamic task transmission control and improved greedy strategy for vehicular edge computing
IF 6.2 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2026-01-07 · DOI: 10.1016/j.future.2026.108369
Sheng Cai , Jianmao Xiao , Yuanlong Cao , Qinghang Gao , Zhiyong Feng , Shuiguang Deng
With the rapid development of on-board applications, edge computing is now widely used in the Internet of Vehicles, enabling vehicles with limited resources to offload tasks to the edge for execution via computation offloading. However, current research methods often struggle to adapt to dynamic scenarios because of model training costs and vehicle mobility, and they also neglect load balancing in high-load situations. To improve users' quality of experience while balancing the load of edge servers, this paper proposes an improved greedy strategy for computation offloading. First, to mitigate potential communication overload during peak hours, this study analyzes the relationship between transmission scheduling and execution queues and develops a dynamic task transmission control method. Second, explicit modeling of round-trip communication reliability in mobile environments extends the vehicle interconnection model. Subsequently, by analyzing the structure of the optimal solution for total-latency optimization, the priority of offloaded tasks is classified. A multi-perspective analysis of task offloading is then conducted, and a greedy strategy is adopted to ensure both quality of experience and load balancing at the edge. Finally, comparative experiments on real-world datasets validate the efficiency of the proposed method and model under high-mobility, high-load scenarios.
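A latency-aware greedy assignment of this general kind can be sketched as follows. This is a minimal illustration, not the paper's exact priority classification or transmission-control scheme: the server speeds, task sizes, and the largest-task-first ordering are all illustrative assumptions.

```python
def greedy_offload(tasks, servers):
    """Greedily assign each task to the edge server that would finish
    it earliest given current backlogs, which both reduces completion
    latency and tends to balance load across servers.

    tasks:   list of task sizes in CPU cycles.
    servers: dict mapping server id -> processing speed (cycles/s).
    Returns (assignment: task index -> server, backlog per server in s).
    """
    backlog = {s: 0.0 for s in servers}  # accumulated busy time per server
    assignment = {}
    # Largest-task-first is a common greedy heuristic (an assumption here).
    for i, cycles in sorted(enumerate(tasks), key=lambda t: -t[1]):
        best = min(servers, key=lambda s: backlog[s] + cycles / servers[s])
        backlog[best] += cycles / servers[best]
        assignment[i] = best
    return assignment, backlog

# Toy example: two edge servers of different speeds, four tasks.
tasks = [8e9, 2e9, 4e9, 6e9]              # CPU cycles per task
servers = {"edge-A": 2e9, "edge-B": 1e9}  # cycles per second
assign, load = greedy_offload(tasks, servers)
print(assign, load)  # edge-A takes tasks 0, 2, 1 (7.0 s); edge-B takes task 3 (6.0 s)
```

Choosing the server by projected finish time, rather than by raw speed, is what keeps the slower server usefully loaded instead of idle.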
LE OS: A lightweight edge operating system for industrial internet of things under resource constraints
IF 6.2 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, THEORY & METHODS · Pub Date: 2026-01-07 · DOI: 10.1016/j.future.2025.108360
Xianhui Liu , Yangyang Yang , Chenlin Zhu , Yihan Hu , Weidong Zhao
With the rise of Industry 4.0 and edge computing, intelligent manufacturing has undergone rapid development. However, existing research on operating systems for resource-constrained edge devices still exhibits significant limitations: mainstream operating systems require large hardware resources and lack adaptability for edge deployment; the industrial Internet lacks a unified and efficient scheduling and management framework for large-scale devices; and traditional monolithic systems suffer from tight component coupling, where a single component failure can cause system-wide crashes, threatening production stability. To address these challenges, this paper proposes LE OS, a lightweight edge operating system tailored for resource-constrained industrial Internet environments. LE OS leverages container technology to encapsulate system-level components into functional system containers and integrates them with the seL4 microkernel, forming a lightweight, containerized microkernel operating system. Experimental evaluation shows that LE OS improves CPU and I/O performance by 10%-40% and reduces system-level memory usage by over 70% compared with mainstream operating systems, while maintaining high resource efficiency and strong isolation. These results demonstrate that LE OS effectively overcomes the limitations of existing systems and provides a practical and scalable foundation for next-generation industrial Internet edge operating systems.
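The failure-isolation motivation behind the containerized design can be illustrated with a conceptual sketch (plain Python, not LE OS or seL4 code): a supervisor runs components in separate fault domains, so one component's crash is confined instead of taking down the whole system, as a tightly coupled monolith would.

```python
def run_isolated(components):
    """Run each named component in its own fault domain: a failure in
    one is recorded but does not stop the others. Python exceptions
    stand in for the faults that container isolation would confine."""
    status = {}
    for name, start in components.items():
        try:
            start()
            status[name] = "ok"
        except Exception:
            status[name] = "failed"  # fault confined to this component
    return status

def faulty():
    raise RuntimeError("simulated component fault")

# Three hypothetical system containers; only "storage" misbehaves.
components = {"net": lambda: None, "storage": faulty, "ui": lambda: None}
print(run_isolated(components))  # → {'net': 'ok', 'storage': 'failed', 'ui': 'ok'}
```

In a real microkernel-based system the boundary is enforced by hardware-backed address spaces and capabilities rather than exception handling, but the observable property is the same: the faulting component is reported, the rest keep running.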