
Latest Publications in IEEE Transactions on Computers

An On-Board Executable Pareto-Based Iterated Local Search Algorithm for Embedded Multi-Core Processor Task Scheduling
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-09-01. DOI: 10.1109/TC.2025.3603699
Qinglin Zhao;Lixin Zhang;Qi Pan;Kunbo Cui;Mingqi Zhao;Fuze Tian;Bin Hu
The advancement of wearable electronic technology has facilitated the integration of smart wearable devices into artificial intelligence (AI)-driven assisted medical diagnosis. Embedded multi-core processors (MPs) have gradually emerged as pivotal hardware components for smart wearable medical diagnostic devices due to their high performance and flexibility. However, embedded MPs face the challenge of jointly optimizing performance, power consumption, and load balancing. In response, we introduce a Pareto-based iterated local search (PILS) algorithm for task scheduling, which systematically optimizes multiple objectives, alongside a task list model that reduces the dimension of the decision space and enhances scheduling performance. In addition, we present a two-stage discretization scheme to ensure that the proposed algorithm offers meaningful guidance throughout the scheduling process. Simulation and on-board testing results show that the proposed algorithm effectively optimizes energy consumption, task execution time, and load balancing in embedded MP task scheduling, indicating its potential to enhance the performance of smart wearable medical diagnostic devices powered by embedded MPs.
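For readers who want a concrete picture of the multi-objective search pattern described above, the following is a minimal sketch of a Pareto-archived iterated local search for task-to-core assignment. The task costs, the toy energy model, and the single perturbation move are illustrative assumptions, not the paper's PILS algorithm or task list model.

```python
import random

# Toy objective: assign each task to a core, minimizing (makespan, energy, imbalance).
# Costs and the energy model are illustrative stand-ins for the paper's models.
TASK_COST = [4, 2, 7, 3, 5, 1, 6]   # hypothetical per-task execution cost
NUM_CORES = 4

def evaluate(assign):
    loads = [0.0] * NUM_CORES
    for task, core in enumerate(assign):
        loads[core] += TASK_COST[task]
    makespan = max(loads)
    energy = sum(l ** 1.5 for l in loads)          # toy convex energy model
    imbalance = max(loads) - min(loads)
    return (makespan, energy, imbalance)

def dominates(a, b):
    """Pareto dominance: a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def perturb(assign):
    """Local move: reassign one random task to a random core."""
    new = list(assign)
    new[random.randrange(len(new))] = random.randrange(NUM_CORES)
    return new

def pareto_ils(iterations=2000):
    archive = []   # mutually non-dominated (assignment, objectives) pairs
    current = [random.randrange(NUM_CORES) for _ in TASK_COST]
    for _ in range(iterations):
        candidate = perturb(current)
        obj = evaluate(candidate)
        if not any(dominates(o, obj) for _, o in archive):
            archive = [(a, o) for a, o in archive if not dominates(obj, o)]
            archive.append((candidate, obj))
            current = candidate            # accept non-dominated moves
    return archive

if __name__ == "__main__":
    for assign, obj in pareto_ils():
        print(assign, "-> (makespan, energy, imbalance) =", obj)
```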
{"title":"An On-Board Executable Pareto-Based Iterated Local Search Algorithm for Embedded Multi-Core Processor Task Scheduling","authors":"Qinglin Zhao;Lixin Zhang;Qi Pan;Kunbo Cui;Mingqi Zhao;Fuze Tian;Bin Hu","doi":"10.1109/TC.2025.3603699","DOIUrl":"https://doi.org/10.1109/TC.2025.3603699","url":null,"abstract":"The advancement of wearable electronic technology has facilitated the integration of smart wearable devices into artificial intelligence (AI)-driven medical assisted diagnosis. Embedded multi-core processors (MPs) have gradually emerged as pivotal hardware components for smart wearable medical diagnostic devices due to their high performance and flexibility. However, embedded MPs face the challenge of balancing performance, power consumption, and load-balancing. In response, we introduce a Pareto-based iterated local search (PILS) algorithm for task scheduling, which systematically optimizes multiple objectives, alongside a task list model to reduce the dimension of the decision space and enhance scheduling performance. In addition, we present a two-stage discretization scheme to ensure that the proposed algorithm offers meaningful guidance throughout the scheduling process. Simulation and on-board testing results show that the proposed algorithm effectively optimizes energy consumption, task execution time, and load-balancing in embedded MPs task scheduling, indicating the potential of the proposed algorithm in enhancing the performance of smart wearable medical diagnostic devices powered by embedded MPs.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 11","pages":"3696-3709"},"PeriodicalIF":3.8,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145248061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
FSA-Hash: Flow-Size-Aware Sketch Hashing for Software Switches
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-09-01. DOI: 10.1109/TC.2025.3603716
Fuliang Li;Kejun Guo;Yiming Lv;Jiaxing Shen;Yuting Liu;Xingwei Wang;Jiannong Cao
In modern data centers and enterprise networks, software switches have become critical components for achieving flexible and efficient network management. Due to resource constraints in software switches, sketches have emerged as a promising approach for network traffic measurement. However, their accuracy is often impacted by hash collisions. Existing hash functions treat all collisions equally, failing to account for the differing impacts of collisions involving elephant flows versus mouse flows. We propose FSA-Hash, a novel flow-size-aware hashing scheme that separates elephant flows from each other and from mouse flows, minimizing the most detrimental collisions. FSA-Hash is designed based on two insights: separating elephant flows from mouse flows avoids overestimating mouse flows, while separating elephant flows from each other enables accurate heavy-hitter detection. We implement FSA-Hash using machine learning models trained on network traffic data (LFSA-Hash), and also design a lightweight online variant (OLFSA-Hash) that learns the hash model solely from sketch queries on the software switch, obviating traffic collection overheads. Evaluations across four sketches and two tasks demonstrate FSA-Hash’s superior accuracy over standard hash functions. Moreover, OLFSA-Hash closely matches LFSA-Hash’s performance, making it an attractive option for adaptively refining the hash model without monitoring traffic.
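As a rough illustration of the flow-size-aware idea, the toy sketch below routes flows predicted to be elephants into a dedicated counter region so they do not collide with mice and get more counter space per flow. The predict_elephant oracle, table sizes, and single-row counter layout are assumptions standing in for FSA-Hash's learned hash model and the underlying sketch structures.

```python
import hashlib

# Toy flow-size-aware sketch: predicted elephant flows hash into a dedicated
# counter region, so heavy flows never inflate the counters of mouse flows.
ELEPHANT_SLOTS = 64
MOUSE_SLOTS = 1024
elephant_counters = [0] * ELEPHANT_SLOTS
mouse_counters = [0] * MOUSE_SLOTS

KNOWN_HEAVY = {"10.0.0.1->10.0.0.9"}      # assumption: offline-learned heavy keys

def predict_elephant(flow_id: str) -> bool:
    return flow_id in KNOWN_HEAVY

def slot(flow_id: str, modulus: int) -> int:
    digest = hashlib.sha256(flow_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % modulus

def update(flow_id: str, bytes_seen: int) -> None:
    if predict_elephant(flow_id):
        elephant_counters[slot(flow_id, ELEPHANT_SLOTS)] += bytes_seen
    else:
        mouse_counters[slot(flow_id, MOUSE_SLOTS)] += bytes_seen

def query(flow_id: str) -> int:
    if predict_elephant(flow_id):
        return elephant_counters[slot(flow_id, ELEPHANT_SLOTS)]
    return mouse_counters[slot(flow_id, MOUSE_SLOTS)]

if __name__ == "__main__":
    update("10.0.0.1->10.0.0.9", 1500)
    update("10.0.0.2->10.0.0.3", 60)
    print(query("10.0.0.1->10.0.0.9"), query("10.0.0.2->10.0.0.3"))
```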
{"title":"FSA-Hash: Flow-Size-Aware Sketch Hashing for Software Switches","authors":"Fuliang Li;Kejun Guo;Yiming Lv;Jiaxing Shen;Yuting Liu;Xingwei Wang;Jiannong Cao","doi":"10.1109/TC.2025.3603716","DOIUrl":"https://doi.org/10.1109/TC.2025.3603716","url":null,"abstract":"In modern data centers and enterprise networks, software switches have become critical components for achieving flexible and efficient network management. Due to resource constraints in software switches, sketches have emerged as a promising approach for network traffic measurement. However, their accuracy is often impacted by hash collisions. Existing hash functions treat all collisions equally, failing to account for the differing impacts of collisions involving elephant flows versus mouse flows. We propose FSA-Hash, a novel flow-size-aware hashing scheme that separates elephant flows from each other and from mouse flows, minimizing the most detrimental collisions. FSA-Hash is designed based on two insights: separating elephant flows from mouse flows avoids overestimating mouse flows, while separating elephant flows from each other enables accurate heavy-hitter detection. We implement FSA-Hash using machine learning models trained on network traffic data (LFSA-Hash), and also design a lightweight online variant (OLFSA-Hash) that learns the hash model solely from sketch queries on the software switch, obviating traffic collection overheads. Evaluations across four sketches and two tasks demonstrate FSA-Hash’s superior accuracy over standard hash functions. Moreover, OLFSA-Hash closely matches LFSA-Hash’s performance, making it an attractive option for adaptively refining the hash model without monitoring traffic.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 11","pages":"3736-3749"},"PeriodicalIF":3.8,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145248041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
SCC: Synchronization Congestion Control for Multi-Tenant Learning Over Geo-Distributed Clouds
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-09-01. DOI: 10.1109/TC.2025.3604486
Chengxi Gao;Fuliang Li;Kejiang Ye;Yang Wang;Pengfei Wang;Xingwei Wang;Chengzhong Xu
Distributed machine learning over geo-distributed clouds enables joint training on data located in different regions, alleviating the burden of transferring large volumes of training data and thereby saving considerable bandwidth. However, the limited capacity of WAN links slows down inter-cloud communication, which significantly decelerates the synchronization of distributed machine learning over geo-distributed clouds. In addition, multi-tenancy in clouds means multiple training tasks run simultaneously, and their synchronizations constantly compete with each other for the limited WAN bandwidth, further degrading the training performance of each task. While existing works optimize synchronization through techniques such as gradient compression and multi-resource interleaving, none of them targets the synchronization congestion caused by multi-tenant learning, which results in inferior training performance. To solve these problems, we propose a simple but effective scheme, SCC, for fast and efficient multi-tenant learning via synchronization congestion control. SCC monitors cross-cloud network conditions and evaluates the synchronization congestion level based on the round-trip transmission time of each synchronization. SCC then alleviates congestion by probabilistically controlling the synchronization frequency according to the congestion level. Extensive experiments are conducted on our testbed consisting of 16 NVIDIA V100 GPUs to evaluate the performance of SCC, and comparison results show that SCC can reduce the average training completion time and makespan by up to 28.6% and 43.2% over SAP-SGD [1]. Targeted experiments further demonstrate the effectiveness and robustness of SCC.
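The control loop described above can be pictured with the toy sketch below, which maps each synchronization's measured round-trip time to a congestion level and probabilistically skips the next synchronization when congestion is high. The baseline RTT, thresholds, and probability mapping are illustrative assumptions rather than SCC's actual calibration.

```python
import random
import time

# Toy synchronization congestion control: measure the round-trip time of each
# synchronization, map it to a congestion level, and probabilistically defer the
# next synchronization when congestion is high.
BASELINE_RTT = 0.05          # seconds, assumed uncongested WAN round trip
MAX_SKIP_PROB = 0.8

def congestion_level(measured_rtt: float) -> float:
    """0.0 at or below the baseline, approaching 1.0 as RTT grows."""
    excess = max(0.0, measured_rtt - BASELINE_RTT)
    return excess / (excess + BASELINE_RTT)

def should_synchronize(measured_rtt: float) -> bool:
    skip_prob = MAX_SKIP_PROB * congestion_level(measured_rtt)
    return random.random() >= skip_prob

def synchronize(push_and_pull):
    start = time.monotonic()
    push_and_pull()                       # e.g., send gradients, receive model
    return time.monotonic() - start       # observed round-trip time

if __name__ == "__main__":
    rtt = BASELINE_RTT
    for step in range(10):
        if should_synchronize(rtt):
            rtt = synchronize(lambda: time.sleep(0.05 + 0.05 * random.random()))
            print(f"step {step}: synced, rtt={rtt:.3f}s")
        else:
            print(f"step {step}: skipped sync (congestion {congestion_level(rtt):.2f})")
```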
{"title":"SCC: Synchronization Congestion Control for Multi-Tenant Learning Over Geo-Distributed Clouds","authors":"Chengxi Gao;Fuliang Li;Kejiang Ye;Yang Wang;Pengfei Wang;Xingwei Wang;Chengzhong Xu","doi":"10.1109/TC.2025.3604486","DOIUrl":"https://doi.org/10.1109/TC.2025.3604486","url":null,"abstract":"Distributed machine learning over geo-distributed clouds enables joint training of data located in different regions, alleviating the burden of transferring large volumes of training datasets, which greatly saves bandwidth. However, the limited capacity of WAN links slows down the inter-cloud communications, which significantly decelerates the synchronization of distributed machine learning over geo-distributed clouds. Besides, the multi-tenancy in clouds results in multiple training tasks running simultaneously, whose synchronizations consistently compete for the limited WAN bandwidth with each other, which further aggravates the training performance of each task. While existing works optimize synchronizations through techniques like gradient compression, multi-resource interleaving and so on, none of them targets at the synchronization congestion especially due to multi-tenant learning, which results in inferior training performance. To solve these problems, we propose a simple but effective scheme, SCC, for fast and efficient multi-tenant learning via synchronization congestion control. SCC monitors the cross-cloud network conditions and evaluates the synchronization congestion level based on the round-trip transmission time for each synchronization. Then SCC alleviates synchronization congestion via controlling the synchronization frequency according to the synchronization congestion level in a probabilistic way. Extensive experiments are conducted within our testbeds consisted of 16 NVIDIA V100 GPUs to evaluate the performance of SCC, and comparison results show that SCC can reduce the average training completion time and makespan by up to 28.6% and 43.2% over SAP-SGD <xref>[1]</xref>. Targeted experiments are conducted to demonstrate the effectiveness and robustness of SCC.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 11","pages":"3911-3924"},"PeriodicalIF":3.8,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145248095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
PRRQ: Privacy-Preserving Resilient RkNN Query Over Encrypted Outsourced Multiattribute Data
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-09-01. DOI: 10.1109/TC.2025.3603688
Jing Wang;Haiyong Bao;Na Ruan;Qinglei Kong;Cheng Huang;Hong-Ning Dai
Traditional reverse k-nearest neighbor (RkNN) query schemes typically assume that users are available online in real-time for interactive key reception, overlooking scenarios where users might be offline. Moreover, existing privacy-preserving RkNN query schemes primarily focus on user features or spatial data, neglecting the significance of user reputation values. To address these limitations, we propose a privacy-preserving resilient RkNN query scheme over encrypted outsourced multi-attribute data (PRRQ). Specifically, to mitigate the challenges posed by resilient online presence (i.e., non-real-time online) of users for interactive key reception, we incorporate a non-interactive key exchange (NIKE) protocol and the Diffie-Hellman two-party key exchange algorithm to propose a multi-party NIKE algorithm (2K-NIKE), facilitating non-interactive key reception for multiple users. Considering the privacy leakage issues, PRRQ encodes original multi-attribute data (i.e., spatial, feature, and reputation values) alongside query requests based on formalized criteria. Additionally, we integrate the proposed 2K-NIKE and the improved symmetric homomorphic encryption (iSHE) algorithms to encrypt them. Furthermore, catering to the requirements of ciphertext-based RkNN queries, we propose a private RkNN query eligibility-checking (PREC) algorithm and a private reputation-verifying (PRRV) algorithm, which validate the compliance of encrypted outsourced multi-attribute data with query requests. Security analysis demonstrates that PRRQ achieves simulation-based security under an honest-but-curious model. Experimental results show that PRRQ offers superior computational efficiency compared to comparative schemes.
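The 2K-NIKE construction itself is beyond a short sketch, but the two-party Diffie-Hellman exchange it builds on can be illustrated as below. The group parameters are toy values chosen only so the example runs; they are not secure, and a real deployment would use standardized groups plus a key-derivation function over the shared secret.

```python
import secrets

# Two-party Diffie-Hellman key agreement, the classical building block that the
# multi-party, non-interactive 2K-NIKE construction extends. P is a Mersenne
# prime used purely for illustration; do not use these parameters in practice.
P = 2 ** 127 - 1
G = 3

def keypair():
    private = secrets.randbelow(P - 2) + 1     # secret exponent
    public = pow(G, private, P)                # value safe to publish
    return private, public

def shared_secret(my_private: int, their_public: int) -> int:
    return pow(their_public, my_private, P)

if __name__ == "__main__":
    a_priv, a_pub = keypair()                  # e.g., a query user
    b_priv, b_pub = keypair()                  # e.g., a data owner
    assert shared_secret(a_priv, b_pub) == shared_secret(b_priv, a_pub)
    print("both sides derived the same key")
```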
{"title":"PRRQ: Privacy-Preserving Resilient RkNN Query Over Encrypted Outsourced Multiattribute Data","authors":"Jing Wang;Haiyong Bao;Na Ruan;Qinglei Kong;Cheng Huang;Hong-Ning Dai","doi":"10.1109/TC.2025.3603688","DOIUrl":"https://doi.org/10.1109/TC.2025.3603688","url":null,"abstract":"Traditional reverse k-nearest neighbor (RkNN) query schemes typically assume that users are available online in real-time for interactive key reception, overlooking scenarios where users might be offline. Moreover, existing privacy-preserving RkNN query schemes primarily focus on user features or spatial data, neglecting the significance of user reputation values. To address these limitations, we propose a privacy-preserving resilient RkNN query scheme over encrypted outsourced multi-attribute data (PRRQ). Specifically, to mitigate the challenges posed by resilient online presence (i.e., non-real-time online) of users for interactive key reception, we incorporate a non-interactive key exchange (NIKE) protocol and the Diffie-Hellman two-party key exchange algorithm to propose a multi-party NIKE algorithm (2K-NIKE), facilitating non-interactive key reception for multiple users. Considering the privacy leakage issues, PRRQ encodes original multi-attribute data (i.e., spatial, feature, and reputation values) alongside query requests based on formalized criteria. Additionally, we integrate the proposed 2K-NIKE and the improved symmetric homomorphic encryption (iSHE) algorithms to encrypt them. Furthermore, catering to the requirements of ciphertext-based RkNN queries, we propose a private RkNN query eligibility-checking (PREC) algorithm and a private reputation-verifying (PRRV) algorithm, which validate the compliance of encrypted outsourced multi-attribute data with query requests. Security analysis demonstrates that PRRQ achieves simulation-based security under an <italic>honest-but-curious</i> model. Experimental results show that PRRQ offers superior computational efficiency compared to comparative schemes.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 11","pages":"3652-3666"},"PeriodicalIF":3.8,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145248046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhancing In-Network Computing Deployment via Collaboration Across Planes
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-08-29. DOI: 10.1109/TC.2025.3603730
Xiaoquan Zhang;Lin Cui;WaiMing Lau;Fung Po Tso;Yuhui Deng;Weijia Jia
The new paradigm of in-network computing (INC) permits service computation to be executed within network paths, rather than solely on dedicated servers. Although the programmable data plane has showcased notable performance advantages for INC application deployments, its effectiveness is constrained by resource limitations, potentially impeding the expressiveness and scalability of these deployments. Conversely, delegating computational tasks to the control plane, supported by general-purpose servers with abundant resources, offers increased flexibility. Nonetheless, this strategy compromises efficiency to a considerable extent, particularly when the system operates under heavy load. To simultaneously exploit the efficiency of the data plane and the flexibility of the control plane, we propose Carlo, a cross-plane collaborative optimization framework that supports the network-wide deployment of multiple INC applications across both the control and data planes. Carlo first analyzes the resource requirements of various INC applications across different planes. It then establishes mathematical models for cross-plane resource allocation and automatically generates solutions using the proposed algorithms. We have implemented a prototype of Carlo on Intel Tofino ASIC switches and DPDK. Experimental results demonstrate that Carlo can effectively trade off between computation time and deployment performance while avoiding performance degradation.
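As a loose illustration of the cross-plane placement decision, the sketch below greedily keeps applications on the resource-constrained data plane while switch budgets allow and spills the rest to the control plane. The resource fields, budgets, and greedy ordering are assumptions for illustration; Carlo derives its placements from formal optimization models rather than this heuristic.

```python
from dataclasses import dataclass

# Toy cross-plane placement: prefer the programmable data plane (fast but
# resource-constrained) while switch budgets allow, otherwise fall back to the
# control plane (flexible but slower). All numbers are illustrative assumptions.
@dataclass
class INCApp:
    name: str
    sram_kb: int        # switch memory needed if placed on the data plane
    stages: int         # pipeline stages needed on the data plane
    cpu_cores: float    # cores needed if placed on the control plane

SWITCH_SRAM_KB = 1200
SWITCH_STAGES = 12
SERVER_CORES = 16.0

def place(apps):
    sram, stages, cores = SWITCH_SRAM_KB, SWITCH_STAGES, SERVER_CORES
    plan = {}
    # Place the most memory-hungry apps first so they get data-plane slots.
    for app in sorted(apps, key=lambda a: a.sram_kb, reverse=True):
        if app.sram_kb <= sram and app.stages <= stages:
            plan[app.name] = "data plane"
            sram -= app.sram_kb
            stages -= app.stages
        elif app.cpu_cores <= cores:
            plan[app.name] = "control plane"
            cores -= app.cpu_cores
        else:
            plan[app.name] = "rejected"
    return plan

if __name__ == "__main__":
    apps = [INCApp("caching", 800, 6, 4.0),
            INCApp("aggregation", 500, 5, 6.0),
            INCApp("load-balancer", 200, 3, 2.0)]
    print(place(apps))
```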
{"title":"Enhancing In-Network Computing Deployment via Collaboration Across Planes","authors":"Xiaoquan Zhang;Lin Cui;WaiMing Lau;Fung Po Tso;Yuhui Deng;Weijia Jia","doi":"10.1109/TC.2025.3603730","DOIUrl":"https://doi.org/10.1109/TC.2025.3603730","url":null,"abstract":"The new paradigm of In-network computing (INC) permits service computation to be executed within network paths, rather than solely on dedicated servers. Although the programmable data plane has showcased notable performance advantages for INC application deployments, its effectiveness is constrained by resource limitations, potentially impeding the expressiveness and scalability of these deployments. Conversely, delegating computational tasks to the control plane, supported by general-purpose servers with abundant resources, offers increased flexibility. Nonetheless, this strategy compromises efficiency to a considerable extent, particularly when the system operates under heavy load. To simultaneously exploit the efficiency of data plane and the flexibility of control plane, we propose <italic>Carlo</i>, a cross-plane collaborative optimization framework to support the network-wide deployment of multiple INC applications across both the control and data plane. <italic>Carlo</i> first analyzes resource requirements of various INC applications across different planes. It then establishes mathematical models for resource allocation in cross-plane and automatically generates solutions using proposed algorithms. We have implemented the prototype of <italic>Carlo</i> on Intel Tofino ASIC switches and DPDK. Experimental results demonstrate that <italic>Carlo</i> can effectively trade off between computation time and deployment performance while avoiding performance degradation.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 11","pages":"3805-3817"},"PeriodicalIF":3.8,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145248040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Scalable Encrypted Deduplication Based on Location-Hiding Secret Sharing of Data Keys
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-08-29. DOI: 10.1109/TC.2025.3603710
Guanxiong Ha;Yuchen Chen;Chunfu Jia;Keyan Chen;Rongxi Wang;Qiaowen Jia
Encrypted deduplication is attractive because it can provide high storage efficiency while protecting data privacy. Most existing schemes achieve encrypted deduplication against brute-force attacks (BFAs) based on server-aided encryption. Unfortunately, the centralized key server in server-aided encryption can potentially become a single point of failure. To this end, distributed server-aided encryption is presented, which splits a system-level master key into multiple shares and distributes them across several key servers. However, it is hard to improve security and scalability with this method simultaneously. This paper presents a secure and scalable encrypted deduplication scheme ScalaDep. ScalaDep achieves a new design paradigm centered on location-hiding secret sharing of data keys. As the number of deployed key servers increases, the attack cost of adversaries increases while the number of requests handled by each key server decreases, enhancing both scalability and security. Furthermore, we propose a two-phase duplicate detection method for our paradigm, which utilizes short hashes and key identifiers to achieve secure duplicate detection against BFAs. Additionally, based on the allreduce algorithm, ScalaDep enables all key servers to collaboratively record the number of client requests and resist online BFAs by enforcing rate limiting. Security analysis and performance evaluation demonstrate the security and efficiency of ScalaDep.
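The secret-sharing primitive underlying the scheme can be illustrated with a plain (t, n) Shamir split of a per-file data key across key servers, as sketched below; ScalaDep's location-hiding layout and two-phase duplicate detection add further machinery on top. The field prime and parameters are illustrative choices.

```python
import secrets

# (t, n) Shamir secret sharing of a data key across key servers: any t shares
# reconstruct the key, fewer reveal nothing about it.
PRIME = 2 ** 127 - 1   # prime field large enough for a 15-byte key

def split(secret: int, n: int, t: int):
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation of the polynomial
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1                  # Lagrange basis evaluated at x = 0
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    key = secrets.randbits(120)                  # per-file data key
    shares = split(key, n=5, t=3)                # spread over 5 key servers
    assert reconstruct(shares[:3]) == key        # any 3 shares suffice
    assert reconstruct(shares[2:]) == key
    print("data key reconstructed from 3 of 5 shares")
```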
{"title":"Scalable Encrypted Deduplication Based on Location-Hiding Secret Sharing of Data Keys","authors":"Guanxiong Ha;Yuchen Chen;Chunfu Jia;Keyan Chen;Rongxi Wang;Qiaowen Jia","doi":"10.1109/TC.2025.3603710","DOIUrl":"https://doi.org/10.1109/TC.2025.3603710","url":null,"abstract":"Encrypted deduplication is attractive because it can provide high storage efficiency while protecting data privacy. Most existing schemes achieve encrypted deduplication against brute-force attacks (BFAs) based on server-aided encryption. Unfortunately, the centralized key server in server-aided encryption can potentially become a single point of failure. To this end, distributed server-aided encryption is presented, which splits a system-level master key into multiple shares and distributes them across several key servers. However, it is hard to improve security and scalability with this method simultaneously. This paper presents a secure and scalable encrypted deduplication scheme ScalaDep. ScalaDep achieves a new design paradigm centered on location-hiding secret sharing of data keys. As the number of deployed key servers increases, the attack cost of adversaries increases while the number of requests handled by each key server decreases, enhancing both scalability and security. Furthermore, we propose a two-phase duplicate detection method for our paradigm, which utilizes short hashes and key identifiers to achieve secure duplicate detection against BFAs. Additionally, based on the allreduce algorithm, ScalaDep enables all key servers to collaboratively record the number of client requests and resist online BFAs by enforcing rate limiting. Security analysis and performance evaluation demonstrate the security and efficiency of ScalaDep.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 11","pages":"3710-3721"},"PeriodicalIF":3.8,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145248079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Adaptive Federated Learning Through Dynamic Model Splitting and Multi-Objective Clustering
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-08-29. DOI: 10.1109/TC.2025.3603681
Ousman Manjang;Yanlong Zhai;Jun Shen;Adil Sarwar;Liehuang Zhu
Federated Learning (FL) enables multiple parties to collaboratively train models without centralizing data, making it ideal for privacy-sensitive applications. However, the heterogeneity and resource limitations of devices pose a critical challenge to the collaborative training process, incurring a significant communication cost to achieve convergence. Existing research has attempted to use clustering to address these issues. However, these approaches relied on a single clustering objective, limiting their effectiveness in a multifaceted heterogeneous environment. In this paper, we propose FedMSC, which employs an evolutionary multi-objective optimization approach to organize clients into distinct clusters based on their similarities in independent factors such as response speed and local model updates. FedMSC iteratively generates Pareto-optimal cluster solutions, ensuring that no single solution outperforms another, while concurrently optimizing multiple objectives. Moreover, to account for computational diversity across clusters, FedMSC adopts a multi-exit training strategy in which the model is divided into blocks of layers, each equipped with auxiliary classifiers for early inference. Meanwhile, we devise a unique algorithm that dynamically assigns model blocks to devices through combinatorial optimization over devices’ resource capabilities and the computational requirements of the blocks. Experimental results demonstrate that FedMSC significantly reduces communication costs while maintaining accuracy comparable to the baselines.
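To make the multi-exit idea concrete, the sketch below assigns each device the longest prefix of model blocks that fits its compute budget, so weaker devices exit at an earlier auxiliary classifier. The per-block costs and device budgets are assumed values, and FedMSC's actual assignment is a joint combinatorial optimization rather than this greedy rule.

```python
# Toy multi-exit block assignment: each device trains the longest prefix of
# model blocks whose combined cost fits its compute budget, exiting at the
# auxiliary classifier after its last block. Costs and budgets are assumptions.
BLOCK_COST = [1.0, 1.5, 2.5, 4.0]     # relative compute cost per model block

def assign_blocks(device_budgets):
    plan = {}
    for device, budget in device_budgets.items():
        spent, depth = 0.0, 0
        for cost in BLOCK_COST:
            if spent + cost > budget:
                break
            spent += cost
            depth += 1
        plan[device] = max(depth, 1)   # every device trains at least block 1
    return plan

if __name__ == "__main__":
    budgets = {"phone-a": 2.0, "tablet-b": 5.5, "edge-gpu-c": 10.0}
    for device, depth in assign_blocks(budgets).items():
        print(f"{device}: trains blocks 1..{depth}, exits at classifier {depth}")
```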
{"title":"Adaptive Federated Learning Through Dynamic Model Splitting and Multi-Objective Clustering","authors":"Ousman Manjang;Yanlong Zhai;Jun Shen;Adil Sarwar;Liehuang Zhu","doi":"10.1109/TC.2025.3603681","DOIUrl":"https://doi.org/10.1109/TC.2025.3603681","url":null,"abstract":"Federated Learning (FL) enables multiple parties to collaboratively train models without centralizing data, making it ideal for privacy-sensitive applications. However, the heterogeneity and resource limitation of devices pose a critical challenge to the collaborative training process, incurring a significant communication cost to achieve convergence. Existing research has attempted to use clustering to address these issues. However, these approaches relied on a single clustering objective, limiting their effectiveness in a multifaceted heterogeneous environment. In this paper, we propose FedMSC, which employs an evolutionary-based multi-objective optimization approach to organize clients into distinct clusters via their similarities on independent factors such as response speed and local model updates. FedMSC iteratively generates Pareto-optimal cluster solutions, ensuring that no single solution outperforms another, while concurrently optimizing multiple objectives. Moreover, to account for computational diversity across clusters, FedMSC adopts a multi-exit training strategy in which the model is divided into blocks of layers, each equipped with auxiliary classifiers for early inference. Meanwhile, we devise a unique algorithm which dynamically assigns model blocks to devices through combinatorial optimization of devices’ resource capabilities and the computational requirements of the blocks. Experimental results demonstrate that FedMSC significantly reduce communication costs while maintaining a comparable accuracy to the baselines.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 12","pages":"3953-3967"},"PeriodicalIF":3.8,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Caravan: Incentive-Driven Account Migration via Transaction Aggregation in Sharded Blockchain
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-08-29. DOI: 10.1109/TC.2025.3603672
Yu Tao;Shouchen Zhou;Lu Zhou;Zhe Liu
Blockchain sharding is a promising solution for scalability but struggles to reach the expected performance due to the high ratio of cross-shard transactions. Account migration has emerged as a critical approach to optimizing shard performance. However, existing migration solutions suffer from inefficient handling of queued withdrawal transactions from a migrating account and an inadequate priority mechanism for migration transactions, resulting in prolonged transaction makespan and reduced system throughput. This paper proposes Caravan, a novel blockchain sharding system for optimizing account migration. First, Caravan proposes a transaction aggregation-based migration scheme to efficiently handle withdrawal congestion post-migration. It incorporates a multi-level Merkle tree and a cross-shard synchronization protocol to ensure cross-shard security. Second, Caravan presents an economic incentive-driven priority mechanism that motivates miners to perform transaction aggregation and prioritize migration transactions by increasing the associated revenue. Furthermore, its gas recycling strategy enables users to finance migration costs without awareness or extra expenses. Finally, we develop a Caravan prototype, deploy it on Alibaba Cloud, and experiment with real Ethereum transactions. The results show that, compared to state-of-the-art account migration schemes, Caravan significantly mitigates the transaction surge caused by migration, achieving up to a 3.2× throughput improvement and a 65% reduction in transaction confirmation latency. Moreover, users share considerable migration costs without extra expenses, significantly reducing system costs.
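The commitment structure behind transaction aggregation can be pictured with a minimal binary Merkle tree over a batch of aggregated transactions, as sketched below. The paper's multi-level tree and cross-shard synchronization protocol extend this basic idea, and the batch contents here are made up for illustration.

```python
import hashlib

# Minimal binary Merkle tree over a batch of aggregated transactions: the root
# commits the whole batch so a destination shard can verify membership with a
# logarithmic proof.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(tx.encode()) for tx in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

if __name__ == "__main__":
    batch = ["alice->bob:5", "carol->dave:2", "erin->frank:9"]
    print("aggregated batch root:", merkle_root(batch).hex())
```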
{"title":"Caravan: Incentive-Driven Account Migration via Transaction Aggregation in Sharded Blockchain","authors":"Yu Tao;Shouchen Zhou;Lu Zhou;Zhe Liu","doi":"10.1109/TC.2025.3603672","DOIUrl":"https://doi.org/10.1109/TC.2025.3603672","url":null,"abstract":"Blockchain sharding is a promising solution for scalability but struggles to reach the expected performance due to the high ratio of cross-shard transactions. Account migration has emerged as a critical approach to optimizing shard performance. However, existing migration solutions suffer from inefficient handling of queued withdrawal transactions from a migrating account and inadequate priority mechanism for migration transaction, resulting in prolonged transaction makespan and reduced system throughput. This paper proposes Caravan, a novel blockchain sharding system for optimizing account migration. First, Caravan proposes a transaction aggregation-based migration scheme to efficiently handle withdrawal congestion post-migration. It incorporates a multi-level Merkle tree and cross-shard synchronization protocol to ensure cross-shard security. Second, Caravan presents an economic incentive-driven priority mechanism that motivates miners to perform transaction aggregation and prioritize migration transactions by increasing the associated revenue. Furthermore, its gas recycling strategy enables users to finance migration costs without awareness or extra expenses. Finally, we develop the Caravan prototype, deploy it on Alibaba Cloud, and experiment with real Ethereum transactions. The results show that compared to the state-of-the-art account migration schemes, Caravan significantly mitigates the transaction surge caused by migration, achieving up to a 3.2× throughput improvement and a 65% reduction in transaction confirmation latency. And users share considerable migration costs without extra expenses, significantly reduce system costs.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 11","pages":"3609-3622"},"PeriodicalIF":3.8,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145248065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Thermal Elasticity-Aware Host Resource Provision for Carbon Efficiency on Virtualized Servers
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-08-29. DOI: 10.1109/TC.2025.3603698
Da Zhang;Haojun Xia;Xiaotong Wang;Yanchang Feng;Haohao Liu;Bibo Tu
Servers in modern data centers face increasing challenges from energy inefficiency and thermal-related outages, both of which significantly contribute to their overall carbon footprint. These challenges often arise from a lack of coordination between computational resource provisioning and thermal management capabilities. This paper introduces the concept of thermal elasticity, a system’s intrinsic ability to absorb thermal stress without requiring additional cooling, as a guiding metric for sustainable thermal management. Building on this, we propose a collaborative in-band and out-of-band resource provisioning framework that adjusts CPU allocation based on real-time thermal feedback. By leveraging a machine learning model and runtime monitoring, the framework dynamically provisions CPU clusters to virtual machines co-located on the same host. Evaluations on real servers with multiple workloads show that our method reduces peak power consumption by 5.2% to 9.6% and lowers peak temperatures by 4°C to 6.5°C (up to 40°C in extreme cases). Carbon emissions are also reduced by 7% to 37% during SPEC benchmark runs. These results highlight the framework’s potential to alleviate stress on power and cooling infrastructure, thereby enhancing energy efficiency, reducing carbon footprint, and improving service continuity during thermal challenges.
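A stripped-down version of the thermal feedback loop might look like the sketch below, which shrinks a VM's CPU allocation when the host runs hot and grows it back as headroom returns. The temperature thresholds, step sizes, and simulated readings are assumptions; the framework itself drives these decisions with a learned thermal model and in-band/out-of-band telemetry.

```python
# Toy thermal-feedback provisioning loop: shrink a VM's CPU allocation when the
# host runs hot and grow it back as headroom returns. Thresholds and step sizes
# are illustrative assumptions, not the paper's tuned policy.
HOT_C = 80.0        # shrink above this temperature
COOL_C = 70.0       # grow back below this temperature
MIN_CORES, MAX_CORES = 2, 16

def next_allocation(current_cores: int, temperature_c: float) -> int:
    if temperature_c >= HOT_C:
        return max(MIN_CORES, current_cores - 2)
    if temperature_c <= COOL_C:
        return min(MAX_CORES, current_cores + 1)
    return current_cores            # inside the comfort band: hold steady

if __name__ == "__main__":
    cores = 12
    for temp in [65.0, 72.0, 83.0, 86.0, 78.0, 69.0]:   # simulated readings
        cores = next_allocation(cores, temp)
        print(f"temp={temp:.0f}C -> allocate {cores} cores")
```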
{"title":"Thermal Elasticity-Aware Host Resource Provision for Carbon Efficiency on Virtualized Servers","authors":"Da Zhang;Haojun Xia;Xiaotong Wang;Yanchang Feng;Haohao Liu;Bibo Tu","doi":"10.1109/TC.2025.3603698","DOIUrl":"https://doi.org/10.1109/TC.2025.3603698","url":null,"abstract":"Servers in modern data centers face increasing challenges from energy inefficiency and thermal-related outages, both of which significantly contribute to their overall carbon footprint. These challenges often arise from a lack of coordination between computational resource provisioning and thermal management capabilities. This paper introduces the concept of thermal elasticity, a system’s intrinsic ability to absorb thermal stress without requiring additional cooling, as a guiding metric for sustainable thermal management. Building on this, we propose a collaborative in-band and out-of-band resource provisioning framework that adjusts CPU allocation based on real-time thermal feedback. By leveraging a machine learning model and runtime monitoring, the framework dynamically provisions CPU clusters to virtual machines co-located on the same host. Evaluations on real servers with multiple workloads show that our method reduces peak power consumption from 5.2% to 9.6%, and lowers peak temperatures between 4<inline-formula><tex-math>${^{boldsymbol{circ}}}$</tex-math></inline-formula>C and 6.5<inline-formula><tex-math>${^{boldsymbol{circ}}}$</tex-math></inline-formula>C (up to 40<inline-formula><tex-math>${^{boldsymbol{circ}}}$</tex-math></inline-formula>C in extreme cases). Carbon emissions are also reduced from 7% to 37% during SPEC benchmark runs. These results highlight the framework’s potential to alleviate stress on power and cooling infrastructure, thereby enhancing energy efficiency, reducing carbon footprint, and improving service continuity during thermal challenges.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 11","pages":"3682-3695"},"PeriodicalIF":3.8,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145248098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
EABE-PUFPH: Efficient Attribute-Based Encryption With Reliable Policy Updating Under Full Policy Hiding
IF 3.8, CAS Zone 2 (Computer Science), Q2 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-08-29. DOI: 10.1109/TC.2025.3603717
Chenghao Gu;Jiguo Li;Yichen Zhang;Yang Lu;Jian Shen
Ciphertext-policy attribute-based encryption (CP-ABE) has garnered significant attention for enabling fine-grained access control over encrypted data in cloud environments. However, in traditional CP-ABE schemes, access policies are transmitted in plaintext, which can leak sensitive information. To mitigate this risk, hiding access policies has become essential. With access policies fully hidden, however, realizing efficient and accurate decryption and dynamic policy updating becomes an urgent challenge. To tackle these challenges, we present an efficient attribute-based encryption with reliable policy updating under full policy hiding (EABE-PUFPH) scheme, which effectively integrates full policy hiding with policy updating capabilities. Furthermore, we conduct a rigorous security analysis and performance evaluation of the EABE-PUFPH scheme. Evaluation results show that the EABE-PUFPH scheme achieves fully hidden access policies without affecting decryption efficiency, and its efficiency surpasses that of other schemes that achieve full policy hiding.
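Purely to illustrate what "hiding" a policy means, the toy sketch below stores an AND policy as salted digests of attribute strings and matches a user's attributes against those digests. This is not the paper's pairing-based CP-ABE construction and provides none of its guarantees (salted digests of low-entropy attributes remain guessable); it only shows the general shape of matching against commitments instead of cleartext attributes.

```python
import hashlib
import os

# Conceptual illustration only: an AND access policy stored as salted digests so
# the policy itself does not reveal which attributes it requires in cleartext.
def commit(attr: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + attr.encode()).digest()

def hide_policy(required_attrs):
    salt = os.urandom(16)
    return salt, {commit(a, salt) for a in required_attrs}

def satisfies(hidden_policy, user_attrs) -> bool:
    salt, digests = hidden_policy
    return digests <= {commit(a, salt) for a in user_attrs}

if __name__ == "__main__":
    policy = hide_policy({"dept:cardiology", "role:physician"})
    print(satisfies(policy, {"role:physician", "dept:cardiology", "seniority:5"}))  # True
    print(satisfies(policy, {"role:nurse", "dept:cardiology"}))                     # False
```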
{"title":"EABE-PUFPH: Efficient Attribute-Based Encryption With Reliable Policy Updating Under Full Policy Hiding","authors":"Chenghao Gu;Jiguo Li;Yichen Zhang;Yang Lu;Jian Shen","doi":"10.1109/TC.2025.3603717","DOIUrl":"https://doi.org/10.1109/TC.2025.3603717","url":null,"abstract":"Ciphertext-policy attribute-based encryption (CP-ABE) has garnered significant attention for enabling fine-grained access control over encrypted data in cloud environments. However, in traditional CP-ABE schemes, access policies are transmitted in plaintext, which can lead to sensitive information leakage. To mitigate this risk, hiding access policies has become essential. Under the condition of full hidden access policies, realizing efficient and accurate decryption and dynamic policy updating has become an urgent challenge. To tackle these challenges, we present an efficient attribute-based encryption with reliable policy updating under full policy hiding (EABE-PUFPH) scheme, which effectively integrates full policy hiding with policy updating capabilities. Furthermore, we conduct a rigorous security analysis and performance evaluation of the EABE-PUFPH scheme. Evaluation results show that the EABE-PUFPH scheme achieves full hidden access policies without affecting decryption efficiency, and its efficiency surpasses other similar schemes that achieve full policy hiding.","PeriodicalId":13087,"journal":{"name":"IEEE Transactions on Computers","volume":"74 11","pages":"3750-3762"},"PeriodicalIF":3.8,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145248087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0