
Latest articles in the Journal of Computer Science and Technology

Leveraging index compression techniques to optimize the use of co-processors
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-04-22 · DOI: 10.24215/16666038.24.e01
Manuel Freire, Raúl Marichal, Agustin Martinez, Daniel Padron, E. Dufrechou, P. Ezzatti
The ubiquity of many-core devices such as GPUs, together with their enormous computational power, motivates the study of sparse matrix operations on this hardware. The essential sparse kernels in scientific computing, such as sparse matrix-vector multiplication (SpMV), usually have many different high-performance GPU implementations. Sparse matrix problems typically involve memory-bound operations, a characteristic that is particularly limiting on massively parallel processors. This work revisits the main ideas for reducing the volume of data required by sparse storage formats and advances the understanding of several compression techniques. In particular, we study the use of index compression combined with sparse matrix reordering techniques in CSR and explore other approaches using a blocked format. A systematic experimental evaluation on a large set of real-world matrices confirms that this approach achieves meaningful reductions in data storage. Additionally, we find promising results regarding the impact of the storage reduction on execution time when using accelerators to perform the mathematical kernels.
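To make the index-compression idea concrete, the sketch below delta-encodes the column indices of each CSR row and narrows them to the smallest unsigned integer type that fits. The helper name and the width-selection rule are illustrative assumptions, not the paper's actual storage format.

```python
import numpy as np

def compress_csr_indices(col_idx, row_ptr):
    # Delta-encode column indices within each CSR row, then pick the
    # narrowest unsigned type that holds every delta. Hypothetical
    # helper for illustration; not the paper's exact scheme.
    deltas = col_idx.astype(np.int64)
    for r in range(len(row_ptr) - 1):
        start, end = row_ptr[r], row_ptr[r + 1]
        if end - start > 1:
            deltas[start + 1:end] = np.diff(col_idx[start:end])
    for dtype in (np.uint8, np.uint16, np.uint32):
        if deltas.max(initial=0) <= np.iinfo(dtype).max:
            return deltas.astype(dtype)
    return deltas

# Column indices within a row are ascending, so per-row deltas stay small.
col_idx = np.array([0, 1, 3, 2, 3], dtype=np.int64)
row_ptr = np.array([0, 3, 5])
compressed = compress_csr_indices(col_idx, row_ptr)
print(compressed, compressed.dtype)
```

Because each 32- or 64-bit index shrinks to one byte here, the memory traffic of a memory-bound kernel like SpMV drops accordingly, which is the effect the paper measures on accelerators.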
Citations: 0
Graph Representations for Reinforcement Learning
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-04-22 · DOI: 10.24215/16666038.24.e03
Esteban Schab, Carla Casanova, Fabiana Piccoli
Graph analysis is becoming increasingly important due to the expressive power of graph models and the efficient algorithms available for processing them. Reinforcement Learning is one domain that could benefit from advances in graph analysis, given that a learning agent may be embedded in an environment that can be represented as a graph. Nevertheless, the structural irregularity of graphs and the lack of prior labels make it difficult to integrate such a model into modern Reinforcement Learning frameworks that rely on artificial neural networks. Graph embedding enables the learning of low-dimensional vector representations that are better suited to machine learning algorithms while retaining essential graph features. This paper presents a framework for evaluating graph embedding algorithms and their ability to preserve the structure and relevant features of graphs by means of an internal validation metric, without resorting to downstream tasks that require labels for training. Based on this framework, three algorithms that meet the requirements for solving a specific Reinforcement Learning problem on graphs are selected, analyzed, and compared. These algorithms are Graph2Vec, GL2Vec, and Wavelet Characteristics, with the latter two demonstrating superior performance.
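Graph2Vec and GL2Vec are built on Weisfeiler-Lehman (WL) subtree features of a whole graph. The toy sketch below hashes iteratively refined WL labels into a fixed-size histogram; it is a minimal stand-in under that assumption, since the real methods additionally train a doc2vec model over such features.

```python
def wl_embedding(adj, labels, iterations=2, dim=16):
    # Hash Weisfeiler-Lehman subtree labels into a fixed-size histogram.
    # Toy stand-in for the WL features Graph2Vec/GL2Vec build on; the
    # real algorithms train a doc2vec model over these features.
    vec = [0.0] * dim
    cur = dict(labels)
    for _ in range(iterations + 1):
        for lab in cur.values():
            vec[hash(lab) % dim] += 1.0
        # WL relabeling: combine each node's label with its sorted neighborhood
        cur = {v: str((cur[v], tuple(sorted(cur[u] for u in adj[v]))))
               for v in adj}
    norm = sum(x * x for x in vec) ** 0.5
    return [x / norm for x in vec] if norm else vec

# Star graph: center labeled "a", two leaves labeled "b".
adj = {0: [1, 2], 1: [0], 2: [0]}
emb = wl_embedding(adj, {0: "a", 1: "b", 2: "b"})
```

The resulting unit vector is a label-free, fixed-dimension representation, which is what allows an internal validation metric to compare embeddings without a downstream supervised task.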
Citations: 0
CFP: A Coherence-Free Processor Design
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-023-3964-5
Franklin Yang

This paper presents the design of a Coherence-Free Processor (CFP) that enables a scalable multiprocessor by eliminating cache coherence operations in both hardware and software. The CFP uses a coherence-free cache (CFC) that can improve the cost-effectiveness and performance-effectiveness of the existing multiprocessors for commonly used workloads. The CFC is feasible because not all program data that reside in a multiprocessor cache need to be accessed by other processors, and private caches at level 1 (L1) and level 2 (L2) facilitate this method of sharing. Reentrant programs are specifically designed to protect their data from modification by other tasks. Program data that are modified but not shared with other tasks do not require a coherence protocol. Adding processors reduces the multitasking queue, reducing elapsed time. Simultaneous execution replaces concurrent execution.
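The CFC's premise is that only lines touched by more than one processor ever need coherence. A minimal sketch of that classification over an access trace, with made-up trace and function names purely for illustration:

```python
def classify_lines(accesses):
    # Map each cache line to the set of processors that touch it.
    # Lines with a single owner need no coherence traffic, which is
    # the observation the coherence-free cache (CFC) relies on.
    owners = {}
    for cpu, line in accesses:
        owners.setdefault(line, set()).add(cpu)
    return {line: ("private" if len(cpus) == 1 else "shared")
            for line, cpus in owners.items()}

# CPU 0 and CPU 1 share only line 0x140 in this toy trace.
trace = [(0, 0x100), (0, 0x140), (1, 0x140), (1, 0x180)]
kinds = classify_lines(trace)
print(kinds)
```

In this toy trace only one of three lines is shared, illustrating why modified-but-unshared data, such as a reentrant program's private state, can live in L1/L2 without any coherence protocol.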

Citations: 0
An Online Algorithm Based on Replication for Using Spot Instances in IaaS Clouds
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-023-1535-4
Zhi-Wei Xu, Li Pan, Shi-Jun Liu

Infrastructure-as-a-Service (IaaS) cloud platforms offer resources with diverse purchasing options. Users can run an instance on the on-demand market, which is stable but expensive, or on the spot market at a significant discount. However, users have to weigh the low cost of spot instances against their poor availability, since a spot instance is revoked whenever a revocation event occurs. Thus, an important problem an IaaS user faces is how to use spot instances in a cost-effective, low-risk way. Based on a replication-based fault-tolerance mechanism, we propose an online termination algorithm that optimizes the cost of using spot instances while ensuring operational stability. We prove that in most cases the cost of our online algorithm does not exceed twice the minimum cost of the optimal offline algorithm that knows the exact future a priori. Through a large number of experiments, we verify that our algorithm has a competitive ratio of no more than 2 in most cases and also reaches the guaranteed competitive ratio in the remaining ones.
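The 2-competitive bound has the flavor of the classical ski-rental (rent-or-buy) rule: keep paying the cheap spot price until the cumulative spend reaches the on-demand price, then switch. The sketch below shows that break-even rule only; it is a simplified stand-in, not the paper's replication-based termination algorithm, and the prices are invented.

```python
def online_spot_schedule(spot_prices, on_demand_price):
    # Ski-rental-style rule: stay on the spot market until cumulative
    # spot spend would exceed the on-demand price, then buy on-demand.
    # Simplified stand-in for the paper's online termination algorithm.
    spent = 0.0
    for hour, price in enumerate(spot_prices):
        if spent + price > on_demand_price:
            return hour, spent + on_demand_price  # switch point, total cost
        spent += price
    return len(spot_prices), spent  # job finished on spot alone

# Hourly spot prices (invented) vs. a 1.0 on-demand price for the job.
switch_at, total = online_spot_schedule([0.3, 0.3, 0.3, 0.3], 1.0)
print(switch_at, total)
```

Here the rule switches after three spot hours at a total cost of 1.9, under twice the 1.0 an omniscient offline scheduler would have paid by buying on-demand up front, which is exactly the shape of the 2-competitive guarantee.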

Citations: 0
4D-MAP: Multipath Adaptive Packet Scheduling for Live Streaming over QUIC
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-023-3204-z
Cong-Xi Song, Biao Han, Jin-Shu Su

In recent years, live streaming has become a popular application, one that uses TCP as its primary transport protocol. The Quick UDP Internet Connections (QUIC) protocol opens up new opportunities for live streaming, but how to leverage QUIC to transmit live video had not yet been studied. This paper first investigates the achievable quality of experience (QoE) of streaming live videos over TCP, QUIC, and their multipath extensions, Multipath TCP (MPTCP) and Multipath QUIC (MPQUIC). We observe that MPQUIC achieves the best performance thanks to bandwidth aggregation and transmission reliability. However, network fluctuations may cause heterogeneous paths, high path loss, and bandwidth degradation, resulting in significant QoE deterioration. Motivated by these observations, we investigate the multipath packet scheduling problem in live streaming and design 4D-MAP, a multipath adaptive packet scheduling scheme over QUIC. Specifically, a linear upper confidence bound (LinUCB)-based online learning algorithm, along with four novel scheduling mechanisms, namely Dispatch, Duplicate, Discard, and Decompensate, is proposed to address these problems. 4D-MAP has been evaluated in both controlled emulation and real-world networks for comparison with state-of-the-art multipath transmission schemes. Experimental results reveal that 4D-MAP outperforms the others in improving the QoE of live streaming.
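The LinUCB component can be sketched generically: each path is an arm, scored by a ridge-regression reward estimate plus an exploration bonus. This is textbook LinUCB under an invented two-path setup; 4D-MAP's actual feature design and reward signal are not reproduced here.

```python
import numpy as np

def linucb_pick(contexts, A_list, b_list, alpha=1.0):
    # One LinUCB step: score each arm (path) by predicted reward plus
    # an exploration bonus, and pick the highest scorer.
    scores = []
    for x, A, b in zip(contexts, A_list, b_list):
        A_inv = np.linalg.inv(A)
        theta = A_inv @ b                          # ridge-regression estimate
        scores.append(theta @ x + alpha * np.sqrt(x @ A_inv @ x))
    return int(np.argmax(scores))

d = 3                                              # path-feature dimension (invented)
A_list = [np.eye(d) for _ in range(2)]             # per-arm design matrices
b_list = [np.zeros(d), np.array([1.0, 0.0, 0.0])]  # arm 1 has observed reward
arm = linucb_pick([np.ones(d), np.ones(d)], A_list, b_list)
print(arm)
```

After sending a packet, the scheduler would update the chosen arm's `A` with the outer product of its context and its `b` with the observed reward, so path quality estimates track network fluctuations online.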

Citations: 0
Approximate Similarity-Aware Compression for Non-Volatile Main Memory
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-023-2565-7
Zhang-Yu Chen, Yu Hua, Peng-Fei Zuo, Yuan-Yuan Sun, Yun-Cheng Guo

Image bitmaps, i.e., data containing pixels and visual perception, are widely used in emerging applications for pixel operations while consuming large amounts of memory space and energy. Compared with legacy DRAM (dynamic random access memory), non-volatile memories (NVMs) are suitable for bitmap storage due to their high density and intrinsic durability. However, writing to NVMs incurs higher energy consumption and latency than reading. Existing precise or approximate compression schemes in NVM controllers show limited performance for bitmaps due to their irregular data patterns and variance. We observe pixel-level similarity when writing bitmaps, owing to the analogous contents of adjacent pixels. Exploiting this pixel-level similarity, we propose SimCom, an approximate similarity-aware compression scheme in the NVM module controller, to efficiently compress data on-the-fly for each write access. The idea behind SimCom is to compress runs of consecutive similar words into pairs of a base word and a run length. The storage cost of small runs is further mitigated by reusing the least significant bits of base words. SimCom adaptively selects an appropriate compression mode for various bitmap formats, thus achieving an efficient trade-off between quality and memory performance. We implement SimCom on GEM5/zsim with NVMain and evaluate its performance with real-world image/video workloads. Our results demonstrate the efficacy and efficiency of SimCom with an efficient quality-performance trade-off.
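The base-word-plus-run idea can be shown in a few lines: words that agree outside a low-order "similarity mask" are collapsed into one (base word, run length) pair. This is a greatly simplified sketch with an invented mask; real SimCom selects among compression modes adaptively and reuses base-word low bits for small runs.

```python
def compress_words(words, mask=0xFF):
    # Collapse runs of words that agree outside the low byte into
    # (base_word, run_length) pairs -- the pixel-similarity idea
    # behind SimCom, heavily simplified.
    pairs = []
    for w in words:
        if pairs and (pairs[-1][0] & ~mask) == (w & ~mask):
            pairs[-1][1] += 1                       # extend the current run
        else:
            pairs.append([w, 1])                    # start a new run
    return pairs

# Adjacent pixels that differ only in the low byte compress well.
pixels = [0xAABB01, 0xAABB02, 0xAABB03, 0xCCDD10]
compressed = compress_words(pixels)
print(compressed)
```

Four words shrink to two pairs here; since decompression replays the base word for the whole run, the low-byte differences are dropped, which is where the approximate (quality vs. write-cost) trade-off comes from.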

Citations: 0
Identity-Preserving Adversarial Training for Robust Network Embedding
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-023-2256-4
Ke-Ting Cen, Hua-Wei Shen, Qi Cao, Bing-Bing Xu, Xue-Qi Cheng

Network embedding, an approach to learning low-dimensional representations of nodes, has proved extremely useful in many applications, e.g., node classification and link prediction. Unfortunately, existing network embedding models are vulnerable to random or adversarial perturbations, which may degrade their performance on downstream tasks. To achieve robust network embedding, researchers have introduced adversarial training, which regularizes the embedding learning process by training on a mixture of adversarial and original examples. However, existing methods generate adversarial examples heuristically and cannot guarantee their imperceptibility, thus limiting the power of adversarial training. In this paper, we propose Identity-Preserving Adversarial Training (IPAT) for network embedding, a novel method that generates imperceptible adversarial examples under an explicit identity-preserving regularization. We formalize this regularization as a multi-class classification problem in which each node represents a class, and we encourage each adversarial example to be classified as the class of its original node. Extensive experimental results on real-world datasets demonstrate that IPAT significantly improves the robustness of network embedding models and the generalization of the learned node representations on various downstream tasks.
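The identity-preserving term amounts to a softmax cross-entropy over node identities: the adversarial embedding should still classify as its own node. A minimal numpy sketch of that loss shape, with an invented identity classifier `W` (the paper's actual parameterization is not reproduced):

```python
import numpy as np

def identity_loss(adv_emb, W, node_id):
    # Softmax-classify the adversarial embedding over all nodes and
    # penalize deviation from its original node's class. Minimal
    # sketch of the identity-preserving regularizer's shape.
    logits = W @ adv_emb
    logits = logits - logits.max()            # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[node_id])

W = np.eye(4)                                 # toy classifier: one row (class) per node
adv_emb = np.array([5.0, 0.0, 0.0, 0.0])      # perturbed embedding of node 0
loss_same = identity_loss(adv_emb, W, 0)      # low: still looks like node 0
loss_diff = identity_loss(adv_emb, W, 1)      # high: identity drifted
```

Adding `loss_same` to the training objective penalizes perturbations large enough to change a node's apparent identity, which is what keeps the generated adversarial examples imperceptible.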

Citations: 0
Federated Dynamic Client Selection for Fairness Guarantee in Heterogeneous Edge Computing
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-023-2972-9
Ying-Chi Mao, Li-Juan Shen, Jun Wu, Ping Ping, Jie Wu

Federated learning has emerged as a distributed learning paradigm in which training happens at each client and aggregation at a parameter server. System heterogeneity produces stragglers that cannot respond to the server in time, incurring huge communication costs. Although client grouping in federated learning can solve the straggler problem, stochastic selection within groups neglects the data distribution of each group. Moreover, current client grouping approaches subject clients to unfair participation, leading to biased performance across clients. To guarantee fairness of client participation and mitigate biased local performance, we propose FedSDR, a federated dynamic client selection method based on data representativity. FedSDR clusters clients into groups according to their local computational efficiency. To estimate the significance of client datasets, we design a novel data-representativity evaluation scheme based on the local data distribution. Furthermore, the two most representative clients in each group are selected to optimize the global model. Finally, the DYNAMIC-SELECT algorithm updates local computational efficiency and data representativity states to regroup clients after each periodic average aggregation. Evaluations on real datasets show that FedSDR improves client participation by 27.4%, 37.9%, and 23.3% compared with FedAvg, TiFL, and FedSS, respectively, while taking fairness into account. In addition, FedSDR surpasses FedAvg, FedGS, and FedMS by 21.32%, 20.4%, and 6.90%, respectively, in local test accuracy variance, balancing the performance bias of the global model across clients.
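One plausible reading of per-group representative selection: rank each group's clients by how close their local label distribution is to the group average and keep the top two. The L1 distance and all names below are illustrative assumptions, not FedSDR's published representativity score.

```python
import numpy as np

def pick_representatives(label_dists, groups, k=2):
    # Pick the k clients per group whose local label distribution is
    # closest (L1) to the group average -- a simplified stand-in for
    # FedSDR's data-representativity evaluation.
    chosen = {}
    for gid, members in groups.items():
        avg = np.mean([label_dists[m] for m in members], axis=0)
        ranked = sorted(members,
                        key=lambda m: np.abs(label_dists[m] - avg).sum())
        chosen[gid] = ranked[:k]
    return chosen

# Toy 2-class label distributions; group assignment would come from
# clustering by computational efficiency.
label_dists = {
    0: np.array([0.5, 0.5]), 1: np.array([0.9, 0.1]),
    2: np.array([0.55, 0.45]), 3: np.array([0.1, 0.9]),
}
groups = {"fast": [0, 1, 2, 3]}
reps = pick_representatives(label_dists, groups)
```

Clients 0 and 2 win here because their data mirrors the group's aggregate distribution, so training on them biases the global model less than a uniformly random pick would.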

{"title":"Federated Dynamic Client Selection for Fairness Guarantee in Heterogeneous Edge Computing","authors":"Ying-Chi Mao, Li-Juan Shen, Jun Wu, Ping Ping, Jie Wu","doi":"10.1007/s11390-023-2972-9","DOIUrl":"https://doi.org/10.1007/s11390-023-2972-9","url":null,"abstract":"<p>Federated learning has emerged as a distributed learning paradigm by training at each client and aggregating at a parameter server. System heterogeneity hinders stragglers from responding to the server in time with huge communication costs. Although client grouping in federated learning can solve the straggler problem, the stochastic selection strategy in client grouping neglects the impact of data distribution within each group. Besides, current client grouping approaches make clients suffer unfair participation, leading to biased performances for different clients. In order to guarantee the fairness of client participation and mitigate biased local performances, we propose a federated dynamic client selection method based on data representativity (FedSDR). FedSDR clusters clients into groups correlated with their own local computational efficiency. To estimate the significance of client datasets, we design a novel data representativity evaluation scheme based on local data distribution. Furthermore, the two most representative clients in each group are selected to optimize the global model. Finally, the DYNAMIC-SELECT algorithm updates local computational efficiency and data representativity states to regroup clients after periodic average aggregation. Evaluations on real datasets show that FedSDR improves client participation by 27.4%, 37.9%, and 23.3% compared with FedAvg, TiFL, and FedSS, respectively, taking fairness into account in federated learning. 
In addition, FedSDR surpasses FedAvg, FedGS, and FedMS by 21.32%, 20.4%, and 6.90%, respectively, in local test accuracy variance, balancing the performance bias of the global model across clients.</p>","PeriodicalId":50222,"journal":{"name":"Journal of Computer Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.9,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140581938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
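The selection pipeline the FedSDR abstract describes — cluster clients into groups correlated with computational efficiency, score each client's data representativity against the global distribution, and pick the two most representative clients per group — can be sketched in a few lines. This is a toy illustration under assumed data shapes; the scoring rule (1 minus total variation distance) and all names are stand-ins, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Client:
    cid: int
    speed: float       # proxy for local computational efficiency
    label_hist: list   # per-class sample counts of the local dataset

def representativity(client, global_hist):
    # 1 - total variation distance between the client's label
    # distribution and the global one (a stand-in scoring rule,
    # not the paper's exact data-representativity measure)
    n_c, n_g = sum(client.label_hist), sum(global_hist)
    tv = 0.5 * sum(abs(c / n_c - g / n_g)
                   for c, g in zip(client.label_hist, global_hist))
    return 1.0 - tv

def select_clients(clients, n_groups=2, per_group=2):
    # 1) cluster clients into groups correlated with computational
    #    efficiency (here: equal-size buckets after sorting by speed);
    # 2) pick the `per_group` most representative clients per group.
    global_hist = [sum(h) for h in zip(*(c.label_hist for c in clients))]
    ranked = sorted(clients, key=lambda c: c.speed)
    size = -(-len(ranked) // n_groups)  # ceiling division
    chosen = []
    for g in range(n_groups):
        group = sorted(ranked[g * size:(g + 1) * size],
                       key=lambda c: representativity(c, global_hist),
                       reverse=True)
        chosen.extend(group[:per_group])
    return chosen
```

Periodic regrouping after average aggregation (the DYNAMIC-SELECT step) would simply re-run `select_clients` with refreshed `speed` and `label_hist` estimates.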
Citations: 0
Online Nonstop Task Management for Storm-Based Distributed Stream Processing Engines
IF 1.9 CAS Tier 3 (Computer Science) Q2 Computer Science Pub Date : 2024-01-30 DOI: 10.1007/s11390-021-1629-9
Zhou Zhang, Pei-Quan Jin, Xi-Ke Xie, Xiao-Liang Wang, Rui-Cheng Liu, Shou-Hong Wan

Most distributed stream processing engines (DSPEs) do not support online task management and cannot adapt to time-varying data flows. Recently, some studies have proposed online task deployment algorithms to solve this problem. However, these approaches do not guarantee the Quality of Service (QoS) when the task deployment changes at runtime, because the task migrations caused by the change of task deployments will impose an exorbitant cost. We study one of the most popular DSPEs, Apache Storm, and find out that when a task needs to be migrated, Storm has to stop the resource (implemented as a process of Worker in Storm) where the task is deployed. This will lead to the stop and restart of all tasks in the resource, resulting in the poor performance of task migrations. Aiming to solve this problem, in this paper, we propose N-Storm (Nonstop Storm), which is a task-resource decoupling DSPE. N-Storm allows tasks allocated to resources to be changed at runtime, which is implemented by a thread-level scheme for task migrations. Particularly, we add a local shared key/value store on each node to make resources aware of the changes in the allocation plan. Thus, each resource can manage its tasks at runtime. Based on N-Storm, we further propose Online Task Deployment (OTD). Differing from traditional task deployment algorithms that deploy all tasks at once without considering the cost of task migrations caused by a task re-deployment, OTD can gradually adjust the current task deployment to an optimized one based on the communication cost and the runtime states of resources. We demonstrate that OTD can adapt to different kinds of applications including computation- and communication-intensive applications. The experimental results on a real DSPE cluster show that N-Storm can avoid the system stop and save up to 87% of the performance degradation time, compared with Apache Storm and other state-of-the-art approaches. 
In addition, OTD can increase the average CPU usage by 51% for computation-intensive applications and reduce network communication costs by 88% for communication-intensive applications.
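The thread-level migration idea above — stop only the migrated task's thread and let the worker process keep running, rather than restarting the whole Worker as stock Storm does — can be modeled in a short sketch. This is a toy reconciliation loop under assumptions: `alloc_store` stands in for the per-node shared key/value store, and none of these names reflect Storm's or N-Storm's actual API:

```python
import threading

class Worker:
    """Toy model of a task-resource-decoupled worker: the process stays
    up while individual task threads are started or stopped to follow
    the shared allocation map."""

    def __init__(self, worker_id, alloc_store):
        self.worker_id = worker_id
        self.alloc_store = alloc_store   # stand-in for the per-node shared K/V store
        self.running = {}                # task_id -> stop Event for that task's thread

    def reconcile(self):
        """Apply the current allocation plan: start newly assigned tasks,
        stop migrated-away ones. Only the migrated task's thread is
        signalled; the worker process itself never restarts."""
        assigned = {t for t, w in self.alloc_store.items() if w == self.worker_id}
        for task in assigned - self.running.keys():
            stop = threading.Event()
            threading.Thread(target=self._run_task, args=(task, stop),
                             daemon=True).start()
            self.running[task] = stop
        for task in set(self.running) - assigned:
            self.running.pop(task).set()

    def _run_task(self, task, stop):
        while not stop.wait(0.01):       # placeholder for per-tuple processing
            pass
```

An OTD-style scheduler would then mutate `alloc_store` incrementally and call `reconcile` on each node, so each adjustment migrates only the tasks whose assignment actually changed.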

Citations: 0
Minimal Context-Switching Data Race Detection with Dataflow Tracking
IF 1.9 CAS Tier 3 (Computer Science) Q2 Computer Science Pub Date : 2024-01-30 DOI: 10.1007/s11390-023-1569-7
Long Zheng, Yang Li, Jie Xin, Hai-Feng Liu, Ran Zheng, Xiao-Fei Liao, Hai Jin

Data race is one of the most important concurrent anomalies in multi-threaded programs. Emerging constraint-based techniques are leveraged into race detection, which is able to find all the races that can be found by any other sound race detector. However, this constraint-based approach has serious limitations on helping programmers analyze and understand data races. First, it may report a large number of false positives due to the unrecognized dataflow propagation of the program. Second, it recommends a wide range of thread context switches to schedule the reported race (including the false one) whenever this race is exposed during the constraint-solving process. This ad hoc recommendation imposes too many context switches, which complicates the data race analysis. To address these two limitations in the state-of-the-art constraint-based race detection, this paper proposes DFTracker, an improved constraint-based race detector to recommend each data race with minimal thread context switches. Specifically, we reduce the false positives by analyzing and tracking the dataflow in the program. By this means, DFTracker thus reduces the unnecessary analysis of false race schedules. We further propose a novel algorithm to recommend an effective race schedule with minimal thread context switches for each data race. Our experimental results on the real applications demonstrate that 1) without removing any true data race, DFTracker effectively prunes false positives by 68% in comparison with the state-of-the-art constraint-based race detector; 2) DFTracker recommends as low as 2.6–8.3 (4.7 on average) thread context switches per data race in the real world, which is 81.6% fewer context switches per data race than the state-of-the-art constraint-based race detector. Therefore, DFTracker can be used as an effective tool to understand the data race for programmers.
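The "minimal thread context switches" criterion in the abstract is easy to illustrate: given several interleavings that expose the same race, count the switches in each and recommend the smallest. The sketch below assumes a schedule is a list of `(thread_id, event)` pairs; it illustrates only the selection objective, not DFTracker's constraint-solving or dataflow-tracking machinery:

```python
def context_switches(schedule):
    # schedule: list of (thread_id, event) pairs describing one interleaving;
    # a switch occurs whenever two adjacent events run on different threads
    return sum(1 for a, b in zip(schedule, schedule[1:]) if a[0] != b[0])

def recommend(schedules):
    # among interleavings exposing the same race, prefer the one a
    # programmer can replay with the fewest thread context switches
    return min(schedules, key=context_switches)
```

For example, a two-event interleaving that exposes the race directly would be preferred over a longer one that interleaves unrelated events first.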

Citations: 0