
Journal of Computer Science and Technology: Latest Publications

VPI: Vehicle Programming Interface for Vehicle Computing
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-024-4035-2
Bao-Fu Wu, Ren Zhong, Yuxin Wang, Jian Wan, Ji-Lin Zhang, Weisong Shi

The emergence of software-defined vehicles (SDVs), combined with autonomous driving technologies, has enabled a new era of vehicle computing (VC), where vehicles serve as a mobile computing platform. However, the interdisciplinary complexities of automotive systems and diverse technological requirements make developing applications for autonomous vehicles challenging. To simplify the development of applications running on SDVs, we propose a comprehensive suite of vehicle programming interfaces (VPIs). In this study, we rigorously explore the nuanced requirements for application development within the realm of VC, centering our analysis on the architectural intricacies of the Open Vehicular Data Analytics Platform (OpenVDAP). We then detail our creation of a comprehensive suite of standardized VPIs, spanning five critical categories: Hardware, Data, Computation, Service, and Management, to address these evolving programming requirements. To validate the design of the VPIs, we conduct experiments using the indoor autonomous vehicle Zebra and develop the OpenVDAP prototype system. By comparing it with the industry-influential AUTOSAR interface, our VPIs demonstrate significant enhancements in programming efficiency, marking an important advancement in the field of SDV application development. We also present a case study and evaluate its performance. Our work highlights that VPIs significantly enhance the efficiency of developing applications for VC. They meet both current and future technological demands and propel the software-defined automotive industry toward a more interconnected and intelligent future.
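The abstract groups the proposed interfaces into five categories: Hardware, Data, Computation, Service, and Management. As a rough illustration of that organizing idea only (not the actual OpenVDAP/VPI API; every class, method, and interface name below is hypothetical), a registry keyed by category might look like:

```python
# Hypothetical sketch of the five VPI categories named in the abstract.
# Class, method, and interface names are illustrative assumptions.

class VehicleAPI:
    """Minimal illustration of grouping vehicle interfaces by category."""

    CATEGORIES = ("Hardware", "Data", "Computation", "Service", "Management")

    def __init__(self):
        # category -> {interface name: callable}
        self.registry = {c: {} for c in self.CATEGORIES}

    def register(self, category, name, fn):
        if category not in self.registry:
            raise ValueError(f"unknown VPI category: {category}")
        self.registry[category][name] = fn

    def call(self, category, name, *args, **kwargs):
        return self.registry[category][name](*args, **kwargs)

api = VehicleAPI()
api.register("Hardware", "read_lidar", lambda: [0.5, 1.2, 3.4])  # stub sensor
print(api.call("Hardware", "read_lidar"))
```

The point of such a grouping is that an application developer addresses a stable category/name pair rather than a vendor-specific device driver.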

Citations: 0
10-Million Atoms Simulation of First-Principle Package LS3DF
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-023-3011-6
Yu-Jin Yan, Hai-Bo Li, Tong Zhao, Lin-Wang Wang, Lin Shi, Tao Liu, Guang-Ming Tan, Wei-Le Jia, Ning-Hui Sun

The growing demand for semiconductor device simulation poses a major challenge for large-scale electronic structure calculations. Among various methods, the linearly scaling three-dimensional fragment (LS3DF) method exhibits excellent scalability in large-scale simulations. Based on algorithmic and system-level optimizations, we propose a highly scalable and highly efficient implementation of LS3DF on a domestic heterogeneous supercomputer equipped with accelerators. In terms of algorithmic optimizations, the original all-band conjugate gradient algorithm is refined to achieve faster convergence, and mixed-precision computing is adopted to increase overall efficiency. In terms of system-level optimizations, the original two-layer parallel structure is replaced by a coarse-grained parallel method. Optimization strategies such as multi-stream execution, kernel fusion, and redundant computation removal are proposed to further utilize the computational power provided by the heterogeneous machines. As a result, our optimized LS3DF can scale to a 10-million-atom silicon system, attaining a peak performance of 34.8 PFLOPS (21.2% of the peak). All the improvements can be adapted to next-generation supercomputers for larger simulations.

Citations: 0
SMEC: Scene Mining for E-Commerce
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-021-1277-0
Gang Wang, Xiang Li, Zi-Yi Guo, Da-Wei Yin, Shuai Ma

Scene-based recommendation has proven its usefulness in E-commerce, by recommending commodities based on a given scene. However, scenes are typically unknown in advance, which necessitates scene discovery for E-commerce. In this article, we study scene discovery for E-commerce systems. We first formalize a scene as a set of commodity categories that occur simultaneously and frequently in real-world situations, and model an E-commerce platform as a heterogeneous information network (HIN), whose nodes and links represent different types of objects and different types of relationships between objects, respectively. We then formulate the scene mining problem for E-commerce as an unsupervised learning problem that finds the overlapping clusters of commodity categories in the HIN. To solve the problem, we propose a non-negative matrix factorization-based method, SMEC (Scene Mining for E-Commerce), and theoretically prove its convergence. Using six real-world E-commerce datasets, we finally conduct an extensive experimental study to evaluate SMEC against 13 other methods, and show that SMEC consistently outperforms its competitors with regard to various evaluation measures.
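As background for the factorization step, here is a minimal, generic multiplicative-update NMF sketch on a toy category co-occurrence matrix. It illustrates the kind of non-negative factorization SMEC builds on; it is not the SMEC algorithm itself, which adds HIN-specific modeling and a convergence proof. The toy matrix and rank are invented for illustration.

```python
# Generic multiplicative-update NMF: V ~ W @ H with all factors non-negative.
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, steps=200, eps=1e-9):
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() for _ in range(k)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(k)]
    for _ in range(steps):
        # W <- W * (V H^T) / (W H H^T)
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
        # H <- H * (W^T V) / (W^T W H)
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
    return W, H

# Toy 4-category co-occurrence matrix with two overlapping "scenes".
V = [[3, 3, 0, 0],
     [3, 3, 1, 0],
     [0, 1, 2, 2],
     [0, 0, 2, 2]]
W, H = nmf(V, k=2)
approx = matmul(W, H)
err = sum((V[i][j] - approx[i][j]) ** 2 for i in range(4) for j in range(4))
print(round(err, 3))
```

Rows of H can then be read as (overlapping) soft cluster memberships of the categories.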

Citations: 0
DIR: Dynamic Request Interleaving for Improving the Read Performance of Aged Solid-State Drives
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-023-1601-y
Shi-Qiang Nie, Chi Zhang, Wei-Guo Wu

Triple-level cell (TLC) NAND flash is increasingly adopted to build solid-state drives (SSDs) for modern computer systems. While TLC NAND flash effectively improves storage density, it faces severe reliability issues; in particular, the pages exhibit different raw bit error rates (RBERs). Integrating a strong low-density parity-check (LDPC) code helps to improve reliability but suffers from prolonged, proportional read latency due to multiple read retries for worse pages. The straightforward idea is that dispersing page-size data across several pages of different types can achieve a lower average RBER and reduce the read latency. However, directly implementing this simple idea in the flash translation layer (FTL) induces the read amplification issue, as one logical page residing in more than one physical page requires several read operations. In this paper, we propose the Dynamic Request Interleaving (DIR) technique for improving the performance of TLC NAND flash-based SSDs, in particular, aged ones with large RBERs. DIR exploits the observation that the latency of an I/O request is determined, queuing time aside, by the access to the slowest device page, i.e., the page that has the highest RBER. By grouping consecutive logical pages that have high locality and interleaving their encoded data across device pages of different types, which have different RBERs, DIR effectively reduces the number of LDPC read retries with limited read amplification. To meet the requirement of allocating hybrid page types for interleaved data, we also design a page-interleaving-friendly page allocation scheme, which splits all the planes into multi-plane regions for storing the interleaved data and single-plane regions for storing the normal data. The pages in a multi-plane region can be read/written in parallel by the proposed multi-plane command, avoiding the read amplification issue. Based on the DIR scheme and the proposed page allocation scheme, we build a DIR-enabled FTL, which integrates the proposed schemes into the FTL with some modifications. Our experimental results show that adopting DIR in aged SSDs exploits nearly 33% of the locality in I/O requests and, on average, reduces read latency by 43% compared with conventional aged SSDs.
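The intuition that interleaving averages out the RBER can be sketched with a back-of-the-envelope calculation. The per-page-type RBER values and the "each retry level tolerates twice the errors" LDPC model below are made-up illustrative assumptions, not measurements from the paper:

```python
# Hypothetical per-page-type raw bit error rates for an aged TLC device.
RBER = {"LSB": 1e-4, "CSB": 5e-4, "MSB": 2e-3}

# Toy retry model: each LDPC read-retry level tolerates 2x more errors.
def retries_needed(rber, base_tolerance=2e-4):
    n = 0
    while rber > base_tolerance * (2 ** n):
        n += 1
    return n

# Whole logical page stored on the worst page type vs. striped across types.
direct = retries_needed(RBER["MSB"])
striped = retries_needed(sum(RBER.values()) / len(RBER))
print(direct, striped)  # striped needs fewer retries than direct
```

Under these toy numbers, striping drops the worst-case retry count, which is exactly the latency term DIR targets, since the slowest page access dominates request latency.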

Citations: 0
Research on General-Purpose Brain-Inspired Computing Systems
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2024-01-30 · DOI: 10.1007/s11390-023-4002-3
Peng Qu, Xing-Long Ji, Jia-Jie Chen, Meng Pang, Yu-Chen Li, Xiao-Yi Liu, You-Hui Zhang

Brain-inspired computing is a new technology that draws on the principles of brain science and is oriented toward the efficient development of artificial general intelligence (AGI), and a brain-inspired computing system is a hierarchical system composed of neuromorphic chips, basic software and hardware, and algorithms/applications that embody this technology. While such systems are developing rapidly, they face various challenges and opportunities brought by interdisciplinary research, including the issue of software and hardware fragmentation. This paper analyzes the status quo of brain-inspired computing systems. Inspired by the design principles and methodology of general-purpose computers, we propose constructing “general-purpose” brain-inspired computing systems. A general-purpose brain-inspired computing system refers to a brain-inspired computing hierarchy constructed on the design philosophy of decoupling software from hardware, which can flexibly support various brain-inspired computing applications and neuromorphic chips with different architectures. Further, this paper introduces our recent work in these areas, including ANN (artificial neural network)/SNN (spiking neural network) development tools, a hardware-agnostic compilation infrastructure, and a chip micro-architecture with highly flexible programming and high performance. These studies show that a “general-purpose” system can remarkably improve the efficiency of application development and enhance the productivity of basic software, thereby helping to accelerate the advancement of various brain-inspired algorithms and applications. We believe that this is the key to the collaborative research and development, and the evolution, of applications, basic software, and chips in this field, and that it is conducive to building a favorable software/hardware ecosystem for brain-inspired computing.

Citations: 0
Motion-Inspired Real-Time Garment Synthesis with Temporal-Consistency
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2023-12-01 · DOI: 10.1007/s11390-022-1887-1

Abstract

Synthesizing garment dynamics according to body motions is a vital technique in computer graphics. Physics-based simulation depends on an accurate model of the law of kinetics of cloth, which is time-consuming, hard to implement, and complex to control. Existing data-driven approaches either lack temporal consistency or fail to handle garments whose topology differs from the body's. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow is able to generate the corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a transformer-based garment synthesis network to learn the mapping from body motions to garment dynamics. Frame-level attention is employed to capture the dependency between garments and body motions. Moreover, a post-processing procedure is applied to perform penetration removal and auto-texturing. Then, textured clothing animation that is collision-free and temporally consistent is generated. We quantitatively and qualitatively evaluate our proposed workflow from different aspects. Extensive experiments demonstrate that our network is able to deliver clothing dynamics that retain the wrinkles from physics-based simulation while running 1 000 times faster. Besides, our workflow achieves superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be publicly available soon.
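For readers unfamiliar with frame-level attention, here is a generic scaled dot-product attention sketch over a short sequence of motion frames in plain Python. The dimensions and feature values are toy illustrations; the paper's transformer network is, of course, far larger and learned from data.

```python
# Scaled dot-product attention: each query frame mixes the value frames
# according to softmax-normalized similarity scores.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    d = len(Q[0])
    out = []
    for q in Q:  # one output per query frame
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)  # frame-level attention weights, sum to 1
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

# 3 motion frames with 2-dimensional toy features (self-attention).
frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(frames, frames, frames)
print(mixed)
```

Because the weights form a convex combination, each output frame stays within the range of the input features, which is what lets attention propagate dependencies across frames without inventing values.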

Citations: 0
Automatic Target Description File Generation
IF 1.9 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2023-12-01 · DOI: 10.1007/s11390-022-1919-x

Abstract

Agile hardware design is gaining momentum and bringing new chips to market faster and in larger quantities. However, it also poses new challenges for compiler developers, who must retarget existing compilers to these new chips in a shorter time than ever before. Currently, retargeting a compiler backend, e.g., an LLVM backend, to a new target requires compiler developers to manually write a set of target description files (totalling 10 300+ lines of code (LOC) for RISC-V in LLVM), which is error-prone and time-consuming. In this paper, we introduce a new approach, Automatic Target Description File Generation (ATG), which accelerates the generation of a compiler backend for a new target by generating its target description files automatically. Given a new target, ATG proceeds in two stages. First, ATG synthesizes a small list of target-specific properties and a list of code-layout templates from the target description files of a set of existing targets with similar instruction set architectures (ISAs). Second, ATG asks compiler developers to fill in the information for each instruction in the new target in tabular form according to the list of target-specific properties synthesized, and then generates its target description files automatically according to the list of code-layout templates synthesized. The first stage can often be reused by different new targets sharing similar ISAs. We evaluate ATG using nine RISC-V instruction sets drawn from a total of 1 029 instructions in LLVM 12.0. ATG enables compiler developers to generate compiler backends for these ISAs that emit the same assembly code as the existing compiler backends for RISC-V but with significantly less development effort (by specifying each instruction in terms of up to 61 target-specific properties only).
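The fill-in-a-table-then-generate flow can be sketched in a few lines. The property names, the two RISC-V instructions, and the TableGen-like output template below are illustrative assumptions, not ATG's actual property list or file format:

```python
# Toy sketch: per-instruction properties filled in tabular form, and
# target description records generated from a code-layout template.
TEMPLATE = 'def {name} : Inst<(outs {outs}), (ins {ins}), "{asm}">;'

# A tiny, hypothetical subset of the properties ATG would collect.
table = [
    {"name": "ADD", "outs": "GPR:$rd", "ins": "GPR:$rs1, GPR:$rs2",
     "asm": "add $rd, $rs1, $rs2"},
    {"name": "ADDI", "outs": "GPR:$rd", "ins": "GPR:$rs1, simm12:$imm",
     "asm": "addi $rd, $rs1, $imm"},
]

def generate(table):
    # One record per table row, laid out by the shared template.
    return "\n".join(TEMPLATE.format(**row) for row in table)

td = generate(table)
print(td)
```

The design point this illustrates is the separation of concerns: the template (stage one, synthesized from existing backends) is reusable across targets, while the table (stage two) is the only part a developer must supply per instruction.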

{"title":"Automatic Target Description File Generation","authors":"","doi":"10.1007/s11390-022-1919-x","DOIUrl":"https://doi.org/10.1007/s11390-022-1919-x","url":null,"abstract":"<h3>Abstract</h3> <p>Agile hardware design is gaining increasing momentum and bringing new chips in larger quantities to the market faster. However, it also takes new challenges for compiler developers to retarget existing compilers to these new chips in shorter time than ever before. Currently, retargeting a compiler backend, e.g., an LLVM backend to a new target, requires compiler developers to write manually a set of target description files (totalling 10 300+ lines of code (LOC) for RISC-V in LLVM), which is error-prone and time-consuming. In this paper, we introduce a new approach, Automatic Target Description File Generation (ATG), which accelerates the generation of a compiler backend for a new target by generating its target description files automatically. Given a new target, ATG proceeds in two stages. First, ATG synthesizes a small list of target-specific properties and a list of code-layout templates from the target description files of a set of existing targets with similar instruction set architectures (ISAs). Second, ATG requests compiler developers to fill in the information for each instruction in the new target in tabular form according to the list of target-specific properties synthesized and then generates its target description files automatically according to the list of code-layout templates synthesized. The first stage can often be reused by different new targets sharing similar ISAs. We evaluate ATG using nine RISC-V instruction sets drawn from a total of 1 029 instructions in LLVM 12.0. 
ATG enables compiler developers to generate compiler backends for these ISAs that emit the same assembly code as the existing compiler backends for RISC-V but with significantly less development effort (by specifying each instruction in terms of up to 61 target-specific properties only).</p>","PeriodicalId":50222,"journal":{"name":"Journal of Computer Science and Technology","volume":null,"pages":null},"PeriodicalIF":1.9,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139656170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
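ATG's two-stage flow — synthesize a property list and code-layout templates, then have the developer fill in one table row per instruction and render the description files — can be sketched as follows. The property names, the template string, and the instruction rows below are illustrative stand-ins, not ATG's actual synthesized schema or LLVM TableGen syntax.

```python
# Illustrative sketch of ATG's two-stage flow (hypothetical schema).

# Stage 1 output (normally synthesized from existing targets with similar
# ISAs): a small list of target-specific properties and a layout template.
PROPERTIES = ["name", "mnemonic", "format", "opcode", "funct3"]
TD_TEMPLATE = 'def {name} : Inst{format}<0b{opcode}, 0b{funct3}, "{mnemonic}">;'

# Stage 2 input: the compiler developer fills in one table row per
# instruction, guided by the synthesized property list.
instruction_table = [
    {"name": "ADD", "mnemonic": "add", "format": "R", "opcode": "0110011", "funct3": "000"},
    {"name": "XOR", "mnemonic": "xor", "format": "R", "opcode": "0110011", "funct3": "100"},
]

def generate_td(table):
    """Validate each row against the property list, then render one
    target-description record per instruction from the layout template."""
    records = []
    for row in table:
        missing = [p for p in PROPERTIES if p not in row]
        if missing:
            raise ValueError(f"{row.get('name', '?')}: missing {missing}")
        records.append(TD_TEMPLATE.format(**row))
    return "\n".join(records)

print(generate_td(instruction_table))
```

The point of the tabular stage is that each instruction is described by a bounded set of properties (up to 61 in the paper's evaluation) rather than by free-form description-file code.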
Citations: 0
Hadamard Encoding Based Frequent Itemset Mining under Local Differential Privacy
IF 1.9 CAS Zone 3 Q2 Computer Science Pub Date : 2023-12-01 DOI: 10.1007/s11390-023-1346-7

Abstract

Local differential privacy (LDP) approaches to collecting sensitive information for frequent itemset mining (FIM) can reliably guarantee privacy. Most current approaches to FIM under LDP add “padding and sampling” steps to obtain frequent itemsets and their frequencies because each user transaction represents a set of items. The current state-of-the-art approach, namely set-value itemset mining (SVSM), must balance variance and bias to achieve accurate results. Thus, an unbiased FIM approach with lower variance is highly promising. To narrow this gap, we propose an Item-Level LDP frequency oracle approach, named the Integrated-with-Hadamard-Transform-Based Frequency Oracle (IHFO). For the first time, Hadamard encoding is introduced to a set of values to encode all items into a fixed vector, and perturbation can be subsequently applied to the vector. An FIM approach, called optimized united itemset mining (O-UISM), is proposed to combine the padding-and-sampling-based frequency oracle (PSFO) and the IHFO into a framework for acquiring accurate frequent itemsets with their frequencies. Finally, we theoretically and experimentally demonstrate that O-UISM significantly outperforms the extant approaches in finding frequent itemsets and estimating their frequencies under the same privacy guarantee.
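At its core, a Hadamard-based frequency oracle encodes each user's item through entries of a Hadamard matrix, perturbs a single sampled coefficient, and debiases on aggregation. The sketch below is a generic Hadamard-response-style oracle under ε-LDP, not the paper's exact IHFO construction (which integrates this primitive with the padding-and-sampling pipeline for itemsets):

```python
import math
import random

def hadamard_entry(i, j):
    # Entry of the 2^m x 2^m Hadamard matrix: (-1)^{popcount(i AND j)}.
    return 1 - 2 * (bin(i & j).count("1") % 2)

def report(value, m, eps):
    """One user's epsilon-LDP report: sample a Hadamard coefficient index
    uniformly, then keep its sign with probability e^eps / (e^eps + 1)."""
    j = random.randrange(1 << m)
    truth = hadamard_entry(j, value)
    p_keep = math.exp(eps) / (math.exp(eps) + 1.0)
    return j, (truth if random.random() < p_keep else -truth)

def estimate(reports, m, eps):
    """Aggregate perturbed reports into unbiased frequency estimates
    for every item in the domain of size 2^m."""
    n = len(reports)
    p_keep = math.exp(eps) / (math.exp(eps) + 1.0)
    scale = 1.0 / (2.0 * p_keep - 1.0)  # corrects the randomized response
    freq = [0.0] * (1 << m)
    for j, s in reports:
        for v in range(1 << m):
            freq[v] += s * hadamard_entry(j, v) * scale
    return [f / n for f in freq]
```

Because the Hadamard rows are orthogonal, the estimator is unbiased for every item's frequency; keeping its variance low under a fixed ε is exactly the axis on which IHFO-style designs compete.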

Citations: 0
2k-Vertex Kernels for Cluster Deletion and Strong Triadic Closure
IF 1.9 CAS Zone 3 Q2 Computer Science Pub Date : 2023-11-30 DOI: 10.1007/s11390-023-1420-1
Wen-Yu Gao, Hang Gao

Cluster deletion and strong triadic closure are two important NP-complete problems that have received significant attention due to their applications in various areas, including social networks and data analysis. Although cluster deletion and strong triadic closure are closely linked by induced paths on three vertices, there are subtle differences between them. In some cases, the solutions of strong triadic closure and cluster deletion are quite different. In this paper, we study the parameterized algorithms for these two problems. More specifically, we focus on the kernels of these two problems. Instead of separating the critical clique and its neighbors for analysis, we consider them as a whole, which allows us to more effectively bound the number of related vertices. In addition, in analyzing the kernel of strong triadic closure, we introduce the concept of edge-disjoint induced path on three vertices, which enables us to obtain the lower bound of weak edge number in a more concise way. Our analysis demonstrates that cluster deletion and strong triadic closure both admit 2k-vertex kernels. These results represent improvements over previously best-known kernels for both problems. Furthermore, our analysis provides additional insights into the relationship between cluster deletion and strong triadic closure.
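Both problems revolve around induced paths on three vertices (an "open triangle" u–w–v whose endpoints u and v are non-adjacent): cluster deletion must destroy every such path, while strong triadic closure must label at least one edge of each as weak. A minimal enumeration sketch, using a plain adjacency-set representation:

```python
from itertools import combinations

def induced_p3s(adj):
    """Enumerate induced paths on three vertices: u - w - v with the
    endpoints u, v non-adjacent. adj maps each vertex to its neighbor set."""
    paths = []
    for w in sorted(adj):
        for u, v in combinations(sorted(adj[w]), 2):
            if u not in adj[v]:  # endpoints non-adjacent => induced P3
                paths.append((u, w, v))
    return paths

# A graph is a disjoint union of cliques iff it contains no induced P3:
print(induced_p3s({1: {2}, 2: {1, 3}, 3: {2}}))       # a path has one P3
print(induced_p3s({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}))  # a triangle has none
```

The subtle difference noted in the abstract shows up here: cluster deletion deletes edges until this list is empty, whereas strong triadic closure only needs each listed path to contain a weak edge, so their optimal solutions can diverge.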

Citations: 0
Composing Like an Ancient Chinese Poet: Learn to Generate Rhythmic Chinese Poetry
IF 1.9 CAS Zone 3 Q2 Computer Science Pub Date : 2023-11-30 DOI: 10.1007/s11390-023-1295-1
Ming He, Yan Chen, Hong-Ke Zhao, Qi Liu, Le Wu, Yu Cui, Gui-Hua Zeng, Gui-Quan Liu

Automatic generation of Chinese classical poetry is still a challenging problem in artificial intelligence. Recently, Encoder-Decoder models have provided a few viable methods for poetry generation. However, by reviewing the prior methods, two major issues still need to be settled: 1) most of them are one-stage generation methods without further polishing; 2) they rarely take into consideration the restrictions of poetry, such as tone and rhyme. Intuitively, some ancient Chinese poets tended first to write a coarse poem underlying aesthetics and then deliberated its semantics; while others first create a semantic poem and then refine its aesthetics. On this basis, in order to better imitate the human creation procedure of poems, we propose a two-stage method (i.e., restricted polishing generation method) of which each stage focuses on the different aspects of poems (i.e., semantics and aesthetics), which can produce a higher quality of generated poems. In this way, the two-stage method develops into two symmetrical generation methods, the aesthetics-to-semantics method and the semantics-to-aesthetics method. In particular, we design a sampling method and a gate to formulate the tone and rhyme restrictions, which can further improve the rhythm of the generated poems. Experimental results demonstrate the superiority of our proposed two-stage method in both automatic evaluation metrics and human evaluation metrics compared with baselines, especially in yielding consistent improvements in tone and rhyme.
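The gate that formulates tone and rhyme restrictions can be pictured, in its most minimal form, as a mask over next-character scores that zeroes out candidates violating the required rhyme class. The rhyme table and the scores below are invented for illustration and are not the paper's actual data or gating mechanism:

```python
NEG_INF = float("-inf")

# Hypothetical rhyme table: maps a candidate character to its rhyme class.
RHYME_OF = {"光": "ang", "霜": "ang", "乡": "ang", "月": "ue"}

def rhyme_gate(scores, required_rhyme):
    """Mask candidate scores so only characters in the required rhyme
    class survive, then greedily pick the best one -- a crude analogue
    of a sampling gate that enforces rhyme restrictions."""
    masked = {ch: (s if RHYME_OF.get(ch) == required_rhyme else NEG_INF)
              for ch, s in scores.items()}
    best = max(masked, key=masked.get)
    if masked[best] == NEG_INF:
        raise ValueError("no candidate satisfies the rhyme restriction")
    return best

# The model may prefer "月", but the gate forces an "ang"-rhyming character:
print(rhyme_gate({"月": 2.0, "霜": 1.5, "光": 1.0}, "ang"))  # 霜
```

Applying such a constraint at decoding time, rather than hoping the model learns it implicitly, is what lets the two-stage method yield consistent improvements in tone and rhyme.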

Citations: 0