
Journal of Computer Science and Technology: Latest Publications

SG-NeRF: Sparse-Input Generalized Neural Radiance Fields for Novel View Synthesis
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-26 · DOI: 10.1007/s11390-024-4157-6
Kuo Xu, Jie Li, Zhen-Qiang Li, Yang-Jie Cao

Traditional neural radiance fields for rendering novel views require dense input images and per-scene optimization, which limits their practical applications. We propose SG-NeRF (Sparse-Input Generalized Neural Radiance Fields), a generalization method that infers scenes from input images and performs high-quality rendering without per-scene optimization. First, we construct an improved multi-view stereo structure based on convolutional attention and a multi-level fusion mechanism to obtain the geometric and appearance features of the scene from the sparse input images; these features are then aggregated by multi-head attention as the input of the neural radiance fields. This strategy of using neural radiance fields to decode scene features, instead of mapping positions and orientations, enables our method to perform cross-scene training as well as inference, allowing neural radiance fields to generalize to novel view synthesis on unseen scenes. We tested the generalization ability on the DTU dataset, and our PSNR (peak signal-to-noise ratio) improved by 3.14 dB over the baseline method under the same input conditions. In addition, if dense input views are available for a scene, the average PSNR can be further improved by 1.04 dB through a short refinement training, yielding higher-quality renderings.
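The key mechanism, conditioning the radiance-field decoder on per-view features fused by multi-head attention instead of on raw positions and directions, can be pictured roughly as follows. This is a minimal PyTorch sketch; the module name, feature dimensions, and the mean-fusion step are our illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class FeatureConditionedField(nn.Module):
    """Aggregates per-view scene features for each 3D sample point with
    multi-head attention; the fused feature, not the raw position and
    direction, is what the radiance-field decoder consumes."""
    def __init__(self, feat_dim=64, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 4),  # RGB + density per sample point
        )

    def forward(self, view_feats):
        # view_feats: (num_points, num_views, feat_dim), one geometry/
        # appearance feature per sparse input view per sampled point.
        fused, _ = self.attn(view_feats, view_feats, view_feats)
        fused = fused.mean(dim=1)  # fuse across the input views
        return self.decoder(fused)

points = torch.randn(1024, 3, 64)  # 1024 sample points, 3 input views
print(FeatureConditionedField()(points).shape)  # torch.Size([1024, 4])
```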

Citations: 0
An Empirical Study on Automated Test Generation Tools for Java: Effectiveness and Challenges
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-26 · DOI: 10.1007/s11390-023-1935-5
Xiang-Jun Liu, Ping Yu, Xiao-Xing Ma

Automated test generation tools enable test automation and alleviate the inefficiency of writing hand-crafted test cases. However, existing automated tools are not yet mature enough to be widely adopted by software testing groups. This paper conducts an empirical study on state-of-the-art automated tools for Java, i.e., EvoSuite, Randoop, JDoop, JTeXpert, T3, and Tardis. We design a test workflow to facilitate the process, which can automatically run the tools for test generation, collect data, and evaluate various metrics. Furthermore, we conduct an empirical analysis of these six tools and their related techniques from different aspects, i.e., code coverage, mutation score, test suite size, readability, and real-fault detection ability. We discuss the benefits and drawbacks of hybrid techniques based on the experimental results. Besides, we share our experience in setting up and executing these tools, and summarize their usability and user-friendliness. Finally, we offer insights into automated tools in terms of test suite readability improvement, meaningful assertion generation, test suite reduction for random testing tools, and symbolic execution integration.
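For readers unfamiliar with the evaluation metrics named above, mutation score and coverage are simple ratios over a tool's raw tallies. The sketch below is our illustration of how they are typically computed, not the paper's tooling.

```python
def mutation_score(killed: int, total_mutants: int) -> float:
    """Fraction of seeded faults (mutants) the generated suite detects."""
    return killed / total_mutants if total_mutants else 0.0

def branch_coverage(covered: set, all_branches: set) -> float:
    """Fraction of branches exercised by the generated tests."""
    return len(covered & all_branches) / len(all_branches)

# e.g., a generated suite killing 37 of 50 mutants over 120 of 160 branches
print(mutation_score(37, 50))                              # 0.74
print(branch_coverage(set(range(120)), set(range(160))))   # 0.75
```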

Citations: 0
A Survey and Experimental Review on Data Distribution Strategies for Parallel Spatial Clustering Algorithms
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-26 · DOI: 10.1007/s11390-024-2700-0
Jagat Sesh Challa, Navneet Goyal, Amogh Sharma, Nikhil Sreekumar, Sundar Balasubramaniam, Poonam Goyal

The advent of Big Data has led to rapid growth in the usage of parallel clustering algorithms that work over distributed computing frameworks such as MPI, MapReduce, and Spark. An important step for any parallel clustering algorithm is the distribution of data amongst the cluster nodes; this step governs the methodology and performance of the entire algorithm. Researchers typically use a random strategy or a spatial/geometric distribution strategy such as kd-tree based partitioning or grid-based partitioning, as per the requirements of the algorithm. However, these strategies are generic and not tailor-made for any specific parallel clustering algorithm. In this paper, we give a comprehensive literature survey of MPI-based parallel clustering algorithms with special reference to the data distribution strategies they employ. We also propose three new data distribution strategies: Parameterized Dimensional Split for parallel density-based clustering algorithms like DBSCAN and OPTICS; Cell-Based Dimensional Split for dGridSLINK, a grid-based hierarchical clustering algorithm that is efficient for disjoint spatial distributions; and Projection-Based Split, a generic distribution strategy. All of these preserve spatial locality, achieve disjoint partitioning, and ensure good data load balancing. The experimental analysis shows the benefits of using the proposed data distribution strategies for the algorithms they are designed for, based on which we give appropriate recommendations for their usage.
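As a rough illustration of the spatial/geometric family of strategies discussed above, the sketch below performs a kd-tree-style recursive median split. The paper's Parameterized and Cell-Based Dimensional Splits refine how the axes and cut points are chosen, so treat this only as a simplified stand-in.

```python
import numpy as np

def dimensional_split(points, num_parts, depth=0):
    """Recursively split points at the median of alternating axes,
    yielding disjoint, spatially local, load-balanced partitions
    (one per cluster node). A simplified kd-tree-style stand-in."""
    if num_parts == 1:
        return [points]
    axis = depth % points.shape[1]          # cycle through dimensions
    order = points[:, axis].argsort()
    mid = len(points) // 2                  # median cut keeps load balanced
    half = num_parts // 2
    return (dimensional_split(points[order[:mid]], half, depth + 1) +
            dimensional_split(points[order[mid:]], num_parts - half, depth + 1))

pts = np.random.rand(10_000, 2)
parts = dimensional_split(pts, 8)           # e.g., one partition per MPI rank
print([len(p) for p in parts])              # ~1250 points each
```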

Citations: 0
Multimodal Dependence Attention and Large-Scale Data Based Offline Handwritten Formula Recognition
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-26 · DOI: 10.1007/s11390-022-1987-y
Han-Chao Liu, Lan-Fang Dong, Xin-Ming Zhang

Offline handwritten formula recognition is a challenging task due to the variety of handwritten symbols and two-dimensional formula structures. Recently, deep neural network recognizers based on the encoder-decoder framework have achieved great improvements on this task. However, unsatisfactory recognition performance on formulas with long LaTeX strings is one shortcoming of the existing work. Moreover, the lack of sufficient training data also limits the capability of these recognizers. In this paper, we design a multimodal dependence attention (MDA) module to help the model learn visual and semantic dependencies among symbols in the same formula, improving the recognition performance on formulas with long LaTeX strings. To alleviate overfitting and further improve recognition performance, we also propose a new dataset, the Handwritten Formula Image Dataset (HFID), which contains 25 620 handwritten formula images collected from real life. We conduct extensive experiments to demonstrate the effectiveness of the proposed MDA module and HFID dataset, achieving state-of-the-art expression accuracies of 63.79% and 65.24% on CROHME 2014 and CROHME 2016, respectively.
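Expression accuracy, the metric reported on CROHME, counts a formula as recognized only when the prediction matches the ground truth as a whole, here approximated by exact string match in a small sketch of our own:

```python
def expression_accuracy(predictions, references):
    """Whole-expression accuracy: a formula counts as recognized only
    if the entire predicted string equals the ground truth."""
    hits = sum(p == r for p, r in zip(predictions, references))
    return hits / len(references)

preds = [r"\frac{a}{b}+c", r"x^{2}", r"\sum_{i=1}^{n} i"]
refs  = [r"\frac{a}{b}+c", r"x^{2}", r"\sum_{i=1}^{n} i^{2}"]
print(expression_accuracy(preds, refs))  # 0.666...: one wrong symbol fails the whole formula
```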

Citations: 0
SAIH: A Scalable Evaluation Methodology for Understanding AI Performance Trend on HPC Systems
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-06 · DOI: 10.1007/s11390-023-1840-y
Jiang-Su Du, Dong-Sheng Li, Ying-Peng Wen, Jia-Zhi Jiang, Dan Huang, Xiang-Ke Liao, Yu-Tong Lu

Novel artificial intelligence (AI) technology has expedited various scientific research, e.g., in cosmology, physics, and bioinformatics, inevitably becoming a significant category of workload on high-performance computing (HPC) systems. Existing AI benchmarks tend to customize well-recognized AI applications so as to evaluate the AI performance of HPC systems at a predefined problem size, in terms of datasets and AI models. However, driven by novel AI technology, most AI applications evolve fast in their models and datasets to achieve higher accuracy and broader applicability. Because they do not scale with the problem size, static AI benchmarks may fall short of characterizing the performance trend of evolving AI applications on HPC systems, in particular scientific AI applications on large-scale systems. In this paper, we propose a scalable evaluation methodology (SAIH) for analyzing the AI performance trend of HPC systems by scaling the problem sizes of customized AI applications. To enable scalability, SAIH builds a set of novel mechanisms for augmenting problem sizes. As the data and model constantly scale, we can investigate the trend and range of AI performance on HPC systems and further diagnose system bottlenecks. To verify the methodology, we augment a cosmological AI application to evaluate a real GPU-equipped HPC system as a case study of SAIH. With data and model augmentation, SAIH progressively evaluates the AI performance trend of HPC systems, e.g., increasing from 5.2% to 59.6% of the peak theoretical hardware performance. The evaluation results are analyzed and summarized into insights on performance issues. For instance, we find that the AI application constantly consumes the I/O bandwidth of the shared parallel file system while iteratively training its model; if I/O contention exists, the shared parallel file system may become a bottleneck.
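The evaluation loop behind such a methodology can be pictured as follows. This is a hypothetical sketch: the `run_workload` hook and the numbers are ours, loosely echoing the 5.2% to 59.6% trend reported above, and are not SAIH's actual implementation.

```python
def saih_sweep(run_workload, scales, peak_flops):
    """Record achieved efficiency while the problem size grows, exposing
    where performance saturates or a bottleneck (e.g., shared-filesystem
    I/O contention) sets in. `run_workload(scale)` is a hypothetical hook
    returning measured FLOP/s at that problem size."""
    return [(s, run_workload(s) / peak_flops) for s in scales]

# Illustrative numbers only, loosely echoing the 5.2% -> 59.6% trend above.
measured = {1: 0.5e15, 2: 1.8e15, 4: 3.9e15, 8: 5.7e15}
for scale, eff in saih_sweep(lambda s: measured[s], [1, 2, 4, 8], 9.6e15):
    print(f"scale {scale}: {eff:.1%} of peak")
```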

Citations: 0
Computational Approaches for Traditional Chinese Painting: From the “Six Principles of Painting” Perspective
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-06 · DOI: 10.1007/s11390-024-3408-x
Wei Zhang, Jian-Wei Zhang, Kam-Kwai Wong, Yi-Fang Wang, Ying-Chao-Jie Feng, Lu-Wei Wang, Wei Chen

Traditional Chinese painting (TCP) is an invaluable cultural heritage resource and a unique visual art style. In recent years, there has been a growing emphasis on the digitalization of TCP for cultural preservation and revitalization. The resulting digital copies have enabled the advancement of computational methods for a structured and systematic understanding of TCP. To explore this topic, we conduct an in-depth analysis of 94 pieces of literature. Based on numerous conversations with specialists, we examine the current use of computer technologies on TCP from three perspectives. First, in light of the “Six Principles of Painting” theory, we categorize the articles according to their research focus on artistic elements. Second, we create a four-stage framework to illustrate the purposes of TCP applications. Third, we summarize the popular computational techniques applied to TCP. This work also provides insights into potential applications and prospects, together with professional opinions.

Citations: 0
SinGRAV: Learning a Generative Radiance Volume from a Single Natural Scene
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-06 · DOI: 10.1007/s11390-023-3596-9
Yu-Jie Wang, Xue-Lin Chen, Bao-Quan Chen

We present SinGRAV, an attempt to learn a generative radiance volume from multi-view observations of a single natural scene, in stark contrast to existing category-level 3D generative models that learn from images of many object-centric scenes. Inspired by SinGAN, we learn the internal distribution of the input scene, which necessitates our key designs with respect to the scene representation and network architecture. Unlike popular multi-layer perceptron (MLP)-based architectures, we employ convolutional generators and discriminators, which inherently possess a spatial locality bias, to operate over voxelized volumes and learn the internal distribution over a plethora of overlapping regions. On the other hand, localizing the adversarial generators and discriminators over confined areas with limited receptive fields easily leads to highly implausible spatial arrangements of geometric structures. Our remedy is to use a spatial inductive bias and joint discrimination on geometric clues in the form of 2D depth maps. This strategy effectively improves spatial arrangement while incurring negligible additional computational cost. Experimental results demonstrate the ability of SinGRAV to generate plausible and diverse variations from a single scene, the merits of SinGRAV over state-of-the-art generative neural scene models, and its versatility across a variety of applications. Code and data will be released to facilitate further research.
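The spatial-locality bias mentioned above comes from fully convolutional networks whose limited receptive fields score overlapping local regions of the volume rather than the scene as a whole. Below is a tiny sketch of such a 3D patch discriminator; the layer sizes and channel counts are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# A tiny fully convolutional 3D discriminator: its limited receptive field
# scores overlapping local regions of the voxelized radiance volume rather
# than the whole scene, which is the spatial-locality bias described above.
discriminator = nn.Sequential(
    nn.Conv3d(4, 32, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv3d(32, 1, kernel_size=3, padding=1),  # per-region real/fake score
)

volume = torch.randn(1, 4, 32, 32, 32)  # RGB + density voxel grid
print(discriminator(volume).shape)      # torch.Size([1, 1, 32, 32, 32])
```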

Citations: 0
A Transformer-Assisted Cascade Learning Network for Choroidal Vessel Segmentation
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-06 · DOI: 10.1007/s11390-024-3679-2
Yang Wen, Yi-Lin Wu, Lei Bi, Wu-Zhen Shi, Xiao-Xiao Liu, Yu-Peng Xu, Xun Xu, Wen-Ming Cao, David Dagan Feng

As a highly vascular part of the eye, the choroid is crucial in the diagnosis of various eye diseases. However, limited research has focused on the inner structure of the choroid, owing to the challenges in obtaining sufficiently accurate label data, particularly for the choroidal vessels. Meanwhile, the existing direct choroidal vessel segmentation methods for the intelligent diagnosis of vascular-assisted ophthalmic diseases remain unsatisfactory due to noisy data, while synergistic segmentation methods compromise vessel segmentation performance for the choroid layer segmentation task. Common cascaded structures also grapple with error propagation during training. To address these challenges, we propose a cascade learning segmentation method for the inner vessel structures of the choroid. Specifically, we propose the Transformer-Assisted Cascade Learning Network (TACLNet) for choroidal vessel segmentation, which comprises a two-stage training strategy: pre-training for choroid layer segmentation followed by joint training for choroid layer and choroidal vessel segmentation. We also enhance the skip connections by introducing a multi-scale subtraction connection module, designated MSC, which captures differential and detailed information simultaneously. Additionally, we implement an auxiliary Transformer branch, named ATB, to integrate global features into the segmentation process. Experimental results show that our method achieves state-of-the-art performance for choroidal vessel segmentation. Moreover, we further validate the significant superiority of the proposed method for retinal fluid segmentation in optical coherence tomography (OCT) scans on a publicly available dataset. All these results demonstrate that TACLNet advances choroidal vessel segmentation and is of great significance for ophthalmic research and clinical application.
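The abstract does not detail the MSC design, but a multi-scale subtraction connection can be imagined as differencing encoder and decoder features at several resolutions. The sketch below is purely our guess at the idea, not the paper's module.

```python
import torch
import torch.nn.functional as F

def msc_connection(enc_feat, dec_feat, scales=(1, 2, 4)):
    """Difference encoder and decoder features at several resolutions,
    highlighting fine (e.g., vessel-scale) detail that plain additive
    skip connections can wash out."""
    outs = []
    for s in scales:
        e = F.avg_pool2d(enc_feat, s) if s > 1 else enc_feat
        d = F.avg_pool2d(dec_feat, s) if s > 1 else dec_feat
        diff = torch.abs(e - d)  # differential information at this scale
        outs.append(F.interpolate(diff, size=enc_feat.shape[-2:]))
    return torch.cat(outs, dim=1)  # concatenate scales along channels

e, d = torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64)
print(msc_connection(e, d).shape)  # torch.Size([1, 48, 64, 64])
```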

Citations: 0
SHA: QoS-Aware Software and Hardware Auto-Tuning for Database Systems
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-06 · DOI: 10.1007/s11390-022-1751-3
Jin Li, Quan Chen, Xiao-Xin Tang, Min-Yi Guo

While databases are widely used in commercial user-facing services that have stringent quality-of-service (QoS) requirements, it is crucial to ensure good performance while minimizing hardware usage. Our investigation shows that the optimal DBMS (database management system) software configuration varies with the user request pattern (i.e., the workload) and the hardware configuration. Identifying the optimal software and hardware configurations for a database workload is challenging, because DBMSs have hundreds of tunable knobs, the effect of tuning one knob depends on the others, and the dependency relationships change under different hardware configurations. In this paper, we propose SHA, a software and hardware auto-tuning system for DBMSs. SHA comprises a scaling-based performance predictor, a reinforcement learning (RL) based software tuner, and a QoS-aware resource reallocator. The performance predictor predicts a workload's optimal performance under different hardware configurations and identifies the minimum amount of resources needed to satisfy its performance requirement. The software tuner fine-tunes the DBMS software knobs to optimize the performance of the workload. The resource reallocator assigns the saved resources to other applications to improve resource utilization without violating the QoS of the database workload. Experimental results show that SHA improves the performance of database workloads by 9.9% on average compared with a state-of-the-art solution when the hardware configuration is fixed, and improves resource utilization by 43.2% while ensuring QoS.
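As a greatly simplified stand-in for an RL-based knob tuner, the sketch below perturbs one knob at a time and keeps only configurations that improve measured throughput. The `benchmark` hook and the knob names are hypothetical, and a real tuner would learn across workloads rather than hill-climb.

```python
import random

def tune_knobs(benchmark, knob_space, iters=50, seed=0):
    """Perturb one knob at a time and keep changes that improve measured
    throughput. `benchmark(config)` is a hypothetical hook that runs the
    workload under `config` and returns, e.g., queries per second."""
    rng = random.Random(seed)
    config = {k: rng.choice(v) for k, v in knob_space.items()}
    best = benchmark(config)
    for _ in range(iters):
        knob = rng.choice(list(knob_space))
        trial = dict(config, **{knob: rng.choice(knob_space[knob])})
        score = benchmark(trial)
        if score > best:          # keep only improving configurations
            config, best = trial, score
    return config, best

knobs = {"buffer_pool_mb": [512, 1024, 2048], "io_threads": [2, 4, 8]}
cfg, qps = tune_knobs(lambda c: c["buffer_pool_mb"] / c["io_threads"], knobs)
print(cfg, qps)
```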

Citations: 0
Understanding and Detecting Inefficient Image Displaying Issues in Android Apps
IF 1.9 · CAS Tier 3 (Computer Science) · Q4 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-06-06 · DOI: 10.1007/s11390-022-1670-3
Wen-Jie Li, Jun Ma, Yan-Yan Jiang, Chang Xu, Xiao-Xing Ma

Mobile applications (apps for short) often need to display images. However, inefficient image displaying (IID) issues are pervasive in mobile apps and can severely impact app performance and user experience. This paper first establishes a descriptive framework for the image displaying procedures behind IID issues. Based on this framework, we conduct an empirical study of 216 real-world IID issues collected from 243 popular open-source Android apps to validate the presence and severity of IID issues, and then shed light on their characteristics to support research on effective issue detection. With the findings of this study, we propose a static IID issue detection tool, TAPIR, and evaluate it on 243 real-world Android apps. Encouragingly, 49 and 64 previously unknown IID issues reported by TAPIR in two different versions of 16 apps are manually confirmed as true positives, respectively; 16 previously unknown IID issues reported by TAPIR have been confirmed by developers, and 13 have been fixed. We further evaluate the performance impact of the detected IID issues and the performance improvement when they are fixed. The results demonstrate that the IID issues detected by TAPIR indeed cause significant performance degradation, further showing the effectiveness and efficiency of TAPIR.
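A static IID detector works by matching known inefficient code patterns in app sources. The toy lint rule below is our illustration, not TAPIR's actual checks: it flags `BitmapFactory.decode*` calls in files that never configure `inSampleSize` downsampling, a classic pattern where a full-resolution image is decoded for a small on-screen view.

```python
import re

# Coarse by design: a real checker reasons per call site and per view
# size, not per file. BitmapFactory and inSampleSize are real Android APIs.
DECODE = re.compile(r"BitmapFactory\.decode\w+\(")

def find_iid_candidates(java_source: str):
    if "inSampleSize" in java_source:
        return []  # the file downsamples somewhere; skip it
    return [(n, line.strip())
            for n, line in enumerate(java_source.splitlines(), 1)
            if DECODE.search(line)]

src = "Bitmap b = BitmapFactory.decodeFile(path);\nview.setImageBitmap(b);"
print(find_iid_candidates(src))  # [(1, 'Bitmap b = BitmapFactory.decodeFile(path);')]
```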

Citations: 0