
arXiv - CS - Mathematical Software: Latest Publications

A Prony method variant which surpasses the Adaptive LMS filter in the output signal's representation of input
Pub Date: 2024-09-02 DOI: arxiv-2409.01272
Parthasarathy Srinivasan
The Prony method for approximating signals comprising sinusoidal/exponential components has been known since Prony's pioneering work in his seminal dissertation of 1795. However, the method saw real-world application only upon the advent of the computational era, which made feasible the extensive numerical intricacy and labor it inherently demands. The Adaptive LMS Filter, the most pervasive method for signal filtration and approximation since its inception in 1965, does not provide a consistently assured level of precision, as the extended experiment in this work shows. As a remedy, this study improves upon the Prony method by observing that a better (more precise) computational approximation can be obtained if an adjustment for computational error is made in the autoregressive model set up in the initial step of the Prony computation itself. This adjustment is proportional to the deviation of the coefficients in that autoregressive model. The results obtained with this modification achieve the expected consistency and higher precision of the output (recovered-signal) approximations, as demonstrated in this work through comparison with results obtained using the Adaptive LMS Filter.
Citations: 0
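To make the construction above concrete, here is a minimal NumPy sketch of the classical Prony method, whose initial autoregressive step is the one the paper adjusts. The paper's specific error-correction term is not reproduced, and the function name and interface are illustrative assumptions only.

```python
import numpy as np

def prony(x, p):
    """Classical Prony fit of p exponential modes to a 1-D signal x,
    so that x[n] ~= sum_k amps[k] * poles[k]**n."""
    N = len(x)
    # Step 1: the autoregressive (linear-prediction) system solved in
    # the initial Prony step -- the stage the paper corrects, with an
    # adjustment proportional to the coefficient deviation.
    A = np.column_stack([x[p - 1 - j : N - 1 - j] for j in range(p)])
    a, *_ = np.linalg.lstsq(A, x[p:N], rcond=None)
    # Step 2: the modes are roots of the characteristic polynomial.
    poles = np.roots(np.concatenate(([1.0], -a)))
    # Step 3: amplitudes from a Vandermonde least-squares fit.
    V = np.vander(poles, N, increasing=True).T
    amps, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return amps, poles

# Recover a single damped exponential 2 * 0.9**n from 50 samples.
amps, poles = prony(2.0 * 0.9 ** np.arange(50.0), p=1)
print(np.round(amps.real, 3), np.round(poles.real, 3))  # [2.] [0.9]
```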
TorchDA: A Python package for performing data assimilation with deep learning forward and transformation functions
Pub Date: 2024-08-30 DOI: arxiv-2409.00244
Sibo Cheng, Jinyang Min, Che Liu, Rossella Arcucci
Data assimilation techniques are often confronted with challenges when handling complex, high-dimensional physical systems, because high-precision simulation of such systems is computationally expensive and exact observation functions applicable to them are difficult to obtain. This has prompted growing interest in integrating deep learning models within data assimilation workflows, but current data assimilation software packages cannot accommodate deep learning models. This study presents a novel Python package that seamlessly combines data assimilation with deep neural networks serving as models for the state transition and observation functions. The package, named TorchDA, implements the Kalman Filter, Ensemble Kalman Filter (EnKF), 3D Variational (3DVar), and 4D Variational (4DVar) algorithms, allowing flexible algorithm selection based on application requirements. Comprehensive experiments conducted on the Lorenz 63 system and a two-dimensional shallow water system demonstrate significantly enhanced performance over standalone model predictions without assimilation. The shallow water analysis validates data assimilation capabilities for mapping between different physical quantity spaces in either full space or reduced-order space. Overall, this innovative software package enables flexible integration of deep learning representations within data assimilation, providing a versatile tool for tackling complex high-dimensional dynamical systems across scientific domains.
Citations: 0
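TorchDA's own API is not shown here; as background for the algorithms the abstract lists, below is a minimal NumPy sketch of the Kalman filter analysis (update) step that such packages generalize with learned state-transition and observation functions.

```python
import numpy as np

def kalman_analysis(xb, B, y, H, R):
    """One Kalman filter analysis step with a linear observation
    operator H: combine background state xb (covariance B) with
    observations y (error covariance R)."""
    S = H @ B @ H.T + R                 # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)      # Kalman gain
    xa = xb + K @ (y - H @ xb)          # analysis (posterior) mean
    Pa = (np.eye(len(xb)) - K @ H) @ B  # analysis covariance
    return xa, Pa

# Observe only the first of two state components.
xb = np.array([1.0, 0.0])
B = np.eye(2)
H = np.array([[1.0, 0.0]])
xa, Pa = kalman_analysis(xb, B, y=np.array([2.0]), H=H, R=np.eye(1))
print(xa)  # [1.5 0. ] -- pulled halfway toward the observation
```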
HOBOTAN: Efficient Higher Order Binary Optimization Solver with Tensor Networks and PyTorch
Pub Date: 2024-07-29 DOI: arxiv-2407.19987
Shoya Yasuda, Shunsuke Sotobayashi, Yuichiro Minato
In this study, we introduce HOBOTAN, a new solver designed for Higher Order Binary Optimization (HOBO). HOBOTAN supports both CPU and GPU, with the GPU version developed on top of PyTorch, offering a fast and scalable system. The solver uses tensor networks to solve combinatorial optimization problems, employing a HOBO tensor that maps the problem and performing tensor contractions as needed. Additionally, by combining techniques such as batch processing for tensor optimization and binary-based integer encoding, we significantly enhance the efficiency of combinatorial optimization. In the future, increasing the number of GPUs is expected to harness greater computational power, enabling efficient collaboration among multiple GPUs for high scalability. Moreover, HOBOTAN is designed within the framework of quantum computing, thus providing insights for future quantum computer applications. This paper details the design, implementation, performance evaluation, and scalability of HOBOTAN, demonstrating its effectiveness.
Citations: 0
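HOBOTAN's actual interface is not reproduced here; the following sketch only illustrates what a HOBO instance is: an objective defined by a coefficient tensor, evaluated by tensor contraction (einsum here) and, for this tiny hypothetical example, minimized over bit vectors by brute force.

```python
import numpy as np
from itertools import product

n = 4
rng = np.random.default_rng(seed=0)
# Hypothetical third-order HOBO instance: minimize
#   E(x) = sum_{i,j,k} T[i,j,k] * x_i * x_j * x_k  over x in {0,1}^n.
T = rng.normal(size=(n, n, n))

def energy(x):
    # The objective is a contraction of T with three copies of x.
    return float(np.einsum("ijk,i,j,k->", T, x, x, x))

best = min((np.array(bits, dtype=float)
            for bits in product((0, 1), repeat=n)),
           key=energy)
print(best, round(energy(best), 3))
```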
MPAT: Modular Petri Net Assembly Toolkit
Pub Date: 2024-07-15 DOI: arxiv-2407.10372
Stefano Chiaradonna, Petar Jevtic, Beckett Sterner
We present a Python package called the Modular Petri Net Assembly Toolkit (MPAT) that empowers users to easily create large-scale, modular Petri Nets for various spatial configurations, including extensive spatial grids or geometries derived from shape files, augmented with heterogeneous information layers. Petri Nets are powerful discrete event system modeling tools in computational biology and engineering. However, their utility for the automated construction of large-scale spatial models has been limited by gaps in existing modeling software packages. MPAT addresses this gap by supporting the development of modular Petri Net models with flexible spatial geometries.
Citations: 0
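MPAT's API is not shown; as a reminder of the formalism it assembles at scale, here is a minimal sketch of the standard Petri net token-firing rule, with the two-place, two-transition net chosen purely for illustration.

```python
import numpy as np

# Rows = places, columns = transitions.
pre = np.array([[1, 0],     # tokens each transition consumes
                [0, 1]])
post = np.array([[0, 1],    # tokens each transition produces
                 [1, 0]])
marking = np.array([1, 0])  # initial tokens in each place

def fire(marking, t):
    """Fire transition t if every input place holds enough tokens."""
    if not np.all(marking >= pre[:, t]):
        raise ValueError(f"transition {t} is not enabled")
    return marking - pre[:, t] + post[:, t]

marking = fire(marking, 0)
print(marking)  # [0 1] -- the token moved from place 0 to place 1
```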
Enabling MPI communication within Numba/LLVM JIT-compiled Python code using numba-mpi v1.0
Pub Date: 2024-07-01 DOI: arxiv-2407.13712
Kacper Derlatka, Maciej Manna, Oleksii Bulenok, David Zwicker, Sylwester Arabas
The numba-mpi package offers access to Message Passing Interface (MPI) routines from Python code that uses the Numba just-in-time (JIT) compiler. As a result, high-performance, multi-threaded Python code may utilize MPI communication facilities without leaving JIT-compiled code blocks, which is not possible with the mpi4py package, a higher-level Python interface to MPI. For debugging purposes, numba-mpi retains full functionality of the code even if JIT compilation is disabled. The numba-mpi API constitutes a thin wrapper around the C API of MPI and is built around NumPy arrays, including handling of non-contiguous views over array slices. Project development is hosted on GitHub, leveraging the mpi4py/setup-mpi workflow to enable continuous integration tests on Linux (MPICH, OpenMPI & Intel MPI), macOS (MPICH & OpenMPI), and Windows (MS MPI). The paper covers an overview of the package features, architecture, and performance. As of v1.0, the following MPI routines are exposed and covered by unit tests: size/rank, [i]send/[i]recv, wait[all|any], test[all|any], allreduce, bcast, barrier, scatter/[all]gather & wtime. The package is implemented in pure Python and depends on numpy, numba, and mpi4py (the latter used at initialization and as a source of utility routines only). The performance advantage of numba-mpi over mpi4py is depicted with a simple example, with the code included in its entirety in listings discussed in the text. The application of numba-mpi to handling domain decomposition in numerical solvers for partial differential equations is presented using two external packages that depend on numba-mpi: py-pde and PyMPDATA-MPI.
Citations: 0
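A minimal usage sketch follows, restricted to the size/rank routines named in the abstract; consult the numba-mpi documentation for exact signatures of the remaining routines, as anything beyond these two calls would be an assumption here.

```python
import numba
import numba_mpi

@numba.njit
def who_am_i():
    # MPI queries issued from inside a JIT-compiled block -- the
    # package's key capability relative to mpi4py.
    return numba_mpi.rank(), numba_mpi.size()

if __name__ == "__main__":
    rank, size = who_am_i()
    print(f"rank {rank} of {size}")
    # launch with e.g.: mpiexec -n 4 python this_script.py
```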
FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving
Pub Date: 2024-06-20 DOI: arxiv-2406.14408
Xiaohan Lin, Qingxing Cao, Yinya Huang, Haiming Wang, Jianqiao Lu, Zhengying Liu, Linqi Song, Xiaodan Liang
Formal verification (FV) has witnessed growing significance with the emergence of program synthesis by evolving large language models (LLMs). However, current formal verification mainly resorts to symbolic verifiers or hand-crafted rules, limiting extensive and flexible verification. On the other hand, formal languages for automated theorem proving, such as Isabelle, as another line of rigorous verification, are maintained with comprehensive rules and theorems. In this paper, we propose FVEL, an interactive Formal Verification Environment with LLMs. Specifically, FVEL transforms given code to be verified into Isabelle and then conducts verification via neural automated theorem proving with an LLM. The joined paradigm leverages the rigorous yet abundant formulated and organized rules in Isabelle, and is also convenient for introducing and adjusting cutting-edge LLMs. To achieve this goal, we extract a large-scale dataset, FVELER. The FVELER dataset includes code dependencies and verification processes formulated in Isabelle, containing 758 theories, 29,125 lemmas, and 200,646 proof steps in total, with in-depth dependencies. We benchmark FVELER in the FVEL environment by first fine-tuning LLMs with FVELER and then evaluating them on Code2Inv and SV-COMP. The results show that FVEL with FVELER-fine-tuned Llama3-8B solves 17.39% (69 -> 81) more problems, and Mistral-7B 12% (75 -> 84) more problems, on SV-COMP. The proportion of proof errors is also reduced. Project page: https://fveler.github.io/.
Citations: 0
OpenCAEPoro: A Parallel Simulation Framework for Multiphase and Multicomponent Porous Media Flows
Pub Date: 2024-06-16 DOI: arxiv-2406.10862
Shizhe Li, Chen-Song Zhang
OpenCAEPoro is parallel numerical simulation software developed in C++ for simulating multiphase and multicomponent flows in porous media. The software utilizes a set of general-purpose compositional model equations, enabling it to handle a diverse range of fluid dynamics, including the black oil model, the compositional model, and thermal recovery models. OpenCAEPoro establishes a unified solving framework that integrates many widely used methods, such as IMPEC, FIM, and AIM. This framework allows dynamic collaboration between different methods. Specifically, based on this framework, we have developed an adaptively coupled domain decomposition method, which can provide initial solutions for global methods to accelerate the simulation. The reliability of OpenCAEPoro has been validated through benchmark testing against the SPE comparative solution project. Furthermore, its robust parallel efficiency has been tested in distributed parallel environments, demonstrating its suitability for large-scale simulation problems.
Citations: 0
SySTeC: A Symmetric Sparse Tensor Compiler
Pub Date: 2024-06-13 DOI: arxiv-2406.09266
Radha Patel, Willow Ahrens, Saman Amarasinghe
Symmetric and sparse tensors arise naturally in many domains, including linear algebra, statistics, physics, chemistry, and graph theory. Symmetric tensors are equal to their transposes, so in the $n$-dimensional case we can save up to a factor of $n!$ by avoiding redundant operations. Sparse tensors, on the other hand, are mostly zero, and we can save asymptotically by processing only nonzeros. Unfortunately, specializing for both symmetry and sparsity at the same time is uniquely challenging. Optimizing for symmetry requires consideration of $n!$ transpositions of a triangular kernel, which can be complex and error-prone. Considering multiple transposed iteration orders and triangular loop bounds also complicates iteration through intricate sparse tensor formats. Additionally, since each combination of symmetry and sparse tensor formats requires a specialized implementation, this leads to a combinatorial number of cases. A compiler is needed, but existing compilers cannot take advantage of both symmetry and sparsity within the same kernel. In this paper, we describe the first compiler which can automatically generate symmetry-aware code for sparse or structured tensor kernels. We introduce a taxonomy for symmetry in tensor kernels and show how to target each kind of symmetry. Our implementation demonstrates significant speedups over the non-symmetric state of the art, ranging from 1.36x for SSYMV to 30.4x for a 5-dimensional MTTKRP.
Citations: 0
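To ground the symmetry saving the abstract describes, here is a small illustrative sketch (not SySTeC output) of a symmetric sparse matrix-vector product, the $n = 2$ case behind the SSYMV benchmark: only the upper triangle is stored, and each off-diagonal entry does double duty.

```python
import numpy as np

def symv_upper(upper, x):
    """y = A @ x for symmetric A stored only as its upper triangle:
    upper maps (i, j) with i <= j to A[i, j]."""
    y = np.zeros_like(x)
    for (i, j), v in upper.items():
        y[i] += v * x[j]
        if i != j:  # reuse the entry for the mirrored A[j, i]
            y[j] += v * x[i]
    return y

# A = [[2, 0, 1], [0, 3, 0], [1, 0, 0]] kept as 3 nonzeros, not 4.
upper = {(0, 0): 2.0, (0, 2): 1.0, (1, 1): 3.0}
print(symv_upper(upper, np.array([1.0, 2.0, 3.0])))  # [5. 6. 1.]
```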
PETSc/TAO Developments for Early Exascale Systems
Pub Date: 2024-06-12 DOI: arxiv-2406.08646
Richard Tran Mills, Mark Adams, Satish Balay, Jed Brown, Jacob Faibussowitsch, Toby Isaac, Matthew Knepley, Todd Munson, Hansol Suh, Stefano Zampini, Hong Zhang, Junchao Zhang
The Portable Extensible Toolkit for Scientific Computation (PETSc) library provides scalable solvers for nonlinear time-dependent differential and algebraic equations and for numerical optimization via the Toolkit for Advanced Optimization (TAO). PETSc is used in dozens of scientific fields and is an important building block for many simulation codes. During the U.S. Department of Energy's Exascale Computing Project, the PETSc team has made substantial efforts to enable efficient utilization of the massive fine-grain parallelism present within exascale compute nodes and to enable performance portability across exascale architectures. We recap some of the challenges that designers of numerical libraries face in such an endeavor, and then discuss the many developments we have made, which include the addition of new GPU backends, features supporting efficient on-device matrix assembly, better support for asynchronicity and GPU kernel concurrency, and new communication infrastructure. We evaluate the performance of these developments on some pre-exascale systems as well as the early exascale systems Frontier and Aurora, using compute kernel, communication layer, solver, and mini-application benchmark studies, and then close with a few observations drawn from our experiences on the tension between portable performance and other goals of numerical libraries.
Citations: 0
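PETSc is usable from Python via petsc4py; as a hedged background sketch (not code from the paper), the following assembles a 1-D Laplacian and solves it with a KSP solver whose type, and on suitable builds its GPU backend, is chosen at run time through command-line options; treat the exact option names as build-dependent.

```python
import sys
import petsc4py
petsc4py.init(sys.argv)  # forward command-line options to PETSc
from petsc4py import PETSc

n = 8
A = PETSc.Mat().createAIJ([n, n])
A.setFromOptions()  # e.g. -mat_type aijcusparse on a CUDA build
A.setUp()
for i in range(n):  # tridiagonal 1-D Laplacian stencil
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecLeft()
b.set(1.0)
x = A.createVecRight()

ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()  # e.g. -ksp_type cg -pc_type jacobi
ksp.solve(b, x)
print(x.getArray())
```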
An extension of C++ with memory-centric specifications for HPC to reduce memory footprints and streamline MPI development
Pub Date: 2024-06-10 DOI: arxiv-2406.06095
Pawel K. Radtke, Cristian G. Barrera-Hinojosa, Mladen Ivkovic, Tobias Weinzierl
The C++ programming language and its cousins lean towards a memory-inefficient storage of structs: the compiler inserts helper bits into the struct so that individual attributes align with bytes, and it adds additional bytes aligning attributes with cache lines, while it is not able to exploit knowledge about the range of integers, enums, or bitsets to bring the memory footprint down. Furthermore, the language provides support neither for data exchange via MPI nor for arbitrary floating-point precision formats. If developers need a low memory footprint and MPI datatypes over structs which exchange only minimal data, they have to manipulate the data and write MPI datatypes manually. We propose a C++ language extension based upon C++ attributes through which developers can tell the compiler what memory arrangements would be beneficial: Can multiple booleans be squeezed into one bit field? Do floats hold fewer significant bits than in the IEEE standard? Does the code require a user-defined MPI datatype for certain subsets of attributes? The extension offers the opportunity to fall back to normal alignment and padding rules via plain C++ assignments, no dependencies upon external libraries are introduced, and the resulting code remains standard C++. Our work implements the language annotations within LLVM and demonstrates their potential impact, both upon the runtime and the memory footprint, through smoothed particle hydrodynamics (SPH) benchmarks, uncovering the potential gains in performance and development productivity.
Citations: 0
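The proposed extension itself is C++-only; purely to illustrate the alignment-padding cost the abstract describes, here is a small Python ctypes experiment (struct layout is platform-dependent; the sizes shown are typical for 64-bit platforms).

```python
import ctypes

class Padded(ctypes.Structure):
    # Default layout: the double is aligned to 8 bytes, so 6 padding
    # bytes follow the two bools.
    _fields_ = [("a", ctypes.c_bool),
                ("b", ctypes.c_bool),
                ("x", ctypes.c_double)]

class Packed(ctypes.Structure):
    _pack_ = 1  # byte-packed: trades alignment for footprint
    _fields_ = [("a", ctypes.c_bool),
                ("b", ctypes.c_bool),
                ("x", ctypes.c_double)]

print(ctypes.sizeof(Padded), ctypes.sizeof(Packed))  # typically 16 10
```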