
Frontiers in High Performance Computing: Latest Publications

Supercharging distributed computing environments for high-performance data engineering
Pub Date: 2024-07-12 | DOI: 10.3389/fhpcp.2024.1384619
Niranda Perera, A. Sarker, Kaiying Shan, Alex Fetea, Supun Kamburugamuve, Thejaka Amila Kanewala, Chathura Widanage, Mills Staylor, Tianle Zhong, V. Abeykoon, Gregor von Laszewski, Geoffrey Fox
The data engineering and data science community has embraced the idea of using Python and R dataframes for regular applications. Driven by the big data revolution and artificial intelligence, these frameworks are now ever more important for processing terabytes of data. Such workloads can easily exceed the capabilities of a single machine, yet the frameworks remain attractive because of their convenience and their high-level, optimizable data-manipulation abstractions. It is therefore essential to design scalable dataframe solutions. There have been multiple efforts to tackle this problem, the most notable being the dataframe systems built on distributed computing environments such as Dask and Ray. Even though Dask's and Ray's distributed computing features look very promising, we perceive that Dask Dataframes and Ray Datasets still have room for optimization. In this paper, we present CylonFlow, an alternative distributed dataframe execution methodology that enables state-of-the-art performance and scalability on the same Dask and Ray infrastructure (supercharging them!). To achieve this, we integrate the high-performance dataframe system Cylon, which was originally based on an entirely different execution paradigm, into Dask and Ray. Our experiments show that on a pipeline of dataframe operators, CylonFlow achieves 30× better distributed performance than Dask Dataframes. Interestingly, it also delivers superior sequential performance by leveraging Cylon's native C++ execution. We believe the performance of Cylon in conjunction with CylonFlow extends beyond the data engineering domain and can be used to consolidate the high-performance computing and distributed computing ecosystems.
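The execution pattern the abstract describes — partitioning a dataframe across workers, running a pipeline of operators locally on each partition, then combining the partial results — can be illustrated with a minimal stdlib-only Python sketch. The partitioning scheme and the operators here are invented for illustration; this is not Cylon's or Dask's API.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(rows, n):
    """Split rows into n roughly equal partitions (round-robin data parallelism)."""
    return [rows[i::n] for i in range(n)]

def local_pipeline(part):
    """Operators applied locally on one partition, as a BSP-style worker would:
    a filter followed by a partial aggregation."""
    filtered = [r for r in part if r["value"] > 0]
    return sum(r["value"] for r in filtered)

rows = [{"value": v} for v in range(-5, 15)]
parts = partition(rows, 4)

# Each worker runs the whole operator pipeline on its partition independently;
# a single global reduce combines the partial aggregates at the end.
with ThreadPoolExecutor(max_workers=4) as ex:
    partials = list(ex.map(local_pipeline, parts))
total = sum(partials)
```

The point of the sketch is the shape of the computation, not the threading: a distributed engine replaces the thread pool with processes on many nodes and the final `sum` with a communication step.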
Citations: 0
A multiphysics coupling framework for exascale simulation of fracture evolution in subsurface energy applications
Pub Date: 2024-07-01 | DOI: 10.3389/fhpcp.2024.1416727
David Trebotich, R. Settgast, Terry Ligocki, William Tobin, Gregory H. Miller, Sergi Molins, C. Steefel
Predicting the evolution of fractured media is challenging due to coupled thermal, hydrological, chemical, and mechanical processes that occur over a broad range of spatial scales, from the microscopic pore scale to the field scale. We present a software framework and scientific workflow that couples the pore-scale flow and reactive transport simulator Chombo-Crunch with the field-scale geomechanics solver in GEOS to simulate fracture evolution in subsurface fluid-rock systems. This new multiphysics coupling capability comprises several novel features. An HDF5 data schema for coupling fracture positions between the two codes leverages the coarse resolution of the GEOS mechanics solver, which limits the size of the coupled data; the coupling is thus not taxed by the data produced by the high-resolution pore-scale Chombo-Crunch solver. The coupling framework requires tracking both the before and after coarse nodal positions in GEOS as well as the resolved embedded boundary in Chombo-Crunch. We accomplished this by developing an approach to geometry generation that tracks the fracture interface between the two different methodologies. The GEOS quadrilateral mesh is converted to triangles, which are organized into bins and an accessible tree structure; the nodes are then mapped to the Chombo representation using a continuous signed distance function that determines locations inside, on, and outside the fracture boundary. The GEOS positions are retained in memory on the Chombo-Crunch side of the coupling. The time stepping cadence for the coupled multiphysics processes of flow, transport, reactions, and mechanics is stable and reaches experimental time scales. The approach is validated by simulating 9 days of a core flood experiment in which the fracture aperture evolves due to invasion of carbonated brine into wellbore cement and sandstone. We also demonstrate usage of exascale computing resources by simulating a high-resolution version of the validation problem on OLCF Frontier.
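The abstract's key geometric device is a continuous signed distance function that classifies points as inside, on, or outside the fracture boundary. The following stdlib-only sketch shows the idea in 2D against a polygon boundary (minimum distance to the edges, with the sign set by an even-odd inside test); the actual GEOS/Chombo-Crunch coupling works on triangulated 3D surfaces with spatial binning, which this toy deliberately omits.

```python
import math

def dist_point_segment(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def signed_distance(p, polygon):
    """Negative inside the polygon, zero on it, positive outside."""
    n = len(polygon)
    d = min(dist_point_segment(p, polygon[i], polygon[(i + 1) % n]) for i in range(n))
    # Even-odd ray-casting test for the sign.
    inside = False
    x, y = p
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return -d if inside else d

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(signed_distance((0.5, 0.5), square))  # -0.5: inside, 0.5 from the nearest edge
print(signed_distance((2.0, 0.5), square))  # 1.0: outside
```

A solver can then classify a node by the sign alone and use the magnitude to resolve how far an embedded boundary cell sits from the fracture surface.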
Citations: 0
SmartORC: smart orchestration of resources in the compute continuum
Pub Date: 2023-10-25 | DOI: 10.3389/fhpcp.2023.1164915
Emanuele Carlini, Massimo Coppola, Patrizio Dazzi, Luca Ferrucci, Hanna Kavalionak, Ioannis Korontanis, Matteo Mordacchini, Konstantinos Tserpes
The promise of the compute continuum is to present applications with a flexible and transparent view of the resources in the Internet of Things–Edge–Cloud ecosystem. However, delivering on that promise requires tackling complex challenges to maximize the benefits of both the cloud and the edge: managing a highly distributed platform, matching services to resources, harnessing resource heterogeneity, and adapting service deployments to changes in resources and applications. In this study, we present SmartORC, a comprehensive set of components designed to provide a complete framework for managing resources and applications in the compute continuum. Along with a description of all the SmartORC subcomponents, we provide the results of an evaluation showcasing the framework's capabilities.
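One of the challenges the abstract names is matching services to resources across a heterogeneous edge–cloud platform. The abstract does not specify SmartORC's matching algorithm, so the sketch below shows only the general shape of the problem with a greedy first-fit placement over capacity constraints; the service and resource names are invented.

```python
def match_services(services, resources):
    """Greedy first-fit placement: largest service first, onto the first
    resource with enough remaining capacity. Returns service -> resource."""
    placement = {}
    free = dict(resources)  # resource name -> remaining capacity
    for name, demand in sorted(services.items(), key=lambda kv: -kv[1]):
        for res, cap in free.items():
            if cap >= demand:
                placement[name] = res
                free[res] = cap - demand
                break
    return placement

# Hypothetical continuum: a small edge node and a larger cloud node.
placement = match_services(
    {"db": 4, "api": 2, "cache": 1},
    {"edge-1": 3, "cloud-1": 8},
)
print(placement)  # {'db': 'cloud-1', 'api': 'edge-1', 'cache': 'edge-1'}
```

A real orchestrator would extend this with multi-dimensional demands (CPU, memory, latency), locality preferences, and re-matching when resources or applications change, which is exactly the adaptivity the abstract calls out.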
Citations: 0
Opportunities for enhancing MLCommons efforts while leveraging insights from educational MLCommons earthquake benchmarks efforts
Pub Date: 2023-10-23 | DOI: 10.3389/fhpcp.2023.1233877
Gregor von Laszewski, J. P. Fleischer, Robert Knuuti, Geoffrey C. Fox, Jake Kolessar, Thomas S. Butler, Judy Fox
MLCommons is an effort to develop and improve the artificial intelligence (AI) ecosystem through benchmarks, public data sets, and research. It consists of members from start-ups, leading companies, academia, and non-profits from around the world. Its goal is to make machine learning better for everyone. Educational institutions provide valuable opportunities for engagement and for broadening participation. In this article, we identify numerous insights, obtained from different viewpoints, gathered while utilizing high-performance computing (HPC) big data systems in existing education and while developing and conducting science benchmarks for earthquake prediction. As this activity spanned multiple educational efforts, we assess whether and how such efforts can be made available on a wider scale. This includes integrating sophisticated benchmarks into courses and research activities at universities, exposing students and researchers to topics that, as we witnessed across multiple organizations, are typically not sufficiently covered in current curricula. We outline the many lessons learned throughout these efforts, culminating in the need for benchmark carpentry for scientists using advanced computational resources. The article also presents the analysis of an earthquake prediction code benchmark that focuses on the accuracy of the results and not only on the runtime; notably, this benchmark was created as a result of our lessons learned. Energy traces were produced throughout these benchmarks, which are vital to analyzing power expenditure within HPC environments. Additionally, given the short duration of the project and limited student availability, the activity was only possible by utilizing a benchmark runtime pipeline while developing and using software to generate jobs automatically from permutations of hyperparameters. It integrates a templated job management framework for executing tasks and experiments based on hyperparameters while leveraging hybrid compute resources available at different institutions. The software is part of a collection called cloudmesh, with its newly developed components cloudmesh-ee (experiment executor) and cloudmesh-cc (compute coordinator).
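The core mechanism the abstract describes — generating benchmark jobs automatically from permutations of hyperparameters against a command template — can be sketched in a few lines of stdlib Python. This illustrates the idea behind a templated job generator like cloudmesh-ee; it is not cloudmesh's actual API, and the grid values and command are invented.

```python
from itertools import product
from string import Template

def expand_jobs(template, grid):
    """Produce one concrete job command per point in the Cartesian product
    of the hyperparameter grid. `template` uses $name placeholders."""
    keys = sorted(grid)  # fixed order so job lists are reproducible
    jobs = []
    for values in product(*(grid[k] for k in keys)):
        jobs.append(Template(template).substitute(dict(zip(keys, values))))
    return jobs

grid = {"lr": [0.01, 0.1], "epochs": [2, 4]}
jobs = expand_jobs("python train.py --lr $lr --epochs $epochs", grid)
for job in jobs:
    print(job)  # 2 x 2 = 4 job commands, one per grid point
```

A runtime pipeline would then submit each generated command to whatever scheduler a given institution provides (Slurm, a cloud batch service, or a local shell), which is where the hybrid-resource aspect of the abstract comes in.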
Citations: 2