
Latest publications in Nature computational science

Author Correction: Approaching coupled-cluster accuracy for molecular electronic structures with multi-task learning.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-22 DOI: 10.1038/s43588-025-00767-z
Hao Tang, Brian Xiao, Wenhao He, Pero Subasic, Avetik R Harutyunyan, Yao Wang, Fang Liu, Haowei Xu, Ju Li
{"title":"Author Correction: Approaching coupled-cluster accuracy for molecular electronic structures with multi-task learning.","authors":"Hao Tang, Brian Xiao, Wenhao He, Pero Subasic, Avetik R Harutyunyan, Yao Wang, Fang Liu, Haowei Xu, Ju Li","doi":"10.1038/s43588-025-00767-z","DOIUrl":"10.1038/s43588-025-00767-z","url":null,"abstract":"","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143026218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Boosting AI with neuromorphic computing.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-21 DOI: 10.1038/s43588-025-00770-4
{"title":"Boosting AI with neuromorphic computing.","authors":"","doi":"10.1038/s43588-025-00770-4","DOIUrl":"https://doi.org/10.1038/s43588-025-00770-4","url":null,"abstract":"","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143017818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Memristors enabling probabilistic AI at the edge.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-17 DOI: 10.1038/s43588-024-00761-x
Damien Querlioz
{"title":"Memristors enabling probabilistic AI at the edge.","authors":"Damien Querlioz","doi":"10.1038/s43588-024-00761-x","DOIUrl":"https://doi.org/10.1038/s43588-024-00761-x","url":null,"abstract":"","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143017826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient large language model with analog in-memory computing.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-17 DOI: 10.1038/s43588-024-00760-y
Anand Subramoney
{"title":"Efficient large language model with analog in-memory computing.","authors":"Anand Subramoney","doi":"10.1038/s43588-024-00760-y","DOIUrl":"https://doi.org/10.1038/s43588-024-00760-y","url":null,"abstract":"","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143017821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Energy-efficient multimodal zero-shot learning using in-memory reservoir computing.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-13 DOI: 10.1038/s43588-024-00762-w
{"title":"Energy-efficient multimodal zero-shot learning using in-memory reservoir computing.","authors":"","doi":"10.1038/s43588-024-00762-w","DOIUrl":"https://doi.org/10.1038/s43588-024-00762-w","url":null,"abstract":"","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bridging generations and cultures in mathematics and computer science.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-09 DOI: 10.1038/s43588-024-00756-8
Alyssa April Dellow, Fatimah Abdul Razak
{"title":"Bridging generations and cultures in mathematics and computer science.","authors":"Alyssa April Dellow, Fatimah Abdul Razak","doi":"10.1038/s43588-024-00756-8","DOIUrl":"https://doi.org/10.1038/s43588-024-00756-8","url":null,"abstract":"","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A new tool for shape and structure optimization of soft materials.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-09 DOI: 10.1038/s43588-024-00754-w
{"title":"A new tool for shape and structure optimization of soft materials.","authors":"","doi":"10.1038/s43588-024-00754-w","DOIUrl":"https://doi.org/10.1038/s43588-024-00754-w","url":null,"abstract":"","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Resistive memory-based zero-shot liquid state machine for multimodal event data learning.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-09 DOI: 10.1038/s43588-024-00751-z
Ning Lin, Shaocong Wang, Yi Li, Bo Wang, Shuhui Shi, Yangu He, Woyu Zhang, Yifei Yu, Yue Zhang, Xinyuan Zhang, Kwunhang Wong, Songqi Wang, Xiaoming Chen, Hao Jiang, Xumeng Zhang, Peng Lin, Xiaoxin Xu, Xiaojuan Qi, Zhongrui Wang, Dashan Shang, Qi Liu, Ming Liu

The human brain is a complex spiking neural network (SNN) capable of learning multimodal signals in a zero-shot manner by generalizing existing knowledge. Remarkably, it maintains minimal power consumption through event-based signal propagation. However, replicating the human brain in neuromorphic hardware presents both hardware and software challenges. Hardware limitations, such as the slowdown of Moore's law and the von Neumann bottleneck, hinder the efficiency of digital computers. In addition, SNNs are notoriously complex to train in software. To this end, we propose a hardware-software co-design on a 40 nm 256 kB in-memory computing macro that physically integrates a fixed and random liquid state machine SNN encoder with trainable artificial neural network projections. We showcase the zero-shot learning of multimodal events on the N-MNIST and N-TIDIGITS datasets, including visual and audio data association, as well as neural and visual data alignment for brain-machine interfaces. Our co-design achieves classification accuracy comparable to fully optimized software models, resulting in a 152.83- and 393.07-fold reduction in training costs compared with state-of-the-art spiking recurrent neural network-based contrastive learning and prototypical networks, and a 23.34- and 160-fold improvement in energy efficiency compared with cutting-edge digital hardware, respectively. These proof-of-principle prototypes demonstrate zero-shot multimodal event learning capability for emerging efficient and compact neuromorphic hardware.
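The core idea of the abstract above can be illustrated with a minimal NumPy sketch: a fixed, random liquid state machine (LSM) spiking reservoir whose weights are never trained, followed by a trainable linear projection into a shared embedding space used for nearest-anchor, zero-shot-style classification. The LIF-style dynamics, the ridge-regression readout, all sizes and the random toy data are illustrative assumptions, not the paper's 40 nm in-memory-computing implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class LiquidStateMachine:
    """Fixed, random recurrent spiking reservoir; its weights are never trained."""

    def __init__(self, n_in, n_res=200, tau=0.9, threshold=1.0):
        self.w_in = rng.normal(0.0, 0.5, (n_res, n_in))    # fixed input weights
        self.w_res = rng.normal(0.0, 0.1, (n_res, n_res))  # fixed recurrent weights
        self.tau, self.threshold = tau, threshold

    def encode(self, spikes):
        """spikes: (T, n_in) binary event frames -> mean firing rate per neuron (n_res,)."""
        v = np.zeros(self.w_res.shape[0])                  # membrane potentials
        out = np.zeros_like(v)                             # spikes from previous step
        rates = []
        for x_t in spikes:
            v = self.tau * v + self.w_in @ x_t + self.w_res @ out
            out = (v > self.threshold).astype(float)       # fire
            v = np.where(out > 0, 0.0, v)                  # reset fired neurons
            rates.append(out)
        return np.mean(rates, axis=0)

def train_projection(states, targets, ridge=1e-3):
    """Trainable linear projection from reservoir states to a shared embedding
    space; closed-form ridge regression stands in for ANN training here."""
    d = states.shape[1]
    return np.linalg.solve(states.T @ states + ridge * np.eye(d), states.T @ targets)

# Toy usage: encode a 'visual' event stream, train only the projection so that
# reservoir states land near their class anchors, then classify a new sample by
# nearest anchor. A second reservoir plus projection for the audio modality
# would be trained into the same anchor space to align the two modalities.
T, n_pix, n_emb, n_cls, n_train = 50, 64, 16, 10, 100
vis_lsm = LiquidStateMachine(n_pix)
class_emb = rng.normal(size=(n_cls, n_emb))                # shared class anchors

vis_states = np.stack([vis_lsm.encode(rng.random((T, n_pix)) < 0.1)
                       for _ in range(n_train)])
labels = rng.integers(0, n_cls, size=n_train)
W_vis = train_projection(vis_states, class_emb[labels])    # (n_res, n_emb)

test_state = vis_lsm.encode(rng.random((T, n_pix)) < 0.1)
pred = np.argmax(class_emb @ (test_state @ W_vis))
print("predicted class:", pred)
```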

{"title":"Resistive memory-based zero-shot liquid state machine for multimodal event data learning.","authors":"Ning Lin, Shaocong Wang, Yi Li, Bo Wang, Shuhui Shi, Yangu He, Woyu Zhang, Yifei Yu, Yue Zhang, Xinyuan Zhang, Kwunhang Wong, Songqi Wang, Xiaoming Chen, Hao Jiang, Xumeng Zhang, Peng Lin, Xiaoxin Xu, Xiaojuan Qi, Zhongrui Wang, Dashan Shang, Qi Liu, Ming Liu","doi":"10.1038/s43588-024-00751-z","DOIUrl":"10.1038/s43588-024-00751-z","url":null,"abstract":"<p><p>The human brain is a complex spiking neural network (SNN) capable of learning multimodal signals in a zero-shot manner by generalizing existing knowledge. Remarkably, it maintains minimal power consumption through event-based signal propagation. However, replicating the human brain in neuromorphic hardware presents both hardware and software challenges. Hardware limitations, such as the slowdown of Moore's law and Von Neumann bottleneck, hinder the efficiency of digital computers. In addition, SNNs are characterized by their software training complexities. Here, to this end, we propose a hardware-software co-design on a 40 nm 256 kB in-memory computing macro that physically integrates a fixed and random liquid state machine SNN encoder with trainable artificial neural network projections. We showcase the zero-shot learning of multimodal events on the N-MNIST and N-TIDIGITS datasets, including visual and audio data association, as well as neural and visual data alignment for brain-machine interfaces. Our co-design achieves classification accuracy comparable to fully optimized software models, resulting in a 152.83- and 393.07-fold reduction in training costs compared with state-of-the-art spiking recurrent neural network-based contrastive learning and prototypical networks, and a 23.34- and 160-fold improvement in energy efficiency compared with cutting-edge digital hardware, respectively. These proof-of-principle prototypes demonstrate zero-shot multimodal events learning capability for emerging efficient and compact neuromorphic hardware.</p>","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient scaling of large language models with mixture of experts and 3D analog in-memory computing.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-08 DOI: 10.1038/s43588-024-00753-x
Julian Büchel, Athanasios Vasilopoulos, William Andrew Simon, Irem Boybat, HsinYu Tsai, Geoffrey W Burr, Hernan Castro, Bill Filipiak, Manuel Le Gallo, Abbas Rahimi, Vijay Narayanan, Abu Sebastian

Large language models (LLMs), with their remarkable generative capacities, have greatly impacted a range of fields, but they face scalability challenges due to their large parameter counts, which result in high costs for training and inference. The trend of increasing model sizes is exacerbating these challenges, particularly in terms of memory footprint, latency and energy consumption. Here we explore the deployment of 'mixture of experts' (MoE) networks, which use conditional computing to keep computational demands low despite having many parameters, on three-dimensional (3D) non-volatile memory (NVM)-based analog in-memory computing (AIMC) hardware. When combined with the MoE architecture, this hardware, utilizing stacked NVM devices arranged in a crossbar array, offers a solution to the parameter-fetching bottleneck typical of traditional models deployed on conventional von Neumann-based architectures. By simulating the deployment of MoEs on an abstract 3D AIMC system, we demonstrate that, due to their conditional compute mechanism, MoEs are inherently better suited to this hardware than conventional, dense model architectures. Our findings suggest that MoEs, in conjunction with emerging 3D NVM-based AIMC, can substantially reduce the inference costs of state-of-the-art LLMs, making them more accessible and energy-efficient.
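The conditional-compute argument can be made concrete with a short NumPy sketch of a top-k gated MoE layer: all expert weights stay resident (as they would on stationary NVM crossbar tiles), but only the routed experts are evaluated per token. The routing scheme, layer sizes and the printed operation count are illustrative assumptions, not the simulated 3D AIMC system from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    """Mixture-of-experts feed-forward layer with top-k conditional routing."""

    def __init__(self, d_model=64, d_hidden=128, n_experts=8, top_k=2):
        self.router = rng.normal(0, 0.02, (d_model, n_experts))
        # Each expert is a small two-layer feed-forward block; in an AIMC
        # deployment these weight stacks would sit in 3D NVM tiles.
        self.w1 = rng.normal(0, 0.02, (n_experts, d_model, d_hidden))
        self.w2 = rng.normal(0, 0.02, (n_experts, d_hidden, d_model))
        self.top_k = top_k

    def __call__(self, x):
        """x: (n_tokens, d_model). Only the top_k routed experts run per token."""
        gate = softmax(x @ self.router)                    # (n_tokens, n_experts)
        top = np.argsort(-gate, axis=-1)[:, : self.top_k]  # chosen expert ids
        y = np.zeros_like(x)
        experts_touched = 0
        for t, token in enumerate(x):
            for e in top[t]:
                h = np.maximum(token @ self.w1[e], 0.0)    # ReLU hidden layer
                y[t] += gate[t, e] * (h @ self.w2[e])      # gate-weighted output
                experts_touched += 1
        # A dense layer of the same total parameter count would touch every
        # expert block for every token.
        print(f"expert blocks evaluated: {experts_touched} "
              f"(dense equivalent: {x.shape[0] * self.w1.shape[0]})")
        return y

tokens = rng.normal(size=(4, 64))
out = MoELayer()(tokens)
print(out.shape)
```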

{"title":"Efficient scaling of large language models with mixture of experts and 3D analog in-memory computing.","authors":"Julian Büchel, Athanasios Vasilopoulos, William Andrew Simon, Irem Boybat, HsinYu Tsai, Geoffrey W Burr, Hernan Castro, Bill Filipiak, Manuel Le Gallo, Abbas Rahimi, Vijay Narayanan, Abu Sebastian","doi":"10.1038/s43588-024-00753-x","DOIUrl":"10.1038/s43588-024-00753-x","url":null,"abstract":"<p><p>Large language models (LLMs), with their remarkable generative capacities, have greatly impacted a range of fields, but they face scalability challenges due to their large parameter counts, which result in high costs for training and inference. The trend of increasing model sizes is exacerbating these challenges, particularly in terms of memory footprint, latency and energy consumption. Here we explore the deployment of 'mixture of experts' (MoEs) networks-networks that use conditional computing to keep computational demands low despite having many parameters-on three-dimensional (3D) non-volatile memory (NVM)-based analog in-memory computing (AIMC) hardware. When combined with the MoE architecture, this hardware, utilizing stacked NVM devices arranged in a crossbar array, offers a solution to the parameter-fetching bottleneck typical in traditional models deployed on conventional von-Neumann-based architectures. By simulating the deployment of MoEs on an abstract 3D AIMC system, we demonstrate that, due to their conditional compute mechanism, MoEs are inherently better suited to this hardware than conventional, dense model architectures. Our findings suggest that MoEs, in conjunction with emerging 3D NVM-based AIMC, can substantially reduce the inference costs of state-of-the-art LLMs, making them more accessible and energy-efficient.</p>","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142959948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decoupled peak property learning for efficient and interpretable electronic circular dichroism spectrum prediction.
IF 12 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2025-01-03 DOI: 10.1038/s43588-024-00757-7
Hao Li, Da Long, Li Yuan, Yu Wang, Yonghong Tian, Xinchang Wang, Fanyang Mo

Electronic circular dichroism (ECD) spectra contain key information about molecular chirality, discriminating the absolute configurations of chiral molecules, which is crucial in asymmetric organic synthesis and the drug industry. However, existing predictive approaches rarely consider ECD spectra, owing to data scarcity and the limited interpretability required for trustworthy prediction. Here we establish a large-scale dataset of chiral molecular ECD spectra and propose ECDFormer for accurate and interpretable ECD spectrum prediction. ECDFormer decomposes ECD spectra into peak entities, uses the QFormer architecture to learn peak properties and renders the peaks back into spectra. Compared with spectrum sequence prediction methods, our decoupled peak prediction approach substantially enhances both accuracy and efficiency, improving the peak symbol accuracy from 37.3% to 72.7% and decreasing the time cost from an average of 4.6 CPU hours to 1.5 s. Moreover, ECDFormer demonstrated its ability to capture molecular orbital information directly from spectral data using the explainable peak-decoupling approach. Furthermore, ECDFormer proved equally proficient at predicting other types of spectra, including infrared and mass spectra, highlighting its substantial generalization capability.
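The decoupled-peak idea can be sketched in a few lines of NumPy: a spectrum is represented as a handful of peak entities (position, signed amplitude, width), rendered back into a curve, and evaluated with a peak-level metric such as sign accuracy. The Gaussian band shape, the metric and all numbers are illustrative assumptions, not ECDFormer's actual parameterization or evaluation protocol.

```python
import numpy as np

def render_ecd_spectrum(peaks, wavelengths):
    """peaks: iterable of (center_nm, signed_amplitude, width_nm).
    Returns the sum of Gaussian bands evaluated on `wavelengths` (nm)."""
    spectrum = np.zeros_like(wavelengths, dtype=float)
    for center, amplitude, width in peaks:
        spectrum += amplitude * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
    return spectrum

def peak_sign_accuracy(pred_peaks, true_peaks):
    """The kind of peak-level metric a decoupled approach makes natural:
    fraction of matched peaks whose Cotton-effect sign (+/-) agrees."""
    signs_pred = [np.sign(a) for _, a, _ in pred_peaks]
    signs_true = [np.sign(a) for _, a, _ in true_peaks]
    n = min(len(signs_pred), len(signs_true))
    return sum(p == t for p, t in zip(signs_pred[:n], signs_true[:n])) / max(n, 1)

# Toy usage: two enantiomer-like peak sets differ only by the sign of each peak,
# so the rendered spectra are mirror images and the sign accuracy is zero.
wl = np.linspace(180, 400, 441)
peaks_R = [(210.0, +8.0, 6.0), (260.0, -3.5, 10.0), (320.0, +1.2, 15.0)]
peaks_S = [(c, -a, w) for c, a, w in peaks_R]

spec_R = render_ecd_spectrum(peaks_R, wl)
spec_S = render_ecd_spectrum(peaks_S, wl)
print("mirror-image check:", np.allclose(spec_R, -spec_S))
print("sign accuracy R vs S:", peak_sign_accuracy(peaks_R, peaks_S))
```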

{"title":"Decoupled peak property learning for efficient and interpretable electronic circular dichroism spectrum prediction.","authors":"Hao Li, Da Long, Li Yuan, Yu Wang, Yonghong Tian, Xinchang Wang, Fanyang Mo","doi":"10.1038/s43588-024-00757-7","DOIUrl":"https://doi.org/10.1038/s43588-024-00757-7","url":null,"abstract":"<p><p>Electronic circular dichroism (ECD) spectra contain key information about molecular chirality by discriminating the absolute configurations of chiral molecules, which is crucial in asymmetric organic synthesis and the drug industry. However, existing predictive approaches lack the consideration of ECD spectra owing to the data scarcity and the limited interpretability to achieve trustworthy prediction. Here we establish a large-scale dataset for chiral molecular ECD spectra and propose ECDFormer for accurate and interpretable ECD spectrum prediction. ECDFormer decomposes ECD spectra into peak entities, uses the QFormer architecture to learn peak properties and renders peaks into spectra. Compared with spectrum sequence prediction methods, our decoupled peak prediction approach substantially enhances both accuracy and efficiency, improving the peak symbol accuracy from 37.3% to 72.7% and decreasing the time cost from an average of 4.6 central processing unit hours to 1.5 s. Moreover, ECDFormer demonstrated its ability to capture molecular orbital information directly from spectral data using the explainable peak-decoupling approach. Furthermore, ECDFormer proved to be equally proficient at predicting various types of spectrum, including infrared and mass spectroscopies, highlighting its substantial generalization capabilities.</p>","PeriodicalId":74246,"journal":{"name":"Nature computational science","volume":" ","pages":""},"PeriodicalIF":12.0,"publicationDate":"2025-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142928880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0