Quantum approximate multi-objective optimization
Pub Date: 2025-10-24 · DOI: 10.1038/s43588-025-00873-y
Ayse Kotil, Elijah Pelofske, Stephanie Riedmüller, Daniel J. Egger, Stephan Eidenbenz, Thorsten Koch, Stefan Woerner
Nature Computational Science 5(12), 1168–1177 · Open access PDF: https://www.nature.com/articles/s43588-025-00873-y.pdf
The goal of multi-objective optimization is to understand the optimal trade-offs between competing objective functions by finding the Pareto front, that is, the set of all Pareto-optimal solutions, in which no objective can be improved without degrading another. Multi-objective optimization can be challenging classically even when the corresponding single-objective optimization problems are efficiently solvable, which makes it a compelling problem class to analyze with quantum computers. Here we use a low-depth quantum approximate optimization algorithm to approximate the optimal Pareto front of certain multi-objective weighted maximum-cut problems. We demonstrate its performance on an IBM Quantum computer, as well as with matrix product state numerical simulation, and show its potential to outperform classical approaches.
By using a low-depth quantum approximate optimization algorithm to approximate the optimal Pareto front of multi-objective weighted max-cut problems, the authors demonstrate promising results, both in simulation and on IBM Quantum hardware, with the potential to surpass classical approaches.
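The Pareto-front definition above is easy to make concrete. The following sketch (illustrative only, not the paper's quantum algorithm) brute-forces the Pareto front of a tiny two-objective weighted max-cut instance; the 4-node graph and its edge weights are invented for the example:

```python
from itertools import product

# Toy two-objective weighted max-cut: each edge carries one weight per
# objective (hypothetical values); a cut is a bipartition of the nodes.
edges = {
    (0, 1): (3, 1),
    (1, 2): (1, 4),
    (2, 3): (2, 2),
    (0, 3): (4, 1),
    (0, 2): (1, 3),
}

def cut_values(assignment):
    """Sum each objective's weight over the edges crossing the cut."""
    v1 = v2 = 0
    for (u, v), (w1, w2) in edges.items():
        if assignment[u] != assignment[v]:
            v1 += w1
            v2 += w2
    return (v1, v2)

def dominates(a, b):
    """a Pareto-dominates b if a is >= in both objectives and differs."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

# Enumerate all 2^4 bipartitions; keep the non-dominated objective vectors.
points = {cut_values(bits) for bits in product((0, 1), repeat=4)}
pareto = sorted(p for p in points if not any(dominates(q, p) for q in points))
print(pareto)  # → [(4, 9), (10, 8)]
```

Neither front point dominates the other: one cut is better on the second objective, the other on the first, which is exactly the trade-off structure the paper approximates on instances far too large for enumeration.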
Discovering network dynamics with neural symbolic regression
Pub Date: 2025-10-23 · DOI: 10.1038/s43588-025-00893-8
Zihan Yu, Jingtao Ding, Yong Li
Nature Computational Science, published online ahead of print
Network dynamics are fundamental to analyzing the properties of high-dimensional complex systems and understanding their behavior. Despite the accumulation of observational data across many domains, mathematical models exist in only a few areas with clear underlying principles. Here we show that a neural symbolic regression approach can bridge this gap by automatically deriving formulas from data. Our method reduces searches on high-dimensional networks to equivalent one-dimensional systems and uses pretrained neural networks to guide accurate formula discovery. Applied to ten benchmark systems, it recovers the correct forms and parameters of the underlying dynamics. In two empirical natural systems, it corrects existing models of gene regulation and microbial communities, reducing prediction error by 59.98% and 55.94%, respectively. In epidemic transmission across human mobility networks of various scales, it discovers dynamics that exhibit the same power-law distribution of node correlations across scales and reveal country-level differences in intervention effects. These results demonstrate that machine-driven discovery of network dynamics can enhance our understanding of complex systems and advance the development of complexity science.
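To make "automatically deriving formulas from data" concrete, here is a minimal, hypothetical sketch in the spirit of library-based symbolic regression (not the authors' neural method): it recovers a one-dimensional dynamics law by least-squares fitting over a small library of candidate terms and keeping the nonzero ones.

```python
import numpy as np

# Toy sketch: recover dx/dt = f(x) from samples by fitting coefficients
# over a candidate-term library. True (hidden) law: f(x) = -x + 0.5*tanh(x).
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
dxdt = -x + 0.5 * np.tanh(x)          # noiseless "observations"

library = {"x": x, "x^2": x**2, "tanh(x)": np.tanh(x)}
A = np.column_stack(list(library.values()))
coef, *_ = np.linalg.lstsq(A, dxdt, rcond=None)

# Keep only terms with non-negligible coefficients.
formula = " + ".join(f"{c:.2f}*{name}"
                     for name, c in zip(library, coef) if abs(c) > 1e-6)
print("dx/dt =", formula)             # → dx/dt = -1.00*x + 0.50*tanh(x)
```

The spurious `x^2` term drops out with an essentially zero coefficient; the paper's contribution is doing this reliably when the search space of formulas is large and the system is a high-dimensional network rather than a single variable.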
Transferable neural wavefunctions for solids
Pub Date: 2025-10-22 · DOI: 10.1038/s43588-025-00872-z
L. Gerard, M. Scherbela, H. Sutterud, W. M. C. Foulkes, P. Grohs
Nature Computational Science 5(12), 1147–1157 · Open access PDF: https://www.nature.com/articles/s43588-025-00872-z.pdf
Deep-learning-based variational Monte Carlo has emerged as a highly accurate method for solving the many-electron Schrödinger equation. Despite favorable scaling with the number of electrons, $\mathcal{O}(n_{\mathrm{el}}^{4})$, its practical value is limited by the high cost of optimizing the neural network weights for every system studied. Recent research has proposed optimizing a single neural network across multiple systems, reducing the cost per system. Here we extend this approach to solids, which require numerous calculations across different geometries, boundary conditions and supercell sizes. We demonstrate that optimizing a single ansatz across these variations significantly reduces the number of optimization steps. Furthermore, we successfully transfer a network trained on 2 × 2 × 2 supercells of LiH to 3 × 3 × 3 supercells, reducing the number of optimization steps required to simulate the large system by a factor of 50 compared with previous work.
Investigating crystalline materials often requires calculations for many variations of a system, substantially increasing the computational burden. By training a transferable neural wavefunction across these variations, the cost can be reduced by approximately 50-fold for systems such as graphene and lithium hydride.
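The quartic scaling quoted above is what makes transfer attractive: electron count grows with supercell volume, so even a modest increase in supercell size inflates the per-step cost sharply. A back-of-envelope calculation (ours, not from the paper) for the 2 × 2 × 2 to 3 × 3 × 3 step:

```python
# Illustration of the O(n_el^4) scaling quoted above: moving from a 2x2x2
# to a 3x3x3 supercell multiplies n_el by 27/8, so the naive per-step cost
# grows by roughly (27/8)^4.
small, large = 2**3, 3**3          # unit cells per supercell
cost_ratio = (large / small) ** 4
print(round(cost_ratio, 1))        # → 129.7
```

With each step over a hundred times more expensive, cutting the number of optimization steps by a factor of 50, as reported, matters far more for the large supercell than for the small one.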
Down to one network for computing crystalline materials
Pub Date: 2025-10-22 · DOI: 10.1038/s43588-025-00877-8
Yubing Qian, Ji Chen
Nature Computational Science 5(12), 1098–1099
A recent study proposes using a single neural network to model and compute a wide range of solid-state materials, demonstrating exceptional transferability and substantially reduced computational costs, a breakthrough that could accelerate the design of next-generation materials in applications from efficient solar cells to room-temperature superconductors.
Interpolating perturbations across contexts
Pub Date: 2025-10-15 · DOI: 10.1038/s43588-025-00830-9
Han Chen, Christina V. Theodoris
Nature Computational Science 5(11), 992–993
The Large Perturbation Model (LPM) is a deep learning framework that predicts gene expression responses to chemical and genetic perturbations across diverse contexts. By modeling perturbation, readout, and context jointly, LPM enables in silico hypothesis generation and drug repurposing.
In silico biological discovery with large perturbation models
Pub Date: 2025-10-15 · DOI: 10.1038/s43588-025-00870-1
Djordje Miladinovic, Tobias Höppe, Mathieu Chevalley, Andreas Georgiou, Lachlan Stuart, Arash Mehrjou, Marcus Bantscheff, Bernhard Schölkopf, Patrick Schwab
Nature Computational Science 5(11), 1029–1040 · Open access PDF: https://www.nature.com/articles/s43588-025-00870-1.pdf
Data generated in perturbation experiments link perturbations to the changes they elicit and therefore contain information relevant to numerous biological discovery tasks, from understanding the relationships between biological entities to developing therapeutics. However, these data encompass diverse perturbations and readouts, and the complex dependence of experimental outcomes on their biological context makes it challenging to integrate insights across experiments. Here we present the large perturbation model (LPM), a deep-learning model that integrates multiple, heterogeneous perturbation experiments by representing perturbation, readout and context as disentangled dimensions. LPM outperforms existing methods across multiple biological discovery tasks, including predicting post-perturbation transcriptomes of unseen experiments, identifying shared molecular mechanisms of action between chemical and genetic perturbations, and facilitating the inference of gene–gene interaction networks. LPM learns meaningful joint representations of perturbations, readouts and contexts, enables the study of biological relationships in silico and could considerably accelerate the derivation of insights from pooled perturbation experiments.
A large perturbation model that integrates diverse laboratory experiments is presented to predict biological responses to chemical or genetic perturbations and support various biological discovery tasks.
ECloudGen: leveraging electron clouds as a latent variable to scale up structure-based molecular design
Pub Date: 2025-10-15 · DOI: 10.1038/s43588-025-00886-7
Odin Zhang, Jieyu Jin, Zhenxing Wu, Jintu Zhang, Po Yuan, Yuntao Yu, Haitao Lin, Haiyang Zhong, Xujun Zhang, Chenqing Hua, Weibo Zhao, Zhengshuo Zhang, Kejun Ying, Yufei Huang, Huifeng Zhao, Yu Kang, Peichen Pan, Jike Wang, Dong Guo, Shuangjia Zheng, Chang-Yu Hsieh, Tingjun Hou
Nature Computational Science 5(11), 1017–1028
Structure-based molecule generation represents a notable advancement in artificial intelligence-driven drug design. However, progress in this field is constrained by the scarcity of structural data on protein–ligand complexes. Here we propose a latent variable approach that bridges the gap between ligand-only data and protein–ligand complexes, enabling target-aware generative models to explore a broader chemical space and thereby enhancing the quality of molecular generation. Inspired by quantum molecular simulations, we introduce ECloudGen, a generative model that leverages electron clouds as meaningful latent variables. ECloudGen incorporates techniques such as latent diffusion models, Llama architectures and a contrastive learning task, which organizes the chemical space into a structured and highly interpretable latent representation. Benchmark studies demonstrate that ECloudGen outperforms state-of-the-art methods by generating more potent binders with superior physicochemical properties and by covering a broader chemical space. The incorporation of electron clouds as latent variables not only improves generative performance but also introduces model-level interpretability, as illustrated in our case studies.
This study presents ECloudGen, which uses latent diffusion to generate electron clouds from protein pockets and decodes them into molecules. The adopted two-stage training expands the chemical space accessible to generative drug design.
How neural rhythms can guide word recognition
Pub Date: 2025-10-10 · DOI: 10.1038/s43588-025-00888-5
Sophie Slaats
Nature Computational Science 5(10), 848–849
The recent computational model 'BRyBI' proposes that gamma, theta, and delta neural oscillations can guide the process of word recognition by providing temporal windows for the integration of bottom-up input with top-down information.
Computational and ethical considerations for using large language models in psychotherapy
Pub Date: 2025-10-10 · DOI: 10.1038/s43588-025-00874-x
Renwen Zhang, Han Meng, Marion Neubronner, Yi-Chieh Lee
Nature Computational Science 5(10), 854–862
Large language models (LLMs) hold great potential for augmenting psychotherapy by enhancing accessibility, personalization and engagement. However, a systematic understanding of the roles that LLMs can play in psychotherapy remains underexplored. In this Perspective, we propose a taxonomy of LLM roles in psychotherapy that delineates six specific roles of LLMs across two key dimensions: artificial intelligence autonomy and emotional engagement. We discuss key computational and ethical challenges, such as emotion recognition, memory retention, privacy and emotional dependency, and offer recommendations to address these challenges.
Large language models (LLMs) offer promising ways to enhance psychotherapy through greater accessibility, personalization and engagement. This Perspective introduces a typology that categorizes the roles of LLMs in psychotherapy along two critical dimensions: autonomy and emotional engagement.
Developing mental health AI tools that improve care across different groups and contexts
Pub Date: 2025-10-10 · DOI: 10.1038/s43588-025-00882-x
Nicole Martinez-Martin
Nature Computational Science 5(10), 839–840
To realize the potential of mental health AI applications to deliver improved care, a multipronged approach is needed, including representative AI datasets, research practices that reflect and anticipate potential sources of bias, stakeholder engagement, and equitable design practices.