A FAIR, open-source virtual reality platform for dendritic spine analysis
Pub Date: 2024-08-12 | DOI: 10.1016/j.patter.2024.101041
Neuroanatomy is fundamental to understanding the nervous system, particularly dendritic spines, which are vital for synaptic transmission and change in response to injury or disease. Advancements in imaging have allowed for detailed three-dimensional (3D) visualization of these structures. However, existing tools for analyzing dendritic spine morphology are limited. To address this, we developed an open-source virtual reality (VR) structural analysis software ecosystem (coined “VR-SASE”) that offers a powerful, intuitive approach for analyzing dendritic spines. Our validation process confirmed the method’s superior accuracy, outperforming recognized gold-standard neural reconstruction techniques. Importantly, the VR-SASE workflow automatically calculates key morphological metrics, such as dendritic spine length, volume, and surface area, and reliably replicates established datasets from published dendritic spine studies. By integrating the Neurodata Without Borders (NWB) data standard, VR-SASE datasets can be preserved/distributed through DANDI Archives, satisfying the NIH data sharing mandate.
{"title":"A FAIR, open-source virtual reality platform for dendritic spine analysis","authors":"","doi":"10.1016/j.patter.2024.101041","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101041","url":null,"abstract":"<p>Neuroanatomy is fundamental to understanding the nervous system, particularly dendritic spines, which are vital for synaptic transmission and change in response to injury or disease. Advancements in imaging have allowed for detailed three-dimensional (3D) visualization of these structures. However, existing tools for analyzing dendritic spine morphology are limited. To address this, we developed an open-source virtual reality (VR) structural analysis software ecosystem (coined “VR-SASE”) that offers a powerful, intuitive approach for analyzing dendritic spines. Our validation process confirmed the method’s superior accuracy, outperforming recognized gold-standard neural reconstruction techniques. Importantly, the VR-SASE workflow automatically calculates key morphological metrics, such as dendritic spine length, volume, and surface area, and reliably replicates established datasets from published dendritic spine studies. By integrating the Neurodata Without Borders (NWB) data standard, VR-SASE datasets can be preserved/distributed through DANDI Archives, satisfying the NIH data sharing mandate.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-objective latent space optimization of generative molecular design models
Pub Date: 2024-08-12 | DOI: 10.1016/j.patter.2024.101042
Molecular design based on generative models, such as variational autoencoders (VAEs), has become increasingly popular in recent years due to its efficiency for exploring high-dimensional molecular space to identify molecules with desired properties. While the efficacy of the initial model strongly depends on the training data, the sampling efficiency of the model for suggesting novel molecules with enhanced properties can be further enhanced via latent space optimization (LSO). In this paper, we propose a multi-objective LSO method that can significantly enhance the performance of generative molecular design (GMD). The proposed method adopts an iterative weighted retraining approach, where the respective weights of the molecules in the training data are determined by their Pareto efficiency. We demonstrate that our multi-objective GMD LSO method can significantly improve the performance of GMD for jointly optimizing multiple molecular properties.
{"title":"Multi-objective latent space optimization of generative molecular design models","authors":"","doi":"10.1016/j.patter.2024.101042","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101042","url":null,"abstract":"<p>Molecular design based on generative models, such as variational autoencoders (VAEs), has become increasingly popular in recent years due to its efficiency for exploring high-dimensional molecular space to identify molecules with desired properties. While the efficacy of the initial model strongly depends on the training data, the sampling efficiency of the model for suggesting novel molecules with enhanced properties can be further enhanced via latent space optimization (LSO). In this paper, we propose a multi-objective LSO method that can significantly enhance the performance of generative molecular design (GMD). The proposed method adopts an iterative weighted retraining approach, where the respective weights of the molecules in the training data are determined by their Pareto efficiency. We demonstrate that our multi-objective GMD LSO method can significantly improve the performance of GMD for jointly optimizing multiple molecular properties.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concepts and applications of digital twins in healthcare and medicine
Pub Date: 2024-08-09 | DOI: 10.1016/j.patter.2024.101028
The digital twin (DT) is a concept widely used in industry to create digital replicas of physical objects or systems. The dynamic, bi-directional link between the physical entity and its digital counterpart enables a real-time update of the digital entity. It can predict perturbations related to the physical object’s function. The obvious applications of DTs in healthcare and medicine are extremely attractive prospects that have the potential to revolutionize patient diagnosis and treatment. However, challenges including technical obstacles, biological heterogeneity, and ethical considerations make it difficult to achieve the desired goal. Advances in multi-modal deep learning methods, embodied AI agents, and the metaverse may mitigate some difficulties. Here, we discuss the basic concepts underlying DTs, the requirements for implementing DTs in medicine, and their current and potential healthcare uses. We also provide our perspective on five hallmarks for a healthcare DT system to advance research in this field.
{"title":"Concepts and applications of digital twins in healthcare and medicine","authors":"","doi":"10.1016/j.patter.2024.101028","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101028","url":null,"abstract":"<p>The digital twin (DT) is a concept widely used in industry to create digital replicas of physical objects or systems. The dynamic, bi-directional link between the physical entity and its digital counterpart enables a real-time update of the digital entity. It can predict perturbations related to the physical object’s function. The obvious applications of DTs in healthcare and medicine are extremely attractive prospects that have the potential to revolutionize patient diagnosis and treatment. However, challenges including technical obstacles, biological heterogeneity, and ethical considerations make it difficult to achieve the desired goal. Advances in multi-modal deep learning methods, embodied AI agents, and the metaverse may mitigate some difficulties. Here, we discuss the basic concepts underlying DTs, the requirements for implementing DTs in medicine, and their current and potential healthcare uses. We also provide our perspective on five hallmarks for a healthcare DT system to advance research in this field.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How deep can we decipher protein evolution with deep learning models
Pub Date: 2024-08-09 | DOI: 10.1016/j.patter.2024.101043
Evolutionary-based machine learning models have emerged as a fascinating approach to mapping the landscape for protein evolution. Lian et al. demonstrated that evolution-based deep generative models, specifically variational autoencoders, can organize SH3 homologs in a hierarchical latent space, effectively distinguishing the specific Sho1 SH3 domains.
{"title":"How deep can we decipher protein evolution with deep learning models","authors":"","doi":"10.1016/j.patter.2024.101043","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101043","url":null,"abstract":"<p>Evolutionary-based machine learning models have emerged as a fascinating approach to mapping the landscape for protein evolution. Lian et al. demonstrated that evolution-based deep generative models, specifically variational autoencoders, can organize SH3 homologs in a hierarchical latent space, effectively distinguishing the specific Sho1<sup>SH3</sup> domains.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Meet the authors: Zixin Jiang and Bing Dong
Pub Date: 2024-08-09 | DOI: 10.1016/j.patter.2024.101044
What can we do to mitigate climate change and achieve carbon neutrality for buildings? In their recent publication in Patterns, the authors proposed a modularized neural network incorporating physical priors for future building energy modeling, paving the way for scalable and reliable building energy modeling, optimization, retrofit designs, and buildings-to-grid integration. In this interview, the authors talk about incorporating fundamental heat transfer and thermodynamics knowledge into data-driven models.
{"title":"Meet the authors: Zixin Jiang and Bing Dong","authors":"","doi":"10.1016/j.patter.2024.101044","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101044","url":null,"abstract":"<p>What can we do to mitigate climate change and achieve carbon neutrality for buildings? In their recent publication in <em>Patterns</em>, the authors proposed a modularized neural network incorporating physical priors for future building energy modeling, paving the way for scalable and reliable building energy modeling, optimization, retrofit designs, and buildings-to-grid integration. In this interview, the authors talk about incorporating fundamental heat transfer and thermodynamics knowledge into data-driven models.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exposing image splicing traces in scientific publications via uncertainty-guided refinement
Pub Date: 2024-08-08 | DOI: 10.1016/j.patter.2024.101038
Recently, a surge in image manipulations in scientific publications has led to numerous retractions, highlighting the importance of image integrity. Although forensic detectors for image duplication and synthesis have been researched, the detection of image splicing in scientific publications remains largely unexplored. Splicing detection is more challenging than duplication detection due to the lack of reference images and more difficult than synthesis detection because of the presence of smaller tampered-with areas. Moreover, disruptive factors in scientific images, such as artifacts, abnormal patterns, and noise, present misleading features like splicing traces, rendering this task difficult. In addition, the scarcity of high-quality datasets of spliced scientific images has limited advancements. Therefore, we propose the uncertainty-guided refinement network (URN) to mitigate these disruptive factors. We also construct a dataset for image splicing detection (SciSp) with 1,290 spliced images by collecting and manually splicing. Comprehensive experiments demonstrate the URN’s superior splicing detection performance.
{"title":"Exposing image splicing traces in scientific publications via uncertainty-guided refinement","authors":"","doi":"10.1016/j.patter.2024.101038","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101038","url":null,"abstract":"<p>Recently, a surge in image manipulations in scientific publications has led to numerous retractions, highlighting the importance of image integrity. Although forensic detectors for image duplication and synthesis have been researched, the detection of image splicing in scientific publications remains largely unexplored. Splicing detection is more challenging than duplication detection due to the lack of reference images and more difficult than synthesis detection because of the presence of smaller tampered-with areas. Moreover, disruptive factors in scientific images, such as artifacts, abnormal patterns, and noise, present misleading features like splicing traces, rendering this task difficult. In addition, the scarcity of high-quality datasets of spliced scientific images has limited advancements. Therefore, we propose the uncertainty-guided refinement network (URN) to mitigate these disruptive factors. We also construct a dataset for image splicing detection (SciSp) with 1,290 spliced images by collecting and manually splicing. Comprehensive experiments demonstrate the URN’s superior splicing detection performance.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Highly accurate and precise determination of mouse mass using computer vision
Pub Date: 2024-08-07 | DOI: 10.1016/j.patter.2024.101039
Changes in body mass are key indicators of health in humans and animals and are routinely monitored in animal husbandry and preclinical studies. In rodent studies, the current method of manually weighing the animal on a balance causes at least two issues. First, directly handling the animal induces stress, possibly confounding studies. Second, these data are static, limiting continuous assessment and obscuring rapid changes. A non-invasive, continuous method of monitoring animal mass would have utility in multiple biomedical research areas. We combine computer vision with statistical modeling to demonstrate the feasibility of determining mouse body mass by using video data. Our methods determine mass with a 4.8% error across genetically diverse mouse strains with varied coat colors and masses. This error is low enough to replace manual weighing in most mouse studies. We conclude that visually determining rodent mass enables non-invasive, continuous monitoring, improving preclinical studies and animal welfare.
{"title":"Highly accurate and precise determination of mouse mass using computer vision","authors":"","doi":"10.1016/j.patter.2024.101039","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101039","url":null,"abstract":"<p>Changes in body mass are key indicators of health in humans and animals and are routinely monitored in animal husbandry and preclinical studies. In rodent studies, the current method of manually weighing the animal on a balance causes at least two issues. First, directly handling the animal induces stress, possibly confounding studies. Second, these data are static, limiting continuous assessment and obscuring rapid changes. A non-invasive, continuous method of monitoring animal mass would have utility in multiple biomedical research areas. We combine computer vision with statistical modeling to demonstrate the feasibility of determining mouse body mass by using video data. Our methods determine mass with a 4.8% error across genetically diverse mouse strains with varied coat colors and masses. This error is low enough to replace manual weighing in most mouse studies. We conclude that visually determining rodent mass enables non-invasive, continuous monitoring, improving preclinical studies and animal welfare.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141936996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A federated learning architecture for secure and private neuroimaging analysis
Pub Date: 2024-08-01 | DOI: 10.1016/j.patter.2024.101031
The amount of biomedical data continues to grow rapidly. However, collecting data from multiple sites for joint analysis remains challenging due to security, privacy, and regulatory concerns. To overcome this challenge, we use federated learning, which enables distributed training of neural network models over multiple data sources without sharing data. Each site trains the neural network over its private data for some time and then shares the neural network parameters (i.e., weights and/or gradients) with a federation controller, which in turn aggregates the local models and sends the resulting community model back to each site, and the process repeats. Our federated learning architecture, MetisFL, provides strong security and privacy. First, sample data never leave a site. Second, neural network parameters are encrypted before transmission and the global neural model is computed under fully homomorphic encryption. Finally, we use information-theoretic methods to limit information leakage from the neural model to prevent a “curious” site from performing model inversion or membership attacks. We present a thorough evaluation of the performance of secure, private federated learning in neuroimaging tasks, including for predicting Alzheimer’s disease and for brain age gap estimation (BrainAGE) from magnetic resonance imaging (MRI) studies in challenging, heterogeneous federated environments where sites have different amounts of data and statistical distributions.
{"title":"A federated learning architecture for secure and private neuroimaging analysis","authors":"","doi":"10.1016/j.patter.2024.101031","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101031","url":null,"abstract":"<p>The amount of biomedical data continues to grow rapidly. However, collecting data from multiple sites for joint analysis remains challenging due to security, privacy, and regulatory concerns. To overcome this challenge, we use federated learning, which enables distributed training of neural network models over multiple data sources without sharing data. Each site trains the neural network over its private data for some time and then shares the neural network parameters (i.e., weights and/or gradients) with a federation controller, which in turn aggregates the local models and sends the resulting community model back to each site, and the process repeats. Our federated learning architecture, MetisFL, provides strong security and privacy. First, sample data never leave a site. Second, neural network parameters are encrypted before transmission and the global neural model is computed under fully homomorphic encryption. Finally, we use information-theoretic methods to limit information leakage from the neural model to prevent a “curious” site from performing model inversion or membership attacks. We present a thorough evaluation of the performance of secure, private federated learning in neuroimaging tasks, including for predicting Alzheimer’s disease and for brain age gap estimation (BrainAGE) from magnetic resonance imaging (MRI) studies in challenging, heterogeneous federated environments where sites have different amounts of data and statistical distributions.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141865539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The reanimation of pseudoscience in machine learning and its ethical repercussions
Pub Date: 2024-08-01 | DOI: 10.1016/j.patter.2024.101027
The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.
{"title":"The reanimation of pseudoscience in machine learning and its ethical repercussions","authors":"","doi":"10.1016/j.patter.2024.101027","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101027","url":null,"abstract":"<p>The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141865439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the reversal curse and other deductive logical reasoning in BERT and GPT-based large language models
Pub Date: 2024-07-25 | DOI: 10.1016/j.patter.2024.101030
The “Reversal Curse” describes the inability of autoregressive decoder large language models (LLMs) to deduce “B is A” from “A is B,” assuming that B and A are distinct and can be uniquely identified from each other. This logical failure suggests limitations in using generative pretrained transformer (GPT) models for tasks like constructing knowledge graphs. Our study revealed that a bidirectional LLM, bidirectional encoder representations from transformers (BERT), does not suffer from this issue. To investigate further, we focused on more complex deductive reasoning by training encoder and decoder LLMs to perform union and intersection operations on sets. While both types of models managed tasks involving two sets, they struggled with operations involving three sets. Our findings underscore the differences between encoder and decoder models in handling logical reasoning. Thus, selecting BERT or GPT should depend on the task’s specific needs, utilizing BERT’s bidirectional context comprehension or GPT’s sequence prediction strengths.
逆转诅咒 "描述的是自回归解码器大型语言模型(LLM)无法从 "A 是 B "推导出 "B 是 A",前提是 B 和 A 是不同的,并且可以从彼此中唯一地识别出来。这种逻辑上的失败表明,在构建知识图谱等任务中使用生成式预训练转换器(GPT)模型存在局限性。我们的研究表明,双向 LLM--来自变换器的双向编码器表征(BERT)并不存在这个问题。为了进一步研究,我们将重点放在了更复杂的演绎推理上,训练编码器和解码器 LLM 对集合进行联合和相交运算。虽然这两类模型都能完成涉及两个集合的任务,但它们在涉及三个集合的运算中却举步维艰。我们的发现强调了编码器模型和解码器模型在处理逻辑推理方面的差异。因此,选择 BERT 还是 GPT 应取决于任务的具体需求,利用 BERT 的双向上下文理解能力或 GPT 的序列预测能力。
{"title":"Exploring the reversal curse and other deductive logical reasoning in BERT and GPT-based large language models","authors":"","doi":"10.1016/j.patter.2024.101030","DOIUrl":"https://doi.org/10.1016/j.patter.2024.101030","url":null,"abstract":"<p>The “Reversal Curse” describes the inability of autoregressive decoder large language models (LLMs) to deduce “B is A” from “A is B,” assuming that B and A are distinct and can be uniquely identified from each other. This logical failure suggests limitations in using generative pretrained transformer (GPT) models for tasks like constructing knowledge graphs. Our study revealed that a bidirectional LLM, bidirectional encoder representations from transformers (BERT), does not suffer from this issue. To investigate further, we focused on more complex deductive reasoning by training encoder and decoder LLMs to perform union and intersection operations on sets. While both types of models managed tasks involving two sets, they struggled with operations involving three sets. Our findings underscore the differences between encoder and decoder models in handling logical reasoning. Thus, selecting BERT or GPT should depend on the task’s specific needs, utilizing BERT’s bidirectional context comprehension or GPT’s sequence prediction strengths.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":null,"pages":null},"PeriodicalIF":6.5,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141778215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}