
bioRxiv - Bioinformatics: Latest Publications

Cloud-enabled Scalable Analysis of Large Proteomics Cohorts
Pub Date : 2024-09-10 DOI: 10.1101/2024.09.05.611509
Harendra Guturu, Andrew Nichols, Lee S. Cantrell, Seth Just, János Kis, Theodore Platt, Iman Mohtashemi, Jian Wang, Serafim Batzoglou
Rapid advances in the depth and throughput of untargeted mass-spectrometry-based proteomic technologies are enabling large-scale cohort proteomic and proteogenomic analyses. As such studies scale, the data infrastructure and search engines required to process the data must also scale. This challenge is amplified in search engines that rely on library-free match-between-runs (MBR) search, which enables enhanced depth per sample and data completeness. However, to date, no MBR-based search has been able to scale to cohorts of thousands or more individuals. Here, we present a strategy to deploy search engines in a distributed cloud environment without source code modification, thereby enhancing resource scalability and throughput. Additionally, we present an algorithm, Scalable MBR, that replicates the MBR procedure of the popular DIA-NN software and scales to thousands of samples. We demonstrate that Scalable MBR can search thousands of MS raw files in a few hours, compared with the days required by the original DIA-NN MBR procedure, and that the results are almost indistinguishable from those of DIA-NN native MBR. The method has been tested to scale to over 15,000 injections and is available for use in the Proteograph(TM) Analysis Suite.
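As a rough illustration of the orchestration idea described in the abstract (scattering raw-file batches to parallel workers that each run an unmodified, containerized search engine), the following is a minimal sketch. The `run_search` wrapper, the batching scheme, and the report layout are assumptions for the example, not the Proteograph Analysis Suite implementation.

```python
# Minimal sketch: fan raw-file batches out to parallel workers, each invoking an
# unmodified search engine (e.g. via a container or cloud batch job). Illustrative only.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path


def run_search(batch: list[Path], spectral_library: Path, out_dir: Path) -> Path:
    """Hypothetical wrapper that submits one containerized search job for a batch
    of raw files and returns the path of the downloaded result report."""
    report = out_dir / f"report_{batch[0].stem}.tsv"
    # ... submit the job, wait for completion, fetch the report ...
    return report


def scatter_search(raw_files: list[Path], spectral_library: Path,
                   out_dir: Path, batch_size: int = 50) -> list[Path]:
    batches = [raw_files[i:i + batch_size] for i in range(0, len(raw_files), batch_size)]
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(run_search, b, spectral_library, out_dir) for b in batches]
        return [f.result() for f in futures]
```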
Citations: 0
A semi-parametric multiple imputation method for high-sparse, high-dimensional, compositional data
Pub Date : 2024-09-10 DOI: 10.1101/2024.09.05.611521
Michael B Sohn, Kristin Scheible, Steven R Gill
High sparsity (i.e., excessive zeros) in microbiome data, which are high-dimensional and compositional, is unavoidable and can significantly alter analysis results. However, efforts to address this high sparsity have been very limited because, in part, it is impossible to justify the validity of any such methods, as zeros in microbiome data arise from multiple sources (e.g., true absence, stochastic nature of sampling). The most common approach is to treat all zeros as structural zeros (i.e., true absence) or rounded zeros (i.e., undetected due to detection limit). However, this approach can underestimate the mean abundance while overestimating its variance because many zeros can arise from the stochastic nature of sampling and/or functional redundancy (i.e., different microbes can perform the same functions), thus losing power. In this manuscript, we argue that treating all zeros as missing values would not significantly alter analysis results if the proportion of structural zeros is similar for all taxa, and we propose a semi-parametric multiple imputation method for high-sparse, high-dimensional, compositional data. We demonstrate the merits of the proposed method and its beneficial effects on downstream analyses in extensive simulation studies. We reanalyzed a type II diabetes (T2D) dataset to determine differentially abundant species between T2D patients and non-diabetic controls.
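The core idea of the abstract, treating zeros as missing values and pooling over several imputations, can be illustrated with a generic multiple-imputation loop. The sketch below uses scikit-learn's IterativeImputer on log-transformed relative abundances as a stand-in; it is not the authors' semi-parametric procedure.

```python
# Illustration of multiple imputation for sparse compositional data: zeros become
# missing values on the log scale, the matrix is imputed m times with different
# seeds, and each imputed matrix is re-closed to compositions.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer


def multiply_impute(counts: np.ndarray, m: int = 5) -> list[np.ndarray]:
    """counts: samples x taxa count matrix; returns m imputed composition matrices."""
    rel = counts / counts.sum(axis=1, keepdims=True)          # close counts to compositions
    logged = np.log(np.where(rel > 0, rel, np.nan))           # zeros -> missing on log scale
    imputed = []
    for seed in range(m):
        imp = IterativeImputer(sample_posterior=True, random_state=seed)
        filled = np.exp(imp.fit_transform(logged))
        imputed.append(filled / filled.sum(axis=1, keepdims=True))  # re-close
    return imputed
```

Downstream statistics (e.g. differential abundance tests) would then be computed on each imputed matrix and pooled.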
Citations: 0
Utilizing Protein Bioinformatics to Delve Deeper Into Immunopeptidomic Datasets
Pub Date : 2024-09-10 DOI: 10.1101/2024.09.05.611486
Christopher T Boughter
Immunopeptidomics is a growing subfield of proteomics that has the potential to shed new light on a long-neglected aspect of adaptive immunology: a comprehensive understanding of the peptides presented by major histocompatibility complexes (MHC) to T cells. As the field of immunopeptidomics continues to grow and mature, a parallel expansion in the methods for extracting quantitative features of these peptides is necessary. Currently, massive experimental efforts to isolate a given immunopeptidome are summarized in tables and pie charts, or worse, entirely thrown out in favor of singular peptides of interest. Ideally, an unbiased approach would dive deeper into these large proteomic datasets, identifying sequence-level biochemical signatures inherent to each individual dataset and the given immunological niche. This chapter will outline the steps for a powerful approach to such analysis, utilizing the Automated Immune Molecule Separator (AIMS) software for the characterization of immunopeptidomic datasets. AIMS is a flexible tool for the identification of biophysical signatures in peptidomic datasets, the elucidation of nuanced differences in repertoires collected across tissues or experimental conditions, and the generation of machine learning models for future applications to classification problems. In learning to use AIMS, readers of this chapter will receive a broad introduction to the field of protein bioinformatics and its utility in the analysis of immunopeptidomic datasets and other large-scale immune repertoire datasets.
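To make the notion of "sequence-level biochemical signatures" concrete, a small example of biophysical featurization of an eluted-peptide list is shown below. It uses Biopython's ProtParam module rather than the AIMS package itself; the feature set is only indicative of the kind of quantitative descriptors such analyses build on.

```python
# Compute simple per-peptide biophysical descriptors with Biopython (not AIMS).
from Bio.SeqUtils.ProtParam import ProteinAnalysis


def peptide_features(peptides: list[str]) -> list[dict]:
    rows = []
    for pep in peptides:
        pa = ProteinAnalysis(pep)
        rows.append({
            "peptide": pep,
            "length": len(pep),
            "gravy": pa.gravy(),               # grand average of hydropathy
            "pI": pa.isoelectric_point(),
            "aromaticity": pa.aromaticity(),
        })
    return rows


print(peptide_features(["SIINFEKL", "GILGFVFTL"]))
```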
Citations: 0
Neural-symbolic hybrid model for myosin complex in cardiac ventriculum decodes structural bases for inheritable heart disease from its genetic encoding
Pub Date : 2024-09-10 DOI: 10.1101/2024.09.05.611508
Thomas P Burghardt
Background: Human ventriculum myosin (βmys) powers contraction, sometimes in complex with myosin binding protein C (MYBPC3). The latter regulates βmys activity and impacts overall cardiac function. Nonsynonymous single nucleotide variants (SNVs) change protein sequence in βmys or MYBPC3, causing inheritable heart diseases by affecting the βmys/MYBPC3 complex. Muscle genetics encode instructions for contraction, informing native protein construction, functional integration, and inheritable disease impairment. A digital model decodes these instructions and evolves by continuously processing new information content from diverse data modalities in partnership with the human agent.
Methods: A general neural-network contraction model characterizes SNV impacts on human health. It rationalizes phenotype and pathogenicity assignment given an SNV's genetic characteristics, in this sense decoding βmys/MYBPC3 complex genetics and implicitly capturing ventricular muscle functionality. When an SNV-modified domain locates to an inter-protein contact in βmys/MYBPC3, it affects complex coordination. The domains involved, one in βmys and the other in MYBPC3, form coordinated domains (co-domains). Co-domains are bilateral, implying that their SNV modification probabilities can respond jointly to a common perturbation and thereby reveal their location. Human genetic diversity from the serial founder effect is the common systemic perturbation coupling co-domains, which are mapped by a methodology called 2-dimensional correlation genetics (2D-CG).
Results: Interpreting the output of the general neural-network contraction model involves 2D-CG co-domain mapping, which provides structural insights expressed in natural language. It aligns machine-learned intelligence from the neural-network model with human-provided structural insight from the 2D-CG map, and with other data from the literature, to form a neural-symbolic hybrid model integrating genetic and protein interaction data into a nascent digital twin. This process is the template for combining new information content from diverse data modalities into a digital model that can evolve. The nascent digital twin interprets SNV implications to discover disease mechanisms, can evaluate potential remedies for efficacy, and does so without animal models.
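The co-domain idea can be caricatured as a cross-correlation of SNV-modification probability profiles across populations. The sketch below is only a loose illustration of that reasoning step under assumed inputs (per-domain probabilities tabulated per population); it does not reproduce the published 2D-CG methodology.

```python
# Hedged sketch: if population structure perturbs SNV-modification probabilities in a
# βmys domain and a MYBPC3 domain jointly, their profiles across populations correlate.
import numpy as np


def candidate_codomains(p_bmys: np.ndarray, p_mybpc3: np.ndarray, thresh: float = 0.8):
    """p_bmys: populations x βmys domains; p_mybpc3: populations x MYBPC3 domains.
    Returns (βmys domain, MYBPC3 domain) index pairs with strongly correlated profiles."""
    zb = (p_bmys - p_bmys.mean(axis=0)) / p_bmys.std(axis=0)
    zm = (p_mybpc3 - p_mybpc3.mean(axis=0)) / p_mybpc3.std(axis=0)
    corr = zb.T @ zm / len(p_bmys)          # Pearson cross-correlation matrix
    return [(int(i), int(j)) for i, j in zip(*np.where(np.abs(corr) >= thresh))]
```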
Citations: 0
A consensus single-cell transcriptomic atlas of dermal fibroblast heterogeneity
Pub Date : 2024-09-10 DOI: 10.1101/2024.09.05.611379
Alex M. Ascension, Ander Izeta
Single-cell RNA sequencing (scRNAseq) studies have unveiled large transcriptomic heterogeneity within both human and mouse dermal fibroblasts, but a consensus atlas that spans both species is lacking. Here, by studying 25 human and 9 mouse datasets through a semi-supervised procedure, we categorize 15 distinct human fibroblast populations across 5 main axes. Analysis of human fibroblast markers characteristic of each population suggested diverse functions, such as position-dependent ECM synthesis, association with immune responses or structural roles in skin appendages. Similarly, mouse fibroblasts were categorized into 17 populations across 5 axes. Comparison of mouse and human fibroblast populations highlighted similarities suggesting a degree of functional overlap, though nuanced differences were also noted: transcriptomically, human axes seem to segregate by function, while mouse axes seem to prioritize positional information over function. Importantly, addition of newer datasets did not significantly change the defined population structure. This study enhances our understanding of dermal fibroblast diversity, shedding light on species-specific distinctions as well as shared functionalities.
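For orientation, the unsupervised starting point that such a consensus atlas builds on can be sketched with a standard Scanpy workflow on pooled datasets. The file name, parameter choices, and the `dataset` batch key are assumptions; the semi-supervised axis and population assignment described in the abstract is not reproduced here.

```python
# Generic Scanpy workflow: pool dermal fibroblast datasets, cluster, and list markers.
import scanpy as sc

adata = sc.read_h5ad("fibroblasts_pooled.h5ad")   # hypothetical pooled dataset
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, batch_key="dataset")
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0, key_added="population")
sc.tl.rank_genes_groups(adata, groupby="population")  # candidate population markers
```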
Citations: 0
AlphaFold2 SLiM screen for LC3-LIR interactions in autophagy
Pub Date : 2024-09-10 DOI: 10.1101/2024.09.06.611604
Jan F. M. Stuke, Gerhard Hummer
In selective autophagy, cargo recruitment is mediated by LC3-interacting regions (LIRs)/Atg8-interacting motifs (AIMs) in the cargo or cargo receptor proteins. The binding of these motifs to LC3/Atg8 proteins at the phagophore membrane is often modulated by post-translational modifications, especially phosphorylation. As a challenge for computational LIR predictions, sequences may contain the short canonical (W/F/Y)XX(L/I/V) motif without being functional. Conversely, LIRs may be formed by non-canonical but functional sequence motifs. AlphaFold2 has proven to be useful for LIR predictions, even if some LIRs are missed and proteins with thousands of residues reach the limits of computational feasibility. We present a fragment-based approach to address these limitations. We find that fragment length and phosphomimetic mutations modulate the interactions predicted by AlphaFold2. Systematic fragment screening for a range of target proteins yields structural models for interactions that AlphaFold2 and AlphaFold3 fail to predict for full-length targets. We provide guidance on fragment choice, sequence tuning, and LC3 isoform effects for optimal LIR screens. Finally, we also test the transferability of this general framework to SUMO-SIM interactions, another type of protein-protein interaction involving short linear motifs (SLiMs).
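A fragment-based screen of this kind requires tiling the candidate receptor into overlapping fragments, adding phosphomimetic variants, and pairing each with an LC3/GABARAP sequence for complex prediction. The sketch below shows that input preparation step only; the fragment length, step size, and the crude global S/T→E substitution are assumptions, and the paper's own choices may differ. ColabFold accepts multimer inputs with chains separated by ':'.

```python
# Prepare fragment x LC3 pairings for complex prediction (illustrative parameters).
def fragments(seq: str, length: int = 30, step: int = 10) -> list[str]:
    return [seq[i:i + length] for i in range(0, max(len(seq) - length, 0) + 1, step)]


def phosphomimetic(fragment: str) -> str:
    # crude global substitution; real screens would mutate specific phosphosites
    return fragment.replace("S", "E").replace("T", "E")


def colabfold_inputs(receptor_seq: str, lc3_seq: str) -> list[str]:
    jobs = []
    for frag in fragments(receptor_seq):
        for variant in (frag, phosphomimetic(frag)):
            jobs.append(f"{lc3_seq}:{variant}")   # chain-separated multimer input
    return jobs
```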
Citations: 0
scDrugAtlas: an integrative single-cell drug response atlas for unraveling tumor heterogeneity in therapeutic efficacy
Pub Date : 2024-09-10 DOI: 10.1101/2024.09.05.611403
Wei Huang, Xinda Ren, Yinpu Bai, Hui Liu
Tumor heterogeneity often leads to substantial differences in response to the same drug treatment. Pre-existing or acquired drug-resistant cell subpopulations within a tumor survive and proliferate, ultimately resulting in tumor relapse and metastasis. Drug resistance is the leading cause of failure in clinical tumor therapy. Therefore, accurate identification of drug-resistant tumor cell subpopulations could greatly facilitate precision medicine and novel drug development. However, the scarcity of single-cell drug response data significantly hinders the exploration of tumor cell resistance mechanisms and the development of computational predictive methods. In this paper, we propose scDrugAtlas, a comprehensive database devoted to integrating drug response data at the single-cell level. We manually compiled more than 100 datasets containing single-cell drug responses from various public resources. The current version comprises large-scale single-cell transcriptional profiles and drug response labels from more than 1,000 samples (cell lines, mice, PDX models, patients, and bacteria), across 66 unique drugs and 13 major cancer types. In particular, we assigned a confidence level to each response label based on the tissue source (primary or relapse/metastasis), drug exposure time, and drug-induced cell phenotype. We believe scDrugAtlas can greatly assist the bioinformatics community in developing computational models and biologists in identifying drug-resistant tumor cells and the underlying molecular mechanisms. The scDrugAtlas database is available at: http://drug.hliulab.tech/scDrugAtlas/.
Citations: 0
The FAIR Data Point Populator: collaborative FAIRification and population of FAIR Data Points
Pub Date : 2024-09-10 DOI: 10.1101/2024.09.06.611505
Daphne Wijnbergen, Rajaram Kaliyaperumal, Kees Burger, Luiz Olavo Bonino da Silva Santos, Barend Mons, Marco Roos, Eleni Mina
Background: Use of the FAIR principles (Findable, Accessible, Interoperable and Reusable) allows the rapidly growing number of biomedical datasets to be optimally (re)used. An important aspect of the FAIR principles is metadata. The FAIR Data Point specifications and reference implementation were designed as an example of how to publish metadata according to the FAIR principles. Various tools for creating metadata exist, but many have limitations, such as interfaces that are not intuitive, metadata that does not adhere to a common metadata schema, limited scalability, and inefficient collaboration. We aim to address these limitations with the FAIR Data Point Populator. Results: The FAIR Data Point Populator consists of a GitHub workflow together with Excel templates that include tooltips, validation, and documentation. The Excel templates are targeted at non-technical users and can be used collaboratively in online spreadsheet software. A more technical user then uses the GitHub workflow to read multiple entries from the Excel sheets and transform them into machine-readable metadata. This metadata is then automatically uploaded to a connected FAIR Data Point. We applied the FAIR Data Point Populator to the metadata of two datasets and a patient registry. We were then able to run a query on the FAIR Data Point Index in order to retrieve one of the datasets. Conclusion: The FAIR Data Point Populator addresses several limitations of other tools. It makes creating metadata easier, ensures adherence to a common metadata schema, allows bulk creation of metadata entries, and increases collaboration. As a result, the barrier to entry for FAIRification is lower, which enables the creation of FAIR data by more people.
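The spreadsheet-to-machine-readable-metadata step can be illustrated with a small script that reads rows with pandas and emits DCAT metadata with rdflib. This is not the FAIR Data Point Populator's own code; the column names and base URI are assumptions for the example.

```python
# Read dataset rows from an Excel template and emit DCAT metadata as Turtle.
import pandas as pd
from rdflib import Graph, Literal, URIRef, Namespace
from rdflib.namespace import DCAT, DCTERMS, RDF

EX = Namespace("https://example.org/fdp/dataset/")   # assumed base URI


def sheet_to_dcat(xlsx_path: str) -> Graph:
    df = pd.read_excel(xlsx_path)        # assumed columns: id, title, description, license
    g = Graph()
    for _, row in df.iterrows():
        ds = EX[str(row["id"])]
        g.add((ds, RDF.type, DCAT.Dataset))
        g.add((ds, DCTERMS.title, Literal(row["title"])))
        g.add((ds, DCTERMS.description, Literal(row["description"])))
        g.add((ds, DCTERMS.license, URIRef(row["license"])))
    return g


print(sheet_to_dcat("datasets.xlsx").serialize(format="turtle"))
```

The resulting graph would then be pushed to a FAIR Data Point via its API.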
Citations: 0
A robust unsupervised clustering approach for high-dimensional biological imaging data reveals shared drug-induced morphological signatures
Pub Date : 2024-09-09 DOI: 10.1101/2024.09.05.611300
Shaine Chenxin Bao, Dalia Mizikovsky, Kathleen Pishas, Qiongyi Zhao, Karla J Cowley, Evanny Marinovic, Mark Carey, Ian Campbell, Kaylene J Simpson, Dane Cheasley, Nathan Palpant
High-throughput analysis methods have emerged as central technologies to accelerate discovery through the scalable generation of large-scale data. Analysis of these datasets remains challenging due to limitations in computational approaches for dimensionality reduction. Here, we present UnTANGLeD, a versatile computational pipeline that prioritises biologically robust and meaningful information to guide actionable strategies from input screening data, which we demonstrate using results from image-based drug screening. By providing a robust framework for analysing high-dimensional biological data, UnTANGLeD offers a powerful tool for the analysis of theoretically any data type from any screening platform.
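The general workflow class that such pipelines belong to, standardizing per-feature morphological profiles, reducing dimensionality, and clustering treatments by shared signatures, can be sketched as follows. This is an illustration of that workflow class under assumed inputs, not the UnTANGLeD pipeline itself.

```python
# Cluster treatments by their image-derived morphological profiles.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering


def cluster_profiles(profiles: np.ndarray, n_clusters: int = 20) -> np.ndarray:
    """profiles: treatments x morphological features; returns one cluster label per treatment."""
    z = StandardScaler().fit_transform(profiles)
    reduced = PCA(n_components=min(50, *z.shape)).fit_transform(z)
    return AgglomerativeClustering(n_clusters=n_clusters).fit_predict(reduced)
```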
Citations: 0
BFVD - a large repository of predicted viral protein structures
Pub Date : 2024-09-09 DOI: 10.1101/2024.09.08.611582
Martin Steinegger, Eli Levy Karin, Rachel Seongeun Kim
The AlphaFold Protein Structure Database (AFDB) is the largest repository of accurately predicted structures with taxonomic labels. Despite providing predictions for over 214 million UniProt entries, the AFDB does not cover viral sequences, severely limiting their study. To bridge this gap, we created the Big Fantastic Virus Database (BFVD), a repository of 351,242 protein structures predicted by applying ColabFold to the viral sequence representatives of the UniRef30 clusters. BFVD holds a unique repertoire of protein structures, as over 63% of its entries show no or low structural similarity to existing repositories. We demonstrate how BFVD substantially enhances the fraction of annotated bacteriophage proteins compared to sequence-based annotation using Bakta. In this respect, BFVD is on par with the AFDB, while holding nearly three orders of magnitude fewer structures. BFVD is an important virus-specific expansion to protein structure repositories, offering new opportunities to advance viral research. BFVD is freely available at https://bfvd.steineggerlab.workers.dev
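Structure-similarity comparisons of the kind used to characterize BFVD's uniqueness are typically run with Foldseek. The sketch below wraps Foldseek's easy-search workflow from Python; the query file and the local directory of downloaded BFVD structures are placeholders, and this is only one plausible way to use the repository.

```python
# Search one predicted viral structure against a local set of structures with Foldseek.
import subprocess


def foldseek_search(query_pdb: str, target_dir: str, out_tsv: str, tmp_dir: str = "tmp"):
    subprocess.run(
        ["foldseek", "easy-search", query_pdb, target_dir, out_tsv, tmp_dir],
        check=True,
    )


foldseek_search("predicted_viral_protein.pdb", "bfvd_structures/", "hits.m8")
```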
Citations: 0