
Annals of Applied Statistics: Latest Articles

Fitting stochastic epidemic models to gene genealogies using linear noise approximation.
IF 1.8 | CAS Zone 4 (Mathematics) | Q1 Mathematics | Pub Date: 2023-03-01 | DOI: 10.1214/21-aoas1583
Mingwei Tang, Gytis Dudas, Trevor Bedford, Vladimir N Minin

Phylodynamics is a set of population genetics tools that aim at reconstructing the demographic history of a population based on molecular sequences of individuals sampled from the population of interest. One important task in phylodynamics is to estimate changes in (effective) population size. When applied to infectious disease sequences, such estimation of population size trajectories can provide information about changes in the number of infections. To model changes in the number of infected individuals, current phylodynamic methods use non-parametric approaches (e.g., Bayesian curve-fitting based on change-point models or Gaussian process priors), parametric approaches (e.g., based on differential equations), and stochastic modeling in conjunction with likelihood-free Bayesian methods. The first class of methods yields results that are hard to interpret epidemiologically. The second class of methods provides estimates of important epidemiological parameters, such as infection and removal/recovery rates, but ignores variation in the dynamics of infectious disease spread. The third class of methods is the most advantageous statistically, but relies on computationally intensive particle filtering techniques that limit its applications. We propose a Bayesian model that combines phylodynamic inference and stochastic epidemic models, and achieves computational tractability by using a linear noise approximation (LNA) - a technique that allows us to approximate probability densities of stochastic epidemic model trajectories. LNA opens the door for using modern Markov chain Monte Carlo tools to approximate the joint posterior distribution of the disease transmission parameters and of high-dimensional vectors describing unobserved changes in the stochastic epidemic model compartment sizes (e.g., numbers of infectious and susceptible individuals). In a simulation study, we show that our method can successfully recover parameters of stochastic epidemic models. We apply our estimation technique to Ebola genealogies estimated using viral genetic data from the 2014 epidemic in Sierra Leone and Liberia.
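The core computational device here, the linear noise approximation, can be summarised in a few lines of code. The sketch below assumes a simple SIR model with transmission rate beta and recovery rate gamma (not the authors' implementation): it integrates the ODE for the mean together with a Lyapunov-type ODE for the covariance, which yields the Gaussian approximation to the trajectory density that an MCMC sampler can evaluate cheaply.

```python
# A minimal LNA sketch for an SIR model: the mean follows the deterministic ODE
# and the covariance follows a Lyapunov-type ODE, giving a Gaussian
# approximation to the transition density. Illustrative only.
import numpy as np

def lna_sir(beta, gamma, N, m0, t_max, dt=0.01):
    """Euler integration of the LNA mean and covariance for an SIR model."""
    steps = int(t_max / dt)
    m = np.array(m0, dtype=float)          # mean of (S, I)
    Sigma = np.zeros((2, 2))               # covariance of (S, I)
    stoich = np.array([[-1.0, 0.0],        # infection: S-1, I+1
                       [1.0, -1.0]])       # recovery:  I-1
    means, covs = [m.copy()], [Sigma.copy()]
    for _ in range(steps):
        S, I = m
        rates = np.array([beta * S * I / N, gamma * I])      # event intensities
        drift = stoich @ rates                                # ODE right-hand side
        # Jacobian of the drift with respect to (S, I)
        F = np.array([[-beta * I / N, -beta * S / N],
                      [beta * I / N,  beta * S / N - gamma]])
        Q = stoich @ np.diag(rates) @ stoich.T                # diffusion matrix
        m = m + dt * drift
        Sigma = Sigma + dt * (F @ Sigma + Sigma @ F.T + Q)
        means.append(m.copy())
        covs.append(Sigma.copy())
    return np.array(means), np.array(covs)

# Example: population of 10,000 with 10 initial infections.
means, covs = lna_sir(beta=0.3, gamma=0.1, N=1e4, m0=(9990.0, 10.0), t_max=50.0)
print(means[-1], covs[-1])
```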

Citations: 3
Probabilistic HIV recency classification-a logistic regression without labeled individual level training data.
IF 1.8 | CAS Zone 4 (Mathematics) | Q1 Mathematics | Pub Date: 2023-03-01 | Epub Date: 2023-01-24 | DOI: 10.1214/22-aoas1618
Ben Sheng, Changcheng Li, Le Bao, Runze Li

Accurate HIV incidence estimation based on individual recent infection status (recent vs long-term infection) is important for monitoring the epidemic, targeting interventions to those at greatest risk of new infection, and evaluating existing programs of prevention and treatment. Starting from 2015, the Population-based HIV Impact Assessment (PHIA) individual-level surveys have been implemented in the most-affected countries in sub-Saharan Africa. PHIA is a nationally representative HIV-focused survey that combines household visits with key questions and cutting-edge technologies, such as biomarker tests for HIV antibody and HIV viral load, which offer the unique opportunity of distinguishing between recent and long-term infection and of providing relevant HIV information by age, gender, and location. In this article, we propose a semi-supervised logistic regression model for estimating individual-level HIV recency status. It incorporates information from multiple data sources - the PHIA survey, where the true HIV recency status is unknown, and the cohort studies provided in the literature, where the relationship between HIV recency status and the covariates is presented in the form of a contingency table. It also utilizes the national-level HIV incidence estimates from the epidemiology model. Applying the proposed model to Malawi PHIA data, we demonstrate that our approach is more accurate for individual-level estimation and more appropriate for estimating HIV recency rates at aggregated levels than the current practice - the binary classification tree (BCT).
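One ingredient of this approach, fitting a logistic regression from grouped counts rather than labelled individuals, can be illustrated directly. The sketch below is a hypothetical toy example (the covariate bands and counts are invented) that maximises a binomial log-likelihood over contingency-table rows; it is not the authors' full semi-supervised model.

```python
# Fitting a logistic regression from grouped (contingency table) data: each row
# carries a covariate vector, the number tested, and the number classified
# recent, and the binomial log-likelihood is maximised directly.
import numpy as np
from scipy.optimize import minimize

# Columns: intercept, covariate (e.g. a time-since-seroconversion band); toy values.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
n_tested = np.array([120, 150, 90, 60])          # group sizes from cohort tables
n_recent = np.array([90, 70, 20, 5])             # counts classified as recent

def neg_loglik(beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return -np.sum(n_recent * np.log(p) + (n_tested - n_recent) * np.log(1 - p))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("coefficients:", np.round(fit.x, 3))
```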

Citations: 1
Hierarchical resampling for bagging in multistudy prediction with applications to human neurochemical sensing.
IF 1.8 | CAS Zone 4 (Mathematics) | Q1 Mathematics | Pub Date: 2022-12-01 | Epub Date: 2022-09-26 | DOI: 10.1214/21-aoas1574
Gabriel Loewinger, Prasad Patil, Kenneth T Kishida, Giovanni Parmigiani

We propose the "study strap ensemble", which combines advantages of two common approaches to fitting prediction models when multiple training datasets ("studies") are available: pooling studies and fitting one model versus averaging predictions from multiple models each fit to individual studies. The study strap ensemble fits models to bootstrapped datasets, or "pseudo-studies." These are generated by resampling from multiple studies with a hierarchical resampling scheme that generalizes the randomized cluster bootstrap. The study strap is controlled by a tuning parameter that determines the proportion of observations to draw from each study. When the parameter is set to its lowest value, each pseudo-study is resampled from only a single study. When it is high, the study strap ignores the multi-study structure and generates pseudo-studies by merging the datasets and drawing observations like a standard bootstrap. We empirically show the optimal tuning value often lies in between, and prove that special cases of the study strap draw the merged dataset and the set of original studies as pseudo-studies. We extend the study strap approach with an ensemble weighting scheme that utilizes information in the distribution of the covariates of the test dataset. Our work is motivated by neuroscience experiments using real-time neurochemical sensing during awake behavior in humans. Current techniques to perform this kind of research require measurements from an electrode placed in the brain during awake neurosurgery and rely on prediction models to estimate neurotransmitter concentrations from the electrical measurements recorded by the electrode. These models are trained by combining multiple datasets that are collected in vitro under heterogeneous conditions in order to promote accuracy of the models when applied to data collected in the brain. A prevailing challenge is deciding how to combine studies or ensemble models trained on different studies to enhance model generalizability. Our methods produce marked improvements in simulations and in this application. All methods are available in the studyStrap CRAN package.
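The hierarchical resampling idea can be conveyed with a short generic sketch. In the code below, `bag_size` is a stand-in for the paper's tuning parameter: a value of 1 resamples a pseudo-study from a single study, while large values approach a bootstrap of the merged data. This is an illustration in Python rather than the studyStrap package's exact construction.

```python
# A generic two-stage (hierarchical) resampling sketch in the spirit of the
# study strap: first draw studies with replacement, then draw observations from
# the pooled rows of the chosen studies.
import numpy as np

def pseudo_study(studies, bag_size, n_obs, rng):
    """Generate one pseudo-study.

    studies  : list of (X, y) tuples, one per training study
    bag_size : number of studies drawn (with replacement); 1 mimics
               single-study resampling, large values approach a merged bootstrap
    n_obs    : number of observations in the pseudo-study
    """
    chosen = rng.integers(0, len(studies), size=bag_size)     # stage 1: studies
    X_pool = np.vstack([studies[k][0] for k in chosen])
    y_pool = np.concatenate([studies[k][1] for k in chosen])
    idx = rng.integers(0, len(y_pool), size=n_obs)             # stage 2: rows
    return X_pool[idx], y_pool[idx]

rng = np.random.default_rng(0)
studies = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(5)]
Xb, yb = pseudo_study(studies, bag_size=2, n_obs=100, rng=rng)
print(Xb.shape, yb.shape)
```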

Citations: 0
A SPATIAL CAUSAL ANALYSIS OF WILDLAND FIRE-CONTRIBUTED PM2.5 USING NUMERICAL MODEL OUTPUT.
IF 1.3 | CAS Zone 4 (Mathematics) | Q2 Statistics & Probability | Pub Date: 2022-12-01 | Epub Date: 2022-09-26 | DOI: 10.1214/22-aoas1610
Alexandra Larsen, Shu Yang, Brian J Reich, Ana G Rappold

Wildland fire smoke contains hazardous levels of fine particulate matter (PM2.5), a pollutant shown to adversely affect health. Estimating fire-attributable PM2.5 concentrations is key to quantifying the impact on air quality and the subsequent health burden. This is a challenging problem since only total PM2.5 is measured at monitoring stations and both fire-attributable PM2.5 and PM2.5 from all other sources are correlated in space and time. We propose a framework for estimating fire-contributed PM2.5 and PM2.5 from all other sources using a novel causal inference framework and bias-adjusted chemical model representations of PM2.5 under counterfactual scenarios. The chemical model representation of PM2.5 for this analysis is simulated using the Community Multiscale Air Quality Modeling System (CMAQ), run with and without fire emissions across the contiguous U.S. for the 2008-2012 wildfire seasons. The CMAQ output is calibrated with observations from monitoring sites for the same spatial domain and time period. We use a Bayesian model that accounts for spatial variation to estimate the effect of wildland fires on PM2.5 and state assumptions under which the estimate has a valid causal interpretation. Our results include estimates of the contributions of wildfire smoke to PM2.5 for the contiguous U.S. Additionally, we compute the health burden associated with the PM2.5 attributable to wildfire smoke.
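A naive, non-spatial version of the calibrate-then-difference idea reads as follows: bias-adjust the full CMAQ run against monitor observations with a simple linear fit, then difference the adjusted with-fire and no-fire runs. The paper's Bayesian spatial model replaces this crude calibration; all inputs below are synthetic and illustrative.

```python
# Naive calibrate-then-difference sketch: OLS calibration of model output to
# observations, then the adjusted difference between counterfactual runs is
# taken as a rough fire-contributed PM2.5 estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 200
cmaq_fire = rng.gamma(shape=4.0, scale=3.0, size=n)        # CMAQ total PM2.5, with fires
cmaq_nofire = cmaq_fire * rng.uniform(0.5, 0.9, size=n)    # CMAQ PM2.5, fires removed
monitors = 2.0 + 0.8 * cmaq_fire + rng.normal(0, 1.0, n)   # synthetic monitor data

# Ordinary least squares calibration of model output to observations.
A = np.column_stack([np.ones(n), cmaq_fire])
alpha, beta = np.linalg.lstsq(A, monitors, rcond=None)[0]

fire_pm25 = (alpha + beta * cmaq_fire) - (alpha + beta * cmaq_nofire)
print("mean fire-contributed PM2.5 estimate:", fire_pm25.mean())
```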

Citations: 0
Semi-Supervised Non-Parametric Bayesian Modelling of Spatial Proteomics.
IF 1.3 | CAS Zone 4 (Mathematics) | Q2 Statistics & Probability | Pub Date: 2022-12-01 | DOI: 10.1214/22-AOAS1603
Oliver M Crook, Kathryn S Lilley, Laurent Gatto, Paul D W Kirk

Understanding sub-cellular protein localisation is an essential component in the analysis of context specific protein function. Recent advances in quantitative mass-spectrometry (MS) have led to high resolution mapping of thousands of proteins to sub-cellular locations within the cell. Novel modelling considerations to capture the complex nature of these data are thus necessary. We approach analysis of spatial proteomics data in a non-parametric Bayesian framework, using K-component mixtures of Gaussian process regression models. The Gaussian process regression model accounts for correlation structure within a sub-cellular niche, with each mixture component capturing the distinct correlation structure observed within each niche. The availability of marker proteins (i.e. proteins with a priori known labelled locations) motivates a semi-supervised learning approach to inform the Gaussian process hyperparameters. We moreover provide an efficient Hamiltonian-within-Gibbs sampler for our model. Furthermore, we reduce the computational burden associated with inversion of covariance matrices by exploiting the structure in the covariance matrix. A tensor decomposition of our covariance matrices allows extended Trench and Durbin algorithms to be applied to reduce the computational complexity of inversion and hence accelerate computation. We provide detailed case-studies on Drosophila embryos and mouse pluripotent embryonic stem cells to illustrate the benefit of semi-supervised functional Bayesian modelling of the data.
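A stripped-down version of the generative classification idea is sketched below: each niche is represented by a mean profile with a squared-exponential (GP-style) covariance, and an unlabelled protein is assigned to the niche under which its profile has the highest Gaussian density. Hyperparameters are fixed and the data are synthetic; the paper instead learns these quantities with a semi-supervised Hamiltonian-within-Gibbs sampler.

```python
# Toy generative classification over fractionation profiles: niche means come
# from marker proteins, a shared squared-exponential covariance stands in for
# the niche-specific GP covariances learned in the paper.
import numpy as np
from scipy.stats import multivariate_normal

def se_kernel(x, amp=1.0, ls=2.0, noise=0.1):
    d = np.subtract.outer(x, x)
    return amp * np.exp(-0.5 * (d / ls) ** 2) + noise * np.eye(len(x))

fractions = np.arange(8.0)                       # fractionation axis
K = se_kernel(fractions)

rng = np.random.default_rng(2)
# Synthetic marker proteins for two niches with different mean profiles.
mu = {"ER": np.sin(fractions / 2.0), "Mito": np.cos(fractions / 2.0)}
markers = {k: rng.multivariate_normal(m, K, size=20) for k, m in mu.items()}

# Niche-specific means estimated from markers; shared covariance for simplicity.
means = {k: v.mean(axis=0) for k, v in markers.items()}

def classify(profile):
    scores = {k: multivariate_normal(means[k], K).logpdf(profile) for k in means}
    return max(scores, key=scores.get)

unlabelled = rng.multivariate_normal(mu["Mito"], K)
print(classify(unlabelled))   # expected: 'Mito' most of the time
```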

Citations: 0
SCALAR ON NETWORK REGRESSION VIA BOOSTING.
IF 1.3 | CAS Zone 4 (Mathematics) | Q2 Statistics & Probability | Pub Date: 2022-12-01 | Epub Date: 2022-09-26 | DOI: 10.1214/22-aoas1612
Emily L Morris, Kevin He, Jian Kang

Neuroimaging studies have a growing interest in learning the association between individual brain connectivity networks and their clinical characteristics. It is also of great interest to identify brain subnetworks as biomarkers to predict clinical symptoms, such as disease status, potentially providing insight on neuropathology. This motivates the need for developing a new type of regression model where the response variable is scalar and the predictors are networks that are typically represented as adjacency matrices or weighted adjacency matrices, to which we refer as scalar-on-network regression. In this work, we develop a new boosting method for model fitting with subnetwork marker selection. Our approach, as opposed to group lasso or other existing regularization methods, is essentially a gradient descent algorithm leveraging known network structure. We demonstrate the utility of our methods via simulation studies and analysis of resting-state fMRI data in a cognitive developmental cohort study.
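A plain componentwise L2-boosting sketch with vectorised edge weights as predictors conveys the basic fitting loop; the authors' method additionally exploits the known network structure when selecting and updating components, which this illustration omits.

```python
# Componentwise L2 boosting for a scalar outcome with edge weights (upper
# triangle of each subject's adjacency matrix) as predictors. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 10                        # subjects, nodes
iu = np.triu_indices(p, k=1)
A = rng.uniform(size=(n, p, p))
A = (A + A.transpose(0, 2, 1)) / 2    # symmetrise the weighted adjacency matrices
X = A[:, iu[0], iu[1]]                # edge weights as predictors
beta_true = np.zeros(X.shape[1])
beta_true[:3] = [1.0, -1.0, 0.5]
y = X @ beta_true + rng.normal(0, 0.1, n)

def l2_boost(X, y, n_steps=200, nu=0.1):
    Xc = X - X.mean(axis=0)
    r = y - y.mean()
    beta = np.zeros(X.shape[1])
    for _ in range(n_steps):
        # Univariate least-squares coefficient for every edge predictor.
        coef = Xc.T @ r / (Xc ** 2).sum(axis=0)
        sse = ((r[:, None] - Xc * coef) ** 2).sum(axis=0)
        j = np.argmin(sse)            # best-fitting edge this round
        beta[j] += nu * coef[j]
        r = r - nu * coef[j] * Xc[:, j]
    return beta

print(np.round(l2_boost(X, y)[:5], 2))
```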

Citations: 0
NETWORK DIFFERENTIAL CONNECTIVITY ANALYSIS.
IF 1.3 | CAS Zone 4 (Mathematics) | Q2 Statistics & Probability | Pub Date: 2022-12-01 | Epub Date: 2022-09-26 | DOI: 10.1214/21-aoas1581
Sen Zhao, Ali Shojaie

Identifying differences in networks has become a canonical problem in many biological applications. Existing methods try to accomplish this goal by either directly comparing the estimated structures of two networks, or testing the null hypothesis that the covariance or inverse covariance matrices in two populations are identical. However, estimation approaches do not provide measures of uncertainty, e.g., p-values, whereas existing testing approaches could lead to misleading results, as we illustrate in this paper. To address these shortcomings, we propose a qualitative hypothesis testing framework, which tests whether the connectivity structures in the two networks are the same. Our framework is especially appropriate if the goal is to identify nodes or edges that are differentially connected. No existing approach could test such hypotheses and provide corresponding measures of uncertainty. Theoretically, we show that under appropriate conditions, our proposal correctly controls the type-I error rate in testing the qualitative hypothesis. Empirically, we demonstrate the performance of our proposal using simulation studies and applications in cancer genomics.
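For intuition only, the sketch below attaches a p-value to a "same connectivity?" question with a simple permutation test on the largest edge-wise difference in sample correlations. This is a generic baseline, not the qualitative testing framework proposed in the paper.

```python
# Generic permutation test for differential connectivity: the statistic is the
# largest absolute difference in sample correlations across node pairs, and the
# null distribution comes from permuting group labels.
import numpy as np

def max_corr_diff(X1, X2):
    p = X1.shape[1]
    iu = np.triu_indices(p, k=1)
    return np.max(np.abs(np.corrcoef(X1, rowvar=False)[iu]
                         - np.corrcoef(X2, rowvar=False)[iu]))

def permutation_test(X1, X2, n_perm=500, seed=4):
    rng = np.random.default_rng(seed)
    obs = max_corr_diff(X1, X2)
    pooled = np.vstack([X1, X2])
    n1 = len(X1)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        null.append(max_corr_diff(pooled[idx[:n1]], pooled[idx[n1:]]))
    return (1 + np.sum(np.array(null) >= obs)) / (1 + n_perm)

rng = np.random.default_rng(4)
X1 = rng.normal(size=(80, 6))
X2 = rng.normal(size=(80, 6))
X2[:, 1] += 0.8 * X2[:, 0]     # extra edge between nodes 0 and 1 in group 2
print("p-value:", permutation_test(X1, X2))
```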

Citations: 0
BAYESIAN HIERARCHICAL RANDOM-EFFECTS META-ANALYSIS AND DESIGN OF PHASE I CLINICAL TRIALS.
IF 1.8 | CAS Zone 4 (Mathematics) | Q1 Mathematics | Pub Date: 2022-12-01 | Epub Date: 2022-09-26 | DOI: 10.1214/22-aoas1600
Ruitao Lin, Haolun Shi, Guosheng Yin, Peter F Thall, Ying Yuan, Christopher R Flowers

We propose a curve-free random-effects meta-analysis approach to combining data from multiple phase I clinical trials to identify an optimal dose. Our method accounts for between-study heterogeneity that may stem from different study designs, patient populations, or tumor types. We also develop a meta-analytic-predictive (MAP) method based on a power prior that incorporates data from multiple historical studies into the design and conduct of a new phase I trial. The performance of the proposed methods for data analysis and trial design is evaluated by extensive simulation studies. The proposed random-effects meta-analysis method provides more reliable dose selection than comparators that rely on parametric assumptions. The MAP-based dose-finding designs are generally more efficient than those that do not borrow information, especially when the current and historical studies are similar. The proposed methodologies are illustrated by a meta-analysis of five historical phase I studies of Sorafenib and the design of a new phase I trial.
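The power-prior mechanism behind the MAP approach can be shown with a toy conjugate-normal example: each historical likelihood is raised to a power a0 in (0, 1), which down-weights it relative to the current trial. The sketch below is schematic information borrowing for a normal mean, not the paper's curve-free dose-finding model.

```python
# Toy power-prior illustration: historical studies contribute a0 times their
# likelihood precision, the current trial enters with full weight.
import numpy as np

def power_prior_posterior(y_new, y_hist_list, a0, sigma=1.0, prior_var=100.0):
    """Posterior mean/variance for a normal mean with known sigma."""
    prec = 1.0 / prior_var
    mean = 0.0
    # Historical studies are down-weighted by the power a0.
    for y_hist in y_hist_list:
        prec_h = a0 * len(y_hist) / sigma**2
        mean = (mean * prec + prec_h * np.mean(y_hist)) / (prec + prec_h)
        prec += prec_h
    # Current study enters with full weight.
    prec_n = len(y_new) / sigma**2
    mean = (mean * prec + prec_n * np.mean(y_new)) / (prec + prec_n)
    prec += prec_n
    return mean, 1.0 / prec

rng = np.random.default_rng(5)
historical = [rng.normal(0.4, 1.0, 30) for _ in range(5)]   # five historical studies
current = rng.normal(0.6, 1.0, 15)                          # new, smaller trial
print(power_prior_posterior(current, historical, a0=0.5))
```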

Citations: 1
Bayesian Inference for Brain Activity from Functional Magnetic Resonance Imaging Collected at Two Spatial Resolutions.
IF 1.8 | CAS Zone 4 (Mathematics) | Q1 Mathematics | Pub Date: 2022-12-01 | Epub Date: 2022-09-26 | DOI: 10.1214/22-aoas1606
Andrew S Whiteman, Andreas J Bartsch, Jian Kang, Timothy D Johnson

Neuroradiologists and neurosurgeons increasingly opt to use functional magnetic resonance imaging (fMRI) to map functionally relevant brain regions for noninvasive presurgical planning and intraoperative neuronavigation. This application requires a high degree of spatial accuracy, but the fMRI signal-to-noise ratio (SNR) decreases as spatial resolution increases. In practice, fMRI scans can be collected at multiple spatial resolutions, and it is of interest to make more accurate inference on brain activity by combining data with different resolutions. To this end, we develop a new Bayesian model to leverage both better anatomical precision in high resolution fMRI and higher SNR in standard resolution fMRI. We assign a Gaussian process prior to the mean intensity function and develop an efficient, scalable posterior computation algorithm to integrate both sources of data. We draw posterior samples using an algorithm analogous to Riemann manifold Hamiltonian Monte Carlo in an expanded parameter space. We illustrate our method in analysis of presurgical fMRI data, and show in simulation that it infers the mean intensity more accurately than alternatives that use either the high or standard resolution fMRI data alone.
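As a minimal illustration of combining the two resolutions, the conjugate Gaussian sketch below treats each standard-resolution voxel as a noisy average of the high-resolution voxels it covers and computes the closed-form posterior mean of the latent intensity. The paper's model replaces the simple Gaussian prior with a Gaussian process and samples the posterior with a Hamiltonian-within-Gibbs scheme; everything here is synthetic.

```python
# Conjugate Gaussian sketch of multi-resolution fusion: high-resolution voxels
# observe the latent intensity with large noise, standard-resolution voxels
# observe block averages with small noise; the posterior mean is closed form.
import numpy as np

rng = np.random.default_rng(6)
n_hi, block = 16, 4                        # 16 high-res voxels, 4 per standard voxel
mu_true = np.sin(np.linspace(0, 3, n_hi))  # latent activation intensity

A = np.kron(np.eye(n_hi // block), np.full((1, block), 1.0 / block))  # averaging map
y_hi = mu_true + rng.normal(0, 1.0, n_hi)                  # high resolution, low SNR
y_std = A @ mu_true + rng.normal(0, 0.2, n_hi // block)    # standard resolution, high SNR

tau2, s_hi2, s_std2 = 10.0, 1.0, 0.04
post_prec = np.eye(n_hi) / tau2 + np.eye(n_hi) / s_hi2 + A.T @ A / s_std2
post_mean = np.linalg.solve(post_prec, y_hi / s_hi2 + A.T @ y_std / s_std2)
print(np.round(post_mean - mu_true, 2))    # combined estimate tracks the truth
```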

Citations: 1
EXTENDED STOCHASTIC BLOCK MODELS WITH APPLICATION TO CRIMINAL NETWORKS.
IF 1.8 | CAS Zone 4 (Mathematics) | Q1 Mathematics | Pub Date: 2022-12-01 | Epub Date: 2022-09-26 | DOI: 10.1214/21-AOAS1595
Sirio Legramanti, Tommaso Rigon, Daniele Durante, David B Dunson

Reliably learning group structures among nodes in network data is challenging in several applications. We are particularly motivated by studying covert networks that encode relationships among criminals. These data are subject to measurement errors and exhibit a complex combination of an unknown number of core-periphery, assortative and disassortative structures that may unveil key architectures of the criminal organization. The coexistence of these noisy block patterns limits the reliability of routinely-used community detection algorithms, and requires extensions of model-based solutions to realistically characterize the node partition process, incorporate information from node attributes, and provide improved strategies for estimation and uncertainty quantification. To cover these gaps, we develop a new class of extended stochastic block models (ESBM) that infer groups of nodes having common connectivity patterns via Gibbs-type priors on the partition process. This choice encompasses many realistic priors for criminal networks, covering solutions with a fixed, random or infinite number of possible groups, and facilitates the inclusion of node attributes in a principled manner. Among the new alternatives in our class, we focus on the Gnedin process as a realistic prior that allows the number of groups to be finite, random and subject to a reinforcement process coherent with criminal networks. A collapsed Gibbs sampler is proposed for the whole ESBM class, and refined strategies for estimation, prediction, uncertainty quantification and model selection are outlined. The ESBM performance is illustrated in realistic simulations and in an application to an Italian mafia network, where we unveil key complex block structures, mostly hidden from state-of-the-art alternatives.
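The quantity a collapsed Gibbs sampler works with can be written down compactly for the simplest case: a binary stochastic block model with Beta(1, 1) priors on the block-pair edge probabilities, which integrate out analytically. The sketch below computes this collapsed log-likelihood for a candidate partition; Gibbs-type partition priors and node attributes, which are central to the ESBM, are omitted. Comparing the true partition against a shuffled one shows that the collapsed likelihood prefers the former.

```python
# Collapsed (Beta-Binomial) log marginal likelihood of a block partition for an
# undirected binary network, the score a collapsed Gibbs sampler compares when
# reallocating nodes. Minimal illustration, not the ESBM itself.
import numpy as np
from scipy.special import betaln

def collapsed_loglik(adj, z, a=1.0, b=1.0):
    """Log marginal likelihood of an undirected binary network given labels z."""
    labels = np.unique(z)
    ll = 0.0
    for i, r in enumerate(labels):
        for s in labels[i:]:
            rows, cols = np.where(z == r)[0], np.where(z == s)[0]
            block = adj[np.ix_(rows, cols)]
            if r == s:                        # within-block: count each dyad once
                edges = np.triu(block, k=1).sum()
                dyads = len(rows) * (len(rows) - 1) / 2
            else:
                edges, dyads = block.sum(), block.size
            ll += betaln(a + edges, b + dyads - edges) - betaln(a, b)
    return ll

rng = np.random.default_rng(7)
z_true = np.repeat([0, 1], 15)
P = np.where(z_true[:, None] == z_true[None, :], 0.4, 0.05)
adj = np.triu(rng.uniform(size=P.shape) < P, k=1).astype(int)
adj = adj + adj.T
print(collapsed_loglik(adj, z_true), collapsed_loglik(adj, rng.permutation(z_true)))
```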

Citations: 11