
Latest Publications in Statistics in Medicine

A Novel Method for Inserting Dose Levels Mid-Trial in Early-Phase Oncology Combination Studies.
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70417
Matthew George, Ian Wadsworth, Pavel Mozgunov

The use of combination treatments in early-phase oncology trials is growing. The objective of these trials is to search for the maximum tolerated dose combination from a predefined set. However, cases in which the initial set of combinations does not contain one close to the target toxicity pose a significant challenge. Currently, solutions are typically ad hoc and may bring practical challenges. We propose a novel method for inserting dose levels mid-trial, which features a search for the contour partitioning the dose space into combinations with toxicity truly above and below the target toxicity. Establishing this contour with a degree of certainty suggests that no combination is close to the target toxicity, triggering an insertion. We examine our approach in a comprehensive simulation study applied to the PIPE design and two-dimensional Bayesian logistic regression model (BLRM), though any model-based or model-assisted design is an appropriate candidate. Our results demonstrate that, on average, the insertion method can increase the probability of selecting combinations close to the target toxicity, without increasing the probability of subtherapeutic or toxic recommendations.
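The abstract describes the insertion trigger only at a high level. The toy sketch below illustrates the general idea: classify each combination as confidently above or below the target toxicity from posterior draws, and trigger an insertion once the full contour is established. The posterior samples, thresholds, and dose names are invented and do not come from the paper's PIPE/BLRM implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
target = 0.30      # target toxicity rate
certainty = 0.90   # classification threshold (illustrative)

# Hypothetical posterior draws of each combination's toxicity probability,
# e.g., from a PIPE or BLRM fit (all numbers invented).
post = {
    "d1": rng.beta(2, 18, 4000),   # well below target
    "d2": rng.beta(3, 14, 4000),
    "d3": rng.beta(12, 10, 4000),  # well above target
    "d4": rng.beta(16, 8, 4000),
}

# Posterior probability that each combination's toxicity exceeds the target.
p_above = {d: float(np.mean(s > target)) for d, s in post.items()}

# The contour is "established" once every combination is confidently
# classified as above or below the target; no combination is then close
# to the target, which triggers a mid-trial insertion.
if all(p > certainty or p < 1 - certainty for p in p_above.values()):
    print("Insert a dose between the highest 'below' and lowest 'above' combination.")
```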

Citations: 0
Probabilistic Clustering Using Multivariate Growth Mixture Model in Clinical Settings-A Scleroderma Example.
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70450
Ji Soo Kim, Yizhen Xu, Rachel S Wallwork, Laura K Hummers, Ami A Shah, Scott L Zeger

Background: Scleroderma (systemic sclerosis; SSc) is a chronic autoimmune disease known for wide heterogeneity in patients' disease progression in multiple organ systems. Our goal is to guide clinical care by real-time classification of patients into clinically interpretable subpopulations based on their baseline characteristics and the temporal patterns of their disease progression.

Methods: A Bayesian multivariate growth mixture model was fit to identify subgroups of patients from the Johns Hopkins Scleroderma Center Research Registry who share similar lung function trajectories. We jointly modeled forced vital capacity (FVC) and diffusing capacity for carbon monoxide (DLCO) as pulmonary outcomes for 289 patients with SSc and anti-topoisomerase 1 antibodies and developed a framework to sequentially update class membership probabilities for any given patient based on her accumulating data.

Results: We identified a "stable" group of 150 patients for whom both biomarkers changed little from the date of disease onset over the next 10 years, and a "progressor" group of 139 patients that, on average, experienced a clinically significant decline in both measures starting soon after disease onset. For any given patient at any given time, our algorithm calculates the probability of belonging to the progressor group using both baseline characteristics and the patient's longitudinal FVC and DLCO observations.

Conclusions: Our method calculates the probability of being a fast progressor at baseline when no FVC and DLCO are observed, then sequentially updates it as more information becomes available. This sequential integration of patient data and classification of her disease trajectory has the potential to improve clinical decisions and ultimately patient outcomes.
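As a rough illustration of the sequential update described in the Conclusions, the sketch below performs a two-class Bayes update of the progressor probability as FVC measurements accumulate. The class trajectories, residual SD, and observations are invented; the paper's multivariate growth mixture model jointly uses FVC and DLCO and is substantially richer.

```python
import numpy as np
from scipy.stats import norm

prior_prog = 139 / 289          # progressor prevalence from the abstract

def mean_fvc(t, progressor):
    # Hypothetical class-specific mean %FVC at t years since disease onset.
    return 95 - (4.0 if progressor else 0.3) * t

sigma = 8.0                     # assumed residual SD

def update(prior, t, fvc):
    """One Bayes update of P(progressor) after a new FVC observation."""
    like_p = norm.pdf(fvc, mean_fvc(t, True), sigma)
    like_s = norm.pdf(fvc, mean_fvc(t, False), sigma)
    post = like_p * prior
    return post / (post + like_s * (1 - prior))

p = prior_prog
for t, fvc in [(0.5, 92), (1.5, 84), (2.5, 78)]:   # accumulating data
    p = update(p, t, fvc)
    print(f"t = {t:.1f} y, FVC = {fvc}: P(progressor) = {p:.2f}")
```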

Citations: 0
Bayesian Sample Size Calculations for External Validation Studies of Risk Prediction Models.
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70389
Mohsen Sadatsafavi, Paul Gustafson, Solmaz Setayeshgar, Laure Wynants, Richard D Riley

Contemporary sample size calculations for external validation of risk prediction models require users to specify fixed values of assumed model performance metrics alongside target precision levels (e.g., 95% CI widths). However, due to the finite samples of previous studies, our knowledge of true model performance in the target population is uncertain, and so choosing fixed values represents an incomplete picture. Likewise, for net benefit (NB) as a measure of clinical utility, the relevance of conventional precision-based inference is doubtful. In this work, we propose a general Bayesian framework for multi-criteria sample size considerations for prediction models for binary outcomes. For statistical metrics of performance (e.g., discrimination and calibration), we propose sample size rules that target desired expected precision or desired assurance probability that the precision criteria will be satisfied. For NB, we propose rules based on Optimality Assurance (the probability that the planned study correctly identifies the optimal strategy) and Value of Information (VoI) analysis, which quantifies the expected gain in NB by learning about model performance from a validation study of a given size. We showcase these developments in a case study on the validation of a risk prediction model for deterioration among hospitalized COVID-19 patients. Compared to conventional sample size calculation methods, a Bayesian approach requires explicit quantification of uncertainty around model performance, and thereby enables flexible sample size rules based on expected precision, assurance probabilities, and VoI. In our case study, calculations based on VoI for NB suggest considerably lower sample sizes are required than when focusing on the precision of calibration metrics. This approach is implemented in the accompanying software.
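A stripped-down version of the assurance rule can be sketched with a plain event proportion standing in for the paper's discrimination and calibration metrics; the prior, target width, and candidate sample sizes below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def assurance(n, n_sim=20_000, target_width=0.10):
    """Probability that the 95% Wald CI width for an event proportion is
    below target_width, averaging over prior uncertainty in the truth."""
    p_true = rng.beta(40, 160, n_sim)      # assumed prior, mean 0.20
    p_hat = rng.binomial(n, p_true) / n
    width = 2 * 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
    return float(np.mean(width < target_width))

for n in (200, 250, 300):
    print(n, round(assurance(n), 3))       # assurance rises with n
```

The smallest n whose assurance clears a chosen level (say 0.9) would then be the recommended validation sample size under this criterion.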

Citations: 0
Missing Value Imputation With Adversarial Random Forests-MissARF.
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70379
Pegah Golchian, Jan Kapar, David S Watson, Marvin N Wright

Handling missing values is a common challenge in biostatistical analyses, typically addressed by imputation methods. We propose a novel, fast, and easy-to-use imputation method called missing value imputation with adversarial random forests (MissARF), based on generative machine learning, that provides both single and multiple imputation. MissARF employs adversarial random forest (ARF) for density estimation and data synthesis. To impute a missing value of an observation, we condition on the non-missing values and sample from the estimated conditional distribution generated by ARF. Our experiments demonstrate that MissARF performs comparably to state-of-the-art single and multiple imputation methods in terms of imputation quality and fast runtime with no additional costs for multiple imputation.
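The core mechanism, imputing by sampling from an estimated conditional distribution, can be illustrated with a bivariate Gaussian standing in for the ARF density estimate; the real method fits an adversarial random forest instead, and everything below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "fitted" joint density (a Gaussian stand-in for the ARF density).
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 1.0]])

def impute_one(x0_obs, m=5):
    """Multiple imputation of X1 given observed X0: draw m samples from
    the conditional distribution of X1 | X0 = x0_obs."""
    cond_mean = mu[1] + Sigma[0, 1] / Sigma[0, 0] * (x0_obs - mu[0])
    cond_sd = np.sqrt(Sigma[1, 1] - Sigma[0, 1] ** 2 / Sigma[0, 0])
    return cond_mean + cond_sd * rng.standard_normal(m)

print(impute_one(1.2))   # five draws -> five completed datasets
```

Because imputations are draws rather than conditional means, between-imputation variability is preserved, which is what lets the same machinery serve both single and multiple imputation.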

Citations: 0
An Empirical Assessment of the Cost of Dichotomization of the Outcome of Clinical Trials.
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70402
Erik W van Zwet, Frank E Harrell, Stephen J Senn

We have studied 21 435 unique randomized controlled trials (RCTs) from the Cochrane Database of Systematic Reviews (CDSR). Of these trials, 7224 (34%) have a continuous (numerical) outcome and 14 211 (66%) have a binary outcome. We find that trials with a binary outcome have larger sample sizes on average, but also larger standard errors and fewer statistically significant results. We conclude that researchers tend to increase the sample size to compensate for the low information content of binary outcomes, but not sufficiently. In many cases, the binary outcome is the result of dichotomization of a continuous outcome, which is sometimes referred to as "responder analysis". In those cases, the loss of information is avoidable. Burdening more participants than necessary is wasteful, costly, and unethical. We provide a method to convert a sample size calculation for the comparison of two proportions into one for the comparison of the means of the underlying continuous outcomes. This demonstrates how much the sample size may be reduced if the outcome were not dichotomized. We also provide a method to calculate the loss of information after a dichotomization. We apply this method to all the trials from the CDSR with a binary outcome, and estimate that on average, only about 60% of the information is retained after dichotomization. We provide R code and a shiny app at: https://vanzwet.shinyapps.io/info_loss/ to do these calculations. We hope that quantifying the loss of information will discourage researchers from dichotomizing continuous outcomes. Instead, we recommend they "model continuously but interpret dichotomously". For example, they might present "percentage achieving clinically meaningful improvement" derived from a continuous analysis rather than by dichotomizing raw data.
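The authors provide R code; the back-of-envelope Python sketch below conveys the conversion idea under the simplifying assumption that the binary outcome arises by thresholding a latent normal outcome, so the standardized mean difference is the difference of probits. The proportions chosen are illustrative, and the paper's method and shiny app may differ in detail.

```python
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

p1, p2 = 0.30, 0.45    # "responder" proportions in the two arms

# Per-arm n for comparing two proportions (normal approximation).
n_binary = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2

# Equivalent standardized mean difference on the latent continuous scale.
d = norm.ppf(p2) - norm.ppf(p1)
n_continuous = 2 * z**2 / d**2

print(round(n_binary), round(n_continuous))   # ~160 vs. ~99 per arm
```

The ratio of the two sample sizes (here roughly 0.6) is one way to see the information lost to dichotomization, echoing the roughly 60% average retention the authors report.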

Citations: 0
Causal Inference With Survey Data: A Robust Framework for Propensity Score Weighting in Probability and Non-Probability Samples.
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70420
Wei Liang, Changbao Wu

Confounding bias and selection bias are two major challenges in causal inference with observational data. While numerous methods have been developed to mitigate confounding bias, they often assume that the data are representative of the study population and ignore the potential selection bias introduced during data collection. In this paper, we propose a unified weighting framework, survey-weighted propensity score weighting, to simultaneously address both confounding and selection biases when the observational dataset is a probability survey sample from a finite population, which is itself viewed as a random sample from the target superpopulation. The proposed method yields a doubly robust inferential procedure for a class of population weighted average treatment effects. We further extend our results to non-probability observational data when the sampling mechanism is unknown but auxiliary information of the confounding variables is available from an external probability sample. We focus on practically important scenarios where the confounders are only partially observed in the external data. Our analysis reveals that the key variables in the external data are those related to both treatment effect heterogeneity and the selection mechanism. We also discuss how to combine auxiliary information from multiple reference probability samples. Monte Carlo simulations and an application to a real-world non-probability observational dataset demonstrate the superiority of our proposed methods over standard propensity score weighting approaches.
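The basic weight construction can be sketched as design weight times inverse-probability-of-treatment weight, with the propensity model itself fit using the design weights. The paper's doubly robust estimator has additional components not shown here, and all data below are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
x = rng.standard_normal(n)                        # confounder
w_design = rng.uniform(1, 5, n)   # arbitrary stand-ins for real design weights
a = rng.binomial(1, 1 / (1 + np.exp(-x)))         # treatment
y = 1.0 * a + 0.5 * x + rng.standard_normal(n)    # outcome; true ATE = 1

# Survey-weighted propensity model (design weights enter as case weights).
X = sm.add_constant(x)
ps = sm.GLM(a, X, family=sm.families.Binomial(),
            var_weights=w_design).fit().fittedvalues

w = w_design * np.where(a == 1, 1 / ps, 1 / (1 - ps))   # combined weight
ate = (np.sum(w * a * y) / np.sum(w * a)
       - np.sum(w * (1 - a) * y) / np.sum(w * (1 - a)))
print(round(ate, 2))   # close to the true effect of 1.0
```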

Citations: 0
A Tutorial on Implementing Statistical Methods for Estimating Excess Death With a Case Study and Simulations on Estimating Excess Death in the Post-COVID-19 United States.
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70396
Lillian Rountree, Lauren Zimmermann, Lucy Teed, Daniel M Weinberger, Bhramar Mukherjee

Excess death estimation, defined as the difference between the observed and expected death counts, is a popular technique for assessing the overall death toll of a public health crisis. The expected death count is defined as the expected number of deaths in the counterfactual scenario where prevailing conditions continued and the public health crisis did not occur. While excess death is frequently obtained by estimating the expected number of deaths and subtracting it from the observed number, some methods calculate this difference directly, based on historic mortality data and direct predictors of excess deaths. This tutorial provides guidance to researchers on the application of four popular methods for estimating excess death: the World Health Organization's Bayesian model; The Economist's gradient boosting algorithm; Acosta and Irizarry's quasi-Poisson model; and the Institute for Health Metrics and Evaluation's ensemble model. We begin with explanations of the mathematical formulation of each method and then demonstrate how to code each method in R, applying the code for a case study estimating excess death in the United States for the post-pandemic period of 2022-2024. An additional simulation study estimating excess death for three different scenarios and three different extrapolation periods further demonstrates general trends in performance across methods; together, these two studies show how the estimates by these methods and their accuracy vary widely depending on the choice of input covariates, reference period, extrapolation period, and tuning parameters. Caution should be exercised when extrapolating for estimating excess death, particularly in cases where the reference period of pre-event conditions is temporally distant (> 5 years) from the period of interest. In place of committing to one method under one setting, we advocate for using multiple excess death methods in tandem, comparing and synthesizing their results and conducting thorough sensitivity analyses as best practice for estimating excess death for a period of interest. We also call for more detailed simulation studies and benchmark datasets to better understand the accuracy and comparative performance of methods estimating excess death.
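As a minimal example of one of the four approaches, the sketch below mirrors the structure of Acosta and Irizarry's quasi-Poisson model on simulated weekly counts: fit expected deaths on a reference period with trend and seasonality, extrapolate, and take observed minus expected. The tutorial itself works in R; everything here, including the data, is invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
weeks = pd.date_range("2015-01-04", periods=520, freq="W")
t = np.arange(len(weeks))
season = 2 * np.pi * t / 52.18
deaths = rng.poisson(np.exp(10 + 1e-4 * t + 0.08 * np.cos(season)))

df = pd.DataFrame({"deaths": deaths, "t": t,
                   "cos": np.cos(season), "sin": np.sin(season)})
train = df[weeks < "2020-03-01"]              # reference period

X = sm.add_constant(train[["t", "cos", "sin"]])
fit = sm.GLM(train["deaths"], X,
             family=sm.families.Poisson()).fit(scale="X2")  # quasi-Poisson

expected = fit.predict(sm.add_constant(df[["t", "cos", "sin"]]))
excess = df["deaths"] - expected              # observed minus expected
# No mortality shock was simulated, so cumulative excess should be near zero.
print(round(excess[weeks >= "2020-03-01"].sum()))
```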

Citations: 0
Correction to "Model-Robust Standardization in Cluster-Randomized Trials". 对“聚类随机试验模型稳健标准化”的修正。
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70447
{"title":"Correction to \"Model-Robust Standardization in Cluster-Randomized Trials\".","authors":"","doi":"10.1002/sim.70447","DOIUrl":"https://doi.org/10.1002/sim.70447","url":null,"abstract":"","PeriodicalId":21879,"journal":{"name":"Statistics in Medicine","volume":"45 3-5","pages":"e70447"},"PeriodicalIF":1.8,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146228758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Generalized Harmonic Mean for p-Values: Combining Dependent and Independent Tests.
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70439
Zhengbang Li, Xinjie Zhou

In medical research, particularly in fields such as genomics, multi-center clinical trials, and meta-analysis, effectively combining the p-values from multiple related hypothesis tests has always been a challenging statistical issue. To address this problem and enhance the statistical power of comprehensive analysis, this study proposes a generalized harmonic mean for p-values (GHMP(ξ)) combination method and builds two kinds of combination tests based on this framework. The first kind of test is designed for applications with small significance levels and has more lenient conditions for adapting to correlations, making it suitable for the complex dependency structures commonly found in actual research. The second kind of test introduces a novel high-order tail approximation technique based on stable distribution theory, which can more accurately estimate the extreme tail probabilities at large significance levels under independent or weakly correlated conditions. Extensive simulation experiments show that both kinds of tests perform robustly across various configurations, with statistical power not inferior to the traditional Cauchy combination test (CCT) and minimum p-value (MinP) methods, and demonstrate superior detection capabilities in several scenarios. Additionally, GHMP(ξ) has high computational efficiency and has been empirically validated in real genetic data. These characteristics make it a reliable and practical analytical tool for high-dimensional medical research, such as genome-wide association studies (GWAS) and large-scale meta-analysis.
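The abstract does not spell out the GHMP(ξ) statistic; assuming it is the power mean of order -ξ of the p-values (so ξ = 1 recovers the ordinary harmonic mean), the statistic and the CCT benchmark can be sketched as below. Calibrating the statistic into a combined p-value requires the stable-law tail approximations developed in the paper, which are not reproduced here.

```python
import numpy as np
from scipy.stats import cauchy

def cauchy_combination(p):
    """Cauchy combination test (CCT), one of the paper's benchmarks."""
    t = np.mean(np.tan((0.5 - np.asarray(p)) * np.pi))
    return cauchy.sf(t)

def generalized_harmonic_mean(p, xi=1.0):
    """Assumed form of GHMP(xi): power mean of order -xi of the p-values;
    xi = 1 gives the ordinary harmonic mean."""
    p = np.asarray(p)
    return float(np.mean(p ** -xi) ** (-1.0 / xi))

pvals = [0.3, 0.8, 1e-5, 0.45]            # one strong signal among noise
print(cauchy_combination(pvals))          # small: driven by the 1e-5
print(generalized_harmonic_mean(pvals))   # also pulled toward the signal
```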

Citations: 0
Integrating Omics and Pathological Imaging Data for Cancer Prognosis via a Deep Neural Network-Based Cox Model.
IF 1.8 | Tier 4 (Medicine) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2026-02-01 | DOI: 10.1002/sim.70435
Jingmao Li, Shuangge Ma

Modeling prognosis has unique significance in cancer research. For this purpose, omics data have been routinely used. In a series of recent studies, pathological imaging data derived from biopsy have also been shown as informative. Motivated by the complementary information contained in omics and pathological imaging data, we examine integrating them under a Cox modeling framework. The two types of data have distinct properties: for omics variables, which are more actionable and demand stronger interpretability, we model their effects in a parametric way; whereas for pathological imaging features, which are not actionable and do not have lucid interpretations, we model their effects in a nonparametric way for better flexibility and prediction performance. Specifically, we adopt deep neural networks (DNNs) for nonparametric estimation, considering their advantages over regression models in accommodating nonlinearity and providing better prediction. As both omics and pathological imaging data are high-dimensional and are expected to contain noise, we propose applying penalization for selecting relevant variables and regulating estimation. Different from some existing studies, we pay unique attention to overlapping information contained in the two types of data. Numerical investigations are carefully carried out. In the analysis of TCGA data, sensible selection and superior prediction performance are observed, which demonstrates the practical utility of the proposed analysis.
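A compact sketch of such a partly linear Cox model: a lasso-penalized linear term for the omics variables plus a small MLP for the imaging features, trained by minimizing a Breslow-type negative log partial likelihood. The architecture, penalty weight, and data are illustrative, not the authors' settings.

```python
import torch
import torch.nn as nn

n, p_omics, p_img = 200, 50, 30
X_om = torch.randn(n, p_omics)            # omics covariates (simulated)
X_im = torch.randn(n, p_img)              # imaging features (simulated)
time = torch.rand(n)                      # event/censoring times
event = (torch.rand(n) < 0.7).float()     # 1 = event observed

beta = nn.Parameter(torch.zeros(p_omics))             # parametric part
mlp = nn.Sequential(nn.Linear(p_img, 16), nn.ReLU(),
                    nn.Linear(16, 1))                 # nonparametric part

def neg_log_partial_likelihood(risk):
    order = torch.argsort(time, descending=True)   # build risk sets by time
    r = risk[order]
    log_cum = torch.logcumsumexp(r, dim=0)         # log-sum over each risk set
    return -torch.sum((r - log_cum) * event[order]) / event.sum()

opt = torch.optim.Adam([beta, *mlp.parameters()], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    risk = X_om @ beta + mlp(X_im).squeeze(-1)
    loss = neg_log_partial_likelihood(risk) + 0.01 * beta.abs().sum()  # lasso
    loss.backward()
    opt.step()
```

The penalized linear coefficients stay directly interpretable (and sparse), while the MLP absorbs nonlinear effects of the imaging features, matching the division of labor described in the abstract.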

Citations: 0