
Statistics in Medicine: Latest Publications

Predicting Individual Risk of Advanced Adenoma Based on the Interval-Censored Recurrent Adenoma Event and Informative Screening Time.
IF 1.8, CAS Region 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-03-01, DOI: 10.1002/sim.70478
Yipeng Wei, May AlHusseini, Hormuzd A Katki, Rajeshwari Sundaram, Qing Pan

Panel count data are common in cancer screening. In the context of colorectal cancer screening, our work focuses on predicting the probability of advanced adenoma conditional on patient-level risk factors and/or event history. We implement the joint frailty model proposed by Huang et al., which combines a non-stationary Poisson process for recurrent adenoma events with semi-parametric Cox models for informative screening times, correlated through a latent frailty variable; coefficients and baseline intensity functions are estimated by estimating equations. The subject-specific frailty value is estimated by the borrow-strength method. In addition, marginal models for the adenoma and screening events are applicable when average covariate effects at the population level are of interest. Predictions based on the marginal model and on the frailty models are compared for patients with and without a screening history. When a patient's screening history is available and sufficient adenoma events are observed, predictions based on the frailty model with estimated subject-specific frailty are superior. However, under early censoring, when adenoma events are not observed for most patients and screening history is unavailable, predictions based on the marginal model perform better. Furthermore, for patients' future screening, individualized screening intervals derived from dynamic predictions of advanced adenoma risk detect adenoma events earlier, with a shorter lag between the adenoma state transition and the screening, than the current practice of regular screening intervals.
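The recurrent-event ingredient of the joint frailty model can be illustrated with a minimal simulation sketch. This is not the fitted model from the paper: for simplicity it assumes a homogeneous (rather than non-stationary) baseline intensity and a mean-one gamma frailty, and all rates, shapes, and sample sizes are invented.

```python
# Illustrative sketch (not the paper's fitted model): recurrent adenoma-type
# events from a subject-specific Poisson process whose intensity is a shared
# gamma frailty Z times a baseline rate. All numbers are made up.
import numpy as np

rng = np.random.default_rng(7)

def simulate_recurrent_events(z, base_rate=0.3, follow_up=10.0):
    """Event times on [0, follow_up] from a homogeneous Poisson process
    with subject-level intensity z * base_rate (gaps are Exponential)."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1 / (z * base_rate))
        if t > follow_up:
            return np.array(times)
        times.append(t)

n = 5000
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)   # mean-one gamma frailty
counts = np.array([len(simulate_recurrent_events(z)) for z in frailty])

# Marginal mean count = E[Z] * base_rate * follow_up = 1 * 0.3 * 10 = 3.
print(round(counts.mean(), 2))  # close to 3
```

The frailty inflates the variance of the event counts relative to a plain Poisson process, which is exactly what a shared latent frailty is meant to capture.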

Citations: 0
Estimating Risk Differences Using Large Healthcare Data Networks for Medical Product Post-Market Safety Outcomes in a Distributed Data Setting and Allowing for Active Post-Market Surveillance.
IF 1.8, CAS Region 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-03-01, DOI: 10.1002/sim.70440
Andrea J Cook, Robert D Wellman, Tracey Marsh, Ram C Tiwari, Michael D Nguyen, Estelle Russek-Cohen, Yuexiang Peng, Jennifer C Nelson

Risk differences allow decision makers to easily estimate the excess safety risk associated with a medical product relative to its potential benefits. However, in post-market observational surveillance studies that actively monitor (e.g., sequentially over time) for safety risks of new medical products, available methods target a relative measure (e.g., the odds ratio or relative risk), which can be especially unstable in the rare-event setting. These studies are typically conducted within distributed healthcare networks (e.g., the Food and Drug Administration [FDA] Sentinel and the Centers for Disease Control [CDC] Vaccine Safety Datalink), with patient-level data protected behind firewalls but aggregate, deidentified data shared for centralized analyses. We propose an inverse probability of treatment weighting (IPTW) method that uses site-specific propensity scores to estimate site-specific risk differences, which are combined into an overall stratified risk-difference estimate. This method is tailored to the rare-event setting and requires minimal data sharing. The stratified IPTW approach is then extended to the active post-market surveillance setting by incorporating group sequential monitoring boundaries using a novel permutation approach. A simulation study is conducted to evaluate the performance of the new methods relative to two centralized analysis approaches, and the methods are applied to a safety surveillance study comparing the risk of febrile seizure between two vaccines using FDA Sentinel data from three healthcare organizations.
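The site-level step can be sketched as a toy version of stratified IPTW. This is hedged and illustrative, not the authors' estimator or variance procedure: the single covariate, event rates, number of sites, and the simple site-size weighting used to combine the site estimates are all invented for the example.

```python
# Toy sketch of a stratified IPTW risk difference across sites (illustrative
# only; not the paper's exact estimator or variance formula).
import numpy as np

rng = np.random.default_rng(0)

def site_iptw_rd(x, a, y):
    """Site-specific IPTW risk difference with a one-covariate logistic
    propensity model fit by Newton-Raphson."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(25):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (a - p)                              # score
        hess = -(X * (p * (1 - p))[:, None]).T @ X        # Hessian of log-lik
        beta -= np.linalg.solve(hess, grad)
    ps = 1 / (1 + np.exp(-X @ beta))
    w = a / ps + (1 - a) / (1 - ps)          # inverse probability of treatment weights
    r1 = np.sum(w * a * y) / np.sum(w * a)   # weighted risk, treated
    r0 = np.sum(w * (1 - a) * y) / np.sum(w * (1 - a))
    return r1 - r0, len(y)

# Simulate three sites; only the aggregate (rd, n) per site is shared centrally.
site_stats = []
for _ in range(3):
    n = 2000
    x = rng.normal(size=n)
    a = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x)))
    y = rng.binomial(1, np.clip(0.05 + 0.02 * a + 0.01 * x, 0, 1))  # true RD = 0.02
    site_stats.append(site_iptw_rd(x, a, y))

# Combine site-specific risk differences, here simply weighted by site size.
rds, ns = zip(*site_stats)
overall_rd = np.average(rds, weights=ns)
print(round(overall_rd, 4))
```

Only the site-level summaries cross the firewall in this sketch, which mirrors the minimal-data-sharing motivation described in the abstract.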

Citations: 0
Using Quadratic Programming to Reconstruct Data From Published Survival and Competing Risks Analyses.
IF 1.8, CAS Region 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-03-01, DOI: 10.1002/sim.70474
Andrew C Titman

The ability to retrieve pseudo-individual patient data (IPD) from published survival study results is important to facilitate meta-analysis, evidence synthesis or secondary data analyses for the purpose of decision modeling for cost effectiveness analysis. While established methods exist for retrieving pseudo-IPD from Kaplan-Meier plots, these algorithms are not easily extendable to other types of survival data, nor do they allow all available information to be incorporated. An optimization-based approach is proposed where the task of reconstructing the IPD is formulated as a quadratic program (QP) with linear constraints. The method easily allows auxiliary information such as marked censoring times. Moreover, the same approach can be used to reconstruct patient-level competing risks survival data from published cumulative incidence functions. In simulation, the QP-based method is shown to outperform existing algorithms particularly when data on numbers at risk and marked censoring times are available. The methods are illustrated through reconstruction of data from a published study on patients with advanced stage follicular lymphoma.
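The core idea, reconstruction as a constrained least-squares (quadratic-programming) fit, can be sketched in a toy form. This is not the paper's algorithm: it assumes a Nelson-Aalen-style approximation, -log S(t_k) ≈ Σ_{j≤k} d_j/n_j, with no censoring between interval ends, and the "published" survival values and risk-set sizes are invented.

```python
# Toy illustration of QP-based reconstruction (not the paper's algorithm):
# recover per-interval event counts d_k from published survival values and
# numbers at risk via nonnegativity-constrained least squares on the
# cumulative-hazard scale.
import numpy as np
from scipy.optimize import lsq_linear

# "Published" inputs: survival at interval ends and numbers at risk.
S = np.array([0.90, 0.78, 0.70, 0.61])
n_risk = np.array([100, 88, 75, 66])

# Approximation: -log S_k ≈ sum_{j<=k} d_j / n_j, so the design matrix is
# lower-triangular with entries 1 / n_j.
K = len(S)
A = np.tril(np.ones((K, K))) / n_risk[None, :]
b = -np.log(S)

# Box-constrained least squares (a QP): event counts must be nonnegative.
res = lsq_linear(A, b, bounds=(0, np.inf))
d_hat = np.round(res.x)  # integerized event counts per interval
print(d_hat)
```

Extra published information, such as marked censoring times or later risk-set counts, would enter the real method as additional linear constraints on the same unknowns.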

Citations: 0
Bayesian Tensor Decomposition for Clustering Latent Symptom Profiles for Verbal Autopsy Data.
IF 1.8, CAS Region 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-03-01, DOI: 10.1002/sim.70475
Yu Zhu, Zehang Richard Li

Cause-of-death data is fundamental for understanding population health trends and inequalities as well as designing and evaluating public health interventions. A significant proportion of global deaths, particularly in low- and middle-income countries (LMICs), do not have medically certified causes assigned. In such settings, verbal autopsy (VA) is a widely adopted approach to estimate disease burdens by interviewing caregivers of the deceased. Recently, latent class models have been developed to model the joint distribution of symptoms and perform probabilistic cause-of-death assignment. A large number of latent classes are usually needed in order to characterize the complex dependence among symptoms, making the estimated symptom profiles challenging to summarize and interpret. In this paper, we propose a flexible Bayesian tensor decomposition framework that balances the predictive accuracy of the cause-of-death assignment task and the interpretability of the latent structures. The key to our approach is to partition symptoms into groups and model the joint distributions of group-level symptom sub-profiles. The proposed methods achieve better predictive accuracy than existing VA methods and provide a more parsimonious representation of the symptom distributions. We show our methods provide new insights into the clustering patterns of both symptoms and causes using the PHMRC gold-standard VA dataset.
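The latent class building block that the paper extends can be shown in a toy simulation: binary symptoms are conditionally independent within each latent class, so the joint symptom distribution is a mixture of products. The class count, symptom count, and probabilities below are invented, and this sketch omits the paper's tensor decomposition and symptom grouping entirely.

```python
# Toy latent class simulation (illustrative; not the paper's tensor model).
import numpy as np

rng = np.random.default_rng(11)
K, S, n = 3, 5, 50_000
weights = np.array([0.5, 0.3, 0.2])           # latent class proportions
probs = rng.uniform(0.1, 0.9, size=(K, S))    # class-conditional symptom probs

z = rng.choice(K, size=n, p=weights)          # latent class per death
symptoms = rng.binomial(1, probs[z])          # (n, S) binary symptom matrix

# Sanity check: empirical marginal prevalences match the mixture-implied ones.
implied = weights @ probs
print(np.abs(symptoms.mean(axis=0) - implied).max() < 0.02)
```

The interpretability problem the paper targets is visible even here: as K grows, the K-by-S table of class profiles becomes hard to summarize, which motivates modeling group-level sub-profiles instead.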

Citations: 0
A Bayesian Approach for Robust Longitudinal Envelope Models.
IF 1.8, CAS Region 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-03-01, DOI: 10.1002/sim.70496
Peng Zeng, Yushan Mu

The envelope model provides a dimension-reduction framework for multivariate linear regression. However, existing envelope methods typically assume normally distributed random errors and do not accommodate repeated measures in longitudinal studies. To address these limitations, we propose the robust longitudinal envelope model (RoLEM). RoLEM employs a scale mixture of matrix-variate normal distributions to model random errors, allowing it to handle potential outliers, and incorporates flexible correlation structures for repeated measurements. In addition, we introduce new prior and proposal distributions on the Grassmann manifold to facilitate Bayesian inference for RoLEM. Simulation studies and real data analysis demonstrate the superior performance of the proposed method.
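The robustness ingredient, a scale mixture of normals, can be sketched in its simplest univariate form (the paper uses matrix-variate normals; this sketch does not). Mixing N(0, 1/w) with w ~ Gamma(v/2, rate v/2) yields Student-t errors with v degrees of freedom, whose heavy tails absorb outliers; v and the sample size are invented here.

```python
# Minimal univariate sketch of a scale mixture of normals (illustrative only;
# the paper's model is matrix-variate). Gamma-mixed normals give t_v errors.
import numpy as np

rng = np.random.default_rng(5)
v, n = 4.0, 400_000
w = rng.gamma(v / 2, 2 / v, size=n)        # mixing weights, mean one
e = rng.normal(0, 1 / np.sqrt(w))          # scale-mixture (t_v) errors

# Heavier tails than the standard normal: compare an extreme quantile of |e|
# with the normal 99% two-sided cutoff (about 2.58).
q99 = np.quantile(np.abs(e), 0.99)
print(q99 > 2.58)
```

Small mixing weights w inflate the conditional variance for individual draws, which is how the model downweights outlying observations during inference.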

Citations: 0
The Impact of Two Data-Generating Processes for Competing Risk Data on the Discrimination and Calibration of Two Types of Competing Risk Regression Models.
IF 1.8, CAS Region 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-03-01, DOI: 10.1002/sim.70468
Peter C Austin, Hein Putter

Monte Carlo simulations are an important tool in modern statistical research. The data-generating process is foundational to any simulation. In survival analysis, a competing risk is an event whose occurrence precludes the occurrence of the primary event of interest. Two data-generating processes have been described for simulating competing risk data: one based on all the cause-specific hazard functions for the different types of events, and one based on a subdistribution hazard model for the primary event of interest. There is a paucity of research on the impact of the choice of data-generating process. We used a series of Monte Carlo simulations to evaluate the impact of the choice of data-generating process on the performance of prediction models when assessing discrimination using the time-dependent AUC and accuracy using the time-dependent Brier score. We also assessed the impact of the choice of competing risk regression used for computing smoothed event probabilities for use when computing the calibration metrics ICI (integrated calibration index), E50, and E90. The impact of discordance between the fitted model and the data-generating process on both the time-dependent AUC and the time-dependent Brier score was minimal. When computing the ICI, E50, and E90, we recommend that researchers use a model for computing smoothed event probabilities that is concordant with the type of model whose calibration is being assessed.
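The first data-generating process described above can be sketched in its simplest form: with constant cause-specific hazards, the all-cause event time is exponential with the summed rate, and the cause is drawn with probability proportional to its hazard. The rates and sample size are illustrative, and censoring is omitted.

```python
# Minimal sketch of the cause-specific-hazards DGP for competing risks
# (constant hazards, no censoring; all numbers invented).
import numpy as np

rng = np.random.default_rng(42)
lam1, lam2 = 0.10, 0.05      # cause-specific hazards for causes 1 and 2
n = 100_000

t = rng.exponential(1 / (lam1 + lam2), size=n)        # all-cause event times
cause = 1 + rng.binomial(1, lam2 / (lam1 + lam2), n)  # cause label: 1 or 2

# The empirical share of cause-1 events should match lam1 / (lam1 + lam2).
share_cause1 = np.mean(cause == 1)
print(round(share_cause1, 3))  # close to 2/3
```

The subdistribution-based DGP instead starts from a model for the cumulative incidence of the primary cause and fills in the competing cause, which is why the two processes can induce different model fit.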

Citations: 0
Using Causal Diagrams to Assess Parallel Trends in Difference-in-Differences Studies.
IF 1.8, CAS Region 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-03-01, DOI: 10.1002/sim.70459
Audrey Renson, Oliver Dukes, Zach Shahn

Difference-in-differences (DID) is popular because it can allow for unmeasured confounding when the key assumption of parallel trends holds. However, there exists little guidance on how to decide a priori whether this assumption is reasonable. We attempt to develop such guidance by considering the relationship between a causal diagram and the parallel trends assumption. This is challenging because parallel trends is scale-dependent and causal diagrams are generally scale-independent. We develop conditions under which, given a nonparametric causal diagram, one can reject or fail to reject parallel trends. In particular, we adopt a linear faithfulness assumption, which states that all graphically connected variables are correlated, and which is often reasonable in practice. We show that parallel trends can be rejected if either (i) the treatment is affected by pre-treatment outcomes, or (ii) there exist unmeasured confounders for the effect of treatment on pre-treatment outcomes that are not confounders for the post-treatment outcome, or vice versa. We also argue that parallel trends should be strongly questioned if (iii) the pre-treatment outcomes causally affect the post-treatment outcomes, since there exist reasonable semiparametric models in which such an effect violates parallel trends. When (i-iii) are absent, a necessary and sufficient condition for parallel trends is that the association between unmeasured confounders and potential outcomes is constant on an additive scale, pre- and post-treatment. We discuss our approach in the context of the effect of Medicaid expansion under the US Affordable Care Act on health insurance coverage rates.
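Condition (i), treatment affected by the pre-treatment outcome, can be illustrated with a small simulation that is not from the paper: when assignment depends on the baseline outcome, regression to the mean makes group trends non-parallel and biases the canonical two-period DID estimate. The data-generating process and effect size below are invented.

```python
# Illustrative simulation (not from the paper): DID bias when treatment
# depends on the pre-treatment outcome (condition (i) above).
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
u = rng.normal(size=n)                            # unmeasured confounder
y0 = u + rng.normal(size=n)                       # pre-treatment outcome
a = (y0 + rng.normal(size=n) > 0).astype(float)   # treatment depends on y0
tau = 1.0                                         # true treatment effect
y1 = u + tau * a + rng.normal(size=n)             # post-treatment outcome

# Canonical two-period DID: difference of group-specific trends.
did = ((y1[a == 1].mean() - y0[a == 1].mean())
       - (y1[a == 0].mean() - y0[a == 0].mean()))
print(round(did, 2))  # far from tau = 1.0, illustrating the bias
```

Here the transient noise in y0 that pushed units into treatment averages away by the second period, so the treated group's trend is pulled down relative to the control group's, exactly the mechanism behind rejecting parallel trends under condition (i).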

Citations: 0
Confidence Intervals for Comparing Two Independent Folded Normals: A Case Study in Bunion Surgery.
IF 1.8, CAS Region 4 (Medicine), Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY, Pub Date: 2026-03-01, DOI: 10.1002/sim.70494
Eleonora Di Carluccio, Sarah Ogutu, Ozkan Köse, Henry G Mwambi, Andreas Ziegler

The absolute change in the angle measured immediately after surgery and after bone healing is a clinically relevant endpoint for judging the stability of an osteotomy. Assuming that the difference in angles is normally distributed, the absolute difference follows a folded normal distribution. The confidence interval (CI) for the absolute angle change of a novel fixation screw compared to a standard fixation screw may be used for evaluating non-inferiority. We suggest that the Welch statistic may serve as the basis for CI calculations of the difference between two folded normals. The coverage probabilities of the derived CIs are investigated by simulations. We illustrate the approaches with data from a randomized controlled trial and an observational study on bunion surgery, in which magnesium-based and titanium-based fixation screws were compared. In the simulation studies, asymptotic and both non-parametric and parametric bootstrap CIs based on the Welch statistic were close to nominal coverage levels. When sample sizes differed between groups, the t-statistic-based CIs did not always meet the nominal coverage levels. Methods based on chi-square distributions were not deemed appropriate for comparing two folded normals. The re-analysis of the bunion trial permitted the conclusion of non-inferiority for the primary endpoint, the absolute difference between baseline and six-month follow-up for the distal metatarsal articular angle, between magnesium-based and titanium-based fixation screws. We recommend the use of CIs based on the Welch statistic to evaluate non-inferiority in trials in which the stability of angles after osteotomy is to be compared.
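A Welch-type CI for the difference in folded normal means can be sketched as follows. This is a hedged toy version, not the paper's exact procedure or its bootstrap variants: the group means, standard deviation, and sample sizes are invented stand-ins for the two screw groups.

```python
# Hedged sketch of a Welch-type CI for the difference in means of two
# independent folded normals (illustrative; not the paper's exact method).
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def folded_mean(mu, sigma):
    """Theoretical mean of |X| when X ~ N(mu, sigma^2)."""
    return (sigma * math.sqrt(2 / math.pi) * math.exp(-mu**2 / (2 * sigma**2))
            + mu * (1 - 2 * stats.norm.cdf(-mu / sigma)))

# Simulated angle changes in two groups: normal differences, then folded.
x = np.abs(rng.normal(1.0, 2.0, size=120))   # e.g., novel fixation screw (hypothetical)
y = np.abs(rng.normal(1.2, 2.0, size=110))   # e.g., standard fixation screw (hypothetical)

# Welch statistic on the folded observations.
mx, my = x.mean(), y.mean()
vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
se = math.sqrt(vx + vy)
df = (vx + vy) ** 2 / (vx**2 / (len(x) - 1) + vy**2 / (len(y) - 1))  # Welch-Satterthwaite
tcrit = stats.t.ppf(0.975, df)
ci = (mx - my - tcrit * se, mx - my + tcrit * se)
print(ci)
```

For non-inferiority one would compare the upper CI limit against a prespecified margin; the unequal group sizes here are exactly the setting in which the abstract reports the Welch-based CI outperforming the pooled t-statistic.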

Statistics in Medicine, 45(6-7): e70494. Citations: 0
Dynamic Factor Analysis for Sparse and Irregular Longitudinal Data: An Application to Metabolite Measurements in a COVID-19 Study.
IF 1.8 4区 医学 Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date : 2026-03-01 DOI: 10.1002/sim.70499
Jiachen Cai, Robert J B Goudie, Brian D M Tom

Factor analysis (FA) can be used to identify key biomarkers in biological processes by assuming that latent biological pathways (statistically, "latent factors") drive the activity of measurable biomarkers ("observed variables"). However, biological pathways often interact, meaning that the classical FA assumption of independence between factors is questionable. Motivated by sparsely and irregularly collected longitudinal measurements of metabolites in a COVID-19 study, we propose a dynamic factor analysis model that accounts for cross-correlations between pathways via a multi-output Gaussian process (MOGP) prior on the factor trajectories. To mitigate overfitting caused by the sparsity of the longitudinal measurements, we introduce a roughness penalty on the MOGP hyperparameters and allow for non-zero mean functions. We also propose a scalable stochastic expectation maximization (StEM) algorithm that, in simulations, is 20 times faster and provides more accurate and stable MOGP hyperparameter estimates than a previously proposed Monte Carlo expectation maximization algorithm. In the motivating COVID-19 study, our methodology identifies a kynurenine pathway that affects the clinical severity of patients with COVID-19 disease and uncovers the role of the biomarker taurine. Our R package DFA4SIL implements the proposed method.
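The cross-correlated factor trajectories described above can be sketched with an intrinsic-coregionalization MOGP prior: the joint covariance is the Kronecker product of a coregionalization matrix B (encoding cross-pathway correlation) with a temporal kernel. The loading matrix, lengthscale, and time grid below are invented for illustration; this is a toy sketch of the prior, not the authors' DFA4SIL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)                      # common observation grid

# Squared-exponential kernel over time (assumed lengthscale 0.1)
K_t = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.1**2)

# Coregionalization matrix B = A A^T encodes cross-correlation
# between two latent factors (loading matrix A is hypothetical)
A = np.array([[1.0, 0.0], [0.8, 0.6]])
B = A @ A.T

# Kronecker-structured MOGP covariance; jitter for numerical stability
K = np.kron(B, K_t) + 1e-6 * np.eye(2 * len(t))

# One joint draw yields both factor trajectories at once
f = rng.multivariate_normal(np.zeros(2 * len(t)), K)
f1, f2 = f[:len(t)], f[len(t):]

# Their empirical correlation reflects B's off-diagonal structure
corr = np.corrcoef(f1, f2)[0, 1]
```

The off-diagonal entry of B controls how strongly the two sampled trajectories co-move, which is the mechanism the model uses to capture interacting pathways.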

Statistics in Medicine, 45(6-7): e70499. Citations: 0
Permutation Tests Based on the Copula-Graphic Estimator and Their Use for Survival Tree Construction.
IF 1.8 4区 医学 Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date : 2026-03-01 DOI: 10.1002/sim.70483
Pauline Baur, Markus Pauly, Takeshi Emura

Survival trees are popular alternatives to Cox or Aalen regression models that offer both modeling flexibility and graphical interpretability. This paper introduces a new algorithm for survival trees that relaxes the assumption of independent censoring. To this end, we use the copula-graphic estimator to estimate survival functions. This allows us to flexibly specify the shape and strength of the dependence between survival and censoring times within survival trees. For splitting, we present a permutation test for the null hypothesis of equal survival. Our test statistic is the integrated absolute distance between the groups' copula-graphic estimators. A first simulation study shows good type I error control and power for the new test. We assess simulation settings with various group sizes, censoring percentages, and degrees of dependence generated by Clayton and Frank copulas. Using this test as a splitting criterion, a second simulation study evaluates the performance of the resulting trees and compares it with that of the usual logrank-based tree. Lastly, the tree algorithm is applied to real-world clinical trial data.
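A minimal permutation-test skeleton along these lines is sketched below. For simplicity it uses the Kaplan-Meier estimator, which is the special case of the copula-graphic estimator under the independence copula; substituting a genuine copula-graphic estimator (e.g., for a Clayton copula) would recover the dependent-censoring version. The test statistic is the integrated absolute distance between the two groups' survival curves, and the p-value comes from permuting group labels. All data and parameters are simulated for illustration.

```python
import numpy as np

def km(time, event, grid):
    """Kaplan-Meier survival curve on a grid (copula-graphic
    estimator under the independence copula)."""
    order = np.argsort(time)
    t, d = time[order], event[order]
    n = len(t)
    surv, times, values = 1.0, [0.0], [1.0]
    for i in range(n):
        if d[i]:                      # event: multiply in one KM factor
            surv *= 1.0 - 1.0 / (n - i)
            times.append(t[i]); values.append(surv)
    idx = np.searchsorted(times, grid, side="right") - 1
    return np.asarray(values)[idx]    # right-continuous step function

def stat(time, event, group, grid):
    """Integrated absolute distance between group survival curves."""
    s0 = km(time[group == 0], event[group == 0], grid)
    s1 = km(time[group == 1], event[group == 1], grid)
    d = np.abs(s0 - s1)
    return float(np.sum(0.5 * (d[:-1] + d[1:]) * np.diff(grid)))

def perm_test(time, event, group, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0, np.quantile(time, 0.9), 100)
    obs = stat(time, event, group, grid)
    count = sum(stat(time, event, rng.permutation(group), grid) >= obs
                for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)  # permutation p-value

# Hypothetical data: group 1 survives longer; independent censoring
rng = np.random.default_rng(1)
n = 80
group = np.repeat([0, 1], n // 2)
t_event = rng.exponential(np.where(group == 0, 1.0, 2.5))
t_cens = rng.exponential(3.0, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)
p = perm_test(time, event, group)
```

In a tree algorithm, this p-value (or the statistic itself) would score a candidate split, with the smallest p-value selecting the splitting variable and cutpoint.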

Statistics in Medicine, 45(6-7): e70483. Citations: 0