
Epidemiologic Methods: Latest Publications

Development and application of an evidence-based directed acyclic graph to evaluate the associations between metal mixtures and cardiometabolic outcomes.
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2022-0133
Emily Riseberg, Rachel D Melamed, Katherine A James, Tanya L Alderete, Laura Corlin

Objectives: Specifying causal models to assess relationships among metal mixtures and cardiometabolic outcomes requires evidence-based models of the causal structures; however, such models have not been previously published. The objective of this study was to develop and evaluate a directed acyclic graph (DAG) diagramming metal mixture exposure and cardiometabolic outcomes.

Methods: We conducted a literature search to develop the DAG of metal mixtures and cardiometabolic outcomes. To evaluate consistency of the DAG, we tested the suggested conditional independence statements using linear and logistic regression analyses with data from the San Luis Valley Diabetes Study (SLVDS; n=1795). We calculated the proportion of statements supported by the data and compared this to the proportion of conditional independence statements supported by 1,000 DAGs with the same structure but randomly permuted nodes. Next, we used our DAG to identify minimally sufficient adjustment sets needed to estimate the association between metal mixtures and cardiometabolic outcomes (i.e., cardiovascular disease, fasting glucose, and systolic blood pressure). We applied them to the SLVDS using Bayesian kernel machine regression, linear mixed effects, and Cox proportional hazards models.

Results: From the 42 articles included in the review, we developed an evidence-based DAG with 74 testable conditional independence statements (43 % supported by SLVDS data). We observed evidence of an association of both As and Mn with fasting glucose.

Conclusions: We developed, tested, and applied an evidence-based approach to analyze associations between metal mixtures and cardiometabolic health.
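As a concrete illustration of the DAG-consistency testing the abstract describes, a single conditional independence statement implied by a DAG can be checked by regressing one variable on another while adjusting for the statement's conditioning set, with a near-zero adjusted coefficient counting as support. The sketch below uses simulated data and hypothetical variable names (region, age, arsenic); it is an assumption-laden illustration, not the SLVDS analysis itself.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: a DAG implying age ⊥ arsenic | region would be tested
# by regressing arsenic on age while adjusting for region; a near-zero,
# non-significant age coefficient counts as support for the statement.
n = 500
region = rng.integers(0, 3, n)                     # conditioning set
age = 40 + 5 * region + rng.normal(0, 8, n)
arsenic = 2 + 1.5 * region + rng.normal(0, 1, n)   # depends on region only

X = sm.add_constant(np.column_stack([age, region]))
fit = sm.OLS(arsenic, X).fit()
print(fit.params[1], fit.pvalues[1])   # age term ~ 0 -> statement supported
```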

Citations: 0
Energy-efficient model “DenseNet201 based on deep convolutional neural network” using cloud platform for detection of COVID-19 infected patients
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2021-0047
Sachin Kumar, Vijendra Pratap Singh, S. Pal, Priya Jaiswal
Abstract Objective The outbreak of the coronavirus caused major problems in more than 151 countries around the world. An important step in the fight against the coronavirus is identifying infected people. The goal of this article is to predict which patients are infected with COVID-19. Methods We implemented DenseNet201, available on a cloud platform, as the learning network. DenseNet201 is a 201-layer network trained on ImageNet; the input size of the pre-trained DenseNet201 images is 224 × 224 × 3. Results DenseNet201 was trained and evaluated effectively using 80 % of the X-rays for training and 20 % for testing, and it shows a good experimental result with an accuracy of 99.24 % in 7.47 min. To measure the computational efficiency of the proposed model, we collected more than 6,000 noise-free chest X-ray images of patients with tuberculosis or COVID-19 and of uninfected healthy individuals. Conclusions DenseNet201, available on the cloud platform, was used for the classification of COVID-19-infected patients. The goal of this article is to demonstrate how to achieve faster results.
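A minimal transfer-learning sketch in the spirit of this abstract: a frozen ImageNet-pre-trained DenseNet201 backbone (224 × 224 × 3 input) with a new classification head. The Keras framework, the three-class head (COVID-19 / tuberculosis / healthy), and the hyperparameters are illustrative assumptions, not the authors' exact cloud pipeline.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Frozen pre-trained backbone; only the new head is trained.
base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the 201-layer backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),  # COVID-19 / TB / healthy (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Data would come from an 80/20 train/test split of labelled X-rays, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "xrays/train", image_size=(224, 224), batch_size=32)
# model.fit(train_ds, validation_data=test_ds, epochs=10)
```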
Citations: 0
On some pitfalls of the log-linear modeling framework for capture-recapture studies in disease surveillance
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2023-0019
Yuzi Zhang, Lin Ge, Lance A. Waller, Robert H. Lyles
Abstract In epidemiological studies, the capture-recapture (CRC) method is a powerful tool that can be used to estimate the number of diseased cases or, potentially, disease prevalence based on data from overlapping surveillance systems. Estimators derived from log-linear models are widely applied by epidemiologists when analyzing CRC data. The popularity of the log-linear model framework is largely associated with its accessibility and the fact that interaction terms can allow for certain types of dependency among data streams. In this work, we shed new light on significant pitfalls associated with the log-linear model framework in the context of CRC using real data examples and simulation studies. First, we demonstrate that the log-linear model paradigm is highly exclusionary. That is, it can exclude, by design, many possible estimates that are potentially consistent with the observed data. Second, we clarify the ways in which regularly used model selection metrics (e.g., information criteria) are fundamentally deceiving in the effort to select a “best” model in this setting. By focusing attention on these important cautionary points and on the fundamental untestable dependency assumption made when fitting a log-linear model to CRC data, we hope to improve the quality of and transparency associated with subsequent surveillance-based CRC estimates of case counts.
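For readers unfamiliar with the framework, the simplest instance is a two-list capture-recapture table in which the "missed by both lists" cell is unobserved and predicted from an independence log-linear model fit as a Poisson GLM. The sketch below uses hypothetical counts; note that the prediction rests entirely on the untestable no-dependency assumption the paper scrutinizes.

```python
import numpy as np
import statsmodels.api as sm

# Observed cells of a two-list capture-recapture table (n00 unobserved).
# Hypothetical counts for illustration.
counts = np.array([60, 40, 30])          # n11, n10, n01
A = np.array([1, 1, 0])                  # captured by list A
B = np.array([1, 0, 1])                  # captured by list B
X = sm.add_constant(np.column_stack([A, B]))

# Independence log-linear model: log mu = b0 + b1*A + b2*B
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
n00_hat = np.exp(fit.params[0])          # predicted count missed by both lists
print(n00_hat, counts.sum() + n00_hat)   # ~ n10*n01/n11 = 20, total N ~ 150
```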
Citations: 1
Application of machine learning tools for feature selection in the identification of prognostic markers in COVID-19
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2022-0132
Sprockel Diaz Johm Jaime, Hector Fabio Restrepo Guerrero, J. J. Fernández
Abstract Objective To identify prognostic markers by applying machine learning strategies to feature selection. Methods An observational, retrospective, multi-center study of hospitalized patients with a confirmed diagnosis of COVID-19 in three hospitals in Colombia. Eight strategies were applied to select prognosis-related characteristics. A logistic regression model was built from each of the eight variable sets, and the ability to predict the outcome was evaluated. The primary endpoint was transfer to intensive care or in-hospital death. Results The database consisted of 969 patients, of which 486 had complete data. The main outcome occurred in 169 cases. The development database included 220 patients: 137 (62.3%) were men, the median age was 58.2 years, 39 (17.7%) were diabetic, 62 (28.2%) had high blood pressure, and 32 (14.5%) had chronic lung disease. Thirty-three variables were identified as prognostic markers, and those selected most frequently were: LDH, PaO2/FIO2 ratio, CRP, age, neutrophil and lymphocyte counts, respiratory rate, oxygen saturation, ferritin, and HCO3. The eight logistic regressions were validated on 266 patients, in whom similar results were reached (accuracy: 65.8–72.9%). Conclusions The combined use of machine learning strategies for feature selection makes it possible to identify a broad set of prognostic markers for death or transfer to intensive care in patients hospitalized for COVID-19.
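One of the several selection strategies the abstract alludes to could look like the sketch below: recursive feature elimination wrapped around a logistic regression, with the resulting model evaluated on a held-out set. The synthetic data and the specific strategy are assumptions for illustration, not the authors' exact pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical data (the real data are not public here).
X, y = make_classification(n_samples=486, n_features=30, n_informative=8,
                           random_state=0)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

# Recursive feature elimination keeps the 10 strongest predictors.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)
selector.fit(X_dev, y_dev)

# Refit a logistic regression on the selected features and validate.
clf = LogisticRegression(max_iter=1000).fit(X_dev[:, selector.support_], y_dev)
print("validation accuracy:", clf.score(X_val[:, selector.support_], y_val))
```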
Citations: 1
Using repeated antibody testing to minimize bias in estimates of prevalence and incidence of SARS-CoV-2 infection
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2023-0012
Michele Santacatterina, B. Burke, Mihili Gunaratne, W. Weintraub, M. Espeland, Adolfo Correa, DeAnna J. Friedman-Klabanoff, M. Gibbs, David M. Herrington, Kristen Miller, J. Sanders, A. Seals, D. Uschner, T. Wierzba, Morgana Mongraw-Chaffin
Abstract Objectives The prevalence and incidence of SARS-CoV-2, the virus which causes COVID-19, at any given time remain controversial, and are an essential piece in understanding the dynamics of the epidemic. Cross-sectional studies and single time point testing approaches continue to struggle with appropriate adjustment methods for the high false positive rates in low-prevalence settings or high false negative rates in high-prevalence settings, and post-hoc adjustment at the group level does not fully address this issue for incidence even at the population level. Methods In this study, we use seroprevalence as an illustrative example of the benefits of a case definition based on a combined parallel and serial testing framework to confirm antibody-positive status. In a simulation study, we show that our proposed approach reduces bias and improves positive and negative predictive value across the range of prevalence compared with cross-sectional testing, even with gold standard tests and post-hoc adjustment. Using data from the North Carolina COVID-19 Community Research Partnership, we applied the proposed case definition to the estimation of SARS-CoV-2 seroprevalence and incidence early in the pandemic. Results The proposed approach is not always feasible given the cost and time required to administer repeated tests; however, it reduces bias in both low and high prevalence settings and addresses misclassification at the individual level. This approach can be applied to almost all testing contexts and platforms. Conclusions This systematic approach offers better estimation of both prevalence and incidence, which is important to improve understanding and facilitate controlling the pandemic.
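The intuition behind serial confirmation can be made concrete with Bayes' rule: requiring two independent positives trades a little sensitivity for a large gain in specificity, which dominates positive predictive value at low prevalence. The operating characteristics in the sketch below are illustrative assumptions, not the study's assays.

```python
def ppv(se, sp, p):
    """Positive predictive value of a test with sensitivity se and
    specificity sp at prevalence p (Bayes' rule)."""
    return se * p / (se * p + (1 - sp) * (1 - p))

# Illustrative characteristics for a single antibody assay.
se1, sp1 = 0.95, 0.98

# Serial confirmation: positive only if two independent tests both agree
# (sensitivity falls, specificity rises).
se_serial = se1 * se1
sp_serial = 1 - (1 - sp1) * (1 - sp1)

for p in [0.01, 0.05, 0.20]:
    print(f"prevalence {p:.2f}: single-test PPV {ppv(se1, sp1, p):.3f}, "
          f"serial-confirmation PPV {ppv(se_serial, sp_serial, p):.3f}")
```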
Citations: 0
Addressing substantial covariate imbalance with propensity score stratification and balancing weights: connections and recommendations
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2022-0131
Laine E. Thomas, Steven M. Thomas, Fan Li, Roland A. Matsouaka
Abstract Objectives Propensity score (PS) weighting methods are commonly used to adjust for confounding in observational treatment comparisons. However, in the setting of substantial covariate imbalance, PS values may approach 0 and 1, yielding extreme weights and inflated variance of the estimated treatment effect. Adaptations of the standard inverse probability of treatment weights (IPTW) can reduce the influence of extremes, including trimming methods that exclude people with PS values near 0 or 1. Alternatively, overlap weighting (OW) optimizes criteria related to bias and variance, and performs well compared to other PS weighting and matching methods. However, it has not been compared to propensity score stratification (PSS). PSS has some of the same potential advantages, being insensitive to extreme values. We sought to compare these methods in the setting of substantial covariate imbalance to generate practical recommendations. Methods Analytical derivations were used to establish connections between methods, and simulation studies were conducted to assess bias and variance of alternative methods. Results We find that OW is generally superior, particularly as covariate imbalance increases. In addition, a common method for implementing PSS based on Mantel–Haenszel weights (PSS-MH) is equivalent to a coarsened version of OW and can perform nearly as well. Finally, trimming methods increase bias across methods (IPTW, PSS and PSS-MH) unless the PS model is re-fit to the trimmed sample and weights or strata are re-derived. After trimming with re-fitting, all methods perform similarly to OW. Conclusions These results may guide the selection, implementation and reporting of PS methods for observational studies with substantial covariate imbalance.
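The contrast between IPTW and OW is easy to see numerically: IPTW weights are unbounded as estimated propensity scores approach 0 or 1, whereas overlap weights are bounded by construction. A minimal simulated sketch, with an assumed data-generating model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated observational data with substantial covariate imbalance.
n = 2000
x = rng.normal(0, 1, (n, 2))
ps_true = 1 / (1 + np.exp(-(2.0 * x[:, 0] + 1.5 * x[:, 1])))
z = rng.binomial(1, ps_true)                     # treatment assignment

# Estimated propensity scores from a logistic model.
e = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]

# Standard IPTW: weights explode as e -> 0 or 1.
w_iptw = np.where(z == 1, 1 / e, 1 / (1 - e))

# Overlap weights: treated get 1 - e, controls get e; bounded in [0, 1].
w_ow = np.where(z == 1, 1 - e, e)

print("max IPTW weight:   ", w_iptw.max())       # can be extreme
print("max overlap weight:", w_ow.max())         # never exceeds 1
```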
Citations: 0
Performance evaluation of ResNet model for classification of tomato plant disease
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2021-0044
Sachin Kumar, S. Pal, Vijendra Pratap Singh, P. Jaiswal
Abstract Objectives The tomato plant (Solanum lycopersicum) is widely affected by various diseases. Accurate and timely diagnosis contributes significantly to good production of tomato crops. The key objective of this article is to recognize infection in tomato leaves with better accuracy and in less time. Methods Deep convolutional neural networks have recently attained impressive results in several applications, including the categorization of tomato leaves infected with several diseases. Our work is based on a deep CNN with different residual networks. Finally, we performed tomato leaf disease classification using a pre-trained deep CNN with a residual network, using MATLAB available on the cloud. Results We used a dataset of tomato leaves containing six different types of diseases plus one healthy tomato leaf class. We used a dataset of 6,594 tomato leaf images from Plant Village; we did not collect actual tomato leaves for testing. ResNet-50 achieved a notable result, with 96.35% accuracy for 50% training and 50% testing data, while on time consumption ResNet-18 took 12.46 min for 70% training and 30% testing. Conclusions After observing several outcomes, we conclude that ResNet-50 shows better accuracy for 50% training and 50% testing of data, while ResNet-18 shows better efficiency for 70% training and 30% testing of data for the same dataset on the cloud.
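A minimal sketch of the residual-network transfer learning the abstract describes, with the final layer replaced by a 7-way classifier (six disease classes plus healthy). The paper used pre-trained networks in MATLAB on the cloud; this PyTorch version is an assumed stand-in for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pre-trained ResNet-50 with a frozen backbone.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                 # freeze the pre-trained layers

# New trainable head: 6 disease classes + 1 healthy class.
model.fc = nn.Linear(model.fc.in_features, 7)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Training would iterate over a DataLoader of 224x224 leaf images, e.g.:
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```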
Citations: 7
A study of the impact of policy interventions on daily COVID scenario in India using interrupted time series analysis
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2022-0113
Subhankar Chattopadhyay, D. Ghosh, Raju Maiti, Samarjit Das, A. Biswas, Bibhas Chakraborty
Abstract Objectives The rapid increase in both daily cases and daily deaths made the second wave of the COVID-19 pandemic in India more lethal than the first. Record numbers of infections and casualties were reported all over India during this period. Delhi and Maharashtra were the two most affected places in India during the second wave. In response, the Indian government implemented strict intervention policies (“lockdowns”, “social distancing” and a “vaccination drive”) in every state during this period to curb the spread of the virus. The objective of this article is to conduct an interrupted time series (ITS) analysis to study the impact of the interventions on daily cases and deaths. Methods We collect daily data for Delhi and Maharashtra before and after the intervention points with a 14-day (the incubation period of COVID-19) observation window. A segmented linear regression analysis is done to study the post-intervention slopes, as well as whether there were any immediate changes after the interventions. We also add counterfactuals and delayed time effects to the analysis to investigate the significance of our ITS design. Results We observe the post-intervention trends to be statistically significant and negative for both daily cases and daily deaths. We also find no immediate change in trend after the start of the interventions, and hence study delayed time effects that show how the trends changed over time. The counterfactuals in our study indicate what would have happened to the COVID scenario had the interventions not been implemented. Conclusions We statistically characterize different circumstances of the COVID scenario for both Delhi and Maharashtra by exploring all possible ingredients of the ITS design, in order to present a feasible design that shows the importance of implementing proper intervention policies to tackle this type of pandemic, which can have various highly contagious variants.
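The segmented linear regression at the core of an ITS design can be written as y_t = b0 + b1*t + b2*D_t + b3*(t - t0)*D_t + e_t, where D_t indicates the post-intervention period, b2 captures the immediate level change, and b3 the slope change. A minimal sketch on simulated daily counts, with an assumed intervention day:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical daily case counts around an intervention at day t0.
t = np.arange(120)
t0 = 60
post = (t >= t0).astype(float)            # intervention indicator D_t
t_since = np.where(t >= t0, t - t0, 0)    # time since intervention

# True model: rising pre-trend, small level drop, negative post-slope change.
y = 100 + 2.0 * t - 0.5 * post - 3.0 * t_since + rng.normal(0, 5, t.size)

X = sm.add_constant(np.column_stack([t, post, t_since]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # [level, pre-slope, level change, slope change]
```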
Citations: 0
Outliers in nutrient intake data for U.S. adults: national health and nutrition examination survey 2017–2018
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2023-0018
Sara Burcham, Yuki Liu, Ashley L. Merianos, Angelico Mendy
Abstract Objectives An important step in preparing data for statistical analysis is outlier detection and removal, yet no gold standard exists in the current literature. The objective of this study is to identify the ideal decision test using the National Health and Nutrition Examination Survey (NHANES) 2017–2018 dietary data. Methods We conducted a secondary analysis of NHANES 24-h dietary recalls, considering the survey's multi-stage cluster design. Six outlier detection and removal strategies were assessed by evaluating the decision tests' impact on the Pearson's correlation coefficient among macronutrients. Furthermore, we assessed changes in the effect size estimates based on pre-defined sample sizes. The data were collected as part of the 2017–2018 24-h dietary recall among adult participants (N=4,893). Results Effect estimate changes for macronutrients varied from 6.5 % for protein to 39.3 % for alcohol across all decision tests. The largest proportion of outliers removed was 4.0 %, observed in the large sample for the decision test flagging values >2 standard deviations from the mean. The smallest sample size, particularly for the alcohol analysis, was most affected by the six decision tests when compared to no decision test. Conclusions This study, the first to use 2017–2018 NHANES dietary data for outlier evaluation, emphasizes the importance of selecting an appropriate decision test in light of factors such as statistical power, sample size, normality assumptions, the proportion of data removed, effect estimate changes, and the consistency of estimates across sample sizes. We recommend the use of non-parametric tests for non-normally distributed variables of interest.
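Two of the commonly compared decision tests, a mean ± 2 SD rule and a 1.5 × IQR rule, can be sketched on simulated right-skewed intake data, along with their effect on a Pearson correlation between two intakes. The data below are hypothetical stand-ins, not NHANES values.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Hypothetical nutrient intakes (right-skewed, like dietary recall data).
protein = rng.lognormal(mean=4.3, sigma=0.4, size=1000)
energy = 15 * protein + rng.normal(0, 300, 1000)   # correlated macronutrient

# Decision test 1: flag values more than 2 SD from the mean.
z = (protein - protein.mean()) / protein.std()
keep_sd = np.abs(z) <= 2

# Decision test 2: flag values outside 1.5 * IQR beyond the quartiles.
q1, q3 = np.percentile(protein, [25, 75])
iqr = q3 - q1
keep_iqr = (protein >= q1 - 1.5 * iqr) & (protein <= q3 + 1.5 * iqr)

print("removed by 2-SD rule:", (~keep_sd).mean())
print("removed by IQR rule: ", (~keep_iqr).mean())
print("r (all data):", pearsonr(protein, energy)[0])
print("r (2-SD rule):", pearsonr(protein[keep_sd], energy[keep_sd])[0])
```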
Citations: 0
A compartmental model of the COVID-19 pandemic course in Germany
Q3 Mathematics · Pub Date: 2023-01-01 · DOI: 10.1515/em-2022-0126
Yıldırım Adalıoğlu, Çağan Kaplan
Abstract Objectives In late 2019, the novel coronavirus that causes COVID-19 emerged in Wuhan, China, and rapidly spread worldwide, including in Germany. To mitigate the pandemic’s impact, various strategies, including vaccination and non-pharmaceutical interventions, have been implemented. However, the emergence of new, highly infectious SARS-CoV-2 strains has become the primary driving force behind the disease’s spread. Mathematical models, such as deterministic compartmental models, are essential for estimating contagion rates in different scenarios and predicting the pandemic’s behavior. Methods In this study, we present a novel model that incorporates vaccination dynamics, the three most prevalent virus strains (wild-type, alpha, and delta), infected individuals’ detection status, and pre-symptomatic transmission to represent the pandemic’s course in Germany from March 2, 2020, to August 17, 2021. Results By analyzing the behavior of the German population over 534 days and 25 time intervals, we estimated various parameters, including transmission, recovery, mortality, and detection rates. Furthermore, we conducted an alternative analysis of vaccination scenarios under the same interval conditions, emphasizing the importance of vaccination administration and awareness. Conclusions Our 534-day analysis provides policymakers with a range of circumstances and parameters that can be used to simulate future scenarios. The proposed model can also be used to make predictions and inform policy decisions related to pandemic control in Germany and beyond.
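Stripped of vaccination, variants, and detection status, the deterministic compartmental idea reduces to a small system of ODEs. The sketch below integrates a plain SIR model over the paper's 534-day horizon; the rates and initial conditions are illustrative assumptions, not the fitted German estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 83_000_000            # approximate population of Germany
beta, gamma = 0.25, 0.10  # assumed transmission and recovery rates (per day)

def sir(t, y):
    """Plain SIR dynamics: dS, dI, dR per day."""
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

# Integrate over the 534-day study horizon from 1,000 initial infections.
sol = solve_ivp(sir, (0, 534), [N - 1000, 1000, 0],
                t_eval=np.arange(0, 535))
print("peak infectious:", sol.y[1].max())
```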
Citations: 0