Validation of prognostic models predicting mortality or ICU admission in patients with COVID-19 in low- and middle-income countries: a global individual participant data meta-analysis
Pub Date: 2024-12-19. DOI: 10.1186/s41512-024-00181-5
Johanna A A Damen, Banafsheh Arshi, Maarten van Smeden, Silvia Bertagnolio, Janet V Diaz, Ronaldo Silva, Soe Soe Thwin, Laure Wynants, Karel G M Moons
Background: We evaluated the performance of prognostic models for predicting mortality or ICU admission in hospitalized patients with COVID-19 in the World Health Organization (WHO) Global Clinical Platform, a repository of individual-level clinical data of patients hospitalized with COVID-19, including in low- and middle-income countries (LMICs).
Methods: We identified eligible multivariable prognostic models for predicting overall mortality and ICU admission during hospital stay in patients with confirmed or suspected COVID-19 from a living review of COVID-19 prediction models. These models were evaluated using data contributed to the WHO Global Clinical Platform for COVID-19 from nine LMICs (Burkina Faso, Cameroon, Democratic Republic of Congo, Guinea, India, Niger, Nigeria, Zambia, and Zimbabwe). Model performance was assessed in terms of discrimination and calibration.
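The abstract reports discrimination and calibration but, as an abstract, gives no computational detail. Below is a minimal sketch of how such external-validation metrics are typically computed; the predicted risks, outcomes, and all names are placeholders rather than the authors' code.

```python
# Sketch: external validation of a pre-specified risk model in terms of
# discrimination (AUC) and calibration (intercept and slope). The predicted
# risks `p` would come from the published model applied to the platform data;
# here they are placeholders, and all names are illustrative.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.60, 1000)        # placeholder predicted risks from the model
y = rng.binomial(1, p)                   # placeholder observed outcomes (e.g. death)

auc = roc_auc_score(y, p)                # discrimination (c-statistic)

lp = np.log(p / (1 - p))                 # logit of the predicted risk
slope_fit = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
intercept_fit = sm.GLM(y, np.ones((len(y), 1)), family=sm.families.Binomial(),
                       offset=lp).fit()  # calibration-in-the-large

print(f"AUC = {auc:.3f}, calibration slope = {slope_fit.params[1]:.2f}, "
      f"calibration intercept = {intercept_fit.params[0]:.2f}")
```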
Results: Out of 144 eligible models, 140 were excluded due to a high risk of bias, predictors unavailable in LMICs, or insufficient model description. Among 11,338 participants, the three remaining mortality models showed good discrimination for predicting in-hospital mortality, with areas under the curve (AUCs) ranging from 0.76 (95% CI 0.71-0.81) to 0.84 (95% CI 0.77-0.89). The single ICU admission model had an AUC of 0.74 (95% CI 0.70-0.78). All models showed signs of miscalibration and overfitting, with extensive heterogeneity between countries.
Conclusions: Among the available COVID-19 prognostic models, only a few could be validated on data collected from LMICs, mainly due to limited predictor availability. Despite adequate discrimination, the selected models for predicting mortality or ICU admission showed varying and suboptimal calibration.
{"title":"Validation of prognostic models predicting mortality or ICU admission in patients with COVID-19 in low- and middle-income countries: a global individual participant data meta-analysis.","authors":"Johanna A A Damen, Banafsheh Arshi, Maarten van Smeden, Silvia Bertagnolio, Janet V Diaz, Ronaldo Silva, Soe Soe Thwin, Laure Wynants, Karel G M Moons","doi":"10.1186/s41512-024-00181-5","DOIUrl":"10.1186/s41512-024-00181-5","url":null,"abstract":"<p><strong>Background: </strong>We evaluated the performance of prognostic models for predicting mortality or ICU admission in hospitalized patients with COVID-19 in the World Health Organization (WHO) Global Clinical Platform, a repository of individual-level clinical data of patients hospitalized with COVID-19, including in low- and middle-income countries (LMICs).</p><p><strong>Methods: </strong>We identified eligible multivariable prognostic models for predicting overall mortality and ICU admission during hospital stay in patients with confirmed or suspected COVID-19 from a living review of COVID-19 prediction models. These models were evaluated using data contributed to the WHO Global Clinical Platform for COVID-19 from nine LMICs (Burkina Faso, Cameroon, Democratic Republic of Congo, Guinea, India, Niger, Nigeria, Zambia, and Zimbabwe). Model performance was assessed in terms of discrimination and calibration.</p><p><strong>Results: </strong>Out of 144 eligible models, 140 were excluded due to a high risk of bias, predictors unavailable in LIMCs, or insufficient model description. Among 11,338 participants, the remaining models showed good discrimination for predicting in-hospital mortality (3 models), with areas under the curve (AUCs) ranging between 0.76 (95% CI 0.71-0.81) and 0.84 (95% CI 0.77-0.89). An AUC of 0.74 (95% CI 0.70-0.78) was found for predicting ICU admission risk (one model). All models showed signs of miscalibration and overfitting, with extensive heterogeneity between countries.</p><p><strong>Conclusions: </strong>Among the available COVID-19 prognostic models, only a few could be validated on data collected from LMICs, mainly due to limited predictor availability. Despite their discriminative ability, selected models for mortality prediction or ICU admission showed varying and suboptimal calibration.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"17"},"PeriodicalIF":0.0,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11656577/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142856909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reported prevalence and comparison of diagnostic approaches for Candida africana: a systematic review with meta-analysis
Pub Date: 2024-12-05. DOI: 10.1186/s41512-024-00180-6
Bwambale Jonani, Emmanuel Charles Kasule, Herman Roman Bwire, Gerald Mboowa
This systematic review and meta-analysis evaluated the reported prevalence and diagnostic methods for identifying Candida africana, an opportunistic yeast associated with vaginal and oral candidiasis. A comprehensive literature search yielded 53 studies meeting the inclusion criteria, 2 of which were case studies. The pooled prevalence of C. africana among 20,571 participants was 0.9% (95% CI: 0.7-1.3%), with significant heterogeneity observed (I² = 79%, p < 0.01). Subgroup analyses revealed regional variations, with North America showing the highest prevalence (4.6%, 95% CI: 1.8-11.2%). The majority of C. africana isolates (84.52%) were from vaginal samples, with 8.37% from oral samples, 3.77% from urine, 2.09% from glans penis swabs, and 0.42% each from rectal swabs, nasal swabs, and respiratory tract expectorations. No C. africana has been isolated from nail samples. Hyphal wall protein 1 (HWP1) gene PCR was the most widely used diagnostic method for identifying C. africana, accounting for 70% of identified isolates. A comparison of methods revealed that the Vitek-2 system consistently failed to differentiate C. africana from Candida albicans, whereas MALDI-TOF misidentified several isolates compared with HWP1 PCR. Factors beyond diagnostic methodology may influence C. africana detection rates. We highlight the importance of adapting molecular methods for resource-limited settings or developing equally accurate but more accessible alternatives for the identification and differentiation of highly similar and cryptic Candida species such as C. africana.
{"title":"Reported prevalence and comparison of diagnostic approaches for Candida africana: a systematic review with meta-analysis.","authors":"Bwambale Jonani, Emmanuel Charles Kasule, Herman Roman Bwire, Gerald Mboowa","doi":"10.1186/s41512-024-00180-6","DOIUrl":"10.1186/s41512-024-00180-6","url":null,"abstract":"<p><p>This systematic review and meta-analysis evaluated reported prevalence and diagnostic methods for identifying Candida africana, an opportunistic yeast associated with vaginal and oral candidiasis. A comprehensive literature search yielded 53 studies meeting the inclusion criteria, 2 of which were case studies. The pooled prevalence of C. africana among 20,571 participants was 0.9% (95% CI: 0.7-1.3%), with significant heterogeneity observed (I<sup>2</sup> = 79%, p < 0.01). Subgroup analyses revealed regional variations, with North America showing the highest prevalence (4.6%, 95% CI: 1.8-11.2%). The majority 84.52% of the C. africana have been isolated from vaginal samples, 8.37% from oral samples, 3.77% from urine, 2.09% from glans penis swabs, and 0.42% from rectal swabs, nasal swabs, and respiratory tract expectorations respectively. No C. africana has been isolated from nail samples. Hyphal wall protein 1 gene PCR was the most used diagnostic method for identifying C. africana. It has been used to identify 70% of the isolates. A comparison of methods revealed that the Vitek-2 system consistently failed to differentiate C. africana from Candida albicans, whereas MALDI-TOF misidentified several isolates compared with HWP1 PCR. Factors beyond diagnostic methodology may influence C. africana detection rates. We highlight the importance of adapting molecular methods for resource-limited settings or developing equally accurate but more accessible alternatives for the identification and differentiation of highly similar and cryptic Candida species such as C. africana.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11619109/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The relative data hungriness of unpenalized and penalized logistic regression and ensemble-based machine learning methods: the case of calibration
Pub Date: 2024-11-05. DOI: 10.1186/s41512-024-00179-z
Peter C Austin, Douglas S Lee, Bo Wang
Background: Machine learning methods are increasingly being used to predict clinical outcomes. Optimism is the difference in model performance between derivation and validation samples. The term "data hungriness" refers to the sample size needed for a modelling technique to generate a prediction model with minimal optimism. Our objective was to compare the relative data hungriness of different statistical and machine learning methods when assessed using calibration.
Methods: We used Monte Carlo simulations to assess the effect of number of events per variable (EPV) on the optimism of six learning methods when assessing model calibration: unpenalized logistic regression, ridge regression, lasso regression, bagged classification trees, random forests, and stochastic gradient boosting machines using trees as the base learners. We performed simulations in two large cardiovascular datasets each of which comprised an independent derivation and validation sample: patients hospitalized with acute myocardial infarction and patients hospitalized with heart failure. We used six data-generating processes, each based on one of the six learning methods. We allowed the sample sizes to be such that the number of EPV ranged from 10 to 200 in increments of 10. We applied six prediction methods in each of the simulated derivation samples and evaluated calibration in the simulated validation samples using the integrated calibration index, the calibration intercept, and the calibration slope. We also examined Nagelkerke's R2, the scaled Brier score, and the c-statistic.
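To make apparent performance, validated performance, and optimism concrete, the sketch below simulates one illustrative setting (a logistic data-generating process at a fixed EPV) and contrasts unpenalized and ridge logistic regression on the calibration slope. It is not the authors' simulation code, and every setting is an assumption.

```python
# Sketch: optimism in the calibration slope at a fixed events-per-variable (EPV),
# comparing unpenalized and ridge logistic regression. Settings are illustrative,
# not the simulation design of the paper.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

def calibration_slope(y, p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    lp = np.log(p / (1 - p))
    return sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit().params[1]

rng = np.random.default_rng(1)
n_vars, epv = 10, 20
beta = rng.normal(0, 0.5, n_vars)

def simulate(n):
    X = rng.normal(size=(n, n_vars))
    y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta - 1.5))))   # ~20% event rate
    return X, y

n_train = int(epv * n_vars / 0.2)            # sample size giving roughly EPV events per variable
X_tr, y_tr = simulate(n_train)
X_te, y_te = simulate(50_000)

for name, C in [("unpenalized", 1e6), ("ridge", 1.0)]:
    model = LogisticRegression(C=C, max_iter=5000).fit(X_tr, y_tr)
    apparent = calibration_slope(y_tr, model.predict_proba(X_tr)[:, 1])
    validated = calibration_slope(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: apparent slope = {apparent:.2f}, test slope = {validated:.2f}, "
          f"optimism = {apparent - validated:.2f}")
```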
Results: Across all 12 scenarios (2 diseases × 6 data-generating processes), penalized logistic regression displayed very low optimism even when the number of EPV was very low. Random forests and bagged trees tended to be the most data hungry and displayed the greatest optimism.
Conclusions: When assessed using calibration, penalized logistic regression was substantially less data hungry than methods from the machine learning literature.
{"title":"The relative data hungriness of unpenalized and penalized logistic regression and ensemble-based machine learning methods: the case of calibration.","authors":"Peter C Austin, Douglas S Lee, Bo Wang","doi":"10.1186/s41512-024-00179-z","DOIUrl":"10.1186/s41512-024-00179-z","url":null,"abstract":"<p><strong>Background: </strong>Machine learning methods are increasingly being used to predict clinical outcomes. Optimism is the difference in model performance between derivation and validation samples. The term \"data hungriness\" refers to the sample size needed for a modelling technique to generate a prediction model with minimal optimism. Our objective was to compare the relative data hungriness of different statistical and machine learning methods when assessed using calibration.</p><p><strong>Methods: </strong>We used Monte Carlo simulations to assess the effect of number of events per variable (EPV) on the optimism of six learning methods when assessing model calibration: unpenalized logistic regression, ridge regression, lasso regression, bagged classification trees, random forests, and stochastic gradient boosting machines using trees as the base learners. We performed simulations in two large cardiovascular datasets each of which comprised an independent derivation and validation sample: patients hospitalized with acute myocardial infarction and patients hospitalized with heart failure. We used six data-generating processes, each based on one of the six learning methods. We allowed the sample sizes to be such that the number of EPV ranged from 10 to 200 in increments of 10. We applied six prediction methods in each of the simulated derivation samples and evaluated calibration in the simulated validation samples using the integrated calibration index, the calibration intercept, and the calibration slope. We also examined Nagelkerke's R<sup>2</sup>, the scaled Brier score, and the c-statistic.</p><p><strong>Results: </strong>Across all 12 scenarios (2 diseases × 6 data-generating processes), penalized logistic regression displayed very low optimism even when the number of EPV was very low. Random forests and bagged trees tended to be the most data hungry and displayed the greatest optimism.</p><p><strong>Conclusions: </strong>When assessed using calibration, penalized logistic regression was substantially less data hungry than methods from the machine learning literature.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11539735/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142585094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding overfitting in random forest for probability estimation: a visualization and simulation study
Pub Date: 2024-09-27. DOI: 10.1186/s41512-024-00177-1
Lasai Barreñada, Paula Dhiman, Dirk Timmerman, Anne-Laure Boulesteix, Ben Van Calster
Background: Random forests have become popular for clinical risk prediction modeling. In a case study on predicting ovarian malignancy, we observed training AUCs close to 1. Although this suggests overfitting, performance was competitive on test data. We aimed to understand the behavior of random forests for probability estimation by (1) visualizing data space in three real-world case studies and (2) a simulation study.
Methods: For the case studies, multinomial risk estimates were visualized using heatmaps in a 2-dimensional subspace. The simulation study included 48 logistic data-generating mechanisms (DGM), varying the predictor distribution, the number of predictors, the correlation between predictors, the true AUC, and the strength of true predictors. For each DGM, 1000 training datasets of size 200 or 4000 with binary outcomes were simulated, and random forest models were trained with minimum node size 2 or 20 using the ranger R package, resulting in 192 scenarios in total. Model performance was evaluated on large test datasets (N = 100,000).
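The following sketch reproduces the qualitative pattern described here on simulated data, using scikit-learn's RandomForestClassifier rather than the ranger R package from the study, with min_samples_leaf standing in for the minimum node size; all settings are illustrative.

```python
# Sketch: near-perfect training AUC vs. competitive test AUC for a random forest
# grown to (almost) full depth, under a simple logistic data-generating process.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def logistic_dgm(n, n_vars=8):
    X = rng.normal(size=(n, n_vars))
    beta = np.linspace(0.2, 0.8, n_vars)
    y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta))))
    return X, y

X_tr, y_tr = logistic_dgm(4000)
X_te, y_te = logistic_dgm(20_000)

for leaf in (1, 20):                         # fully grown trees vs. larger terminal nodes
    rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=leaf,
                                random_state=0, n_jobs=-1).fit(X_tr, y_tr)
    auc_tr = roc_auc_score(y_tr, rf.predict_proba(X_tr)[:, 1])
    auc_te = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"min_samples_leaf={leaf}: training AUC = {auc_tr:.3f}, test AUC = {auc_te:.3f}")
```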
Results: The visualizations suggested that the model learned "spikes of probability" around events in the training set. A cluster of events created a bigger peak or plateau (signal), whereas isolated events created local peaks (noise). In the simulation study, median training AUCs were between 0.97 and 1 unless there were 4 binary predictors or 16 binary predictors with a minimum node size of 20. The median discrimination loss, i.e., the difference between the median test AUC and the true AUC, was 0.025 (range 0.00 to 0.13). Median training AUCs had Spearman correlations of around 0.70 with discrimination loss. Median test AUCs were higher with higher events per variable, higher minimum node size, and binary predictors. Median training calibration slopes were always above 1 and were not correlated with median test slopes across scenarios (Spearman correlation -0.11). Median test slopes were higher with higher true AUC, higher minimum node size, and higher sample size.
Conclusions: Random forests learn local probability peaks that often yield near perfect training AUCs without strongly affecting AUCs on test data. When the aim is probability estimation, the simulation results go against the common recommendation to use fully grown trees in random forest models.
{"title":"Understanding overfitting in random forest for probability estimation: a visualization and simulation study.","authors":"Lasai Barreñada, Paula Dhiman, Dirk Timmerman, Anne-Laure Boulesteix, Ben Van Calster","doi":"10.1186/s41512-024-00177-1","DOIUrl":"10.1186/s41512-024-00177-1","url":null,"abstract":"<p><strong>Background: </strong>Random forests have become popular for clinical risk prediction modeling. In a case study on predicting ovarian malignancy, we observed training AUCs close to 1. Although this suggests overfitting, performance was competitive on test data. We aimed to understand the behavior of random forests for probability estimation by (1) visualizing data space in three real-world case studies and (2) a simulation study.</p><p><strong>Methods: </strong>For the case studies, multinomial risk estimates were visualized using heatmaps in a 2-dimensional subspace. The simulation study included 48 logistic data-generating mechanisms (DGM), varying the predictor distribution, the number of predictors, the correlation between predictors, the true AUC, and the strength of true predictors. For each DGM, 1000 training datasets of size 200 or 4000 with binary outcomes were simulated, and random forest models were trained with minimum node size 2 or 20 using the ranger R package, resulting in 192 scenarios in total. Model performance was evaluated on large test datasets (N = 100,000).</p><p><strong>Results: </strong>The visualizations suggested that the model learned \"spikes of probability\" around events in the training set. A cluster of events created a bigger peak or plateau (signal), isolated events local peaks (noise). In the simulation study, median training AUCs were between 0.97 and 1 unless there were 4 binary predictors or 16 binary predictors with a minimum node size of 20. The median discrimination loss, i.e., the difference between the median test AUC and the true AUC, was 0.025 (range 0.00 to 0.13). Median training AUCs had Spearman correlations of around 0.70 with discrimination loss. Median test AUCs were higher with higher events per variable, higher minimum node size, and binary predictors. Median training calibration slopes were always above 1 and were not correlated with median test slopes across scenarios (Spearman correlation - 0.11). Median test slopes were higher with higher true AUC, higher minimum node size, and higher sample size.</p><p><strong>Conclusions: </strong>Random forests learn local probability peaks that often yield near perfect training AUCs without strongly affecting AUCs on test data. When the aim is probability estimation, the simulation results go against the common recommendation to use fully grown trees in random forest models.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"14"},"PeriodicalIF":0.0,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11437774/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142333691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A review of methods for the analysis of diagnostic tests performed in sequence
Pub Date: 2024-09-03. DOI: 10.1186/s41512-024-00175-3
Thomas R Fanshawe, Brian D Nicholson, Rafael Perera, Jason L Oke
Background: Many clinical pathways for the diagnosis of disease are based on diagnostic tests that are performed in sequence. The performance of the full diagnostic sequence is dictated by the diagnostic performance of each test in the sequence as well as the conditional dependence between them, given true disease status. Resulting estimates of performance, such as the sensitivity and specificity of the test sequence, are key parameters in health-economic evaluations. We conducted a methodological review of statistical methods for assessing the performance of diagnostic tests performed in sequence, with the aim of guiding data analysts towards classes of methods that may be suitable given the design and objectives of the testing sequence.
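As a concrete illustration of how conditional dependence enters, the sketch below computes the sensitivity and specificity of a simple two-test "both positive" sequence, with covariance terms capturing the dependence between tests given disease status; the function and values are illustrative and are not taken from the review.

```python
# Sketch: sensitivity and specificity of a two-test sequence in which the second
# test is only applied (and must also be positive) after a positive first test.
# The covariance terms capture conditional dependence given true disease status;
# setting them to zero recovers the usual conditional-independence assumption.

def sequence_accuracy(se1, sp1, se2, sp2, cov_pos=0.0, cov_neg=0.0):
    """Return (sensitivity, specificity) of the 'both positive' sequence."""
    sens = se1 * se2 + cov_pos                       # P(T1+ and T2+ | diseased)
    false_pos = (1 - sp1) * (1 - sp2) + cov_neg      # P(T1+ and T2+ | not diseased)
    return sens, 1 - false_pos

# Illustrative values only.
print(sequence_accuracy(0.90, 0.80, 0.85, 0.90))                              # independence
print(sequence_accuracy(0.90, 0.80, 0.85, 0.90, cov_pos=0.05, cov_neg=0.03))  # dependence
```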
Methods: We searched PubMed, Scopus and Web of Science for relevant papers describing methodology for analysing sequences of diagnostic tests. Papers were classified by the characteristics of the method used, and these were used to group methods into themes. We illustrate some of the methods using data from a cohort study of repeat faecal immunochemical testing for colorectal cancer in symptomatic patients, to highlight the importance of allowing for conditional dependence in test sequences and adjustment for an imperfect reference standard.
Results: Five overall themes were identified, detailing methods for combining multiple tests in sequence, estimating conditional dependence, analysing sequences of diagnostic tests used for risk assessment, analysing test sequences in conjunction with an imperfect or incomplete reference standard, and meta-analysis of test sequences.
Conclusions: This methodological review can be used to help researchers identify suitable analytic methods for studies that use diagnostic tests performed in sequence.
{"title":"A review of methods for the analysis of diagnostic tests performed in sequence.","authors":"Thomas R Fanshawe, Brian D Nicholson, Rafael Perera, Jason L Oke","doi":"10.1186/s41512-024-00175-3","DOIUrl":"10.1186/s41512-024-00175-3","url":null,"abstract":"<p><strong>Background: </strong>Many clinical pathways for the diagnosis of disease are based on diagnostic tests that are performed in sequence. The performance of the full diagnostic sequence is dictated by the diagnostic performance of each test in the sequence as well as the conditional dependence between them, given true disease status. Resulting estimates of performance, such as the sensitivity and specificity of the test sequence, are key parameters in health-economic evaluations. We conducted a methodological review of statistical methods for assessing the performance of diagnostic tests performed in sequence, with the aim of guiding data analysts towards classes of methods that may be suitable given the design and objectives of the testing sequence.</p><p><strong>Methods: </strong>We searched PubMed, Scopus and Web of Science for relevant papers describing methodology for analysing sequences of diagnostic tests. Papers were classified by the characteristics of the method used, and these were used to group methods into themes. We illustrate some of the methods using data from a cohort study of repeat faecal immunochemical testing for colorectal cancer in symptomatic patients, to highlight the importance of allowing for conditional dependence in test sequences and adjustment for an imperfect reference standard.</p><p><strong>Results: </strong>Five overall themes were identified, detailing methods for combining multiple tests in sequence, estimating conditional dependence, analysing sequences of diagnostic tests used for risk assessment, analysing test sequences in conjunction with an imperfect or incomplete reference standard, and meta-analysis of test sequences.</p><p><strong>Conclusions: </strong>This methodological review can be used to help researchers identify suitable analytic methods for studies that use diagnostic tests performed in sequence.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"8"},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11370044/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142121261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical and analytical considerations when performing interim analyses in diagnostic test accuracy studies
Pub Date: 2024-08-20. DOI: 10.1186/s41512-024-00174-4
Susannah Fleming, Lazaro Mwandigha, Thomas R Fanshawe
Interim analysis is a common methodology in randomised clinical trials but has received less attention in studies of diagnostic test accuracy. In such studies, early termination for futility may be beneficial if early evidence indicates that a diagnostic test is unlikely to achieve a clinically useful level of diagnostic performance, as measured by the sensitivity and specificity. In this paper, we describe relevant practical and analytical considerations when planning and performing interim analysis in diagnostic accuracy studies, focusing on stopping rules for futility. We present an adaptation of the exact group sequential method for diagnostic testing, with R code provided for implementing this method in practice. The method is illustrated using two simulated data sets and data from a published diagnostic accuracy study for point-of-care testing for SARS-CoV-2. The considerations described in this paper can be used to guide decisions as to when an interim analysis in a diagnostic accuracy study is suitable and highlight areas for further methodological development.
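The adaptation of the exact group sequential method and its accompanying R code are given in the paper itself. Purely to illustrate the underlying idea, the sketch below implements a much simpler futility check in Python: stop when the exact (Clopper-Pearson) upper bound for sensitivity at the interim falls below the minimum clinically useful value. The numbers are invented and this is not the paper's method.

```python
# Sketch: a simple interim futility check for a diagnostic accuracy study.
# This is NOT the exact group sequential method described in the paper; it only
# illustrates stopping when a clinically useful sensitivity becomes implausible.
from scipy.stats import beta

def futility_check(true_positives, n_diseased, target_sensitivity, alpha=0.025):
    """Stop for futility if the one-sided exact (Clopper-Pearson) upper bound
    for sensitivity is below the minimum clinically useful value."""
    upper = beta.ppf(1 - alpha, true_positives + 1, n_diseased - true_positives)
    return upper < target_sensitivity, upper

stop, upper = futility_check(true_positives=18, n_diseased=40, target_sensitivity=0.80)
print(f"upper bound = {upper:.3f}, stop for futility = {stop}")
```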
{"title":"Practical and analytical considerations when performing interim analyses in diagnostic test accuracy studies.","authors":"Susannah Fleming, Lazaro Mwandigha, Thomas R Fanshawe","doi":"10.1186/s41512-024-00174-4","DOIUrl":"10.1186/s41512-024-00174-4","url":null,"abstract":"<p><p>Interim analysis is a common methodology in randomised clinical trials but has received less attention in studies of diagnostic test accuracy. In such studies, early termination for futility may be beneficial if early evidence indicates that a diagnostic test is unlikely to achieve a clinically useful level of diagnostic performance, as measured by the sensitivity and specificity. In this paper, we describe relevant practical and analytical considerations when planning and performing interim analysis in diagnostic accuracy studies, focusing on stopping rules for futility. We present an adaptation of the exact group sequential method for diagnostic testing, with R code provided for implementing this method in practice. The method is illustrated using two simulated data sets and data from a published diagnostic accuracy study for point-of-care testing for SARS-CoV-2. The considerations described in this paper can be used to guide decisions as to when an interim analysis in a diagnostic accuracy study is suitable and highlight areas for further methodological development.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"12"},"PeriodicalIF":0.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11334588/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142006023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Systematic review of methods used in prediction models with recurrent event data
Pub Date: 2024-08-06. DOI: 10.1186/s41512-024-00173-5
Victoria Watson, Catrin Tudur Smith, Laura J Bonnett
Background: Patients who suffer from chronic conditions or diseases are susceptible to experiencing repeated events of the same type (e.g. seizures), termed 'recurrent events'. Prediction models can be used to predict the risk of recurrence so that intervention or management can be tailored accordingly, but statistical methodology can vary. The objective of this systematic review was to identify and describe statistical approaches that have been applied for the development and validation of multivariable prediction models with recurrent event data. A secondary objective was to informally assess the characteristics and quality of analysis approaches used in the development and validation of prediction models of recurrent event data.
Methods: Searches were run in MEDLINE in 2019 using a search strategy that included index terms and phrases related to recurrent events and prediction models. To be included in the review, studies must have developed or validated a multivariable clinical prediction model for recurrent event outcome data, specifically modelling the recurrent events and the timing between them. The statistical analysis methods used to analyse the recurrent event data in the clinical prediction model were extracted to answer the primary aim of the systematic review. In addition, items such as the event rate, as well as any discrimination and calibration statistics used to assess model performance, were extracted for the secondary aim of the review.
Results: A total of 855 publications were identified using the developed search strategy and 301 of these are included in our systematic review. The Andersen-Gill method was identified as the most commonly applied method in the analysis of recurrent events, which was used in 152 (50.5%) studies. This was closely followed by frailty models which were used in 116 (38.5%) included studies. Of the 301 included studies, only 75 (24.9%) internally validated their model(s) and three (1.0%) validated their model(s) in an external dataset.
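For readers unfamiliar with the Andersen-Gill approach, it treats each recurrent event as the end of a separate at-risk interval in counting-process (start, stop] format. The sketch below shows one possible Python route using lifelines' CoxTimeVaryingFitter on toy data; the column names and values are illustrative, and a full Andersen-Gill analysis would normally also use robust (cluster) standard errors.

```python
# Sketch: an Andersen-Gill-style analysis of recurrent events using counting-process
# (start, stop] intervals. Uses lifelines' CoxTimeVaryingFitter, which fits a Cox
# model to start-stop data; the toy data and column names are illustrative only.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per at-risk interval; `event` marks whether the interval ends in a recurrence.
intervals = pd.DataFrame({
    "id":      [1, 1, 1, 2, 2, 3, 3, 4],
    "start":   [0, 20, 60, 0, 40, 0, 55, 0],
    "stop":    [20, 60, 100, 40, 95, 55, 110, 80],
    "event":   [1, 1, 0, 1, 0, 1, 1, 0],
    "treated": [1, 1, 1, 0, 0, 1, 1, 0],
})

ctv = CoxTimeVaryingFitter()
ctv.fit(intervals, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()   # hazard ratio for `treated` across all at-risk intervals
```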
Conclusions: This review identified a variety of methods which are used in practice when developing or validating prediction models for recurrent events. The variability of the approaches identified is cause for concern, as it indicates possible immaturity in the field and highlights the need for more methodological research to bring greater consistency to the analysis of recurrent event data. Further work is required to ensure publications report all required information and use robust statistical methods for model development and validation.
{"title":"Systematic review of methods used in prediction models with recurrent event data.","authors":"Victoria Watson, Catrin Tudur Smith, Laura J Bonnett","doi":"10.1186/s41512-024-00173-5","DOIUrl":"10.1186/s41512-024-00173-5","url":null,"abstract":"<p><strong>Background: </strong>Patients who suffer from chronic conditions or diseases are susceptible to experiencing repeated events of the same type (e.g. seizures), termed 'recurrent events'. Prediction models can be used to predict the risk of recurrence so that intervention or management can be tailored accordingly, but statistical methodology can vary. The objective of this systematic review was to identify and describe statistical approaches that have been applied for the development and validation of multivariable prediction models with recurrent event data. A secondary objective was to informally assess the characteristics and quality of analysis approaches used in the development and validation of prediction models of recurrent event data.</p><p><strong>Methods: </strong>Searches were run in MEDLINE using a search strategy in 2019 which included index terms and phrases related to recurrent events and prediction models. For studies to be included in the review they must have developed or validated a multivariable clinical prediction model for recurrent event outcome data, specifically modelling the recurrent events and the timing between them. The statistical analysis methods used to analyse the recurrent event data in the clinical prediction model were extracted to answer the primary aim of the systematic review. In addition, items such as the event rate as well as any discrimination and calibration statistics that were used to assess the model performance were extracted for the secondary aim of the review.</p><p><strong>Results: </strong>A total of 855 publications were identified using the developed search strategy and 301 of these are included in our systematic review. The Andersen-Gill method was identified as the most commonly applied method in the analysis of recurrent events, which was used in 152 (50.5%) studies. This was closely followed by frailty models which were used in 116 (38.5%) included studies. Of the 301 included studies, only 75 (24.9%) internally validated their model(s) and three (1.0%) validated their model(s) in an external dataset.</p><p><strong>Conclusions: </strong>This review identified a variety of methods which are used in practice when developing or validating prediction models for recurrent events. The variability of the approaches identified is cause for concern as it indicates possible immaturity in the field and highlights the need for more methodological research to bring greater consistency in approach of recurrent event analysis. 
Further work is required to ensure publications report all required information and use robust statistical methods for model development and validation.</p><p><strong>Prospero registration: </strong>CRD42019116031.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11302841/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a prediction model of conversion to Alzheimer's disease in people with mild cognitive impairment: the statistical analysis plan of the INTERCEPTOR project
Pub Date: 2024-07-25. DOI: 10.1186/s41512-024-00172-6
Flavia L Lombardo, Patrizia Lorenzini, Flavia Mayer, Marco Massari, Paola Piscopo, Ilaria Bacigalupo, Antonio Ancidoni, Francesco Sciancalepore, Nicoletta Locuratolo, Giulia Remoli, Simone Salemme, Stefano Cappa, Daniela Perani, Patrizia Spadin, Fabrizio Tagliavini, Alberto Redolfi, Maria Cotelli, Camillo Marra, Naike Caraglia, Fabrizio Vecchio, Francesca Miraglia, Paolo Maria Rossini, Nicola Vanacore
Background: In recent years, significant efforts have been directed towards the research and development of disease-modifying therapies for dementia. These drugs focus on prodromal (mild cognitive impairment, MCI) and/or early stages of Alzheimer's disease (AD). Literature evidence indicates that a considerable proportion of individuals with MCI do not progress to dementia. Identifying individuals at higher risk of developing dementia is essential for appropriate management, including the prescription of new disease-modifying therapies expected to become available in clinical practice in the near future.
Methods: The ongoing INTERCEPTOR study is a multicenter, longitudinal, interventional, non-therapeutic cohort study designed to enroll 500 individuals with MCI aged 50-85 years. The primary aim is to identify a biomarker or a set of biomarkers able to accurately predict the conversion from MCI to AD dementia within 3 years of follow-up. The biomarkers investigated in this study are neuropsychological tests (mini-mental state examination (MMSE) and delayed free recall), brain glucose metabolism ([18F]FDG-PET), MRI volumetry of the hippocampus, EEG brain connectivity, cerebrospinal fluid (CSF) markers (p-tau, t-tau, Aβ1-42, Aβ1-42/1-40 ratio, Aβ1-42/p-Tau ratio) and APOE genotype. The baseline visit includes a full cognitive and neuropsychological evaluation, as well as the collection of clinical and socio-demographic information. Prognostic models will be developed using Cox regression, incorporating individual characteristics and biomarkers through stepwise selection. Model performance will be evaluated in terms of discrimination and calibration and subjected to internal validation using the bootstrapping procedure. The final model will be visually represented as a nomogram.
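As an illustration of the bootstrap internal validation step described above, the sketch below applies the standard optimism-correction procedure to the c-index of a Cox model using lifelines; the data, predictors, and number of bootstrap replicates are invented and do not reflect the INTERCEPTOR analysis itself.

```python
# Sketch: optimism-corrected c-index for a Cox model via bootstrap internal
# validation. Data, column names, and the number of replicates are illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "mmse": rng.normal(26, 3, n),                    # hypothetical predictor
    "hippocampal_volume": rng.normal(3.0, 0.4, n),   # hypothetical predictor
    "time": rng.exponential(36, n).round(1),         # months of follow-up
    "converted": rng.binomial(1, 0.4, n),            # conversion to AD dementia
})

def c_index(model, data):
    # Higher partial hazard means shorter time to conversion, hence the minus sign.
    return concordance_index(data["time"], -model.predict_partial_hazard(data),
                             data["converted"])

full = CoxPHFitter().fit(df, duration_col="time", event_col="converted")
apparent = c_index(full, df)

optimism = []
for _ in range(200):
    boot = df.sample(n=len(df), replace=True)
    m = CoxPHFitter().fit(boot, duration_col="time", event_col="converted")
    optimism.append(c_index(m, boot) - c_index(m, df))

print(f"apparent c-index = {apparent:.3f}, "
      f"optimism-corrected = {apparent - np.mean(optimism):.3f}")
```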
Discussion: This paper contains a detailed description of the statistical analysis plan to ensure the reproducibility and transparency of the analysis. The prognostic model developed in this study aims to identify the population with MCI at higher risk of developing AD dementia, potentially eligible for drug prescriptions. The nomogram could provide a valuable tool for clinicians for risk stratification and early treatment decisions.
Trial registration: ClinicalTrials.gov NCT03834402. Registered on February 8, 2019.
{"title":"Development of a prediction model of conversion to Alzheimer's disease in people with mild cognitive impairment: the statistical analysis plan of the INTERCEPTOR project.","authors":"Flavia L Lombardo, Patrizia Lorenzini, Flavia Mayer, Marco Massari, Paola Piscopo, Ilaria Bacigalupo, Antonio Ancidoni, Francesco Sciancalepore, Nicoletta Locuratolo, Giulia Remoli, Simone Salemme, Stefano Cappa, Daniela Perani, Patrizia Spadin, Fabrizio Tagliavini, Alberto Redolfi, Maria Cotelli, Camillo Marra, Naike Caraglia, Fabrizio Vecchio, Francesca Miraglia, Paolo Maria Rossini, Nicola Vanacore","doi":"10.1186/s41512-024-00172-6","DOIUrl":"10.1186/s41512-024-00172-6","url":null,"abstract":"<p><strong>Background: </strong>In recent years, significant efforts have been directed towards the research and development of disease-modifying therapies for dementia. These drugs focus on prodromal (mild cognitive impairment, MCI) and/or early stages of Alzheimer's disease (AD). Literature evidence indicates that a considerable proportion of individuals with MCI do not progress to dementia. Identifying individuals at higher risk of developing dementia is essential for appropriate management, including the prescription of new disease-modifying therapies expected to become available in clinical practice in the near future.</p><p><strong>Methods: </strong>The ongoing INTERCEPTOR study is a multicenter, longitudinal, interventional, non-therapeutic cohort study designed to enroll 500 individuals with MCI aged 50-85 years. The primary aim is to identify a biomarker or a set of biomarkers able to accurately predict the conversion from MCI to AD dementia within 3 years of follow-up. The biomarkers investigated in this study are neuropsychological tests (mini-mental state examination (MMSE) and delayed free recall), brain glucose metabolism ([<sup>18</sup>F]FDG-PET), MRI volumetry of the hippocampus, EEG brain connectivity, cerebrospinal fluid (CSF) markers (p-tau, t-tau, Aβ1-42, Aβ1-42/1-40 ratio, Aβ1-42/p-Tau ratio) and APOE genotype. The baseline visit includes a full cognitive and neuropsychological evaluation, as well as the collection of clinical and socio-demographic information. Prognostic models will be developed using Cox regression, incorporating individual characteristics and biomarkers through stepwise selection. Model performance will be evaluated in terms of discrimination and calibration and subjected to internal validation using the bootstrapping procedure. The final model will be visually represented as a nomogram.</p><p><strong>Discussion: </strong>This paper contains a detailed description of the statistical analysis plan to ensure the reproducibility and transparency of the analysis. The prognostic model developed in this study aims to identify the population with MCI at higher risk of developing AD dementia, potentially eligible for drug prescriptions. The nomogram could provide a valuable tool for clinicians for risk stratification and early treatment decisions.</p><p><strong>Trial registration: </strong>ClinicalTrials.gov NCT03834402. 
Registered on February 8, 2019.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11271065/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141763069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating diagnostic accuracy of an RT-PCR test for the detection of SARS-CoV-2 in saliva
Pub Date: 2024-07-24. DOI: 10.1186/s41512-024-00176-2
Natasha Samsunder, Aida Sivro, Razia Hassan-Moosa, Lara Lewis, Zahra Kara, Cheryl Baxter, Quarraisha Abdool Karim, Salim Abdool Karim, Ayesha B M Kharsany, Kogieleum Naidoo, Sinaye Ngcapu
Background and objective: Saliva has been proposed as a potentially more convenient, cost-effective, and easier-to-collect sample for diagnosing SARS-CoV-2 infections, but there is limited knowledge of the impact of saliva volume and stage of infection on its sensitivity and specificity.
Methods: In this study, we assessed the performance of SARS-CoV-2 testing in 171 saliva samples from 52 mostly mildly symptomatic patients (aged 18 to 70 years) with a positive reference standard result at screening. The samples were collected at different volumes (50, 100, 300, and 500 µl of saliva) and at different stages of the disease (at enrollment, day 7, 14, and 28 post SARS-CoV-2 diagnosis). Imperfect nasopharyngeal (NP) swab nucleic acid amplification testing was used as a reference. We used a logistic regression with generalized estimating equations to estimate sensitivity, specificity, PPV, and NPV, accounting for the correlation between repeated observations.
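The GEE analysis described here can be sketched as follows: a logistic GEE with an exchangeable working correlation estimates, for example, sensitivity from the repeated saliva results of reference-positive visits without treating samples from the same patient as independent. The data and column names below are illustrative, not the study data.

```python
# Sketch: estimating sensitivity from repeated saliva samples with a logistic GEE
# and an exchangeable working correlation, so repeated observations from the same
# patient are not treated as independent. Data and column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
patients = np.repeat(np.arange(50), 4)               # 4 visits per patient
df = pd.DataFrame({
    "patient_id": patients,
    # Assume rows are already restricted to visits with a positive NP swab
    # (the reference), so P(saliva positive) is the saliva test's sensitivity.
    "saliva_positive": rng.binomial(1, 0.75, len(patients)),
})

gee = smf.gee("saliva_positive ~ 1", groups="patient_id", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

expit = lambda x: 1 / (1 + np.exp(-x))
ci_low, ci_high = gee.conf_int().loc["Intercept"]
print(f"sensitivity = {expit(gee.params['Intercept']):.3f} "
      f"(95% CI {expit(ci_low):.3f}-{expit(ci_high):.3f})")
```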
Results: The sensitivity and specificity values were consistent across saliva volumes. The sensitivity of saliva samples ranged from 70.2% (95% CI, 49.3-85.0%) for 100 μl to 81.0% (95% CI, 51.9-94.4%) for 300 μl of saliva collected. The specificity values ranged between 75.8% (95% CI, 55.0-88.9%) for 50 μl and 78.8% (95% CI, 63.2-88.9%) for 100 μl saliva compared to NP swab samples. The overall percentage of positive results in NP swabs and saliva specimens remained comparable throughout the study visits. We observed no significant difference in cycle number values between saliva and NP swab specimens, irrespective of saliva volume tested.
Conclusions: The saliva collection offers a promising approach for population-based testing.
{"title":"Evaluating diagnostic accuracy of an RT-PCR test for the detection of SARS-CoV-2 in saliva.","authors":"Natasha Samsunder, Aida Sivro, Razia Hassan-Moosa, Lara Lewis, Zahra Kara, Cheryl Baxter, Quarraisha Abdool Karim, Salim Abdool Karim, Ayesha B M Kharsany, Kogieleum Naidoo, Sinaye Ngcapu","doi":"10.1186/s41512-024-00176-2","DOIUrl":"10.1186/s41512-024-00176-2","url":null,"abstract":"<p><strong>Background and objective: </strong>Saliva has been proposed as a potential more convenient, cost-effective, and easier sample for diagnosing SARS-CoV-2 infections, but there is limited knowledge of the impact of saliva volumes and stages of infection on its sensitivity and specificity.</p><p><strong>Methods: </strong>In this study, we assessed the performance of SARS-CoV-2 testing in 171 saliva samples from 52 mostly mildly symptomatic patients (aged 18 to 70 years) with a positive reference standard result at screening. The samples were collected at different volumes (50, 100, 300, and 500 µl of saliva) and at different stages of the disease (at enrollment, day 7, 14, and 28 post SARS-CoV-2 diagnosis). Imperfect nasopharyngeal (NP) swab nucleic acid amplification testing was used as a reference. We used a logistic regression with generalized estimating equations to estimate sensitivity, specificity, PPV, and NPV, accounting for the correlation between repeated observations.</p><p><strong>Results: </strong>The sensitivity and specificity values were consistent across saliva volumes. The sensitivity of saliva samples ranged from 70.2% (95% CI, 49.3-85.0%) for 100 μl to 81.0% (95% CI, 51.9-94.4%) for 300 μl of saliva collected. The specificity values ranged between 75.8% (95% CI, 55.0-88.9%) for 50 μl and 78.8% (95% CI, 63.2-88.9%) for 100 μl saliva compared to NP swab samples. The overall percentage of positive results in NP swabs and saliva specimens remained comparable throughout the study visits. We observed no significant difference in cycle number values between saliva and NP swab specimens, irrespective of saliva volume tested.</p><p><strong>Conclusions: </strong>The saliva collection offers a promising approach for population-based testing.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"9"},"PeriodicalIF":0.0,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11267770/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141753522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Protocol for the development and validation of a Polypharmacy Assessment Score
Pub Date: 2024-07-16. DOI: 10.1186/s41512-024-00171-7
Jung Yin Tsang, Matthew Sperrin, Thomas Blakeman, Rupert A Payne, Darren M Ashcroft
Background: An increasing number of people are taking multiple medications each day, a situation termed polypharmacy. This is driven by an ageing population, increasing multimorbidity, and single disease-focussed guidelines. Medications carry obvious benefits, yet polypharmacy is also linked to adverse consequences including adverse drug events, drug-drug and drug-disease interactions, poor patient experience and wasted resources. Problematic polypharmacy is 'the prescribing of multiple medicines inappropriately, or where the intended benefits are not realised'. Identifying people with problematic polypharmacy is complex, as multiple medicines can be suitable for people with several chronic conditions requiring more treatment. Hence, polypharmacy is often potentially problematic, rather than always inappropriate, depending on clinical context and individual benefit versus risk. There is a need to improve how we identify and evaluate these patients by extending beyond simple counts of medicines to include individual factors and long-term conditions.
Aim: To produce a Polypharmacy Assessment Score to identify a population with unusual levels of prescribing who may be at risk of potentially problematic polypharmacy.
Methods: Analyses will be performed in three parts: 1. A prediction model will be constructed using the observed medication count as the dependent variable, with age, gender and long-term conditions as independent variables. A 'Polypharmacy Assessment Score' will then be constructed through calculating the differences between the observed and expected count of prescribed medications, thereby highlighting people who have unexpected levels of prescribing. Parts 2 and 3 will examine different aspects of validity of the Polypharmacy Assessment Score: 2. To assess 'construct validity', cross-sectional analyses will evaluate high-risk prescribing within populations defined by a range of Polypharmacy Assessment Scores, using both explicit (STOPP/START criteria) and implicit (Medication Appropriateness Index) measures of inappropriate prescribing. 3. To assess 'predictive validity', a retrospective cohort study will explore differences in clinical outcomes (adverse drug reactions, unplanned hospitalisation and all-cause mortality) between differing scores.
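A minimal sketch of Part 1 is given below: a count model for the expected number of medications given age, gender and long-term conditions, with the score defined as observed minus expected. The protocol does not fix the form of the count model, so the Poisson GLM, data, and column names here are assumptions for illustration only.

```python
# Sketch of Part 1: model the expected medication count from age, gender and number
# of long-term conditions, then score each person as observed minus expected.
# The Poisson GLM and all data/column names are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(18, 95, n),
    "female": rng.binomial(1, 0.55, n),
    "n_conditions": rng.poisson(2.0, n),
})
df["n_medications"] = rng.poisson(0.5 + 0.05 * df["age"] / 10 + 1.2 * df["n_conditions"])

model = smf.glm("n_medications ~ age + female + n_conditions",
                data=df, family=sm.families.Poisson()).fit()

df["expected"] = model.predict(df)
df["polypharmacy_assessment_score"] = df["n_medications"] - df["expected"]

# People with the largest positive scores take more medications than expected
# given their age, gender and number of long-term conditions.
print(df.nlargest(5, "polypharmacy_assessment_score")[
    ["age", "n_conditions", "n_medications", "expected", "polypharmacy_assessment_score"]])
```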
Discussion: Developing a cross-cutting measure of polypharmacy may allow healthcare professionals to prioritise and risk stratify patients with polypharmacy using unusual levels of prescribing. This would be an improvement from current approaches of either using simple cutoffs or narrow prescribing criteria.
{"title":"Protocol for the development and validation of a Polypharmacy Assessment Score.","authors":"Jung Yin Tsang, Matthew Sperrin, Thomas Blakeman, Rupert A Payne, Darren M Ashcroft","doi":"10.1186/s41512-024-00171-7","DOIUrl":"10.1186/s41512-024-00171-7","url":null,"abstract":"<p><strong>Background: </strong>An increasing number of people are using multiple medications each day, named polypharmacy. This is driven by an ageing population, increasing multimorbidity, and single disease-focussed guidelines. Medications carry obvious benefits, yet polypharmacy is also linked to adverse consequences including adverse drug events, drug-drug and drug-disease interactions, poor patient experience and wasted resources. Problematic polypharmacy is 'the prescribing of multiple medicines inappropriately, or where the intended benefits are not realised'. Identifying people with problematic polypharmacy is complex, as multiple medicines can be suitable for people with several chronic conditions requiring more treatment. Hence, polypharmacy is often potentially problematic, rather than always inappropriate, dependent on clinical context and individual benefit vs risk. There is a need to improve how we identify and evaluate these patients by extending beyond simple counts of medicines to include individual factors and long-term conditions.</p><p><strong>Aim: </strong>To produce a Polypharmacy Assessment Score to identify a population with unusual levels of prescribing who may be at risk of potentially problematic polypharmacy.</p><p><strong>Methods: </strong>Analyses will be performed in three parts: 1. A prediction model will be constructed using observed medications count as the dependent variable, with age, gender and long-term conditions as independent variables. A 'Polypharmacy Assessment Score' will then be constructed through calculating the differences between the observed and expected count of prescribed medications, thereby highlighting people that have unexpected levels of prescribing. Parts 2 and 3 will examine different aspects of validity of the Polypharmacy Assessment Score: 2. To assess 'construct validity', cross-sectional analyses will evaluate high-risk prescribing within populations defined by a range of Polypharmacy Assessment Scores, using both explicit (STOPP/START criteria) and implicit (Medication Appropriateness Index) measures of inappropriate prescribing. 3. To assess 'predictive validity', a retrospective cohort study will explore differences in clinical outcomes (adverse drug reactions, unplanned hospitalisation and all-cause mortality) between differing scores.</p><p><strong>Discussion: </strong>Developing a cross-cutting measure of polypharmacy may allow healthcare professionals to prioritise and risk stratify patients with polypharmacy using unusual levels of prescribing. 
This would be an improvement from current approaches of either using simple cutoffs or narrow prescribing criteria.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"8 1","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11251249/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}