Pub Date: 2023-01-01 | DOI: 10.1177/0272989X221112420
Rowan Iskandar, Cassandra Berns
Highlights: A Markov model simulates the average experience of a cohort of patients. Monte Carlo simulation, the standard approach for estimating the variance, is computationally expensive. A multinomial distribution provides an exact representation of a Markov model. Using the known formulas of a multinomial distribution, the mean and variance of a Markov model can be readily calculated.
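The multinomial representation can be sketched numerically: for a cohort of independent patients, the state counts after t cycles follow a multinomial distribution with probabilities p0·P^t, so the mean and variance come from closed-form formulas with no Monte Carlo simulation. The transition matrix, cohort size, and state labels below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Sketch of the multinomial representation of a Markov cohort model.
# Transition probabilities and cohort size are invented for illustration.
P = np.array([[0.85, 0.10, 0.05],   # rows: Healthy, Sick, Dead (absorbing)
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])
p0 = np.array([1.0, 0.0, 0.0])      # everyone starts Healthy
N = 1000                            # cohort size
t = 10                              # number of model cycles

# State-occupancy probabilities after t cycles: p_t = p0 @ P^t.
pt = p0 @ np.linalg.matrix_power(P, t)

# For N independent patients, the state counts at cycle t follow
# Multinomial(N, p_t), so mean and variance are available in closed form:
mean_counts = N * pt
var_counts = N * pt * (1 - pt)
```

The same closed forms apply at every cycle, which is what makes the exact mean and variance "readily calculated" relative to running many Monte Carlo replications.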
Markov Cohort State-Transition Model: A Multinomial Distribution Representation. Medical Decision Making. 2023;43(1):139-142.
Pub Date: 2023-01-01 | DOI: 10.1177/0272989X221132741
Valerio Benedetto, Luís Filipe, Catherine Harris, Joseph Spencer, Carmel Hickson, Andrew Clegg
Background: Digital health interventions (DHIs) can improve the provision of health care services. To fully account for their effects in economic evaluations, traditional methods based on measuring health-related quality of life may not be appropriate, as nonhealth and process outcomes are likely to be relevant too.
Purpose: This systematic review identifies, assesses, and synthesizes the arguments on the analytical frameworks and outcome measures used in economic evaluations of DHIs. The results informed recommendations for future economic evaluations.
Data sources: We ran searches on multiple databases, complemented by gray literature and backward and forward citation searches.
Study selection: We included records containing theoretical and empirical arguments associated with the use of analytical frameworks and outcome measures for economic evaluations of DHIs. Following title/abstract and full-text screening, our final analysis included 15 studies.
Data extraction: The arguments we extracted related to analytical frameworks (14 studies), generic outcome measures (5 studies), techniques used to elicit utility values (3 studies), and disease-specific outcome measures and instruments to collect health state data (both from 2 studies).
Data synthesis: Rather than assessing the quality of the studies, we critically assessed and synthesized the extracted arguments. Building on this synthesis, we developed a 3-stage set of recommendations encouraging the use of impact matrices and analyses of equity impacts to complement traditional economic evaluation methods.
Limitations: Our review and recommendations explored, but did not fully cover, other potentially important aspects of economic evaluations that were outside our scope.
Conclusions: This is the first systematic review to summarize the arguments on how the effects of DHIs could be measured in economic evaluations. Our recommendations will help design future economic evaluations.
Highlights: Using traditional outcome measures based on health-related quality of life (such as the quality-adjusted life-year) may not be appropriate in economic evaluations of digital health interventions, which are likely to trigger nonhealth and process outcomes. This is the first systematic review to investigate how the effects of digital health interventions could be measured in economic evaluations. We extracted and synthesized different arguments from the literature, outlining advantages and disadvantages associated with different methods used to measure the effects of digital health interventions. We propose a methodological set of recommendations in which 1) we suggest that researchers consider the use of impact matrices and cost-consequence analysis, 2) we discuss the suitability of analytical frame
Analytical Frameworks and Outcome Measures in Economic Evaluations of Digital Health Interventions: A Methodological Systematic Review. Medical Decision Making. 2023;43(1):125-138. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9742632/pdf/
Pub Date: 2023-01-01 | DOI: 10.1177/0272989X221132256
Maryam Alimohammadi, W Art Chaovalitwongse, Hubert J Vesselle, Shengfan Zhang
Background: Lung volume reduction surgery (LVRS) and medical therapy are 2 available treatment options for severe emphysema, a chronic lung disease. However, there are currently limited guidelines on the timing of LVRS for patients with different characteristics.
Objective: The objective of this study is to assess the timing of receiving LVRS in terms of patient outcomes, taking into consideration a patient's characteristics.
Methods: A finite-horizon Markov decision process model for patients with severe emphysema was developed to determine the short-term (5 y) and long-term timing of emphysema treatment. The expected life expectancy, expected quality-adjusted life-years, and total expected cost of each treatment option were used as the objective functions of the model. Model parameters were estimated from the data provided by the National Emphysema Treatment Trial.
Results: The results indicate that the treatment timing strategy for patients with upper-lobe predominant emphysema is to receive LVRS regardless of their specific characteristics. However, for patients with non-upper-lobe-predominant emphysema, the optimal strategy depends on the age, maximum workload level, and forced expiratory volume in 1 second level.
Conclusion: This study demonstrates the utilization of clinical trial data to gain insights into the timing of surgical treatment for patients with emphysema, considering patient age, observable health condition, and location of emphysema.
Highlights: Both short-term and long-term Markov decision process models were developed to assess the timing of receiving lung volume reduction surgery in patients with severe emphysema. How clinical trial data can be used to estimate the parameters and obtain short-term results from the Markov decision process model is demonstrated. The results provide insights into the timing of receiving lung volume reduction surgery as a function of a patient's characteristics, including age, emphysema location, maximum workload level, and forced expiratory volume in 1 second.
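A finite-horizon Markov decision process of the kind described is typically solved by backward induction. The toy sketch below shows the mechanics only; the states, QALY rewards, and transition probabilities are invented, not the NETT-derived parameters.

```python
import numpy as np

# Toy finite-horizon MDP: 3 health states, 2 actions (0 = medical therapy,
# 1 = surgery). All rewards and transition probabilities are invented.
T = 5                                   # planning horizon, years
rewards = np.array([[0.80, 0.60],       # QALYs per year, by (state, action)
                    [0.60, 0.50],
                    [0.30, 0.40]])
P = np.array([                          # transition matrices, shape (action, from, to)
    [[0.90, 0.10, 0.00], [0.00, 0.80, 0.20], [0.00, 0.00, 1.00]],
    [[0.95, 0.04, 0.01], [0.10, 0.80, 0.10], [0.00, 0.00, 1.00]],
])

V = np.zeros(3)                         # terminal value function
policy = np.zeros((T, 3), dtype=int)
for t in reversed(range(T)):            # backward induction over the horizon
    Q = rewards + np.einsum('aij,j->ia', P, V)   # Q[state, action]
    policy[t] = Q.argmax(axis=1)        # optimal action per state at time t
    V = Q.max(axis=1)                   # value of acting optimally from t on
```

`policy[t]` then gives, for each observable state, whether to operate at decision epoch t, which is the kind of characteristic-dependent timing rule the study reports.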
Utilizing Clinical Trial Data to Assess Timing of Surgical Treatment for Emphysema Patients. Medical Decision Making. 2023;43(1):110-124.
Pub Date: 2023-01-01 | DOI: 10.1177/0272989X221125418
John Austin McCandlish, Turgay Ayer, Jagpreet Chhatwal
Background: Metamodels can address some of the limitations of complex simulation models by formulating a mathematical relationship between input parameters and simulation model outcomes. Our objective was to develop and compare the performance of a machine learning (ML)-based metamodel against a conventional metamodeling approach in replicating the findings of a complex simulation model.
Methods: We constructed 3 ML-based metamodels, using random forest, support vector regression, and artificial neural networks, and a linear regression-based metamodel from a previously validated microsimulation model of the natural history of hepatitis C virus (HCV) consisting of 40 input parameters. Outcomes of interest included societal costs and quality-adjusted life-years (QALYs), the incremental cost-effectiveness ratio (ICER) of HCV treatment versus no treatment, the cost-effectiveness acceptability curve (CEAC), and the expected value of perfect information (EVPI). We evaluated metamodel performance using root mean squared error (RMSE) and Pearson's R2 on the normalized data.
Results: The R2 values for the linear regression metamodel for QALYs without treatment, QALYs with treatment, societal cost without treatment, societal cost with treatment, and ICER were 0.92, 0.98, 0.85, 0.92, and 0.60, respectively. The corresponding R2 values for our ML-based metamodels were 0.96, 0.97, 0.90, 0.95, and 0.49 for support vector regression; 0.99, 0.83, 0.99, 0.99, and 0.82 for artificial neural network; and 0.99, 0.99, 0.99, 0.99, and 0.98 for random forest. Similar trends were observed for RMSE. The CEAC and EVPI curves produced by the random forest metamodel matched the results of the simulation output more closely than the linear regression metamodel.
Conclusions: ML-based metamodels generally outperformed traditional linear regression metamodels at replicating results from complex simulation models, with random forest metamodels performing best.
Highlights: Decision-analytic models are frequently used by policy makers and other stakeholders to assess the impact of new medical technologies and interventions. However, complex models can impose limitations on conducting probabilistic sensitivity analysis and value-of-information analysis, and may not be suitable for developing online decision-support tools. Metamodels, which accurately formulate a mathematical relationship between input parameters and model outcomes, can replicate complex simulation models and address the above limitation. The machine learning-based random forest model can outperform linear regression in replicating the findings of a complex simulation model. Such a metamodel can be used for conducting cost-effectiveness and value-of-information analyses or developing online decision support tools.
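The metamodeling idea, fitting a cheap statistical model to input-output pairs from an expensive simulator, can be illustrated with a minimal least-squares sketch. The "simulator" here is an invented stand-in function, not the HCV microsimulation from the study, and the linear metamodel stands in for the simplest of the four approaches compared.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for an expensive simulation model: maps 3 input
# parameters to one outcome (e.g., QALYs).
def simulator(x):
    return 10 + 2 * x[:, 0] - x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

X = rng.uniform(0, 1, size=(500, 3))   # e.g., 500 probabilistic parameter draws
y = simulator(X)                       # "expensive" simulated outcomes

# Linear-regression metamodel fitted by least squares: y ≈ b0 + X @ b.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted metamodel predicts outcomes for new parameter sets almost
# instantly, which is what makes large PSA or EVPI loops tractable.
X_new = rng.uniform(0, 1, size=(5, 3))
y_hat = np.column_stack([np.ones(len(X_new)), X_new]) @ coef
```

An ML-based metamodel such as a random forest would replace the least-squares fit with a more flexible learner while keeping the same train-on-pairs, predict-cheaply workflow.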
Cost-Effectiveness and Value-of-Information Analysis Using Machine Learning-Based Metamodeling: A Case of Hepatitis C Treatment. Medical Decision Making. 2023;43(1):68-77.
Pub Date: 2023-01-01 | DOI: 10.1177/0272989X221132257
M A Chaudhary, M Edmondson-Jones, G Baio, E Mackay, J R Penrod, D J Sharpe, G Yates, S Rafiq, K Johannesen, M K Siddiqui, J Vanderpuye-Orgle, A Briggs
Objectives: Immuno-oncology (IO) therapies are often associated with delayed responses that are deep and durable, manifesting as long-term survival benefits in patients with metastatic cancer. Complex hazard functions arising from IO treatments may limit the accuracy of extrapolations from standard parametric models (SPMs). We evaluated the ability of flexible parametric models (FPMs) to improve survival extrapolations using data from 2 trials involving patients with non-small-cell lung cancer (NSCLC).
Methods: Our analyses used consecutive database locks (DBLs) at 2-, 3-, and 5-y minimum follow-up from trials evaluating nivolumab versus docetaxel in patients with pretreated metastatic squamous (CheckMate-017) and nonsquamous (CheckMate-057) NSCLC. For each DBL, SPMs as well as 3 FPMs (landmark response models [LRMs], mixture cure models [MCMs], and Bayesian multiparameter evidence synthesis [B-MPES]) were estimated on nivolumab overall survival (OS). The performance of each parametric model was assessed by comparing milestone restricted mean survival times (RMSTs) and survival probabilities with results obtained from externally validated SPMs.
Results: For the 2- and 3-y DBLs of both trials, all models tended to underestimate 5-y OS. Predictions from nonvalidated SPMs fitted to the 2-y DBLs were highly unreliable, whereas extrapolations from FPMs were much more consistent between models fitted to successive DBLs. For CheckMate-017, in which an apparent survival plateau emerges in the 3-y DBL, MCMs fitted to this DBL estimated 5-y OS most accurately (11.6% v. 12.3% observed), and long-term predictions were similar to those from the 5-y validated SPM (20-y RMST: 30.2 v. 30.5 mo). For CheckMate-057, where there is no clear evidence of a survival plateau in the early DBLs, only B-MPES was able to accurately predict 5-y OS (14.1% v. 14.0% observed [3-y DBL]).
Conclusions: We demonstrate that using FPMs to model OS in NSCLC patients from early follow-up data can yield accurate estimates of the RMST observed with longer follow-up and provide long-term extrapolations similar to those from externally validated SPMs based on later data cuts. B-MPES generated reasonable predictions even when fitted to the 2-y DBLs of the studies, whereas MCMs were more reliant on longer-term data to estimate a plateau and therefore performed better from 3 y. Generally, LRM extrapolations were less reliable than those from alternative FPMs and validated SPMs but remained superior to nonvalidated SPMs. Our work demonstrates the potential benefits of using advanced parametric models that incorporate external data sources, such as B-MPES and MCMs, to allow accurate evaluation of clinical and cost-effectiveness from trial data with limited follow-up.
Highlights: Flexible advanced parametric modeling methods can provide improved survival extrapolations for immu
Highlights: Flexible advanced parametric modeling methods can provide improved survival extrapolations from early clinical trial data for immuno-oncology cost-effectiveness in health technology assessment, better predicting extended follow-up. Advantages include the use of additional observable trial data, systematic integration of external data, and more detailed modeling of the underlying processes. Bayesian multiparameter evidence synthesis performed particularly well where external data were well matched. Mixture cure models also performed well but, depending on the context, may require relatively long follow-up to identify an emerging plateau. Landmark response models offered marginal benefit in this case and may require more data and/or longer follow-up within each response group to support improved extrapolations in each subgroup.
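The survival-plateau behavior that mixture cure models capture can be written down directly: overall survival mixes a cured fraction pi with an uncured survival function, S(t) = pi + (1 - pi)·S_uncured(t). A minimal numerical sketch, with the cure fraction, hazard, and exponential uncured survival all invented for illustration:

```python
import numpy as np

# Mixture cure model sketch: a fraction pi of patients is "cured" (long-term
# survivors, producing the plateau); the rest follow an uncured survival
# function, here exponential. Parameter values are invented.
pi = 0.15          # cure fraction (height of the survival plateau)
rate = 0.04        # monthly hazard among uncured patients

def mixture_cure_survival(t_months):
    """S(t) = pi + (1 - pi) * S_uncured(t)."""
    return pi + (1 - pi) * np.exp(-rate * np.asarray(t_months))

t = np.arange(0, 241)              # 20 years, in months
S = mixture_cure_survival(t)       # starts at 1 and decays toward pi
```

Because S(t) flattens at pi as t grows, the fitted cure fraction controls the long-term extrapolation, which is why these models need enough follow-up for the plateau to be identifiable.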
Use of Advanced Flexible Modeling Approaches for Survival Extrapolation from Early Follow-up Data in two Nivolumab Trials in Advanced NSCLC with Extended Follow-up. Medical Decision Making. 2023;43(1):91-109.
Pub Date: 2023-01-01 | DOI: 10.1177/0272989X221126678
Ioannis Bellos
Background: Network meta-analysis exploits randomized data to compare multiple interventions and generate rankings. Selecting an optimal treatment may be complicated when multiple conflicting outcomes are evaluated in parallel.
Design: The present study suggested the incorporation of multicriteria decision-making methods in network meta-analyses to select the best intervention when multiple outcomes are of interest by creating partial and absolute rankings with the TOPSIS, VIKOR, and PROMETHEE algorithms. The TOPSIS and VIKOR techniques represent distance-based methods for compromise intervention selection, whereas the PROMETHEE analysis method allows the definition of preference and indifference thresholds. In addition, the PROMETHEE technique allows a variety of modeling options by selecting alternative preference functions. Different weights may be applied to outcomes objectively with the entropy method as well as subjectively with the analytic hierarchy process, enabling the individualization of treatment choice depending on the clinical scenario.
Results: Visualization of decision analysis may be performed with multicriteria score-adjusted scatterplots, while league tables may be constructed to depict the PROMETHEE I partial ordering of interventions. A simulation study was performed assuming equal weights of outcomes, and the TOPSIS, VIKOR, and PROMETHEE II methods were compared using a similarity coefficient, indicating a high degree of agreement among methods, especially with higher numbers of interventions.
Conclusions: Multicriteria decision analysis provides a flexible and computationally direct way of selecting compromise interventions and visualizing treatment selection in network meta-analyses. Further research should provide empirical data about the implementation of multicriteria decision analysis in real-world network meta-analyses aiming to define the most suitable method depending on the clinical question.
Highlights: Multicriteria decision-making methods can be implemented in network meta-analysis to indicate compromise interventions. The TOPSIS, VIKOR, and PROMETHEE methods can be used for optimal treatment selection when conflicting outcomes are evaluated. The weights of outcomes can be defined objectively or subjectively, reflecting the priorities of the decision maker.
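The TOPSIS algorithm referenced above is short enough to sketch in full. The decision matrix, outcome directions, and equal weights below are invented for illustration; in a network meta-analysis the matrix rows would hold each treatment's estimated effects on the competing outcomes.

```python
import numpy as np

# Minimal TOPSIS sketch for ranking 4 hypothetical treatments on 3 outcomes.
M = np.array([[0.70, 0.20, 3.0],        # rows = treatments, cols = outcomes
              [0.80, 0.35, 2.0],
              [0.60, 0.10, 4.0],
              [0.75, 0.25, 2.5]])
benefit = np.array([True, False, True]) # True where larger values are better
w = np.full(3, 1 / 3)                   # equal outcome weights

R = M / np.linalg.norm(M, axis=0)       # vector normalization per outcome
V = R * w                               # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal
score = d_neg / (d_pos + d_neg)             # closeness coefficient in [0, 1]
ranking = np.argsort(-score)                # treatment indices, best first
```

Replacing the equal weights with entropy-derived or analytic-hierarchy-process weights, as the abstract discusses, only changes the vector `w`.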
{"title":"Multicriteria Decision-Making Methods for Optimal Treatment Selection in Network Meta-Analysis.","authors":"Ioannis Bellos","doi":"10.1177/0272989X221126678","DOIUrl":"https://doi.org/10.1177/0272989X221126678","url":null,"abstract":"<p><strong>Background: </strong>Network meta-analysis exploits randomized data to compare multiple interventions and generate rankings. Selecting an optimal treatment may be complicated when multiple conflicting outcomes are evaluated in parallel.</p><p><strong>Design: </strong>The present study suggested the incorporation of multicriteria decision-making methods in network meta-analyses to select the best intervention when multiple outcomes are of interest by creating partial and absolute rankings with the TOPSIS, VIKOR, and PROMETHEE algorithms. The TOPSIS and VIKOR techniques represent distance-based methods for compromise intervention selection, whereas the PROMETHEE analysis method allows the definition of preference and indifference thresholds. In addition, the PROMETHEE technique allows a variety of modeling options by selecting alternative preference functions. Different weights may be applied to outcomes objectively with the entropy method as well as subjectively with the analytic hierarchy process, enabling the individualization of treatment choice depending on the clinical scenario.</p><p><strong>Results: </strong>Visualization of decision analysis may be performed with multicriteria score-adjusted scatterplots, while league tables may be constructed to depict the PROMETHEE I partial ordering of interventions. 
A simulated study was performed assuming equal weights of outcomes, and the TOPSIS, VIKOR, and PROMETHEE II methods were compared using a similarity coefficient, indicating a high degree of agreement among methods, especially with higher numbers of interventions.</p><p><strong>Conclusions: </strong>Multicriteria decision analysis provides a flexible and computationally direct way of selecting compromise interventions and visualizing treatment selection in network meta-analyses. Further research should provide empirical data about the implementation of multicriteria decision analysis in real-world network meta-analyses aiming to define the most suitable method depending on the clinical question.</p><p><strong>Highlights: </strong>Multicriteria decision-making methods can be implemented in network meta-analysis to indicate compromise interventions.The TOPSIS, VIKOR, and PROMETHEE methods can be used for optimal treatment selection when conflicting outcomes are evaluated.The weights of outcomes can be defined objectively or subjectively, reflecting the priorities of the decision maker.</p>","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":"43 1","pages":"78-90"},"PeriodicalIF":3.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10354038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
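The distance-based ranking step that TOPSIS performs — normalize the decision matrix, weight it, and score each intervention by its closeness to ideal and anti-ideal points — can be sketched compactly. This is a minimal illustration, not the study's code: the decision matrix, outcomes, and treatments below are hypothetical, and the equal weights simply mirror the simulation setup described in the abstract.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS.

    X       : (n_alternatives, n_criteria) decision matrix
    weights : criterion weights summing to 1
    benefit : True where larger is better, False where smaller is better
    """
    X = np.asarray(X, dtype=float)
    # Vector normalization of each column, then weighting
    V = weights * X / np.linalg.norm(X, axis=0)
    # Ideal and anti-ideal points, respecting benefit vs. cost criteria
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    # Closeness coefficient: larger = closer to the ideal point
    return d_minus / (d_plus + d_minus)

# Hypothetical example: 3 treatments, 2 conflicting outcomes
# (response rate: higher is better; adverse-event rate: lower is better)
X = [[0.80, 0.30],
     [0.60, 0.10],
     [0.90, 0.50]]
scores = topsis(X, weights=np.array([0.5, 0.5]), benefit=np.array([True, False]))
ranking = np.argsort(-scores)  # best compromise intervention first
```

In a real network meta-analysis application the rows would hold ranking metrics (e.g., SUCRA values) per outcome, and the weights could come from the entropy method or the analytic hierarchy process, as the abstract notes.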
Pub Date : 2023-01-01Epub Date: 2022-08-23DOI: 10.1177/0272989X221117162
David M Phillippo, Sofia Dias, A E Ades, Mark Belger, Alan Brnabic, Daniel Saure, Yves Schymura, Nicky J Welton
<p><strong>Background: </strong>Network meta-analysis (NMA) and indirect comparisons combine aggregate data (AgD) from multiple studies on treatments of interest but may give biased estimates if study populations differ. Population adjustment methods such as multilevel network meta-regression (ML-NMR) aim to reduce bias by adjusting for differences in study populations using individual patient data (IPD) from 1 or more studies under the conditional constancy assumption. A shared effect modifier assumption may also be necessary for identifiability. This article aims to demonstrate how the assumptions made by ML-NMR can be assessed in practice to obtain reliable treatment effect estimates in a target population.</p><p><strong>Methods: </strong>We apply ML-NMR to a network of evidence on treatments for plaque psoriasis with a mix of IPD and AgD trials reporting ordered categorical outcomes. Relative treatment effects are estimated for each trial population and for 3 external target populations represented by a registry and 2 cohort studies. We examine residual heterogeneity and inconsistency and relax the shared effect modifier assumption for each covariate in turn.</p><p><strong>Results: </strong>Estimated population-average treatment effects were similar across study populations, as differences in the distributions of effect modifiers were small. Better fit was achieved with ML-NMR than with NMA, and uncertainty was reduced by explaining within- and between-study variation. We found little evidence that the conditional constancy or shared effect modifier assumptions were invalid.</p><p><strong>Conclusions: </strong>ML-NMR extends the NMA framework and addresses issues with previous population adjustment approaches. 
It coherently synthesizes evidence from IPD and AgD studies in networks of any size while avoiding aggregation bias and noncollapsibility bias, allows for key assumptions to be assessed or relaxed, and can produce estimates relevant to a target population for decision-making.</p><p><strong>Highlights: </strong>Multilevel network meta-regression (ML-NMR) extends the network meta-analysis framework to synthesize evidence from networks of studies providing individual patient data or aggregate data while adjusting for differences in effect modifiers between studies (population adjustment). We apply ML-NMR to a network of treatments for plaque psoriasis with ordered categorical outcomes.We demonstrate for the first time how ML-NMR allows key assumptions to be assessed. We check for violations of conditional constancy of relative effects (such as unobserved effect modifiers) through residual heterogeneity and inconsistency and the shared effect modifier assumption by relaxing this for each covariate in turn.Crucially for decision making, population-adjusted treatment effects can be produced in any relevant target population. We produce population-average estimates for 3 external target populations, represented by the PsoBest registry and the
{"title":"Validating the Assumptions of Population Adjustment: Application of Multilevel Network Meta-regression to a Network of Treatments for Plaque Psoriasis.","authors":"David M Phillippo, Sofia Dias, A E Ades, Mark Belger, Alan Brnabic, Daniel Saure, Yves Schymura, Nicky J Welton","doi":"10.1177/0272989X221117162","DOIUrl":"10.1177/0272989X221117162","url":null,"abstract":"<p><strong>Background: </strong>Network meta-analysis (NMA) and indirect comparisons combine aggregate data (AgD) from multiple studies on treatments of interest but may give biased estimates if study populations differ. Population adjustment methods such as multilevel network meta-regression (ML-NMR) aim to reduce bias by adjusting for differences in study populations using individual patient data (IPD) from 1 or more studies under the conditional constancy assumption. A shared effect modifier assumption may also be necessary for identifiability. This article aims to demonstrate how the assumptions made by ML-NMR can be assessed in practice to obtain reliable treatment effect estimates in a target population.</p><p><strong>Methods: </strong>We apply ML-NMR to a network of evidence on treatments for plaque psoriasis with a mix of IPD and AgD trials reporting ordered categorical outcomes. Relative treatment effects are estimated for each trial population and for 3 external target populations represented by a registry and 2 cohort studies. We examine residual heterogeneity and inconsistency and relax the shared effect modifier assumption for each covariate in turn.</p><p><strong>Results: </strong>Estimated population-average treatment effects were similar across study populations, as differences in the distributions of effect modifiers were small. Better fit was achieved with ML-NMR than with NMA, and uncertainty was reduced by explaining within- and between-study variation. 
We found little evidence that the conditional constancy or shared effect modifier assumptions were invalid.</p><p><strong>Conclusions: </strong>ML-NMR extends the NMA framework and addresses issues with previous population adjustment approaches. It coherently synthesizes evidence from IPD and AgD studies in networks of any size while avoiding aggregation bias and noncollapsibility bias, allows for key assumptions to be assessed or relaxed, and can produce estimates relevant to a target population for decision-making.</p><p><strong>Highlights: </strong>Multilevel network meta-regression (ML-NMR) extends the network meta-analysis framework to synthesize evidence from networks of studies providing individual patient data or aggregate data while adjusting for differences in effect modifiers between studies (population adjustment). We apply ML-NMR to a network of treatments for plaque psoriasis with ordered categorical outcomes.We demonstrate for the first time how ML-NMR allows key assumptions to be assessed. We check for violations of conditional constancy of relative effects (such as unobserved effect modifiers) through residual heterogeneity and inconsistency and the shared effect modifier assumption by relaxing this for each covariate in turn.Crucially for decision making, population-adjusted treatment effects can be produced in any relevant target population. 
We produce population-average estimates for 3 external target populations, represented by the PsoBest registry and the ","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":"43 1","pages":"53-67"},"PeriodicalIF":3.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9742635/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10697827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
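ML-NMR itself involves integrating an individual-level regression over each study's covariate distribution, which is beyond a short sketch. The building block it generalizes, however — the anchored (Bucher) indirect comparison that standard NMA extends to whole networks — is simple enough to show. The numbers below are hypothetical and the sketch deliberately omits population adjustment; it only illustrates the unadjusted contrast that ML-NMR improves upon when effect modifiers differ between trials.

```python
import math

def bucher_indirect(d_ab, se_ab, d_ac, se_ac):
    """Anchored (Bucher) indirect comparison of C vs. B via common comparator A.

    d_ab, d_ac : relative effects (e.g., log odds ratios) of B vs. A and C vs. A
    Returns the indirect estimate of C vs. B with its standard error.
    """
    d_bc = d_ac - d_ab
    # Trials are independent, so the variances of the two contrasts add
    se_bc = math.sqrt(se_ab**2 + se_ac**2)
    return d_bc, se_bc

# Hypothetical log-odds-ratio estimates from two AgD trials sharing comparator A
d_bc, se_bc = bucher_indirect(d_ab=-0.5, se_ab=0.2, d_ac=-0.8, se_ac=0.25)
lo, hi = d_bc - 1.96 * se_bc, d_bc + 1.96 * se_bc  # 95% CI on the log scale
```

The key limitation motivating ML-NMR is visible here: the contrast is only valid in the populations at hand, and it is biased if the AB and AC trial populations differ on effect modifiers.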
Pub Date : 2023-01-01DOI: 10.1177/0272989X221103163
Fernando Alarid-Escudero, Eline Krijkamp, Eva A Enns, Alan Yang, M G Myriam Hunink, Petros Pechlivanoglou, Hawre Jalal
Decision models can combine information from different sources to simulate the long-term consequences of alternative strategies in the presence of uncertainty. A cohort state-transition model (cSTM) is a decision model commonly used in medical decision making to simulate the transitions of a hypothetical cohort among various health states over time. This tutorial focuses on time-independent cSTM, in which transition probabilities among health states remain constant over time. We implement time-independent cSTM in R, an open-source mathematical and statistical programming language. We illustrate time-independent cSTMs using a previously published decision model, calculate costs and effectiveness outcomes, and conduct a cost-effectiveness analysis of multiple strategies, including a probabilistic sensitivity analysis. We provide open-source code in R to facilitate wider adoption. In a second, more advanced tutorial, we illustrate time-dependent cSTMs.
{"title":"An Introductory Tutorial on Cohort State-Transition Models in R Using a Cost-Effectiveness Analysis Example.","authors":"Fernando Alarid-Escudero, Eline Krijkamp, Eva A Enns, Alan Yang, M G Myriam Hunink, Petros Pechlivanoglou, Hawre Jalal","doi":"10.1177/0272989X221103163","DOIUrl":"https://doi.org/10.1177/0272989X221103163","url":null,"abstract":"<p><p>Decision models can combine information from different sources to simulate the long-term consequences of alternative strategies in the presence of uncertainty. A cohort state-transition model (cSTM) is a decision model commonly used in medical decision making to simulate the transitions of a hypothetical cohort among various health states over time. This tutorial focuses on time-independent cSTM, in which transition probabilities among health states remain constant over time. We implement time-independent cSTM in R, an open-source mathematical and statistical programming language. We illustrate time-independent cSTMs using a previously published decision model, calculate costs and effectiveness outcomes, and conduct a cost-effectiveness analysis of multiple strategies, including a probabilistic sensitivity analysis. We provide open-source code in R to facilitate wider adoption. In a second, more advanced tutorial, we illustrate time-dependent cSTMs.</p>","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":"43 1","pages":"3-20"},"PeriodicalIF":3.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9742144/pdf/nihms-1806797.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9703316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
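The tutorial's implementation is in R, with full code in its public repository. As a language-agnostic illustration of the core mechanics it teaches — a cohort trace driven by a time-constant transition-probability matrix, with discounted cost and QALY rewards — here is a minimal Python sketch. All state names, probabilities, costs, and utilities below are invented for illustration and are not the tutorial's published example.

```python
import numpy as np

# Illustrative 3-state cohort state-transition model: Healthy, Sick, Dead.
# Transition probabilities are constant over time (time-independent cSTM).
P = np.array([[0.85, 0.10, 0.05],   # from Healthy
              [0.00, 0.80, 0.20],   # from Sick
              [0.00, 0.00, 1.00]])  # Dead is absorbing
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution

n_cycles = 30
m = np.zeros((n_cycles + 1, 3))
m[0] = [1.0, 0.0, 0.0]              # whole cohort starts Healthy
for t in range(n_cycles):
    m[t + 1] = m[t] @ P             # cohort trace: m_{t+1} = m_t P

# State rewards per cycle (hypothetical values) and 3% discounting
cost = np.array([500.0, 3000.0, 0.0])
qaly = np.array([1.0, 0.6, 0.0])
disc = 1.03 ** -np.arange(n_cycles + 1)
total_cost = float(m @ cost @ disc)
total_qaly = float(m @ qaly @ disc)
```

Repeating this for each strategy and comparing incremental costs and QALYs gives the cost-effectiveness analysis; re-running it over draws of the input parameters gives the probabilistic sensitivity analysis the tutorial walks through.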
Pub Date : 2023-01-01Epub Date: 2022-07-29DOI: 10.1177/0272989X221115364
Christopher Weyant, Serin Lee, Jason R Andrews, Fernando Alarid-Escudero, Jeremy D Goldhaber-Fiebert
<p><strong>Background: </strong>Historically, correctional facilities have had large outbreaks of respiratory infectious diseases like COVID-19. Hence, importation and exportation of such diseases from correctional facilities raise substantial concern.</p><p><strong>Methods: </strong>We developed a stochastic simulation model of transmission of respiratory infectious diseases within and between correctional facilities and the community. We investigated the infection dynamics, key governing factors, and relative importance of different infection routes (e.g., incarcerations and releases versus correctional staff). We also developed machine-learning meta-models of the simulation model, which allowed us to examine how our findings depended on different disease, correctional facility, and community characteristics.</p><p><strong>Results: </strong>We find a magnification-reflection dynamic: a small outbreak in the community can cause a larger outbreak in the correctional facility, which can then cause a second, larger outbreak in the community. This dynamic is strongest when community size is relatively small as compared with the size of the correctional population, the initial community R-effective is near 1, and initial prevalence of immunity in the correctional population is low. The timing of the correctional magnification and community reflection peaks in infection prevalence is primarily governed by the initial R-effective for each setting. Because the release rates from prisons are low, our model suggests correctional staff may be a more important infection entry route into prisons than incarcerations and releases; in jails, where incarceration and release rates are much higher, our model suggests the opposite.</p><p><strong>Conclusions: </strong>We find that across many combinations of respiratory pathogens, correctional settings, and communities, there can be substantial magnification-reflection dynamics, which are governed by several key factors. 
Our goal was to derive theoretical insights relevant to many contexts; our findings should be interpreted accordingly.</p><p><strong>Highlights: </strong>We find a magnification-reflection dynamic: a small outbreak in a community can cause a larger outbreak in a correctional facility, which can then cause a second, larger outbreak in the community.For public health decision makers considering contexts most susceptible to this dynamic, we find that the dynamic is strongest when the community size is relatively small, initial community R-effective is near 1, and the initial prevalence of immunity in the correctional population is low; the timing of the correctional magnification and community reflection peaks in infection prevalence is primarily governed by the initial R-effective for each setting.We find that correctional staff may be a more important infection entry route into prisons than incarcerations and releases; however, for jails, the relative importance of the entry routes may be reversed.For modelers, we combined simulation modeling, machine-learning metamodeling, and interpretable machine learning to examine how our findings depended on different disease, correctional facility, and community characteristics; we found these findings to be generally robust.</p>
{"title":"Dynamics of Respiratory Infectious Diseases in Incarcerated and Free-Living Populations: A Simulation Modeling Study.","authors":"Christopher Weyant, Serin Lee, Jason R Andrews, Fernando Alarid-Escudero, Jeremy D Goldhaber-Fiebert","doi":"10.1177/0272989X221115364","DOIUrl":"10.1177/0272989X221115364","url":null,"abstract":"<p><strong>Background: </strong>Historically, correctional facilities have had large outbreaks of respiratory infectious diseases like COVID-19. Hence, importation and exportation of such diseases from correctional facilities raises substantial concern.</p><p><strong>Methods: </strong>We developed a stochastic simulation model of transmission of respiratory infectious diseases within and between correctional facilities and the community. We investigated the infection dynamics, key governing factors, and relative importance of different infection routes (e.g., incarcerations and releases versus correctional staff). We also developed machine-learning meta-models of the simulation model, which allowed us to examine how our findings depended on different disease, correctional facility, and community characteristics.</p><p><strong>Results: </strong>We find a magnification-reflection dynamic: a small outbreak in the community can cause a larger outbreak in the correction facility, which can then cause a second, larger outbreak in the community. This dynamic is strongest when community size is relatively small as compared with the size of the correctional population, the initial community R-effective is near 1, and initial prevalence of immunity in the correctional population is low. The timing of the correctional magnification and community reflection peaks in infection prevalence are primarily governed by the initial R-effective for each setting. 
Because the release rates from prisons are low, our model suggests correctional staff may be a more important infection entry route into prisons than incarcerations and releases; in jails, where incarceration and release rates are much higher, our model suggests the opposite.</p><p><strong>Conclusions: </strong>We find that across many combinations of respiratory pathogens, correctional settings, and communities, there can be substantial magnification-reflection dynamics, which are governed by several key factors. Our goal was to derive theoretical insights relevant to many contexts; our findings should be interpreted accordingly.</p><p><strong>Highlights: </strong>We find a magnification-reflection dynamic: a small outbreak in a community can cause a larger outbreak in a correctional facility, which can then cause a second, larger outbreak in the community.For public health decision makers considering contexts most susceptible to this dynamic, we find that the dynamic is strongest when the community size is relatively small, initial community R-effective is near 1, and the initial prevalence of immunity in the correctional population is low; the timing of the correctional magnification and community reflection peaks in infection prevalence are primarily governed by the initial R-effective for each setting.We find that correctional staff may be a more important infection entry route into prisons than incarcerations and releases; however, for jails, the relative importance of the entry routes may be reversed.F","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":"43 1","pages":"42-52"},"PeriodicalIF":3.6,"publicationDate":"2023-01-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9742162/pdf/nihms-1822488.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10459855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
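The magnification-reflection dynamic can be illustrated with a much simpler model than the authors' full framework: two chain-binomial SIR populations (community and facility) coupled by a small fraction of cross-setting contacts (e.g., staff). Everything here — population sizes, transmission and recovery rates, mixing fractions — is invented for illustration and is not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(S, I, R, beta, gamma, n, I_frac_other, mix):
    """One chain-binomial SIR step; `mix` = share of contacts with the other setting."""
    lam = beta * ((1 - mix) * I / n + mix * I_frac_other)  # force of infection
    new_inf = rng.binomial(S, 1 - np.exp(-lam))
    new_rec = rng.binomial(I, 1 - np.exp(-gamma))
    return S - new_inf, I + new_inf - new_rec, R + new_rec

# Hypothetical setup: a community with a seeded outbreak, a fully susceptible
# facility, and denser contact within the facility (higher beta)
Nc, Nf = 10_000, 1_000
Sc, Ic, Rc = Nc - 10, 10, 0
Sf, If, Rf = Nf, 0, 0
peaks = {"community": 0, "facility": 0}
for _ in range(300):
    Sc, Ic, Rc = step(Sc, Ic, Rc, beta=0.3, gamma=0.2, n=Nc,
                      I_frac_other=If / Nf, mix=0.02)
    Sf, If, Rf = step(Sf, If, Rf, beta=0.5, gamma=0.2, n=Nf,
                      I_frac_other=Ic / Nc, mix=0.05)
    peaks["community"] = max(peaks["community"], Ic)
    peaks["facility"] = max(peaks["facility"], If)
```

With denser mixing inside the facility, infection imported from a small community outbreak can peak at a higher prevalence in the facility and then be re-exported, which is the qualitative mechanism the abstract describes.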
Pub Date : 2023-01-01DOI: 10.1177/0272989X221121747
Fernando Alarid-Escudero, Eline Krijkamp, Eva A Enns, Alan Yang, M G Myriam Hunink, Petros Pechlivanoglou, Hawre Jalal
In an introductory tutorial, we illustrated building cohort state-transition models (cSTMs) in R, where the state transition probabilities were constant over time. However, in practice, many cSTMs require transitions, rewards, or both to vary over time (time dependent). This tutorial illustrates adding 2 types of time dependence using a previously published cost-effectiveness analysis of multiple strategies as an example. The first is simulation-time dependence, which allows for the transition probabilities to vary as a function of time as measured since the start of the simulation (e.g., varying probability of death as the cohort ages). The second is state-residence time dependence, allowing for history by tracking the time spent in any particular health state using tunnel states. We use these time-dependent cSTMs to conduct cost-effectiveness and probabilistic sensitivity analyses. We also obtain various epidemiological outcomes of interest from the outputs generated from the cSTM, such as survival probability and disease prevalence, often used for model calibration and validation. We present the mathematical notation first, followed by the R code to execute the calculations. The full R code is provided in a public code repository for broader implementation.
{"title":"A Tutorial on Time-Dependent Cohort State-Transition Models in R Using a Cost-Effectiveness Analysis Example.","authors":"Fernando Alarid-Escudero, Eline Krijkamp, Eva A Enns, Alan Yang, M G Myriam Hunink, Petros Pechlivanoglou, Hawre Jalal","doi":"10.1177/0272989X221121747","DOIUrl":"https://doi.org/10.1177/0272989X221121747","url":null,"abstract":"<p><p>In an introductory tutorial, we illustrated building cohort state-transition models (cSTMs) in R, where the state transition probabilities were constant over time. However, in practice, many cSTMs require transitions, rewards, or both to vary over time (time dependent). This tutorial illustrates adding 2 types of time dependence using a previously published cost-effectiveness analysis of multiple strategies as an example. The first is simulation-time dependence, which allows for the transition probabilities to vary as a function of time as measured since the start of the simulation (e.g., varying probability of death as the cohort ages). The second is state-residence time dependence, allowing for history by tracking the time spent in any particular health state using tunnel states. We use these time-dependent cSTMs to conduct cost-effectiveness and probabilistic sensitivity analyses. We also obtain various epidemiological outcomes of interest from the outputs generated from the cSTM, such as survival probability and disease prevalence, often used for model calibration and validation. We present the mathematical notation first, followed by the R code to execute the calculations. 
The full R code is provided in a public code repository for broader implementation.</p>","PeriodicalId":49839,"journal":{"name":"Medical Decision Making","volume":"43 1","pages":"21-41"},"PeriodicalIF":3.6,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9844995/pdf/nihms-1829740.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10536012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
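The first kind of time dependence the tutorial covers — transition probabilities that vary with time since the start of the simulation — amounts to rebuilding the transition matrix each cycle. The sketch below is in Python rather than the tutorial's R, and its rising mortality schedule is invented for illustration; the tutorial derives age-specific probabilities from life tables in its published example.

```python
import numpy as np

# Simulation-time dependence: the probability of dying rises as the cohort ages.
ages = np.arange(60, 91)                 # cohort ages 60..90, one cycle per year
p_die = 0.005 * 1.09 ** (ages - 60)      # hypothetical increasing mortality

n_cycles = len(ages) - 1
m = np.zeros((n_cycles + 1, 3))          # states: Healthy, Sick, Dead
m[0] = [1.0, 0.0, 0.0]
for t in range(n_cycles):
    pd = p_die[t]
    # Rebuild the transition matrix each cycle with the age-specific mortality;
    # the Sick state carries a (hypothetical) doubled death probability
    P_t = np.array([[1 - 0.10 - pd, 0.10,       pd],
                    [0.0,           1 - 2 * pd, 2 * pd],
                    [0.0,           0.0,        1.0]])
    m[t + 1] = m[t] @ P_t
survival = 1.0 - m[:, 2]                 # probability of being alive each cycle
```

The survival curve recovered here is one of the epidemiological outputs the tutorial uses for calibration and validation. State-residence time dependence, the second kind, would instead expand Sick into a sequence of tunnel states so that transition probabilities can depend on time spent in the state.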