JEL classification: I30, J17.
Background: Economic evaluations in health typically assume a nonwelfarist framework, arguably better served by preferences elicited from a social perspective than a personal one. However, most health state valuation studies elicit personal preferences, leading to a methodological inconsistency. No studies have directly compared social and personal preferences for outcomes using otherwise identical scenarios, leaving their empirical relationship unclear.
Aim: This study examines whether eliciting preferences from a social rather than a personal perspective influences valuations of health and well-being outcomes.
Methods: Using discrete choice experiments, social and personal preferences for health and well-being attributes were elicited from the UK general public recruited from an internet panel (n = 1,020 personal, n = 3,009 social surveys). Mixed logit models were estimated, and willingness-to-pay (WTP) values for each attribute were calculated to compare differences between the 2 perspectives.
Results: While no significant differences were observed in the effects of physical and mental health, loneliness, and neighborhood safety across the 2 perspectives, significant differences emerged in WTP values for employment and housing quality. For instance, other things being equal, personal preferences rank being retired as preferable to being an informal caregiver, whereas social preferences rank the two in the reverse order.
Conclusion: Our findings demonstrate that perspective matters, particularly for valuing outcomes such as employment and housing. The exclusive use of personal preferences to value states such as employment and housing quality may lead to suboptimal resource allocation, given that such valuations reflect individual rather than societal benefit. This underlines the importance of considering perspective, especially when allocating resources to public health interventions.
Highlights:
- Personal preferences were not aligned with social preferences for employment and housing quality outcomes.
- Respondents valued health outcomes the same under both the social and personal perspectives.
- Using personal preferences in public health resource allocation decisions may not reflect societal priorities.
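As a rough illustration of the WTP calculation described above: in a choice model with a cost attribute, the marginal WTP for an attribute is the negative ratio of its coefficient to the cost coefficient. The sketch below uses entirely hypothetical coefficients (the study's estimates are not reproduced here) to show how perspective-specific coefficients can translate into reversed WTP rankings.

```python
# Minimal sketch of deriving marginal willingness-to-pay (WTP) from
# discrete-choice model coefficients. All numbers are hypothetical
# illustrations, not estimates from the study.

# Hypothetical mean coefficients from two mixed logit models (one fitted
# to personal-perspective choices, one to social-perspective choices).
coefs = {
    "personal": {"cost": -0.012, "employment_retired": 0.45, "employment_carer": 0.30},
    "social":   {"cost": -0.011, "employment_retired": 0.28, "employment_carer": 0.41},
}

def marginal_wtp(beta_attr: float, beta_cost: float) -> float:
    """WTP for a one-unit attribute change: the negative ratio of the
    attribute coefficient to the cost coefficient."""
    return -beta_attr / beta_cost

for perspective, b in coefs.items():
    for attr in ("employment_retired", "employment_carer"):
        print(perspective, attr, round(marginal_wtp(b[attr], b["cost"]), 1))
```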
Background: Correctional facilities can act as amplifiers of infectious disease outbreaks. Small community outbreaks can cause larger prison outbreaks, which can in turn exacerbate the community outbreaks. However, strategies for epidemic control in communities and correctional facilities are generally not closely coordinated. We sought to evaluate different strategies for coordinated control.
Methods: We developed a stochastic simulation model of an epidemic spreading across a network of communities and correctional facilities. We parameterized it for the initial phases of the COVID-19 epidemic for 1) California communities and prisons, based on community data from covidestim, prison data from the California Department of Corrections and Rehabilitation, and mobility data from SafeGraph, and 2) a small, illustrative network of communities and prisons. For each community or prison, control measures were defined by the intensity of 2 activities: 1) screening to detect and isolate cases and 2) nonpharmaceutical interventions (e.g., masking and social distancing) to reduce transmission. We compared the performance of different control strategies, including heuristic and reinforcement learning (RL) strategies, using a reward function that accounted for both the benefit of averted infections and the nonlinear cost of the control measures. Finally, we performed analyses to interpret the optimal strategy and examine its robustness.
Results: The RL control strategy robustly outperformed other strategies, including heuristic approaches such as those that were largely used during the COVID-19 epidemic. The RL strategy prioritized different characteristics of communities versus prisons when allocating control resources and exhibited geo-temporal patterns consistent with mitigating prison amplification dynamics.
Conclusion: RL is a promising method for finding efficient policies to control epidemic spread on networks of communities and correctional facilities, providing insights that can help guide policy.
Highlights:
- For modelers, we developed a stochastic simulation model of an epidemic spreading across a network of communities and correctional facilities, and we parameterized it for the initial phases of the COVID-19 epidemic for California communities and prisons in addition to an illustrative network.
- We compared different control strategies using a reward function that accounted for both the benefit of averted infections and the cost of the control measures; we found that reinforcement learning robustly outperformed the other strategies, including heuristic approaches such as those that were largely used during the COVID-19 epidemic.
- For policy makers, our work suggests investing in the further development of such methods and using them for the control of future epidemics.
- We offer qualitative insights into the factors that might inform resource allocation to communities versus prisons during future epidemics.
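To make the reward structure concrete, the sketch below shows one plausible form of a reward function of the kind described: a linear benefit for averted infections minus a nonlinear (here quadratic) cost of control intensity, summed over network nodes. The functional form and weights are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def reward(infections_baseline, infections_controlled,
           screening_intensity, npi_intensity,
           benefit_per_averted=1.0, cost_weight=0.5):
    """Hypothetical reward: benefit of averted infections minus a
    quadratic cost of screening and NPI intensity across all nodes."""
    averted = np.asarray(infections_baseline) - np.asarray(infections_controlled)
    benefit = benefit_per_averted * averted.sum()
    # Quadratic costs penalize very intense control more than proportionally.
    cost = cost_weight * (np.square(screening_intensity).sum()
                          + np.square(npi_intensity).sum())
    return benefit - cost

# Example: 3 nodes (2 communities, 1 prison), per-step intensities in [0, 1].
print(reward([120, 80, 200], [90, 70, 120],
             screening_intensity=[0.3, 0.1, 0.8],
             npi_intensity=[0.2, 0.1, 0.6]))
```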
Introduction: Subgroup analyses are vital components of health technology assessments, but randomized controlled trials (RCTs) do not commonly report survival distributions for subgroups. This study developed an analytical framework to elicit unreported subgroup-specific survival curves from aggregate RCT data.
Methods: Assuming exponentially distributed subgroup survival durations, we developed an optimization model that approximates the restricted mean survival time (RMST) for the overall population via the weighted average of the RMSTs of 2 subgroups in each arm. Reported hazard ratios from the forest plots between the arms were used to enforce the relationship among subgroups' hazard rates in the model. The performance of the model was tested on a real-life test set of 8 RCTs in advanced-stage gastrointestinal tumors that also reported Kaplan-Meier (KM) curves for overall survival (OS) for 40 subgroups, as well as on 42 synthetic test cases with 168 subgroups as a benchmark. For each subgroup, predicted median survival, OS rates, and RMSTs were compared against their actual counterparts as well as their 95% confidence intervals (CIs).
Results: Predicted median survivals and RMSTs were within the 95% CIs of the reported values in 32 (80%) and 34 (85%) of 40 subgroups in real-life test cases and in 163 (97%) and 146 (87%) of 168 subgroups in synthetic test cases, respectively. Across all cases, on average, the predicted survival curves lay within the 95% CIs of the reported KM curves 71% and 97% of the time in real-life and synthetic test cases, respectively.
Discussion: Our study offers a useful and scalable method for extracting subgroup-specific survival from aggregate RCT data to enable subgroup-specific indirect comparisons as well as cost-utility and meta-analyses.
Highlights:
- Most randomized controlled trials report survival curves for the overall patient population but do not provide subgroup-specific survival curves, which are crucial for cost-effectiveness analyses and meta-analyses focusing on these subgroups.
- This study developed an optimization modeling approach to elicit unreported subgroup-specific survival curves from aggregate trial data.
- The proposed modeling approach accurately predicted the reported subgroup-specific survival curves in 42 simulated test cases with 168 subgroups overall, in which each subgroup-specific survival curve was assumed to follow an exponential distribution.
- The performance of the proposed modeling approach was sensitive to its assumptions when tested on a real-life test set of 8 oncology trials that also reported survival curves for a total of 40 subgroups.
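The core identity behind this approach is simple: for an exponential survival curve with hazard rate lambda, the RMST over horizon tau is (1 - exp(-lambda*tau)) / lambda, and with two subgroups the overall RMST is the prevalence-weighted average of subgroup RMSTs, with a reported hazard ratio tying the two rates together. The sketch below solves this one-equation system with hypothetical inputs; it illustrates the identity only, not the authors' full optimization model.

```python
import numpy as np
from scipy.optimize import brentq

def rmst_exponential(rate: float, tau: float) -> float:
    """RMST of an exponential survival curve: (1 - exp(-rate*tau)) / rate."""
    return (1.0 - np.exp(-rate * tau)) / rate

def solve_subgroup_rates(overall_rmst, w1, hr, tau):
    """Find subgroup hazard rates (rate1 = hr * rate2) whose
    prevalence-weighted RMSTs match the overall RMST."""
    w2 = 1.0 - w1
    gap = lambda rate2: (w1 * rmst_exponential(hr * rate2, tau)
                         + w2 * rmst_exponential(rate2, tau)) - overall_rmst
    rate2 = brentq(gap, 1e-6, 10.0)  # root-find on the mismatch
    return hr * rate2, rate2

# Hypothetical inputs: overall RMST of 18 mo over a 36-mo horizon,
# subgroup 1 is 40% of patients with a hazard ratio of 1.5 vs. subgroup 2.
rate1, rate2 = solve_subgroup_rates(overall_rmst=18.0, w1=0.4, hr=1.5, tau=36.0)
print(f"subgroup hazards: {rate1:.4f}, {rate2:.4f} per month")
```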
Purpose: We conducted a distributional cost-effectiveness analysis (DCEA) using routinely collected data to estimate the population health and health inequality impacts of the National Abdominal Aortic Aneurysm Screening Programme (NAAASP) in England.
Methods: An existing discrete event simulation model of AAA screening was adapted to examine differences between socioeconomic groups defined by Index of Multiple Deprivation, obtained from an analysis of secondary data sources. We examined the distributional cost-effectiveness of being invited versus not invited to screening at age 65 y from a National Health Service perspective. Changes in inequality were valued using a measure of equally distributed equivalent health.
Results: The net health benefits of population screening (317 quality-adjusted life-years [QALYs] gained) were disproportionately accounted for by the effects on those living in more advantaged areas. The NAAASP improved health on average compared with no screening, but the health opportunity cost of the programme exceeded the QALY gains for people living in the most deprived areas, resulting in a negative net health impact for this group (106 QALYs lost) that was driven by differences in the need for screening. Consequently, the NAAASP increased health inequality at the population level. Given current estimates for inequality aversion in England, screening for AAA remains the optimal strategy.
Conclusion: Examination of the distributional cost-effectiveness of the NAAASP in England using routinely collected data revealed a tradeoff between total population health and health inequality. Study findings suggest that the NAAASP provides value for money even though its health gains accrue largely to those who are more advantaged.
Highlights:
- This study examines the population health and health inequality effects of the National Abdominal Aortic Aneurysm Screening Programme (NAAASP) between socioeconomic groups defined by Index of Multiple Deprivation.
- Findings suggest a tradeoff between total population health and health inequality.
- Given current estimates for inequality aversion in England, screening remains the optimal strategy relative to not screening.
- Opportunities remain to reduce inequality effects for those most vulnerable through targeted approaches.
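For readers unfamiliar with equally distributed equivalent (EDE) health: it is the level of health that, if enjoyed equally by everyone, the social welfare function would deem as good as the actual unequal distribution. The sketch below computes an Atkinson-form EDE; the quintile health values and inequality-aversion parameters are hypothetical placeholders, not the study's estimates.

```python
import numpy as np

def atkinson_ede(health, shares, epsilon):
    """EDE = (sum_i s_i * h_i^(1-eps))^(1/(1-eps)) for eps != 1;
    the limiting case eps = 1 is the shares-weighted geometric mean."""
    health = np.asarray(health, dtype=float)
    shares = np.asarray(shares, dtype=float)
    if np.isclose(epsilon, 1.0):
        return np.exp(np.sum(shares * np.log(health)))
    return np.sum(shares * health ** (1.0 - epsilon)) ** (1.0 / (1.0 - epsilon))

# Hypothetical quality-adjusted life expectancy by IMD quintile
# (most to least deprived), with equal population shares.
qale = [62.0, 65.0, 67.0, 69.0, 71.0]
shares = [0.2] * 5
for eps in (0.0, 5.0, 11.0):  # eps = 0 recovers the plain mean
    print(f"eps={eps}: EDE={atkinson_ede(qale, shares, eps):.2f}")
```

The larger the inequality-aversion parameter, the more the EDE is pulled toward the health of the most deprived group, which is how a programme that raises mean health while widening the gap can still be penalized.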
Background: Updated estimates of the productivity losses per HIV infection due to premature HIV mortality are needed to help quantify the economic burden of HIV and inform cost-effectiveness analyses.
Methods: We used the human capital approach to estimate the productivity loss due to HIV mortality per HIV infection in the United States, discounted to the time of HIV infection. We incorporated published data on age-specific annual productivity, life expectancy at HIV diagnosis, life-years lost from premature death among persons with HIV (PWH), the number of years from HIV infection to diagnosis, and the percentage of deaths in PWH attributable to HIV. For the base case, we used 2018 life expectancy data for all PWH in the United States. We also examined scenarios using life expectancy in 2010 and life expectancy for cohorts on antiretroviral therapy (ART). We conducted sensitivity analyses to understand the impact of key input parameters.
Results: We estimated the base-case overall average productivity loss due to HIV mortality per HIV infection at $65,300 in 2022 US dollars. The base-case results showed a 45% decrease in the estimated productivity loss compared with the results obtained when applying life expectancy data from 2010. Productivity loss was 83% lower for cohorts of PWH on ART than in the base-case scenario. Results were sensitive to assumptions about the percentage of deaths attributable to HIV and heterogeneity in age at death.
Conclusion: This study provides valuable insights into the economic impact of HIV mortality, illustrating reductions in productivity losses over time due to advancements in treatment.
Highlights:
- Updated estimates of productivity losses per HIV infection due to premature HIV mortality can help assess the total economic burden of HIV in the United States.
- This study estimates productivity losses per HIV infection overall, by sex, and by age at HIV infection.
- Advancements in treatment have contributed to a significant reduction in productivity losses due to premature HIV mortality in the United States over the past decade.
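The human capital calculation reduces to summing the productivity of each working year lost to premature death, discounted back to the time of infection. The sketch below is a deliberately simplified version with a flat annual productivity and placeholder parameters; the study itself uses age-specific productivity and published life-expectancy inputs.

```python
def productivity_loss_per_infection(age_at_infection, age_at_death,
                                    max_work_age=67, annual_productivity=60000.0,
                                    prob_death_hiv_attributable=0.6,
                                    discount_rate=0.03):
    """Present value (at infection) of productivity lost during the years
    between premature death and the end of working life. All defaults are
    hypothetical placeholders, not the study's published parameters."""
    loss = 0.0
    for age in range(age_at_death, max_work_age):
        years_since_infection = age - age_at_infection
        loss += annual_productivity / (1 + discount_rate) ** years_since_infection
    # Only the share of deaths attributable to HIV counts toward the loss.
    return prob_death_hiv_attributable * loss

# Example: infection at 35, premature death at 55.
print(f"${productivity_loss_per_infection(35, 55):,.0f}")
```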
Objectives: Bayesian multiparameter evidence synthesis (B-MPES) can improve the reliability of long-term survival extrapolations by leveraging registry data. We extended the B-MPES framework to also incorporate historical trial data and examined the impact of alternative external information sources on predictions from early data cuts for a trial in metastatic non-small-cell lung cancer (mNSCLC).
Methods: B-MPES models were fitted to survival data from the phase III CheckMate 9LA study of nivolumab plus ipilimumab plus 2 cycles of chemotherapy (NIVO+IPI+CHEMO, v. 4 cycles of CHEMO) in first-line mNSCLC, with 1 y of minimum follow-up. Trial observations were supplemented by registry data from the Surveillance, Epidemiology, and End Results program; general population data; and, optionally, historical trial data with extended follow-up for first-line NIVO+IPI (v. CHEMO) and/or second-line NIVO monotherapy in advanced NSCLC, via estimated 1-y conditional survival. Predictions from the 3 alternative B-MPES models were compared with those from standard parametric models (SPMs).
Results: B-MPES models better anticipated the emergent survival plateau with NIVO+IPI+CHEMO that was apparent in the 4-y data cut than did SPMs, for which short-term extrapolations in both treatment arms were overly conservative. However, the B-MPES model incorporating NIVO+IPI data slightly overestimated 4-y NIVO+IPI+CHEMO survival owing to a confounding effect on estimated hazards that could not be accounted for a priori until later data cuts of CheckMate 9LA. Extrapolations were relatively robust to the choice of external data sources provided that the prior data had been adjusted to attenuate confounding.
Conclusions: Incorporating historical trial data into survival models can improve the plausibility and interpretability of lifetime extrapolations for studies of novel therapies in metastatic cancers when data are immature, and B-MPES provides an appealing method for this purpose.
Highlights:
- Leveraging historical trial data with extended follow-up to extrapolate survival from early study data cuts in a Bayesian evidence synthesis framework can capture anticipated longer-term effects that are characteristic of a novel therapy or class thereof.
- Using moderately confounded external data sources can improve the reliability of survival extrapolations from B-MPES models provided that the prior information is adjusted and rescaled appropriately, but it is essential to rationalize the implicit assumptions surrounding longer-term treatment effects in the current study.
- B-MPES models are an attractive option for conducting informed lifetime survival extrapolations based on transparent clinical assumptions by leveraging multiple external data sources, but model flexibility and a priori confidence in external data must be specified carefully to avoid overfitting.
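One ingredient of this kind of synthesis can be illustrated with a conjugate toy model: external evidence on 1-y conditional survival is encoded as a Beta prior, updated with (hypothetical) current-trial counts, and the posterior conditional survival is then applied year over year to extrapolate. This is a sketch of the general idea only, not the authors' full B-MPES model, and every number below is invented.

```python
# Prior on P(survive year t+1 | alive at year t), centered on an external
# (e.g., historical-trial) estimate of 0.80 with an effective weight of 20.
prior_a, prior_b = 16.0, 4.0

# Hypothetical current-trial data: of 120 patients alive at 1 y,
# 99 were still alive at 2 y.
alive_start, alive_end = 120, 99
post_a = prior_a + alive_end
post_b = prior_b + (alive_start - alive_end)
cond_surv = post_a / (post_a + post_b)  # posterior mean conditional survival

# Extrapolate from a hypothetical observed 2-y survival of 0.55 by
# repeatedly applying the posterior conditional survival.
s = 0.55
for year in range(3, 8):
    s *= cond_surv
    print(f"S({year}y) ~ {s:.3f}")
```

In the full framework, the conditional-survival priors would additionally be adjusted for confounding between the historical and current populations and blended with registry and general-population hazards, which is precisely where the caveats in the Results apply.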
Background: Real-world data can inform health care decisions by allowing the evaluation of nuanced treatment strategies. Longitudinal observational data enable the assessment of dynamic treatment regimes (DTRs), strategies that adapt treatment over time based on patient history, but they require causal inference methods to address time-varying confounding. Longitudinal targeted minimum loss-based estimation (LTMLE) is a machine learning-based, double-robust approach for improved causal effect estimation.
Methods: We applied LTMLE to longitudinal registry data to evaluate the impact of erythropoiesis-stimulating agents (ESAs) in the clinical management of low to intermediate-1 risk myelodysplastic syndrome (MDS). We defined DTRs based on clinically relevant decision rules (e.g., commencing treatment when the hemoglobin level falls below a threshold) and compared them with static treatment regimes (always or never giving ESAs). Outcomes included mortality and health-related quality of life measured by EQ-5D scores.
Results: The static regime of never administering ESAs resulted in declining counterfactual EQ-5D scores and increasing mortality risk over time. In contrast, both the static regime of continuously administering ESAs and the dynamic regimes improved EQ-5D scores and tended to reduce mortality, although the mortality differences were not statistically significant.
Conclusions: This article provides a case study application of the LTMLE method to evaluate realistic treatment policies under time-varying confounding. The findings support the potential benefits of dynamic treatment strategies for the management of MDS, highlighting the importance of personalized treatment adaptation. The study contributes methodological insights into the application of LTMLE in small-sample, long-follow-up settings relevant to health technology assessment and policy making.
Highlights:
- This study applies the longitudinal targeted minimum loss-based estimation (LTMLE) method to evaluate the causal effects of static and dynamic treatment strategies using longitudinal observational data.
- We demonstrate the use of the LTMLE method to assess the impact of erythropoiesis-stimulating agents (ESAs) on quality of life and mortality in patients with low to intermediate-1 risk myelodysplastic syndromes.
- The findings suggest that patients treated under dynamic ESA treatment regimes show improved quality of life, measured by EQ-5D scores, and survival compared with those treated under the static regime of never administering ESAs.
- This study contributes to the methodological literature by showcasing the application of the LTMLE method in a small-sample, long-follow-up setting with time-varying confounding, informing health technology assessment and policy decisions.
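To make the contrast between static and dynamic regimes concrete: a DTR is just a function from patient history to a treatment decision at each visit. The sketch below encodes a hemoglobin-threshold rule of the kind mentioned alongside the two static regimes; the threshold and data structure are illustrative assumptions, not the study's protocol, and the actual counterfactual evaluation would be done by LTMLE over these rules.

```python
def dynamic_rule(history, hb_threshold=10.0):
    """Treat with ESAs at this visit if the latest hemoglobin (g/dL) is
    below the threshold or the patient is already on ESAs.
    Threshold is a hypothetical placeholder."""
    return history["hb"][-1] < hb_threshold or history["on_esa"][-1]

def static_always(history):
    """Static regime: always administer ESAs."""
    return True

def static_never(history):
    """Static regime: never administer ESAs."""
    return False

# Example: a patient whose hemoglobin drifts downward over three visits.
patient = {"hb": [11.2, 10.4, 9.6], "on_esa": [False, False, False]}
for rule in (dynamic_rule, static_always, static_never):
    print(rule.__name__, "->", rule(patient))
```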
Background: The Australian National Bowel Cancer Screening Program (NBCSP), which provides 2-yearly screening to people aged 50 to 74 y, had a phased rollout from 2006 and was fully implemented in 2020. To measure the effectiveness of the NBCSP while accounting for age-specific trends, we aimed to develop a novel integrative method to project colorectal cancer (CRC) incidence rates from 2006 to 2045 in the absence of the NBCSP (referred to as "no-NBCSP projections"), addressing the challenge of complex age-specific trends in CRC incidence.
Methods: We constructed a new dataset by replacing the observed data for NBCSP-eligible individuals aged 50 to 74 y with intermediate projections based on pre-NBCSP data from 1982 to 2005. We compared the no-NBCSP CRC incidence projected using a standard age-period-cohort (APC) model, age-stratified APC models, and the integrative modeling approach.
Results: The integrative modeling approach captured complex age-specific trends better than the standard and age-stratified APC models did. Without the NBCSP, overall CRC incidence rates would be expected to decline from 2005 to 2025, followed by increases from 2026 to 2045. Incidence rates for those aged <50 y would be projected to continue increasing to 2045, and increases in incidence rates for older age groups would be projected to occur from 2020 for ages 50 to 54 y, from 2030 for ages 65 to 74 y, and from 2035 for ages 75 y and older.
Conclusions: These no-NBCSP projections provide a counterfactual benchmark against which to measure the impact of the NBCSP on CRC incidence in Australia, and they have been used as new calibration targets for a simulation model of CRC and screening in Australia. The methods developed here could be used to generate comparators for assessing the impact of other public health interventions.
Highlights:
- We constructed counterfactual projections of colorectal cancer (CRC) incidence rates in the absence of the National Bowel Cancer Screening Program (no-NBCSP projections).
- To do this, we developed a new integrative modeling approach to capture complex age-specific CRC incidence trends.
- These no-NBCSP projections provide a counterfactual benchmark against which to measure the impact of the NBCSP on CRC incidence in Australia.
- These projections stress the need for ongoing assessment of the starting age for the NBCSP to tackle the increasing incidence in people younger than 50 y.
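A stripped-down version of the projection idea is shown below: fit a log-linear trend to one age group's pre-intervention incidence rates and extrapolate it forward as a counterfactual. This illustrates the general mechanics only, with simulated placeholder data; the study's integrative approach layers age-period-cohort structure and intermediate projections on top of this.

```python
import numpy as np

years = np.arange(1982, 2006)  # pre-NBCSP calibration window
rng = np.random.default_rng(0)
# Simulated placeholder rates per 100,000 for one age group, trending upward.
rates = 40 * np.exp(0.01 * (years - 1982)) * rng.lognormal(0.0, 0.03, years.size)

# Fit a log-linear trend and project it forward as the counterfactual.
slope, intercept = np.polyfit(years, np.log(rates), 1)
future = np.arange(2006, 2046)
projected = np.exp(intercept + slope * future)
print(f"projected 2045 rate: {projected[-1]:.1f} per 100,000")
```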

