Background. Health technology assessment bodies in several countries, including Japan and the United Kingdom, recommend mapping techniques to obtain utility scores in clinical trials that do not include a preference-based measure of health. This study sought to develop mapping algorithms to predict EQ-5D-3L scores from the Kansas City Cardiomyopathy Questionnaire (KCCQ) in patients with heart failure (HF). Methods. Data from the randomized, double-blind PARADIGM-HF trial were analyzed, and EQ-5D-3L scores were calculated using the Japanese and UK value sets. Several model specifications were explored to fit EQ-5D data collected at baseline to KCCQ scores, including ordinary least squares (OLS) regression and two-part, Tobit, and three-part models. Generalized estimating equation models were also fitted to analyze longitudinal EQ-5D data. To validate model predictions, the data set was split into a derivation sample (n = 4,465), from which the models were developed, and a separate validation sample (n = 1,892). Results. There were only small differences between the model classes tested. Model performance and predictive power were better for the item-level models than for the models including KCCQ domain scores. R² statistics for the item-level models ranged from 0.45 to 0.52. Mean absolute error in the validation sample was 0.10 for the models using the Japanese value set and 0.114 for the UK models. All models showed some underprediction of utility above 0.75 and some overprediction of utility below 0.5 but performed well for population-level estimates. Conclusions. Using data from a large clinical trial in HF, we found that EQ-5D-3L scores can be estimated from responses to the KCCQ, which can facilitate cost-utility analyses of existing HF trials in which only the KCCQ was administered. Future validation in other HF populations is warranted.
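To make the mapping-and-validation workflow concrete, here is a minimal sketch of an OLS mapping model with a derivation/validation split and a mean-absolute-error check, as described in the abstract. This is not the authors' code: the data below are synthetic, the coefficients and KCCQ column structure are hypothetical assumptions, and only the sample sizes mirror the paper.

```python
# Illustrative sketch (not the authors' models): predict EQ-5D-3L utility
# from KCCQ domain scores with OLS, then validate on a held-out sample.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(0)
n = 6357  # derivation (4,465) + validation (1,892), as in the paper

# Hypothetical KCCQ domain scores on a 0-100 scale (four domains assumed).
kccq = rng.uniform(0, 100, size=(n, 4))

# Synthetic EQ-5D utilities: a noisy linear function of the domain scores,
# clipped to the plausible EQ-5D-3L index range (illustration only).
true_coef = np.array([0.002, 0.003, 0.001, 0.002])
utility = np.clip(0.3 + kccq @ true_coef + rng.normal(0, 0.08, n), -0.11, 1.0)

# Derivation/validation split mirroring the paper's sample sizes.
X_der, X_val = kccq[:4465], kccq[4465:]
y_der, y_val = utility[:4465], utility[4465:]

model = LinearRegression().fit(X_der, y_der)
pred = np.clip(model.predict(X_val), -0.11, 1.0)  # keep predictions in range

print(f"MAE: {mean_absolute_error(y_val, pred):.3f}")
print(f"R^2: {r2_score(y_val, pred):.2f}")
```

In practice a Tobit or two-part specification, as explored in the paper, would handle the ceiling of responses at full health (utility = 1.0) more gracefully than clipping OLS predictions.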
Extensive testing lies at the heart of any strategy to effectively combat the SARS-CoV-2 pandemic. In recent months, the use of enzyme-linked immunosorbent assay-based antibody tests has received considerable attention. These tests can potentially be used to assess SARS-CoV-2 immunity status in individuals (e.g., essential health care personnel). They can also be used as a screening tool to identify people who had COVID-19 asymptomatically, thereby providing a better estimate of the true spread of the disease, important insights into disease severity, and a better evaluation of the effectiveness of policy measures implemented to combat the pandemic. However, the usefulness of these tests depends not only on the quality of the test but also, critically, on how far the disease has already spread in the population. For example, when only very few people in a population are infected, a positive test result has a high chance of being a false positive. As a consequence, the spread of the disease in a population, as well as individuals' immunity status, may be systematically misinterpreted. SARS-CoV-2 infection rates vary greatly across both time and space. In many places, infection rates are very low but can quickly skyrocket when the virus spreads unchecked. Here, we present two tools, natural frequency trees and positive and negative predictive value graphs, that allow one to assess the usefulness of antibody testing for a specific context at a glance. These tools should be used to support individual doctor-patient consultations on immunity status as well as to inform policy discussions on testing initiatives.
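The prevalence effect described above follows directly from Bayes' rule. The sketch below computes positive and negative predictive values across prevalence levels; the 90% sensitivity and 95% specificity figures are illustrative assumptions, not properties of any particular antibody test.

```python
# Minimal sketch of the predictive-value calculation behind the tools above.
def ppv(sens: float, spec: float, prev: float) -> float:
    """P(infected | positive test) via Bayes' rule."""
    tp = sens * prev                 # true positives
    fp = (1 - spec) * (1 - prev)     # false positives
    return tp / (tp + fp)

def npv(sens: float, spec: float, prev: float) -> float:
    """P(not infected | negative test) via Bayes' rule."""
    tn = spec * (1 - prev)           # true negatives
    fn = (1 - sens) * prev           # false negatives
    return tn / (tn + fn)

# With assumed 90% sensitivity and 95% specificity, PPV collapses at low
# prevalence: at 0.1% prevalence, a positive result is almost surely false.
for prev in (0.001, 0.01, 0.05, 0.20):
    print(f"prevalence {prev:5.1%}: "
          f"PPV {ppv(0.90, 0.95, prev):.1%}, NPV {npv(0.90, 0.95, prev):.1%}")
```

At 0.1% prevalence this yields a PPV of roughly 2%, which is exactly the misinterpretation risk the natural frequency trees and predictive value graphs are designed to make visible at a glance.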
Background. Variability in outpatient specialty clinic schedules contributes to numerous adverse effects, including chaotic clinic settings, provider burnout, increased patient waiting times, and inefficient use of resources. This research measures the benefit of balancing provider schedules in an outpatient specialty clinic. Design. We developed a constrained optimization model to minimize the variability in provider schedules in an outpatient specialty clinic. Schedule variability was defined as the variance in the number of providers scheduled for clinic during each hour the clinic is open. Using M Health Fairview's Clinics and Surgery Center as a case study, we compared the variance in the number of providers scheduled per hour under the constrained optimization schedule with that under the actual schedule for three reference scenarios used in practice. Results. Compared with the actual schedules, constrained optimization reduced the variance in the number of providers scheduled per hour by 92% (from 1.70 to 0.14), 88% (from 1.98 to 0.24), and 94% (from 1.98 to 0.12). Total and per-provider assigned clinic hours remained the same as in the reference scenarios. Constrained optimization also reduced the maximum number of providers scheduled per hour relative to the actual schedule in each reference scenario. The constrained optimization schedules used 100% of the available clinic time, whereas the reference scenario schedules placed providers in clinic during only 87%, 92%, and 82% of the open clinic time, respectively. Limitations. The scheduling model requires a centralized provider scheduling process in the clinic. Conclusions. Constrained optimization can help balance provider schedules in outpatient specialty clinics, thereby reducing the risk of negative effects associated with highly variable clinic settings.
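The following sketch illustrates the kind of constrained optimization the abstract describes, using the open-source PuLP solver. The paper minimizes the variance of providers scheduled per hour; here a linear proxy (maximum minus minimum hourly load) is minimized instead so the model stays a simple mixed-integer linear program. The clinic hours and per-provider requirements are illustrative assumptions.

```python
# Sketch: balance hourly provider load while keeping each provider's total
# assigned clinic hours fixed, as in the abstract's constraints.
import pulp

HOURS = range(10)                      # assume clinic open 10 hours/day
providers = {"A": 4, "B": 6, "C": 5}   # assumed required hours per provider

prob = pulp.LpProblem("balance_provider_schedule", pulp.LpMinimize)

# x[p][h] = 1 if provider p is scheduled in clinic during hour h.
x = pulp.LpVariable.dicts("x", (providers, HOURS), cat="Binary")
max_load = pulp.LpVariable("max_load", lowBound=0)
min_load = pulp.LpVariable("min_load", lowBound=0)

prob += max_load - min_load  # flatten the hourly load profile (variance proxy)

for p, need in providers.items():
    # Total assigned hours per provider stay the same as in the reference.
    prob += pulp.lpSum(x[p][h] for h in HOURS) == need

for h in HOURS:
    load = pulp.lpSum(x[p][h] for p in providers)
    prob += load <= max_load
    prob += load >= min_load

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for h in HOURS:
    print(h, sum(int(x[p][h].value()) for p in providers))
```

A quadratic objective on the hourly loads would minimize the variance directly, at the cost of needing a mixed-integer quadratic solver; the range proxy above produces similarly flat schedules in this simple setting.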
Despite the evolving evidence in favor of shared decision making (SDM) and decades of calls for its adoption, SDM remains uncommon in routine care. Reflecting on this lack of progress, we sought to reimagine the future of SDM and the path to take us there. In late 2017, a multidisciplinary and international group of six researchers was challenged by a senior SDM scholar to envision the future and, based on a provocatively critical view of the present, to write letters to themselves from the year 2028. Letters were exchanged and discussed electronically, and the group then met in person to discuss them. Because the letters painted a dystopian picture, they triggered questions about the nature of SDM, who should benefit from SDM, how to measure its contribution to care, and what new ways can be invented to design and test interventions to implement SDM in routine care. By contrasting the purposefully generated dystopias with an ideal future for SDM, we generated reflections on a research agenda for SDM. These reflections hinged on recognizing SDM's contribution to care, that is, its role as a way to advance the problematic human situation of patients. They focused on three distinct yet complementary contributors to SDM: 1) the process of making decisions, 2) humanistic communication, and 3) fit-to-care of the resulting decision. The group concluded that to move SDM from envisioned to routine practice, and to ensure it reaches all, particularly persons rendered vulnerable by current forms of health care, a substantial investment in implementation research is necessary. Perhaps the discussion of these reflections can contribute to a path forward that improves the likelihood of the future we dream of for SDM.
The Centers for Medicare and Medicaid Services (CMS) has mandated shared decision making (SDM) using patient decision aids for three conditions (lung cancer screening, atrial fibrillation, and implantable defibrillators). These forward-thinking mandates respond to a wealth of efficacy data demonstrating that decision aids can improve patient decision making. However, there has been little focus on how to implement these approaches in real-world practice. This article demonstrates how using an implementation science framework may help programs understand multilevel challenges and opportunities to improve adherence to the CMS mandates. Using the PRISM (Practical, Robust Implementation and Sustainability Model) framework, we discuss general challenges to the implementation of SDM, issues specific to each mandate, and how to plan for, enhance, and assess SDM implementation outcomes. Notably, a theme of this discussion is that successful implementation is context specific: to achieve truly successful and sustainable changes in practice, context variability and adaptation to context must be considered and addressed.
Purpose. In 2018, the US Preventive Services Task Force (USPSTF) endorsed three strategies for cervical cancer screening in women aged 30 to 65 years: cytology every 3 years, testing for high-risk types of human papillomavirus (hrHPV) every 5 years, and cytology plus hrHPV testing (co-testing) every 5 years. It further recommended that women discuss with their health care providers which testing strategy is best for them. To inform such discussions, we used decision analysis to estimate outcomes of screening strategies recommended for women at age 30. Methods. We constructed a Markov decision model using estimates of the natural history of HPV and cervical neoplasia. We evaluated the three USPSTF-endorsed strategies, hrHPV testing every 3 years, and no screening. Outcomes included colposcopies with biopsy, false-positive testing (a colposcopy in which no cervical intraepithelial neoplasia grade 2 or worse was found), treatments, cancers, and cancer mortality, expressed per 10,000 women over a 15-year (shorter-than-lifetime) horizon. Results. All strategies resulted in substantially lower cancer and cancer death rates than no screening. Strategies with the lowest likelihood of cancer and cancer death generally had a higher likelihood of colposcopy and false-positive testing. Conclusions. The screening strategies we evaluated involved tradeoffs between benefits and harms. Because individual women may place different weights on these projected outcomes, the optimal choice for each woman may best be discerned through shared decision making.
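To illustrate the mechanics of a Markov cohort model of the kind the abstract describes, here is a minimal sketch. The states and annual transition probabilities below are illustrative placeholders, not the calibrated natural-history estimates used in the study; only the cohort size (10,000 women) and the 15-year horizon mirror the paper.

```python
# Sketch: advance a cohort of 10,000 women through hypothetical health
# states for 15 annual cycles via a transition probability matrix.
import numpy as np

states = ["Well", "HPV", "CIN2+", "Cancer", "Dead"]
# Rows: from-state; columns: to-state (assumed annual probabilities).
P = np.array([
    [0.93, 0.05, 0.00, 0.00, 0.02],   # Well
    [0.40, 0.50, 0.08, 0.00, 0.02],   # HPV (most infections clear)
    [0.10, 0.00, 0.82, 0.06, 0.02],   # CIN2+ (may regress or progress)
    [0.00, 0.00, 0.00, 0.85, 0.15],   # Cancer
    [0.00, 0.00, 0.00, 0.00, 1.00],   # Dead (absorbing state)
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row must be a distribution

cohort = np.array([10_000.0, 0, 0, 0, 0])  # all women start in "Well"
for year in range(15):                      # 15-year horizon, as in the paper
    cohort = cohort @ P

for state, n in zip(states, cohort):
    print(f"{state:7s} {n:8.1f}")
```

A full screening model layers test characteristics on top of this natural-history core: each strategy detects and treats lesions at different rates, shifting the cohort's transitions and producing the colposcopy, false-positive, cancer, and mortality counts the study compares.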