Pub Date: 2019-10-09 | DOI: 10.1136/bmjqs-2019-009703
E. Thomas
In this issue, Amalberti and Vincent1 ask ‘what strategies we might adopt to protect patients when healthcare systems and organizations are under stress and simply cannot provide the standard of care they aspire to’. This is clearly a critical and much overdue question, as many healthcare organisations are in an almost constant state of stress from high workload, personnel shortages, high-complexity patients, new technologies, fragmented and conflicting payment systems, over-regulation and many other issues. These stressors put mid-level managers and front-line staff in situations where they may compromise their standards and be unable to provide the highest quality care. Such circumstances can contribute to low morale and burn-out.

The authors provide guidance for addressing this tension of providing safe care during times of organisational stress, including principles for managing risk in difficult conditions, examples of managing this tension in other high-risk industries, and a research and development agenda for healthcare. Leaders at all levels of healthcare organisations should read this article.

These authors join others2 who advise that we should shift our focus from creating absolute safety (meaning the elimination of error and harm) towards doing a better job of actively managing risk. I want to expand on this point to explore how an excessive focus on absolute safety may paradoxically reduce safety.

Striving for absolute safety—often termed ‘zero harm’—is encouraged by some consultants, patient safety experts and regulators. Take, for example, the recently published book ‘Zero Harm: How to Achieve Patient and Workforce Safety in Healthcare’,3 edited by three leaders of Press Ganey, a large organisation that works with over 26 000 healthcare organisations with the mission of helping organisations improve patient experience, including improving safety. The book states, ‘We will only reduce serious safety events, and improve organizations’ overall performance, if every US …
Citation: The harms of promoting ‘Zero Harm’. Quality & Safety in Health Care, vol. 29, pp. 4–6.
Pub Date: 2019-10-09 | DOI: 10.1136/bmjqs-2019-009515
M. Delisle, J. Pradarelli, N. Panda, A. Haynes, A. Hannenberg
Effective teamwork and communication are now recognised as a critical component of safe, high-quality patient care. Researchers, policymakers and frontline providers are in search of strategies to improve teamwork in healthcare. The most frequently used strategy is teamwork training.1 Teamwork training involves a systematic process in which a team is guided (often by facilitators) to improve and master different skills for working together effectively. Single-centre teamwork training initiatives have demonstrated improvements in patient care, but these results have been challenging to reproduce at scale.2

In this issue of BMJ Quality and Safety, Lenguerrand et al report the results of a stepped-wedge randomised controlled trial in which PRactical Obstetric Multi-Professional Training (PROMPT), an interprofessional intrapartum training package, was implemented across 12 maternity units in Scotland.3 Each participating unit identified an in-house training team to attend a 2-day PROMPT Train the Trainers programme conducted at a single simulation centre; two units were unable to send training teams. The training teams were subsequently responsible for coordinating the delivery of in-house PROMPT courses to all maternity staff within 12 months. The courses were intended to cover core obstetric emergencies, such as postpartum haemorrhage, sepsis and shoulder dystocia, as well as teamwork and fetal monitoring. In addition to clinical outcomes, each maternity unit collected process data about its local PROMPT courses, including the total number of staff trained, the number of courses delivered and the actual course content. The authors found substantial variability in implementation across units. For example, all courses included elements of teamwork, whereas fetal monitoring and shoulder dystocia training were not universally included.

Despite the previously demonstrated benefits of PROMPT in single-centre studies, the final results did not demonstrate a reduction in term babies with a low Apgar score. The authors postulate that this null finding may be in part related …
Citation: Methods for scaling simulation-based teamwork training. Quality & Safety in Health Care, vol. 29, pp. 98–102.
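The trial design above assigns every maternity unit to begin in the control condition and cross over to the intervention at a staggered, randomly ordered step. A minimal sketch of how such a stepped-wedge schedule can be generated; the unit names, number of steps and even spacing of crossovers are illustrative assumptions, not details of the PROMPT trial.

```python
# Hypothetical stepped-wedge rollout: each unit starts as control (0) and
# switches permanently to intervention (1) at its assigned crossover step.
import random

def stepped_wedge_schedule(units, n_steps, seed=0):
    """Return {unit: [0/1 per step]}; 0 = control, 1 = intervention."""
    rng = random.Random(seed)
    units = list(units)
    rng.shuffle(units)  # randomise the order in which units cross over
    schedule = {}
    for i, unit in enumerate(units):
        # Spread crossover points evenly over steps 1..n_steps-1 so every
        # unit contributes both control and intervention periods.
        crossover = 1 + round(i * (n_steps - 2) / max(len(units) - 1, 1))
        schedule[unit] = [1 if step >= crossover else 0
                          for step in range(n_steps)]
    return schedule

sched = stepped_wedge_schedule([f"unit_{k}" for k in range(12)], n_steps=6)
```

Because every unit is observed in both conditions, each unit acts as its own control while calendar-time trends remain estimable across the wedge.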
Pub Date: 2019-10-08 | DOI: 10.1136/bmjqs-2019-009730
A. Borzecki, A. Rosen
Despite consensus that preventing patient safety events is important, measuring safety events remains challenging. This is in part because they occur relatively infrequently and are not always preventable. There is also no consensus on the ‘best way’ to measure, or the ‘best measure’ of, patient safety. The purpose of all safety measures is to improve care and prevent safety events; this can be achieved by different means. If the overall goal of measuring patient safety is to capture the universe of safety events that occur, then broader measures encompassing large populations, such as those based on administrative data, may be preferable. Acknowledging the trade-off between comprehensiveness and accuracy, such measures may be better suited to surveillance and quality improvement (QI) than to public reporting or reimbursement. Conversely, using measures for public reporting and pay-for-performance requires more narrowly focused measures that favour accuracy over comprehensiveness, such as those with restricted denominators or those based on medical record review.

There are at least two well-established patient safety measurement systems available for use in the inpatient setting: the administrative data-based Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicators (PSIs) and the medical record-based National Surgical Quality Improvement Programme (NSQIP) measures.1–3 The AHRQ PSIs, publicly released in 2003, are evidence-based measures designed to screen for potentially preventable medical and surgical complications that occur in the acute care setting. Because they use administrative data, they were originally designed as tools for case finding in local QI efforts and surveillance, as well as for internal hospital comparisons. They were developed using a rigorous process, beginning with a thorough review of the literature for existing administrative data-based indicators, followed by review by clinical expert panels, consultation with coding experts and empirical analyses to assess the statistical properties of the measures, such as reliability and predictive and …
Citation: Is there a ‘best measure’ of patient safety? Quality & Safety in Health Care, vol. 29, pp. 185–188.
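An administrative data-based indicator of the kind described above screens discharge records against coded criteria: a record enters the denominator if it matches the indicator's population definition, and counts in the numerator if a qualifying complication code is present. The sketch below illustrates that screening logic only; the DRG and diagnosis codes are invented placeholders, not any real PSI specification.

```python
# Toy screen in the style of an administrative-data safety indicator.
SURGICAL_DRG = {"470", "330"}               # hypothetical surgical DRG codes
COMPLICATION_CODES = {"T81.4", "J95.851"}   # hypothetical complication codes

def screen(records):
    """Return (numerator, denominator) counts for the toy indicator."""
    num = den = 0
    for rec in records:
        if rec["drg"] in SURGICAL_DRG:               # denominator criterion
            den += 1
            if set(rec["dx"]) & COMPLICATION_CODES:  # complication coded
                num += 1
    return num, den

records = [
    {"drg": "470", "dx": ["I10"]},           # eligible, no complication
    {"drg": "470", "dx": ["I10", "T81.4"]},  # eligible, flagged complication
    {"drg": "885", "dx": ["T81.4"]},         # outside the surgical population
]
num, den = screen(records)   # num=1, den=2, observed rate 0.5
```

The trade-off discussed in the editorial lives in these two sets: widening the denominator criteria increases comprehensiveness, while coding errors in either set erode accuracy.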
Pub Date: 2019-10-05 | DOI: 10.1136/bmjqs-2019-009469
C. Burton, Luke O'Neill, P. Oliver, P. Murchie
Objectives: To examine how much of the variation between general practices in referral rates and cancer detection rates is attributable to local health services rather than to the practices or their populations.
Design: Ecological analysis of national data on fast-track referrals for suspected cancer from general practices. Data were analysed at the levels of general practice, primary care organisation (Clinical Commissioning Group) and secondary care provider (Acute Hospital Trust). Variation in detection rate was analysed by multilevel linear and Poisson regression.
Setting: 6379 group practices with data relating to more than 50 cancer cases diagnosed over the 5 years from 2013 to 2017.
Outcomes: Proportion of observed variation attributable to primary and secondary care organisations in standardised fast-track referral rate and in cancer detection rate, before and after adjustment for practice characteristics.
Results: Primary care organisation accounted for 21% of the variation between general practices in the standardised fast-track referral rate and 42% of the unadjusted variation in cancer detection rate. After adjusting for standardised fast-track referral rate, primary care organisation accounted for 31% of the variation in cancer detection rate (compared with 18% accounted for by practice characteristics). In areas where a hospital trust was the main provider for multiple primary care organisations, hospital trusts accounted for the majority of the variation attributable to local health services (between 63% and 69%).
Conclusion: This is the first large-scale finding that a substantial proportion of the variation between general practices in referrals is attributable to their local healthcare systems. Efforts to reduce variation need to focus not just on individual practices but also on local diagnostic service provision and culture at the interface of primary and secondary care.
Citation: Contribution of primary care organisation and specialist care provider to variation in GP referrals for suspected cancer: ecological analysis of national data. Quality & Safety in Health Care, vol. 29, pp. 296–303.
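In a multilevel model of the kind used here, the share of variation "attributable to" an organisational level is the variance component for that level divided by the total variance (an intraclass correlation). A minimal sketch of that arithmetic; the variance components below are illustrative numbers chosen to echo the 21% figure, not estimates from the paper.

```python
# Proportion of total variance attributable to one level of a multilevel
# model: level variance / (level variance + all other variance components).

def variance_share(level_var, other_vars):
    """Intraclass-correlation-style share of total variance for one level."""
    total = level_var + sum(other_vars)
    return level_var / total

# e.g. organisation-level 0.21, practice-level 0.59, residual 0.20
share = variance_share(0.21, [0.59, 0.20])   # 0.21, i.e. 21% of variation
```

The same calculation, rerun after adding practice characteristics as covariates, is what lets the authors separate the organisation's 31% from the 18% explained by the practices themselves.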
Pub Date: 2019-10-03 | DOI: 10.1136/bmjqs-2019-009923
B. Galen, Sarah W. Baron, Sandra Young, A. Hall, Linda Berger-Spivack, W. Southern
Background: Training nurses in ultrasound-guided peripheral intravenous catheter placement might reduce the use of more invasive venous access devices (peripherally inserted central catheters (PICCs) and midline catheters).
Methods: We implemented an abbreviated training in ultrasound-guided peripheral intravenous catheter placement for nurses on an inpatient medical unit and provided a portable ultrasound device for 10 months.
Results: Nurses on this unit placed 99 ultrasound-guided peripheral intravenous catheters with a high level of success. During the implementation period, PICC and midline catheter placement decreased from a mean of 4.8 to 2.5 per month, meeting criteria for special cause variation. In the postimplementation period, average catheter use reverted to 4.3 per month on the intervention unit. A comparison inpatient medical unit without training or access to a portable ultrasound device experienced no significant change in PICC and midline catheter use throughout the study period (mean of 6.0 per month).
Conclusions: These results suggest that an abbreviated training in ultrasound-guided peripheral intravenous catheter placement for nurses on an inpatient medical unit is sufficient to reduce the use of PICCs and midline catheters.
Citation: Reducing peripherally inserted central catheters and midline catheters by training nurses in ultrasound-guided peripheral intravenous catheter placement. Quality & Safety in Health Care, vol. 29, pp. 245–249.
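"Special cause variation" means a monthly count fell outside statistical control limits rather than within ordinary noise. One common rule for count data, sketched below, uses c-chart limits of mean ± 3·sqrt(mean) from a baseline period (a single point beyond the limits is only one of several published special-cause rules). The baseline and test counts are illustrative, not the study's data.

```python
# c-chart sketch: flag a monthly count that falls outside 3-sigma limits
# derived from a baseline period, where sigma = sqrt(mean) for counts.
import math

def c_chart_limits(baseline_counts):
    """Lower and upper control limits from baseline monthly counts."""
    mean = sum(baseline_counts) / len(baseline_counts)
    sigma = math.sqrt(mean)
    return max(mean - 3 * sigma, 0.0), mean + 3 * sigma

def special_cause(count, limits):
    lcl, ucl = limits
    return count < lcl or count > ucl

limits = c_chart_limits([5, 4, 6, 5, 4])   # baseline mean 4.8 per month
flag = special_cause(12, limits)           # a month far above the upper limit
```

A drop like the study's 4.8 to 2.5 per month would be judged against the lower limit and run-length rules rather than a single point, which is why sustained shifts matter more than one good month.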
Pub Date: 2019-10-03 | DOI: 10.1136/bmjqs-2019-009953
S. Hota, M. Doll, G. Bearman
Clostridioides difficile infection (CDI) has remained an important healthcare-associated infection and threat to patient safety since the height of the NAP1/027 epidemic in the early part of the millennium. In 2011, C. difficile caused almost half a million infections and 29 000 deaths in the USA alone, with 24% of those cases occurring in hospital settings.1 The US Centers for Disease Control and Prevention identifies C. difficile as one of three pathogens that pose ‘an immediate antibiotic resistance threat that requires urgent and aggressive action’.2 Many jurisdictions now require public reporting of hospital CDI rates. In some countries, hospitals face financial penalties for elevated CDI rates. CDI rates are also top priorities on hospital quality agendas, often associated with ambitious reduction targets. Some institutions even aim for complete elimination of healthcare-associated CDI—a goal referred to as ‘getting to zero’.

There is no argument that healthcare-associated CDI is a significant patient safety issue and that aggressive efforts should be taken to prevent its harmful effects. However, external pressures and a lack of appreciation for the complexity of C. difficile epidemiology are interfering with the mission to prevent healthcare-associated CDI. We expose the challenges of the current approach to CDI prevention in hospitals and highlight where prevention efforts deserve further attention.

With the focus on reducing CDI rates, diagnostic test stewardship for CDI has become a popular quality improvement initiative in hospitals. Typical symptoms of CDI—diarrhoea and abdominal pain—are common in hospitalised patients owing to comorbidities, medication exposures (including laxatives) and initiation of enteral feeds. Coupled with the increasing use of highly sensitive C. difficile molecular tests, this means CDI is overdiagnosed in up to half of those under investigation.3 Algorithms therefore exist to discourage testing in patients with alternative aetiologies of diarrhoea. Diagnostic C. difficile test stewardship may provide the benefit of heightening accurate case …
Citation: Preventing Clostridioides difficile infection in hospitals: what is the endgame? Quality & Safety in Health Care, vol. 29, pp. 157–160.
Pub Date: 2019-09-23 | DOI: 10.1136/bmjqs-2019-009570
T. Green, A. Bonner, L. Teleni, Natalie K. Bradford, L. Purtell, C. Douglas, P. Yates, M. MacAndrew, Hai Yen Dao, R. Chan
Background: Experience-based codesign (EBCD) is an approach to health service design that engages patients and healthcare staff in partnership to develop and improve health services or pathways of care. The aim of this systematic review was to examine the use (structure, process and outcomes) and reporting of EBCD in health service improvement activities.
Methods: Electronic databases (MEDLINE, CINAHL, PsycINFO and The Cochrane Library) were searched to identify peer-reviewed articles published from database inception to August 2018. Search terms identified peer-reviewed English-language qualitative, quantitative and mixed-methods studies, which underwent independent screening by two authors. Full texts were independently reviewed by two reviewers, and data were extracted by one reviewer and checked by a second. Adherence to the 10 activities embedded within the eight-stage EBCD framework was calculated for each study.
Results: We identified 20 studies, predominantly from the UK and in acute mental health or cancer services. EBCD fidelity ranged from 40% to 100%, with only three studies achieving 100% fidelity.
Conclusion: EBCD is used predominantly for quality improvement but has potential to be used in intervention design projects. There is variation in the use of EBCD, with many studies eliminating or modifying some EBCD stages, and there is no consistency in reporting. To evaluate the effect of modifying EBCD, or of different levels of EBCD fidelity, the outcomes of each EBCD phase (ie, touchpoints and improvement activities) should be reported in a consistent manner.
Trial registration number: CRD42018105879.
Citation: Use and reporting of experience-based codesign studies in the healthcare setting: a systematic review. Quality & Safety in Health Care, vol. 29, pp. 64–76.
Pub Date: 2019-09-20. DOI: 10.1136/bmjqs-2019-009800
J. Mehaffey, R. Hawkins, E. Charles, F. Turrentine, B. Kaplan, S. Fogel, Charles Harris, D. Reines, J. Posadas, G. Ailawadi, J. Hanks, P. Hallowell, R. S. Jones
Background Socioeconomic status affects surgical outcomes; however, these factors are not included in clinical quality improvement data and risk models. We performed a prospective registry analysis to determine whether the Distressed Communities Index (DCI), a composite socioeconomic ranking by zip code, could predict risk-adjusted surgical outcomes and resource utilisation. Methods All patients undergoing surgery (n=44,451) in a regional quality improvement database (American College of Surgeons National Surgical Quality Improvement Program, ACS-NSQIP) were paired with the DCI, which ranges from 0 to 100 (low to high distress) and accounts for unemployment, education level, poverty rate, median income, business growth and housing vacancies. The top quartile of distress was compared with the remainder of the cohort, and mixed-effects modelling evaluated the ACS-NSQIP risk-adjusted association between DCI and the primary outcomes of surgical complications and resource utilisation. Results A total of 9369 (21.1%) patients came from severely distressed communities (DCI >75); these patients had higher rates of most medical comorbidities as well as transfer status (8.4% vs 4.8%, p<0.0001), resulting in a higher ACS-NSQIP predicted risk of any complication (8.0% vs 7.1%, p<0.0001). Patients from severely distressed communities had increased 30-day mortality (1.8% vs 1.4%, p=0.01), postoperative complications (9.8% vs 8.5%, p<0.0001), hospital readmission (7.7% vs 6.8%, p<0.0001) and resource utilisation. DCI was independently associated with postoperative complications (OR 1.07, 95% CI 1.04 to 1.10, p<0.0001) and with resource utilisation after adjusting for ACS-NSQIP predicted risk. Conclusion An increasing Distressed Communities Index is associated with increased postoperative complications and resource utilisation even after ACS-NSQIP risk adjustment. These findings demonstrate a disparity in surgical outcomes based on community-level socioeconomic factors, highlighting the continued need for public health innovation and policy initiatives.
Community level socioeconomic status association with surgical outcomes and resource utilisation in a regional cohort: a prospective registry analysis. Quality & Safety in Health Care, vol. 29, pp. 232–237. DOI: 10.1136/bmjqs-2019-009800
Pub Date: 2019-09-18. DOI: 10.1136/bmjqs-2018-009165
G. Abel, M. Elliott
When the degree of variation between healthcare organisations or geographical regions is quantified, there is often a failure to account for the role of chance, which can lead to an overestimation of the true variation. Mixed-effects models account for the role of chance and estimate the true/underlying variation between organisations or regions. In this paper, we explore how a random intercept model can be applied to rate or proportion indicators and how to interpret the estimated variance parameter.
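The overestimation the authors describe can be shown in a few lines: the naive variance of observed organisation-level rates equals the true between-organisation variance plus binomial sampling noise, so subtracting the average sampling variance (the method-of-moments analogue of what a random-intercept model estimates) recovers the underlying variation. A minimal simulation, assuming invented organisation counts and rates:

```python
import numpy as np

rng = np.random.default_rng(42)

n_orgs, n_patients = 200, 500
mu, tau = 0.10, 0.03  # mean event rate; true between-organisation SD

# Each organisation has its own underlying rate; observed rates add
# binomial (chance) noise on top of the true variation.
true_rates = rng.normal(mu, tau, n_orgs).clip(0.01, 0.99)
events = rng.binomial(n_patients, true_rates)
observed = events / n_patients

# Naive between-organisation variance mixes true and chance variation.
var_observed = observed.var(ddof=1)

# Method-of-moments correction: subtract the average binomial sampling
# variance, analogous to the variance parameter a random-intercept
# (mixed-effects) model would estimate for the organisation effects.
sampling_var = (observed * (1 - observed) / n_patients).mean()
var_true_est = var_observed - sampling_var

print(f"observed SD between orgs: {np.sqrt(var_observed):.4f}")  # > tau
print(f"estimated underlying SD:  {np.sqrt(max(var_true_est, 0)):.4f}")  # ~ tau
```

The observed spread always exceeds the estimated underlying spread, and the gap grows as the per-organisation sample size shrinks, which is exactly why raw league-table variation overstates true differences.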
Identifying and quantifying variation between healthcare organisations and geographical regions: using mixed-effects models. Quality & Safety in Health Care, vol. 28, pp. 1032–1038. DOI: 10.1136/bmjqs-2018-009165