A machine learning model for early diagnosis of type 1 Gaucher disease using real-life data
Pub Date: 2024-09-06 | DOI: 10.1016/j.jclinepi.2024.111517
Avraham Tenenbaum, Shoshana Revel-Vilk, Sivan Gazit, Michael Roimi, Aidan Gill, Dafna Gilboa, Ora Paltiel, Orly Manor, Varda Shalev, Gabriel Chodick
Objective
The diagnosis of Gaucher disease (GD) presents a major challenge due to the high variability and low specificity of its clinical characteristics, along with limited physician awareness of the disease's early symptoms. Early and accurate diagnosis is important to enable effective treatment decisions, prevent unnecessary testing, and facilitate genetic counseling. This study aimed to develop a machine learning (ML) model for GD screening and early diagnosis based on real-world clinical data from the Maccabi Healthcare Services electronic database, which contains 20 years of longitudinal data on approximately 2.6 million patients.
Study Design and Setting
We screened the Maccabi Healthcare Services database for patients with GD between January 1998 and May 2022. Eligible controls were matched by year of birth, sex, and socioeconomic status in a 1:13 ratio. The data were partitioned into 75% training and 25% test sets, and models were trained to predict GD using features obtained from medical and laboratory records. Model performance was evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC).
Results
We detected 264 confirmed patients with GD, to whom we matched 3,429 controls. The best-performing model (which included known GD signs and symptoms, previously unknown clinical features, and administrative codes) achieved an AUROC of 0.95 ± 0.03 and an AUPRC of 0.80 ± 0.08 on the test set, identifying GD a median of 2.78 years earlier than the clinical diagnosis (25th–75th percentile: 1.29–4.53).
Conclusion
Using an ML approach on real-world data led to excellent discrimination between GD patients and controls, with the ability to detect GD significantly earlier than the time of actual diagnosis. Hence, this approach might be useful as a screening tool for GD and lead to earlier diagnosis and treatment. Furthermore, advanced ML analytics may highlight previously unrecognized features associated with GD, including clinical diagnoses and health-seeking behaviors.
Plain Language Summary
Diagnosing Gaucher disease is difficult, which often leads to late or incorrect diagnoses. As a result, patients may undergo unnecessary tests and treatments and experience health deterioration despite the availability of medications for Gaucher disease. In this study, we used electronic health data to develop machine learning models for early diagnosis of Gaucher disease type 1. Our models, which included known Gaucher disease signs and symptoms, previously unknown clinical features, and administrative codes, significantly outperformed other models and expert opinions, detecting type 1 Gaucher disease 3 years on average before the actual diagnosis. Our models also revealed new features associated with Gaucher disease.
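To make the evaluation setup concrete, here is a minimal sketch in Python of a matched case-control split and AUROC/AUPRC evaluation, using synthetic data; the scikit-learn gradient boosting classifier is a stand-in, as the study's actual features and learning algorithm are not specified in the abstract.

```python
# A minimal sketch of the evaluation setup described above, on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n_cases, ratio, n_features = 264, 13, 50

# Hypothetical feature matrix: cases and matched controls (1:13 ratio).
X_cases = rng.normal(0.5, 1.0, size=(n_cases, n_features))
X_ctrls = rng.normal(0.0, 1.0, size=(n_cases * ratio, n_features))
X = np.vstack([X_cases, X_ctrls])
y = np.concatenate([np.ones(n_cases), np.zeros(n_cases * ratio)])

# Keep each case together with its matched controls in the same partition,
# so the 75/25 split respects the matched design.
groups = np.concatenate([np.arange(n_cases),
                         np.repeat(np.arange(n_cases), ratio)])
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups))

model = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
scores = model.predict_proba(X[test_idx])[:, 1]
print("AUROC:", roc_auc_score(y[test_idx], scores))
print("AUPRC:", average_precision_score(y[test_idx], scores))
```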
{"title":"A machine learning model for early diagnosis of type 1 Gaucher disease using real-life data","authors":"Avraham Tenenbaum , Shoshana Revel-Vilk , Sivan Gazit , Michael Roimi , Aidan Gill , Dafna Gilboa , Ora Paltiel , Orly Manor , Varda Shalev , Gabriel Chodick","doi":"10.1016/j.jclinepi.2024.111517","DOIUrl":"10.1016/j.jclinepi.2024.111517","url":null,"abstract":"<div><h3>Objective</h3><div>The diagnosis of Gaucher disease (GD) presents a major challenge due to the high variability and low specificity of its clinical characteristics, along with limited physician awareness of the disease’s early symptoms. Early and accurate diagnosis is important to enable effective treatment decisions, prevent unnecessary testing, and facilitate genetic counseling. This study aimed to develop a machine learning (ML) model for GD screening and GD early diagnosis based on real-world clinical data using the Maccabi Healthcare Services electronic database, which contains 20 years of longitudinal data on approximately 2.6 million patients.</div></div><div><h3>Study Design and Setting</h3><div>We screened the Maccabi Healthcare Services database for patients with GD between January 1998 and May 2022. Eligible controls were matched by year of birth, sex, and socioeconomic status in a 1:13 ratio. The data were partitioned into 75% training and 25% test sets and trained to predict GD using features obtained from medical and laboratory records. Model performances were evaluated using the area under the receiver operating characteristic curve and the area under the precision-recall curve.</div></div><div><h3>Results</h3><div>We detected 264 confirmed patients with GD to which we matched 3,429 controls. The best model performance (which included known GD signs and symptoms, previously unknown clinical features, and administrative codes) on the test set had an area under the receiver operating characteristic curve = 0.95 ± 0.03 and area under the precision-recall curve = 0.80 ± 0.08, which yielded a median GD identification of 2.78 years earlier than the clinical diagnosis (25th–75th percentile: 1.29–4.53).</div></div><div><h3>Conclusion</h3><div>Using an ML approach on real-world data led to excellent discrimination between GD patients and controls, with the ability to detect GD significantly earlier than the time of actual diagnosis. Hence, this approach might be useful as a screening tool for GD and lead to earlier diagnosis and treatment. Furthermore, advanced ML analytics may highlight previously unrecognized features associated with GD, including clinical diagnoses and health-seeking behaviors.</div></div><div><h3>Plain Language Summary</h3><div>Diagnosing Gaucher disease is difficult, which often leads to late or incorrect diagnoses. As a result, patients may undergo unnecessary tests and treatments and experience health deterioration despite medications availability for Gaucher disease. In this study, we used electronic health data to develop machine learning models for early diagnosis of Gaucher disease type 1. Our models, which included known Gaucher disease signs and symptoms, previously unknown clinical features, and administrative codes, were able to significantly outperform other models and expert opinions, detecting type 1 Gaucher disease 3 years on average before actual diagnosis. 
Our models also revealed new features ","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111517"},"PeriodicalIF":7.3,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Key concepts in rapid reviews: an overview
Pub Date: 2024-09-06 | DOI: 10.1016/j.jclinepi.2024.111518
Declan Devane, Candyce Hamel, Gerald Gartlehner, Barbara Nussbaumer-Streit, Ursula Griebler, Lisa Affengruber, KM Saif-Ur-Rahman, Chantelle Garritty
Background and Objective
Rapid reviews have gained popularity as a pragmatic approach to synthesize evidence in a timely manner to inform decision-making in healthcare. This article provides an overview of the key concepts and methodological considerations in conducting rapid reviews, drawing from a series of recently published guidance papers by the Cochrane Rapid Reviews Methods Group.
Study Design and Setting
We discuss the definition, characteristics, and potential applications of rapid reviews and the trade-offs between speed and rigor. We present a practical example of a rapid review and highlight the methodological considerations outlined in the updated Cochrane guidance, including recommendations for literature searching, study selection, data extraction, risk of bias assessment, synthesis, and assessing the certainty of evidence.
Results
Rapid reviews can be a valuable tool for evidence-based decision-making, but it is essential to understand their limitations and adhere to methodological standards to ensure their validity and reliability.
Conclusion
As the demand for rapid evidence synthesis continues to grow, further research is needed to refine and standardize the methods and reporting of rapid reviews.
Plain Language Summary
Rapid reviews are a type of research method designed to quickly gather and summarize evidence to support decision-making in healthcare. They are particularly useful when timely information is needed, such as during a public health emergency. This article explains the key aspects of how rapid reviews are conducted, based on the latest guidance from experts. Rapid reviews involve several steps, including searching for relevant studies, selecting which studies to include, and carefully examining the quality of the evidence. Although rapid reviews are faster to complete than full systematic reviews, they still follow rigorous processes to ensure that the findings are reliable. This article also provides an example of a rapid review in action, demonstrating how these reviews can be applied in real-world situations. While rapid reviews are a powerful tool for making quick, evidence-based decisions, it is important to be aware of their limitations. Researchers must follow established methods to make sure the results are as accurate and useful as possible. As more people use rapid reviews, ongoing research is needed to improve and standardize how they are done.
{"title":"Key concepts in rapid reviews: an overview","authors":"Declan Devane , Candyce Hamel , Gerald Gartlehner , Barbara Nussbaumer-Streit , Ursula Griebler , Lisa Affengruber , KM Saif-Ur-Rahman , Chantelle Garritty","doi":"10.1016/j.jclinepi.2024.111518","DOIUrl":"10.1016/j.jclinepi.2024.111518","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Rapid reviews have gained popularity as a pragmatic approach to synthesize evidence in a timely manner to inform decision-making in healthcare. This article provides an overview of the key concepts and methodological considerations in conducting rapid reviews, drawing from a series of recently published guidance papers by the Cochrane Rapid Reviews Methods Group.</div></div><div><h3>Study Design and Setting</h3><div>We discuss the definition, characteristics, and potential applications of rapid reviews and the trade-offs between speed and rigor. We present a practical example of a rapid review and highlight the methodological considerations outlined in the updated Cochrane guidance, including recommendations for literature searching, study selection, data extraction, risk of bias assessment, synthesis, and assessing the certainty of evidence.</div></div><div><h3>Results</h3><div>Rapid reviews can be a valuable tool for evidence-based decision-making, but it is essential to understand their limitations and adhere to methodological standards to ensure their validity and reliability.</div></div><div><h3>Conclusion</h3><div>As the demand for rapid evidence synthesis continues to grow, further research is needed to refine and standardize the methods and reporting of rapid reviews.</div></div><div><h3>Plain Language Summary</h3><div>Rapid reviews are a type of research method designed to quickly gather and summarize evidence to support decision-making in healthcare. They are particularly useful when timely information is needed, such as during a public health emergency. This article explains the key aspects of how rapid reviews are conducted, based on the latest guidance from experts. Rapid reviews involve several steps, including searching for relevant studies, selecting which studies to include, and carefully examining the quality of the evidence. Although rapid reviews are faster to complete than full systematic reviews, they still follow rigorous processes to ensure that the findings are reliable. This article also provides an example of a rapid review in action, demonstrating how these reviews can be applied in real-world situations. While rapid reviews are a powerful tool for making quick, evidence-based decisions, it is important to be aware of their limitations. Researchers must follow established methods to make sure the results are as accurate and useful as possible. As more people use rapid reviews, ongoing research is needed to improve and standardize how they are done.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111518"},"PeriodicalIF":7.3,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data quality assessment of interventional trials in public trial databases
Pub Date: 2024-09-05 | DOI: 10.1016/j.jclinepi.2024.111516
Annabelle R. Iken, Rudolf W. Poolman, Maaike G.J. Gademan
Objective
High-quality data entry in clinical trial databases is crucial to the usefulness, validity, and replicability of research findings, as it influences evidence-based medical practice and future research. Our aim is to assess the quality of self-reported data in trial registries and present practical and systematic methods for identifying and evaluating data quality.
Study Design and Setting
We searched ClinicalTrials.gov (CTG) for interventional total knee arthroplasty (TKA) trials registered between 2000 and 2015. We extracted required and optional trial information elements, using CTG's variable definitions. We performed a literature review of data quality reporting, covering frameworks, checklists, and overviews of irregularities in healthcare databases. We identified and assessed four data quality attributes: consistency, accuracy, completeness, and timeliness.
Results
We included 816 interventional TKA trials. Data irregularities varied widely, from 0% to 100%. Inconsistency ranged from 0% to 36%; most often, allocation labeled as nonrandomized was combined with a "single-group" assignment trial design. Inaccuracy ranged from 0% to 100%. Incompleteness ranged from 0% to 61%; 61% of finished TKA trials did not report their outcomes. Regarding irregularities in timeliness, 49% of the trials were registered more than 3 months after their start date.
Conclusion
We found significant variations in the data quality of registered clinical TKA trials. Trial sponsors should be committed to ensuring that the information they provide is reliable, consistent, up-to-date, transparent, and accurate. Users of CTG need to be critical when drawing conclusions based on the registered data. We believe this awareness will support well-informed decisions about published articles and treatment protocols, including replicating and improving trial designs.
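As an illustration of how the four attributes can be screened programmatically, the following sketch flags each irregularity in a small, hypothetical table of registry records; the column names and coding rules are assumptions for illustration, not the authors' actual extraction pipeline.

```python
# A minimal sketch of the four data quality checks on hypothetical records.
import pandas as pd

trials = pd.DataFrame({
    "allocation":      ["Non-Randomized", "Randomized", "Non-Randomized"],
    "assignment":      ["Single Group",   "Parallel",   "Parallel"],
    "enrollment":      [120, -5, 80],              # negative = implausible
    "results_posted":  [False, True, False],
    "status":          ["Completed", "Completed", "Recruiting"],
    "start_date":      pd.to_datetime(["2010-01-01", "2012-06-01", "2014-03-01"]),
    "registered_date": pd.to_datetime(["2010-06-01", "2012-05-20", "2014-02-15"]),
})

# Consistency: nonrandomized allocation paired with single-group assignment.
inconsistent = (trials["allocation"] == "Non-Randomized") & \
               (trials["assignment"] == "Single Group")

# Accuracy: implausible values, e.g., negative enrollment.
inaccurate = trials["enrollment"] < 0

# Completeness: finished trials with no reported outcome.
incomplete = (trials["status"] == "Completed") & ~trials["results_posted"]

# Timeliness: registered more than 3 months (~92 days) after the start date.
late = (trials["registered_date"] - trials["start_date"]).dt.days > 92

for name, flag in [("inconsistent", inconsistent), ("inaccurate", inaccurate),
                   ("incomplete", incomplete), ("late registration", late)]:
    print(f"{name}: {100 * flag.mean():.0f}%")
```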
{"title":"Data quality assessment of interventional trials in public trial databases","authors":"Annabelle R. Iken , Rudolf W. Poolman , Maaike G.J. Gademan","doi":"10.1016/j.jclinepi.2024.111516","DOIUrl":"10.1016/j.jclinepi.2024.111516","url":null,"abstract":"<div><h3>Objective</h3><div>High-quality data entry in clinical trial databases is crucial to the usefulness, validity, and replicability of research findings, as it influences evidence-based medical practice and future research. Our aim is to assess the quality of self-reported data in trial registries and present practical and systematic methods for identifying and evaluating data quality.</div></div><div><h3>Study Design and Setting</h3><div>We searched ClinicalTrials.Gov (CTG) for interventional total knee arthroplasty (TKA) trials between 2000 and 2015. We extracted required and optional trial information elements and used the CTG's variables' definitions. We performed a literature review on data quality reporting on frameworks, checklists, and overviews of irregularities in healthcare databases. We identified and assessed data quality attributes as follows: consistency, accuracy, completeness, and timeliness.</div></div><div><h3>Results</h3><div>We included 816 interventional TKA trials. Data irregularities varied widely: 0%–100%. Inconsistency ranged from 0% to 36%, and most often nonrandomized labeled allocation was combined with a “single-group” assignment trial design. Inaccuracy ranged from 0% to 100%. Incompleteness ranged from 0% to 61%; 61% of finished TKA trials did not report their outcome. With regard to irregularities in timeliness, 49% of the trials were registered more than 3 months after the start date.</div></div><div><h3>Conclusion</h3><div>We found significant variations in the data quality of registered clinical TKA trials. Trial sponsors should be committed to ensuring that the information they provide is reliable, consistent, up-to-date, transparent, and accurate. CTG's users need to be critical when drawing conclusions based on the registered data. We believe this awareness will increase well-informed decisions about published articles and treatment protocols, including replicating and improving trial designs.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111516"},"PeriodicalIF":7.3,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142146795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Item response theory validation of the Oxford knee score and Activity and Participation Questionnaire: a step toward a common metric
Pub Date: 2024-09-04 | DOI: 10.1016/j.jclinepi.2024.111515
Chetan Khatri, Conrad J. Harrison, Deborah MacDonald, Nick Clement, Chloe E.H. Scott, Andrew J. Metcalfe, Jeremy N. Rodrigues
Objectives
The Oxford knee score (OKS) and OKS Activity and Participation Questionnaire (OKS-APQ) are patient-reported outcome measures used to assess people undergoing knee replacement surgery. They have not explicitly been tested for unidimensionality (whether they measure one underlying trait such as 'knee health'). This study applied item response theory (IRT) to improve the validity of the instruments and optimize them for ongoing use.
Study Design and Setting
Participants undergoing primary total knee replacement (TKR) provided preoperative and postoperative responses to the OKS and OKS-APQ. Confirmatory factor analysis (CFA) was performed on the OKS and OKS-APQ separately, and then on both instruments pooled into one. An IRT model was then fitted to the data.
Results
2972 individual response patterns were analyzed. CFA demonstrated that, when combined into one instrument, the OKS and OKS-APQ measure a single latent health trait. A user-friendly, free-to-use web app has been developed to allow clinicians to upload raw data and instantly receive IRT scores.
Conclusions
The OKS and OKS-APQ can be combined and used effectively as a single instrument (producing a single score). For the separate OKS and OKS-APQ, the original items and response options can continue to be posed to patients, and this study has confirmed the suitability of IRT-weighted scoring. Applying IRT to existing responses converts traditional sum scores into continuous measurements with greater granularity, including individual measurement error.
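To illustrate what IRT-weighted scoring does with an existing response pattern, here is a minimal sketch of expected a posteriori (EAP) scoring under a graded response model (a standard choice for ordinal patient-reported outcome items; the abstract does not name the study's exact model), using hypothetical item parameters rather than the study's fitted values.

```python
# A minimal sketch of EAP scoring under a graded response model (GRM),
# with hypothetical item parameters.
import numpy as np

def grm_probs(theta, a, b):
    """P(response = k | theta) for one item with discrimination a and
    ordered thresholds b (length K-1), under the GRM."""
    # Cumulative probabilities P(X >= k), with P(X >= 0) = 1 and P(X >= K) = 0.
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    cum = np.concatenate([[1.0], cum, [0.0]])
    return cum[:-1] - cum[1:]   # category probabilities

def eap_score(responses, items, grid=np.linspace(-4, 4, 161)):
    """EAP estimate of theta for one response pattern (0-indexed categories)."""
    prior = np.exp(-0.5 * grid**2)               # standard normal prior
    like = np.ones_like(grid)
    for resp, (a, b) in zip(responses, items):
        like *= np.array([grm_probs(t, a, b)[resp] for t in grid])
    post = prior * like
    return np.sum(grid * post) / np.sum(post)

# Three hypothetical 5-category items (OKS items have five response options).
items = [(1.5, [-2.0, -1.0, 0.5, 1.5]),
         (1.0, [-1.5, -0.5, 0.8, 2.0]),
         (2.0, [-2.5, -1.2, 0.0, 1.2])]
print("theta =", round(eap_score([3, 2, 4], items), 3))
```

Unlike a sum score, the EAP estimate is a continuous measurement on the latent trait scale, and the posterior also yields an individual standard error.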
{"title":"Item response theory validation of the Oxford knee score and Activity and Participation Questionnaire: a step toward a common metric","authors":"Chetan Khatri , Conrad J. Harrison , Deborah MacDonald , Nick Clement , Chloe E.H. Scott , Andrew J. Metcalfe , Jeremy N. Rodrigues","doi":"10.1016/j.jclinepi.2024.111515","DOIUrl":"10.1016/j.jclinepi.2024.111515","url":null,"abstract":"<div><h3>Objectives</h3><div>The Oxford knee score (OKS) and OKS Activity and Participation Questionnaire (OKS-APQ) are patient-reported outcome measures used to assess people undergoing knee replacement surgery. They have not explicitly been tested for unidimensionality (whether they measure one underlying trait such as ‘knee health’). This study applied item response theory (IRT) to improve the validity of the instruments to optimize for ongoing use.</div></div><div><h3>Study Design and Setting</h3><div>Participants undergoing primary total knee replacement (TKR) provided preoperative and postoperative responses for OKS and OKS-APQ. Confirmatory factor analysis (CFA) were performed on the OKS and OKS-APQ separately and then on both when pooled into one. An IRT model was fitted to the data.</div></div><div><h3>Results</h3><div>2972 individual response patterns were analyzed. CFA demonstrated that when combining OKS and OKS-APQ as one instrument, they measure one latent health trait. A user-friendly, free-to-use, web app has been developed to allow clinicians to upload raw data and instantly receive IRT scores.</div></div><div><h3>Conclusions</h3><div>The OKS and OKS-APQ can be combined to use effectively as a single instrument (producing a single score). For the separate OKS and OKS-APQ the original items and response options can continue to be posed to patients, and this study has confirmed the suitability of IRT-weighted scoring. Applying IRT to existing responses converts traditional sum scores into continuous measurements with greater granularity, including individual measurement error.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111515"},"PeriodicalIF":7.3,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142146797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Two assumptions of the prior event rate ratio approach for controlling confounding can be evaluated by self-controlled case series and dynamic random intercept modeling
Pub Date: 2024-09-02 | DOI: 10.1016/j.jclinepi.2024.111511
Yin Bun Cheung, Xiangmei Ma, Grant Mackenzie
Objectives
The prior event rate ratio (PERR) is a recently developed approach for controlling confounding by measured and unmeasured covariates in real-world evidence research and observational studies. Despite its rising popularity in studies of the safety and effectiveness of biopharmaceutical products, there is no guidance on how to empirically evaluate its model assumptions. We propose two methods to evaluate two of the assumptions required by the PERR: specifically, that the occurrence of outcome events does not alter the likelihood of receiving treatment, and that the earlier event rate does not affect the later event rate.
Study Design and Setting
We propose using self-controlled case series (SCCS) and dynamic random intercept modeling (DRIM), respectively, to evaluate the two aforementioned assumptions. A nonmathematical introduction of the methods and their application to evaluate the assumptions are provided. We illustrate the evaluation with secondary analysis of deidentified data on pneumococcal vaccination and clinical pneumonia in The Gambia, West Africa.
Results
SCCS analysis of data on 12,901 vaccinated Gambian infants did not reject the assumption that clinical pneumonia episodes had no influence on the likelihood of pneumococcal vaccination. DRIM analysis of 14,325 infants with a total of 1,719 episodes of clinical pneumonia did not reject the assumption that earlier episodes of clinical pneumonia had no influence on the later incidence of the disease.
Conclusion
The SCCS and DRIM methods can facilitate appropriate use of the PERR approach to control confounding.
Plain Language Summary
The prior event rate ratio is a promising approach for the analysis of real-world data and observational studies. We propose two statistical methods to evaluate the validity of two assumptions on which it is based. These methods can facilitate appropriate use of the prior event rate ratio.
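For readers unfamiliar with the approach, the PERR point estimate divides the post-treatment rate ratio between treated and comparison groups by the corresponding prior-period rate ratio, so that time-fixed confounding acting equally in both periods cancels out. Below is a minimal sketch with hypothetical event counts and person-time; the SCCS and DRIM diagnostics proposed by the authors require dedicated models and are not reproduced here.

```python
# A minimal sketch of the PERR point estimate with hypothetical data.
def rate(events, person_years):
    return events / person_years

# Hypothetical (events, person-years) before and after treatment start.
treated_prior, treated_post = (30, 1000.0), (20, 1000.0)
control_prior, control_post = (25, 1000.0), (35, 1000.0)

# Rate ratios, treated vs. comparison, in each period.
rr_prior = rate(*treated_prior) / rate(*control_prior)
rr_post  = rate(*treated_post)  / rate(*control_post)

# PERR = post-period rate ratio / prior-period rate ratio.
perr = rr_post / rr_prior
print(f"RR_prior={rr_prior:.2f}, RR_post={rr_post:.2f}, PERR={perr:.2f}")
```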
{"title":"Two assumptions of the prior event rate ratio approach for controlling confounding can be evaluated by self-controlled case series and dynamic random intercept modeling","authors":"Yin Bun Cheung , Xiangmei Ma , Grant Mackenzie","doi":"10.1016/j.jclinepi.2024.111511","DOIUrl":"10.1016/j.jclinepi.2024.111511","url":null,"abstract":"<div><h3>Objectives</h3><div>The prior event rate ratio (PERR) is a recently developed approach for controlling confounding by measured and unmeasured covariates in real-world evidence research and observational studies. Despite its rising popularity in studies of safety and effectiveness of biopharmaceutical products, there is no guidance on how to empirically evaluate its model assumptions. We propose two methods to evaluate two of the assumptions required by the PERR, specifically, the assumptions that occurrence of outcome events does not alter the likelihood of receiving treatment, and that earlier event rate does not affect later event rate.</div></div><div><h3>Study Design and Setting</h3><div>We propose using self-controlled case series (SCCS) and dynamic random intercept modeling (DRIM), respectively, to evaluate the two aforementioned assumptions. A nonmathematical introduction of the methods and their application to evaluate the assumptions are provided. We illustrate the evaluation with secondary analysis of deidentified data on pneumococcal vaccination and clinical pneumonia in The Gambia, West Africa.</div></div><div><h3>Results</h3><div>SCCS analysis of data on 12,901 vaccinated Gambian infants did not reject the assumption of clinical pneumonia episodes had no influence on the likelihood of pneumococcal vaccination. DRIM analysis of 14,325 infants with a total of 1719 episodes of clinical pneumonia did not reject the assumption of earlier episodes of clinical pneumonia had no influence on later incidence of the disease.</div></div><div><h3>Conclusion</h3><div>The SCCS and DRIM methods can facilitate appropriate use of the PERR approach to control confounding.</div></div><div><h3>Plain Language Summary</h3><div>The prior event rate ratio is a promising approach for analysis of real-world data and observational studies. We propose two statistical methods to evaluate the validity of two assumptions it is based on. They can facilitate appropriate use of the prior even rate ratio.</div></div>","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"175 ","pages":"Article 111511"},"PeriodicalIF":7.3,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895435624002671/pdfft?md5=9a4a1a8e00257fa60f896cb2f2a18d2b&pid=1-s2.0-S0895435624002671-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guideline for reporting systematic reviews of outcome measurement instruments (OMIs): PRISMA-COSMIN for OMIs 2024
Pub Date: 2024-09-01 | DOI: 10.1016/j.jclinepi.2024.111422
Background and Objective
Although comprehensive and widespread guidelines on how to conduct systematic reviews of outcome measurement instruments (OMIs) exist, for example from the COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) initiative, key information is often missing in published reports. This article describes the development of an extension of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guideline: PRISMA-COSMIN for OMIs 2024.
Methods
The development process followed the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) guidelines and included a literature search, expert consultations, a Delphi study, a hybrid workgroup meeting, pilot testing, and an end-of-project meeting, with integrated patient/public involvement.
Results
From the literature and expert consultation, 49 potentially relevant reporting items were identified. Round 1 of the Delphi study was completed by 103 panelists, whereas rounds 2 and 3 were completed by 78 panelists. After 3 rounds, agreement (≥67%) on inclusion and wording was reached for 44 items. Eleven items without consensus on inclusion and/or wording were discussed at a workgroup meeting attended by 24 participants. Agreement was reached on the inclusion and wording of 10 items and the deletion of 1 item. Pilot testing with 65 authors of OMI systematic reviews further improved the guideline through minor changes in wording and structure, finalized during the end-of-project meeting. The final checklist to facilitate the reporting of full systematic review reports contains 54 (sub)items addressing the review's title, abstract, plain language summary, open science, introduction, methods, results, and discussion. Thirteen items pertaining to the title and abstract are also included in a separate abstract checklist, guiding authors in reporting, for example, conference abstracts.
Conclusion
PRISMA-COSMIN for OMIs 2024 consists of two checklists (full reports; abstracts), their corresponding explanation and elaboration documents detailing the rationale and examples for each item, and a data flow diagram. PRISMA-COSMIN for OMIs 2024 can improve the reporting of systematic reviews of OMIs, fostering their reproducibility and allowing end-users to appraise the quality of OMIs and select the most appropriate OMI for a specific application.
Note
This paper was jointly developed by the Journal of Clinical Epidemiology, Quality of Life Research, Journal of Patient Reported Outcomes, and Health and Quality of Life Outcomes, and jointly published by Elsevier Inc., Springer Nature Switzerland AG, and BioMed Central Ltd., part of Springer Nature. The articles are identical except for minor stylistic and spelling differences in keeping with each journal's style. Either citation can be used when citing this article.
{"title":"Guideline for reporting systematic reviews of outcome measurement instruments (OMIs): PRISMA-COSMIN for OMIs 2024","authors":"","doi":"10.1016/j.jclinepi.2024.111422","DOIUrl":"10.1016/j.jclinepi.2024.111422","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Although comprehensive and widespread guidelines on how to conduct systematic reviews of outcome measurement instruments (OMIs) exist, for example from the COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) initiative, key information is often missing in published reports. This article describes the development of an extension of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guideline: PRISMA-COSMIN for OMIs 2024.</div></div><div><h3>Methods</h3><div>The development process followed the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) guidelines and included a literature search, expert consultations, a Delphi study, a hybrid workgroup meeting, pilot testing, and an end-of-project meeting, with integrated patient/public involvement.</div></div><div><h3>Results</h3><div>From the literature and expert consultation, 49 potentially relevant reporting items were identified. Round 1 of the Delphi study was completed by 103 panelists, whereas round 2 and 3 were completed by 78 panelists. After 3 rounds, agreement (≥67%) on inclusion and wording was reached for 44 items. Eleven items without consensus for inclusion and/or wording were discussed at a workgroup meeting attended by 24 participants. Agreement was reached for the inclusion and wording of 10 items, and the deletion of 1 item. Pilot testing with 65 authors of OMI systematic reviews further improved the guideline through minor changes in wording and structure, finalized during the end-of-project meeting. The final checklist to facilitate the reporting of full systematic review reports contains 54 (sub)items addressing the review's title, abstract, plain language summary, open science, introduction, methods, results, and discussion. Thirteen items pertaining to the title and abstract are also included in a separate abstract checklist, guiding authors in reporting for example conference abstracts.</div></div><div><h3>Conclusion</h3><div>PRISMA-COSMIN for OMIs 2024 consists of two checklists (full reports; abstracts), their corresponding explanation and elaboration documents detailing the rationale and examples for each item, and a data flow diagram. PRISMA-COSMIN for OMIs 2024 can improve the reporting of systematic reviews of OMIs, fostering their reproducibility and allowing end-users to appraise the quality of OMIs and select the most appropriate OMI for a specific application.</div></div><div><h3>Note</h3><div>This paper was jointly developed by Journal of Clinical Epidemiology, Quality of Life Research, Journal of Patient Reported Outcomes, Health and Quality of Life Outcomes and jointly published by Elsevier Inc, Springer Nature Switzerland AG, and BioMed Central Ltd., part of Springer Nature. The articles are identical except for minor stylistic and spelling differences in keeping with each journal’s style. 
Either citation can be used when citing this artic","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"173 ","pages":"Article 111422"},"PeriodicalIF":7.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S089543562400177X/pdfft?md5=aed09d1c3763f51554cb4643f1bdf357&pid=1-s2.0-S089543562400177X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141288876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To use or not use Sobel's test for hypothesis testing of indirect effects and confidence interval estimation: author's reply
Pub Date: 2024-09-01 | DOI: 10.1016/j.jclinepi.2024.111462
Jos Twisk
{"title":"To use or not use Sobel’s test for hypothesis testing of indirect effects and confidence interval estimation: author’s reply","authors":"Jos Twisk","doi":"10.1016/j.jclinepi.2024.111462","DOIUrl":"10.1016/j.jclinepi.2024.111462","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"173 ","pages":"Article 111462"},"PeriodicalIF":7.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141724955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Farewell and thanks to Tony and Inday Dans
Pub Date: 2024-09-01 | DOI: 10.1016/j.jclinepi.2024.111529
David Tovey, Andrea C. Tricco, André Knottnerus, Ludo van Amelsvoort, Peter Tugwell, Jessie McGowan
{"title":"Farewell and thanks to Tony and Inday Dans","authors":"David Tovey , Andrea C. Tricco , André Knottnerus , Ludo van Amelsvoort , Peter Tugwell , Jessie McGowan","doi":"10.1016/j.jclinepi.2024.111529","DOIUrl":"10.1016/j.jclinepi.2024.111529","url":null,"abstract":"","PeriodicalId":51079,"journal":{"name":"Journal of Clinical Epidemiology","volume":"173 ","pages":"Article 111529"},"PeriodicalIF":7.3,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142311630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}