Pub Date: 2025-01-24  DOI: 10.1186/s12874-025-02475-8
Ahtisham Younas, Sergi Fàbregues, Sarah Munce, John W Creswell
Background: The generation of metainferences is a core and significant feature of mixed methods research (MMR). In recent years, there has been some discussion in the literature about criteria for appraising the quality of metainferences, the processes for generating them, and the critical role that assessing the "fit" of quantitative and qualitative data and results plays in this generative process. However, little is known about the types of insights that emerge from generating metainferences. To address this gap, this paper conceptualizes and presents the types and forms of metainferences that can be generated in MMR studies to guide future research projects.
Methods: A critical review of literature sources was conducted, including peer-reviewed articles, book chapters, and research reports. We performed a non-systematic literature search in the Scopus, Web of Science, Ovid, and Google Scholar databases using general phrases such as "inferences in research", "metainferences in mixed methods", "inferences in mixed methods research", and "inference types". Additional searches included key methodological journals, such as the Journal of Mixed Methods Research, International Journal of Multiple Research Approaches, Methodological Innovations, and the Sage Research Methods database, to locate books, chapters, and peer-reviewed articles that discussed inferences and metainferences.
Results: We propose two broad types of metainferences and five subtypes. The broad metainferences are global and specific, and the subtypes include relational, predictive, causal, comparative, and elaborative metainferences. Furthermore, we provide examples of each type of metainference from published mixed methods empirical studies.
Conclusions: This paper contributes to the field of mixed methods research by expanding the knowledge about metainferences and offering a practical framework of types of metainferences for mixed methods researchers and educators. The proposed framework offers an approach to identifying and recognizing types of metainferences in mixed methods research and serves as an opportunity for future discussion on the nature, insights, and characteristic features of metainferences within this methodology. By proposing a foundation for metainferences, our framework advances this critical area of mixed methods research.
{"title":"Framework for types of metainferences in mixed methods research.","authors":"Ahtisham Younas, Sergi Fàbregues, Sarah Munce, John W Creswell","doi":"10.1186/s12874-025-02475-8","DOIUrl":"10.1186/s12874-025-02475-8","url":null,"abstract":"<p><strong>Background: </strong>The generation of metainferences is a core and significant feature of mixed methods research. In recent years, there has been some discussion in the literature about criteria for appraising the quality of metainferences, the processes for generating them, and the critical role that assessing the \"fit\" of quantitative and qualitative data and results plays in this generative process. However, little is known about the types of insights that emerge from generating metainferences. To address this gap, this paper conceptualize and present the types and forms of metainferences that can be generated in MMR studies for guiding future research projects.</p><p><strong>Methods: </strong>A critical review of literature sources was conducted, including peer-reviewed articles, book chapters, and research reports. We performed a non-systematic literature search in the Scopus, Web of Science, Ovid, and Google Scholar databases using general phrases such as \"inferences in research\", \"metainferences in mixed methods\", \"inferences in mixed methods research\", and \"inference types\". Additional searches included key methodological journals, such as the Journal of Mixed Methods Research, International Journal of Multiple Research Approaches, Methodological Innovations, and the Sage Research Methods database, to locate books, chapters, and peer-reviewed articles that discussed inferences and metainferences.</p><p><strong>Results: </strong>We propose two broad types of metainferences and five sub-types. The broad metainferences are global and specific, and the subtypes include relational, predictive, causal, comparative, and elaborative metainferences. 
Furthermore, we provide examples of each type of metainference from published mixed methods empirical studies.</p><p><strong>Conclusions: </strong>This paper contributes to the field of mixed methods research by expanding the knowledge about metainferences and offering a practical framework of types of metainferences for mixed methods researchers and educators. The proposed framework offers an approach to identifying and recognizing types of metainferences in mixed methods research and serves as an opportunity for future discussion on the nature, insights, and characteristic features of metainferences within this methodology. By proposing a foundation for metainferences, our framework advances this critical area of mixed methods research.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"18"},"PeriodicalIF":3.9,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758751/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143036861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-22  DOI: 10.1186/s12874-025-02465-w
Amy M Crisp, M Elizabeth Halloran, Matt D T Hitchings, Ira M Longini, Natalie E Dean
Background: Cluster randomized trials, which often enroll a small number of clusters, can benefit from constrained randomization, selecting a final randomization scheme from a set of known, balanced randomizations. Previous literature has addressed the suitability of adjusting the analysis for the covariates that were balanced in the design phase when the outcome is continuous or binary. Here we extended this work to time-to-event outcomes by comparing two model-based tests and a newly derived permutation test. A current cluster randomized trial of vector control for the prevention of mosquito-borne disease in children in Mexico is used as a motivating example.
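The constrained-randomization idea described above (select the final scheme at random from a set of well-balanced candidate allocations) can be sketched in a few lines. The cluster count, covariate values, and 10% balance cutoff below are illustrative assumptions, not the trial's actual design:

```python
import itertools
import random

random.seed(1)

# Hypothetical cluster-level covariate (e.g., baseline incidence) for 8 clusters.
baseline = [4.2, 3.9, 5.1, 4.8, 3.5, 5.4, 4.0, 4.6]
n_clusters = len(baseline)
n_treated = n_clusters // 2

def mean(xs):
    return sum(xs) / len(xs)

# Enumerate every allocation of 4 treated clusters out of 8 and score each
# by the absolute difference in covariate means between the two arms.
candidates = []
for treated in itertools.combinations(range(n_clusters), n_treated):
    arm = [baseline[i] for i in treated]
    ctrl = [baseline[i] for i in range(n_clusters) if i not in treated]
    candidates.append((abs(mean(arm) - mean(ctrl)), treated))

# Constrain to the best-balanced 10% of schemes, then randomize among them.
candidates.sort(key=lambda c: c[0])
constrained_set = candidates[: max(1, len(candidates) // 10)]
final_scheme = random.choice(constrained_set)[1]
print(sorted(final_scheme))
```

With 8 clusters there are only 70 candidate allocations, so full enumeration is feasible; larger trials typically sample candidate schemes instead.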
Methods: We assessed type I error rates and power between simple randomization and constrained randomization using both prognostic and non-prognostic covariates via a simulation study. We compared the performance of a semi-parametric Cox proportional hazards model with robust variance, a mixed effects Cox model, and a permutation test utilizing deviance residuals.
Results: The permutation test generally maintained the nominal type I error rate, with the exception of the unadjusted analysis for constrained randomization, and also provided power comparable to the two Cox model-based tests. The model-based tests had inflated type I error when there were very few clusters per trial arm. All three methods performed well when there were 25 clusters per trial arm, as in the motivating example.
Conclusion: For time-to-event outcomes, covariate-constrained randomization was shown to improve power relative to simple randomization. The permutation test developed here was more robust to inflation of type I error compared to model-based tests. Gaining power by adjusting for covariates in the analysis phase was largely dependent on the number of clusters per trial arm.
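As an illustration of the general permutation logic (arm labels permuted at the cluster level, the unit of randomization), here is a minimal sketch; the summary values stand in for cluster-level residuals and are not the deviance-residual statistic derived in the paper:

```python
import random
from statistics import mean

rng = random.Random(0)

# Hypothetical cluster-level summaries (e.g., mean residuals per cluster from
# a null model); the authors' actual statistic is built from Cox deviance residuals.
control = [rng.gauss(0.0, 1.0) for _ in range(10)]
treated = [rng.gauss(-0.8, 1.0) for _ in range(10)]

def cluster_permutation_test(a, b, n_perm=5000, rng=rng):
    """Two-sided permutation test on cluster-level summaries.

    Arm labels are permuted across clusters, and the observed absolute
    difference in arm means is compared against the permutation distribution.
    """
    pooled = a + b
    observed = abs(mean(a) - mean(b))
    extreme = 0
    for _ in range(n_perm):
        perm = rng.sample(pooled, len(pooled))  # random relabeling
        diff = abs(mean(perm[: len(a)]) - mean(perm[len(a):]))
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)  # add-one correction

p_value = cluster_permutation_test(control, treated)
print(p_value)
```

A constrained-randomization analysis would permute only within the constrained set of allocation schemes rather than over all relabelings.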
{"title":"Analysis methods for covariate-constrained cluster randomized trials with time-to-event outcomes.","authors":"Amy M Crisp, M Elizabeth Halloran, Matt D T Hitchings, Ira M Longini, Natalie E Dean","doi":"10.1186/s12874-025-02465-w","DOIUrl":"10.1186/s12874-025-02465-w","url":null,"abstract":"<p><strong>Background: </strong>Cluster randomized trials, which often enroll a small number of clusters, can benefit from constrained randomization, selecting a final randomization scheme from a set of known, balanced randomizations. Previous literature has addressed the suitability of adjusting the analysis for the covariates that were balanced in the design phase when the outcome is continuous or binary. Here we extended this work to time-to-event outcomes by comparing two model-based tests and a newly derived permutation test. A current cluster randomized trial of vector control for the prevention of mosquito-borne disease in children in Mexico is used as a motivating example.</p><p><strong>Methods: </strong>We assessed type I error rates and power between simple randomization and constrained randomization using both prognostic and non-prognostic covariates via a simulation study. We compared the performance of a semi-parametric Cox proportional hazards model with robust variance, a mixed effects Cox model, and a permutation test utilizing deviance residuals.</p><p><strong>Results: </strong>The permutation test generally maintained nominal type I error-with the exception of the unadjusted analysis for constrained randomization-and also provided power comparable to the two Cox model-based tests. The model-based tests had inflated type I error when there were very few clusters per trial arm. All three methods performed well when there were 25 clusters per trial arm, as in the case of the motivating example.</p><p><strong>Conclusion: </strong>For time-to-event outcomes, covariate-constrained randomization was shown to improve power relative to simple randomization. 
The permutation test developed here was more robust to inflation of type I error compared to model-based tests. Gaining power by adjusting for covariates in the analysis phase was largely dependent on the number of clusters per trial arm.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"16"},"PeriodicalIF":3.9,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11753003/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143022122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Logistic regression is a useful statistical technique commonly used in many fields like healthcare, marketing, or finance to generate insights from binary outcomes (e.g., sick vs. not sick). However, when applying logistic regression to complex survey data, which includes complex sampling designs, specific methodological issues are often overlooked.
Methods: This systematic review searched the PubMed and ScienceDirect databases from January 2015 to December 2021, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines and focusing primarily on the Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS). A total of 810 articles met the inclusion criteria and were included in the analysis. The review considered multiple methodological problems in the application of logistic regression, including model adequacy assessment, handling of dependent observations, use of the complex survey design, and treatment of missing values and outliers.
Results: Among the selected articles, the DHS database was used the most (96%), with MICS accounting for only 3% and both DHS and MICS for 1%. Only 19.7% of the studies employed multilevel mixed-effects logistic regression to account for data dependencies. Model validation techniques were not reported in 94.8% of the studies, with limited use of the bootstrap, jackknife, and other resampling methods. Moreover, sample weights, primary sampling units (PSUs), and strata variables were used together in 40.4% of the articles, and 41.7% of the studies did not use any of these variables, which could have produced biased results. Goodness-of-fit assessments were not mentioned in 75.3% of the articles; the Hosmer-Lemeshow and likelihood ratio tests were the most common among those reported. Furthermore, 95.8% of studies did not mention outliers, only 41.0% corrected for missing information, and only 2.7% applied imputation techniques.
Conclusions: This systematic review highlights important gaps in the use of logistic regression with complex survey data, such as overlooking data dependencies, survey design, and proper validation techniques, along with neglecting outliers, missing data, and goodness-of-fit assessments, all of which point to the need for clearer methodological standards and more thorough reporting to improve the reliability of results. Future research should focus on consistently following these standards to ensure stronger and more dependable findings.
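To make the weighting issue concrete, the sketch below fits a logistic model by maximizing a weighted log-likelihood via gradient ascent on simulated data with made-up design weights. As the comments note, weights alone only correct the point estimates; design-based standard errors additionally require the PSU and strata variables (e.g., via Taylor linearization or replicate weights):

```python
import math
import random

rng = random.Random(42)

# Hypothetical survey data: binary outcome, one predictor, and design weights.
n = 400
x = [rng.gauss(0.0, 1.0) for _ in range(n)]
y = [1 if rng.random() < 1 / (1 + math.exp(-(0.5 + 1.2 * xi))) else 0 for xi in x]
w = [rng.uniform(0.5, 2.0) for _ in range(n)]  # stand-in for sampling weights

def weighted_logistic_fit(x, y, w, lr=0.1, n_iter=2000):
    """Fit intercept and slope by maximizing the weighted log-likelihood.

    Weighting corrects point estimates for unequal selection probabilities;
    it does NOT by itself yield design-based variances.
    """
    b0, b1 = 0.0, 0.0
    total_w = sum(w)
    for _ in range(n_iter):
        g0 = g1 = 0.0
        for xi, yi, wi in zip(x, y, w):
            p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
            g0 += wi * (yi - p)           # weighted score for the intercept
            g1 += wi * (yi - p) * xi      # weighted score for the slope
        b0 += lr * g0 / total_w
        b1 += lr * g1 / total_w
    return b0, b1

b0, b1 = weighted_logistic_fit(x, y, w)
print(round(b0, 2), round(b1, 2))
```

In practice one would use survey-aware software (e.g., R's survey package) rather than hand-rolled fitting; the sketch only shows where the weights enter the estimation.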
{"title":"The proper application of logistic regression model in complex survey data: a systematic review.","authors":"Devjit Dey, Md Samio Haque, Md Mojahedul Islam, Umme Iffat Aishi, Sajida Sultana Shammy, Md Sabbir Ahmed Mayen, Syed Toukir Ahmed Noor, Md Jamal Uddin","doi":"10.1186/s12874-024-02454-5","DOIUrl":"10.1186/s12874-024-02454-5","url":null,"abstract":"<p><strong>Background: </strong>Logistic regression is a useful statistical technique commonly used in many fields like healthcare, marketing, or finance to generate insights from binary outcomes (e.g., sick vs. not sick). However, when applying logistic regression to complex survey data, which includes complex sampling designs, specific methodological issues are often overlooked.</p><p><strong>Methods: </strong>The systematic review extensively searched the PubMed and ScienceDirect databases from January 2015 to December 2021, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines, focusing primarily on the Demographic and Health Surveys (DHS) and Multiple Indicator Cluster Surveys (MICS). 810 articles met the inclusion criteria and were included in the analysis. When discussing logistic regression, the review considered multiple methodological problems such as the model adequacy assessment, handling dependence of observations, utilization of complex survey design, dealing with missing values, outliers, and more.</p><p><strong>Results: </strong>Among the selected articles, the DHS database was used the most (96%), with MICS accounting for only 3%, and both DHS and MICS accounting for 1%. Of these, it was found that only 19.7% of the studies employed multilevel mixed-effects logistic regression to account for data dependencies. Model validation techniques were not reported in 94.8% of the studies with limited uses of the bootstrap, jackknife, and other resampling methods. 
Moreover, sample weights, PSUs, and strata variables were used together in 40.4% of the articles, and 41.7% of the studies did not use any of these variables, which could have produced biased results. Goodness-of-fit assessments were not mentioned in 75.3% of the articles, and the Hosmer-Lemeshow and likelihood ratio test were the most common among those reported. Furthermore, 95.8% of studies did not mention outliers, and only 41.0% of studies corrected for missing information, while only 2.7% applied imputation techniques.</p><p><strong>Conclusions: </strong>This systematic review highlights important gaps in the use of logistic regression with complex survey data, such as overlooking data dependencies, survey design, and proper validation techniques, along with neglecting outliers, missing data, and goodness-of-fit assessments, all of which point to the need for clearer methodological standards and more thorough reporting to improve the reliability of results. Future research should focus on consistently following these standards to ensure stronger and more dependable findings.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"15"},"PeriodicalIF":3.9,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11752662/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143022127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-21  DOI: 10.1186/s12874-025-02466-9
Katie Chadd, Anna Caute, Anna Pettican, Pam Enderby
Background: Vast volumes of routinely collected data (RCD) about patients are collated by health professionals. Leveraging these data, a form of real-world data, can be valuable for quality improvement and for contributing to the evidence base that informs practice. Examining routine data may be especially useful for investigating issues related to social justice, such as health inequities. However, little is known about the extent to which RCD are utilised in health fields and published for wider dissemination.
Objectives: The objective of this scoping review is to document peer-reviewed published research in allied health fields that utilises RCD and to evaluate the extent to which these studies have addressed issues pertaining to social justice.
Methods: An enhanced version of Arksey and O'Malley's framework, put forth by Westphalm et al., guided the scoping review. A comprehensive literature search of three databases identified 1584 articles. Application of the inclusion and exclusion criteria was piloted on 5% of the papers by three researchers. All titles and abstracts were screened independently by two team members, as were full texts. A data charting framework, developed to address the research questions, was piloted by three researchers, with data extraction completed by the lead researcher. A sample of papers was independently charted by a second researcher for reliability checking.
Results: One hundred and ninety papers were included in the review. The literature was diverse in terms of the professions that were represented: physiotherapy (33.7%) and psychology/mental health professions (15.8%) predominated. Many studies were first authored by clinicians (44.2%), often with clinical-academic teams. Some (33.25%) directly referenced the use of their studies to examine translation of research to practice. Few studies (14.2%) specifically tackled issues pertaining to social justice, though many collected variables that could have been utilised for this purpose.
Conclusion: Studies operationalising RCD can meaningfully address research to practice gaps and provide new evidence about issues related to social justice. However, RCD is underutilised for these purposes. Given that vast volumes of relevant data are routinely collected, more needs to be done to leverage it, which would be supported by greater acknowledgement of the value of RCD studies.
{"title":"Operationalising routinely collected patient data in research to further the pursuit of social justice and health equity: a team-based scoping review.","authors":"Katie Chadd, Anna Caute, Anna Pettican, Pam Enderby","doi":"10.1186/s12874-025-02466-9","DOIUrl":"10.1186/s12874-025-02466-9","url":null,"abstract":"<p><strong>Background: </strong>Vast volumes of routinely collected data (RCD) about patients are collated by health professionals. Leveraging this data - a form of real-world data - can be valuable for quality improvement and contributing to the evidence-base to inform practice. Examining routine data may be especially useful for examining issues related to social justice such as health inequities. However, little is known about the extent to which RCD is utilised in health fields and published for wider dissemination.</p><p><strong>Objectives: </strong>The objective of this scoping review is to document the peer-reviewed published research in allied health fields which utilise RCD and evaluate the extent to which these studies have addressed issues pertaining to social justice.</p><p><strong>Methods: </strong>An enhanced version of the Arksey and O'Malley's framework, put forth by Westphalm et al. guided the scoping review. A comprehensive literature search of three databases identified 1584 articles. Application of inclusion and exclusion criteria was piloted on 5% of the papers by three researchers. All titles and abstracts were screened independently by 2 team members, as were full texts. A data charting framework, developed to address the research questions, was piloted by three researchers with data extraction being completed by the lead researcher. A sample of papers were independently charted by a second researcher for reliability checking.</p><p><strong>Results: </strong>One hundred and ninety papers were included in the review. 
The literature was diverse in terms of the professions that were represented: physiotherapy (33.7%) and psychology/mental health professions (15.8%) predominated. Many studies were first authored by clinicians (44.2%), often with clinical-academic teams. Some (33.25%) directly referenced the use of their studies to examine translation of research to practice. Few studies (14.2%) specifically tackled issues pertaining to social justice, though many collected variables that could have been utilised for this purpose.</p><p><strong>Conclusion: </strong>Studies operationalising RCD can meaningfully address research to practice gaps and provide new evidence about issues related to social justice. However, RCD is underutilised for these purposes. Given that vast volumes of relevant data are routinely collected, more needs to be done to leverage it, which would be supported by greater acknowledgement of the value of RCD studies.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"14"},"PeriodicalIF":3.9,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11749527/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143000027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-17  DOI: 10.1186/s12874-024-02453-6
Elias Laurin Meyer, Tobias Mielke, Marta Bofill Roig, Michaela Maria Freitag, Peter Jacko, Pavla Krotka, Peter Mesenbrink, Tom Parke, Sonja Zehetmayer, Dario Zocholl, Franz König
Background: Platform trials are innovative clinical trials governed by a master protocol that allows for the evaluation of multiple investigational treatments that enter and leave the trial over time. Interest in platform trials has been steadily increasing over the last decade. Due to their highly adaptive nature, platform trials provide sufficient flexibility to customize important trial design aspects to the requirements of both the specific disease under investigation and the different stakeholders. The flexibility of platform trials, however, comes with complexities when designing such trials. In the past, we reviewed existing software for simulating clinical trials and found that none of them were suitable for simulating platform trials as they do not accommodate the design features and flexibility inherent to platform trials, such as staggered entry of treatments over time.
Results: We argued that simulation studies are crucial for the design of efficient platform trials. We developed and proposed an iterative, simulation-guided "vanilla and sprinkles" framework (i.e., progressing from a basic to a more complex design) for designing platform trials. We addressed the functional limitations of existing software, as well as the unavailability of its source code, by developing a suite of open-source software for simulating platform trials based on the R programming language. For example, the newly developed software supports simulating staggered entry of treatments throughout the trial, choosing different options for control data sharing, and specifying different platform stopping rules and platform-level operating characteristics. The software is available under open-source licensing so that users can access and modify the code. When independent teams used two of these packages to implement the same platform design, they obtained the same results.
Conclusion: We provide a framework, as well as open-source software for the design and simulation of platform trials. The software tools provide the flexibility necessary to capture the complexity of platform trials.
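A toy version of the staggered-entry feature such simulators must support might look like the following. The arm names, entry periods, and effect sizes are invented for illustration, and a real simulator (such as the R packages described above) would add stopping rules, control-sharing options, and proper inference:

```python
import random
from statistics import mean

rng = random.Random(7)

# One simulation run of a platform trial: treatments enter the platform at
# different periods, and each is compared against concurrent control data only.
n_per_period = 100
n_periods = 4
entry_period = {"T1": 0, "T2": 2}   # hypothetical arms and their entry times
true_effect = {"T1": 0.5, "T2": 0.0}

# Control participants are recruited in every period of the platform.
control = {t: [rng.gauss(0.0, 1.0) for _ in range(n_per_period)]
           for t in range(n_periods)}

estimates = {}
for arm, start in entry_period.items():
    arm_data, concurrent_controls = [], []
    for t in range(start, n_periods):
        arm_data += [rng.gauss(true_effect[arm], 1.0) for _ in range(n_per_period)]
        concurrent_controls += control[t]  # share only concurrent controls
    estimates[arm] = mean(arm_data) - mean(concurrent_controls)

print({arm: round(est, 2) for arm, est in estimates.items()})
```

Repeating such runs many times under null and alternative effects is what yields the operating characteristics (type I error, power) that guide the design.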
{"title":"Why and how should we simulate platform trials? Learnings from EU-PEARL.","authors":"Elias Laurin Meyer, Tobias Mielke, Marta Bofill Roig, Michaela Maria Freitag, Peter Jacko, Pavla Krotka, Peter Mesenbrink, Tom Parke, Sonja Zehetmayer, Dario Zocholl, Franz König","doi":"10.1186/s12874-024-02453-6","DOIUrl":"10.1186/s12874-024-02453-6","url":null,"abstract":"<p><strong>Background: </strong>Platform trials are innovative clinical trials governed by a master protocol that allows for the evaluation of multiple investigational treatments that enter and leave the trial over time. Interest in platform trials has been steadily increasing over the last decade. Due to their highly adaptive nature, platform trials provide sufficient flexibility to customize important trial design aspects to the requirements of both the specific disease under investigation and the different stakeholders. The flexibility of platform trials, however, comes with complexities when designing such trials. In the past, we reviewed existing software for simulating clinical trials and found that none of them were suitable for simulating platform trials as they do not accommodate the design features and flexibility inherent to platform trials, such as staggered entry of treatments over time.</p><p><strong>Results: </strong>We argued that simulation studies are crucial for the design of efficient platform trials. We developed and proposed an iterative, simulation-guided \"vanilla and sprinkles\" framework, i.e. from a basic to a more complex design, for designing platform trials. We addressed the functionality limitations of existing software as well as the unavailability of the coding therein by developing a suite of open-source software to use in simulating platform trials based on the R programming language. 
To give some examples, the newly developed software supports simulating staggered entry of treatments throughout the trial, choosing different options for control data sharing, specifying different platform stopping rules and platform-level operating characteristics. The software we developed is available through open-source licensing to enable users to access and modify the code. The separate use of two of these software packages to implement the same platform design by independent teams obtained the same results.</p><p><strong>Conclusion: </strong>We provide a framework, as well as open-source software for the design and simulation of platform trials. The software tools provide the flexibility necessary to capture the complexity of platform trials.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"12"},"PeriodicalIF":3.9,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11740366/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143000042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-17  DOI: 10.1186/s12874-024-02450-9
Naomi Bradbury, Tom Morris, Clareece Nevill, Janion Nevill, Ryan Field, Suzanne Freeman, Nicola Cooper, Alex Sutton
Background: Since 2015, the Complex Reviews Synthesis Unit (CRSU) has developed a suite of web-based applications (apps) that conduct complex evidence synthesis meta-analyses through point-and-click interfaces. This has been achieved in the R programming language by combining existing R packages that conduct meta-analysis with the shiny web-application package. The CRSU apps have evolved from two short-term student projects into a suite of eight apps that are used for more than 3,000 hours per month.
Aim: Here, we present our experience of developing production-grade web apps from the point of view of individuals trained primarily as statisticians rather than software developers, in the hope of encouraging and inspiring other groups to develop valuable open-source statistical software while learning from our experiences.
Key challenges: We discuss how we have addressed challenges in research software development, such as responding to feedback from our real-world users to improve the CRSU apps, implementing software engineering principles in our app development process, and gaining recognition for non-traditional research work within the academic environment.
Future developments: The CRSU continues to seek funding opportunities both to maintain and further develop our shiny apps. We aim to increase our user base by implementing new features within the apps and building links with other groups developing complementary evidence synthesis tools.
{"title":"A case study in statistical software development for advanced evidence synthesis: the combined value of analysts and research software engineers.","authors":"Naomi Bradbury, Tom Morris, Clareece Nevill, Janion Nevill, Ryan Field, Suzanne Freeman, Nicola Cooper, Alex Sutton","doi":"10.1186/s12874-024-02450-9","DOIUrl":"10.1186/s12874-024-02450-9","url":null,"abstract":"<p><strong>Background: </strong>Since 2015, the Complex Reviews Synthesis Unit (CRSU) has developed a suite of web-based applications (apps) that conduct complex evidence synthesis meta-analyses through point-and-click interfaces. This has been achieved in the R programming language by combining existing R packages that conduct meta-analysis with the shiny web-application package. The CRSU apps have evolved from two short-term student projects into a suite of eight apps that are used for more than 3,000 h per month.</p><p><strong>Aim: </strong>Here, we present our experience of developing production grade web-apps from the point-of-view of individuals trained primarily as statisticians rather than software developers in the hopes of encouraging and inspiring other groups to develop valuable open-source statistical software whilst also learning from our experiences.</p><p><strong>Key challenges: </strong>We discuss how we have addressed challenges to research software development such as responding to feedback from our real-world users to improve the CRSU apps, the implementation of software engineering principles into our app development process and gaining recognition for non-traditional research work within the academic environment.</p><p><strong>Future developments: </strong>The CRSU continues to seek funding opportunities both to maintain and further develop our shiny apps. 
We aim to increase our user base by implementing new features within the apps and building links with other groups developing complementary evidence synthesis tools.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"13"},"PeriodicalIF":3.9,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11740572/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142999799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-16DOI: 10.1186/s12874-025-02460-1
Xia Jing, Yuchun Zhou, James J Cimino, Jay H Shubrook, Vimla L Patel, Sonsoles De Lacalle, Aneesa Weaver, Chang Liu
Objectives: Metrics and instruments can provide guidance for clinical researchers to assess their potential research projects at an early stage before significant investment. Furthermore, metrics can also provide structured criteria for peer reviewers to assess others' clinical research manuscripts or grant proposals. This study aimed to develop, test, validate, and use evaluation metrics and instruments to accurately, consistently, systematically, and conveniently assess the quality of scientific hypotheses for clinical research projects.
Materials and methods: Metrics development went through iterative stages, including literature review, metrics and instrument development, internal and external testing and validation, and continuous revisions in each stage based on feedback. Furthermore, two experiments were conducted to determine brief and comprehensive versions of the instrument.
Results: The brief version of the instrument contained three dimensions: validity, significance, and feasibility. The comprehensive version of metrics included novelty, clinical relevance, potential benefits and risks, ethicality, testability, clarity, interestingness, and the three dimensions of the brief version. Each evaluation dimension included 2 to 5 subitems to evaluate the specific aspects of each dimension. For example, validity included clinical validity and scientific validity. The brief and comprehensive versions of the instruments included 12 and 39 subitems, respectively. Each subitem used a 5-point Likert scale.
Conclusion: The validated brief and comprehensive versions of metrics can provide standardized, consistent, systematic, and generic measurements for clinical research hypotheses, allow clinical researchers to prioritize their research ideas systematically, objectively, and consistently, and can be used as a tool for quality assessment during the peer review process.
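A scoring step like the one the brief instrument implies can be sketched as follows. The three dimension names come from the abstract; the grouping of four subitems per dimension and the mean-per-dimension aggregation rule are assumptions for illustration, not the authors' published scoring procedure.

```python
# Hypothetical scoring sketch for the brief instrument: three dimensions,
# 12 subitems in total, each rated on a 5-point Likert scale.
# The 4-subitems-per-dimension split and mean aggregation are assumptions.
from statistics import mean

BRIEF_DIMENSIONS = {
    "validity": 4,      # assumed subitem counts, not from the paper
    "significance": 4,
    "feasibility": 4,
}

def score_hypothesis(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average the 1-5 Likert ratings within each dimension."""
    scores = {}
    for dim, n_items in BRIEF_DIMENSIONS.items():
        items = ratings[dim]
        if len(items) != n_items or not all(1 <= r <= 5 for r in items):
            raise ValueError(f"expected {n_items} ratings of 1-5 for {dim!r}")
        scores[dim] = mean(items)
    return scores

example = {
    "validity": [4, 5, 4, 4],
    "significance": [3, 4, 4, 3],
    "feasibility": [5, 4, 4, 5],
}
print(score_hypothesis(example))
```

Per-dimension means keep the output interpretable on the original 1-5 scale, which is one plausible way such an instrument could support prioritising research ideas.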
{"title":"Development, validation, and usage of metrics to evaluate the quality of clinical research hypotheses.","authors":"Xia Jing, Yuchun Zhou, James J Cimino, Jay H Shubrook, Vimla L Patel, Sonsoles De Lacalle, Aneesa Weaver, Chang Liu","doi":"10.1186/s12874-025-02460-1","DOIUrl":"10.1186/s12874-025-02460-1","url":null,"abstract":"<p><strong>Objectives: </strong>Metrics and instruments can provide guidance for clinical researchers to assess their potential research projects at an early stage before significant investment. Furthermore, metrics can also provide structured criteria for peer reviewers to assess others' clinical research manuscripts or grant proposals. This study aimed to develop, test, validate, and use evaluation metrics and instruments to accurately, consistently, systematically, and conveniently assess the quality of scientific hypotheses for clinical research projects.</p><p><strong>Materials and methods: </strong>Metrics development went through iterative stages, including literature review, metrics and instrument development, internal and external testing and validation, and continuous revisions in each stage based on feedback. Furthermore, two experiments were conducted to determine brief and comprehensive versions of the instrument.</p><p><strong>Results: </strong>The brief version of the instrument contained three dimensions: validity, significance, and feasibility. The comprehensive version of metrics included novelty, clinical relevance, potential benefits and risks, ethicality, testability, clarity, interestingness, and the three dimensions of the brief version. Each evaluation dimension included 2 to 5 subitems to evaluate the specific aspects of each dimension. For example, validity included clinical validity and scientific validity. The brief and comprehensive versions of the instruments included 12 and 39 subitems, respectively. 
Each subitem used a 5-point Likert scale.</p><p><strong>Conclusion: </strong>The validated brief and comprehensive versions of metrics can provide standardized, consistent, systematic, and generic measurements for clinical research hypotheses, allow clinical researchers to prioritize their research ideas systematically, objectively, and consistently, and can be used as a tool for quality assessment during the peer review process.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"11"},"PeriodicalIF":3.9,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11737058/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143000002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-15DOI: 10.1186/s12874-024-02430-z
Yizhen Li, Zhe Huang, Zhongzhi Luan, Shujing Xu, Yunan Zhang, Lin Wu, Darong Wu, Dongran Han, Yixing Liu
Purpose: The process of searching for and selecting clinical evidence for systematic reviews (SRs) or clinical guidelines is essential for researchers in Traditional Chinese medicine (TCM). However, this process is often time-consuming and resource-intensive. In this study, we introduce a novel precision-preferred comprehensive information extraction and selection procedure to enhance both the efficiency and accuracy of evidence selection for TCM practitioners.
Methods: We integrated an established deep learning model (Evi-BERT combined rule-based method) with Boolean logic algorithms and an expanded retrieval strategy to automatically and accurately select potential evidence with minimal human intervention. The selection process is recorded in real-time, allowing researchers to backtrack and verify its accuracy. This innovative approach was tested on ten high-quality, randomly selected systematic reviews of TCM-related topics written in Chinese. To evaluate its effectiveness, we compared the screening time and accuracy of this approach with traditional evidence selection methods.
Results: Our findings demonstrated that the new method accurately selected potential literature based on consistent criteria while significantly reducing the time required for the process. Additionally, in some cases, this approach identified a broader range of relevant evidence and enabled the tracking of selection progress for future reference. The study also revealed that traditional screening methods are often subjective and prone to errors, frequently resulting in the inclusion of literature that does not meet established standards. In contrast, our method offers a more accurate and efficient way to select clinical evidence for TCM practitioners, outperforming traditional manual approaches.
Conclusion: We proposed an innovative approach for selecting clinical evidence for TCM reviews and guidelines, aiming to reduce the workload for researchers. While this method showed promise in improving the efficiency and accuracy of evidence selection, its full potential requires further validation. Additionally, it may serve as a useful tool for editors to assess manuscript quality in the future.
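The Boolean-logic screening step described above can be sketched minimally as follows. The model-based extraction (Evi-BERT) is out of scope here; this assumes each candidate record already carries extracted fields, and the inclusion rule itself is illustrative, not the authors' actual eligibility criteria.

```python
# Minimal sketch of a Boolean-logic inclusion rule over extracted fields.
# Fields ("design", "intervention", "animal_study") and keyword lists
# are assumptions for illustration.

def include(record: dict) -> bool:
    """Boolean rule: randomized design AND a TCM intervention
    AND NOT an animal study."""
    design_ok = "randomized" in record["design"].lower()
    tcm_ok = any(k in record["intervention"].lower()
                 for k in ("acupuncture", "herbal", "tcm"))
    human_ok = not record.get("animal_study", False)
    return design_ok and tcm_ok and human_ok

records = [
    {"design": "Randomized controlled trial", "intervention": "Herbal decoction"},
    {"design": "Case report", "intervention": "Acupuncture"},
    {"design": "Randomized trial", "intervention": "Acupuncture", "animal_study": True},
]
# Only the first record satisfies all three clauses.
selected = [r for r in records if include(r)]
print(len(selected))
```

Because each clause is an explicit Boolean test, every inclusion or exclusion decision is reproducible and can be logged for the kind of backtracking and verification the abstract describes.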
{"title":"Efficient evidence selection for systematic reviews in traditional Chinese medicine.","authors":"Yizhen Li, Zhe Huang, Zhongzhi Luan, Shujing Xu, Yunan Zhang, Lin Wu, Darong Wu, Dongran Han, Yixing Liu","doi":"10.1186/s12874-024-02430-z","DOIUrl":"10.1186/s12874-024-02430-z","url":null,"abstract":"<p><strong>Purpose: </strong>The process of searching for and selecting clinical evidence for systematic reviews (SRs) or clinical guidelines is essential for researchers in Traditional Chinese medicine (TCM). However, this process is often time-consuming and resource-intensive. In this study, we introduce a novel precision-preferred comprehensive information extraction and selection procedure to enhance both the efficiency and accuracy of evidence selection for TCM practitioners.</p><p><strong>Methods: </strong>We integrated an established deep learning model (Evi-BERT combined rule-based method) with Boolean logic algorithms and an expanded retrieval strategy to automatically and accurately select potential evidence with minimal human intervention. The selection process is recorded in real-time, allowing researchers to backtrack and verify its accuracy. This innovative approach was tested on ten high-quality, randomly selected systematic reviews of TCM-related topics written in Chinese. To evaluate its effectiveness, we compared the screening time and accuracy of this approach with traditional evidence selection methods.</p><p><strong>Results: </strong>Our findings demonstrated that the new method accurately selected potential literature based on consistent criteria while significantly reducing the time required for the process. Additionally, in some cases, this approach identified a broader range of relevant evidence and enabled the tracking of selection progress for future reference.
The study also revealed that traditional screening methods are often subjective and prone to errors, frequently resulting in the inclusion of literature that does not meet established standards. In contrast, our method offers a more accurate and efficient way to select clinical evidence for TCM practitioners, outperforming traditional manual approaches.</p><p><strong>Conclusion: </strong>We proposed an innovative approach for selecting clinical evidence for TCM reviews and guidelines, aiming to reduce the workload for researchers. While this method showed promise in improving the efficiency and accuracy of evidence selection, its full potential requires further validation. Additionally, it may serve as a useful tool for editors to assess manuscript quality in the future.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"10"},"PeriodicalIF":3.9,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11734327/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143000018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-14DOI: 10.1186/s12874-024-02440-x
Michelle Pfaffenlehner, Max Behrens, Daniela Zöller, Kathrin Ungethüm, Kai Günther, Viktoria Rücker, Jens-Peter Reese, Peter Heuschmann, Miriam Kesselmeier, Flavia Remo, André Scherag, Harald Binder, Nadine Binder
Background: The integration of real-world evidence (RWE) from real-world data (RWD) in clinical research is crucial for bridging the gap between clinical trial results and real-world outcomes. Analyzing routinely collected data to generate clinical evidence faces methodological concerns like confounding and bias, similar to prospectively documented observational studies. This study focuses on additional limitations frequently reported in the literature, providing an overview of the challenges and biases inherent to analyzing routine clinical care data, including health claims data (hereafter: routine data).
Methods: We conducted a literature search on routine data studies in four high-impact journals based on the Journal Citation Reports (JCR) category "Medicine, General & Internal" as of 2022 and three oncology journals, covering articles published from January 2018 to October 2023. Articles were screened and categorized into three scenarios based on their potential to provide meaningful RWE: (1) Burden of Disease, (2) Safety and Risk Group Analysis, and (3) Treatment Comparison. Limitations of this type of data cited in the discussion sections were extracted and classified according to different bias types: main bias categories in non-randomized studies (information bias, reporting bias, selection bias, confounding) and additional routine data-specific challenges (i.e., operationalization, coding, follow-up, missing data, validation, and data quality). These classifications were then ranked by relevance in a focus group meeting of methodological experts. The search was pre-specified and registered in PROSPERO (CRD42023477616).
Results: In October 2023, 227 articles were identified, 69 were assessed for eligibility, and 39 were included in the review: 11 on the burden of disease, 17 on safety and risk group analysis, and 11 on treatment comparison. Besides typical biases in observational studies, we identified additional challenges specific to RWE frequently mentioned in the discussion sections. The focus group had varied opinions on the limitations of Safety and Risk Group Analysis and Treatment Comparison but agreed on the essential limitations for the Burden of Disease category.
Conclusion: This review provides a comprehensive overview of potential limitations and biases in analyzing routine data reported in recent high-impact journals. We highlighted key challenges that have high potential to impact analysis results, emphasizing the need for thorough consideration and discussion for meaningful inferences.
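The limitation-classification step described in the Methods can be sketched as a simple keyword mapping from free-text limitation statements onto the bias categories listed above. The keyword lists here are illustrative assumptions; the review itself relied on manual extraction plus ranking by a focus group of methodological experts.

```python
# Illustrative keyword-based classifier mapping limitation statements
# from discussion sections onto bias categories. Keyword lists are assumed.
BIAS_KEYWORDS = {
    "information bias": ["misclassification", "measurement error", "coding error"],
    "selection bias": ["selection", "non-representative", "attrition"],
    "confounding": ["confound", "unmeasured covariate"],
    "missing data": ["missing", "incomplete record"],
    "follow-up": ["follow-up", "censoring"],
}

def classify_limitation(text: str) -> list[str]:
    """Return every bias category whose keywords appear in the statement."""
    text = text.lower()
    return [cat for cat, kws in BIAS_KEYWORDS.items()
            if any(kw in text for kw in kws)]

print(classify_limitation(
    "Residual confounding and missing data on smoking status may bias estimates."
))
```

A statement can match several categories at once, which mirrors how a single reported limitation (e.g. incomplete claims coding) can raise both information-bias and data-quality concerns.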
{"title":"Methodological challenges using routine clinical care data for real-world evidence: a rapid review utilizing a systematic literature search and focus group discussion.","authors":"Michelle Pfaffenlehner, Max Behrens, Daniela Zöller, Kathrin Ungethüm, Kai Günther, Viktoria Rücker, Jens-Peter Reese, Peter Heuschmann, Miriam Kesselmeier, Flavia Remo, André Scherag, Harald Binder, Nadine Binder","doi":"10.1186/s12874-024-02440-x","DOIUrl":"10.1186/s12874-024-02440-x","url":null,"abstract":"<p><strong>Background: </strong>The integration of real-world evidence (RWE) from real-world data (RWD) in clinical research is crucial for bridging the gap between clinical trial results and real-world outcomes. Analyzing routinely collected data to generate clinical evidence faces methodological concerns like confounding and bias, similar to prospectively documented observational studies. This study focuses on additional limitations frequently reported in the literature, providing an overview of the challenges and biases inherent to analyzing routine clinical care data, including health claims data (hereafter: routine data).</p><p><strong>Methods: </strong>We conducted a literature search on routine data studies in four high-impact journals based on the Journal Citation Reports (JCR) category \"Medicine, General & Internal\" as of 2022 and three oncology journals, covering articles published from January 2018 to October 2023. Articles were screened and categorized into three scenarios based on their potential to provide meaningful RWE: (1) Burden of Disease, (2) Safety and Risk Group Analysis, and (3) Treatment Comparison. 
Limitations of this type of data cited in the discussion sections were extracted and classified according to different bias types: main bias categories in non-randomized studies (information bias, reporting bias, selection bias, confounding) and additional routine data-specific challenges (i.e., operationalization, coding, follow-up, missing data, validation, and data quality). These classifications were then ranked by relevance in a focus group meeting of methodological experts. The search was pre-specified and registered in PROSPERO (CRD42023477616).</p><p><strong>Results: </strong>In October 2023, 227 articles were identified, 69 were assessed for eligibility, and 39 were included in the review: 11 on the burden of disease, 17 on safety and risk group analysis, and 11 on treatment comparison. Besides typical biases in observational studies, we identified additional challenges specific to RWE frequently mentioned in the discussion sections. The focus group had varied opinions on the limitations of Safety and Risk Group Analysis and Treatment Comparison but agreed on the essential limitations for the Burden of Disease category.</p><p><strong>Conclusion: </strong>This review provides a comprehensive overview of potential limitations and biases in analyzing routine data reported in recent high-impact journals. 
We highlighted key challenges that have high potential to impact analysis results, emphasizing the need for thorough consideration and discussion for meaningful inferences.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"8"},"PeriodicalIF":3.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11731536/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142982630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-14DOI: 10.1186/s12874-025-02464-x
Jonas Lander, Simon Wallraf, Dawid Pieper, Ronny Klawunn, Hala Altawil, Marie-Luise Dierks, Cosima John
Background: Focus groups (FGs) are an established method in health research to capture a full range of different perspectives on a particular research question. The extent to which they are effective depends, not least, on the composition of the participants. This study aimed to investigate how published FG studies plan and conduct the recruitment of study participants. We looked at what kind of information is reported about recruitment practices and what this reveals about the comprehensiveness of the actual recruitment plans and practices.
Methods: We conducted a systematic search of FG studies in PubMed and Web of Science published between 2018 and 2024, and included n = 80 eligible publications in the analysis. We used a text extraction sheet to collect all relevant recruitment information from each study. We then coded the extracted text passages and summarised the findings descriptively.
Results: Nearly half (n = 38/80) of the studies were from the USA and Canada, many addressing issues related to diabetes, cancer, mental health and chronic diseases. For recruitment planning, 20% reported a specific sampling target, while 6% used existing studies or literature for organisational and content planning. A further 10% reported previous recruitment experience of the researchers. The studies varied in terms of number of participants (range = 7-202) and group size (range = 7-20). Recruitment most often occurred in healthcare settings and rarely through digital channels or everyday places. FG participants were most commonly recruited by the research team (21%) or by health professionals (16%), with less collaboration with public organisations (10%) and little indication of the number of people involved (13%). A financial incentive for participants was used in 43% of cases, and 19% reported participatory approaches to plan and carry out recruitment. Overall, 65 studies (81%) reported a total of 58 limitations related to recruitment.
Conclusions: The reporting of recruitment often seems to be incomplete, and its performance lacking. Hence, guidelines and recruitment recommendations designed to assist researchers are not yet adequately serving their purpose. Researchers may benefit from more practical support, such as early training on key principles and options for effective recruitment strategies provided by institutions in their immediate professional environment, e.g. universities, faculties or scientific associations.
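The descriptive summary step in this meta-research design can be sketched as a tally over coded extraction-sheet entries. The codes and the four mock studies below are invented for illustration; the actual extraction sheet covered far more recruitment characteristics across 80 publications.

```python
# Illustrative tally of coded recruitment characteristics across studies.
# The "recruiter" and "incentive" codes and the mock data are assumptions.
from collections import Counter

coded_studies = [
    {"recruiter": "research team", "incentive": True},
    {"recruiter": "health professionals", "incentive": False},
    {"recruiter": "research team", "incentive": True},
    {"recruiter": "public organisation", "incentive": False},
]

def summarise(studies: list[dict], key: str) -> dict:
    """Percentage of studies per code value, rounded to whole percent."""
    counts = Counter(s[key] for s in studies)
    n = len(studies)
    return {code: round(100 * c / n) for code, c in counts.items()}

print(summarise(coded_studies, "recruiter"))
print(summarise(coded_studies, "incentive"))
```

Reporting percentages alongside raw counts, as the abstract does (e.g. "n = 38/80"), keeps small denominators visible to the reader.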
{"title":"Recruiting participants for focus groups in health research: a meta-research study.","authors":"Jonas Lander, Simon Wallraf, Dawid Pieper, Ronny Klawunn, Hala Altawil, Marie-Luise Dierks, Cosima John","doi":"10.1186/s12874-025-02464-x","DOIUrl":"10.1186/s12874-025-02464-x","url":null,"abstract":"<p><strong>Background: </strong>Focus groups (FGs) are an established method in health research to capture a full range of different perspectives on a particular research question. The extent to which they are effective depends, not least, on the composition of the participants. This study aimed to investigate how published FG studies plan and conduct the recruitment of study participants. We looked at what kind of information is reported about recruitment practices and what this reveals about the comprehensiveness of the actual recruitment plans and practices.</p><p><strong>Methods: </strong>We conducted a systematic search of FG studies in PubMed and Web of Science published between 2018 and 2024, and included n = 80 eligible publications in the analysis. We used a text extraction sheet to collect all relevant recruitment information from each study. We then coded the extracted text passages and summarised the findings descriptively.</p><p><strong>Results: </strong>Nearly half (n = 38/80) of the studies were from the USA and Canada, many addressing issues related to diabetes, cancer, mental health and chronic diseases. For recruitment planning, 20% reported a specific sampling target, while 6% used existing studies or literature for organisational and content planning. A further 10% reported previous recruitment experience of the researchers. The studies varied in terms of number of participants (range = 7-202) and group size (range = 7-20). Recruitment occurred often in healthcare settings, rarely through digital channels and everyday places. 
FG participants were most commonly recruited by the research team (21%) or by health professionals (16%), with less collaboration with public organisations (10%) and little indication of the number of people involved (13%). A financial incentive for participants was used in 43% of cases, and 19% reported participatory approaches to plan and carry out recruitment. 65 studies (81%) reported a total of 58 limitations related to recruitment.</p><p><strong>Conclusions: </strong>The reporting of recruitment often seems to be incomplete, and its performance lacking. Hence, guidelines and recruitment recommendations designed to assist researchers are not yet adequately serving their purpose. Researchers may benefit from more practical support, such as early training on key principles and options for effective recruitment strategies provided by institutions in their immediate professional environment, e.g. universities, faculties or scientific associations.</p>","PeriodicalId":9114,"journal":{"name":"BMC Medical Research Methodology","volume":"25 1","pages":"9"},"PeriodicalIF":3.9,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730470/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142982650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}