Pub Date: 2025-01-09. DOI: 10.1186/s12874-025-02457-w
Yan Li
Objective: To assess whether the true outcome-generating model can be identified from other candidate models for clinical practice using current conventional model performance measures, across a range of simulation scenarios and a CVD risk prediction exemplar.
Study design and setting: Thousands of true-model scenarios were used to simulate clinical data; the true models and various candidate models were trained on training datasets and then compared on testing datasets using 25 conventionally used model performance measures. The work comprises a univariate simulation (179.2k simulated datasets and over 1.792 million models), a multivariate simulation (728k simulated datasets and over 8.736 million models), and a CVD risk prediction case analysis.
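The kind of comparison described can be illustrated with a minimal, self-contained sketch (not the study's actual simulation code): a binary outcome is generated from a logistic true model with one causal predictor, and the true model is compared on held-out data, via the C statistic (AUC), against a candidate with an extra noise predictor and a proxy model that replaces the causal predictor with a noisy surrogate. All variable names, effect sizes, and sample sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 20_000

# Causal predictor x, pure-noise predictor z, and proxy w = x + measurement error.
x = rng.normal(size=n)
z = rng.normal(size=n)
w = x + rng.normal(scale=1.0, size=n)

# True outcome-generating model: logit(p) = -1 + 1.2 * x
p = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * x)))
y = rng.binomial(1, p)

designs = {
    "true model (x)": np.column_stack([x]),
    "true + extra noise (x, z)": np.column_stack([x, z]),
    "proxy missing the causal predictor (w)": np.column_stack([w]),
}

for name, X in designs.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
    fit = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, fit.predict_proba(X_te)[:, 1])
    print(f"{name:42s} C statistic = {auc:.3f}")
```

In this toy setting, the noise-augmented candidate typically matches the true model's C statistic almost exactly, mirroring the difficulty of separating such models with discrimination measures alone.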
Results: True models had an overall C statistic (95% range) of 0.67 (0.51, 0.96) across all scenarios in the univariate simulation, 0.81 (0.54, 0.98) in the multivariate simulation, 0.85 (0.82, 0.88) in the univariate case analysis and 0.85 (0.82, 0.88) in the multivariate case analysis. The measures showed very clear differences between the true model and a flip-coin model, little or no difference between the true model and candidate models with extra noise predictors, and relatively small differences between the true model and proxy models missing causal predictors.
Conclusion: The study found that the true model is not always identified as the best-performing model by current conventional measures for binary outcomes, even though the true model is present in the clinical data. New statistical approaches or measures should be established to identify the causal true model from proxy models, especially proxy models with extra noise predictors and/or missing causal predictors.
Title: "Identify the underlying true model from other models for clinical practice using model performance measures." BMC Medical Research Methodology 25(1): 4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11715858/pdf/
Pub Date: 2025-01-08. DOI: 10.1186/s12874-024-02441-w
Mikateko Mazinu, Nomonde Gwebushe, Samuel Manda, Tarylee Reddy
Background: The majority of phase 3 clinical trials are implemented in multiple sites or centres, which inevitably leads to correlation between observations from the same site or centre. This correlation must be carefully considered in both the design and the statistical analysis to ensure accurate interpretation of the results and reduce the risk of bias. This scoping review aims to provide a detailed account of the statistical methods used to analyse data collected from multicentre HIV randomized controlled trials in the African region.
Methods: This review followed the methodological framework proposed by Arksey and O'Malley. We searched four databases (PubMed, EBSCOhost, Scopus, and Web of Science) and retrieved 977 articles, 34 of which were included in the review.
Results: Data charting revealed that the most commonly used statistical methods for analysing HIV endpoints in multicentre randomized controlled trials in Africa were standard survival analysis techniques (24 articles [71%]). Approximately 47% of the articles used stratified analysis methods to account for variation across sites. Of the 34 articles reviewed, only 6 explicitly considered intra-site correlation in the analysis.
Conclusions: Our scoping review provides insights into the statistical methods used to analyse HIV data in multicentre randomized controlled trials in Africa and highlights the need for standardized reporting of statistical methods.
Title: "Statistical methods in the analysis of multicentre HIV randomized controlled trials in the African region: a scoping review." BMC Medical Research Methodology 25(1): 3. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11707912/pdf/
Pub Date: 2025-01-04. DOI: 10.1186/s12874-024-02449-2
Freja Gomez Overgaard, Henrik Hein Lauridsen, Mads Damkjær, Anne Reffsøe Ebbesen, Lise Hestbæk, Mikkel Brunsgaard Konner, Søren Francis Dyhrberg O'Neill, Stine Haugaard Pape, Michael Skovdal Rathleff, Christian Lund Straszek, Casper Nim
Background: Spinal pain affects up to 30% of school-age children and can interfere with various aspects of daily life, such as school attendance, physical function, and social life. Current assessment tools often rely on parental reporting, which limits our understanding of how each child is affected by their pain. This study aimed to address this gap by developing MySpineData-Kids ("MiRD-Kids"), a tailored patient-reported questionnaire focusing on children with spinal pain in secondary care (Danish hospital setting).
Methods: The development of MiRD-Kids followed a structured, multi-phase approach targeting children in outpatient care. The first phase involved evidence synthesis, expert consultations, and item formulation, resulting in the first version. The second phase involved pilot testing among pediatric spinal pain patients, leading to modifications for improved clarity and relevance. The third phase involved implementation in the pediatric outpatient track at The Spine Centre of Southern Denmark, University Hospital of Southern Denmark.
Results: MiRD-Kids was based on selected items from seven questionnaires, encompassing 20 items across six domains. Pilot testing with 13 pediatric patients informed modifications and finalized the questionnaire. The questionnaire includes sections for parents/legal guardians and six domains for children covering pain, sleep, activities, trauma, concerns, and treatment, following the International Classification of Functioning, Disability, and Health (ICF). Implementation challenges were overcome within a 2-month period, resulting in MiRD-Kids, a comprehensive clinical questionnaire for assessing pediatric spinal pain in hospital outpatient settings.
Conclusion: MiRD-Kids is the first comprehensive questionnaire for children with spinal pain seen in an outpatient care setting and follows the ICF approach. It can support age-specific, high-quality research and comprehensive clinical assessment of children aged 12 to 17 years, potentially contributing to efforts aimed at mitigating the long-term consequences of spinal pain.
Title: "Development of a standardized patient-reported clinical questionnaire for children with spinal pain." BMC Medical Research Methodology 25(1): 2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11699818/pdf/
Pub Date: 2025-01-03. DOI: 10.1186/s12874-024-02423-y
Sabyasachi Guharay
Background: In this work, we implement a data-driven approach using an aggregation of several analytical methods to study the characteristics of COVID-19 daily infection and death time series and to identify correlations and characteristic trends that can be corroborated against the time evolution of the disease. The datasets cover twelve distinct countries across six continents, from January 22, 2020 to March 1, 2022. This time span is partitioned into three windows: (1) pre-vaccine, (2) post-vaccine and pre-omicron (BA.1 variant), and (3) post-vaccine including the post-omicron variant. This study enables insights into intriguing questions related to the science of system dynamics pertaining to COVID-19 evolution.
Methods: We implement several distinct analytical methods: (a) statistical studies to estimate the skewness and kurtosis of the data distributions; (b) analysis of the stationarity properties of the time series using Augmented Dickey-Fuller (ADF) tests; (c) examination of co-integration properties for the non-stationary time series using Phillips-Ouliaris (PO) tests; and (d) calculation of the Hurst exponent using rescaled-range (R/S) analysis, along with Detrended Fluctuation Analysis (DFA), for self-affinity studies of the evolving dynamical datasets.
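As a rough illustration of steps (a), (b) and (d), the sketch below computes skewness and kurtosis, runs ADF tests on a synthetic daily-case series and on its log returns, and estimates the Hurst exponent with a simple rescaled-range (R/S) procedure. The series is simulated (a geometric random walk), not real COVID-19 data, and the R/S routine is a rough approximation rather than the study's exact implementation; the co-integration (PO) and DFA steps are omitted.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from statsmodels.tsa.stattools import adfuller

def hurst_rs(series, min_chunk=8):
    """Rough rescaled-range (R/S) estimate of the Hurst exponent."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes = np.unique(np.logspace(np.log10(min_chunk), np.log10(n // 2), 10).astype(int))
    rs = []
    for size in sizes:
        chunks = series[: n - n % size].reshape(-1, size)
        dev = (chunks - chunks.mean(axis=1, keepdims=True)).cumsum(axis=1)
        rs.append(((dev.max(axis=1) - dev.min(axis=1)) / chunks.std(axis=1, ddof=1)).mean())
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)   # H is the log-log slope
    return slope

# Simulated stand-in for a daily-case series (always positive, nonstationary in levels).
rng = np.random.default_rng(1)
daily_cases = np.exp(5 + np.cumsum(rng.normal(0, 0.05, size=400)))
log_returns = np.diff(np.log(daily_cases))

print("skewness:", skew(daily_cases), " excess kurtosis:", kurtosis(daily_cases))
print("ADF p-value, raw series: ", adfuller(daily_cases)[1])   # typically fails to reject
print("ADF p-value, log returns:", adfuller(log_returns)[1])   # typically rejects (stationary)
print("Hurst exponent (R/S):    ", hurst_rs(daily_cases))
```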
Results: We observe notable asymmetry of the distributions, indicated by skewness, and the presence of heavy tails, indicated by kurtosis. The daily infection and death data are, by and large, nonstationary, while their corresponding log returns are stationary. The self-affinity studies through the Hurst exponents and DFA exhibit intriguing local changes over time. These changes can be attributed to the underlying dynamics of state transitions, especially from a random state to either mean-reversion or long-range memory/persistence states.
Conclusions: We conduct systematic studies covering widely diverse time series datasets of daily infections and deaths during the evolution of the COVID-19 pandemic. We demonstrate the merit of a multi-method analytics framework by systematically laying down a methodological structure for analysis and quantitatively examining the evolution of daily COVID-19 infection and death cases. This methodology builds a capability for tracking dynamically evolving states pertaining to critical problems.
Title: "A data-driven approach to study temporal characteristics of COVID-19 infection and death Time Series for twelve countries across six continents." BMC Medical Research Methodology 25(1): 1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11697903/pdf/
Pub Date: 2024-12-31. DOI: 10.1186/s12874-024-02429-6
Limei Ji, Max Geraedts, Werner de Cruppé
Background: Health services research often relies on secondary data, necessitating quality checks for completeness, validity, and potential errors before use. Various methods address implausible data, including data elimination, statistical estimation, or value substitution from the same or another dataset. This study presents an internal validation process of a secondary dataset used to investigate hospital compliance with minimum caseload requirements (MCR) in Germany. The secondary data source validated is the German Hospital Quality Reports (GHQR), an official dataset containing structured self-reported data from all hospitals in Germany.
Methods: This study conducted an internal cross-field validation of MCR-related data in GHQR from 2016 to 2021. The validation process checked the validity of reported MCR caseloads, including data availability and consistency, by comparing the stated MCR caseload with further variables in the GHQR. Subsequently, implausible MCR caseload values were corrected using the most plausible values given in the same GHQR. The study also analysed the error sources and used reimbursement-related Diagnosis Related Groups Statistic data to assess the validation outcomes.
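A minimal sketch of a cross-field plausibility check and correction of this kind is shown below. The column names and the correction rule are hypothetical illustrations; they are not the official GHQR field names or the study's exact validation algorithm.

```python
import pandas as pd

# Hypothetical GHQR extract: one row per hospital site and MCR procedure.
reports = pd.DataFrame({
    "hospital_site": ["A", "B", "C", "D"],
    "stated_mcr_caseload": [12, 0, 45, None],   # self-reported MCR caseload
    "dept_case_count": [12, 7, 45, 9],          # same procedure counted at department level
    "ops_code_count": [12, 7, 44, 9],           # same procedure derived from procedure codes
})

# Cross-field check: the stated MCR caseload should be present and agree with the other fields.
fields = reports[["stated_mcr_caseload", "dept_case_count", "ops_code_count"]]
reports["implausible"] = (
    reports["stated_mcr_caseload"].isna() | (fields.nunique(axis=1, dropna=True) > 1)
)

# Correction in the spirit of the study: replace implausible or missing values with the
# most plausible value available in the same report (here, the department-level count).
reports["corrected_caseload"] = reports["stated_mcr_caseload"].where(
    ~reports["implausible"], reports["dept_case_count"]
)
print(reports)
```

Site B illustrates the main error source reported in the Results: a site that performs the procedure but states a caseload of zero is flagged and corrected rather than excluded.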
Results: The analysis focused on four MCR procedures. Between 11.8% and 27.7% of the total MCR caseload values in the GHQR appeared ambiguous, and 7.9-23.7% were corrected. The correction added 0.7-3.7% of cases not previously stated as MCR caseloads and added 1.5-26.1% of hospital sites as MCR-performing hospitals not previously identified in the GHQR. The main error source was this non-reporting of MCR caseloads, especially by hospitals with low case numbers. The basic plausibility control implemented by the Federal Joint Committee since 2018 has improved the MCR-related data quality over time.
Conclusions: This study employed a comprehensive approach to dataset internal validation that encompassed: (1) hospital association level data, (2) hospital site level data and (3) medical department level data, (4) report data spanning six years, and (5) logical plausibility checks. To ensure data completeness, we selected the most plausible values without eliminating incomplete or implausible data. For future practice, we recommend a validation process when using GHQR as a data source for MCR-related research. Additionally, an adapted plausibility control could help to improve the quality of MCR documentation.
Title: "Internal validation of self-reported case numbers in hospital quality reports: preparing secondary data for health services research." BMC Medical Research Methodology 24(1): 325. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11686984/pdf/
Pub Date: 2024-12-31. DOI: 10.1186/s12874-024-02433-w
Bin Hu, Yaohui Han, Wenhui Zhang, Qingyang Zhang, Wen Gu, Jun Bi, Bi Chen, Lishun Xiao
Background: The prediction of coronavirus disease 2019 (COVID-19) in broad regions has been widely researched, but predictive models for specific areas, such as urban areas, have rarely been studied. Applying predictive models from a broad region directly to a small area may be inaccurate. This paper builds a prediction approach for small-size COVID-19 time series in a city.
Methods: Numbers of COVID-19 daily confirmed cases were collected from November 1, 2022 to November 16, 2023 in Xuzhou city, China. Classical deep learning models, including the recurrent neural network (RNN), long short-term memory (LSTM), gated recurrent unit (GRU) and temporal convolutional network (TCN), are initially trained; RNN, LSTM and GRU are then integrated with a new attention mechanism and transfer learning to improve performance. Ablation experiments are conducted ten times to show the robustness of the prediction performance. Model performances are compared using the mean absolute error, root mean square error and coefficient of determination.
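As an illustration of the attention-augmented recurrent architecture, the sketch below defines a small LSTM whose hidden states are pooled with a learned attention weighting before a one-step-ahead forecasting head. It shows a generic attention mechanism over LSTM states, not the paper's specific LSTMATT design, frequency-domain convolution augmentation, or transfer-learning procedure.

```python
import torch
import torch.nn as nn

class LSTMWithAttention(nn.Module):
    """LSTM whose hidden states are pooled by learned attention weights before forecasting."""

    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # attention score for each time step
        self.head = nn.Linear(hidden, 1)    # one-step-ahead forecast

    def forward(self, x):                   # x: (batch, time, features)
        states, _ = self.lstm(x)            # (batch, time, hidden)
        weights = torch.softmax(self.score(states), dim=1)   # attention over time steps
        context = (weights * states).sum(dim=1)              # weighted pooling -> (batch, hidden)
        return self.head(context).squeeze(-1)                # (batch,)

# Toy usage: predict the next value from a 14-day window of (standardized) daily counts.
model = LSTMWithAttention()
window = torch.randn(8, 14, 1)              # 8 samples, 14 days, 1 feature
loss = nn.MSELoss()(model(window), torch.randn(8))
loss.backward()
```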
Results: LSTM outperforms the other models, and TCN has the worst generalization ability. Thus, LSTM is integrated with the new attention mechanism to construct an LSTMATT model, which improves performance. LSTMATT is trained on the time series curve smoothed through frequency-domain convolution augmentation, and transfer learning is then adopted to transfer the learned features back to the original time series, resulting in a TLLA model that further improves performance. RNN and GRU are also integrated with the attention mechanism and transfer learning and their performances also improve, but TLLA still performs best.
Conclusions: The TLLA model has the best prediction performance for the time series of COVID-19 daily confirmed cases; the new attention mechanism and transfer learning contribute to improving prediction performance in the flat part and the jagged part of the series, respectively.
Title: "A prediction approach to COVID-19 time series with LSTM integrated attention mechanism and transfer learning." BMC Medical Research Methodology 24(1): 323. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11686916/pdf/
Pub Date: 2024-12-31. DOI: 10.1186/s12874-024-02445-6
Yi Yao, Brad C Astor, Wei Yang, Tom Greene, Liang Li
Background: Graft loss is a major health concern for kidney transplant (KTx) recipients. It is of clinical interest to develop a prognostic model for both graft function, quantified by estimated glomerular filtration rate (eGFR), and the risk of graft failure. Additionally, the model should be dynamic in the sense that it adapts to accumulating longitudinal information, including the time-varying at-risk population, predictor-outcome associations, and clinical history. Finally, the model should properly account for the competing risk of death with a functioning graft. A model with the features above is not yet available in the literature and is the focus of this research.
Methods: We built and internally validated a prediction model on 3,893 patients from the Wisconsin Allograft Recipient Database (WisARD) who had a functioning graft 6 months after kidney transplantation. The landmark analysis approach was used to build a proof-of-concept dynamic prediction model that addresses the aforementioned methodological issues: the prediction of graft failure, accounting for the competing risk of death, and the prediction of future eGFR are updated at each post-transplant time. We used 21 predictors, including recipient characteristics, donor characteristics, transplant-related and post-transplant factors, longitudinal eGFR, hospitalization, and rejection history. A sensitivity analysis explored a less conservative variable selection rule that resulted in a more parsimonious model with fewer predictors.
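A stripped-down sketch of the landmarking idea behind the graft-failure component is given below: at each landmark time, patients still at risk are selected, the clock is reset to the landmark, follow-up is administratively censored at the prediction horizon, and a Cox model is refit. The cohort is simulated and the two predictors are placeholders; the study's 21 predictors, the eGFR sub-model, and the competing-risk handling for death with a functioning graft are not reproduced.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 300

# Hypothetical cohort; column names are illustrative, not WisARD fields.
cohort = pd.DataFrame({
    "egfr_at_landmark": rng.normal(55, 15, n),
    "donor_age": rng.normal(45, 12, n),
})
risk = np.exp(-0.03 * (cohort["egfr_at_landmark"] - 55) + 0.02 * (cohort["donor_age"] - 45))
cohort["followup_years"] = rng.exponential(8 / risk)                 # time to graft failure
cohort["graft_failure"] = (cohort["followup_years"] < 10).astype(int)
cohort["followup_years"] = cohort["followup_years"].clip(upper=10)   # administrative censoring

def landmark_dataset(df, landmark, horizon):
    """Keep patients still at risk at the landmark, reset the time origin,
    and censor at landmark + horizon."""
    at_risk = df[df["followup_years"] > landmark].copy()
    at_risk["time"] = (at_risk["followup_years"] - landmark).clip(upper=horizon)
    at_risk["event"] = (
        (at_risk["graft_failure"] == 1)
        & (at_risk["followup_years"] <= landmark + horizon)
    ).astype(int)
    return at_risk[["time", "event", "egfr_at_landmark", "donor_age"]]

# Refit the Cox model at successive landmark times, as in a landmark analysis.
for lm_time in [0.5, 1.0, 2.0]:
    lm = landmark_dataset(cohort, landmark=lm_time, horizon=5)
    cph = CoxPHFitter().fit(lm, duration_col="time", event_col="event")
    print(f"landmark {lm_time} y:", cph.hazard_ratios_.round(3).to_dict())
```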
Results: For prediction up to the next 1 to 5 years, the model achieved high accuracy in predicting graft failure, with AUCs between 0.80 and 0.95, and moderately high accuracy in predicting eGFR, with a root mean squared error between 10 and 18 mL/min/1.73 m² and 70%-90% of predicted eGFR values falling within 30% of the observed eGFR. The model demonstrated substantial accuracy improvement over a conventional prediction model that used only baseline predictors.
Conclusion: The model outperformed a conventional prediction model that used only baseline predictors. It is a useful tool for patient counseling and clinical management of KTx and is currently available as a web app.
Title: "Predicting kidney graft function and failure among kidney transplant recipients." BMC Medical Research Methodology 24(1): 324. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11687162/pdf/
Pub Date: 2024-12-27. DOI: 10.1186/s12874-024-02446-5
Maura Leusder, Sven Relijveld, Derya Demirtas, Jon Emery, Michelle Tew, Peter Gibbs, Jeremy Millar, Victoria White, Michael Jefford, Fanny Franchini, Maarten IJzerman
Background: The aim of this study is to develop a method we call "cost mining" to unravel cost variation and identify cost drivers by modelling integrated patient pathways from primary care to the palliative care setting. This approach fills an urgent need to quantify financial strains on healthcare systems, particularly for colorectal cancer, which is the most expensive cancer in Australia, and the second most expensive cancer globally.
Methods: We developed and published a customized algorithm that dynamically estimates and visualizes the mean, minimum, and total costs of care at the patient level, by aggregating activity-based healthcare system costs (e.g. DRGs) across integrated pathways. This extends traditional process mining approaches by making the resulting process maps actionable and informative and by displaying cost estimates. We demonstrate the method by constructing a unique dataset of colorectal cancer pathways in Victoria, Australia, using records of primary care, diagnosis, hospital admission and chemotherapy, medication, health system costs, and life events to create integrated colorectal cancer patient pathways from 2012 to 2020.
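The core aggregation step behind such cost mining can be sketched in a few lines of pandas: activity-level cost records are rolled up to per-patient pathway costs and to mean/minimum/total costs per activity and cancer stage, the kind of summary that can be overlaid on a process map. The event table below is a made-up illustration, not the linked Victorian dataset or the published algorithm.

```python
import pandas as pd

# Hypothetical activity-level records; real inputs would be linked primary care,
# admission, chemotherapy and medication events with DRG-style costs attached.
events = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "activity": ["GP visit", "colonoscopy", "surgery",
                 "GP visit", "chemotherapy",
                 "GP visit", "surgery", "chemotherapy"],
    "stage": ["II", "II", "II", "III", "III", "IV", "IV", "IV"],
    "cost_aud": [80, 1200, 21000, 80, 5400, 80, 23000, 7800],
})

# Per-patient pathway cost (the quantity behind the reported per-patient range).
per_patient = events.groupby("patient_id")["cost_aud"].sum()

# Cost drivers: mean / minimum / total cost per activity and cancer stage.
drivers = (
    events.groupby(["stage", "activity"])["cost_aud"]
    .agg(["mean", "min", "sum", "count"])
    .sort_values("sum", ascending=False)
)
print(per_patient)
print(drivers)
```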
Results: Cost mining with the algorithm enabled exploration of costly integrated pathways, i.e. drilling down into high-cost pathways to discover cost drivers, for 4246 cases covering approximately 4 million care activities. Per-patient CRC pathway costs ranged from $10,379 AUD to $41,643 AUD and varied significantly by cancer stage, such that, for example, the cost of a given chemotherapy regimen differs between stages. Admitted episodes were the most costly, representing 93.34% or $56.6 M AUD of the total healthcare system costs covered in the sample.
Conclusions: Cost mining can supplement other health economic methods by providing contextual, sequence and timing-related information depicting how patients flow through complex care pathways. This approach can also facilitate health economic studies informing decision-makers on where to target care improvement or to evaluate the consequences of new treatments or care delivery interventions. Through this study we provide an approach for hospitals and policymakers to leverage their health data infrastructure and to enable real time patient level cost mining.
Title: "Toward value-based care using cost mining: cost aggregation and visualization across the entire colorectal cancer patient pathway." BMC Medical Research Methodology 24(1): 321. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11681630/pdf/
Pub Date: 2024-12-27. DOI: 10.1186/s12874-024-02428-7
James Weaver, Erica A Voss, Guy Cafri, Kathleen Beyrau, Michelle Nashleanas, Robert Suruki
Background: Autoimmune disorders have primary manifestations such as joint pain and bowel inflammation but can also have secondary manifestations such as non-infectious uveitis (NIU). A regulatory health authority raised concerns after receiving spontaneous reports of NIU following exposure to Remicade®, a biologic therapy with multiple indications for which alternative therapies are available. In assessing this clinical question, we applied validity diagnostics to support causal inferences from observational data.
Methods: We assessed the risk of NIU among patients exposed to Remicade® compared to alternative biologics. Five databases, four study populations, and four analysis methodologies were used to estimate 80 potential treatment effects, with 20 pre-specified as primary. The study populations comprised inflammatory bowel conditions (Crohn's disease or ulcerative colitis; IBD), ankylosing spondylitis (AS), psoriatic conditions (plaque psoriasis or psoriatic arthritis; PsO/PsA), and rheumatoid arthritis (RA). We conducted four analysis strategies intended to address limitations of causal estimation using observational data and applied four diagnostics, with pre-specified quantitative rules, to evaluate threats to validity from observed and unobserved confounding. We also qualitatively assessed post-propensity-score-matching representativeness and susceptibility to bias from outcome misclassification. We fit Cox proportional-hazards models, conditioned on propensity score-matched sets, to estimate the on-treatment risk of NIU among Remicade® initiators versus alternatives. Estimates from analyses that passed the four validity tests were assessed.
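The propensity-score-matched, stratified Cox analysis described above can be sketched on simulated data as follows: a propensity score for treatment is estimated from baseline confounders, each treated patient is greedily matched 1:1 to the nearest unmatched control on that score, and a Cox model for the outcome is stratified by matched pair. The variable names, data-generating process, and greedy matcher (no caliper) are illustrative assumptions; the study's diagnostics and databases are not reproduced.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 600

# Simulated cohort: treatment assignment depends on two baseline confounders.
df = pd.DataFrame({"age": rng.normal(45, 12, n), "severity": rng.normal(0, 1, n)})
logit = -0.5 + 0.02 * (df["age"] - 45) + 0.8 * df["severity"]
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
hazard = 0.02 * np.exp(0.3 * df["severity"])            # outcome depends on severity only
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df["time"] < 5).astype(int)
df["time"] = df["time"].clip(upper=5)                   # censor at 5 years of follow-up

# 1. Propensity score for treatment given baseline confounders.
ps = LogisticRegression().fit(df[["age", "severity"]], df["treated"])
df["ps"] = ps.predict_proba(df[["age", "severity"]])[:, 1]

# 2. Greedy 1:1 nearest-neighbour matching on the propensity score.
controls = df[df["treated"] == 0].copy()
pairs = []
for set_id, (i, row) in enumerate(df[df["treated"] == 1].iterrows()):
    j = (controls["ps"] - row["ps"]).abs().idxmin()
    pairs += [(i, set_id), (j, set_id)]
    controls = controls.drop(j)
matched = df.loc[[i for i, _ in pairs]].assign(matched_set=[s for _, s in pairs])

# 3. Cox model conditioned on the matched sets (stratified by pair).
cph = CoxPHFitter().fit(
    matched[["time", "event", "treated", "matched_set"]],
    duration_col="time", event_col="event", strata=["matched_set"],
)
print("on-treatment hazard ratio:", round(cph.hazard_ratios_["treated"], 2))
```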
Results: Of the 80 total analyses and the 20 analyses pre-specified as primary, 24% and 20% passed diagnostics, respectively. Among patients with IBD, we observed no evidence of increased risk for NIU relative to other similarly indicated biologics (pooled hazard ratio [HR] 0.75, 95% confidence interval [CI] 0.38-1.40). For patients with RA, we observed no increased risk relative to similarly indicated biologics, although results were imprecise (HR: 1.23, 95% CI 0.14-10.47).
Conclusions: We applied validity diagnostics in a heterogeneous, observational setting to answer a specific research question. The results indicated that safety effect estimates from many analyses would be inappropriate to interpret as causal, given the data available and methods employed. Validity diagnostics should always be used to determine whether the design and analysis are of sufficient quality to support causal inferences. The clinical implications of our findings on IBD suggest that, if an increased risk exists, it is unlikely to be greater than 40%, given the 1.40 upper bound of the pooled HR confidence interval.
Title: "The necessity of validity diagnostics when drawing causal inferences from observational data: lessons from a multi-database evaluation of the risk of non-infectious uveitis among patients exposed to Remicade®." BMC Medical Research Methodology 24(1): 322. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11681708/pdf/
Pub Date: 2024-12-26. DOI: 10.1186/s12874-024-02417-w
Gudrun Wells, Janelle Bowden, Duncan Colyer, Eleonora Kay, Sarah Lukeman, Lyndsay Newett, Lisa Eckstein
Background: The connection between participants and their research team can affect how safe, informed, and respected a participant feels, and their willingness to complete a research project. Communication between researchers and participants is key to developing this connection, but there is little published work evaluating how communication during clinical research is conducted.
Purpose: This paper explores what communications happen (and how) with research participants in Australia after they consent to participate in clinical research. It provides reflections from Australians working in clinical research about the strategies they currently use, or would like to use, to communicate with research participants.
Methods: This exploratory, qualitative descriptive study reports findings from twenty semi-structured interviews undertaken with people who work in clinical research in Australia (such as staff in participant-facing, site management, or sponsor representative roles). The interviews were analysed inductively using thematic analysis.
Findings: Research staff reported using a range of communication strategies which varied in implementation, uptake, and suitability between clinical research studies and sites. Four major themes were identified in the interviews: [1] staff use innovative pragmatism to communicate; [2] staff tailor the communication strategies to fit the participants' context; [3] the site, its systems, and staff training all impact communication; [4] successful communication requires collaboration between stakeholders.
Conclusion: There are a variety of communication strategies, methods and activities research staff currently employ with trial participants, which vary in purpose, method, resources required, and suitability between studies and sites. Thorough consideration of the participants' contexts and the capacity of research sites is crucial for the design of studies which allow for effective communication between the research team and participants. The authors encourage those developing clinical research projects to involve site staff and consumer representatives early in planning for communication with participants.
Title: "Exploratory interviews with Australian clinical research staff on how they communicate with participants." BMC Medical Research Methodology 24(1): 319. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11670412/pdf/