Rahul Kashyap, Gayane Yenokyan, Robert Joyner, Melissa Gerstenhaber, Mary Alderfer, Erika Siegrist, Joan Moore, Channing J. Paller, Hanan Aboumatar, James J. Potter, Stanley Watkins Jr, John E. Niederhuber, Daniel E. Ford, Adrian Dobs
Clinical research studies are becoming increasingly complex, resulting in compounded work burden and longer study cycle times, each fueling runaway costs. Protocol complexity often results in inadequate recruitment and insufficient sample sizes, which challenges validity and generalizability. Recognizing the need for an alternative model that engages researchers and sponsors and brings clinical research opportunities to the broader community, clinical research networks (CRNs) have been proposed and initiated in the United States and other parts of the world. We report on the Johns Hopkins Clinical Research Network (JHCRN), established in 2009 as a multi-disease research collaboration between academic medical centers and community hospitals/health systems. We discuss the vision, governance, infrastructure, participating hospitals' characteristics, and lessons learned in creating this partnership. Designed to leverage organized patient communities, community-based investigators, and academic researchers, the JHCRN provides expedited research across nine health systems in the mid-Atlantic region. With one IRB of record, a centralized contracting office, and a pool of dedicated network coordinators, it facilitates research partnerships that expand collaboration among hospitals/health systems of differing sizes and types in a region. As of August 2024, a total of 81 studies (clinical trials, cohort studies, and comparative effectiveness research) have been conducted, with funding from the NIH, private foundations, and industry. The JHCRN experience has enhanced understanding of the complexity of participating sites and associated ambulatory practices. In conclusion, the CRN, as an academic–community partnership, provides an infrastructure for multiple-disease studies, shared risk, and increased investigator and volunteer engagement.
{"title":"Clinical Research Network: JHCRN Infrastructure and Lessons Learned","authors":"Rahul Kashyap, Gayane Yenokyan, Robert Joyner, Melissa Gerstenhaber, Mary Alderfer, Erika Siegrist, Joan Moore, Channing J. Paller, Hanan Aboumatar, James J. Potter, Stanley Watkins Jr, John E. Niederhuber, Daniel E. Ford, Adrian Dobs","doi":"10.1111/cts.70123","DOIUrl":"10.1111/cts.70123","url":null,"abstract":"<p>Clinical research studies are becoming increasingly complex resulting in compounded work burden and longer study cycle times, each fueling runaway costs. The impact of protocol complexity often results in inadequate recruitment and insufficient sample sizes, which challenges validity and generalizability. Understanding the need to provide an alternative model to engage researchers and sponsors and bringing clinical research opportunities to the broader community, clinical research networks (CRN) have been proposed and initiated in the United States and other parts of the world. We report on the Johns Hopkins Clinical Research Network (JHCRN), established in 2009 as a multi-disease research collaboration between the academic medical centers and community hospitals/health systems. We have discussed vision, governance, infrastructure, participating hospitals' characteristics, and lessons learned in creating this partnership. Designed to leverage organized patient communities, community-based investigators, and academic researchers, the JHCRN provides expedited research across nine health systems in the mid-Atlantic region. With one IRB of record, a centralized contracting office, and a pool of dedicated network coordinators, it facilitates research partnerships to expand research collaborations among the differing sizes and types of hospitals/health systems in a region. As of August 2024, total 81 studies-clinical trials, cohort studies, and comparative effectiveness research have been conducted, with funding from the NIH, private foundations, and industry. The JHCRN experience has enhanced understanding of the complexity of participating sites and associated ambulatory practices. In conclusion, the CRN, as an academic–community partnership, provides an infrastructure for multiple disease studies, shared risk, and increased investigator and volunteer engagement.</p>","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754071/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143025555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jiahui Zhang, Wei Cheng, Dongkai Li, Guoyu Zhao, Xianli Lei, Na Cui
This study aimed to develop and validate a nomogram based on lymphocyte subtyping and clinical factors for the early and rapid prediction of intra-abdominal candidiasis (IAC) in septic patients. A prospective cohort study of 633 consecutive patients diagnosed with sepsis and intra-abdominal infection (IAI) was performed. We assessed the clinical characteristics and lymphocyte subsets at the onset of IAI. A machine-learning random forest model was used to select important variables, and multivariate logistic regression was used to analyze the factors influencing IAC. A nomogram model was constructed, and the discrimination, calibration, and clinical effectiveness of the model were verified. Receipt of high-dose corticosteroids, the CD4+ T/CD8+ T cell ratio, total parenteral nutrition, gastrointestinal perforation, (1,3)-β-D-glucan (BDG) positivity, and receipt of broad-spectrum antibiotics were independent predictors of IAC. Using the above parameters to establish a nomogram, the area under the curve (AUC) values of the nomogram in the derivation and validation cohorts were 0.822 (95% CI 0.777–0.868) and 0.808 (95% CI 0.739–0.876), respectively. The AUC in the derivation cohort was greater than that of the Candida score [0.822 (95% CI 0.777–0.868) vs. 0.521 (95% CI 0.478–0.563), p < 0.001]. The calibration curve showed good agreement between the nomogram's predicted and observed values, and Decision Curve Analysis (DCA) showed that the nomogram had high clinical value. In conclusion, we established a nomogram based on the CD4+/CD8+ T-cell ratio and clinical risk factors that can help clinicians quickly rule out IAC or identify patients at greater risk for IAC at the onset of infection.
{"title":"Establishment and Validation of a Machine-Learning Prediction Nomogram Based on Lymphocyte Subtyping for Intra-Abdominal Candidiasis in Septic Patients","authors":"Jiahui Zhang, Wei Cheng, Dongkai Li, Guoyu Zhao, Xianli Lei, Na Cui","doi":"10.1111/cts.70140","DOIUrl":"10.1111/cts.70140","url":null,"abstract":"<p>This study aimed to develop and validate a nomogram based on lymphocyte subtyping and clinical factors for the early and rapid prediction of Intra-abdominal candidiasis (IAC) in septic patients. A prospective cohort study of 633 consecutive patients diagnosed with sepsis and intra-abdominal infection (IAI) was performed. We assessed the clinical characteristics and lymphocyte subsets at the onset of IAI. A machine-learning random forest model was used to select important variables, and multivariate logistic regression was used to analyze the factors influencing IAC. A nomogram model was constructed, and the discrimination, calibration, and clinical effectiveness of the model were verified. High-dose corticosteroids receipt, the CD4<sup>+</sup>T/CD8<sup>+</sup> T ratio, total parenteral nutrition, gastrointestinal perforation, (1,3)-β-D-glucan (BDG) positivity and broad-spectrum antibiotics receipt were independent predictors of IAC. Using the above parameters to establish a nomogram, the area under the curve (AUC) values of the nomogram in the derivation and validation cohorts were 0.822 (95% CI 0.777–0.868) and 0.808 (95% CI 0.739–0.876), respectively. The AUC in the derivation cohort was greater than the Candida score [0.822 (95% CI 0.777–0.868) vs. 0.521 (95% CI 0.478–0.563), <i>p</i> < 0.001]. The calibration curve showed good predictive values and observed values of the nomogram; the Decision Curve Analysis (DCA) results showed that the nomogram had high clinical value. In conclusion, we established a nomogram based on the CD4<sup>+</sup>/CD8<sup>+</sup> T-cell ratio and clinical risk factors that can help clinical physicians quickly rule out IAC or identify patients at greater risk for IAC at the onset of infection.</p>","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11747989/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marin Lahouati, Mélanie Oudart, Philippe Alzieu, Candice Chapouly, Antoine Petitcollin, Fabien Xuereb
Penetration of antimicrobial treatments into the cerebrospinal fluid is essential to successfully treat infections of the central nervous system. This penetration is hindered by different barriers, including the blood–brain barrier, which is the most impermeable. However, inflammation may lead to structural alterations of these barriers, modifying their permeability. The impact of blood–brain barrier disruption on the cerebrospinal fluid (CSF) penetration of linezolid and tedizolid (antibiotics that may be alternatives for treating nosocomial meningitis) remains unknown. The aim of this study was to evaluate the impact of blood–brain barrier disruption on the CSF penetration of linezolid and tedizolid. Female C57BL/6J mice were used. Blood–brain barrier disruption was induced by intraperitoneal administration of lipopolysaccharide. Linezolid (40 mg/kg) or tedizolid phosphate (20 mg/kg) was injected intraperitoneally. All plasma and CSF samples were analyzed with a validated UPLC-MS/MS method. Pharmacokinetic parameters were calculated using a non-compartmental approach based on free drug concentrations. The penetration ratio from plasma into the CSF was calculated as the ratio of the areas under the concentration–time curve from 0 to 8 h (AUC0–8h,CSF/AUC0–8h,plasma). The linezolid penetration ratio was 46.5% in the control group and 46.1% in the lipopolysaccharide group. For tedizolid, the penetration ratio was 5.5% in the control group and 15.5% in the lipopolysaccharide group. In conclusion, the CSF penetration of linezolid is not affected by blood–brain barrier disruption, unlike that of tedizolid, whose penetration ratio increased.
{"title":"Penetration of linezolid and tedizolid in cerebrospinal fluid of mouse and impact of blood–brain barrier disruption","authors":"Marin Lahouati, Mélanie Oudart, Philippe Alzieu, Candice Chapouly, Antoine Petitcollin, Fabien Xuereb","doi":"10.1111/cts.70100","DOIUrl":"10.1111/cts.70100","url":null,"abstract":"<p>Penetration of antimicrobial treatments into the cerebrospinal fluid is essential to successfully treat infections of the central nervous system. This penetration is hindered by different barriers, including the blood–brain barrier, which is the most impermeable. However, inflammation may lead to structural alterations of these barriers, modifying their permeability. The impact of blood–brain barrier disruption on linezolid and tedizolid (antibiotics that may be alternatives to treat nosocomial meningitis) penetration in cerebrospinal fluid (CSF) remains unknown. The aim of this study is to evaluate the impact of blood brain barrier disruption on CSF penetration of linezolid and tedizolid. Female C57BI/6 J mice were used. Blood–brain barrier disruption was induced by an intraperitoneal administration of lipopolysaccharide. Linezolid (40 mg/kg) or tedizolid-phosphate (20 mg/kg) were injected intraperitoneally. All the plasma and CSF samples were analyzed with a validated UPLC-MS/MS method. Pharmacokinetic parameters were calculated using a non-compartmental approach based on the free drug concentration. The penetration ratio from the plasma into the CSF was calculated by the AUC<sub>0-8h</sub> (Area Under Curve) ratio (AUC<sub>0-8hCSF</sub>/AUC<sub>0-8hplasma</sub>). Linezolid penetration ratio was 46.5% in control group and 46.1% in lipopolysaccharide group. Concerning tedizolid, penetration ratio was 5.5% in control group and 15.5% in lipopolysaccharide group. In conclusion, CSF penetration of linezolid is not impacted by blood–brain barrier disruption, unlike tedizolid, whose penetration ratio increased.</p>","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11746922/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
<p>The advent of AI has brought transformative changes across many fields, particularly in biomedical field, where AI is now being used to facilitate drug discovery and development, enhance diagnostic and prognostic accuracy, and support clinical decision-making. For example, since 2021, there has been a notable increase in AI-related submissions to the US Food and Drug Administration (FDA) Center for Drug Evaluation and Research (CDER), reflecting the rapid expansion of AI applications in drug development [<span>1</span>]. In addition, the rapid growth in AI health applications is reflected by the exponential increase in the number of such studies found on PubMed [<span>2</span>]. However, the translation of AI models from development to real-world deployment remains challenging. This is due to various factors, including data drift, where the characteristics of data in the deployment phase differ from those used in model training. Consequently, ensuring the performance of medical AI models in the deployment phase has become a critical area of focus, as AI models that excel in controlled environments may still struggle with real-world variability, leading to poor predictions for patients whose characteristics differ significantly from the training set. Such cases, often referred to as OOD samples, present a major challenge for AI-driven decision-making, such as making diagnosis or selecting treatments for a patient. The failure to recognize these OOD samples can result in suboptimal or even harmful decisions.</p><p>To address this, we propose a prescreening procedure for medical AI model deployment (especially when the AI model risk is high), aimed at avoiding or flagging the predictions by AI models on OOD samples (Figure 1a). This procedure, we believe, can be beneficial for ensuring the trustworthiness of AI in medicine.</p><p>OOD scenarios are a common challenge in medical AI applications. For instance, a model trained predominantly on data from a specific demographic group may underperform when applied to patients from different demographic groups, resulting in inaccurate predictions. OOD cases can also arise when AI models encounter data that differ from the training data due to factors like variations in medical practices and treatment landscapes of the clinical trials. These issues can potentially lead to harm to patients (e.g., misdiagnosis, inappropriate treatment recommendations), and a loss of trust in AI systems.</p><p>The importance of detecting OOD samples to define the scope of use for AI models has been highlighted in multiple research and clinical studies. A well-known example is the Medical Out-of-Distribution-Analysis (MOOD) Challenge [<span>3</span>], which benchmarked OOD detection algorithms across several supervised and unsupervised models, including autoencoder neural networks, U-Net, vector-quantized variational autoencoders, principle component analysis (PCA), and linear Gaussian process regression. These algorithms wer
{"title":"First, Do No Harm: Addressing AI's Challenges With Out-of-Distribution Data in Medicine","authors":"Chu Weng, Wesley Lin, Sherry Dong, Qi Liu, Hanrui Zhang","doi":"10.1111/cts.70132","DOIUrl":"10.1111/cts.70132","url":null,"abstract":"<p>The advent of AI has brought transformative changes across many fields, particularly in biomedical field, where AI is now being used to facilitate drug discovery and development, enhance diagnostic and prognostic accuracy, and support clinical decision-making. For example, since 2021, there has been a notable increase in AI-related submissions to the US Food and Drug Administration (FDA) Center for Drug Evaluation and Research (CDER), reflecting the rapid expansion of AI applications in drug development [<span>1</span>]. In addition, the rapid growth in AI health applications is reflected by the exponential increase in the number of such studies found on PubMed [<span>2</span>]. However, the translation of AI models from development to real-world deployment remains challenging. This is due to various factors, including data drift, where the characteristics of data in the deployment phase differ from those used in model training. Consequently, ensuring the performance of medical AI models in the deployment phase has become a critical area of focus, as AI models that excel in controlled environments may still struggle with real-world variability, leading to poor predictions for patients whose characteristics differ significantly from the training set. Such cases, often referred to as OOD samples, present a major challenge for AI-driven decision-making, such as making diagnosis or selecting treatments for a patient. The failure to recognize these OOD samples can result in suboptimal or even harmful decisions.</p><p>To address this, we propose a prescreening procedure for medical AI model deployment (especially when the AI model risk is high), aimed at avoiding or flagging the predictions by AI models on OOD samples (Figure 1a). This procedure, we believe, can be beneficial for ensuring the trustworthiness of AI in medicine.</p><p>OOD scenarios are a common challenge in medical AI applications. For instance, a model trained predominantly on data from a specific demographic group may underperform when applied to patients from different demographic groups, resulting in inaccurate predictions. OOD cases can also arise when AI models encounter data that differ from the training data due to factors like variations in medical practices and treatment landscapes of the clinical trials. These issues can potentially lead to harm to patients (e.g., misdiagnosis, inappropriate treatment recommendations), and a loss of trust in AI systems.</p><p>The importance of detecting OOD samples to define the scope of use for AI models has been highlighted in multiple research and clinical studies. A well-known example is the Medical Out-of-Distribution-Analysis (MOOD) Challenge [<span>3</span>], which benchmarked OOD detection algorithms across several supervised and unsupervised models, including autoencoder neural networks, U-Net, vector-quantized variational autoencoders, principle component analysis (PCA), and linear Gaussian process regression. 
These algorithms wer","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11739455/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
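The prescreening gate proposed in this letter can be illustrated with one of the detector families the MOOD Challenge benchmarked, PCA reconstruction error. The 99th-percentile threshold rule and the synthetic data below are assumptions for illustration, not the authors' specification.

```python
# Deployment prescreening gate using PCA reconstruction error: samples whose
# error exceeds a threshold calibrated on training data are flagged for human
# review instead of being scored by the AI model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 20))                    # in-distribution training data
X_new = np.vstack([rng.normal(size=(5, 20)),             # patients like the training set
                   rng.normal(loc=6.0, size=(5, 20))])   # shifted, OOD-like patients

pca = PCA(n_components=5).fit(X_train)

def recon_error(X):
    """Distance between each sample and its projection onto the training subspace."""
    return np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)

threshold = np.percentile(recon_error(X_train), 99)      # assumed calibration rule

for i, err in enumerate(recon_error(X_new)):
    action = "flag for human review" if err > threshold else "pass to AI model"
    print(f"patient {i}: error {err:.2f} -> {action}")
```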
Kayla R. Tunehag, Ashton F. Pearce, Layna P. Fox, George A. Stouffer, Sten Solander, Craig R. Lee
In neurovascular settings, including treatment and prevention of ischemic stroke and prevention of thromboembolic complications after percutaneous neurointerventional procedures, dual antiplatelet therapy with a P2Y12 inhibitor and aspirin is the standard of care. Clopidogrel remains the most commonly prescribed P2Y12 inhibitor for neurovascular indications. However, patients carrying CYP2C19 no-function alleles have diminished capacity for inhibition of platelet reactivity due to reduced formation of clopidogrel's active metabolite. In patients with cardiovascular disease undergoing a percutaneous coronary intervention, CYP2C19 no-function allele carriers treated with clopidogrel experience a higher risk of major adverse cardiovascular outcomes, and multiple large prospective outcomes studies have shown an improvement in clinical outcomes when antiplatelet therapy selection was guided by CYP2C19 genotype. Similarly, accumulating evidence has associated CYP2C19 no-function alleles with poor clinical outcomes in clopidogrel-treated patients in neurovascular settings. However, the utility of implementing a genotype-guided antiplatelet therapy selection strategy in the setting of neurovascular disease and the clinical outcomes evidence in neurointerventional procedures remains unclear. In this review, we will (1) summarize existing evidence and guideline recommendations related to CYP2C19 genotype-guided antiplatelet therapy in the setting of neurovascular disease, (2) evaluate and synthesize the existing evidence on the relationship of clinical outcomes to CYP2C19 genotype and clopidogrel treatment in patients undergoing a percutaneous neurointerventional procedure, and (3) identify knowledge gaps and discuss future research directions.
{"title":"CYP2C19 Genotype-Guided Antiplatelet Therapy and Clinical Outcomes in Patients Undergoing a Neurointerventional Procedure","authors":"Kayla R. Tunehag, Ashton F. Pearce, Layna P. Fox, George A. Stouffer, Sten Solander, Craig R. Lee","doi":"10.1111/cts.70131","DOIUrl":"10.1111/cts.70131","url":null,"abstract":"<p>In neurovascular settings, including treatment and prevention of ischemic stroke and prevention of thromboembolic complications after percutaneous neurointerventional procedures, dual antiplatelet therapy with a P2Y12 inhibitor and aspirin is the standard of care. Clopidogrel remains the most commonly prescribed P2Y12 inhibitor for neurovascular indications. However, patients carrying <i>CYP2C19</i> no-function alleles have diminished capacity for inhibition of platelet reactivity due to reduced formation of clopidogrel's active metabolite. In patients with cardiovascular disease undergoing a percutaneous coronary intervention, <i>CYP2C19</i> no-function allele carriers treated with clopidogrel experience a higher risk of major adverse cardiovascular outcomes, and multiple large prospective outcomes studies have shown an improvement in clinical outcomes when antiplatelet therapy selection was guided by <i>CYP2C19</i> genotype. Similarly, accumulating evidence has associated <i>CYP2C19</i> no-function alleles with poor clinical outcomes in clopidogrel-treated patients in neurovascular settings. However, the utility of implementing a genotype-guided antiplatelet therapy selection strategy in the setting of neurovascular disease and the clinical outcomes evidence in neurointerventional procedures remains unclear. In this review, we will (1) summarize existing evidence and guideline recommendations related to <i>CYP2C19</i> genotype-guided antiplatelet therapy in the setting of neurovascular disease, (2) evaluate and synthesize the existing evidence on the relationship of clinical outcomes to <i>CYP2C19</i> genotype and clopidogrel treatment in patients undergoing a percutaneous neurointerventional procedure, and (3) identify knowledge gaps and discuss future research directions.</p>","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11739457/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study investigated the success rate of Phase 1 clinical trial entry and the factors influencing it in oncology projects involving academia–industry collaboration during the discovery and preclinical stages. A total of 344 oncology projects in the discovery stage and 360 in the preclinical stage, initiated through collaborations with universities or hospitals between 2015 and 2019, were analyzed. The Phase 1 clinical trial entry success rates for oncology collaborative projects were 9.9% from the discovery stage and 24.2% from the preclinical stage. For discovery-stage contracts, strong statistical significance was observed for contract type (co-development OR 16.45, p = 0.008; licensing OR 42.43, p < 0.001) and technology (cell or gene therapy OR 3.82, p = 0.008). In contrast, for preclinical-stage contracts, a significant association was noted for cancer type (blood cancer OR 2.24, p = 0.004), while the year of contract signing showed relatively weak statistical significance (OR 1.24, p = 0.021). No significant associations were observed for partner firm size or partnership territory. This study sheds light on how the characteristics of partnerships influence the success rates of early-phase research, providing valuable insights for future strategic planning in oncology drug development.
{"title":"From Lab to Clinic: Effect of Academia–Industry Collaboration Characteristics on Oncology Phase 1 Trial Entry","authors":"Wonseok Yang, Sang-Won Lee","doi":"10.1111/cts.70135","DOIUrl":"10.1111/cts.70135","url":null,"abstract":"<p>This study investigated the success rate of Phase 1 clinical trial entry and the factors influencing it in oncology projects involving academia–industry collaboration during the discovery and preclinical stages. A total of 344 oncology projects in the discovery stage and 360 in the preclinical stage, initiated through collaborations with universities or hospitals between 2015 and 2019, were analyzed. The Phase 1 clinical trial entry success rates for oncology collaborative projects were 9.9% from the discovery stage and 24.2% from the preclinical stage. For discovery stage contracts, strong statistical significance was observed for contract type (co-development OR 16.45, <i>p</i> = 0.008; licensing OR 42.43, <i>p</i> = 0.000) and technology (cell or gene therapy OR 3.82, <i>p</i> = 0.008). In contrast, for preclinical stage contracts, significant changes were noted for cancer type (blood cancer OR 2.24, <i>p</i> = 0.004), while the year of contract signing showed a relatively weak statistical significance (OR 1.24, <i>p</i> = 0.021). No significant changes were observed concerning partner firm size and the partnership territory. This study sheds light on how the characteristics of partnerships influence the success rates of early-phase research, providing valuable insights for future strategic planning in oncology drug development.</p>","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730079/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The placebo effect represents a serious confounder for the assessment of treatment effect, to the extent that it has become increasingly difficult to develop antidepressant medications capable of outperforming placebo. Treatment effect in randomized, placebo-controlled trials is usually estimated by the mean baseline-adjusted difference in treatment response between the active and placebo arms and is a function of treatment-specific and non-specific effects. The non-specific treatment effect varies subject by subject, conditional on the individual propensity to respond to placebo. This effect is not estimable at an individual level using the conventional parallel-group study design, since each subject enrolled in the trial is assigned to receive either active treatment or placebo, but not both. The objective of this study was to conduct a comparative analysis of machine learning methodologies for estimating the individual probability of a non-specific treatment effect. The estimated probability is expected to support novel methodological approaches for better controlling the effect of excessively high placebo response. To this end, six machine learning methodologies (gradient boosting machine, lasso regression, logistic regression, support vector machines, k-nearest neighbors, and random forests) were compared to the multilayer perceptron artificial neural network (ANN) methodology for predicting the probability of individual non-specific treatment response. ANN achieved the highest overall accuracy among all methods tested. A fivefold cross-validation was used to assess the performance and overfitting risk of the ANN model. An analysis conducted without subjects showing a non-specific effect indicated a significant increase in signal detection, with a significant increase in effect size.
{"title":"Comparison of Different Machine Learning Methodologies for Predicting the Non-Specific Treatment Response in Placebo Controlled Major Depressive Disorder Clinical Trials","authors":"Roberto Gomeni, Françoise Bressolle-Gomeni","doi":"10.1111/cts.70128","DOIUrl":"10.1111/cts.70128","url":null,"abstract":"<p>Placebo effect represents a serious confounder for the assessment of treatment effect to the extent that it has become increasingly difficult to develop antidepressant medications appropriate for outperforming placebo. Treatment effect in randomized, placebo-controlled trials, is usually estimated by the mean baseline adjusted difference of treatment response in active and placebo arms and is function of treatment-specific and non-specific effects. The non-specific treatment effect varies subject by subject conditional to the individual propensity to respond to placebo. This effect is not estimable at an individual level using the conventional parallel-group study design, since each subject enrolled in the trial is assigned to receive either active treatment or placebo, but not both. The objective of this study was to conduct a comparative analysis of the machine learning methodologies to estimate the individual probability of a non-specific treatment effect. The estimated probability is expected to support novel methodological approaches for better controlling effect of excessively high placebo response. At this purpose, six machine learning methodologies (gradient boosting machine, lasso regression, logistic regression, support vector machines, <i>k</i>-nearest neighbors, and random forests) were compared to the multilayer perceptrons artificial neural network (ANN) methodology for predicting the probability of individual non-specific treatment response. ANN achieved the highest overall accuracy among all methods tested. A fivefold cross-validation was used to assess performances and risks of overfitting of the ANN model. The analysis conducted without subjects with non-specific effect indicated a significant increase of signal detection with significant increase in effect size.</p>","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11729444/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nai Lee, Su-jin Rhee, Seong Min Koo, So Won Kim, Gyo Eun Lee, Yoon A Yie, Yun Kim
Monoclonal antibodies (mAbs) are critical components in the therapeutic landscape, but their dosing strategies often evolve post-approval as new data emerge. This review evaluates post-marketing label changes in dosing information for FDA-approved mAbs from January 2015 to September 2024, with a focus on both initial and extended indications. We systematically analyzed dosing modifications, categorizing them into six predefined groups: dose increases or decreases, inclusion of new patient populations by body weight or age, shifts from body weight-based dosing to fixed regimens, and adjustments in infusion rates. Among the 86 mAbs evaluated, 21% (n = 18) exhibited changes in dosing information for the initial indication, with a median time to modification of 37.5 months (range: 5–76 months). Furthermore, among mAbs with extended indications (n = 26), 19.2% (n = 5) underwent dosing changes in their first extensions, with a median time to adjustment of 31 months (range: 8–71 months). Key drivers for these adjustments included optimizing therapeutic efficacy, addressing safety concerns, accommodating special populations, and enhancing patient convenience. We also discuss the role of model-informed drug development, real-world evidence, and pharmacogenomics in refining mAb dosing strategies. These insights underscore the importance of ongoing monitoring and data integration in the post-marketing phase, providing a foundation for future precision medicine approaches in mAb therapy.
{"title":"Navigating Recent Changes in Dosing Information: Dynamics of FDA-Approved Monoclonal Antibodies in Post-Marketing Realities","authors":"Nai Lee, Su-jin Rhee, Seong Min Koo, So Won Kim, Gyo Eun Lee, Yoon A Yie, Yun Kim","doi":"10.1111/cts.70125","DOIUrl":"10.1111/cts.70125","url":null,"abstract":"<p>Monoclonal antibodies (mAbs) are critical components in the therapeutic landscape, but their dosing strategies often evolve post-approval as new data emerge. This review evaluates post-marketing label changes in dosing information for FDA-approved mAbs from January 2015 to September 2024, with a focus on both initial and extended indications. We systematically analyzed dosing modifications, categorizing them into six predefined groups: Dose increases or decreases, inclusion of new patient populations by body weight or age, shifts from body weight-based dosing to fixed regimens, and adjustments in infusion rates. Among the 86 mAbs evaluated, 21% (<i>n</i> = 18) exhibited changes in dosing information for the initial indication, with a median time to modification of 37.5 months (range: 5–76 months). Furthermore, for mAbs with extended indications (<i>n</i> = 26), 19.2% (<i>n</i> = 5) underwent dosing changes in their first extensions, with a median time to adjustment of 31 months (range: 8–71 months). Key drivers for these adjustments included optimizing therapeutic efficacy, addressing safety concerns, accommodating special populations, and enhancing patient convenience. We also discuss the role of model-informed drug development, real-world evidence, and pharmacogenomics in refining mAb dosing strategies. These insights underscore the importance of ongoing monitoring and data integration in the post-marketing phase, providing a foundation for future precision medicine approaches in mAb therapy.</p>","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11729449/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sebastian Haertter, Maximilian Lobmeyer, Brian C. Ferslew, Pallabi Mitra, Thomas Arnhold
Hepatic impairment (HI) trials are traditionally part of clinical pharmacology development, assessing the need for dose adaptation in people whose metabolic capacity is impaired by liver disease. This review aimed to examine data from dedicated HI studies, cluster these data into various categories, and connect the effect of HI with reported pharmacokinetic (PK) properties in order to identify patterns that may allow waivers, extrapolations, or adapted HI study designs. Compounds were considered "positive" or "negative" based on whether the AUC or Cmax ratio between hepatically impaired participants and healthy controls was ≥ 2 or ≤ 0.5. When more than one HI severity stratum per compound was included in the HI trial, the AUC ratios for mild, moderate, and severe HI were compared to investigate the increase across HI categories. Of the 436 hits in total, relevant PK information could be retrieved for 273 compounds, of which 199 were categorized as negative, 69 as positive (up), and 5 as positive (down). Fourteen of the 69 compounds demonstrated a steep increase in AUC ratios from mild to severe HI. Compounds demonstrating a steep increase typically had high plasma protein binding (> 95%), a high volume of distribution, lower absolute bioavailability, and minor renal elimination; they were predominantly metabolized by CYP3A4 or CYP2D6, and the majority were substrates of OATP1B1. While studies in all severity strata may be warranted for compounds with a steep increase, such patterns may also help estimate appropriate doses in an HI trial. On the other hand, for compounds with a slow or no increase across HI severity strata, reduced HI trials may be justified, e.g., testing PK only in moderate HI.
{"title":"New Insights Into Hepatic Impairment (HI) Trials","authors":"Sebastian Haertter, Maximilian Lobmeyer, Brian C. Ferslew, Pallabi Mitra, Thomas Arnhold","doi":"10.1111/cts.70130","DOIUrl":"10.1111/cts.70130","url":null,"abstract":"<p>Hepatic impairment (HI) trials are traditionally part of the clinical pharmacology development to assess the need for dose adaptation in people with impaired metabolic capacity due to their diseased liver. This review aimed at looking into the data from dedicated HI studies, cluster these data into various categories and connect the effect by HI with reported pharmacokinetics (PK) properties in order to identify patterns that may allow waiver, extrapolations, or adapted HI study designs. Based on a ratio ≥ 2 or ≤ 0.5 in AUC or Cmax between hepatically impaired participants/healthy controls these were considered “positive” or “negative”. In case of more than one HI severity stratum per compound included in the HI trial, the comparison of the AUC ratios for mild, moderate, or severe HI were used to investigate the increase across HI categories. For the in total 436 hits, relevant PK information could be retrieved for 273 compounds of which 199 were categorized negative, 69 positive ups and 5 positive downs. Fourteen out of 69 compounds demonstrated a steep increase in the AUC ratios from mild to severe HI. Compounds demonstrating a steep increase typically had a high plasma protein binding of > 95%, high volume of distribution, lower absolute bioavailability, minor elimination via the kidneys, were predominantly metabolized by CYP3A4 or CYP2D6 and the majority of these compounds were substrates of OATP1B1. While for compounds with steep increase studies in all severity strata may be warranted they may also offer the potential to estimate the appropriate doses in an HI trial. On the other hand, for compounds with slow or no increase across HI severity strata, reduced HI trials may be justified, e.g. only testing PK in moderate HI.</p>","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11724149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142967405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Karthik Raman, Rukmini Kumar, Cynthia J. Musante, Subha Madhavan
The pharmaceutical industry constantly strives to improve drug development processes to reduce costs, increase efficiencies, and enhance therapeutic outcomes for patients. Model-Informed Drug Development (MIDD) uses mathematical models to simulate intricate processes involved in drug absorption, distribution, metabolism, and excretion, as well as pharmacokinetics and pharmacodynamics. Artificial intelligence (AI), encompassing techniques such as machine learning, deep learning, and Generative AI, offers powerful tools and algorithms to efficiently identify meaningful patterns, correlations, and drug–target interactions from big data, enabling more accurate predictions and novel hypothesis generation. The union of MIDD with AI enables pharmaceutical researchers to optimize drug candidate selection, dosage regimens, and treatment strategies through virtual trials to help derisk drug candidates. However, several challenges, including the availability of relevant, labeled, high-quality datasets, data privacy concerns, model interpretability, and algorithmic bias, must be carefully managed. Standardization of model architectures, data formats, and validation processes is imperative to ensure reliable and reproducible results. Moreover, regulatory agencies have recognized the need to adapt their guidelines to evaluate recommendations from AI-enhanced MIDD methods. In conclusion, integrating model-driven drug development with AI offers a transformative paradigm for pharmaceutical innovation. By integrating the predictive power of computational models and the data-driven insights of AI, the synergy between these approaches has the potential to accelerate drug discovery, optimize treatment strategies, and usher in a new era of personalized medicine, benefiting patients, researchers, and the pharmaceutical industry as a whole.
{"title":"Integrating Model-Informed Drug Development With AI: A Synergistic Approach to Accelerating Pharmaceutical Innovation","authors":"Karthik Raman, Rukmini Kumar, Cynthia J. Musante, Subha Madhavan","doi":"10.1111/cts.70124","DOIUrl":"10.1111/cts.70124","url":null,"abstract":"<p>The pharmaceutical industry constantly strives to improve drug development processes to reduce costs, increase efficiencies, and enhance therapeutic outcomes for patients. Model-Informed Drug Development (MIDD) uses mathematical models to simulate intricate processes involved in drug absorption, distribution, metabolism, and excretion, as well as pharmacokinetics and pharmacodynamics. Artificial intelligence (AI), encompassing techniques such as machine learning, deep learning, and Generative AI, offers powerful tools and algorithms to efficiently identify meaningful patterns, correlations, and drug–target interactions from big data, enabling more accurate predictions and novel hypothesis generation. The union of MIDD with AI enables pharmaceutical researchers to optimize drug candidate selection, dosage regimens, and treatment strategies through virtual trials to help derisk drug candidates. However, several challenges, including the availability of relevant, labeled, high-quality datasets, data privacy concerns, model interpretability, and algorithmic bias, must be carefully managed. Standardization of model architectures, data formats, and validation processes is imperative to ensure reliable and reproducible results. Moreover, regulatory agencies have recognized the need to adapt their guidelines to evaluate recommendations from AI-enhanced MIDD methods. In conclusion, integrating model-driven drug development with AI offers a transformative paradigm for pharmaceutical innovation. By integrating the predictive power of computational models and the data-driven insights of AI, the synergy between these approaches has the potential to accelerate drug discovery, optimize treatment strategies, and usher in a new era of personalized medicine, benefiting patients, researchers, and the pharmaceutical industry as a whole.</p>","PeriodicalId":50610,"journal":{"name":"Cts-Clinical and Translational Science","volume":"18 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11724156/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142967402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}