A Phase-Appropriate Risk Assessment Strategy in Support of the Safety of Peptide and Oligonucleotide-Related Impurities
Pub Date: 2025-03-06 | DOI: 10.1208/s12248-025-01023-y | AAPS Journal 27(2):56
Brian W Pack, Robert W Siegel, Paul D Cornwell, Andrea Ferrante, Douglas A Roepke, Michael E Hodsdon, Laurent Malherbe, Mark A Carfagna
There is limited regulatory guidance that outlines the globally acceptable level of individual and total impurities present in peptide and oligonucleotide drug substances that can be supported and accepted during clinical testing. In early clinical development, there is uncertainty regarding the potential toxicological and immunogenicity risk of these impurities relative to the active pharmaceutical ingredient; however, as pharmaceutical development companies move closer to marketing applications, this uncertainty lessens through knowledge gained from clinical and toxicology studies. While these peptide- and oligonucleotide-related impurities are predicted to be under process control and to have the same safety profile as the parent drug substance, they do not offer any inherent advantage to the patient. Thus, the safety and specification control of these impurities are frequently challenged by regulatory agencies. In support of phase-appropriate control strategies, this manuscript presents a risk-based approach to evaluating the safety of peptide and oligonucleotide impurities from a toxicology and immunogenicity perspective. In many cases, the proposed safety threshold is higher than what is accepted by regulatory bodies, but it is still expected to be safe based upon sound toxicological principles, which should be the focus for clinical studies. The risk assessment strategies presented here consider the stage of development, indication, potential impact of unintended cross-reactivity with endogenous proteins, dose, and frequency of dosing throughout development to inform chemistry, manufacturing, and control of the inherent safety risks associated with API-related impurities. Importantly, for the first time, this manuscript establishes a threshold of immunogenicity concern, along with an experimental mitigation plan specifically for peptide impurities, as a function of the development phase.
Cellular Kinetics and Biodistribution of Adoptive T Cell Therapies: from Biological Principles to Effects on Patient Outcomes
Pub Date: 2025-03-03 | DOI: 10.1208/s12248-025-01017-w | AAPS Journal 27(2):55
Ran Li, Abigail K Grosskopf, Louis R Joslyn, Eric Gary Stefanich, Vittal Shivva
Cell-based immunotherapy has revolutionized cancer treatment in recent years and is rapidly expanding as one of the major therapeutic options in immuno-oncology. So far, ten adoptive T cell therapies (TCTs) have been approved by health authorities for cancer treatment, and they have shown remarkable anti-tumor efficacy with potent and durable responses. While adoptive T cell therapies have shown success in treating hematological malignancies, they are lagging behind in establishing promising efficacy in solid tumors, partially due to our incomplete understanding of the cellular kinetics (CK) and biodistribution (including tumoral penetration) of cell therapy products. Indeed, recent clinical studies have provided ample evidence that the CK of TCTs can influence clinical outcomes in both hematological malignancies and solid tumors. In this review, we will discuss the current knowledge on the CK and biodistribution of anti-tumor TCTs. We will first describe the typical CK and biodistribution characteristics of these "living" drugs, and the biological factors that influence these characteristics. We will then review the relationships between CK and pharmacological responses of TCTs, and potential strategies for enhancing the persistence and tumoral penetration of TCTs in the clinic. Finally, we will also summarize bioanalytical methods, preclinical in vitro and in vivo tools, and in silico modeling approaches used to assess the CK and biodistribution of TCTs.
Considerations in Kp,uu,brain-based Strategy for Selecting CNS-targeted Drug Candidates with Sufficient Target Coverage and Substantial Pharmacodynamic Effect
Pub Date: 2025-02-28 | DOI: 10.1208/s12248-025-01035-8 | AAPS Journal 27(2):52
Ling Zou, Huan-Chieh Chien, Devendra Pade, Yanfei Li, Minhkhoi Nguyen, Ravi Kanth Bhamidipati, Zhe Wang, Osatohanmwen Jessica Enogieru, Jan Wahlstrom
Kp,uu,brain is a critical parameter for evaluating the brain penetration of CNS-targeted compounds, reflecting the ratio of the unbound drug concentration in the brain to that in the plasma. While Kp,uu,brain is widely used in the pharmaceutical industry to assess brain exposure, the fidelity of translating Kp,uu,brain to target coverage and pharmacodynamic (PD) effect remains uncertain. This study explores the effectiveness of Kp,uu,brain-based strategies in identifying drug candidates with sufficient target coverage and substantial PD effect. By analyzing reported Kp,uu,brain values, unbound drug concentrations in the brain, and IC50 values against pharmacological targets for 17 drugs, including anticonvulsants, antidepressants, antipsychotics, and antimicrobials, our study demonstrated that while in vitro and in vivo models work well for rank-ordering compounds with high Kp,uu,brain, this parameter does not necessarily translate into adequate target coverage (Cu/IC50). In addition, by leveraging PK and PD profiles of 18 drugs measured in human glioblastoma tumors, our study showed that target coverage (glioblastoma Cu/5xIC50) generally correlates well with PD effect. Additionally, Kp,uu,brain tumor is a better indicator of glioblastoma PD effect than Kp,uu,brain, suggesting that an intact BBB model may not adequately reflect the barrier heterogeneity in brain tumors such as glioblastoma. In conclusion, while Kp,uu,brain provides insight into the extent of brain penetration, our study highlighted the need for integrative approaches combining Kp,uu,brain data with comprehensive PK/PD analysis to prioritize CNS-targeted drug candidates with sufficient target coverage and substantial PD effect.
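For orientation on the two ratios discussed in this abstract, a minimal sketch of how Kp,uu,brain and target coverage (Cu/IC50 and Cu/5xIC50) could be computed is given below; all numerical values are illustrative assumptions and are not taken from the study.

```python
# Illustrative calculation of Kp,uu,brain and target coverage (Cu/IC50).
# All numbers are assumed example values, not data from the study.

def kp_uu_brain(c_u_brain: float, c_u_plasma: float) -> float:
    """Ratio of unbound brain to unbound plasma drug concentration."""
    return c_u_brain / c_u_plasma

def target_coverage(c_u_brain: float, ic50: float, multiple: float = 1.0) -> float:
    """Unbound brain concentration relative to a potency threshold (IC50 or a multiple of it)."""
    return c_u_brain / (multiple * ic50)

c_u_plasma = 120.0   # nM, assumed unbound plasma concentration
c_u_brain = 30.0     # nM, assumed unbound brain concentration
ic50 = 15.0          # nM, assumed in vitro potency against the target

print(f"Kp,uu,brain = {kp_uu_brain(c_u_brain, c_u_plasma):.2f}")            # 0.25: limited penetration
print(f"Cu/IC50     = {target_coverage(c_u_brain, ic50):.2f}")              # 2.00: above the potency threshold
print(f"Cu/5xIC50   = {target_coverage(c_u_brain, ic50, multiple=5):.2f}")  # 0.40: below the 5xIC50 bar
```

As the example suggests, a compound can show a respectable Cu/IC50 while still having a low Kp,uu,brain (or vice versa), which is the disconnect the study examines.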
Estimation of Ganciclovir Exposure in Adults Transplant Patients by Machine Learning
Pub Date: 2025-02-28 | DOI: 10.1208/s12248-025-01034-9 | AAPS Journal 27(2):53
Hamza Sayadi, Yeleen Fromage, Marc Labriffe, Pierre-André Billat, Cyrielle Codde, Selim Arraki Zava, Pierre Marquet, Jean-Baptiste Woillard
Introduction: Valganciclovir, a prodrug of ganciclovir (GCV), is used to prevent cytomegalovirus infection after transplantation, with doses adjusted based on creatinine clearance (CrCL) to target a GCV AUC0-24 h of 40-60 mg*h/L. This sometimes leads to overexposure or underexposure. This study aimed to train, test, and validate machine learning (ML) algorithms for accurate GCV AUC0-24 h estimation in solid organ transplantation.
Methods: We simulated patients for different dosing regimens (900 mg/24 h, 450 mg/24 h, 450 mg/48 h, 450 mg/72 h) using two literature population pharmacokinetic models, allocating 75% for training and 25% for testing. Simulations from two other literature models and real patients provided validation datasets. Three independent sets of ML algorithms were created for each regimen, incorporating CrCL and 2 or 3 concentrations. We evaluated their performance on the testing and validation datasets and compared them with MAP-BE.
Results: XGBoost algorithms using 3 concentrations generated the most accurate predictions. In the testing dataset, they exhibited a relative bias of -0.02 to 1.5% and a relative RMSE of 2.6 to 8.5%. In the validation datasets, a relative bias of 1.5 to 5.8% and 8.9 to 16.5%, and a relative RMSE of 8.5 to 9.6% and 10.7 to 19.7%, were observed depending on the model used. XGBoost algorithms outperformed or matched MAP-BE, showing enhanced generalization and robustness in their estimates. When applied to real patients' data, the algorithms using 2 concentrations showed a relative bias of 1.26% and a relative RMSE of 12.68%.
Conclusions: XGBoost ML models accurately estimated GCV AUC0-24 h from limited samples and CrCL, providing a strategy for optimized therapeutic drug monitoring.
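As a rough illustration of the limited-sampling strategy summarized above, the sketch below trains a gradient-boosted regressor (XGBoost, as in the study) on synthetic data mapping CrCL plus two timed concentrations to AUC0-24 h, and reports relative bias and relative RMSE. The data-generating assumptions and feature layout are placeholders; the study built its models from population pharmacokinetic simulations.

```python
# Hedged sketch: limited-sampling AUC0-24h estimation with gradient boosting.
# Synthetic data stand in for the population-PK simulations used in the study.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 2000
crcl = rng.uniform(20, 120, n)                       # mL/min, assumed covariate range
c2h = rng.lognormal(mean=1.5, sigma=0.4, size=n)     # mg/L at ~2 h post-dose (assumed)
c6h = c2h * np.exp(-0.1 * rng.uniform(3, 5, n))      # mg/L later sample (assumed decay)
auc = 5.0 * (c2h + c6h) + 200.0 / crcl + rng.normal(0, 2, n)   # mg*h/L, toy "true" AUC

X = np.column_stack([crcl, c2h, c6h])
train, test = slice(0, 1500), slice(1500, None)      # 75% / 25% split as in the study

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X[train], auc[train])
pred = model.predict(X[test])

rel_err = (pred - auc[test]) / auc[test]
print(f"relative bias: {100 * rel_err.mean():.2f}%")
print(f"relative RMSE: {100 * np.sqrt((rel_err ** 2).mean()):.2f}%")
```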
In Vitro-In Silico Models to Elucidate Mechanisms of Bile Acid Disposition and Cellular Aerobics in Human Hepatocytes
Pub Date: 2025-02-28 | DOI: 10.1208/s12248-024-01010-9 | AAPS Journal 27(2):51
Kristof De Vos, Raf Mols, Sagnik Chatterjee, Miao-Chan Huang, Patrick Augustijns, Justina Clarinda Wolters, Pieter Annaert
Understanding the kinetics of hepatic processes, such as bile acid (BA) handling and cellular aerobic metabolism, is crucial for advancing our knowledge of liver toxicity, particularly drug-induced cholestasis (DiCho). This article aimed to construct interpretable models with parameter estimations serving as reference values when investigating these cell metrics. Longitudinal datasets on BA disposition and oxygen consumption rates were collected using sandwich-cultured human hepatocytes. Chenodeoxycholic acid (CDCA), lithocholic acid (LCA), as well as their amidated and sulfate-conjugated metabolites, were quantified with liquid chromatography-mass spectrometry. The bile salt export pump (BSEP) abundance was monitored with targeted proteomics and modelled for activity assessment. Oxygen consumption was measured using a Seahorse XFp analyser. Ordinary differential equation-based models were solved in R. The basolateral uptake and efflux clearance of glycine-conjugated CDCA (GCDCA) were estimated at 1.22 µL/min/10⁶ cells (RSE 14%) and 0.11 µL/min/10⁶ cells (RSE 10%), respectively. The GCDCA clearance from canaliculi back to the medium was 2.22 nL/min/10⁶ cells (RSE 17%), and the dissociation constant between (G)CDCA and FXR for regulating BSEP abundance was 25.73 nM (RSE 11%). The sulfation clearance for LCA was 0.19 µL/min/10⁶ cells (RSE 11%). Model performance was further demonstrated by a maximum two-fold deviation of the 95% confidence boundaries from the parameter estimates. These in vitro-in silico models provide a quantitative framework for exploring xenobiotic impacts on BA disposition, BSEP activity, and cellular aerobic metabolism in hepatocytes. Model simulations were consistent with reported in vivo data in progressive familial intrahepatic cholestasis type II patients.
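To make the reported clearance estimates concrete, here is a minimal sketch of a three-compartment (medium, hepatocyte, canaliculi) GCDCA disposition model of the kind described above, solved with SciPy rather than R. The clearances are the values quoted in the abstract, while the compartment volumes and the BSEP-mediated canalicular secretion clearance are assumptions added purely for illustration.

```python
# Hedged sketch of a three-compartment (medium / hepatocyte / canaliculi) GCDCA
# disposition model in the spirit of the study's ODE framework. The clearance
# values are the estimates quoted in the abstract; the volumes and the
# BSEP-mediated canalicular secretion clearance (cl_bsep) are assumed here.
import numpy as np
from scipy.integrate import solve_ivp

cl_up = 1.22        # µL/min/1e6 cells, basolateral uptake (reported)
cl_eff = 0.11       # µL/min/1e6 cells, basolateral efflux (reported)
cl_back = 0.00222   # µL/min/1e6 cells, canaliculi -> medium (reported 2.22 nL/min)
cl_bsep = 1.0       # µL/min/1e6 cells, canalicular secretion (assumed, not reported)

v_med, v_cell, v_can = 200.0, 4.0, 0.5   # µL, assumed compartment volumes per 1e6 cells

def rhs(t, a):
    a_med, a_cell, a_can = a                      # amounts (pmol)
    c_med, c_cell, c_can = a_med / v_med, a_cell / v_cell, a_can / v_can
    j_up, j_eff = cl_up * c_med, cl_eff * c_cell
    j_bsep, j_back = cl_bsep * c_cell, cl_back * c_can
    return [-j_up + j_eff + j_back,               # medium
            j_up - j_eff - j_bsep,                # hepatocyte
            j_bsep - j_back]                      # canaliculi

sol = solve_ivp(rhs, (0, 120), [2000.0, 0.0, 0.0], t_eval=np.linspace(0, 120, 7))
print(np.round(sol.y, 1))   # pmol in medium, cell, and canaliculi over 2 h
```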
From Guidelines to Implementation: A Case Study on Applying ICH M10 for Bioanalytical Assay Cross-Validation
Pub Date: 2025-02-28 | DOI: 10.1208/s12248-025-01038-5 | AAPS Journal 27(2):54
Mianzhi Gu, Andrew Gehman, Brady Nifong, Andrew P Mayer, Vicky Li, Mary Birchler, Kai Wang, Huaping Tang
Bioanalytical cross-validation plays a crucial role in ensuring data exchangeability throughout the assay life cycle for data generated between methods or laboratories. The ICH M10 guideline addresses gaps from previous guidelines concerning the conduct and data analysis of cross-validation studies. While the guideline provides high-level direction, it allows flexibility for sponsors to implement their own statistical analysis and acceptance criteria. This flexibility can lead to variability in interpretation and practices across the industry. This manuscript presents a practical framework for implementing ICH M10 in cross-validation studies, with an emphasis on rigorous experimental design and robust statistical analysis. Our approach integrates Incurred Sample Reanalysis (ISR) criteria, Bland-Altman analysis, and Deming regression. A case study illustrates the application of this framework in cross-validating a pharmacodynamic biomarker assay across multiple laboratories. Our study revealed significant inter-laboratory variability in post-dose measurements, driven by the dynamic equilibrium between free and complexed forms of the biomarker. Assay conditions, such as temperature and incubation time, were found to significantly contribute to the observed variability, suggesting that cross-laboratory comparisons of post-dose results are not reliable. In contrast, pre-treatment baseline samples, with no drug on board, exhibited strong alignment across laboratories. Our experimental design captures variability reflective of clinical trial datasets, and the integrated statistical methodology ensures a robust assessment of method variability. This framework supports reliable bioanalytical data integration for Pharmacokinetic/Pharmacodynamic (PK/PD) modeling and regulatory submissions.
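For readers less familiar with the statistical components named above, the sketch below runs a bare-bones Bland-Altman analysis and a Deming regression (assuming equal error variances in both methods) on paired results from two laboratories; the paired values are simulated placeholders rather than data from the case study.

```python
# Hedged sketch: Bland-Altman agreement metrics and Deming regression for
# paired results from two labs. Simulated values stand in for real assay data.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.lognormal(3.0, 0.5, 60)                 # hypothetical analyte levels
lab_a = truth * rng.normal(1.00, 0.08, 60)          # lab A measurements
lab_b = truth * rng.normal(1.05, 0.08, 60)          # lab B with a small proportional bias

# Bland-Altman on the log scale (proportional differences)
diff = np.log(lab_b) - np.log(lab_a)
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"mean ratio B/A: {np.exp(bias):.3f}, 95% limits of agreement: "
      f"{np.exp(bias - 1.96 * sd):.3f} to {np.exp(bias + 1.96 * sd):.3f}")

# Deming regression assuming equal error variance in both labs (delta = 1)
x, y, delta = lab_a, lab_b, 1.0
sxx, syy = x.var(ddof=1), y.var(ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]
slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
intercept = y.mean() - slope * x.mean()
print(f"Deming slope: {slope:.3f}, intercept: {intercept:.3f}")
```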
Unraveling Ceftriaxone Dosing: Free Drug Prediction, Threshold Optimization, and Model Validation
Pub Date: 2025-02-26 | DOI: 10.1208/s12248-025-01041-w | AAPS Journal 27(2):50
Johnny Michel, Francesco Monti, Fabien Lamoureux, Djibril Diagouraga, Manuel Etienne, Muriel Quillard, Camille Molkhou, Fabienne Tamion, Sandrine Dahyot, Tania Petersen, Tony Pereira, Martine Pestel-Caron, Julien Grosjean, Thomas Duflot
Ceftriaxone is pivotal in treating severe infections; however, predicting unbound plasma ceftriaxone (CEFu) from total ceftriaxone (CEFtot) remains challenging. This study aimed to (1) predict CEFu from CEFtot, (2) determine the optimal target for the CEFtot trough concentration in plasma, (3) perform an external validation of published models, and (4) ascertain whether the CEF dosing regimen was sufficient to achieve the therapeutic objectives. CEFu predictions based on CEFtot were evaluated using previously published models. Optimal CEFtot targets for an MIC of 1 mg/L were calculated to achieve CEFu concentrations above the MIC and 4xMIC 100% of the time. External validation was conducted by assessing serum albumin, CEFtot, and CEFu and comparing predicted CEFu across models. Retrospective data, comprising 408 CEFtot measurements from 222 patients, were analyzed to assess the probability of target attainment (PTA) based on model-predicted CEFu. Optimal CEFtot trough concentration targets ranged from 2.0 mg/L to 16.9 mg/L (1xMIC) and from 7.9 mg/L to 56.2 mg/L (4xMIC) across models. Some models accurately predicted the CEFu obtained from prospective external validation. In the retrospective cohort, PTA ranged from 94.4% to 98.7% for 1xMIC and from 66.9% to 97.3% for 4xMIC. Modeling or direct quantification of CEFu may improve patient outcomes, but achieving this requires standardized analytical approaches and further research to assess the ability of published models to accurately predict CEFu.
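As background on the free-fraction problem the study addresses, the sketch below converts total to unbound ceftriaxone with a generic single-site, saturable albumin-binding model. The binding parameters and molecular weights are assumed illustrative values; this is not one of the published models evaluated in the paper.

```python
# Hedged sketch: unbound ceftriaxone (CEFu) from total (CEFtot) with a generic
# single-site saturable albumin-binding model. Parameter values are assumptions.
import numpy as np

MW_CEF = 554.6      # g/mol, ceftriaxone (approximate)
MW_ALB = 66_500.0   # g/mol, human serum albumin (approximate)

def cef_unbound(cef_tot_mg_l: float, albumin_g_l: float,
                kd_umol_l: float = 30.0, n_sites: float = 1.0) -> float:
    """Solve Ctot = Cu + n*Alb*Cu/(Kd + Cu) for Cu; returns mg/L."""
    ctot = cef_tot_mg_l / MW_CEF * 1e3          # µmol/L
    alb = albumin_g_l / MW_ALB * 1e6            # µmol/L
    b = kd_umol_l + n_sites * alb - ctot
    cu = (-b + np.sqrt(b ** 2 + 4 * kd_umol_l * ctot)) / 2.0
    return cu * MW_CEF / 1e3                    # back to mg/L

for ctot, alb in [(60.0, 40.0), (60.0, 20.0), (15.0, 40.0)]:
    cu = cef_unbound(ctot, alb)
    print(f"CEFtot {ctot:5.1f} mg/L, albumin {alb:4.1f} g/L -> CEFu ~ {cu:5.1f} mg/L "
          f"(fu ~ {cu / ctot:.2f})")
```

With a saturable model of this kind, the unbound fraction rises at higher total concentrations or lower albumin, which is one reason a fixed protein-binding percentage can misestimate CEFu.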
Blend Uniformity and Content Uniformity in Oral Solid Dosage Manufacturing: an IQ Consortium Industry Position Paper
Pub Date: 2025-02-26 | DOI: 10.1208/s12248-025-01028-7 | AAPS Journal 27(2):49
Manel Bautista, Seb Caille, Claudia Corredor, Sankaran Anantharaman, Joseph Bradbury, Bei Chen, W Mark Eickhoff, Gregory Harmon, Mark Johnson, Fasheng Li, Anja Keubler, Laura Pfund, Alexander Russell, Kevin Sutcliffe, Claire Tridon
The IQ Consortium Uniformity Testing Working Group reviewed current blend uniformity (BU) and content uniformity (CU) testing practices among ten member companies. All ten companies presented their current approach to BU and CU testing at the three stages of Product Lifecycle Management: the Process Design Stage, the Process Qualification Stage, and the Continuous Verification Stage. With this information in hand, the Uniformity Testing Working Group members developed a risk-based approach to BU and CU testing and proposed innovative methods to reduce or eliminate blend sampling based on the risk to Uniformity of Dosage Unit (UDU) testing. This approach uses prior knowledge, mechanistic understanding, and structured risk assessment tools to classify formulations as low-risk or high-risk, thus guiding the testing strategy. On this basis, a decision tree was outlined for low-risk and high-risk formulations. The Working Group aims to influence health authorities on the matter, enabling streamlined testing expectations.
AI-Driven Analysis of Drug Marketing Efficiency: Unveiling FDA Approval to Market Release Dynamics
Pub Date: 2025-02-20 | DOI: 10.1208/s12248-025-01039-4 | AAPS Journal 27(2):48
Yoshiyasu Takefuji
This paper explores a novel approach using generative AI to enhance drug marketing strategies in the US pharmaceutical sector. By leveraging an official dataset sourced from the US government, the AI generates Python code to analyze the time interval between FDA approval dates and market release dates. The analysis identifies 370 manufacturers who achieved "zero-day" marketing (drugs marketed immediately upon FDA approval) and 174 manufacturers who marketed their products within less than seven days of approval. Notably, 947 drug products were found to have been marketed prior to FDA approval, raising significant regulatory and ethical concerns that necessitate further discussion. The findings indicate that 174 drug manufacturers have the potential to optimize their marketing strategies to achieve zero-day timelines, prompting an examination of the feasibility of such acceleration within the current regulatory framework and its implications for industry practices. Additionally, this paper discusses the broader impact of AI-driven strategies in the pharmaceutical sector, highlighting their potential not only to enhance marketing speed but also to improve aspects such as compliance and decision-making efficiency. Furthermore, a tutorial on implementing generative AI is provided, detailing how it can be utilized to achieve marketing objectives through interactive conversations with the AI. This practical application demonstrates the technology's capabilities using real dataset analysis and reveals significant findings that could inform future strategies within the industry. The research objectives and their broader implications underscore the need for ongoing dialogue about the ethical and regulatory dimensions of AI in pharmaceutical marketing.
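The kind of script the paper describes the AI generating can be sketched as follows. The file name and column names are hypothetical placeholders, since the abstract does not specify the layout of the government dataset.

```python
# Hedged sketch of the approval-to-market interval analysis. The CSV layout
# (file name and column names) is hypothetical, not the actual dataset schema.
import pandas as pd

df = pd.read_csv("drug_products.csv",
                 parse_dates=["approval_date", "market_release_date"])

df["days_to_market"] = (df["market_release_date"] - df["approval_date"]).dt.days
by_mfr = df.groupby("manufacturer")["days_to_market"].min()

zero_day = (by_mfr == 0).sum()                     # marketed on the approval date
under_week = ((by_mfr > 0) & (by_mfr < 7)).sum()   # within less than seven days
pre_approval = (df["days_to_market"] < 0).sum()    # marketed before FDA approval

print(f"zero-day manufacturers: {zero_day}")
print(f"manufacturers under 7 days: {under_week}")
print(f"products marketed before approval: {pre_approval}")
```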
Bioequivalence of ANDA Data using a Non-Informative Bayesian Procedure (BEST) Compared with the Two One-Sided t-Tests (TOST)
Pub Date: 2025-02-19 | DOI: 10.1208/s12248-024-00981-z | AAPS Journal 27(2):47
Jing Wang, Gregory Campbell, Jae H Lee, Meng Hu, Kairui Feng, Somesh Chattopadhyay, Liang Zhao, Carl C Peck
The regulatory statistical standard for evaluating average bioequivalence (BE) in generic drug testing, formulation bridging, and 505(b)(2) drug comparisons has traditionally employed the two one-sided t-tests (TOST) procedure. A comparison of the BE determinations of TOST and a t-distribution-based, non-informative Bayesian procedure (BayesT) was conducted on 2341 pharmacokinetic parameter datasets from 678 anonymized abbreviated new drug applications (ANDAs) to assess the influence of deviations from lognormality and the presence of extreme values. This research was motivated by the need to assess the reliability of statistical inference procedures for accurate and fair regulatory assessments of BE and non-BE (NBE). The BE criterion for the 90% confidence (CI) or Bayesian credible (CrI) intervals of log test/reference ratios for TOST and BayesT was 0.80-1.25. TOST and BayesT agreed on BE conclusions in 98.9% of cases. There were 20 discordant cases in which TOST rejected BE and BayesT concluded BE; all of these cases failed the lognormality test and 17 contained extreme values (4.2% of the 409 cases that contained extreme values). In this circumstance, TOST can be overly conservative in the presence of extreme values. There were 7 cases in which TOST concluded BE at the outer BE bounds while BayesT marginally rejected BE, despite these cases successfully passing the lognormality test. While TOST remains a widely accepted method for BE assessment, the presence of extreme values and deviations from lognormality may lead to faulty inference of BE. The BayesT methodology offers an alternative approach to TOST that can be prespecified to assess BE in such scenarios. Pre-specified application of the BayesT procedure may ensure more reliable outcomes in regulatory assessments of BE.
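For orientation, the sketch below works through the classical TOST average-bioequivalence decision on simulated paired log-AUC data: BE is concluded when the 90% confidence interval of the geometric mean ratio falls within 0.80-1.25. It is a simplified paired-design illustration, not the crossover ANOVA used for ANDAs and not the BayesT procedure described in the paper.

```python
# Hedged sketch of the TOST average-bioequivalence decision on paired log-AUC
# data (simplified; real ANDA analyses use crossover ANOVA models).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 24
log_ref = rng.normal(np.log(100.0), 0.25, n)            # reference log-AUC per subject
log_test = log_ref + rng.normal(np.log(1.03), 0.15, n)  # test ~3% higher on average (assumed)

diff = log_test - log_ref
mean_d, se = diff.mean(), diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, n - 1)                       # 90% two-sided CI uses the 95th percentile

lo, hi = np.exp(mean_d - t_crit * se), np.exp(mean_d + t_crit * se)
be = 0.80 <= lo and hi <= 1.25
print(f"GMR 90% CI: {lo:.3f} - {hi:.3f} -> {'BE' if be else 'not BE'}")
```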