Pub Date: 2026-01-12 | DOI: 10.1007/s10928-025-10013-8
Günter Heimann, Giulia Lestini, Jochen Zisowsky
PK-QTc analyses are routinely performed as part of most drug development programs. Usually, the concentration of a single compound is related to the QTc effect. In many instances, however, there are several active compounds, for example a parent drug and its metabolite, or combination drugs. Previous authors have shown that running separate PK-QTc analyses for each potentially active compound may lead to biased results, and recommended jointly modeling the impact of both compounds on the corrected QT interval. In this paper we go one step further and propose a formal hypothesis test to exclude a 10 msec effect based on a joint modeling approach when there are potentially two active compounds. In analogy to the situation with just one active compound, where the upper limit of a 90% confidence interval for β·Cmax (with β being the slope of a linear exposure-response relationship and Cmax being the expected maximum concentration of some supra-therapeutic dose) needs to be below 10 msec, we use the upper confidence limits for β1·Cmax,1, β2·Cmax,2, and β1·Cmax,1 + β2·Cmax,2, and exclude a 10 msec effect if all three upper confidence limits are below the 10 msec threshold. We propose a bootstrap approach for decision making, and show via simulations that this approach controls the type I error at 5%. We focus on the situation where the exposure-response relationship is linear in both compounds, but also indicate how the approach can be extended to non-linear situations.
{"title":"Concentration response analyses for QT data with several active compounds.","authors":"Günter Heimann, Giulia Lestini, Jochen Zisowsky","doi":"10.1007/s10928-025-10013-8","DOIUrl":"https://doi.org/10.1007/s10928-025-10013-8","url":null,"abstract":"<p><p>PK-QTc analyses are routinely done as part of most drug development programs. Usually, the PK concentration of a single compound is related to the QTc effect. However, in many instances there are several active compounds, for example a parent drug and its metabolite, or combination drugs. Previous authors have shown that doing separate PK-QTc analyses for each of the potentially active compounds may lead to biased results, and recommended to do joint modeling of the impact of both compounds on the corrected QT interval. In this paper we go one step further and propose a formal hypothesis test to exclude a [Formula: see text]msec effect based on a joint modeling approach when there are potentially two active compounds. In analogy to the situation with just one active compound, where the upper limit of a [Formula: see text]% confidence interval for [Formula: see text] (with [Formula: see text] being the slope of a linear exposure-response relationship and [Formula: see text] being the expected maximum concentration of some supra-therapeutic dose) needs to be below [Formula: see text]msec, we use the upper confidence intervals for [Formula: see text], [Formula: see text], and [Formula: see text] and exclude a [Formula: see text]msec effect if all three upper confidence limits are below the [Formula: see text]msec threshold. We propose a bootstrap approach for decision making, and show via simulations that this approach controls the type I error of [Formula: see text]%. We focus on the situation where exposure-response is linear in both compounds, but also indicate how this can be extended to non-linear situations.</p>","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"53 1","pages":"7"},"PeriodicalIF":2.8,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145959546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-05 | DOI: 10.1007/s10928-025-10012-9
Maddlie Bardol, Andrea Henrich, Celine Sarr, Enrica Mezzalana, Jurgen Langenhorst
Phase I single and multiple ascending dose studies are increasingly used to evaluate the QT liability of new drugs. However, these studies are not primarily designed for concentration-QT analysis, nor to control or document influential factors such as meal intake. In addition, sampling times may vary over the day for operational reasons. This simulation analysis evaluates the reliability of the standard pre-specified linear model (PLM) proposed by Garnett et al. and of an adjusted PLM accounting for food effect and clock time. The QTcF-time profile of a drug with a mild QT liability (upper bound of the 90% confidence interval close to the 10 ms threshold) resulting from a well-controlled study was simulated 1000 times and evaluated with the unadjusted PLM (Scenario A, negative rate: 20.8%). Under suboptimal study designs with uncontrolled and unbalanced (i.e., differing between active treatment and placebo) meal intake and dosing/sampling times, the unadjusted PLM led to an inflated negative rate (up to 50%), while the adjusted PLM was able to correct for the imbalances, resulting in negative rates similar to the reference scenario or lower, i.e., more conservative. In conclusion, good documentation in Phase I trials and adjustment for known influential factors can help to analyze QT effects reliably and support waiving dedicated QT/QTc studies where appropriate.
{"title":"Risks encountered when not adjusting for diurnal variation and food effect in QTcF analysis based on phase I data.","authors":"Maddlie Bardol, Andrea Henrich, Celine Sarr, Enrica Mezzalana, Jurgen Langenhorst","doi":"10.1007/s10928-025-10012-9","DOIUrl":"10.1007/s10928-025-10012-9","url":null,"abstract":"<p><p>Phase I single and multiple ascending dose studies are more and more often used to evaluate QT liability of new drugs. However, these studies are not primarily tailored to concentration-QT analysis and to control or document influential factors such as meal intake. In addition, sampling times may vary over the day for operational reasons. This simulation analysis evaluates the reliability of the standard pre-specified linear model (PLM) proposed by a publication of Garnett et al. and an adjusted PLM accounting for food effect and clock time. The QTcF-time profile of a drug with a mild QT-liability (upper bound of the 90% confidence interval close to the 10 ms threshold) resulting from a well-controlled study was simulated 1000 times and evaluated with the unadjusted PLM (Scenario A, negative rate: 20.8%). Compared to suboptimal study designs with uncontrolled and unbalanced (i.e., differences between active treatment and placebo) differences in meal intake and dosing/sampling times, the unadjusted PLM led to an inflated negative rate (≤ 50%), while the adjusted PLM was able to correct for the imbalances resulting in similar negative rates as the reference scenario or lower, i.e., being more conservative. In conclusion, good documentation in Phase I trials and adjusting for known influential factors can help to analyze QT effects reliably and waive with relevance QT/QTc studies.</p>","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"53 1","pages":"6"},"PeriodicalIF":2.8,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12775080/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145911913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-04 | DOI: 10.1007/s10928-025-10017-4
Zhonghui Huang, Matthew Fidler, Minshi Lan, Iek Leng Cheng, Frank Kloprogge, Joseph F Standing
{"title":"Correction to: An automated pipeline to generate initial estimates for population Pharmacokinetic base models.","authors":"Zhonghui Huang, Matthew Fidler, Minshi Lan, Iek Leng Cheng, Frank Kloprogge, Joseph F Standing","doi":"10.1007/s10928-025-10017-4","DOIUrl":"10.1007/s10928-025-10017-4","url":null,"abstract":"","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"53 1","pages":"5"},"PeriodicalIF":2.8,"publicationDate":"2026-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12764674/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145896588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-16 | DOI: 10.1007/s10928-025-10014-7
Douglas W Chung, Sihem Ait-Oudhia
The landscape of pharmaceutical research and drug development is undergoing a significant evolution, with Model-Informed Drug Discovery and Development (MID3) serving as a transformative approach to accelerate innovation. Realizing the full potential of MID3 requires a concerted global effort to enhance education, foster collaboration, and drive scientific advancement. In this issue, we propose that true progress and equitable outcomes hinge on embracing a multifaceted approach, encompassing not only the inclusion of data from diverse patient populations, such as pediatric and pregnant individuals, but also the fostering of an inclusive environment for a globally diverse group of scientists. We highlight the critical role of globalization in expanding pharmacometrics collaborations across national boundaries and cultural contexts, recognizing that varied perspectives and expertise drive richer insights. Furthermore, we emphasize the importance of equitable access to education and training, particularly for non-native English-speaking institutions, in cultivating a truly global talent pool. Finally, we demonstrate how this expanded diversity fuels innovation, encouraging the adoption of a broader spectrum of quantitative approaches, from classical PK/PD to Physiologically Based Pharmacokinetics (PBPK), Quantitative Systems Pharmacology and Toxicology (QSP/T), and artificial intelligence-driven modeling, thereby addressing complex biological challenges and ultimately achieving the "right dose for the right patient at the right time." This editorial emphasizes that by intentionally integrating globalization, education, and innovation, the pharmacometrics community can catalyze profound change in MID3, leading to more effective and inclusive medicines for all.
{"title":"Catalyzing change in MID3 through globalization, education, and innovation.","authors":"Douglas W Chung, Sihem Ait-Oudhia","doi":"10.1007/s10928-025-10014-7","DOIUrl":"10.1007/s10928-025-10014-7","url":null,"abstract":"<p><p>The landscape of pharmaceutical research and drug development is undergoing a significant evolution, with Model-Informed Drug Discovery and Development (MID3) as a transformative approach to accelerate innovation. Realizing the full potential of MID3 required a concerted global effort to enhance education, foster collaboration, and drive scientific advancement. In this issue, we propose that true progress and equitable outcomes hinge on embracing a multifaceted approach, encompassing not only the inclusion of data from diverse patient populations, such as pediatric and pregnant individuals, but also fostering an inclusive environment for a globally diverse group of scientists. We highlight the critical role of globalization in expanding pharmacometrics collaborations across national boundaries and cultural contexts, recognizing that varied perspectives and expertise drive richer insights. Furthermore, we emphasize the importance of equitable access to education and training, particularly for non-native English-speaking institutions, in cultivating a truly global talent pool. Finally, we demonstrate how this expanded diversity fuels innovation, encouraging the adoption of a broader spectrum of quantitative approaches-from classical PK/PD to Physiologically Based Pharmacokinetics (PBPK), Quantitative Systems Pharmacology and Toxicology (QSP/T), and artificial intelligence driven modeling, thereby addressing complex biological challenges and ultimately achieving the \"right dose for the right patient at the right time.\" This editorial emphasizes that by intentionally integrating globalization, education, and innovation, the pharmacometrics community can catalyze profound change in MID3, leading to more effective and inclusive medicines for all.</p>","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"53 1","pages":"4"},"PeriodicalIF":2.8,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145768433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-09 | DOI: 10.1007/s10928-025-10011-w
Hidde van de Beek, Pyry A J Välitalo, J G Coen van Hasselt, Laura B Zwep
Pharmacometric modelling is traditionally performed using individual-level data. Recently, a new method was developed to fit pharmacometric models to summary-level, or aggregate, data. This methodology allows different data sources to be modelled jointly once they are transformed into aggregate data; as such, the method can be applied to a combination of individual data, pharmacometric models, and aggregate data. In this study we aimed to (1) implement this methodological framework in an accessible R package (admr) and (2) develop a novel algorithm with enhanced computational efficiency. The R package allows users to calculate aggregate data from different data sources, jointly fit one or multiple data sources, and assess model performance. The newly developed algorithm improves computational efficiency by iteratively reweighting internal Monte Carlo predictions. Three simulation scenarios using different data-generating models demonstrated a 3- to 100-fold speed-up when using the novel Iterative Reweighting Monte Carlo (IR-MC) algorithm, while maintaining the convergence properties of the original MC algorithm. These analyses showed that estimation with the IR-MC algorithm becomes increasingly more efficient than the standard MC algorithm as model complexity rises, indicating its utility for more complex pharmacometric models. In conclusion, the aggregate data modelling implementation in the admr R package allows for a fast and user-friendly application of the aggregate data modelling framework.
{"title":"Aggregate data modelling: A fast implementation for fitting pharmacometrics models to summary-level data in R.","authors":"Hidde van de Beek, Pyry A J Välitalo, J G Coen van Hasselt, Laura B Zwep","doi":"10.1007/s10928-025-10011-w","DOIUrl":"https://doi.org/10.1007/s10928-025-10011-w","url":null,"abstract":"<p><p>Pharmacometric modelling is traditionally performed using individual level data. Recently a new method was developed to fit pharmacometric models to summary level - or aggregate - data. This methodology allows for jointly modelling different data sources, once transformed into aggregate data. As such, the method can be applied to a combination of individual data, pharmacometric models, and aggregate data. In this study we aimed to (1) implement this methodological framework into an accessible R package (admr) and (2) develop a novel algorithm with enhanced computational efficiency. The developed R-package allows calculating aggregate data from different data sources, jointly fitting one or multiple data sources and assessing model performance. The implementation of the newly developed algorithm improves computational efficiency by iteratively reweighting internal Monte Carlo predictions. Three simulation scenarios using different data generating models demonstrated an improvement of 3 to 100-fold speed-up when using the novel Iterative Reweighting Monte Carlo (IR-MC) algorithm, while maintaining the convergence properties of the original MC algorithm. These analyses demonstrated that estimation with the IR-MC algorithm is increasingly more efficient as model complexity rises as compared to the standard MC algorithm, indicating the utility for more complex pharmacometric models. In conclusion, the aggregate data modelling implementation in the admr R package allows for a fast and user-friendly application of the aggregate data modelling framework.</p>","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"53 1","pages":"3"},"PeriodicalIF":2.8,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145714766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-08 | DOI: 10.1007/s10928-025-10009-4
Didier Zugaj, Fahima Nekka
The usefulness of mathematical modeling of biological systems and their responses to exogenous products is now well recognized. However, this recognition is tempered by concerns about how reliably such models represent real populations and predict responses to treatment. To remedy this, the generation of virtual populations combined with quantitative systems pharmacology models is increasingly being adopted. However, the complexity of these models and the large number of parameters they involve, generally in a context of limited information or data, raise the question of nonidentifiability as a potential factor affecting the quality of model predictions. This article presents a vision that confronts the management of nonidentifiability with the classification of virtual populations and their corresponding parametric signatures, proposing this as a potential tool for the evaluation of therapeutic interventions.
{"title":"Identification and characterization of virtual sub-populations through phenotype-guided filtering. The challenging case of nonidentifiable models in the context of therapeutic evaluation.","authors":"Didier Zugaj, Fahima Nekka","doi":"10.1007/s10928-025-10009-4","DOIUrl":"https://doi.org/10.1007/s10928-025-10009-4","url":null,"abstract":"<p><p>The usefulness of mathematical modeling of biological systems and their responses to exogenous products is now well recognized. However, this recognition is marred by problems of unreliability of representations of real populations and predictions of responses to treatments. To remedy this, the generation of virtual populations combined with quantitative systems pharmacology models is increasingly being adopted. However, the complexity of these models and the large number of parameters they involve, generally within a context of lack of information or data, raise the question of nonidentifiability as a potential source affecting the quality of model predictions. This article attempts to present a vision that confronts the management of nonidentifiability with the concerns linked to the classification of virtual populations and their corresponding parametric signatures, as a potential tool for the evaluation of therapeutic interventions.</p>","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"53 1","pages":"2"},"PeriodicalIF":2.8,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145707997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-27 | DOI: 10.1007/s10928-025-10006-7
Csaba B Kátai, Manon M M Berns, Jeroen Elassaiss-Schaap
Understanding the pharmacokinetics of therapeutic antibodies often requires a detailed investigation of the mechanisms governing their distribution and clearance. Two of the most important mechanisms are the salvage and recycling of antibodies by the neonatal Fc receptor (FcRn), and target-mediated drug disposition (TMDD). While the two mechanisms have been analysed individually in detail, their combination and coupling have yet to be addressed. An important point of consideration is the characteristic time scales pertaining to the processes in each mechanism and how they can be related and thus integrated into a single framework. To this end, a minimal 'physiology-based' pharmacokinetic model incorporating specific (TMDD) and non-specific (FcRn) antibody elimination is investigated in the high binding-affinity limit using the method of matched asymptotic expansions. The theory builds on previous asymptotic frameworks corresponding to each mechanism individually. The combined FcRn-TMDD model consists of a plasma space and an endosomal space, with target binding occurring in the former and antibody salvage in the latter. Two parameter regimes are studied in particular, corresponding to cases in which the specific and the non-specific clearance mechanisms provide comparable contributions to the total antibody clearance over the same time scale. The analysis offers insight into the processes dominating antibody pharmacokinetics during each characteristic phase of the problem. In addition to an accurate analytical description of the kinetics, relevant pharmacometric expressions are derived, such as the approximate time and concentration at which the target receptors are no longer 'fully' saturated, the AUC, and the terminal slope. The resulting insight into the dominant processes and model parameters in the specific characteristic phases may be used to guide parameter estimation in future modelling efforts. Additionally, the presented theory can be used to assess the validity of various quasi-equilibrium, quasi-steady and Michaelis-Menten type assumptions in each phase. In short, the presented theory can provide guidance for physiology-based as well as standard pharmacokinetic modelling efforts.
{"title":"On the coupling between a basic FcRn mechanism and target-mediated disposition of antibodies.","authors":"Csaba B Kátai, Manon M M Berns, Jeroen Elassaiss-Schaap","doi":"10.1007/s10928-025-10006-7","DOIUrl":"https://doi.org/10.1007/s10928-025-10006-7","url":null,"abstract":"<p><p>Understanding the pharmacokinetics of therapeutic antibodies often requires a detailed investigation of the mechanisms governing their distribution and clearance. Two of the most important mechanisms are the salvage and recycling of antibodies by the neonatal Fc receptor (FcRn), and target-mediated drug disposition (TMDD). While the two mechanisms have been analysed individually in detail, their combination and coupling is yet to be addressed. An important point of consideration is the characteristic time scales pertaining to the processes in each mechanism and how they can be related and thus integrated into a single framework. To this end a minimal 'physiology-based' pharmacokinetic model incorporating specific (TMDD) and non-specific (FcRn) antibody elimination is investigated in the high binding-affinity limit using the method of matched asymptotic expansions. The theory builds on previous asymptotic frameworks corresponding to each mechanism individually. The combined FcRn-TMDD model consists of a plasma space and an endosomal space, with target binding occurring in the former and antibody salvage in the latter. Two parameter regimes are studied in particular, that correspond to cases wherein both the specific and the non-specific clearance mechanisms provide comparable contributions to the total antibody clearance over the same time scale. The analysis offers insight into the processes dominating antibody pharmacokinetics during each characteristic phase of the problem. In addition to the accurate analytical description of the kinetics, relevant pharmacometric expressions are also derived, such as the approximate time and concentration when the target receptors are no longer 'fully' saturated, AUC and the terminal slope. The resulting insight on the dominant processes and model parameters in the specific characteristic phases may be utilised to guide parameter estimation in future modelling efforts. Additionally, the presented theory can be used to assess the validity of various quasi-equilibrium, quasi-steady and Michaelis-Menten type assumptions in each phase. In short, the presented theory can provide guidance for physiology-based pharmacokinetic as well as standard pharmacokinetic modelling efforts.</p>","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"53 1","pages":"1"},"PeriodicalIF":2.8,"publicationDate":"2025-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145634660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-15 | DOI: 10.1007/s10928-025-10007-6
Günter Heimann, Thomas Dumortier, Karin Meiser
PK-QTc analyses are an integral part of drug development programs. These analyses are often based on phase I study data, and the question arises whether the design of these phase I studies has an impact on the precision of the corresponding PK-QTc analysis. More precisely, we are interested in whether the power of such analyses can be increased by using interleaved ascending dose designs rather than parallel-group ascending dose designs. Based on a simulation study, previous authors have concluded that this is the case. Their conclusions, however, rest on assumptions about the magnitude of the random-effect variances and on a very specific set-up of their simulation study. In this paper we provide a re-analysis of historical QTc data. The resulting estimates of the random-effect variances are much smaller than those used by the previous authors. We also propose a simulation set-up that adequately mimics the data-generation process and the correlation between the primary endpoint, change from baseline, and the covariate, baseline. We present a simulation study using the revised set-up and the random-effect variances observed in our re-analysis. We did not find major differences in power between the designs when the number of observations is the same. We also provide a justification, based on causal analysis, of why we think our simulation set-up is more adequate when change from baseline is the primary endpoint, specifically when baseline is used as a covariate.
{"title":"A note on phase I interleaved versus parallel group ascending dose designs for concentration-QTc analyses.","authors":"Günter Heimann, Thomas Dumortier, Karin Meiser","doi":"10.1007/s10928-025-10007-6","DOIUrl":"10.1007/s10928-025-10007-6","url":null,"abstract":"<p><p>PK-QTc analyses are an integral part of drug development programs. These analyses are often based on phase I study data, and the question may be asked whether the design of these phase I studies has an impact on the precision of the corresponding PK-QT analysis. More precisely, we are interested whether one can increase the power of such analyses when using interleaved ascending dose designs rather than parallel group ascending dose designs. Based on a simulation study, previous authors have concluded that this is the case. Their conclusions, however, are based on assumptions regarding the magnitude of the random effect variances, and on a very specific set-up of their simulation study. In this paper we provide a study re-analysis of historical QTc data. The resulting estimates of these random effect variances are much smaller than those used by the previous authors. We also propose a simulation set-up that adequately mimics the data generation process and the correlation between the primary endpoint change from baseline and the covariate baseline. We present a simulation study using the revised simulation set-up and random effect variances as observed in our study re-analysis. We did not find major differences in power between the different designs when the number of observations is the same. We also provide a justification based on causal analysis why we think our simulation set-up is more adequate for situations when change from baseline is the primary endpoint, specifically when baseline is used as a covariate.</p>","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"52 6","pages":"62"},"PeriodicalIF":2.8,"publicationDate":"2025-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145523642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-07 | DOI: 10.1007/s10928-025-10010-x
Verena Gotta, Birgit Donner
Prolongation of the QT interval in the ECG is a critical finding that signifies an extended duration from the onset of ventricular depolarization to the end of ventricular repolarization. It can predispose patients to life-threatening arrhythmias such as Torsades de Pointes (TdP). Long QT syndromes (LQTS) are defined by mutations in ion channel genes, particularly those encoding cardiac potassium and sodium channels, and are characterized by a significant risk of sudden cardiac death if untreated. Beyond these clearly defined entities, however, various medications have been implicated in causing QT interval prolongation, and there is increasing evidence for a genetically determined risk of drug-induced QT prolongation. In addition, because numerous clinical factors influence the QT interval, QT prolongation increases the risk of TdP particularly in multi-morbid patients, necessitating vigilant monitoring in at-risk populations. This review gives an overview of the mechanisms and conditions that induce QT prolongation and of the clinical assessment of QT interval duration, highlighting quantitative variations in measurement techniques and heart-rate correction as well as in the demographic interpretation of normal values. The risk of cardiac arrhythmia is discussed for both patients with congenital LQTS and those with acquired QT prolongation, along with the pharmacokinetic/pharmacodynamic, non-pharmacologic and genetic risk factors for TdP. Finally, clinical implications for individual patient management, including risk-adapted drug prescription and the use of ECG monitoring to mitigate the risks associated with QT prolongation, are summarized. Understanding the interplay between pharmacokinetics, pharmacodynamics, genetic predisposition and co-morbidities is essential for optimizing treatment in the context of prolonged QT intervals, preventing adverse cardiovascular events, and improving cardiac safety. Comprehensive drug labelling regarding exposure-QT relationships and available pharmacovigilance data are important sources of information that enhance patient safety.
{"title":"QT interval prolongation: clinical assessment, risk factors and quantitative pharmacological considerations.","authors":"Verena Gotta, Birgit Donner","doi":"10.1007/s10928-025-10010-x","DOIUrl":"10.1007/s10928-025-10010-x","url":null,"abstract":"<p><p>Prolongation of the QT interval in the ECG is a critical finding that signifies an extended duration from the onset of ventricular depolarization to the end of ventricular repolarization. It can predispose patients to life-threatening arrhythmias, such as Torsades de Pointes (TdP). Long QT syndromes (LQTS) are defined by mutations in ion channel genes, particularly those encoding cardiac potassium and sodium channels and are characterized by a significant risk for sudden cardiac death if untreated. However, besides these clearly defined entities various medications have been implicated in causing QT interval prolongation. There is increasing evidence for a genetically determined risk for drug-induced QT prolongation. In addition, due to numerous clinical factors influencing the QT interval, QT prolongation increases the risk of TdP particularly in multi-morbid patients necessitating vigilant monitoring in at-risk populations. This review gives an overview of mechanisms and conditions which induce QT prolongation, the clinical assessment of QT interval duration, thereby highlighting quantitative variations in measurement techniques and heart-rate correction, as well as in demographic interpretation of normal values. The risk of cardiac arrhythmia is discussed, in both patients with congenital LQTS and acquired QT prolongation, along with influencing pharmacokinetic/pharmacodynamic, non-pharmacologic and genetic risk factors for TdP. Finally, clinical implications for individual patient management, including risk-adapted drug-prescription and use of ECG monitoring to mitigate the risks associated with QT prolongation, are summarized. Understanding the interplay between pharmacokinetics, pharmacodynamics, genetic predisposition and co-morbidities is essential for optimizing treatment in the context of prolonged QT intervals, preventing adverse cardiovascular events, and improving cardiac safety. Comprehensive drug labelling regarding exposure-QT relationships and available pharmacovigilance data are important sources of information enhancing patient safety.</p>","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"52 6","pages":"61"},"PeriodicalIF":2.8,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12594730/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145471352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-06 | DOI: 10.1007/s10928-025-10000-z
Zhonghui Huang, Matthew Fidler, Minshi Lan, Iek Leng Cheng, Frank Kloprogge, Joseph F Standing
Nonlinear mixed-effects models rely on adequate initial parameter estimates for efficient parameter optimization. Poor initial estimates can result in failed model convergence or termination with incorrect parameter estimates. Non-compartmental analysis (NCA) and other manual methods have typically been used to derive initial estimates for pharmacokinetic (PK) parameters. However, NCA struggles with sparse data, and recent advances in automated modeling increasingly call for initial estimates that require no user input. This study aimed to develop an integrated pipeline for the computation of initial estimates applicable to various data types and model structures. The pipeline incorporates a custom-designed algorithm that leverages data-driven methods to generate initial estimates for both structural and statistical parameters in population pharmacokinetic (PopPK) base models. Its performance was evaluated across twenty-one simulated datasets and thirteen real-life datasets, and it performed well in all test cases. Initial estimates recommended by the pipeline resulted in final parameter estimates closely aligned with the pre-set true values in the simulated datasets, or with literature references in the case of the real-life data. This study provides an efficient and reliable tool for delivering initial PK estimates for population pharmacokinetic modeling in both rich and sparse data scenarios. An open-source R package has been created.
{"title":"An automated pipeline to generate initial estimates for population Pharmacokinetic base models.","authors":"Zhonghui Huang, Matthew Fidler, Minshi Lan, Iek Leng Cheng, Frank Kloprogge, Joseph F Standing","doi":"10.1007/s10928-025-10000-z","DOIUrl":"10.1007/s10928-025-10000-z","url":null,"abstract":"<p><p>Nonlinear mixed-effects models rely on adequate initial parameter estimates for efficient parameter optimization. Poor initial estimates can result in failed model convergence or termination with incorrect parameter estimates. Non-compartmental analysis (NCA) and other manual methods have typically been used to derive initial estimates for pharmacokinetic (PK) parameters. However, NCA struggles with sparse data and recent advances in automated modeling increasingly emphasize the need for initial estimates that require no user input. This study aimed to develop an integrated pipeline for the computation of initial estimates applicable to various data types and model structures. The designed pipeline incorporated a custom-designed algorithm that leveraged data-driven methods to generate initial estimates for both structural and statistical parameters in population pharmacokinetic (PopPK) base models. The pipeline's performance was evaluated across twenty-one simulated datasets and thirteen real-life datasets. The results suggested that this pipeline performed well in all test cases. Initial estimates recommended by the pipeline resulted in final parameter estimates closely aligned with pre-set true values in simulated datasets or with literature references in the case of real-life data. This study provides an efficient and reliable tool for delivering PK initial estimates for population pharmacokinetic modeling in both rich and sparse data scenarios. An open-source R package has been created.</p>","PeriodicalId":16851,"journal":{"name":"Journal of Pharmacokinetics and Pharmacodynamics","volume":"52 6","pages":"60"},"PeriodicalIF":2.8,"publicationDate":"2025-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12592298/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145459099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}