In silico approaches using CASE Ultra and QSAR Toolbox for predicting genotoxicity and carcinogenicity on diverse groups of chemicals
Pub Date: 2025-09-14 | DOI: 10.1016/j.comtox.2025.100380
Gowrav Adiga Perdur, Zabiullah AJ, Mohan Krishnappa, Kamil Jurowski, Varun Ahuja
Humans are exposed daily to a wide range of chemicals in their environment, many of which may exert harmful effects on health. Characterizing the genotoxic and carcinogenic potential of these chemicals is therefore crucial for protecting human health; genotoxicity, in particular, serves as an early indicator of carcinogenic risk. The assessment of both endpoints is vital for regulatory bodies and has driven the development of alternative, non-animal testing methods. One such method is the in silico approach, which relies on predictive software tools for faster, more cost-effective screening.
This paper examines two in silico tools, CASE Ultra 1.9.0.8 (MultiCASE, USA) and QSAR Toolbox 4.5 (OECD), to evaluate their ability to predict the genotoxicity and carcinogenicity of various chemicals. CASE Ultra, the QSAR Toolbox, and the Toolbox profilers achieved balanced accuracies of 80%, 85%, and 62% for genotoxicity, and 79%, 86%, and 66% for carcinogenicity, respectively. These promising results underscore the potential of computational approaches in risk assessment, offering a valuable complement to traditional testing methods. Such tools can play a crucial role in regulatory decision-making and public health protection.
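For context on the reported metric: balanced accuracy is the mean of sensitivity and specificity, which keeps it honest on the imbalanced class distributions typical of genotoxicity datasets. A minimal Python sketch, with illustrative confusion-matrix counts not taken from the paper:

```python
# Minimal sketch: balanced accuracy from confusion-matrix counts.
# The counts below are illustrative only, not from the study.

def balanced_accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Average of sensitivity (recall on positives) and specificity."""
    sensitivity = tp / (tp + fn)   # fraction of true genotoxicants flagged
    specificity = tn / (tn + fp)   # fraction of non-genotoxicants cleared
    return (sensitivity + specificity) / 2

# Example: 40 true positives, 45 true negatives, 5 false positives, 10 false negatives
print(balanced_accuracy(tp=40, tn=45, fp=5, fn=10))  # 0.85
```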
{"title":"In silico approaches using CASE Ultra and QSAR Toolbox for predicting genotoxicity and carcinogenicity on diverse groups of chemicals","authors":"Gowrav Adiga Perdur , Zabiullah AJ , Mohan Krishnappa , Kamil Jurowski , Varun Ahuja","doi":"10.1016/j.comtox.2025.100380","DOIUrl":"10.1016/j.comtox.2025.100380","url":null,"abstract":"<div><div>Humans are daily exposed to a wide range of chemicals in their environment, many of which may exert harmful effects on health. Hence, knowledge of these chemicals for their genotoxicity and carcinogenicity potential is crucial for protecting human health. Genotoxicity, in particular, serves as an early indicator of carcinogenic risk. The assessment of both genotoxicity and carcinogenicity is vital for regulatory bodies and has led to the development of alternative non-animal testing methods. One such method is <em>in silico</em> approach, which relies on predictive software tools for faster, more cost-effective screening.</div><div>This paper examines two <em>in silico</em> tools, CASE Ultra 1.9.0.8 (MultiCASE, USA) and QSAR Toolbox 4.5 (OECD), to evaluate their ability to predict the genotoxicity and carcinogenicity of various chemicals. The <em>in silico</em> tools CASE Ultra, QSAR Toolbox, and its profilers demonstrated remarkable performance, with balanced accuracy rates of 80%, 85%, and 62%, for genotoxicity and 79%, 86% and 66% for carcinogenicity, respectively. These promising results underscore the potential of computational approaches in risk assessment, offering a valuable complement to traditional testing methods for evaluating the genotoxicity and carcinogenicity of chemicals. Such tools can play a crucial role in regulatory decision-making and public health protection.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"36 ","pages":"Article 100380"},"PeriodicalIF":2.9,"publicationDate":"2025-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145120579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IoT integrated quantile principal component analysis based framework for toxic pesticides recognition and classification
Pub Date: 2025-09-13 | DOI: 10.1016/j.comtox.2025.100375
Kanak Kumar, Anshul Verma, Pradeepika Verma
Pesticides present significant concerns regarding environmental sustainability and global stability. This study investigates the types, benefits, and environmental challenges associated with pesticide use. To address these concerns, we developed an innovative Internet of Things (IoT) integrated quantile principal component analysis (QPCA) framework for the recognition of toxic pesticides in smart farming, termed IoT-TPR. The proposed IoT-TPR system is an intelligent electronic nose built on a tin-oxide sensor array of eight commercial metal–oxide–semiconductor gas sensors, which detects toxic pesticides and transmits the data to the Amazon Web Services cloud for further analysis. A two-stage QPCA preprocessing technique is employed to analyze the sensor responses. Four classifiers, namely radial basis function (RBF), extreme learning machine (ELM), decision tree (DT), and k-nearest neighbor (KNN), are then used for comparative performance evaluation. The results indicate that QPCA-KNN achieves the highest accuracy at 99.05%, outperforming the other methods across all performance metrics and demonstrating superior classification capability. RBF (96.24%) and ELM (95.81%) also perform strongly, though slightly below QPCA-KNN, while DT (92.35%) shows the lowest, yet still reasonable, accuracy. Overall, QPCA-KNN emerges as the most effective and robust classification model in this study.
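The abstract does not detail the two-stage QPCA algorithm; as a loose, hedged approximation one might chain a quantile transform with PCA ahead of KNN. The scikit-learn classes below are real, but treating this pipeline, and the synthetic data, as equivalent to the authors' QPCA method and sensor data is an assumption:

```python
# Hedged sketch of a QPCA-like pipeline: quantile-normalize the 8-sensor
# responses, reduce with PCA, then classify with KNN. This approximates,
# but is not guaranteed to match, the paper's two-stage QPCA method.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import QuantileTransformer
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))      # stand-in for 8 MOS gas-sensor readings
y = rng.integers(0, 4, size=500)   # stand-in for 4 pesticide classes

model = Pipeline([
    ("quantile", QuantileTransformer(output_distribution="normal", n_quantiles=100)),
    ("pca", PCA(n_components=5)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))  # near chance on random data
```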
{"title":"IoT integrated quantile principal component analysis based framework for toxic pesticides recognition and classification","authors":"Kanak Kumar , Anshul Verma , Pradeepika Verma","doi":"10.1016/j.comtox.2025.100375","DOIUrl":"10.1016/j.comtox.2025.100375","url":null,"abstract":"<div><div>Pesticides present significant concerns regarding environmental sustainability and global stability. This study investigates the types, benefits, and environmental challenges associated with pesticide use. To address these concerns, we developed an innovative Internet of Things (IoT) integrated quantile principal component analysis (QPCA) framework for the recognition of toxic pesticides in smart farming, termed IoT-TPR. The proposed IoT-TPR system is an intelligent electronic nose based on a tin-oxide sensor array, consisting of eight commercial metal–oxide–semiconductor gas sensors, which detect toxic pesticides and transmit the data to the Amazon Web Services cloud for further analysis. A two-stage QPCA preprocessing technique is employed to analyze sensor responses. Subsequently, four classifiers such as radial basis function (RBF), extreme learning machine (ELM), decision tree (DT), and k-nearest neighbor (KNN) are used for comparative performance evaluation. The results indicate that QPCA-KNN achieves the highest accuracy at 99.05%, outperforming other methods across all performance metrics and demonstrating superior classification capability. RBF (96.24%) and ELM (95.81%) also exhibit strong performance, though slightly lower than QPCA-KNN, while DT (92.35%) shows the lowest accuracy but still maintains reasonable performance. Overall, QPCA-KNN emerges as the most effective and robust classification model in this study.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"36 ","pages":"Article 100375"},"PeriodicalIF":2.9,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145098799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring in silico tools to predict estrogen receptor activity of chemicals for the assessment of endocrine disruption
Pub Date: 2025-09-10 | DOI: 10.1016/j.comtox.2025.100379
Gyamfi Akyianu, Carsten Kneuer, Judy Choi
In silico software and tools are increasingly employed as an alternative to in vivo animal testing to predict the toxicity of chemicals. One particular application of these in silico models in hazard assessment is predicting the potential endocrine-disrupting activity of chemicals, one of the three fundamental elements defining an endocrine-disrupting chemical (EDC). In this study, 11 in silico tools, based on methods ranging from quantitative structure-activity relationship (QSAR) modeling to docking, were selected and tested for their ability to predict estrogen receptor (ER) activity using a set of 80 chemicals of known ER activity potential. Prediction accuracy, as measured by the Matthews correlation coefficient (MCC), ranged from 0.16 to 0.54 (min-max) across the 11 individual tools. However, when various tools were combined under a conservative rule set for assessing the prediction outcomes, the MCC increased to as high as 0.68, demonstrating the higher probability of generating a correct prediction when multiple in silico tools are employed. This study presents the strengths and weaknesses of the individual tools/models tested and provides insights into how in silico predictions could supplement the weight-of-evidence approach in determining the endocrine activity potential of chemicals.
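The abstract leaves the combination rules unspecified; one plausible conservative rule is to call a chemical active if any tool predicts activity. A hedged sketch with toy labels (the data and the rule are illustrative; only matthews_corrcoef is the real scikit-learn function):

```python
# Hedged sketch: a conservative "any-positive" combination of tool predictions,
# scored with the Matthews correlation coefficient. The combination rule here
# is an assumption for illustration; the paper's actual rule set may differ.
import numpy as np
from sklearn.metrics import matthews_corrcoef

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # illustrative ER activity labels
tool_preds = np.array([                        # rows = tools, cols = chemicals
    [1, 0, 1, 0, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 1, 1, 1, 0, 1, 0],
])

consensus = tool_preds.max(axis=0)   # active if ANY tool calls it active
print("per-tool MCC:", [round(matthews_corrcoef(y_true, p), 2) for p in tool_preds])
print("conservative consensus MCC:", round(matthews_corrcoef(y_true, consensus), 2))
```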
{"title":"Exploring in silico tools to predict estrogen receptor activity of chemicals for the assessment of endocrine disruption","authors":"Gyamfi Akyianu , Carsten Kneuer , Judy Choi","doi":"10.1016/j.comtox.2025.100379","DOIUrl":"10.1016/j.comtox.2025.100379","url":null,"abstract":"<div><div><em>In silico</em> software and tools are increasingly being employed as an alternative to <em>in vivo</em> animal testing to predict toxicity of chemicals. One particular application of the underlying <em>in silico</em> models for hazard assessment has been to predict the potential endocrine disrupting activity of chemicals, which is one of the three fundamental elements of an endocrine disrupting chemical (EDC). In this study, 11 <em>in silico</em> tools based on methods ranging from Quantitative Structure-Activity Relationship (QSAR) to docking were selected and tested for their predictivity of estrogen receptor (ER) activity using a set of 80 chemicals of known ER activity potential. The accuracy in prediction, as determined by Matthew’s correlation coefficient (MCC), among the 11 individual tools tested ranged from 0.16 to 0.54 (min–max). However, when combining various tools and applying rules set for a conservative approach in assessing the prediction outcomes, the MCC increased as high as 0.68, demonstrating the higher probability of generating a correct prediction when multiple <em>in silico</em> tools are employed. This study presents the strengths and weaknesses of the individual tools/models tested and provides insights on how <em>in silico</em> predictions could supplement the weight-of-evidence approach in determining endocrine activity potential of chemicals.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"36 ","pages":"Article 100379"},"PeriodicalIF":2.9,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145098800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"RapidTox": A decision-support workflow to inform rapid toxicity and human health assessment
Pub Date: 2025-09-01 | DOI: 10.1016/j.comtox.2025.100377
Jason C. Lambert, Jason Brown, Hui Gong, Curtis Kilburn, Jan Krysa, Brad Kuntzelman, Janet Lee, April Luke, Joshua Powell, Asif Rashid, James Renner, Risa Sayre, Jyothi Tumkur, Carl F. Valone, Chelsea Weitekamp, Russell S. Thomas
Regulatory bodies such as the U.S. Environmental Protection Agency are consistently faced with decisions pertaining to the potential human health impacts of a diverse landscape of chemicals encountered in exposure matrices such as water, air, and soil. For legacy chemicals or those currently in commerce, decision contexts may range from emergency response to disasters, where potential threats to human health must be evaluated within hours to days, up to site- or media-specific assessment and remediation over the course of months to years. In addition, screening and prioritization of new chemicals or emerging contaminants represents an ever-present focus area for the regulatory community. A common theme across these decision contexts is the need to assemble and integrate human health-relevant data such as toxicity values and associated effects information. Activities ranging from screening and prioritization to human health risk assessment have historically been time- and resource-intensive, often requiring practitioners to consult and review a variety of disparate data streams to inform a given decision. Moreover, many environmental chemicals are 'data-poor', lacking sufficient hazard data or toxicity values applicable to a given exposure scenario. In response, decision-based workflows have been developed and deployed in the RapidTox online platform, wherein available toxicity values, hazard/effects data, physicochemical properties, and new approach methods-based data (e.g., read-across; cell-based bioactivity) are assembled into data delivery modules. To date, the user interface design and expertly scoped content have been integrated into 'screening human health assessment' and 'emergency response' workflows to support decision-making.
{"title":"“RapidTox”: A decision-support workflow to inform rapid toxicity and human health assessment","authors":"Jason C. Lambert , Jason Brown , Hui Gong , Curtis Kilburn , Jan Krysa , Brad Kuntzelman , Janet Lee , April Luke , Joshua Powell , Asif Rashid , James Renner , Risa Sayre , Jyothi Tumkur , Carl F. Valone , Chelsea Weitekamp , Russell S. Thomas","doi":"10.1016/j.comtox.2025.100377","DOIUrl":"10.1016/j.comtox.2025.100377","url":null,"abstract":"<div><div>Regulatory bodies such as the U.S. Environmental Protection Agency are consistently faced with decisions pertaining to potential human health impacts of a diverse landscape of chemicals encountered in exposure matrices such as water, air, and soil. For legacy chemicals or those currently in commerce, decision contexts may range from emergency response to disasters where evaluation of potential threats to human health occurs on the order of hours to days, up to site- or media-specific assessment and remediation over the course of months to years. In addition, screening and prioritization of new chemicals or emerging contaminants represents an ever-present focus area for the regulatory community. A common theme across these overarching decision contexts is the need for assembling and integrating human health relevant data such as toxicity values and associated effects information. Various activities ranging from screening and prioritization to human health risk assessment of chemicals have historically been time and resource intensive, often requiring that practitioners consult and review a variety of disparate data streams to inform a given decision. In addition, many environmental chemicals are ‘data-poor’, lacking sufficient hazard data or toxicity values applicable to a given exposure scenario. In response, decision-based workflows have been developed and deployed in the RapidTox online platform wherein available toxicity values, hazard/effects data, physicochemical properties, and new approach methods-based data (e.g., read-across; cell-based bioactivity) have been assembled into data delivery modules. To date, the user interface design and expertly scoped content have been integrated in ‘screening human health assessment’ or ‘emergency response’ workflows to support decision-making.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"35 ","pages":"Article 100377"},"PeriodicalIF":2.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144932314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nanoinformatics: Emerging technology for prediction and controlling of biological performance of nanomedicines
Pub Date: 2025-09-01 | DOI: 10.1016/j.comtox.2025.100378
Anjana Sharma, Zubina Anjum, Khalid Raza, Nitin Sharma, Balak Das Kurmi
Nanoinformatics provides a platform to refine nanotechnology approaches by controlling experimental parameters on the basis of prior information. It serves the research community by leveraging sophisticated algorithms and computational modeling to predict the essential properties of nanomedicines and to ensure their optimal biological interaction and performance, with numerous potential roles in enhancing therapeutic value and preventing unpredictable toxicological pathways. This review article delves into the pivotal applications of various computational tools for optimizing the biological behavior of nanomedicines by controlling their physicochemical characteristics, and offers insight into in silico models such as nano-QSAR, molecular dynamics (MD) simulations, coarse-grained MD (CGMD) and Brownian dynamics simulations. These tools support product development by reducing cost and time and by helping to control several biological responses of nanomedicines, including protein interaction, migration, extravasation, receptor interaction and toxicological responses.
{"title":"Nanoinformatics: Emerging technology for prediction and controlling of biological performance of nanomedicines","authors":"Anjana Sharma , Zubina Anjum , Khalid Raza , Nitin Sharma , Balak Das Kurmi","doi":"10.1016/j.comtox.2025.100378","DOIUrl":"10.1016/j.comtox.2025.100378","url":null,"abstract":"<div><div>The nanoinformatics provides a platform to refine the nanotechnology approach by controlling the parameters based on the previous informations. Nanoinformatics helps the research community by leveraging sophisticated algorithms and complex computational modeling to predict the essential properties of nanomedicine and ensure their optimal biological interaction and performance. There are numerous potential roles of nanoinformatics in enhancing therapeutic value and preventing unpredictable toxicological pathways of nanomedicine. This review article delves into the pivotal applications of various computational tools to optimize the biological behavior of nanomedicine by controlling their physicochemical characteristics. This review thus offers an insight into adequately comprehending the <em>in silico</em> models such as nano-QSAR, MD simulations, CGMD and Brownian simulations to optimize nanomedicine. These tools help in product development by reducing the cost and time by controlling several biological responses of nanomedicines, including their protein interaction, mitigation, extravasation, receptor interaction and toxicological responses.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"35 ","pages":"Article 100378"},"PeriodicalIF":2.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145010073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of mathematical new approach methods to assess chemical mixtures
Pub Date: 2025-08-21 | DOI: 10.1016/j.comtox.2025.100376
R. Broughton, M. Feshuk, Z. Stanfield, K.K. Isaacs, K. Paul Friedman
The Toxicity Forecaster (ToxCast) program contains targeted bioactivity screening data for thousands of chemicals, but chemicals are often encountered as co-exposures. This work evaluated the feasibility of using single-chemical ToxCast data to predict mixture bioactivity under an assumption of chemical additivity. Twenty-one binary mixtures and their single components, inspired by consumer-product chemical exposures, were screened in concentration-response using a multidimensional in vitro assay platform for transcription factor activity. Three models were applied to simulate the mixtures' concentration-responses: concentration addition (CA), independent action (IA), and a model that treats the mixture as its most potent single chemical component (MP). Uncertainty in the modeled and observed mixture points of departure, and in the full concentration-responses, was quantified using bootstrap resampling and a Bayesian statistical framework. Approximately 80% of the predicted mixture point-of-departure values were within ±0.5 log10-micromolar units of the observed concentrations; a majority of these predictions were protective (90–96%), whether derived with CA, IA, or MP from the screened single components, when compared to the observed mixture. For most mixtures, ≥80% of the observed mixture concentration-response data points fell within the modeled 95% prediction interval, suggesting it would be difficult to observe deviations from additivity once experimental and mixtures-modeling uncertainties are accounted for. As it is resource-prohibitive to screen all mixtures, a case study estimating bioactivity:exposure ratios for mixtures of per- and polyfluoroalkyl chemicals demonstrated the utility of operationalizing existing ToxCast data with uncertainty-aware mixtures modeling to predict potential risk from co-exposures.
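The three named models have standard textbook forms, sketched below for a binary mixture; the potencies and Hill parameters are illustrative, and equating this with the paper's exact implementation is an assumption:

```python
# Hedged sketch of the three mixture models named in the abstract, applied to
# potency estimates (e.g., AC50s in µM) of two components. Formulas follow the
# standard concentration-addition / independent-action definitions.
import numpy as np

fractions = np.array([0.5, 0.5])   # mixture fractions of components A and B
ac50 = np.array([3.0, 30.0])       # illustrative single-chemical potencies (µM)

# Concentration addition (CA): harmonic-style pooling of component potencies.
ca_potency = 1.0 / np.sum(fractions / ac50)

# Most potent component (MP): mixture behaves like its most potent chemical.
mp_potency = ac50.min()

# Independent action (IA): combine effects, not concentrations, using
# Hill-type responses E_i(c_i) for each component at its share of the dose.
def hill(c, ac50_i, top=1.0, n=1.0):
    return top * c**n / (ac50_i**n + c**n)

c_total = 10.0
component_effects = hill(c_total * fractions, ac50)
ia_effect = 1.0 - np.prod(1.0 - component_effects)

print(f"CA potency ≈ {ca_potency:.2f} µM; MP potency = {mp_potency:.2f} µM")
print(f"IA predicted effect at {c_total} µM total: {ia_effect:.2f}")
```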
{"title":"Development of mathematical new approach methods to assess chemical mixtures","authors":"R. Broughton , M. Feshuk , Z. Stanfield , K.K. Isaacs , K. Paul Friedman","doi":"10.1016/j.comtox.2025.100376","DOIUrl":"10.1016/j.comtox.2025.100376","url":null,"abstract":"<div><div>The Toxicity Forecaster (ToxCast) program contains targeted bioactivity screening data for thousands of chemicals, but chemicals are often encountered as co-exposures. This work evaluated the feasibility of using single chemical ToxCast data to predict mixture bioactivity assuming chemical additivity. Twenty-one binary mixtures and their single components, inspired by consumer product chemical exposures, were screened in concentration–response using a multidimensional <em>in vitro</em> assay platform for transcription factor activity. Three models were applied to simulate mixtures’ concentration-responses: concentration addition (CA), independent action (IA), and a model that treats the mixture as the most potent single chemical component (MP). Uncertainty in the modeled and observed mixture points of departure and full concentration-responses was considered using bootstrap resampling and a Bayesian statistical framework. Approximately 80 % of the predicted mixture point of departure values were within ±0.5 on a log<sub>10</sub>-micromolar scale of the observed concentrations; a majority of these predicted points of departure were protective (90–96 %), whether using CA, IA, or MP derived with the screened single components, when compared to the observed mixture. For most mixtures, ≥80 % of the observed mixture concentration–response data points fell within the modeled 95 % prediction interval, suggesting it would be difficult to observe deviations from additivity when accounting for experimental and mixtures modeling uncertainties. As it is resource-prohibitive to screen all mixtures, a case study to estimate bioactivity:exposure ratios for mixtures of per- and polyfluoroalkyl chemicals demonstrated the utility of operationalizing existing ToxCast data with mixtures modeling that includes uncertainty to predict potential risk from co-exposures.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"35 ","pages":"Article 100376"},"PeriodicalIF":2.9,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144891954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conservative consensus QSAR approach for the prediction of rat acute oral toxicity
Pub Date: 2025-08-19 | DOI: 10.1016/j.comtox.2025.100374
Jerry Achar, James W. Firman, Mark T.D. Cronin
Consensus approaches are applied in different quantitative structure-activity relationship (QSAR) modeling contexts on the assumption that combining individual model predictions improves prediction reliability. This study evaluated the performance of the TEST, CATMoS and VEGA models for predicting oral rat LD50, both individually and in consensus, across a dataset of 6,229 organic compounds. Predicted LD50 values from the models were compared for each compound, and the lowest (most conservative) value was assigned as the output of the conservative consensus model (CCM). Predictive accuracy was then evaluated from the agreement of the predicted LD50-based GHS category assignments with those derived experimentally. Results showed that the CCM had the highest over-prediction rate at 37%, compared with TEST (24%), CATMoS (25%) and VEGA (8%), while its under-prediction rate was the lowest at 2%, relative to TEST (20%), CATMoS (10%) and VEGA (5%). By construction, the CCM was the most conservative across all GHS categories. Structural analysis further demonstrated that no specific chemical classes or functional groups were consistently under- or over-predicted. The utility of the CCM lies in establishing a foundation for contextualizing consensus modeling in general, so as to derive health-protective oral rat LD50 estimates under conditions of uncertainty, especially where experimental data are limited or absent.
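The CCM rule itself is simple enough to sketch: take the minimum predicted LD50 across tools and map it to a GHS acute oral category. The GHS cut-offs below are the standard ones; the per-tool prediction values are illustrative:

```python
# Hedged sketch of the conservative consensus model (CCM) described in the
# abstract: take the lowest (most toxic) predicted LD50 across tools, then
# assign the standard GHS acute oral category (cut-offs in mg/kg bw).
GHS_CUTOFFS = [(5, "Category 1"), (50, "Category 2"), (300, "Category 3"),
               (2000, "Category 4"), (5000, "Category 5")]

def ghs_category(ld50_mg_per_kg: float) -> str:
    for cutoff, label in GHS_CUTOFFS:
        if ld50_mg_per_kg <= cutoff:
            return label
    return "Not classified"

predictions = {"TEST": 410.0, "CATMoS": 380.0, "VEGA": 150.0}  # illustrative LD50s
ccm_ld50 = min(predictions.values())       # conservative consensus = lowest LD50
print(ccm_ld50, ghs_category(ccm_ld50))    # 150.0 Category 3
```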
{"title":"Conservative consensus QSAR approach for the prediction of rat acute oral toxicity","authors":"Jerry Achar , James W. Firman , Mark T.D. Cronin","doi":"10.1016/j.comtox.2025.100374","DOIUrl":"10.1016/j.comtox.2025.100374","url":null,"abstract":"<div><div>Consensus approaches are applied in different quantitative structure–activity relationship (QSAR) modeling contexts based on the assumption that combining individual model predictions will improve prediction reliability. This study evaluated the performance of TEST, CATMoS and VEGA models for prediction of oral rat LD<sub>50</sub>, both individually and in consensus, across a dataset of 6,229 organic compounds. Predicted LD<sub>50</sub> values from the models were compared for each compound, and the lowest value was assigned as the output of the conservative consensus model (CCM). Predictive accuracy was then evaluated based on the agreement of predicted LD<sub>50</sub>-based GHS category assignments with those derived experimentally. The aim was to allow for the most conservative value to be identified. Results showed that CCM had the highest over-prediction rate at 37 %, compared to TEST (24 %), CATMoS (25 %) and VEGA (8 %). Meanwhile, its under-prediction rate was lowest at 2 %, relative to TEST (20 %), CATMoS (10 %) and VEGA (5 %). Due to the method applied, CCM was the most conservative across all GHS categories. Further, structural analysis demonstrated that no specific chemical classes or functional groups were consistently underpredicted or overpredicted. The utility of CCM lies in its ability to establish a foundation for contextualizing the general use of consensus modeling, in order to derive health-protective oral rat LD<sub>50</sub> estimates under conditions of uncertainty, especially where experimental data are limited or absent.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"35 ","pages":"Article 100374"},"PeriodicalIF":2.9,"publicationDate":"2025-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144879907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning-based structural analysis of OATP1B1 interactors/non-interactors: Discriminating toxic and non-toxic alerts for transporter-mediated toxicity
Pub Date: 2025-08-08 | DOI: 10.1016/j.comtox.2025.100373
Shovanlal Gayen, Indrasis Dasgupta, Balaram Ghosh, Insaf Ahmed Qureshi, Partha Pratim Roy
The hepatic transporter OATP1B1 plays a critical role in transporter-related toxic responses and drug-drug interactions (DDIs). Several OATP1B1-associated drug-drug interactions have been reported clinically during combination therapies of lipid-lowering statins with antihypertensive, antiviral, and antibiotic drugs.
In the present study, molecular properties of OATP1B1 interactors and non-interactors were first compared, revealing distinct patterns in molecular weight, hydrophobicity, and number of rotatable bonds. Chemical space, scaffold content, and diversity analyses further indicated that both interactors and non-interactors are structurally diverse. Recursive partitioning and Bayesian classification analyses using ECFP and FCFP fingerprints highlighted critical structural features that may serve as alerts for toxic or non-toxic effects in OATP1B1-mediated toxicity. Additional machine learning classification models were constructed, among which the support vector classifier (SVC) showed the highest statistical significance and predictive ability (accuracy 0.797, precision 0.833, recall 0.758). Local and global SHAP analyses were also performed to explain the distinguishing structural features of interactors and non-interactors.
Overall, the study offers insights into structural determinants of OATP1B1 interactions and provides predictive models to distinguish interactors from non-interactors, which may aid in reducing transporter-related toxicity risks in drug development. The outcomes may assist in advancing the safety and performance of medicinal compounds.
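A hedged sketch of the fingerprint-plus-SVC modelling the abstract describes, using RDKit Morgan (ECFP-like) bit vectors; the SMILES and interactor labels are toy placeholders, not the study's OATP1B1 dataset:

```python
# Hedged sketch: ECFP (Morgan) fingerprints + a support vector classifier,
# mirroring the kind of model the abstract reports. Data are illustrative.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.svm import SVC

smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O",
          "CCN(CC)CC", "CC(C)Cc1ccc(cc1)C(C)C(=O)O", "O=C(O)c1ccccc1"]
labels = [0, 0, 1, 0, 1, 1]          # 1 = interactor, 0 = non-interactor (toy)

def ecfp(smi: str, radius: int = 2, n_bits: int = 2048) -> np.ndarray:
    """Morgan bit-vector fingerprint (radius 2 ≈ ECFP4)."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(fp)

X = np.stack([ecfp(s) for s in smiles])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))                # sanity check on the toy training set
```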
{"title":"Machine learning-based structural analysis of OATP1B1 interactors/non-interactors: Discriminating toxic and non-toxic alerts for transporter-mediated toxicity","authors":"Shovanlal Gayen , Indrasis Dasgupta , Balaram Ghosh , Insaf Ahmed Qureshi , Partha Pratim Roy","doi":"10.1016/j.comtox.2025.100373","DOIUrl":"10.1016/j.comtox.2025.100373","url":null,"abstract":"<div><div>This hepatic transporter, OATP1B1, plays a critical role in transporter-related toxic responses and drug-drug interactions (DDIs). Several drug-drug interactions associated with OATP1B1 are clinically reported during combination therapies of lipid-lowering statins with antihypertensive, antiviral, and antibiotic drugs.</div><div>In the present study, different molecular properties of OATP1B1-interactors and non-interactors were initially compared, and the results revealed a distinct pattern in molecular weight, hydrophobicity, and number of rotatable bonds between them. Further chemical space, scaffold content, and diversity analyses indicated that OATP1B1-interactors/non-interactors are structurally diverse. Recursive partitioning and Bayesian classification analyses, involving ECFP and FCFP fingerprints, highlighted critical structural features that may serve as alerts for toxic or non-toxic effects on OATP1B1-mediated toxicity. Other machine learning-based classification models were also constructed, where Support Vector Classifier (SVC) shows higher statistical significance and predictive ability (accuracy: 0.797; precision: 0.833, and recall: 0.758). Moreover, local and global SHAP analyses were also performed to explain the distinctive structural features of OATP1B1-interactors and non-interactors.</div><div>Overall, the study offers insights into structural determinants of OATP1B1 interactions and provides predictive models to distinguish interactors from non-interactors, which may aid in reducing transporter-related toxicity risks in drug development. The outcomes may assist in advancing the safety and performance of medicinal compounds.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"35 ","pages":"Article 100373"},"PeriodicalIF":2.9,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144831426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the comparability between studies in predictive ecotoxicology
Pub Date: 2025-08-07 | DOI: 10.1016/j.comtox.2025.100367
Christoph Schür, Kristin Schirmer, Marco Baity-Jesi
Comparability across in silico predictive ecotoxicology studies remains a significant challenge, particularly when assessing model performance. In this work, we identify key criteria necessary for meaningful comparison between independent studies: (i) the use of identical datasets that represent the same chemical and/or taxonomic space; (ii) consistent data cleaning procedures; (iii) identical train/test splits; (iv) clearly defined evaluation metrics, since subtle differences, such as alternative formulations of R², can lead to misleading discrepancies; and (v) transparent reporting through code and dataset sharing. Our review of recent literature on fish acute toxicity prediction reveals a critical gap: no two studies fully meet these criteria, rendering cross-study comparisons unreliable. This lack of comparability hampers scientific progress in the field. To address it, we advocate the adoption of benchmark datasets with standardized cleaning protocols, version control, and defined data splits. We further emphasize the importance of precise metric definitions and transparent reporting practices, including code availability and the use of structured reporting or data sheets, to foster reproducibility and advance the discipline.
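Point (iv) is easy to demonstrate: the coefficient of determination and the squared Pearson correlation are both commonly reported as "R²", yet they diverge for biased predictions. A small sketch with illustrative numbers:

```python
# Why "R²" is ambiguous: the coefficient of determination (1 - SS_res/SS_tot)
# and the squared Pearson correlation agree for a calibrated fit but diverge
# when predictions are systematically offset.
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = y_true + 1.5                      # perfectly correlated but offset

ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2_determination = 1 - ss_res / ss_tot     # punishes the offset: negative here

r2_pearson = np.corrcoef(y_true, y_pred)[0, 1] ** 2  # ignores the offset: 1.0

print(r2_determination, r2_pearson)        # -0.125 vs 1.0
```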
{"title":"On the comparability between studies in predictive ecotoxicology","authors":"Christoph Schür , Kristin Schirmer , Marco Baity-Jesi","doi":"10.1016/j.comtox.2025.100367","DOIUrl":"10.1016/j.comtox.2025.100367","url":null,"abstract":"<div><div>Comparability across <em>in silico</em> predictive ecotoxicology studies remains a significant challenge, particularly when assessing model performance. In this work, we identify key criteria necessary for meaningful comparison between independent studies: (i) the use of identical datasets that represent the same chemical and/or taxonomic space; (ii) consistent data cleaning procedures; (iii) identical train/test splits; (iv) clearly defined evaluation metrics, as subtle differences — such as alternative formulations of <span><math><msup><mrow><mi>R</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> — can lead to misleading discrepancies; and (v) transparent reporting through code and dataset sharing. Our review of recent literature on fish acute toxicity prediction reveals a critical gap: no two studies fully meet these criteria, rendering cross-study comparisons unreliable. This lack of comparability hampers scientific progress in the field. To address this, we advocate for the adoption of benchmark datasets with standardized cleaning protocols, version control, and defined data splits. We further emphasize the importance of precise metric definitions and transparent reporting practices, including code availability and the use of structured reporting or data sheets, to foster reproducibility and advance the discipline.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"35 ","pages":"Article 100367"},"PeriodicalIF":2.9,"publicationDate":"2025-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144866339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In silico analyses as a tool for regulatory assessment of protein digestibility: Where are we?
Pub Date: 2025-08-05 | DOI: 10.1016/j.comtox.2025.100372
Fernando Rivero-Pino, Caroline Idowu, Hannes Malfroy, Diana Rueda, Hannah Lester
In silico tools are emerging as a valuable resource for predicting the behaviour of proteins, not only for assessing toxicity and allergenicity but also for modelling digestion to study protein digestibility. These methods offer cost-effective, high-throughput alternatives to traditional in vitro and in vivo methods. Computational models simulate enzymatic digestion, allowing analysis of protein cleavage and peptide release, and complementary tools such as molecular docking have also been proposed as part of the in silico battery of tests. Given their efficiency, in silico approaches could ultimately be proposed to support regulated product applications, particularly in assessing protein digestibility for novel foods. However, their acceptance and use in risk assessment remain uncertain owing to a lack of validation, due in part to conflicting findings in the literature: while some studies report strong correlations between in silico and in vitro digestibility results, others indicate significant discrepancies. This review critically evaluates the potential regulatory application of in silico protein digestibility models in novel food risk assessment, highlighting key challenges such as model standardization, validation against experimental data, and the influence of protein structure and digestion conditions. Future research should focus on refining model accuracy and establishing clear validation frameworks to enhance regulatory confidence in in silico digestion tools.
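As a flavour of what such digestion models compute, the sketch below applies the classic trypsin rule (cleave after K or R unless the next residue is P) to a toy sequence. Real tools implement many proteases and exceptions, so this is a simplification, not any specific tool's algorithm:

```python
# Hedged sketch of an in silico digestion step: simulating trypsin cleavage
# with the single rule "cut after K or R, except before P".
def trypsin_digest(sequence: str) -> list[str]:
    peptides, start = [], 0
    for i, residue in enumerate(sequence[:-1]):
        if residue in "KR" and sequence[i + 1] != "P":
            peptides.append(sequence[start:i + 1])
            start = i + 1
    peptides.append(sequence[start:])
    return peptides

# Toy sequence: cleavage after K and R, but the K-P bond is protected.
print(trypsin_digest("MKWVTFISLLLLFSSAYSRGVKPHR"))
# ['MK', 'WVTFISLLLLFSSAYSR', 'GVKPHR']
```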
{"title":"In silico analyses as a tool for regulatory assessment of protein digestibility: Where are we?","authors":"Fernando Rivero-Pino, Caroline Idowu, Hannes Malfroy, Diana Rueda, Hannah Lester","doi":"10.1016/j.comtox.2025.100372","DOIUrl":"10.1016/j.comtox.2025.100372","url":null,"abstract":"<div><div><em>In silico</em> tools are emerging as a valuable resource for predicting the behaviour of proteins, not only for the assessment of toxicity and allergenicity, but also for modelling digestion to study protein digestibility. These methods offer cost-effective, high-throughput alternatives to traditional <em>in vitro</em> and <em>in vivo</em> methods. Computational models simulate enzymatic digestion, allowing the analysis of protein cleavage and peptide release. Complementary tools such as molecular docking have also been proposed as part of the <em>in silico</em> battery of tests. Given their efficiency, <em>in silico</em> approaches could ultimately be proposed to support regulated product applications, particularly in assessing protein digestibility for novel foods. However, their acceptance and use in risk assessment remains uncertain due to a lack of validation in part due to conflicting findings cited in the literature − while some studies report strong correlations between <em>in silico</em> and <em>in vitro</em> digestibility results, others indicate significant discrepancies. This review critically evaluates the potential regulatory application of <em>in silico</em> protein digestibility models for use in novel food risk assessment, highlighting key challenges such as model standardization, validation against experimental data, and the influence of protein structure and digestion conditions. Future research should focus on refining model accuracy and establishing clear validation frameworks to enhance regulatory confidence in <em>in silico</em> digestion tools.</div></div>","PeriodicalId":37651,"journal":{"name":"Computational Toxicology","volume":"35 ","pages":"Article 100372"},"PeriodicalIF":2.9,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144780871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}