Pub Date: 2025-01-01 | DOI: 10.1016/j.ibmed.2025.100220
Esaie Naroum, Ebenezer Maka Maka, Hamadjam Abboubakar, Paul Dayang, Appolinaire Batoure Bamana, Benjamin Garga, Hassana Daouda Daouda, Mohsen Bakouri, Ilyas Khan
The Plasmodium parasite, which causes malaria, is transmitted by Anopheles mosquitoes and remains a major barrier to development in Africa, particularly where environmental conditions favor the disease's spread. This study examines several machine learning approaches, including long short-term memory (LSTM) networks, random forests (RF), support vector machines (SVM), and regularized regression models (Ridge, Lasso, and ElasticNet), to forecast the occurrence of malaria in the Adamaoua region of Cameroon. The LSTM, a recurrent neural network variant, performed best, with 76% accuracy and a low error rate (RMSE = 0.08). Statistical evidence indicates that temperatures above 34 °C halt mosquito vector reproduction, thereby slowing the spread of malaria, whereas higher humidity increases morbidity. The survey also identified high-risk areas: Ngaoundéré Rural, Ngaoundéré Urban, and Meiganga accounted for 20.1%, 12.3%, and 10.0% of the region's malaria cases, respectively, between 2018 and 2022. The forecast projects a gradual rise in malaria cases in the Adamaoua region from 2023 to 2026, a peak in 2029, and a decline by 2031.
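The forecasting setup above can be illustrated with a minimal sketch: reframing a case series into lag windows (the supervised input an LSTM or regression forecaster trains on) and scoring a forecast with the RMSE metric the study reports. The series values and the persistence baseline are illustrative toy data, not the paper's dataset or model.

```python
import math

def make_windows(series, lag):
    """Reframe a time series as supervised pairs:
    `lag` past values -> next value."""
    X, y = [], []
    for i in range(len(series) - lag):
        X.append(series[i:i + lag])
        y.append(series[i + lag])
    return X, y

def rmse(actual, predicted):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Toy normalized monthly case series (hypothetical numbers).
cases = [0.2, 0.25, 0.3, 0.28, 0.35, 0.4, 0.38, 0.42]
X, y = make_windows(cases, lag=3)
# Naive persistence baseline: predict the last value of each window.
preds = [w[-1] for w in X]
print(round(rmse(y, preds), 3))
```

A trained forecaster would replace the persistence baseline; the windowing and RMSE scoring stay the same.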
Title: Comparative analysis of deep learning and machine learning techniques for forecasting new malaria cases in Cameroon’s Adamaoua region
Journal: Intelligence-based medicine, Volume 11, Article 100220
Pub Date: 2025-01-01 | DOI: 10.1016/j.ibmed.2025.100248
ArunaDevi Karuppasamy, Hamza zidoum, Majda Said Sultan Al-Rashdi, Maiya Al-Bahri
Deep learning (DL) has had a significant impact on a wide range of pattern recognition applications, driving advances in areas such as visual recognition, autonomous vehicles, language processing, and healthcare. Deep learning is now widely applied to medical images to identify diseases efficiently, yet only a small number of such applications have reached clinical settings. The main obstacles are inadequate annotated data, image noise, and the challenges of data collection. This study proposes a convolutional autoencoder to classify breast cancer tumors, using the Sultan Qaboos University Hospital (SQUH) and BreakHis datasets. The proposed model, a Convolutional AutoEncoder with a modified Loss Function (CAE-LF), achieved strong performance, attaining an F1-score of 0.90, recall of 0.89, and accuracy of 91%, results comparable to earlier research. Additional analyses on the SQUH dataset yield F1-scores of 0.91, 0.93, 0.92, and 0.93 at 4x, 10x, 20x, and 40x magnifications, respectively. This study highlights the potential of deep learning for analyzing medical images to classify breast tumors.
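The abstract does not detail CAE-LF's loss modification; a common modification for imbalanced medical data is class-weighted binary cross-entropy, sketched below purely as a hypothetical illustration (the `pos_weight` value is an assumption, not taken from the paper).

```python
import math

def weighted_bce(y_true, y_prob, pos_weight=2.0, eps=1e-7):
    """Class-weighted binary cross-entropy: up-weights errors on the
    positive (e.g. malignant) class by `pos_weight`. Probabilities are
    clamped to avoid log(0)."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)
        total += -(pos_weight * t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

print(round(weighted_bce([1, 0, 1], [0.9, 0.2, 0.8]), 4))
```

With `pos_weight > 1`, a missed positive costs more than a missed negative, which is one standard way to bias a classifier toward recall on the minority class.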
Title: Optimizing breast cancer diagnosis with convolutional autoencoders: Enhanced performance through modified loss functions
Journal: Intelligence-based medicine, Volume 11, Article 100248
Pub Date: 2025-01-01 | DOI: 10.1016/j.ibmed.2025.100267
Shuaibu Saidu Musa, Adamu Muhammad Ibrahim, Muhammad Yasir Alhassan, Abubakar Hafs Musa, Abdulrahman Garba Jibo, Auwal Rabiu Auwal, Olalekan John Okesanya, Zhinya Kawa Othman, Muhammad Sadiq Abubakar, Mohamed Mustaf Ahmed, Carina Joane V. Barroso, Abraham Fessehaye Sium, Manuel B. Garcia, James Brian Flores, Adamu Safiyanu Maikifi, M.B.N. Kouwenhoven, Don Eliseo Lucero-Prisno
The fusion of molecular-scale engineering in nanotechnology with machine learning (ML) analytics is reshaping the field of precision medicine. Nanoparticles enable ultrasensitive diagnostics, targeted drug and gene delivery, and high-resolution imaging, whereas ML models mine vast multimodal datasets to optimize nanoparticle design, enhance predictive accuracy, and personalize treatment in real time. Recent breakthroughs include ML-guided formulations of lipid, polymeric, and inorganic carriers that cross biological barriers; AI-enhanced nanosensors that flag early disease from breath, sweat, or blood; and nanotheranostic agents that simultaneously track and treat tumors. Comparative insights into Retrieval-Augmented Generation and supervised learning pipelines reveal distinct advantages for nanodevice engineering across diverse data environments. An expanded focus on explainable AI tools, such as SHAP, LIME, Grad-CAM, and Integrated Gradients, highlights their role in enhancing transparency, trust, and interpretability in nano-enabled clinical decisions. A structured narrative review method was applied, and key ML model performances were synthesized to strengthen analytical clarity. Emerging biodegradable nanomaterials, autonomous micro-nanorobots, and hybrid lab-on-chip systems promise faster point-of-care decisions but raise pressing questions about data integrity, interpretability, scalability, regulation, ethics, and equitable access. Addressing these hurdles will require robust data standards, privacy safeguards, interdisciplinary R&D networks, and flexible approval pathways to translate bench advances into bedside benefits for patients. This review synthesizes the current landscape, critical challenges, and future directions at the intersection of nanotechnology and ML in precision medicine.
Title: Nanotechnology and machine learning: a promising confluence for the advancement of precision medicine
Journal: Intelligence-based medicine, Volume 12, Article 100267
Pub Date: 2025-01-01 | DOI: 10.1016/j.ibmed.2025.100265
Abedin Keshavarz, Amir Lakizadeh
Polypharmacy, the concurrent use of multiple medications, increases the risk of adverse effects due to drug interactions. As polypharmacy becomes more prevalent, forecasting these interactions is essential in the pharmaceutical field. Because clinical trials are limited in their ability to detect rare side effects associated with polypharmacy, computational methods are being developed to model these adverse effects. This study introduces PU-MLP, a method based on a multi-layer perceptron, to predict side effects of drug combinations. The approach consists of three stages: first, it creates an optimal representation of each drug using a combination of a random forest classifier, graph neural networks (GNNs), and dimensionality reduction techniques; second, it employs positive-unlabeled (PU) learning to address uncertainty in the data; finally, a multi-layer perceptron predicts polypharmacy side effects. Performance evaluation using 5-fold cross-validation shows that the proposed method surpasses other approaches, achieving scores of 0.99, 0.99, and 0.98 in AUPR, AUC, and F1 measures, respectively.
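The positive-unlabeled step can be illustrated with the classic Elkan–Noto correction, a standard PU-learning technique; the abstract does not specify which PU method PU-MLP uses, so this is a generic sketch with toy classifier scores.

```python
def elkan_noto_adjust(scores_unlabeled, scores_positive):
    """Elkan-Noto PU correction: a classifier trained to separate
    labeled positives from unlabeled examples estimates p(s=1|x);
    dividing by c = E[p(s=1|x) | x positive] recovers an estimate
    of the true positive probability p(y=1|x)."""
    c = sum(scores_positive) / len(scores_positive)
    return [min(s / c, 1.0) for s in scores_unlabeled]

# Scores on held-out labeled positives calibrate c; scores on
# unlabeled drug pairs are then rescaled (all values are toy data).
adjusted = elkan_noto_adjust([0.4, 0.1, 0.7], [0.8, 0.9, 0.7])
print([round(a, 2) for a in adjusted])
```

The key insight is that under the "selected completely at random" assumption, unlabeled examples scoring near `c` are likely hidden positives.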
Title: PU-MLP: A PU-learning based method for polypharmacy side-effects detection based on multi-layer perceptron and feature extraction techniques
Journal: Intelligence-based medicine, Volume 12, Article 100265
Dental caries is one of the most common dental problems. It leads to tooth loss and affects the tooth root, creating a need for automatic caries detection to reduce treatment costs and prevent its consequences. The Lightweight Caries Segmentation Network (LCSNet) proposed in this study locates dental caries by applying pixel-wise segmentation to dental photographs taken with various Android phones. LCSNet uses a Dual Multiscale Residual (DMR) block in both the encoder and decoder, applies transfer learning through a pre-trained InceptionV3 model at the bottleneck layer, and incorporates a Squeeze-and-Excitation block in the skip connection, effectively extracting spatial information even from images in which 95 % is background and only 5 % is the area of interest. A new dataset was developed by gathering oral photographs of dental caries from two hospitals, with advanced augmentation techniques applied. LCSNet achieved an accuracy of 97.36 %, precision of 73.1 %, recall of 70.2 %, an F1-score of 71.14 %, and an Intersection-over-Union (IoU) of 56.8 %. Expert dentists confirmed that the LCSNet model proposed in this in vivo study accurately segments the position and texture of dental caries. Qualitative and quantitative performance analyses, along with comparisons of efficiency and computational requirements, were conducted against other deep learning models. The proposed model outperforms existing deep learning models and shows significant potential for integration into a smartphone-based oral disease detection system, potentially replacing some conventional clinical methods.
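The Intersection-over-Union figure reported above is the overlap between predicted and ground-truth masks divided by their union; a minimal sketch on toy flattened binary masks:

```python
def iou(pred_mask, true_mask):
    """Intersection-over-Union for binary segmentation masks,
    given as flattened per-pixel 0/1 lists."""
    inter = sum(1 for p, t in zip(pred_mask, true_mask) if p and t)
    union = sum(1 for p, t in zip(pred_mask, true_mask) if p or t)
    return inter / union if union else 1.0  # two empty masks match

# Toy 6-pixel masks: 2 overlapping pixels, 4 in the union.
pred = [0, 1, 1, 1, 0, 0]
true = [0, 0, 1, 1, 1, 0]
print(iou(pred, true))
```

Because IoU ignores the (dominant) true-negative background pixels, it is a stricter metric than accuracy for images that are 95 % background, which is why the reported 97.36 % accuracy and 56.8 % IoU can coexist.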
Title: LCSNet: Lightweight Caries Segmentation Network for the segmentation of dental caries using smartphone photographs
Authors: Radha R.C., B.S. Raghavendra, Rishabh Kumar Hota, K.R. Vijayalakshmi, Seema Patil, A.V. Narasimhadhan
DOI: 10.1016/j.ibmed.2025.100304
Journal: Intelligence-based medicine, Volume 12, Article 100304
Pub Date: 2025-01-01
Background
Cardiovascular disease causes 17.9 million deaths annually, yet current AI systems achieve ∼82 % accuracy without uncertainty quantification, limiting clinical utility where prediction confidence directly guides life-saving treatment decisions.
Objective
We developed an uncertainty-aware hybrid optimization framework for robust CVD detection that provides clinicians with both risk predictions and confidence intervals, enabling personalized decision-making under real-world clinical conditions.
Methods
Our clinical translation framework integrates multiple complementary AI models (Gaussian processes, gradient-boosted trees, Transformers) through uncertainty-guided optimization. Key clinical innovations include: (1) real-time uncertainty calibration responding to data quality variations, (2) dynamic model weighting adapting to individual patient characteristics, and (3) interpretable confidence intervals supporting clinical decision protocols.
Results
Clinical validation on 12,458 CVD patients from MIMIC-III and UK Biobank demonstrated clinically significant improvements: +1.4 % AUC (0.853 vs 0.839, p < 0.01) translating to 50 additional correct diagnoses per 10,000 patients, +1.5 % balanced accuracy, and 20 % better uncertainty calibration. The framework maintained robust performance (>80 % AUC) under realistic clinical noise while providing reliable confidence intervals across all risk levels.
Clinical translation
This framework delivers immediate clinical utility through real-time inference (<2s), FHIR-compliant EHR integration, and physician-validated uncertainty interpretation. Implementation prevents an estimated 50 missed diagnoses and 23 unnecessary procedures per 10,000 patients screened annually.
Conclusions
Our uncertainty-aware framework represents the first clinically ready AI system providing both accurate CVD risk assessment and trustworthy confidence measures, directly addressing physician adoption barriers and supporting personalized cardiovascular care.
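One simple way to realize uncertainty-guided model combination, sketched here as a generic illustration rather than the paper's dynamic weighting scheme, is inverse-variance weighting of the per-model predictions, so that confident models dominate the fused estimate:

```python
def fuse_predictions(means, variances):
    """Inverse-variance weighted fusion of per-model predictions:
    each model contributes in proportion to 1/variance, and the
    fused variance shrinks below the best single model's."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused_mean = sum(w * m for w, m in zip(weights, means)) / total
    fused_var = 1.0 / total
    return fused_mean, fused_var

# Hypothetical GP, GBT, and Transformer risk estimates with
# per-model predictive variances (toy values, not study data).
mean, var = fuse_predictions([0.70, 0.60, 0.80], [0.01, 0.04, 0.02])
print(round(mean, 3), round(var, 4))
```

The fused confidence interval (mean ± a multiple of the fused standard deviation) is the kind of per-patient uncertainty readout the framework surfaces to clinicians.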
Title: Uncertainty-aware hybrid optimization for robust cardiovascular disease detection: A clinical translation framework
Authors: Tamanna Jena, Rahul Suryodai, Desidi Narsimha Reddy, Kambala Vijaya Kumar, Elangovan Muniyandy, N.V. Phani Sai Kumar
DOI: 10.1016/j.ibmed.2025.100302
Journal: Intelligence-based medicine, Volume 12, Article 100302
Pub Date: 2025-01-01
Pub Date: 2025-01-01 | DOI: 10.1016/j.ibmed.2025.100306
Peru Gabirondo, María García-Martínez, Ana Pozueta-Cantudo, Patricia Laura Maran, Patricia Dias, Tomas Rojo, Javier Jiménez-Raboso, Carmen Lage, Francisco Martínez-Dubarbie, Sara López-García, Marta Fernández-Matarrubia, Andrea Corrales-Pardo, María Bravo, Juan Irure-Ventura, Marcos López-Hoyos, Pascual Sánchez-Juan, Carla Zaldua, Eloy Rodríguez-Rodríguez
Title: Speech biomarkers predict amyloid status in cognitively unimpaired adults
Journal: Intelligence-based medicine, Volume 12, Article 100306
This paper proposes a novel emotion detection system based on a Graph Neural Network (GNN) architecture that integrates and learns from multiple data sources (EEG, facial expression, and physiological signals). The proposed GNN learns interactions between modalities to produce a single, unified emotion categorization. The model performs strongly, achieving 91.25 % accuracy, 91.26 % precision, 91.25 % recall, and a 91.25 % F1-score, and offers a sensible trade-off between speed and precision, with a computation time of 163 ms. Its advantage stems primarily from its ability to represent complex relations among multi-modal inputs, improving real-time recognition and classification of emotional states. The proposed GNN outperforms traditional models such as SVM, KNN, CCA, CNN, and RNN in classification precision and multi-modal data fusion, and remains the most accurate and robust solution even where CNN and RNN come close.
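The core GNN operation, aggregating information across connected modality nodes, can be sketched as one round of mean-aggregation message passing. This is a minimal building-block illustration, not the paper's architecture; the features and edges are toy values.

```python
def message_pass(features, edges):
    """One round of mean-aggregation message passing: each node's new
    feature is the average of its own and its neighbors' features."""
    neighbors = {i: [i] for i in range(len(features))}  # self-loops
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    return [sum(features[j] for j in neighbors[i]) / len(neighbors[i])
            for i in range(len(features))]

# Three modality nodes (EEG, face, physiological) with scalar
# features, fully connected so every modality informs the others.
print(message_pass([0.9, 0.3, 0.6], [(0, 1), (0, 2), (1, 2)]))
```

A real GNN applies learned weight matrices and nonlinearities around this aggregation, and stacks several rounds before a classification head.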
Title: Enhancing emotion recognition through multi-modal data fusion and graph neural networks
Authors: Kasthuri Devarajan, Suresh Ponnan, Sundresan Perumal
DOI: 10.1016/j.ibmed.2025.100291
Journal: Intelligence-based medicine, Volume 12, Article 100291
Pub Date: 2025-01-01
Pub Date: 2025-01-01 | DOI: 10.1016/j.ibmed.2025.100292
G. Inbasakaran, J. Anitha Ruth
Purpose
This study develops a computationally efficient Convolutional Neural Network (CNN) for lung cancer classification in Computed Tomography (CT) images, addressing the critical need for accurate diagnostic tools deployable in resource-constrained clinical settings.
Methods
Using the IQ-OTH/NCCD dataset (1190 CT images: normal, benign, and malignant classes from 110 patients), we implemented systematic architecture optimization with strategic data augmentation to address class imbalance and limited dataset challenges. Patient-level data splitting prevented leakage, ensuring valid performance metrics. The model was evaluated using 5-fold cross-validation and compared against established architectures using McNemar's test for statistical significance.
Results
The optimized CNN achieved 94 % classification accuracy with only 4.2 million parameters and an 18 ms inference time. Performance significantly exceeded AlexNet (85 %), VGG-16 (88 %), ResNet-50 (90 %), InceptionV3 (87 %), and DenseNet (86 %) with p < 0.05. Malignant case detection showed excellent clinical metrics (precision: 0.96, recall: 0.95, F1-score: 0.95), critical for minimizing false negatives. Ablation studies revealed that data augmentation contributed a 6.6 % accuracy improvement, with rotation and translation proving most effective. The model operates 4.3 × faster than ResNet-50 while using 6 × fewer parameters, enabling deployment on standard clinical workstations with 4–8 GB of GPU memory.
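McNemar's test, used here for the pairwise model comparisons, needs only the discordant counts from the two models' predictions on the same test images. A minimal exact (binomial) version (illustrative counts, not the paper's data):

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar p-value from discordant counts.

    b = images model A classified correctly and model B missed;
    c = the reverse. Concordant pairs carry no information: under the
    null hypothesis of equal error rates, the discordant pairs follow
    Binomial(b + c, 0.5).
    """
    n = b + c
    k = min(b, c)
    # One tail of the binomial, doubled for a two-sided test.
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p)
```

With b = 1 and c = 9, for example, the p-value is about 0.021, i.e. significant at the 0.05 level; the chi-square approximation in `statsmodels.stats.contingency_tables.mcnemar` gives similar results for larger counts.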
Conclusions
Carefully optimized CNN architectures can achieve superior diagnostic performance while meeting computational constraints of real-world medical settings. Our approach demonstrates that systematic optimization strategies effectively balance accuracy with clinical deployment feasibility, providing a practical framework for implementing AI-assisted lung cancer detection in resource-limited healthcare environments. The model's high sensitivity for malignant cases positions it as a valuable clinical decision support tool.
{"title":"Clinical-ready CNN framework for lung cancer classification: Systematic optimization for healthcare deployment with enhanced computational efficiency","authors":"G. Inbasakaran, J. Anitha Ruth","doi":"10.1016/j.ibmed.2025.100292","DOIUrl":"10.1016/j.ibmed.2025.100292","url":null,"abstract":"<div><h3>Purpose</h3><div>This study develops a computationally efficient Convolutional Neural Network (CNN) for lung cancer classification in Computed Tomography (CT) images, addressing the critical need for accurate diagnostic tools deployable in resource-constrained clinical settings.</div></div><div><h3>Methods</h3><div>Using the IQ-OTH/NCCD dataset (1190 CT images: normal, benign, and malignant classes from 110 patients), we implemented systematic architecture optimization with strategic data augmentation to address class imbalance and limited dataset challenges. Patient-level data splitting prevented leakage, ensuring valid performance metrics. The model was evaluated using 5-fold cross-validation and compared against established architectures using McNemar's test for statistical significance.</div></div><div><h3>Results</h3><div>The optimized CNN achieved 94 % classification accuracy with only 4.2 million parameters and 18 ms inference time. Performance significantly exceeded AlexNet (85 %), VGG-16 (88 %), ResNet-50 (90 %), InceptionV3 (87 %), and DenseNet (86 %) with p < 0.05. Malignant case detection showed excellent clinical metrics (precision: 0.96, recall: 0.95, F1-score: 0.95), critical for minimizing false negatives. Ablation studies revealed data augmentation contributed 6.6 % accuracy improvement, with rotation and translation proving most effective. 
The model operates 4.3 × faster than ResNet-50 while using 6 × fewer parameters, enabling deployment on standard clinical workstations with 4–8 GB GPU memory.</div></div><div><h3>Conclusions</h3><div>Carefully optimized CNN architectures can achieve superior diagnostic performance while meeting computational constraints of real-world medical settings. Our approach demonstrates that systematic optimization strategies effectively balance accuracy with clinical deployment feasibility, providing a practical framework for implementing AI-assisted lung cancer detection in resource-limited healthcare environments. The model's high sensitivity for malignant cases positions it as a valuable clinical decision support tool.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100292"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144893234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-01DOI: 10.1016/j.ibmed.2025.100206
Sajid Naveed , Mujtaba Husnain
Medical experts and physicians examine gene expression abnormalities in glioblastoma (GBM) cancer patients to identify drug response. The main objective of this research is to build a machine learning (ML) based model that improves the outcome of cancer medication while saving medical practitioners time and effort. Our goal is to develop a drug response recommendation system that uses the gene expression data of cancer cell lines to predict the response of anticancer drugs in terms of half-maximal inhibitory concentration (IC50). Genetic data from a GBM cancer patient is used as input to the system to predict and recommend the responses of multiple anticancer drugs for a particular cancer sample. In this research, we used K-mer molecular fragmentation to process drug SMILES in a novel way, which enabled us to build a competent model that predicts drug response. We used the Light Gradient Boosting Machine (LightGBM) regression algorithm and Genomics of Drug Sensitivity in Cancer (GDSC) data for the proposed recommendation system. The results showed that all predicted IC50 values fall within the range of the real values when examining GBM data. Two drugs, temozolomide and carmustine, were predicted with a Mean Squared Error (MSE) of 0.10 and 0.11, respectively, and 0.41 on unseen test samples. These recommended responses were then verified by expert doctors, who confirmed that the responses to these drugs were very close to the actual responses. These recommendations are also effective in slowing the growth of these tumors and improving patients' quality of life by monitoring medication effects.
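The abstract does not show the fragmentation code, but K-mer processing of SMILES can be sketched as overlapping substring counts that become drug feature vectors (function names here are illustrative, not the authors'):

```python
from collections import Counter

def smiles_kmers(smiles, k=3):
    """Overlapping k-character fragments of a SMILES string."""
    return [smiles[i:i + k] for i in range(len(smiles) - k + 1)]

def kmer_feature_matrix(smiles_list, k=3):
    """Bag-of-k-mers count vectors, one row per drug.

    The shared vocabulary makes the rows directly usable as input
    features for a gradient-boosted regressor such as LightGBM.
    """
    vocab = sorted({m for s in smiles_list for m in smiles_kmers(s, k)})
    rows = []
    for s in smiles_list:
        counts = Counter(smiles_kmers(s, k))
        rows.append([counts.get(m, 0) for m in vocab])
    return vocab, rows
```

In a pipeline consistent with the abstract, each drug's k-mer vector would be concatenated with a cell line's gene expression profile and fed to `lightgbm.LGBMRegressor` with IC50 as the regression target.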
{"title":"A drug recommendation system based on response prediction: Integrating gene expression and K-mer fragmentation of drug SMILES using LightGBM","authors":"Sajid Naveed , Mujtaba Husnain","doi":"10.1016/j.ibmed.2025.100206","DOIUrl":"10.1016/j.ibmed.2025.100206","url":null,"abstract":"<div><div>Medical experts and physicians examine the gene expression abnormality in glioblastoma (GBM) cancer patients to identify the drug response. The main objective of this research is to build a machine learning (ML) based model for improve the outcome of cancer medication to save the time and effort of medical practitioners. Developing a drug response recommendation system is our goal that uses the gene expression data of cancer cell lines to predict the response of anticancer drugs in terms of half-maximal inhibitory concentration (IC50). Genetic data from a GBM cancer patient is used as input into a system to predict and recommend the response of multiple anticancer drugs in a particular cancer sample. In this research, we used K-mer molecular fragmentation to process drug SMILES in a novel way, which enabled us to build a competent model that provides drug response. We used the Light Gradient Boosting Machine (LightGBM) regression algorithm and Genomics of Drug Sensitivity of Cancer (GDSC) data for this proposed recommendation system. The results showed that all predicted IC50 values are fall within the range of the real values when examining GBM data. Two drugs, temozolomide and carmustine, were predicted with a Mean Squared Error (MSE) of 0.10 and 0.11 respectively, and 0.41 in unseen test samples. These recommended responses were then verified by expert doctors, who confirmed that the responses to these drugs were very close to the actual response. 
These recommendations are also effective in slowing the growth of these tumors and improving patients' quality of life by monitoring medication effects.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100206"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143173636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}