Artificial neural network based automatic detection of motor evoked potentials
Pub Date: 2025-01-01 | Epub Date: 2025-09-13 | DOI: 10.1016/j.ibmed.2025.100295
Bethel Osuagwu , Hongli Huang , Emily L. McNicol , Vellaisamy A.L. Roy , Aleksandra Vučkovič
Introduction
Motor evoked potentials (MEPs) are detected using various methods that identify signal changepoints. Current detection methods perform well given a high signal-to-noise ratio, but performance can diminish in the presence of artefacts, such as those arising from poor signal quality and unwanted electrical potentials. Part of the problem is likely that these methods ignore the morphology of the signal, making it impossible to differentiate noise from MEPs.
Methods
For the first time, we investigated a new detection method able to learn MEP morphology using artificial neural networks. To build an MEP detection model, we trained deep neural networks with architectures combining a CNN with either an LSTM or a self-attention mechanism, using sample MEP data recorded from able-bodied individuals. The MEP detection capability of the models was compared with that of a changepoint-based detection method.
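The abstract does not include an implementation; as a rough illustration, a minimal PyTorch sketch of the CNN-plus-LSTM variant described above could look like the following. The window length, filter counts, and layer sizes are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CNNLSTMDetector(nn.Module):
    """Binary MEP detector sketch: 1-D convolutions learn local morphology,
    an LSTM summarises the sequence, and a linear head scores MEP presence.
    All hyperparameters are illustrative assumptions."""
    def __init__(self, n_filters=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, n_filters, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(n_filters, n_filters, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, 1, samples)
        z = self.conv(x)             # (batch, filters, samples / 4)
        z = z.transpose(1, 2)        # LSTM expects (batch, time, features)
        _, (h, _) = self.lstm(z)     # final hidden state summarises the window
        return self.head(h[-1])      # one logit per window: MEP vs. no MEP

# hypothetical 500-sample EMG windows around the stimulus
logits = CNNLSTMDetector()(torch.randn(8, 1, 500))
```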
Results
Our models reached a test accuracy of up to 89.7 ± 1.5 % on average. In a real-world evaluation, our models achieved an average detection accuracy of up to 94.7 ± 1.2 %, compared with 76.4 ± 5.3 % for the standard changepoint detection method (p = 0.004).
Conclusion
Artificial neural network models can be used for improved automated detection of MEPs.
{"title":"Artificial neural network based automatic detection of motor evoked potentials","authors":"Bethel Osuagwu , Hongli Huang , Emily L. McNicol , Vellaisamy A.L. Roy , Aleksandra Vučkovič","doi":"10.1016/j.ibmed.2025.100295","DOIUrl":"10.1016/j.ibmed.2025.100295","url":null,"abstract":"<div><h3>Introduction</h3><div>Motor evoked potentials (MEP) are detected using various methods that determine signal changepoints. The current detection methods perform well given a high signal to noise ratio. However, performance can diminish with artefact such as those arising due to poor signal quality and unwanted electrical potentials. Part of the problem is likely because the methods ignore the morphology of a signal making it impossible to differentiate noise from MEPs.</div></div><div><h3>Methods</h3><div>For the first time, we investigated a new detection method able to learn MEP morphology using artificial neural networks. To build an MEP detection model, we trained deep neural networks with architectures based on combined CNN and LSTM or self-attention mechanism, using sample MEP data recorded from able-bodied individuals. The MEP detection capability of the models was compared with that of a changepoint based detection method.</div></div><div><h3>Results</h3><div>Our models reached test accuracy of up to 89.7 ± 1.5 % on average. In a real-world setting evaluation, our models achieved average detection accuracy of up to 94.7 ± 1.2 %, compared with 76.4 ± 5.3 % for the standard changepoint detection method (p = 0.004).</div></div><div><h3>Conclusion</h3><div>Artificial neural network models can be used for improved automated detection of MEPs.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100295"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145094604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing generalization in whole-body MRI-based deep learning models: A novel data augmentation pipeline for cross-platform adaptation
Pub Date: 2025-01-01 | Epub Date: 2025-07-16 | DOI: 10.1016/j.ibmed.2025.100277
Roberto Diaz-Peregrino , Fabian Torres Robles , German Gonzalez , Roberto Palma , Boris Escalante-Ramirez , Jimena Olveres , Juan P. Reyes-Gonzalez , Jose A. Gomez-Coeto , Carlos A. Rodriguez-Herrera
Whole-body magnetic resonance imaging (WB-MRI) is a critical diagnostic tool in clinical practice. However, the manual interpretation of WB-MRI scans is a time-consuming and labor-intensive process. Integrating artificial intelligence (AI) has the potential to streamline these processes, yet the variability in MRI images due to differences in scanner features presents significant challenges for the generalization of AI models across different platforms. This study aims to address these challenges by developing and validating a data augmentation pipeline designed to effectively represent image artifacts from WB-MRI acquisition. The study employs a WB-MRI database to evaluate the generalization power of a segmentation model across platforms, with performance metrics such as the Dice Similarity Coefficient (DSC) and Area Under the Curve (AUC) being reported. The findings suggest that advanced data augmentation techniques can mitigate the impact of scanner variability, thereby enhancing the generalization capabilities of AI models in the context of WB-MRI analysis.
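The paper's pipeline is not reproduced here; the NumPy sketch below illustrates the general idea of simulating scanner-dependent acquisition artifacts for augmentation. The specific artifact models and parameters are assumptions, not the authors' pipeline.

```python
import numpy as np

def bias_field(img, strength=0.3):
    """Multiply by a smooth low-order field to mimic coil-sensitivity
    inhomogeneity, which varies between scanners (illustrative model)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w] / max(h, w)
    field = 1.0 + strength * (0.5 * x + 0.3 * y - 0.4 * x * y)
    return img * field

def rician_noise(img, sigma=0.02):
    """MRI magnitude noise is approximately Rician: add complex Gaussian
    noise and take the magnitude."""
    real = img + np.random.normal(0, sigma, img.shape)
    imag = np.random.normal(0, sigma, img.shape)
    return np.sqrt(real**2 + imag**2)

def ghosting(img, shift=8, weight=0.15):
    """Overlay a shifted, attenuated copy of the image to mimic
    motion/ghosting artifacts along one axis (simplified)."""
    return (1 - weight) * img + weight * np.roll(img, shift, axis=0)

img = np.random.rand(128, 128).astype(np.float32)  # stand-in for a WB-MRI slice
augmented = ghosting(rician_noise(bias_field(img)))
```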
{"title":"Enhancing generalization in whole-body MRI-based deep learning models: A novel data augmentation pipeline for cross-platform adaptation","authors":"Roberto Diaz-Peregrino , Fabian Torres Robles , German Gonzalez , Roberto Palma , Boris Escalante-Ramirez , Jimena Olveres , Juan P. Reyes-Gonzalez , Jose A. Gomez-Coeto , Carlos A. Rodriguez-Herrera","doi":"10.1016/j.ibmed.2025.100277","DOIUrl":"10.1016/j.ibmed.2025.100277","url":null,"abstract":"<div><div>Whole-body magnetic resonance imaging (WB-MRI) is a critical diagnostic tool in clinical practice. However, the manual interpretation of WB-MRI scans is a time-consuming and labor-intensive process. Integrating artificial intelligence (AI) has the potential to streamline these processes, yet the variability in MRI images due to differences in scanner features presents significant challenges for the generalization of AI models across different platforms. This study aims to address these challenges by developing and validating a data augmentation pipeline designed to effectively represent image artifacts from WB-MRI acquisition. The study employs a WB-MRI database to evaluate the generalization power of a segmentation model across platforms, with performance metrics such as the Dice Similarity Coefficient (DSC) and Area Under the Curve (AUC) being reported. The findings suggest that advanced data augmentation techniques can mitigate the impact of scanner variability, thereby enhancing the generalization capabilities of AI models in the context of WB-MRI analysis.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100277"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144652996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fully automatic content-aware tiling pipeline for pathology whole slide images
Pub Date: 2025-01-01 | Epub Date: 2025-11-24 | DOI: 10.1016/j.ibmed.2025.100318
Falah Jabar , Lill-Tove Rasmussen Busund , Biagio Ricciuti , Masoud Tafavvoghi , Thomas K. Kilvaer , David J. Pinato , Mette Pøhl , Sigve Andersen , Tom Donnem , David J. Kwiatkowski , Mehrdad Rakaee
Tiling (or patching) of histology Whole Slide Images (WSIs) is a required initial step in the development of deep learning (DL) models: gigapixel-scale WSIs must be divided into smaller, manageable image tiles. Standard WSI tiling techniques often exclude diagnostically important tissue regions or include regions with artifacts such as folds, blurs, and pen-markings, which can significantly degrade DL model performance and analysis. This paper introduces WSI-SmartTiling, a fully automated, deep learning-based, content-aware WSI tiling pipeline designed to retain maximal information content from a WSI. A supervised DL model for artifact detection was developed using pixel-based semantic segmentation at high magnification (20× and 40×) to classify WSI regions as either artifacts or qualified tissue. The model was trained on a diverse dataset and validated using both internal and external datasets. Quantitative and qualitative evaluations demonstrated its superiority, outperforming state-of-the-art methods with accuracy, precision, recall, and F1 scores exceeding 95 % across all artifact types, along with Dice scores above 94 %. In addition, WSI-SmartTiling integrates a generative adversarial network model to reconstruct tissue regions obscured by pen-markings of various colors, ensuring that relevant valuable areas are preserved. Lastly, while excluding artifacts, the pipeline efficiently tiles qualified tissue regions with minimal tissue loss.
In conclusion, this high-resolution preprocessing pipeline can significantly improve pathology WSI-based feature extraction and DL-based classification by minimizing tissue loss and providing high-quality, artifact-free tissue tiles. The WSI-SmartTiling pipeline is publicly available on GitHub.
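The released pipeline is on GitHub; as a stripped-down illustration of the content-aware filtering step, the sketch below keeps only tiles whose segmentation mask shows enough qualified tissue and few artifact pixels. The mask labels, tile size, and thresholds are hypothetical, not the paper's settings.

```python
import numpy as np

def qualified_tiles(slide, mask, tile=512, min_tissue=0.5, max_artifact=0.05):
    """Yield (x, y, tile_image) for tiles with enough tissue and few artifact
    pixels.  `mask` labels each pixel 0=background, 1=tissue, 2=artifact
    (fold/blur/pen), as an artifact-segmentation model might output."""
    H, W = mask.shape
    for y in range(0, H - tile + 1, tile):
        for x in range(0, W - tile + 1, tile):
            m = mask[y:y + tile, x:x + tile]
            tissue = (m == 1).mean()
            artifact = (m == 2).mean()
            if tissue >= min_tissue and artifact <= max_artifact:
                yield x, y, slide[y:y + tile, x:x + tile]

# toy example: random mask over a fake 2048x2048 slide
slide = np.zeros((2048, 2048, 3), dtype=np.uint8)
mask = np.random.choice([0, 1, 2], size=(2048, 2048), p=[0.3, 0.6, 0.1])
kept = list(qualified_tiles(slide, mask))
print(f"kept {len(kept)} tiles")
```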
{"title":"Fully automatic content-aware tiling pipeline for pathology whole slide images","authors":"Falah Jabar , Lill-Tove Rasmussen Busund , Biagio Ricciuti , Masoud Tafavvoghi , Thomas K. Kilvaer , David J. Pinato , Mette Pøhl , Sigve Andersen , Tom Donnem , David J. Kwiatkowski , Mehrdad Rakaee","doi":"10.1016/j.ibmed.2025.100318","DOIUrl":"10.1016/j.ibmed.2025.100318","url":null,"abstract":"<div><div>Tiling (or patching) histology Whole Slide Images (WSIs) is a required initial step in the development of deep learning (DL) models. Gigapixel-scale WSIs must be divided into smaller, manageable image tiles. Standard WSI tiling techniques often exclude diagnostically important tissue regions or include regions with artifacts such as folds, blurs, and pen-markings, which can significantly degrade DL model performance and analysis. This paper introduces WSI-SmartTiling, a fully automated, deep learning-based, content-aware WSI tiling pipeline designed to include maximal information content from WSI. A supervised DL model for artifact detection was developed using pixel-based semantic segmentation at high magnification (20× and 40x) to classify WSI regions as either artifacts or qualified tissue. The model was trained on a diverse dataset and validated using both internal and external datasets. Quantitative and qualitative evaluations demonstrated its superiority, outperforming state-of-the-art methods with accuracy, precision, recall, and F1 scores exceeding 95 % across all artifact types, along with Dice scores above 94 %. In addition, WSI-SmartTiling integrates a generative adversarial network model to reconstruct tissue regions obscured by pen-markings in various colors, ensuring relevant valuable areas are preserved. Lastly, while excluding artifacts, the pipeline efficiently tiles qualified tissue regions with minimum tissue loss.</div><div>In conclusion, this high-resolution preprocessing pipeline can significantly improve pathology WSI-based feature extraction and DL-based classification by minimizing tissue loss and providing high-quality – artifact-free – tissue tiles. The WSI-SmartTiling pipeline is publicly available on <span><span>GitHub</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100318"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145683783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Bayesian framework for LLM-enhanced history-taking in recurrent medical conditions to improve treatment outcomes: An empirical evaluation
Pub Date: 2025-01-01 | Epub Date: 2025-07-31 | DOI: 10.1016/j.ibmed.2025.100282
Timothy Suraj
This paper introduces a novel Bayesian framework integrating Large Language Models (LLMs) into medical history-taking specifically for recurrent medical conditions, aiming to overcome limitations of traditional methods and improve treatment outcomes. Unlike existing AI applications in healthcare that primarily focus on diagnostic classification or prediction in acute settings, our approach emphasizes iterative diagnostic refinement and explainable AI within a Bayesian probabilistic framework, offering a unique strategy for personalized management of recurrent conditions. We empirically evaluate this framework by analyzing the current limitations in clinical history-taking practices and leveraging the capabilities of modern LLMs to generate more comprehensive patient narratives, improve pattern recognition across longitudinal data, and enhance the identification of subtle disease precursors. Our review of preliminary implementations suggests that LLM integration into clinical workflows may reduce diagnostic errors, improve treatment adherence, and enable more personalized therapeutic interventions. However, significant challenges remain regarding clinical validation, privacy concerns, and integration with existing healthcare systems. We conclude that LLMs represent a promising tool for treating recurrent medical conditions when deployed as physician augmentation rather than replacement technologies.
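The probabilistic core of such a framework is iterated Bayesian updating over a differential diagnosis, with the LLM supplying structured findings extracted from the patient narrative. A minimal sketch with hypothetical diagnoses and made-up likelihood numbers:

```python
import numpy as np

def bayes_update(prior, likelihoods):
    """One round of diagnostic refinement: P(d | finding) ∝ P(finding | d) P(d)."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# hypothetical recurrent-headache differential
diagnoses = ["migraine", "tension-type", "cluster"]
p = np.array([0.5, 0.4, 0.1])            # prior informed by past episodes

# each finding (as an LLM might extract it) contributes P(finding | diagnosis);
# the numbers here are invented for illustration
findings = {
    "unilateral":  np.array([0.7, 0.2, 0.9]),
    "photophobia": np.array([0.8, 0.3, 0.4]),
}
for name, lik in findings.items():       # iterative refinement across findings
    p = bayes_update(p, lik)
    print(name, dict(zip(diagnoses, np.round(p, 3))))
```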
{"title":"A Bayesian framework for LLM-enhanced history-taking in recurrent medical conditions to improve treatment outcomes: An empirical evaluation","authors":"Timothy Suraj","doi":"10.1016/j.ibmed.2025.100282","DOIUrl":"10.1016/j.ibmed.2025.100282","url":null,"abstract":"<div><div>This paper introduces a novel Bayesian framework integrating Large Language Models (LLMs) into medical history-taking specifically for recurrent medical conditions, aiming to overcome limitations of traditional methods and improve treatment outcomes. Unlike existing AI applications in healthcare that primarily focus on diagnostic classification or prediction in acute settings, our approach emphasizes iterative diagnostic refinement and explainable AI within a Bayesian probabilistic framework, offering a unique strategy for personalized management of recurrent conditions. We empirically evaluate this framework by analyzing the current limitations in clinical history-taking practices and leveraging the capabilities of modern LLMs to generate more comprehensive patient narratives, improve pattern recognition across longitudinal data, and enhance the identification of subtle disease precursors. Our review of preliminary implementations suggests that LLM integration into clinical workflows may reduce diagnostic errors, improve treatment adherence, and enable more personalized therapeutic interventions. However, significant challenges remain regarding clinical validation, privacy concerns, and integration with existing healthcare systems. We conclude that LLMs represent a promising tool for treating recurrent medical conditions when deployed as physician augmentation rather than replacement technologies.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100282"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144771996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging Conv-XGBoost algorithm for perceived mental stress detection using Photoplethysmography
Pub Date: 2025-01-01 | Epub Date: 2025-02-03 | DOI: 10.1016/j.ibmed.2025.100209
Geethu S. Kumar, B. Ankayarkanni
Stress detection is crucial for monitoring mental health and preventing stress-related disorders. Real-time stress detection shows promise with photoplethysmography (PPG), a non-invasive optical technique that analyzes blood volume changes in the microvascular bed of tissue. This study introduces a novel hybrid model, Conv-XGBoost, which combines Convolutional Neural Networks (CNN) and eXtreme Gradient Boosting (XGBoost) to improve the accuracy and robustness of stress detection from PPG signals. The Conv-XGBoost model uses the feature extraction capabilities of CNNs to process PPG signals, converting them into spectrograms that capture the time–frequency characteristics of the data. The XGBoost component handles the complex, high-dimensional feature sets produced by the CNN, enhancing prediction through gradient boosting. This approach addresses the limitations of traditional machine learning algorithms that rely on hand-crafted features. The Pulse Rate Variability-based Photoplethysmography dataset was chosen for training and validation. The experiments revealed that the proposed Conv-XGBoost model outperformed more conventional machine learning techniques, with a training accuracy of 98.87%, a validation accuracy of 93.28%, and an F1-score of 97.25%. The model also demonstrated superior resilience to the noise and variability in PPG signals that are common in real-world scenarios. This study underscores how hybrid models can improve stress detection and sets the stage for future research integrating physiological signals with advanced deep learning techniques.
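As a rough sketch of the hybrid design (not the authors' trained model), a CNN can map PPG spectrograms to feature vectors that an XGBoost classifier then consumes. All shapes, signals, and hyperparameters below are illustrative, and the CNN is left untrained purely to show the data flow.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram
from xgboost import XGBClassifier

# hypothetical PPG windows: 64 recordings of 8 s at 64 Hz, binary stress labels
fs = 64
sigs = np.random.randn(64, 512).astype(np.float32)
labels = np.random.randint(0, 2, 64)

# time-frequency representation of each window (Sxx from scipy's spectrogram)
specs = np.stack([spectrogram(s, fs=fs, nperseg=64)[2] for s in sigs])

# small untrained CNN standing in for the learned feature extractor
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),        # -> 16*4*4 = 256 features
)
with torch.no_grad():
    feats = cnn(torch.from_numpy(specs).unsqueeze(1)).numpy()

# gradient-boosted trees on the CNN features
clf = XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(feats, labels)
print(clf.predict(feats[:4]))
```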
{"title":"Leveraging Conv-XGBoost algorithm for perceived mental stress detection using Photoplethysmography","authors":"Geethu S. Kumar, B. Ankayarkanni","doi":"10.1016/j.ibmed.2025.100209","DOIUrl":"10.1016/j.ibmed.2025.100209","url":null,"abstract":"<div><div>Stress detection is crucial for monitoring mental health and preventing stress-related disorders. Real-time stress detection shows promise with photoplethysmography (PPG), a non-invasive optical technology that analyzes blood volume changes in the microvascular bed of tissue. This study introduces a novel hybrid model, Conv-XGBoost, which combines Convolutional Neural Networks (CNN) and eXtreme Gradient Boosting (XGBoost) to improve the accuracy and robustness of stress detection from PPG signals. The Conv-XGBoost model utilizes the feature extraction capabilities of CNNs to process PPG signals, converting them into spectrograms that capture the time–frequency characteristics of data. The XGBoost component is essential for handling the complex, high-dimensional feature sets provided by the CNN, enhancing prediction capabilities through gradient boosting. This customized approach addresses the limitations of traditional machine learning algorithms in dealing with hand-crafted features. The Pulse Rate Variability-based Photoplethysmography dataset was chosen for training and validation. The outcomes of the experiments revealed that the proposed Conv-XGBoost model outperformed more conventional machine learning techniques with a training accuracy of 98.87%, validation accuracy of 93.28% and an F1-score of 97.25%. Additionally, the model demonstrated superior resilience to noise and variability in PPG signals, common in real-world scenarios. This study underscores how hybrid models can improve stress detection and sets the stage for future research integrating physiological signals with advanced deep learning techniques.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100209"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feature selection using hybridized Genghis Khan Shark with snow ablation optimization technique for multi-disease prognosis
Pub Date: 2025-01-01 | Epub Date: 2025-04-14 | DOI: 10.1016/j.ibmed.2025.100249
Ruqsar Zaitoon , Shaik Salma Asiya Begum , Sachi Nandan Mohanty , Deepa Jose
The exponential growth in medical data and feature dimensionality presents significant challenges in building accurate and efficient diagnostic models. High-dimensional datasets often contain redundant or irrelevant features that degrade classification performance and increase computational burden. Feature selection (FS) is therefore a critical step in medical data analysis to enhance model accuracy and interpretability. While many recent FS techniques rely on optimization algorithms, tuning their parameters and avoiding early convergence remain major challenges. This study introduces a novel hybrid optimization technique—Hybridized Genghis Khan Shark with Snow Ablation Optimization (HyGKS-SAO)—to identify the most informative features for multi-disease classification. The raw medical datasets are first pre-processed using a Tanh-based normalization method. The HyGKS-SAO algorithm then selects optimal features, balancing exploration and exploitation effectively. Finally, a multi-kernel support vector machine (SVM) is employed to classify diseases based on the selected features. The proposed framework is evaluated on six publicly available medical datasets, including breast cancer, diabetes, heart disease, stroke, lung cancer, and chronic kidney disease. Experimental results demonstrate the effectiveness of the proposed method, achieving 98 % accuracy, 97.99 % MCC, 96.31 % PPV, 97.35 % G-mean, 98.03 % Kappa Coefficient, and a low computation time of 50 s, outperforming several state-of-the-art approaches.
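HyGKS-SAO itself is not reproduced here; the sketch below shows the generic wrapper pattern that such metaheuristic feature selection instantiates, with a simple hill-climbing search standing in for the hybrid optimizer and cross-validated SVM accuracy as the fitness function. The dataset, penalty weight, and iteration budget are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # one of the six disease domains
rng = np.random.default_rng(0)

def fitness(mask):
    """Score a feature subset by cross-validated SVM accuracy,
    lightly penalising larger subsets (illustrative trade-off)."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.mean()

# hill-climbing over binary feature masks: a stand-in for HyGKS-SAO's
# exploration/exploitation balance
best = rng.random(X.shape[1]) < 0.5
best_fit = fitness(best)
for _ in range(50):
    cand = best.copy()
    flip = rng.integers(X.shape[1])          # explore: toggle one feature
    cand[flip] = ~cand[flip]
    f = fitness(cand)
    if f > best_fit:                         # exploit: keep improvements
        best, best_fit = cand, f
print(best.sum(), "features selected, fitness", round(best_fit, 3))
```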
{"title":"Feature selection using hybridized Genghis Khan Shark with snow ablation optimization technique for multi-disease prognosis","authors":"Ruqsar Zaitoon , Shaik Salma Asiya Begum , Sachi Nandan Mohanty , Deepa Jose","doi":"10.1016/j.ibmed.2025.100249","DOIUrl":"10.1016/j.ibmed.2025.100249","url":null,"abstract":"<div><div>The exponential growth in medical data and feature dimensionality presents significant challenges in building accurate and efficient diagnostic models. High-dimensional datasets often contain redundant or irrelevant features that degrade classification performance and increase computational burden. Feature selection (FS) is therefore a critical step in medical data analysis to enhance model accuracy and interpretability. While many recent FS techniques rely on optimization algorithms, tuning their parameters and avoiding early convergence remain major challenges. This study introduces a novel hybrid optimization technique—Hybridized Genghis Khan Shark with Snow Ablation Optimization (HyGKS-SAO)—to identify the most informative features for multi-disease classification. The raw medical datasets are first pre-processed using a Tanh-based normalization method. The HyGKS-SAO algorithm then selects optimal features, balancing exploration and exploitation effectively. Finally, a multi-kernel support vector machine (SVM) is employed to classify diseases based on the selected features. The proposed framework is evaluated on six publicly available medical datasets, including breast cancer, diabetes, heart disease, stroke, lung cancer, and chronic kidney disease. Experimental results demonstrate the effectiveness of the proposed method, achieving 98 % accuracy, 97.99 % MCC, 96.31 % PPV, 97.35 % G-mean, 98.03 % Kappa Coefficient, and a low computation time of 50 s, outperforming several state-of-the-art approaches.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100249"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of artificial intelligence in detection, classification, and prognostication of osteosarcoma utilizing different assessment techniques: a systematic review
Pub Date: 2025-01-01 | DOI: 10.1016/j.ibmed.2025.100250
Zhina Mohamadi , Paniz Partovifar , Helia Ahmadzadeh , Elmira Ali Ahmadi , Ali Ghanbari , Sina Feyzipour , Fatemeh Atefat , Nazanin Jahanpeyma , Fatemeh Haghighi asl , Armin Zarinkhat , Narges Sharbatdaran , Narges Hosseinzadeh taher , Mobina Sedighi , Fatemeh Aghajafari
Introduction
Osteosarcoma (OS) is the most common primary bone cancer, particularly in individuals aged 0–19, and is classified into different stages. Early diagnosis improves survival, informs prognosis and treatment decisions, and enables limb-sparing surgery. AI, in particular machine learning (ML) and deep learning (DL), helps analyze large datasets, identify biomarkers, predict prognosis, and personalize treatment. AI has the potential to improve evaluation procedures, such as the imaging and pathology approaches used in OS diagnosis, prognosis, and treatment. This study systematically examines AI's synergistic role with conventional evaluation techniques in OS management: improving prognostication, predicting therapy responses, and developing personalized treatment strategies.
Method
We performed an extensive search of several databases up to April 23, 2024, for studies applying machine learning (ML) and deep learning (DL), the main branches of AI used in the medical sciences, to the detection, classification, and prognostication of osteosarcoma. RAYYAN.ai was used to screen articles by title and abstract. We extracted data from the included articles and assessed risk of bias using the Cochrane tool for non-prognosis studies and the QUIPS tool for prognosis studies.
Results
The search of the four databases yielded 8129 articles. Of these, 8050 were excluded, and the remaining 78 articles, published from 2013 to 2024, were reviewed. Most articles showed a moderate or low risk of bias. The majority of the reviewed articles (n = 48) concerned the clinical aspects of osteosarcoma; of these, 23 assessed diagnosis and 25 assessed prognosis. Furthermore, 20 articles examined image analysis specifically, 4 examined image segmentation methods, and 16 introduced classifiers to distinguish osteosarcoma from other diseases.
Conclusion
AI improves biomarker identification, diagnostics, and prognostication of osteosarcoma through medical imaging and data integration. CNN-based models such as ResNet50 show high performance but face real-world limitations due to data heterogeneity and overfitting. This study explores AI's role in osteosarcoma diagnosis, emphasizing interdisciplinary collaboration, external validation, and real-world application challenges.
{"title":"Implementation of artificial intelligence in detection, classification, and prognostication of osteosarcoma utilizing different assessment techniques: a systematic review","authors":"Zhina Mohamadi , Paniz Partovifar , Helia Ahmadzadeh , Elmira Ali Ahmadi , Ali Ghanbari , Sina Feyzipour , Fatemeh Atefat , Nazanin Jahanpeyma , Fatemeh Haghighi asl , Armin Zarinkhat , Narges Sharbatdaran , Narges Hosseinzadeh taher , Mobina Sedighi , Fatemeh Aghajafari","doi":"10.1016/j.ibmed.2025.100250","DOIUrl":"10.1016/j.ibmed.2025.100250","url":null,"abstract":"<div><h3>Introduction</h3><div>Osteosarcoma (OS) is the most common primary bone cancer particularly in individuals aged 0–19, classified into different stages. Early diagnosis improves survival, Determination of prognosis and treatment based on it, and enables limb-sparing surgery. AI, in particular machine learning (ML) and deep learning (DL), helps analyze large datasets, identify biomarkers, predict prognosis, and personalize treatments by assessing the aforementioned features. AI has the potential to improve evaluation procedures, such as imaging and pathology approaches used in OS diagnosis, prognosis, and treatment. This study systematically examines AI's synergistic role with conventional evaluating techniques in OS treatment, improving prognostication, predicting therapy responses, and developing personalized treatment strategies.</div></div><div><h3>Method</h3><div>We performed an extensive search via several databases until April 23, 2024. Machine learning (ML), deep learning (DL) as the main branches of AI are often utilized in the medical sciences were searched for detection classification, and prognostication of osteosarcoma. RAYYAN.ai was used to screen the articles through the titles and abstracts. We conducted data extraction on the included articles and employed Cochrane and QUIPS tools to assess potential bias in the included non-prognosis and prognosis studies to evaluate their quality, respectively.</div></div><div><h3>Results</h3><div>There were 8129 articles obtained from the four databases following a thorough search. Of them 8050 ones were excluded and the remaining 78 articles published from 2013 to 2024 were reviewed. A large number of the articles indicated moderate and low risk of bias as a result of the risk of bias assessment. The majority of the articles that were reviewed (n = 48) concerned the clinical aspects of osteosarcoma; of these, 23 and 25 studies assessed diagnosis and prognoses, respectively. Furthermore, 20 articles examined image analysis specifically, 4 examined image segmentation methods, and 16 introduced classifiers to identify osteosarcoma from other diseases.</div></div><div><h3>Conclusion</h3><div>AI improves biomarker identification, diagnostics, and prognosis of osteosarcoma through medical imaging and data integration. Models like ResNet50 and CNN show high performance but face real-world limitations due to data heterogeneity and overfitting. 
This study explores AI's role in osteosarcoma diagnosis, emphasizing interdisciplinary collaboration, external validation, and real-world application challenges.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100250"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144167924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Skin disease classification using transfer learning model and fusion strategy
Pub Date: 2025-01-01 | DOI: 10.1016/j.ibmed.2025.100271
YA-Ching Yang , Wu-Chun Chung , Chun-Ying Wu , Che-Lun Hung , Yi-Ju Chen
Inflammatory skin diseases often display overlapping visual features, making accurate diagnosis challenging. This study proposes a deep learning framework combining transfer learning, feature fusion, and adaptive ensemble strategies to improve dermatological image classification. Using MobileNetV3-Large as the backbone, expert-defined anatomical metadata and model-derived probabilities were fused to enrich diagnostic features. A fuzzy rank-based ensemble aggregated predictions across multiple regions of interest (ROIs), prioritizing classifier confidence dynamically. The approach achieved consistent performance across ROI settings, with F1-scores reaching 0.8. These findings demonstrate that integrating anatomical context with deep learning enhances the interpretability and diagnostic utility of automated dermatological systems.
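The abstract does not specify the exact fuzzy rank rule; the sketch below implements one common style of fuzzy rank fusion, in which confident per-ROI predictions incur small penalties and the class with the lowest summed penalty wins. It is a simple variant for illustration, not necessarily the paper's formulation.

```python
import numpy as np

def fuzzy_rank_fuse(prob_list):
    """Fuse per-ROI softmax outputs with a fuzzy-rank style rule:
    a Gaussian-like penalty 1 - exp(-(1 - p)^2 / 2) is small when the
    classifier is confident (p near 1) and large when it is not; the
    class with the lowest summed penalty across ROIs is predicted."""
    penalties = [1.0 - np.exp(-((1.0 - p) ** 2) / 2.0) for p in prob_list]
    return int(np.argmin(np.sum(penalties, axis=0)))

# three ROI classifiers, three hypothetical classes
roi_probs = [
    np.array([0.70, 0.20, 0.10]),
    np.array([0.55, 0.35, 0.10]),
    np.array([0.30, 0.60, 0.10]),   # one dissenting, less confident ROI
]
print(fuzzy_rank_fuse(roi_probs))   # -> 0: the confident ROIs dominate
```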
{"title":"Skin disease classification using transfer learning model and fusion strategy","authors":"YA-Ching Yang , Wu-Chun Chung , Chun-Ying Wu , Che-Lun Hung , Yi-Ju Chen","doi":"10.1016/j.ibmed.2025.100271","DOIUrl":"10.1016/j.ibmed.2025.100271","url":null,"abstract":"<div><div>Inflammatory skin diseases often display overlapping visual features, making accurate diagnosis challenging. This study proposes a deep learning framework combining transfer learning, feature fusion, and adaptive ensemble strategies to improve dermatological image classification. Using MobileNetV3-Large as the backbone, expert-defined anatomical metadata and model-derived probabilities were fused to enrich diagnostic features. A fuzzy rank-based ensemble aggregated predictions across multiple regions of interest (ROIs), prioritizing classifier confidence dynamically. The approach achieved consistent performance across ROI settings, with F1-scores reaching 0.8. These findings demonstrate that integrating anatomical context with deep learning enhances the interpretability and diagnostic utility of automated dermatological systems.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"12 ","pages":"Article 100271"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144563717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI speechbots and 3D segmentations in virtual reality improve radiology on-call training in resource-limited settings
Pub Date: 2025-01-01 | Epub Date: 2025-03-29 | DOI: 10.1016/j.ibmed.2025.100245
Yusuf Alibrahim , Muhieldean Ibrahim , Devindra Gurdayal , Muhammad Munshi
Objective
Evaluate the use of large-language model (LLM) speechbot tools and deep learning-assisted generation of 3D reconstructions when integrated in a virtual reality (VR) setting to teach radiology on-call topics to radiology residents.
Methods
Three first-year radiology residents in Guyana were enrolled in an 8-week radiology course focused on preparation for on-call duties. The course was delivered via VR headsets running custom software that integrated LLM-powered speechbots trained on imaging reports and 3D reconstructions segmented with the help of a deep learning model. Each session focused on a specific radiology area, employing a didactic and case-based learning approach enhanced with 3D reconstructions and an LLM-powered speechbot. After each session, residents reassessed their knowledge and provided feedback on their VR and LLM-powered speechbot experiences.
Results/discussion
Residents found the 3D reconstructions, segmented semi-automatically by deep learning algorithms, and the AI-driven self-learning via speechbot highly valuable. The 3D reconstructions, especially in the interventional radiology session, were helpful, and the benefit was augmented by VR, where navigating the models is seamless and the perception of depth is pronounced. Residents also found conversing with the AI speechbot seamless and valuable in their post-session self-learning. The major drawback of VR was motion sickness, which was mild and improved over time.
Conclusion
AI-assisted VR radiology education could be used to develop new and accessible ways of teaching a variety of radiology topics in a seamless and cost-effective way. This could be especially useful in supporting radiology education remotely in regions which lack local radiology expertise.
{"title":"AI speechbots and 3D segmentations in virtual reality improve radiology on-call training in resource-limited settings","authors":"Yusuf Alibrahim , Muhieldean Ibrahim , Devindra Gurdayal , Muhammad Munshi","doi":"10.1016/j.ibmed.2025.100245","DOIUrl":"10.1016/j.ibmed.2025.100245","url":null,"abstract":"<div><h3>Objective</h3><div>Evaluate the use of large-language model (LLM) speechbot tools and deep learning-assisted generation of 3D reconstructions when integrated in a virtual reality (VR) setting to teach radiology on-call topics to radiology residents.</div></div><div><h3>Methods</h3><div>Three first year radiology residents in Guyana were enrolled in an 8-week radiology course that focused on preparation for on-call duties. The course, delivered via VR headsets with custom software integrating LLM-powered speechbots trained on imaging reports and 3D reconstructions segmented with the help of a deep learning model. Each session focused on a specific radiology area, employing a didactic and case-based learning approach, enhanced with 3D reconstructions and an LLM-powered speechbot. Post-session, residents reassessed their knowledge and provided feedback on their VR and LLM-powered speechbot experiences.</div></div><div><h3>Results/discussion</h3><div>Residents found that the 3D reconstructions segmented semi-automatically by deep learning algorithms and AI-driven self-learning via speechbot was highly valuable. The 3D reconstructions, especially in the interventional radiology session, were helpful and the benefit is augmented by VR where navigating the models is seamless and perception of depth is pronounced. Residents also found conversing with the AI-speechbot seamless and was valuable in their post session self-learning. The major drawback of VR was motion sickness, which was mild and improved over time.</div></div><div><h3>Conclusion</h3><div>AI-assisted VR radiology education could be used to develop new and accessible ways of teaching a variety of radiology topics in a seamless and cost-effective way. This could be especially useful in supporting radiology education remotely in regions which lack local radiology expertise.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100245"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143747483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BreastCare application: Moroccan Breast cancer diagnosis through deep learning-based image segmentation and classification
Pub Date: 2025-01-01 | Epub Date: 2025-05-10 | DOI: 10.1016/j.ibmed.2025.100254
Nouhaila Erragzi , Nabila Zrira , Safae Lanjeri , Youssef Omor , Anwar Jimi , Ibtissam Benmiloud , Rajaa Sebihi , Rachida Latib , Nabil Ngote , Haris Ahmad Khan , Shah Nawaz
Breast cancer remains a critical health problem worldwide, and increasing survival rates requires early detection: detecting cancer at an early stage increases the chances of survival. Accurate classification and segmentation are crucial for effective diagnosis and treatment. Although breast imaging modalities offer many advantages for the diagnosis of breast cancer, the interpretation of breast ultrasound images has long been a challenge for physicians and radiologists because of the risk of misdiagnosis. This article presents two approaches: Attention-DenseUNet for the segmentation task and EfficientNetB7 for the classification task, using the public datasets BUSI, UDIAT, BUSC, BUSIS, and STUHospital. These models are proposed in the context of Computer-Aided Diagnosis (CAD) for breast cancer detection. For segmentation, we obtained strong Dice coefficients on all datasets: 88.93%, 95.35%, 92.79%, 93.29%, and 94.24%, respectively. For classification, we achieved high accuracy on the four public datasets that include the two classes, benign and malignant (BUSI, UDIAT, BUSC, and BUSIS), with accuracies of 97%, 100%, 99%, and 94%, respectively. Overall, the results show that our proposed methods are considerably better than other state-of-the-art methods, which will help improve cancer diagnosis and reduce the number of false positives. Finally, we used the proposed approaches to create “Moroccan BreastCare”, an advanced breast cancer segmentation and classification software that automatically processes, segments, and classifies breast ultrasound images.
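The Dice coefficient reported above is straightforward to compute; a minimal sketch with toy masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# toy lesion masks: the prediction overlaps most of the ground truth
truth = np.zeros((64, 64), dtype=np.uint8); truth[20:40, 20:40] = 1
pred  = np.zeros((64, 64), dtype=np.uint8); pred[22:42, 22:42] = 1
print(round(dice(pred, truth), 4))   # 0.81: intersection 324, areas 400 each
```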
{"title":"BreastCare application: Moroccan Breast cancer diagnosis through deep learning-based image segmentation and classification","authors":"Nouhaila Erragzi , Nabila Zrira , Safae Lanjeri , Youssef Omor , Anwar Jimi , Ibtissam Benmiloud , Rajaa Sebihi , Rachida Latib , Nabil Ngote , Haris Ahmad Khan , Shah Nawaz","doi":"10.1016/j.ibmed.2025.100254","DOIUrl":"10.1016/j.ibmed.2025.100254","url":null,"abstract":"<div><div>Breast cancer remains a critical health problem worldwide. Increasing survival rates requires early detection. Accurate classification and segmentation are crucial for effective diagnosis and treatment. Although breast imaging modalities offer many advantages for the diagnosis of breast cancer, the interpretation of breast ultrasound images has always been a vital issue for physicians and radiologists due to misdiagnosis. Moreover, detecting cancer at an early stage increases the chances of survival. This article presents two approaches: Attention-DenseUNet for the segmentation task and EfficientNetB7 for the classification task using public datasets: BUSI, UDIAT, BUSC, BUSIS, and STUHospital. These models are proposed in the context of Computer-Aided Diagnosis (CAD) for breast cancer detection. In the first study, we obtained an impressive Dice coefficient for all datasets, with scores of 88.93%, 95.35%, 92.79%, 93.29%, and 94.24%, respectively. In the classification task, we achieved a high accuracy using only four public datasets that include the two classes benign and malignant: BUSI, UDIAT, BUSC, and BUSIS, with an accuracy of 97%, 100%, 99%, and 94%, respectively. Generally, the results show that our proposed methods are considerably better than other state-of-the-art methods, which will undoubtedly help improve cancer diagnosis and reduce the number of false positives. Finally, we used the suggested approaches to create “Moroccan BreastCare”, an advanced breast cancer segmentation and classification software that automatically processes, segments, and classifies breast ultrasound images.</div></div>","PeriodicalId":73399,"journal":{"name":"Intelligence-based medicine","volume":"11 ","pages":"Article 100254"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143943162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}