Capsule-augmented deep learning architectures for mental health detection from social media text
Pub Date: 2026-03-01 | Epub Date: 2025-11-24 | DOI: 10.1016/j.ibmed.2025.100319
Faheem Ahmad Wagay, Jahiruddin
Mental health detection from social media text has attracted growing research attention due to the global rise in mental health concerns. Traditional deep learning models, such as Bidirectional Long Short-Term Memory (BiLSTM) networks and hybrid Convolutional BiLSTM (Conv-BiLSTM) architectures, have demonstrated strong performance in text classification tasks. However, these models often struggle to capture the hierarchical and spatial relationships that are intrinsic to linguistic data. To address this limitation, this study investigates the integration of capsule networks with BiLSTM and Conv-BiLSTM architectures for mental health detection. Leveraging a real-world Reddit corpus, we conduct extensive experiments comparing baseline BiLSTM and Conv-BiLSTM models with their capsule-enhanced counterparts. Furthermore, we explore the role of advanced loss functions, such as focal loss and contrastive loss, in addressing class imbalance and mitigating boundary blurring among semantically overlapping disorders. Our findings indicate that incorporating capsule layers significantly strengthens feature representation, leading to notable improvements in accuracy and F1-score across multiple mental health categories. The study focuses on six key disorders, including depression, anxiety, borderline personality disorder (BPD), and bipolar disorder. In addition, model interpretability is enhanced using Local Interpretable Model-agnostic Explanations (LIME), which highlights the critical linguistic features driving predictions, thereby improving transparency and reliability in mental health evaluations.
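The capsule and loss-function components are described only at a high level; as one concrete anchor, here is a minimal multi-class focal-loss sketch in PyTorch. The focusing parameter gamma and the optional per-class weights alpha are illustrative assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: down-weights well-classified examples so
    training focuses on hard (often minority-class) posts.
    logits: (N, C) raw scores; targets: (N,) class indices."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt
    if alpha is not None:                     # optional per-class weights
        loss = loss * alpha.gather(0, targets)
    return loss.mean()
```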
STViTDA-Net: An explainable transformer-based framework with STGAN-ViT-MAE and deformable attention for multi-class skin cancer classification
Pub Date: 2026-03-01 | Epub Date: 2026-01-22 | DOI: 10.1016/j.ibmed.2026.100350
Ravula Jyothsna, K. Prasanna, U. Moulali, V. Surya Narayana Reddy, D. Ramya Krishna, T. Praveen Kumar
Skin cancer continues to pose a major global health challenge, and its early identification is essential for improving patient outcomes. Traditional diagnostic practices rely heavily on clinician expertise and manual interpretation of dermoscopic images, making the process subjective, inconsistent, and time-consuming. To address these limitations, this work introduces STViTDA-Net, an explainable transformer-based framework designed for fast, objective, and scalable multi-class skin cancer classification. The model integrates three key components: STGAN for class-balanced dermoscopic image augmentation, ViT-MAE for robust hierarchical feature learning through masked patch reconstruction, and a Deformable Attention Transformer Encoder that adaptively focuses on irregular lesion boundaries and subtle spatial variations. Preprocessing with Error Level Analysis (ELA) enhances fine-grained diagnostic cues, while Grad-CAM provides interpretable heatmaps that highlight the regions influencing the model's predictions. Unlike manual dermoscopic evaluation, STViTDA-Net performs end-to-end inference within milliseconds and delivers consistent, expert-independent predictions supported by visual explanations. When evaluated on the ISIC2019 dataset comprising nine lesion categories, the model achieves 99.35 % accuracy, 99.0 % precision, 99.5 % recall, 99.2 % F1-score, and 99.2 % AUC-ROC, surpassing existing CNN and transformer baselines. By unifying class-balanced augmentation, adaptive feature encoding, deformable attention, and explainable outputs, STViTDA-Net establishes a powerful and efficient solution for automated dermatological diagnosis.
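ELA is a standard forensic-style preprocessing recipe; a plausible Pillow sketch follows. The JPEG re-save quality of 90 is an assumed parameter, not necessarily what the paper uses.

```python
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(img: Image.Image, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and amplify the pixel-wise difference;
    regions that re-compress unevenly (fine lesion detail) stand out."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    diff = ImageChops.difference(img.convert("RGB"), Image.open(buf))
    max_diff = max(mx for _, mx in diff.getextrema()) or 1  # avoid /0
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)
```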
Cardiac Magnetic Resonance-to-Computed Tomography Angiography image conversion using diffusion models for Transcatheter Aortic Valve Implantation planning
Pub Date: 2026-03-01 | Epub Date: 2025-12-19 | DOI: 10.1016/j.ibmed.2025.100335
Carmen Guadalupe Colin-Tenorio, Agnes Mayr, Christian Kremser, Markus Haltmeier, Enrique Almar-Munoz
Introduction:
Transcatheter Aortic Valve Implantation (TAVI) has become the preferred method for treating severe aortic stenosis, especially in patients who are unsuitable for traditional surgery. Typically, preoperative imaging for TAVI involves contrast-enhanced Computed Tomography Angiography (CTA). However, for patients with contraindications to contrast agents, Cardiac Magnetic Resonance imaging (CMR) is a viable alternative, albeit with limitations in visualizing calcifications.
Methods:
This study explores the application of diffusion models for contrast-free CMR-to-CTA image conversion, avoiding both contrast agents and ionizing radiation. We developed a pipeline incorporating Denoising Diffusion Probabilistic Models (DDPMs) and Stochastic Differential Equation (SDE) models to synthesize CTA-equivalent images from CMR scans. We evaluated this approach on an in-house dataset of 39 paired CTA and CMR scans. The training process required coregistration of both modalities, which we achieved by rigid registration using the segmented aorta masks.
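The abstract names rigid registration on segmented aorta masks but not the toolkit; the sketch below shows how such a step might look in SimpleITK, with the metric and optimizer choices being assumptions rather than the authors' settings.

```python
import SimpleITK as sitk

def rigid_register(fixed_mask: sitk.Image, moving_mask: sitk.Image):
    """Rigid (Euler 3D) registration of two segmented aorta masks;
    the resulting transform can then be applied to the full volumes."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()                 # assumed metric choice
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    init = sitk.CenteredTransformInitializer(
        fixed_mask, moving_mask, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(sitk.Cast(fixed_mask, sitk.sitkFloat32),
                       sitk.Cast(moving_mask, sitk.sitkFloat32))
```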
Results:
Our results show that the synthesized CTA images maintain high fidelity to the actual scans. This is quantitatively supported by a mean Structural Similarity Index Measure (SSIM) of 0.8 and a Peak Signal-to-Noise Ratio (PSNR) of 22 dB using a conditional SDE model with Predictor-Corrector (PC) sampling, indicating strong structural preservation and low reconstruction error. However, the model occasionally fails to accurately detect valve calcifications, likely due to limitations in capturing subtle pathological details that are not visually discernible in CMR images.
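For reference, the two reported fidelity metrics can be computed per slice with scikit-image; this is a generic sketch assuming 2-D arrays on a shared intensity scale, not the paper's evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def fidelity_metrics(real_cta: np.ndarray, synth_cta: np.ndarray):
    """SSIM and PSNR between a real CTA slice and its synthesized twin."""
    drange = float(real_cta.max() - real_cta.min())
    ssim = structural_similarity(real_cta, synth_cta, data_range=drange)
    psnr = peak_signal_noise_ratio(real_cta, synth_cta, data_range=drange)
    return ssim, psnr
```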
Conclusion:
Diffusion models used to synthesize CTA images from CMR datasets achieve high accuracy, providing a contrast-free alternative for TAVI planning and potential insights into valvular calcification patterns. However, accurate visualization of valve calcification occasionally fails, and larger datasets are desirable for validation.
Advancing glaucoma diagnosis: Multi-modal deep learning with vision transformer architectures
Pub Date: 2026-03-01 | Epub Date: 2026-01-31 | DOI: 10.1016/j.ibmed.2026.100355
Vaibhav C. Gandhi, Priyesh P. Gandhi, Ali K. Abdul Raheem, Yasser Taha Alzubaidi, Kabul Khudaybergenov, Mohammad Khishe
Glaucoma is a leading cause of irreversible blindness; it develops without symptoms and is often detected only once the disease is severe. Existing diagnostic models that rely on single-modality imaging and convolutional neural networks (CNNs) suffer from dependence on local features, limited interpretability, and insufficient accuracy. This paper proposes a multi-modal deep learning model that combines retinal fundus images and optical coherence tomography (OCT) scans using Vision Transformer (ViT) networks to improve the detection and progression analysis of glaucoma. In this work, multi-modal refers to architectural and representational fusion through a hybrid Vision Transformer design, not concurrent multi-sensor data acquisition. Structural and contextual information shared across modalities enables the framework to capture subtler pathological changes than CNN baselines. Benchmark experiments show that the proposed model achieves an accuracy of 94.5 % and an AUC-ROC of 91.7 %, outperforming VGG16, ResNet-50, and InceptionV3. These findings highlight the promise of transformer-based multi-modal approaches for earlier glaucoma detection and more practical, interpretable clinical decision support.
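The fusion itself happens inside a hybrid ViT; as a simplified stand-in, a late-fusion classification head over two pre-computed modality embeddings could look like the sketch below. The embedding dimensions, hidden size, and class count are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Late fusion of fundus and OCT ViT embeddings; a simplified
    stand-in for the paper's hybrid architectural fusion."""
    def __init__(self, dim_fundus=768, dim_oct=768, n_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(dim_fundus + dim_oct, 256),
            nn.GELU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, z_fundus, z_oct):
        # Concatenate the per-modality [CLS] embeddings, then classify.
        return self.classifier(torch.cat([z_fundus, z_oct], dim=-1))

head = FusionHead()
logits = head(torch.randn(4, 768), torch.randn(4, 768))  # shape (4, 2)
```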
Advanced AI framework for accurate detection and classification of brain tumours from MRI images
Pub Date: 2026-03-01 | Epub Date: 2026-01-19 | DOI: 10.1016/j.ibmed.2026.100348
M. Rajesh, K. Swaminathan, K. Vengatesan, Usha Moorthy, Sathishkumar Veerappampalayam Easwaramoorthy
Brain tumours adversely affect patient outcomes owing to their intricacy and the difficulties associated with diagnosis. The accuracy and timeliness of diagnosis are hindered by the subjectivity and unpredictability inherent in manual magnetic resonance imaging (MRI) interpretation. We present novel research on artificial intelligence systems capable of detecting, segmenting, and categorising brain cancers utilising MRI data, which may assist in addressing these issues. The system utilises advanced convolutional neural network (CNN) designs and unique explainability methods; it is designed for application in therapeutic and evidential contexts. This approach addresses deficiencies in cancer categorisation, differentiation, and AI interpretability, hence enhancing the accuracy and reliability of diagnosis. The method's efficacy and practical utility were evidenced through validation on extensive MRI datasets encompassing gliomas, meningiomas, pituitary tumours, and healthy controls. Such an AI-driven diagnostic tool can improve clinical decision-making, reduce diagnostic error rates, expedite therapy initiation, and improve patient outcomes.
Quantum computing in medical diagnostics and treatment: A systematic review of trends, challenges and future directions
Pub Date: 2026-03-01 | DOI: 10.1016/j.ibmed.2026.100356
Najnin Sultana Shirin, Md Mehedi Hasan, Md Kishor Morol, Nafiz Fahad, Md Tanzib Hosain, Md Jakir Hossen, Dip Nandi
The growing digitalization of medical diagnostics has produced massive amounts of complicated data, including medical imaging, genomic sequences, clinical text, and real-time patient monitoring. Classical machine learning (CML) has achieved remarkable success in evaluating such data, but its computational restrictions hinder scalability and efficiency when dealing with high-dimensional biomedical problems. Quantum machine learning (QML) combines the principles of quantum computing (QC) with advanced learning algorithms to offer a transformative paradigm for digital healthcare. This paper provides a systematic overview of QML foundations, including quantum data encoding (QDC), variational quantum circuits (VQC), kernel methods, and hybrid quantum-classical models. It also covers their applications in medical imaging, genomics, natural language processing (NLP) for electronic health records, drug discovery, and healthcare security. We present comparative insights between classical and quantum approaches, highlighting classical bottlenecks such as slow processing of high-dimensional data, limited scalability, and inefficiency in complex optimization problems. The review also emphasizes emerging directions toward quantum-based personalized digital healthcare. By combining medical science with quantum computing, QML has the potential to revolutionize precision diagnostics, treatment optimization, and healthcare data security. This study provides a valuable resource for those interested in quantum computing and for researchers who want to stay current with this fast-growing area.
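To make the VQC concept concrete, here is a minimal two-qubit variational circuit in PennyLane; it is a generic textbook example, not code from any study covered by the review.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def vqc(features, weights):
    """Angle-encode two classical features, apply a trainable entangling
    layer, and read out an expectation value usable as a classifier score."""
    qml.AngleEmbedding(features, wires=[0, 1])
    qml.BasicEntanglerLayers(weights, wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

weights = np.random.uniform(0, np.pi, (1, 2), requires_grad=True)
print(vqc(np.array([0.3, 1.2]), weights))  # expectation in [-1, 1]
```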
The second-generation 3D-Printed localization grid for MRI-guided interventional procedures
Pub Date: 2026-03-01 | Epub Date: 2026-01-06 | DOI: 10.1016/j.ibmed.2026.100343
Queenie T.K. Shea, Wing Ki Wong, Louis Lee
Patient-specific three-dimensional (3D)-printed magnetic resonance imaging localization grids (MR-Grids) have demonstrated feasibility as an assistive tool for image-guided interventional procedures. However, a limitation of conventional designs arises when the target lesion lies in an imaging plane where the discrete grid markers are not visible. To address this challenge, we proposed a patient-specific MR-Grid incorporating a contrast-filled flexible tubing system arranged in a criss-cross pattern, ensuring visibility across all imaging slices.
The proposed MR-Grid comprises two primary components: (1) a 3D-printed patient-specific scaffold designed to conform to individual anatomical contours, and (2) a contrast-filled flexible tubing system inserted into the grooves of the scaffold.
The MR-Grid was tested in an interventional procedure using a biopsy phantom containing MRI-visible lesions to validate its utility. The grid facilitated precise needle insertion by identifying the optimal entry point under MR guidance, demonstrating its potential to improve accuracy and efficiency in image-guided interventions.
The potential of artificial intelligence in advancing neuroscience: A systematic review of current applications and models
Pub Date: 2026-03-01 | Epub Date: 2025-12-27 | DOI: 10.1016/j.ibmed.2025.100338
Fatemeh Afroughi, SeyedAhmad SeyedAlinaghi, Pegah Mirzapour, Shabnam Shirdel, Zohal Parmoon, Mohammad Musa Khorshidi, Somaye Mansouri, Mahdi Sheykhi, Yusuf Popoola, Esmaeil Mehraeen
Introduction
Artificial intelligence (AI) is the simulation of human intelligence, in which machines solve problems in ways analogous to the human brain. AI and neuroscience are closely interrelated. In this study, a systematic review of current AI models and applications was conducted to assess the potential of AI in advancing neuroscience.
Methods
Relevant articles were selected based on a search in three reputable databases, including Web of Science, PubMed, and Scopus. Two independent researchers conducted the selection process in two stages.
Results
A total of 99 studies (2019–2024) met PRISMA criteria. Of these, 83 studies focused on specific brain disorders—most notably Alzheimer's disease (n = 26), stroke (n = 14), epilepsy (n = 7), and Parkinson's disease (n = 7)—while 22 addressed broader neuroscience applications. A range of AI methods were applied, including traditional machine learning techniques (e.g., Support Vector Machines (SVM), Random Forest) and deep learning approaches (e.g., Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs)), with several studies employing hybrid models. A comparative analysis of study designs revealed a heavy reliance on public datasets (e.g., the Alzheimer's Disease Neuroimaging Initiative (ADNI)) for Alzheimer's research, while studies on other disorders predominantly utilized private cohorts. Regarding validation, the majority of studies employed internal cross-validation strategies, with fewer utilizing independent external datasets to test generalizability.
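The internal-versus-external validation contrast noted above can be made concrete with a short scikit-learn sketch on synthetic data; the cohort names and model choice are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a neuroimaging feature matrix.
X, y = make_classification(n_samples=600, n_features=40, random_state=0)
X_dev, X_ext, y_dev, y_ext = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Internal validation: 5-fold cross-validation within the development cohort.
print("internal CV accuracy:", cross_val_score(clf, X_dev, y_dev, cv=5).mean())

# External validation: a single independent cohort held out entirely.
clf.fit(X_dev, y_dev)
print("external accuracy:", clf.score(X_ext, y_ext))
```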
Conclusion
The transformative potential of AI in advancing neuroscience lies in its ability to increase diagnostic accuracy, predict disease progression, and enhance imaging techniques. Future research should focus on refining AI methods to enhance generalizability and foster collaborations between AI practitioners and neuroscientists.
Emerging anomaly detection techniques for electronic health records: A survey
Pub Date: 2026-03-01 | Epub Date: 2026-01-21 | DOI: 10.1016/j.ibmed.2026.100349
Soumendra N. Bhanja, Haoran Niu, Yang Chen, Olufemi A. Omitaomu, Angela Laurio, Amber Trickey, Vijayalakshmi Sampath, Jonathan R. Nebeker
Background
Anomaly detection in electronic health records (EHRs) is a cornerstone of biomedical informatics, with direct implications for patient safety, clinical decision-making, and the prevention of healthcare fraud. Once guided primarily by simple rule-based methods, the field has advanced rapidly, driven by increased computing power, richer and more detailed health data, and the rise of machine learning (ML) and deep learning (DL) techniques. The objective of this paper is to provide a comprehensive overview of modern approaches to detecting anomalies in EHRs, outlining their strengths, limitations, and relevance to key healthcare challenges. We review traditional statistical methods alongside newer ML- and DL-based strategies and hybrid models, with particular attention to how these techniques support transparency and build clinical trust.
Methods
This paper presents a thorough, critical survey, conducted as a PRISMA-based systematic review, of the latest anomaly detection strategies for time-sequence data within electronic health record systems.
Results
We explore a broad spectrum of methodologies, including statistical models, supervised and unsupervised learning approaches, hybrid frameworks, and state-of-the-art ML-based techniques that collectively advance the precision and scalability of detecting anomalies in complex clinical datasets. In addition to mapping current capabilities, we address the enduring challenges that hinder widespread implementation and provide a forward-looking perspective on the future of anomaly detection in the data-rich landscape of modern healthcare.
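As one unsupervised representative of the families surveyed here, an Isolation Forest flags records whose feature vectors are isolated by unusually short random partitions; the toy data and contamination rate below are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy stand-in for per-encounter EHR features (labs, vitals, etc.).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(500, 8)),   # typical encounters
               rng.normal(6, 1, size=(10, 8))])   # injected anomalies

iso = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = iso.predict(X)                # -1 = anomaly, 1 = normal
print((labels == -1).sum(), "flagged encounters")
```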
Summary
The advancement in AI-based approaches is reported along with the basic principles of the individual approaches and their applicability. The increased availability of high-quality data, advances in DL methods, and enhanced computational power are leading to more frequent adoption of DL-based approaches. Emerging DL-based approaches that have been adapted from other domains or recently applied in the EHR domain are also discussed in detail. Although DL-based approaches can improve model predictions by incorporating comorbidities, their application is limited in low-frequency data domains (e.g., when the total available data remains in the single digits). Therefore, the user must carefully consider the application based on data availability.
Hybrid spatiotemporal feature fusion for robust lesion detection and tracking in breast ultrasound video data
Pub Date: 2026-03-01 | DOI: 10.1016/j.ibmed.2025.100330
Radwan Qasrawi, Suliman Thwib, Ghada Issa, Razan AbuGhoush, Hussein AlMasri, Marah Qawasmi, Nael Abu Halaweh
Background
Speckle noise, tissue deformation, low contrast, and frame inconsistencies limit the reliability of traditional breast lesion tracking approaches in ultrasound videos.
Objective
This study aims to develop a robust hybrid framework that integrates advanced image enhancement, deep learning-based detection, and spatiotemporal feature fusion for improved lesion detection and tracking in breast ultrasound video sequences.
Methods
We propose a two-phase computational framework. The first phase employs Contrast-Limited Adaptive Histogram Equalization (CLAHE) for local contrast enhancement, followed by a hybrid denoising strategy combining anisotropic diffusion and unsharp masking to suppress noise and preserve edge sharpness. In the second phase, lesion detection is performed using a YOLOv11-L model, fine-tuned on a curated dataset of annotated breast ultrasound images. For tracking, we utilize Kernelized Correlation Filtering (KCF) enhanced with a Hybrid Spatiotemporal Context (STC) representation. The system is evaluated on a dataset comprising 11,382 ultrasound images and 40 video sequences, with performance assessed using Intersection over Union (IoU), success rate, failure rate, and processing speed.
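A plausible OpenCV rendering of the enhancement stage is sketched below; note that Gaussian smoothing stands in for the paper's anisotropic-diffusion step (which stock OpenCV does not provide), and the CLAHE and sharpening parameters are assumptions.

```python
import cv2
import numpy as np

def enhance_frame(gray: np.ndarray) -> np.ndarray:
    """CLAHE for local contrast, then unsharp masking for edge sharpness.
    Expects a single-channel uint8 ultrasound frame."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast = clahe.apply(gray)
    blurred = cv2.GaussianBlur(contrast, (0, 0), sigmaX=2.0)
    # sharpened = 1.5 * contrast - 0.5 * blurred
    return cv2.addWeighted(contrast, 1.5, blurred, -0.5, 0)
```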
Results
The proposed framework achieved an IoU of 0.878 for benign lesions and 0.881 for malignant lesions. The integration of STC features and YOLO detection reduced tracking failure rates by over 25 % and improved success rates to 99.0 % for benign and 99.4 % for malignant lesions. The system processed 41–45 frames per second in real time.
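For reference, the reported IoU values follow the standard box-overlap definition; this helper is illustrative, not taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((10, 10, 50, 50), (12, 14, 48, 52)))  # ≈ 0.78
```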
Conclusions
Our framework provides an effective solution for real-time lesion detection and tracking in breast ultrasound videos. By enhancing both accuracy and reliability, it supports improved clinical decision-making in breast cancer diagnostics.