Background: Real-time phase-contrast magnetic resonance (RT-PCMR) imaging allows free-breathing assessment of blood flow across cardiac valves and vessels. However, the feasibility of free-breathing RT-PCMR on a mid-field (0.55T) MRI system has yet to be established. Aim: The primary objective of this study was to implement an RT-PCMR sequence using a dual-density golden-angle spiral readout with SENSE-based compressed sensing (CS) reconstruction on a 0.55T MRI system. The secondary objective was to evaluate the feasibility of this approach in an adult cohort comprising healthy volunteers and patients with cardiovascular disease. Materials and Methods: Data from 33 participants were included in the flow quantification analysis (healthy volunteers: n = 17, 9 females, mean age 30.4 ± 14.6 years; patients: n = 16, 11 females, mean age 45.9 ± 17.4 years), with breath-held (BH) segmented Cartesian PCMR used as the reference standard. Results: In volunteers, RT-PCMR showed good agreement for net flow, peak flow rate, and pulmonary-systemic flow ratio (Qp/Qs) without significant bias (p > 0.05), and slightly underestimated peak velocity [7.9% in the aorta and 8.6% in the main pulmonary artery (MPA)]. In patients, RT-PCMR slightly underestimated peak flow rate (aorta, 6.2%; MPA, 4.6%) and peak velocity (aorta, 12.7%; MPA, 10.4%). A sub-analysis of six patients scanned at both 0.55T and 3T showed close agreement between field strengths. Conclusions: These results demonstrate the feasibility of our RT-PCMR sequence on a commercial 0.55T system.
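As a generic illustration of the golden-angle view ordering named in this abstract (not the authors' sequence code), successive spiral interleaves are commonly rotated by the golden angle, 180° · (3 − √5) ≈ 137.51°, which spreads readouts nearly uniformly over time; a minimal Python sketch:

```python
import math

# Golden angle in degrees: 180 * (3 - sqrt(5)) ≈ 137.51
GOLDEN_ANGLE_DEG = 180.0 * (3.0 - math.sqrt(5.0))

def spiral_rotations(n_interleaves):
    """Rotation angle (degrees, wrapped to [0, 360)) of each
    successive golden-angle spiral interleave."""
    return [(i * GOLDEN_ANGLE_DEG) % 360.0 for i in range(n_interleaves)]
```

Any temporally contiguous window of interleaves then gives near-uniform angular coverage, which is what makes retrospective selection of the temporal window practical for real-time reconstruction.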
Title: Feasibility of Golden Angle Spiral Real-Time Phase Contrast MRI at 0.55T: A Single-Center Prospective Study.
Authors: Salman Pervaiz, Chong Chen, Yingmin Liu, Katherine Binzel, Kelvin Chow, Rizwan Ahmad, Yuchi Han, Orlando P Simonetti, Ning Jin, Juliet Varghese
Bioengineering 13(2). Pub Date: 2026-01-29. DOI: 10.3390/bioengineering13020166
Smart devices and multimodal biosignal systems, including electroencephalography (EEG/MEG), ECG-derived heart rate variability (HRV), and electromyography (EMG), increasingly supported by artificial intelligence (AI), are being explored to improve the assessment and longitudinal monitoring of mental health conditions. Despite rapid growth, the available evidence remains heterogeneous, and clinical translation is limited by variability in acquisition protocols, analytical pipelines, and validation quality. This systematic review synthesizes current applications, signal-processing approaches, and methodological limitations of biosignal-based smart systems for mental health monitoring. Methods: A PRISMA 2020-guided systematic review was conducted across PubMed/MEDLINE, Scopus, the Web of Science Core Collection, IEEE Xplore, and the ACM Digital Library for studies published between 2013 and 2026. Eligible records reported human applications of wearable/smart devices or multimodal biosignals (e.g., EEG/MEG, ECG/HRV, EMG, EDA/GSR, and sleep/activity) for the detection, monitoring, or management of mental health outcomes. After application of predefined inclusion/exclusion criteria, the reviewed literature clustered into six themes: depression detection and monitoring (37%), stress/anxiety management (18%), post-traumatic stress disorder (PTSD)/trauma (5%), technological innovations for monitoring (25%), brain-state-dependent stimulation/interventions (3%), and socioeconomic context (7%). Across modalities, common analytical pipelines included artifact suppression, feature extraction (time/frequency/nonlinear indices such as entropy and complexity), and machine learning/deep learning models (e.g., SVM, random forests, CNNs, and transformers) for classification or prediction. However, 67% of studies involved sample sizes below 100 participants, limited ecological validity, and lacked external validation; heterogeneity in protocols and outcomes constrained comparability.
Conclusions: Overall, multimodal systems demonstrate strong potential to augment conventional mental health assessment, particularly via wearable cardiac metrics and passive sensing approaches, but current evidence is dominated by proof-of-concept studies. Future work should prioritize standardized reporting, rigorous validation in diverse real-world cohorts, transparent model evaluations, and ethics-by-design principles (privacy, fairness, and clinical governance) to support translation into practice.
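The HRV indices that feed the analytical pipelines above are typically simple time-domain statistics over an RR-interval series; a self-contained sketch of two standard metrics, SDNN and RMSSD (generic definitions, not tied to any reviewed study):

```python
import math

def hrv_time_domain(rr_ms):
    """Return (SDNN, RMSSD) in milliseconds for a list of RR intervals.

    SDNN:  sample standard deviation of the RR intervals.
    RMSSD: root mean square of successive RR differences.
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd
```

Frequency-domain and nonlinear indices (entropy, complexity) mentioned above build on the same RR series but require spectral estimation or embedding steps beyond this sketch.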
Title: Smart Devices and Multimodal Systems for Mental Health Monitoring: From Theory to Application.
Authors: Andreea Violeta Caragață, Mihaela Hnatiuc, Oana Geman, Simona Halunga, Adrian Tulbure, Catalin J Iov
Bioengineering 13(2). Pub Date: 2026-01-29. DOI: 10.3390/bioengineering13020165
Pub Date: 2026-01-29. DOI: 10.3390/bioengineering13020161
Enes Bardakci, Didem Ozdemir Ozenen, Izzet Yavuz
Glass ionomer-based restorative materials are widely used in pediatric dentistry because of their chemical adhesion to tooth structure, ion-releasing capacity, and clinical handling advantages; however, their mechanical durability under simulated oral aging conditions remains a critical factor influencing long-term clinical performance. This in vitro study aimed to evaluate and compare the surface microhardness of three contemporary glass ionomer-based restorative materials (Beautifil Bulk Restorative, EQUIA Forte HT, and Fuji II LC) before and after thermocycling. A total of 90 disc-shaped specimens (10 mm in diameter and 2 mm in thickness) were prepared, with 30 samples allocated to each material group. Microhardness measurements were performed using the Vickers hardness test at baseline and after 10,000 thermocycling cycles between 5 °C and 55 °C to simulate intraoral aging. Results were expressed as the mean ± standard deviation, and statistical analyses were conducted using non-parametric tests. Thermocycling resulted in a statistically significant reduction in microhardness values for all tested materials (p < 0.05). Beautifil Bulk Restorative exhibited the highest microhardness values both before and after thermocycling, followed by Fuji II LC and EQUIA Forte HT, with significant differences observed among all groups (p < 0.001). Within the limitations of this study, Beautifil Bulk Restorative may be considered a favorable option for restorations in young permanent teeth, whereas EQUIA Forte HT, exhibiting lower microhardness values, may be more suitable for primary teeth, where physiological wear is expected.
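For context on the Vickers measurements above, the hardness number follows from the indenter load and the mean indentation diagonal via the standard relation HV = 2F·sin(68°)/d² ≈ 1.8544·F/d² (F in kgf, d in mm); a small sketch with hypothetical values, not data from the study:

```python
import math

def vickers_hardness(load_kgf, diagonal_mm):
    """Vickers hardness number: HV = 2 * F * sin(136°/2) / d**2,
    with load F in kgf and mean indentation diagonal d in mm."""
    return 2.0 * load_kgf * math.sin(math.radians(136.0 / 2.0)) / diagonal_mm ** 2

# Hypothetical reading: 300 gf (0.3 kgf) load, 50 µm (0.05 mm) mean diagonal.
hv = vickers_hardness(0.3, 0.05)
```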
Title: The Effect of Thermocycling on the Microhardness of Contemporary Glass Ionomer-Based Restorative Materials: An In Vitro Study.
Bioengineering 13(2). DOI: 10.3390/bioengineering13020161
Pub Date: 2026-01-29. DOI: 10.3390/bioengineering13020167
Aditya Dave, Amartya Dave, Issam D Moussa
The growing number of wearable electrocardiogram (ECG) users today, combined with the surge of artificial intelligence (AI) and machine learning (ML) in medical signal-processing, has led to a new age of wearable-enabled monitoring for cardiac conditions. With the development of advanced processing methods, wearables offer the opportunity to monitor and predict the probability of various cardiac conditions, from cardiac ischemia to arrhythmias, by collecting personalized data from the comfort of a user's home. Although such technology has not yet entered the market, research training AI and ML models specifically on wearable-based ECG data has grown significantly in the last decade. Despite this growing niche, few current articles review the applications of these techniques in wearable ECG technology. To fill this gap, this article first introduces the reader to the practical tools required to build models from ambulatory ECG, then synthesizes the state of the field across major cardiac condition use-cases, and finally highlights recurring limitations in the current literature and outlines the need to improve reliability if this technology is to be widely utilized. As a result, we aim to help readers who may otherwise be unfamiliar with the specifics of these tools and their applications to form an interpretation of the current capabilities of AI/ML in wearable ECGs and identify key steps required for improvement based on the most current research.
Title: The Evolving Role of Artificial Intelligence and Machine Learning in the Wearable Electrocardiogram: A Primer on Wearable-Enabled Prediction of Cardiac Dysfunction.
Bioengineering 13(2). DOI: 10.3390/bioengineering13020167
Pub Date: 2026-01-29. DOI: 10.3390/bioengineering13020163
René Seiger, Peter Fierlinger
Convolutional neural networks (CNNs) have been the standard for computer vision tasks, including applications in Alzheimer's disease (AD). Recently, Vision Transformers (ViTs) have been introduced and have emerged as a strong alternative to CNNs. A common precursor stage of AD is a syndrome called mild cognitive impairment (MCI). However, not all individuals diagnosed with MCI progress to AD. In this exploratory investigation, we aimed to assess whether a ViT can reliably classify converters versus non-converters. A transfer learning approach was used for model training by applying a pretrained ViT model, fine-tuned on the ADNI dataset. The cohort comprised 575 individuals (299 stable MCIs; 276 progressive MCIs who converted within 36 months) from whom axial T1-weighted MRI slices covering the hippocampal region were used as model inputs. Results showed an average area under the receiver operating characteristic curve (AUC-ROC) on the test set of 0.74 ± 0.02 (mean ± SD), an accuracy of 0.69 ± 0.03, a sensitivity of 0.65 ± 0.07, a specificity of 0.72 ± 0.06, and an F1-score for the progressive MCI class of 0.67 ± 0.04. These findings demonstrate that a ViT approach achieves reasonable accuracy for classifying AD converters vs. non-converters, though its generalizability and clinical utility require further validation.
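The AUC-ROC figures reported above have a direct probabilistic reading: the chance that a randomly chosen progressive-MCI case is scored higher than a randomly chosen stable case. A minimal sketch of that computation (illustrative only, not the authors' evaluation code):

```python
def auc_roc(labels, scores):
    """AUC-ROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```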
Title: Predicting Conversion from Mild Cognitive Impairment to Alzheimer's Disease Using a Vision Transformer and Hippocampal MRI Slices.
Bioengineering 13(2). DOI: 10.3390/bioengineering13020163
Pub Date: 2026-01-29. DOI: 10.3390/bioengineering13020160
Mária Ždímalová, Kristína Boratková, Viliam Sitár, Ľudovít Sebö, Viera Lehotská, Michal Trnka
Background/Objectives: The segmentation of three-dimensional radiological images constitutes a fundamental task in medical image processing for isolating tumors from complex datasets in computed tomography or magnetic resonance imaging. Precise visualization, volumetry, and treatment monitoring are enabled, which are critical for oncology diagnostics and planning. Volumetric analysis surpasses standard criteria by detecting subtle tumor changes, thereby aiding adaptive therapies. The objective of this study was to develop an enhanced, interactive Graphcut algorithm for 3D DICOM segmentation, specifically designed to improve boundary accuracy and 3D modeling of breast and brain tumors in datasets with heterogeneous tissue intensities. Methods: The standard Graphcut algorithm was augmented with a clustering mechanism (utilizing k = 2-5 clusters) to refine boundary detection in tissues with varying intensities. DICOM datasets were processed into 3D volumes using pixel spacing and slice thickness metadata. User-defined seeds were utilized for tumor and background initialization, constrained by bounding boxes. The method was implemented in Python 3.13 using the PyMaxflow library for graph optimization and pydicom for data transformation. Results: The proposed segmentation method outperformed standard thresholding and region growing techniques, demonstrating reduced noise sensitivity and improved boundary definition. An average Dice Similarity Coefficient (DSC) of 0.92 ± 0.07 was achieved for brain tumors and 0.90 ± 0.05 for breast tumors. These results were found to be comparable to state-of-the-art deep learning benchmarks (typically ranging from 0.84 to 0.95), achieved without the need for extensive pre-training. Boundary edge errors were reduced by a mean of 7.5% through the integration of clustering. Therapeutic changes were quantified accurately (e.g., a reduction from 22,106 mm3 to 14,270 mm3 post-treatment) with an average processing time of 12-15 s per stack. 
Conclusions: An efficient, precise 3D tumor segmentation tool suitable for diagnostics and planning is presented. This approach is demonstrated to be a robust, data-efficient alternative to deep learning, particularly advantageous in clinical settings where the large annotated datasets required for training neural networks are unavailable.
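The Dice Similarity Coefficient used above to score the segmentations is 2|A∩B|/(|A|+|B|) over the two binary masks; a minimal sketch for flattened masks (the generic definition, not the study's implementation):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) for two equal-length
    flat binary masks (iterables of 0/1); two empty masks score 1.0."""
    a, b = list(mask_a), list(mask_b)
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * intersection / total if total else 1.0
```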
Title: 3D Medical Image Segmentation with 3D Modelling.
Bioengineering 13(2). DOI: 10.3390/bioengineering13020160
Temporomandibular joint (TMJ) disorders represent chronic degenerative musculoskeletal conditions with a high prevalence in the general population and limited regenerative treatment options. Owing to the insufficient efficacy of current conservative and surgical therapies, there is a growing clinical need for biologically based regenerative approaches. Tissue engineering (TE), particularly scaffold-based strategies, has emerged as a promising avenue for TMJ regeneration. This systematic review analyzed preclinical in vivo studies investigating scaffold-based interventions for TMJ disc and osteochondral repair. A structured literature search of PubMed and Scopus databases identified 39 eligible studies. Extracted data included scaffold composition, use of cellular and bioactive components, animal models, and reported histological, radiological, and functional outcomes. Natural scaffolds, such as decellularized extracellular matrix and collagen-based hydrogels, demonstrated favorable biocompatibility and support for fibrocartilaginous regeneration, whereas synthetic materials including polycaprolactone, poly (lactic-co-glycolic acid), and polyvinyl alcohol provided superior mechanical stability and structural tunability. Cells were used in 17/39 studies (43%); quantitative improvements were variably reported across these studies. Bioactive molecule delivery, including transforming growth factor-β, histatin-1, and platelet-rich plasma, further enhanced tissue regeneration, while emerging drug- and gene-delivery approaches showed potential for modulating local inflammation. Despite encouraging results, the reviewed studies exhibited substantial heterogeneity in experimental design, outcome measures, and animal models, limiting direct comparison and translational interpretation. Scaffold-based approaches show preclinical promise but heterogeneity in design and incomplete quantitative reporting limit definitive conclusions. 
However, functional and biomechanical outcomes were inconsistently reported and rarely standardized, preventing robust conclusions regarding the relationship between structural regeneration and restoration of TMJ function. Future research should therefore emphasize standardized methodologies, long-term functional evaluation, and the use of clinically relevant large-animal models to facilitate translation toward clinical application.
Title: Scaffolds and Stem Cells Show Promise for TMJ Regeneration: A Systematic Review.
Authors: Miljana Nedeljkovic, Gvozden Rosic, Dragica Selakovic, Jovana Milanovic, Aleksandra Arnaut, Milica Vasiljevic, Nemanja Jovicic, Lidija Veljkovic, Pavle Milanovic, Momir Stevanovic
Bioengineering 13(2). Pub Date: 2026-01-29. DOI: 10.3390/bioengineering13020169
Pub Date: 2026-01-28 DOI: 10.3390/bioengineering13020151
Maria Giovanna Bianco, Camilla Calomino, Marianna Crasà, Alessia Cristofaro, Giulia Sgrò, Fabiana Novellino, Salvatore Andrea Pullano, Syed Kamrul Islam, Jolanda Buonocore, Aldo Quattrone, Andrea Quattrone, Rita Nisticò
Parkinson's disease (PD) is characterized by alterations in movement dynamics that are difficult to quantify with conventional clinical assessment. This study proposes an integrated approach combining graph-based kinematic analysis with explainable machine learning to identify digital biomarkers of Parkinsonian motor impairment. Kinematic signals were acquired using Xsens inertial sensors from 51 patients with PD and 53 healthy controls. For each participant, subject-specific kinematic networks were constructed by modeling inter-segment similarities through Jensen-Shannon divergence, from which global and local graph-theoretical metrics were extracted. A machine learning pipeline incorporating voting-based feature selection and XGBoost classification was evaluated using a nested cross-validation design. The model achieved robust performance (AUC = 0.87), and explainability analyses using SHAP identified a subset of 13 features capturing alterations in velocity, inter-segment connectivity, and network centrality. PD was characterized by increased positional variability, reduced distal limb velocity, and a redistribution of network centrality towards proximal body segments. These features were associated with clinical severity, confirming their physiological relevance. By integrating graph-theoretical modeling, explainable artificial intelligence, and machine learning methodology, this work provides a method for discovering quantitative biomarkers that capture alterations in motor coordination. These findings highlight the potential of ML and kinematic networks to support objective motor assessment in PD.
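The network-construction step described in the abstract can be sketched as follows. This is an illustrative toy version under stated assumptions, not the authors' pipeline: each body segment is assumed to contribute a 1-D kinematic signal, each signal is summarized as a histogram, and one minus the base-2 Jensen-Shannon divergence (which lies in [0, 1]) serves as the edge weight; node strength stands in here for the richer graph-theoretical metrics the study extracts.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions (base 2)."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        a = np.clip(a, eps, None)
        b = np.clip(b, eps, None)
        return np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def kinematic_network(signals, bins=32):
    """Build a similarity network from per-segment kinematic signals.

    signals: (n_segments, n_samples) array, e.g. angular velocities.
    Edge weight = 1 - JSD, so segments with similar signal
    distributions are connected strongly.
    """
    lo, hi = signals.min(), signals.max()
    hists = np.array([np.histogram(s, bins=bins, range=(lo, hi))[0]
                      for s in signals], dtype=float)
    n = len(hists)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w[i, j] = w[j, i] = 1.0 - js_divergence(hists[i], hists[j])
    strength = w.sum(axis=1)  # simplest centrality: node strength
    return w, strength
```

Two segments with identical signal distributions get edge weight 1, while dissimilar segments get weights closer to 0; graph metrics such as node strength can then be compared between groups.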
Similarity Gait Networks with XAI for Parkinson's Disease Classification: A Pilot Study. Bioengineering, 13(2). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12937932/pdf/
Pub Date: 2026-01-28 DOI: 10.3390/bioengineering13020154
Vincent Majanga, Ernest Mnkandla, Donatien Koulla Moulla, Sree Thotempudi, Attipoe David Sena
Recently, deep learning methods have seen major advancements and are widely preferred for medical image analysis. Clinically, deep learning techniques for cancer image analysis are among the main applications for early diagnosis, detection, and treatment, and segmentation of breast histology images is a key step towards diagnosing breast cancer. However, the use of deep learning methods for image analysis is constrained by challenging features of histology images, including poor image quality, complex microscopic tissue structures, topological intricacies, and boundary/edge inhomogeneity. These constraints also limit the number of images available for analysis. The U-Net model gained significant traction for its ability to produce high-accuracy results from very few input images, and many modifications of the U-Net architecture exist. This study therefore proposes the watershed encoder-decoder neural network (WEDN) to segment cancerous lesions in supervised breast histology images. Pre-processing of the supervised breast histology images via augmentation is introduced to increase the dataset size, and the augmented dataset is further enhanced and segmented into the region of interest. Enhancement methods such as thresholding, opening, dilation, and distance transform highlight foreground and background pixels while removing unwanted parts of the image. Connected component analysis then groups image pixels with similar intensity values and assigns each component a labeled binary mask. The watershed filling method is applied to these labeled binary mask components to separate and identify the edges/boundaries of the regions of interest (cancerous lesions). The resulting images are passed to the WEDN network for feature extraction and learning via training and testing. Residual convolutional block layers of the WEDN model are the learnable layers that extract the region of interest (ROI), i.e., the cancerous lesion. The method was evaluated on an augmented dataset of 3000 image-watershed mask pairs, with 2400 images used for training and 600 for testing. The proposed method achieved 98.53% validation accuracy, a 96.98% validation Dice coefficient, and a 97.84% validation intersection over union (IoU).
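The marker-based watershed idea behind this pre-processing chain (opening, distance transform, connected-component labeling, watershed) can be illustrated with a minimal SciPy sketch. This is a hedged toy, not the paper's implementation: the 3×3 structuring element and the 0.6 marker threshold on the distance map are arbitrary illustrative choices, and `watershed_ift` stands in for whatever watershed-filling variant the authors use.

```python
import numpy as np
from scipy import ndimage as ndi

def split_touching_nuclei(binary):
    """Separate touching foreground blobs with a marker-based watershed.

    binary: 2-D bool array (True = foreground).
    Returns (label image, number of markers found).
    """
    # Morphological opening removes small speckle noise.
    clean = ndi.binary_opening(binary, structure=np.ones((3, 3)))
    # Euclidean distance to background: peaks sit at blob centres.
    dist = ndi.distance_transform_edt(clean)
    # Sure-foreground markers: pixels well inside each blob
    # (0.6 is an illustrative threshold, not a tuned parameter).
    markers, n = ndi.label(dist > 0.6 * dist.max())
    # Watershed floods the inverted distance map (uint8 cost image),
    # so basins meet along the narrow neck between touching blobs.
    cost = np.uint8(255 * (1.0 - dist / (dist.max() + 1e-9)))
    labels = ndi.watershed_ift(cost, markers)
    labels[~clean] = 0  # keep background at label 0
    return labels, n
```

Applied to two overlapping disks, the opening and distance transform yield one marker per lobe, and the watershed assigns the neck pixels to one basin or the other, producing two distinct labels where thresholding alone would see a single blob.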
Watershed Encoder-Decoder Neural Network for Nuclei Segmentation of Breast Cancer Histology Images. Bioengineering, 13(2). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12937963/pdf/
Pub Date: 2026-01-28 DOI: 10.3390/bioengineering13020152
Riaz Muhammad, Ezekiel Edward Nettey-Oppong, Muhammad Usman, Saeed Ahmed Khan Abro, Toufique Ahmed Soomro, Ahmed Ali
Gaming Disorder (GD) is becoming more widely acknowledged as a behavioral addiction characterized by impaired control and functional impairment. While resting-state impairments are well understood, the neurophysiological dynamics during active gameplay remain underexplored. This study identified task-based occipital EEG biomarkers of GD and assessed their diagnostic utility. Occipital EEG (O1/O2) data from 30 participants (15 with GD, 15 controls) collected during active mobile gaming were used in this study. Spectral, temporal, and nonlinear complexity features were extracted. Feature relevance was ranked using Random Forest, and classification performance was evaluated using Leave-One-Subject-Out (LOSO) cross-validation to ensure subject-independent generalization across five models (Random Forest, KNN, SVM, Decision Tree, ANN). The GD group exhibited paradoxical "spectral slowing" during gameplay, characterized by increased Delta/Theta power and decreased Beta activity relative to controls. Beta variability was identified as a key biomarker, reflecting altered attentional stability, while elevated Alpha power suggested potential neural habituation or sensory gating. The Decision Tree classifier emerged as the most robust model, achieving a classification accuracy of 80.0%. Results suggest distinct neurophysiological patterns in GD, where increased low-frequency power may reflect automatized processing or "Neural Efficiency" despite active task engagement. These findings highlight the potential of occipital biomarkers as accessible and objective screening metrics for Gaming Disorder.
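The spectral-feature extraction underlying the Delta/Theta/Alpha/Beta comparison can be sketched as follows. This is an illustrative toy under assumptions, not the study's pipeline: the band edges, the 250 Hz sampling rate, and the 2-second Welch window are conventional but arbitrary choices here.

```python
import numpy as np
from scipy.signal import welch

# Canonical EEG band edges in Hz (conventions vary by lab; illustrative).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_powers(x, fs=250.0):
    """Relative power per EEG band for one channel via Welch's method.

    x: 1-D signal from a single electrode (e.g. O1 or O2).
    Returns a dict mapping band name to its share of 1-30 Hz power.
    """
    # 2 s segments give 0.5 Hz frequency resolution.
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    broadband = (f >= 1) & (f < 30)
    total = pxx[broadband].sum()  # uniform grid, so the bin width cancels
    return {name: pxx[(f >= lo) & (f < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}
```

Because the four half-open bands partition 1-30 Hz, the relative powers sum to one; a strong alpha-range oscillation in the input dominates the "alpha" entry, which is the kind of feature the classifiers in the study consume.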
Neural Efficiency and Attentional Instability in Gaming Disorder: A Task-Based Occipital EEG and Machine Learning Study. Bioengineering, 13(2). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12938675/pdf/