Heart disease in pregnancy is an important health issue worldwide that requires precise care to improve health care for pregnant women and reduce the maternal mortality rate (MMR). Because registries play an important role in improving health care, we designed software as a first step toward a national registry of pregnant women with heart disease in Iran, classifying patients more effectively to reduce mismanagement. A Windows-based application written in C# was designed and implemented by a group of specialists: two experienced cardiologists, a skilled gynecologist, and a physician-programmer. Since the launch of the software, data for 500 pregnant women with heart disease have been entered. The most common types of heart disease, in order, were congenital heart disease, prosthetic heart valves, valvular disease, and cardiomyopathies. The software provides a comprehensive and efficient tool for managing patients with heart disease in pregnancy. Its use can help identify high-risk patients early, leading to better patient outcomes and contributing to the global goal of reducing MMR. Over time, the large and accurate dataset gathered on pregnant women with heart disease can also be analyzed with artificial intelligence.
Mahdi Kalani, Fateme Mahdikhoshouei, Parvin Bahrami, Amirreza Sajjadieh Khajouei, Minoo Movahedi, Shima Mehdipour, Marzieh Rezvani Habibabadi. "Designing a Software for Registry of Pregnant Women with Heart Disease in Iran and Preliminary Results." Journal of Medical Signals & Sensors, vol. 15, p. 21, 2025-07-10. DOI: 10.4103/jmss.jmss_43_24.
Background: Optical coherence tomography (OCT) is a pivotal imaging technique for the early detection and management of critical retinal diseases, notably diabetic macular edema and age-related macular degeneration. These conditions are significant global health concerns, affecting millions and leading to vision loss if not diagnosed promptly. Current methods for OCT image classification encounter specific challenges, such as the inherent complexity of retinal structures and considerable variability across different OCT datasets.
Methods: This paper introduces a novel hybrid model that integrates the strengths of convolutional neural networks (CNNs) and vision transformer (ViT) to overcome these obstacles. The synergy between CNNs, which excel at extracting detailed localized features, and ViT, adept at recognizing long-range patterns, enables a more effective and comprehensive analysis of OCT images.
Results: Our model achieves an accuracy of 99.80% on the OCT2017 dataset, but its standout feature is its parameter efficiency: it requires only 6.9 million parameters, significantly fewer than larger, more complex models such as Xception and OpticNet-71.
Conclusion: This efficiency underscores the model's suitability for clinical settings, where computational resources may be limited but high accuracy and rapid diagnosis are imperative.
Code Availability: The code for this study is available at https://github.com/Amir1831/ViT4OCT.
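The CNN-plus-ViT synergy described in this abstract can be sketched as a small hybrid network: a CNN stem extracts localized features, and its spatial grid is flattened into tokens for a transformer encoder that captures long-range patterns. The layer sizes, 14x14 token grid, and four-class head below are illustrative assumptions (the paper's exact 6.9-million-parameter architecture is not specified here), shown in PyTorch:

```python
import torch
import torch.nn as nn

class CNNViTHybrid(nn.Module):
    """Minimal CNN + transformer hybrid: a small CNN stem extracts
    localized features, and its spatial grid is flattened into tokens
    for a transformer encoder that captures long-range patterns."""

    def __init__(self, num_classes=4, dim=128, depth=4, heads=4):
        super().__init__()
        # CNN stem: 224x224 grayscale OCT slice -> 14x14 feature grid
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=4, padding=1),
        )
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 14 * 14 + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        feats = self.stem(x)                         # (B, dim, 14, 14)
        tokens = feats.flatten(2).transpose(1, 2)    # (B, 196, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        return self.head(self.encoder(tokens)[:, 0])  # CLS-token logits

logits = CNNViTHybrid()(torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```

The CLS-token readout mirrors standard ViT classification; swapping the stem depth and token dimension trades accuracy against parameter count, which is the efficiency axis the abstract emphasizes.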
Amirali Arbab, Aref Habibi, Hossein Rabbani, Mahnoosh Tajmirriahi. "From Image to Sequence: Exploring Vision Transformers for Optical Coherence Tomography Classification." Journal of Medical Signals & Sensors, vol. 15, p. 18, 2025-06-09. DOI: 10.4103/jmss.jmss_58_24.
Pub Date: 2025-06-09 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_49_24
Meenalosini Vimal Cruz, Suhaima Jamal, Sibi Chakkaravarthy Sethuraman
Brain-computer interface (BCI) technology has emerged as a groundbreaking innovation with profound implications across diverse domains, particularly in health care. By establishing a direct communication pathway between the human brain and external devices, BCI systems offer unprecedented opportunities for diagnosis, treatment, and rehabilitation, thereby reshaping the landscape of medical practice. However, despite its immense potential, the widespread adoption of BCI technology in clinical settings faces several challenges, including the need for robust signal acquisition and processing techniques and for optimized user training and adaptation. Overcoming these challenges is crucial to unlocking the full potential of BCI technology in health care and realizing its promise of personalized, patient-centric care. This review underscores the transformative potential of BCI technology in revolutionizing medical practice, offering a comprehensive analysis of medical BCI applications and their potential to transform patient care.
Meenalosini Vimal Cruz, Suhaima Jamal, Sibi Chakkaravarthy Sethuraman. "A Comprehensive Survey of Brain-Computer Interface Technology in Health care: Research Perspectives." Journal of Medical Signals & Sensors, vol. 15, p. 16, 2025-06-09. DOI: 10.4103/jmss.jmss_49_24.
Background: Deep learning has gained much attention in computer-assisted minimally invasive surgery in recent years. The application of deep-learning algorithms in colonoscopy can be divided into four main categories: surgical image analysis, surgical operations analysis, evaluation of surgical skills, and surgical automation. Analysis of surgical images by deep learning can be one of the main solutions for early detection of gastrointestinal lesions and for taking appropriate actions to treat cancer.
Method: This study investigates a simple and accurate deep-learning model for polyp detection. We address the challenge of limited labeled data through transfer learning and employ multi-task learning to perform both polyp classification and bounding box detection. Choosing an appropriate weight for each task in the total cost function is crucial to achieving the best results. Because existing datasets lack nonpolyp images, additional data collection was carried out. The proposed deep neural network was trained on the KVASIR-SEG and CVC-CLINIC datasets as polyp images, supplemented with nonpolyp images extracted from the LDPolyp video dataset.
Results: The proposed model demonstrated high accuracy, achieving 100% in polyp/non-polyp classification and 86% in bounding box detection. It also showed fast processing times (0.01 seconds), making it suitable for real-time clinical applications.
Conclusion: The developed deep-learning model offers an efficient, accurate, and cost-effective solution for real-time polyp detection in colonoscopy. Its performance on benchmark datasets confirms its potential for clinical deployment, aiding in early cancer diagnosis and treatment.
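The weighted multi-task cost the Method section emphasizes can be illustrated with a minimal NumPy sketch. The binary cross-entropy/L1 pairing, the masking of the box loss on nonpolyp frames (which have no ground-truth box), and the weight w_box=2.0 are all illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def bce(logit, target):
    """Binary cross-entropy on raw logits (polyp vs. non-polyp)."""
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def multitask_loss(cls_logit, box_pred, cls_target, box_target, w_box=2.0):
    """Total cost = classification loss + w_box * masked box loss.
    The box term is averaged only over frames that contain a polyp."""
    loss_cls = bce(cls_logit, cls_target)
    per_box = np.abs(box_pred - box_target).mean(axis=1)  # L1 per frame
    mask = cls_target                                     # 1 where polyp
    loss_box = (per_box * mask).sum() / max(mask.sum(), 1)
    return loss_cls + w_box * loss_box

cls_logit = np.array([2.0, -1.0])          # one polyp, one non-polyp frame
cls_target = np.array([1.0, 0.0])
box_pred = np.zeros((2, 4))
box_target = np.array([[0.1] * 4, [0.5] * 4])
total = multitask_loss(cls_logit, box_pred, cls_target, box_target)
print(round(total, 2))  # 0.42
```

Tuning w_box shifts the optimizer's attention between the two tasks, which is the "appropriate weight" trade-off the abstract highlights.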
Hajar Keshavarz, Zohreh Ansari, Hossein Abootalebian, Babak Sabet, Mohammadreza Momenzadeh. "Introducing a Deep Neural Network Model with Practical Implementation for Polyp Detection in Colonoscopy Videos." Journal of Medical Signals & Sensors, vol. 15, p. 17, 2025-06-09. DOI: 10.4103/jmss.jmss_23_24.
Pub Date: 2025-05-01 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_51_24
Farzaneh Ansari, Ali Neshasteh-Riz, Reza Paydar, Fathollah Mohagheghi, Sahar Felegari, Manijeh Beigi, Susan Cheraghi
Background: This study aimed to evaluate the effectiveness of clinical, dosimetric, and radiomic features from computed tomography (CT) scans in predicting the probability of heart failure in breast cancer patients undergoing chemoradiation treatment.
Materials and methods: We selected 54 breast cancer patients who received left-sided chemoradiation therapy and had a low risk of natural heart failure according to the Framingham score. We compared echocardiographic patterns and ejection fraction (EF) measurements before and 3 years after radiotherapy for each patient. Based on these comparisons, we evaluated the incidence of heart failure 3 years postchemoradiation therapy. For machine learning (ML) modeling, we first segmented the heart as the region of interest in CT images using a deep learning technique. We then extracted radiomic features from this region. We employed three widely used classifiers - decision tree, K-nearest neighbor, and random forest (RF) - using a combination of radiomic, dosimetric, and clinical features to predict chemoradiation-induced heart failure. The evaluation criteria included accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (area under the curve [AUC]).
Results: In this study, 46% of the patients experienced heart failure, as indicated by EF. A total of 873 radiomic features were extracted from the segmented area. Out of 890 combined radiomic, dosimetric, and clinical features, 15 were selected. The RF model demonstrated the best performance, with an accuracy of 0.85 and an AUC of 0.98. Patient age and V5 irradiated heart volume were identified as key predictors of chemoradiation-induced heart failure.
Conclusion: Our quantitative findings indicate that employing ML methods and combining radiomic, dosimetric, and clinical features to identify breast cancer patients at risk of cardiotoxicity is feasible.
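The feature-selection-plus-classifier pipeline described above can be sketched with scikit-learn. The synthetic arrays stand in for the 890 combined radiomic, dosimetric, and clinical features of the 54 patients, and univariate SelectKBest(k=15) is an assumed stand-in, since the paper's actual selection method for the 15 features is not stated here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: 54 patients x 890 combined features,
# binary heart-failure outcome (the real data are not public here).
rng = np.random.default_rng(0)
X = rng.normal(size=(54, 890))
y = rng.integers(0, 2, size=54)

pipe = make_pipeline(
    SelectKBest(f_classif, k=15),            # keep 15 of 890 features
    RandomForestClassifier(n_estimators=200, random_state=0),
)
# Selection happens inside each CV fold, avoiding selection leakage.
auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```

On random labels the AUC hovers near chance; the point of the sketch is the structure, in particular that feature selection is nested inside cross-validation rather than performed once on all 54 patients.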
Farzaneh Ansari, Ali Neshasteh-Riz, Reza Paydar, Fathollah Mohagheghi, Sahar Felegari, Manijeh Beigi, Susan Cheraghi. "Radiomics Analysis on Computed Tomography Images for Prediction of Chemoradiation-induced Heart Failure in Breast Cancer by Machine Learning Models." Journal of Medical Signals & Sensors, vol. 15, p. 14, 2025-05-01. DOI: 10.4103/jmss.jmss_51_24.
Background: Functional near-infrared spectroscopy (fNIRS) is a valuable neuroimaging tool that captures cerebral hemodynamics during various brain tasks. However, fNIRS data usually suffer from physiological artifacts. In fact, these artifacts are rich in valuable physiological information.
Methods: Leveraging this, our study presents a novel algorithm for extracting heart and respiratory rates (RRs) from fNIRS signals using a nonstationary, nonlinear filtering approach called cumulative curve fitting approximation. To enhance the accuracy of heart peak localization, a novel real-time method based on polynomial fitting was implemented, addressing the limitations of the 10 Hz temporal resolution in fNIRS. Simultaneous recordings of fNIRS, electrocardiogram (ECG), and respiration using a chest band strain gauge sensor were obtained from 15 subjects during a respiration task. Two-thirds of the subjects' data were used for the training procedure, employing a 5-fold cross-validation approach, while the remaining subjects were completely unseen and reserved for final testing.
Results: The results demonstrated a strong correlation (r > 0.92, Bland-Altman Ratio <6%) between heart rate variability derived from fNIRS and ECG signals. Moreover, the low mean absolute error (0.18 s) in estimating the respiration period emphasizes the feasibility of the proposed method for RR estimation from fNIRS data. In addition, paired t-tests showed no significant difference between respiration rates estimated from the fNIRS-based measurements and those from the respiration sensor for each subject (P > 0.05).
Conclusion: This study highlights fNIRS as a powerful tool for noninvasive extraction of heart and respiratory rates alongside brain signals. The findings pave the way for lightweight, cost-effective wearable devices that simultaneously monitor hemodynamic, cardiac, and respiratory activity, enhancing comfort and portability for health-monitoring applications.
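Sub-sample peak localization by polynomial fitting, as used here to work around the 10 Hz fNIRS sampling rate, can be sketched with a three-point parabolic fit around the sampled maximum. This is an illustrative stand-in for the paper's real-time method, not its exact algorithm:

```python
import numpy as np

def parabolic_peak(signal, fs):
    """Refine the discrete argmax of a sampled pulse to sub-sample
    precision by fitting a parabola through the peak sample and its
    two neighbours, so peak times are not quantized to 1/fs steps."""
    i = int(np.argmax(signal))
    i = min(max(i, 1), len(signal) - 2)   # keep both neighbours in range
    y0, y1, y2 = signal[i - 1], signal[i], signal[i + 1]
    denom = y0 - 2 * y1 + y2
    delta = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return (i + delta) / fs               # peak time in seconds

# A 1 Hz cosine pulse truly peaking at t = 0.53 s, sampled at only 10 Hz:
fs = 10.0
t = np.arange(0, 1, 1 / fs)
sig = np.cos(2 * np.pi * (t - 0.53))
print(round(parabolic_peak(sig, fs), 3))  # 0.529
```

The raw argmax would report 0.50 s (a 30 ms error at 10 Hz); the parabolic refinement recovers the peak to within about 1 ms, which matters when peak-to-peak intervals feed heart-rate-variability estimates.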
Navid Adib, Seyed Kamaledin Setarehdan, Shirin Ashtari Tondashti, Mahdis Yaghoubi. "Enhanced Joint Heart and Respiratory Rates Extraction from Functional Near-infrared Spectroscopy Signals Using Cumulative Curve Fitting Approximation." Journal of Medical Signals & Sensors, vol. 15, p. 15, 2025-05-01. DOI: 10.4103/jmss.jmss_48_24.
Pub Date: 2025-05-01 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_42_24
Monire Sheikh Hosseini, Hossein Rabbani
Background: Retinal imaging employs various modalities, each providing distinct perspectives on ocular structures. However, the integration of information from these modalities, which often have differing resolutions, requires effective image registration techniques. Existing retinal image registration methods typically rely on rigid or affine transformations, which may not adequately address the complexities of multimodal retinal images.
Method: This study introduces a nonrigid fuzzy image registration approach designed to align optical coherence tomography (OCT) images with fundus images. The method employs a fuzzy inference system (FIS) that uses vessel locations as key features for registration. The FIS applies specific rules to map points from the source image to the reference image, facilitating accurate alignment.
Results: The proposed method achieved a mean absolute registration error of 44.57 ± 39.38 µm in the superior-inferior orientation and 11.46 ± 10.06 µm in the nasal-temporal orientation. These results underscore the method's precision in aligning multimodal retinal images.
Conclusion: The nonrigid fuzzy image registration approach demonstrates robust and versatile performance in integrating multimodal retinal imaging data. Despite its straightforward implementation, the method effectively addresses the challenges of multimodal retinal image registration, providing a reliable tool for advanced ocular imaging analysis.
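A fuzzy rule base that maps source-image points using vessel landmarks can be sketched minimally as follows. The Gaussian "near" membership, the single rule template, and the weighted-average defuzzification are illustrative assumptions standing in for the paper's actual FIS rules:

```python
import numpy as np

def fuzzy_register(point, src_vessels, ref_vessels, sigma=20.0):
    """Map a source-image point with one fuzzy rule per vessel landmark:
    'IF the point is NEAR vessel i THEN displace it as vessel i is
    displaced.' Gaussian memberships weight the landmark displacements,
    and the weighted mean (defuzzification) gives a smooth nonrigid
    mapping. Illustrative stand-in, not the paper's exact rule base."""
    d = np.linalg.norm(src_vessels - point, axis=1)
    mu = np.exp(-(d / sigma) ** 2)        # membership of 'near vessel i'
    mu /= mu.sum()                        # normalize rule firing strengths
    disp = ref_vessels - src_vessels      # per-landmark displacement
    return point + mu @ disp              # defuzzify: weighted mean shift

src = np.array([[10.0, 10.0], [80.0, 90.0]])   # vessels in OCT projection
ref = src + np.array([[2.0, 1.0], [-3.0, 4.0]])  # same vessels in fundus
mapped = fuzzy_register(np.array([12.0, 11.0]), src, ref)
print(mapped)
```

Because memberships decay smoothly with distance, nearby landmarks dominate each point's displacement while distant ones contribute little, which is what makes the mapping nonrigid rather than a single global affine transform.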
Monire Sheikh Hosseini, Hossein Rabbani. "Nonrigid Multimodal Registration Based on Fuzzy Inference System for Retinal Image Registration." Journal of Medical Signals & Sensors, vol. 15, p. 13, 2025-05-01. DOI: 10.4103/jmss.jmss_42_24.
Pub Date: 2025-04-19 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_36_24
Mohammad Hossein Vafaie, Maryam Ansarian, Hossein Rabbani
Background: Optical coherence tomography (OCT) is a biomedical imaging technique used to achieve high-resolution images from human tissues in a noninvasive manner.
Methods: In this article, a practical approach is proposed for designing ultrahigh-resolution spectral-domain OCT (UHR SD-OCT) devices. First, the block diagram of a typical SD-OCT is described in detail. Second, the internal components of each arm are introduced and the key parameters of each component are highlighted. Third, the effects of these key parameters on the overall performance of the UHR SD-OCT are investigated comprehensively. Fourth, the most important requirements of a UHR SD-OCT are explained, and suitable optical equipment is selected for each arm based on these requirements. Fifth, the optical accessories and the electrical devices required for managing and controlling a UHR SD-OCT are briefly introduced.
Results: Performance of the proposed device is assessed through various simulations; finally, the implementation cost and challenges are investigated in detail.
Conclusions: Simulation results indicate that the proposed UHR SD-OCT has acceptable axial resolution and imaging depth; hence, it is a good candidate for use in retinal applications that require UHR imaging.
Design and Simulation of an Ultrahigh-resolution Spectral-domain Optical Coherence Tomography. Mohammad Hossein Vafaie, Maryam Ansarian, Hossein Rabbani. DOI: 10.4103/jmss.jmss_36_24. Journal of Medical Signals & Sensors, vol. 15, p. 12 (2025). PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063968/pdf/
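The axial resolution and imaging depth assessed in the OCT entry above follow from standard SD-OCT relations: the coherence length of a Gaussian source and the depth limit set by spectrometer sampling. A minimal sketch, where the numeric source parameters are illustrative assumptions, not the paper's design values:

```python
import math

def axial_resolution_um(center_wl_nm: float, bandwidth_nm: float) -> float:
    """Axial resolution in air for a Gaussian source:
    dz = (2*ln 2 / pi) * lambda0^2 / dlambda. Result in micrometers."""
    return (2 * math.log(2) / math.pi) * center_wl_nm**2 / bandwidth_nm / 1000.0

def imaging_depth_mm(center_wl_nm: float, bandwidth_nm: float, n_pixels: int) -> float:
    """Maximum imaging depth set by spectral sampling on an N-pixel camera:
    z_max = lambda0^2 * N / (4 * dlambda). Result in millimeters."""
    return center_wl_nm**2 * n_pixels / (4 * bandwidth_nm) / 1e6

# Illustrative: a broadband 840 nm source with 100 nm bandwidth,
# dispersed onto a 2048-pixel line-scan camera.
print(round(axial_resolution_um(840, 100), 2))   # → 3.11 (µm, in air)
print(round(imaging_depth_mm(840, 100, 2048), 2))  # → 3.61 (mm)
```

Under these assumed parameters the axial resolution in air is about 3.1 µm, in the ultrahigh-resolution regime the abstract targets; in tissue it improves further by the refractive index.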
Background: Clinical decisions for stroke treatments, such as thrombolytic drugs for ischemic strokes or anticoagulants for hemorrhagic strokes, rely on accurate diagnosis and severity assessment. Our study uses diffusion-weighted magnetic resonance imaging and Convolutional Neural Networks (CNNs) to differentiate healthy and stroke samples, classify stroke types, and predict severity, aiding in decision-making for stroke management.
Methods: We evaluated 143 patients: 85 with ischemic stroke and 58 with hemorrhagic stroke. For stroke diagnosis, we compared multimodal (apparent diffusion coefficient and diffusion-weighted imaging [DWI]) and single-modal (using separate images) preprocessing techniques. Our study introduced two models, Added CNN Layer-ResNet-50 (ACL-ResNet-50) and Added CNN Layer-MobileNetV1 (ACL-MobileNetV1), based on transfer learning (MobileNetV1 and ResNet-50), enhancing performance through reinforced layers. We compared our proposed models with a scenario in which only the final layer was replaced in ResNet-50 and MobileNetV1. Furthermore, we predicted National Institutes of Health Stroke Scale (NIHSS) scores in three ranges based on DWI images to gauge stroke severity. Evaluation criteria for the models included accuracy, sensitivity, specificity, and area under the curve (AUC).
Results: In stroke classification (normal, ischemic, and hemorrhagic), ACL-MobileNetV1 outperformed other models, achieving 98% accuracy, 99% sensitivity, 98% specificity, and 99% AUC. For assessing ischemic stroke severity using NIHSS ranges, ACL-ResNet-50 showed the optimal performance with an accuracy of 0.92, sensitivity of 0.84, specificity of 0.92, and AUC of 0.95.
Conclusion: Our study's proposed method effectively classified stroke type and severity based on multimodal MR images, potentially serving as a practical decision-support tool for stroke treatments.
Multi-classification Deep Learning Approach for Diagnosing Stroke Type and Severity Using Multimodal Magnetic Resonance Images. Sahar Felehgari, Payam Sariaslani, Sepideh Shamsizadeh, Saba Felehgari, Anahita Rajabi, Hiwa Mohammadi. DOI: 10.4103/jmss.jmss_37_24. Journal of Medical Signals & Sensors, vol. 15, p. 10 (2025). PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063969/pdf/
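The per-class sensitivity and specificity figures quoted in the stroke entry above are one-vs-rest quantities in a three-class problem. A minimal sketch of how they are derived from a confusion matrix, using hypothetical counts rather than the study's data:

```python
def one_vs_rest_metrics(cm, cls):
    """Sensitivity and specificity for one class of a square confusion
    matrix cm[true][pred], treating all other classes as the 'rest'."""
    n = len(cm)
    tp = cm[cls][cls]
    fn = sum(cm[cls][j] for j in range(n) if j != cls)
    fp = sum(cm[i][cls] for i in range(n) if i != cls)
    tn = sum(cm[i][j] for i in range(n) for j in range(n)
             if i != cls and j != cls)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for (normal, ischemic, hemorrhagic) — not the study's data.
cm = [[50, 1, 0],
      [2, 82, 1],
      [0, 2, 56]]
sens, spec = one_vs_rest_metrics(cm, 1)  # ischemic vs rest
print(round(sens, 3), round(spec, 3))  # → 0.965 0.972
```

Averaging these one-vs-rest values over the three classes gives the single sensitivity/specificity numbers typically reported for a multiclass classifier.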
Pub Date: 2025-04-19 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_29_24
Rahman Farnoosh, Karlo Abnoosian, Rasha Abbas Isewid
Background: The global increase in diabetes prevalence necessitates advanced diagnostic methods. Machine learning has shown promise in disease diagnosis, including diabetes.
Materials and methods: We used a dataset collected from the Medical City Hospital laboratory and the Specialized Center for Endocrinology and Diabetes at Al-Kindy Teaching Hospital in Iraq. This dataset includes 1000 physical examination samples from both male and female patients. The samples are categorized into three classes: diabetic (Y), nondiabetic (N), and predicted diabetic (P). The dataset contains twelve attributes and includes outlier data. Outliers in medical studies can result from unusual disease attributes. Therefore, consulting with a specialist physician to identify and handle these outliers using statistical methods is necessary. The main contribution of this study is the proposal of two hybrid models for diabetes diagnosis in two scenarios: (1) Scenario 1 (presence of outlier data): Hybrid Model 1 combines the K-medoids clustering algorithm with a Gaussian naive Bayes (GNB) classifier based on kernel density estimation (KDE) to handle outliers and (2) Scenario 2 (after removing outlier data): Hybrid Model 2 combines the K-means clustering algorithm with a GNB classifier based on KDE with suitable bandwidth. We performed principal component analysis to minimize dimensionality and evaluated the models using fivefold cross-validation.
Results: All experiments were conducted in identical settings. Our proposed hybrid models demonstrated superior performance in both scenarios (with outliers handled and with outliers removed) compared with the other machine-learning models in this study, including support vector machines (with radial-basis, polynomial, linear, and sigmoid kernel functions), decision trees (J48), and GNB classifiers for diabetes prediction. The average accuracy was 0.9743 for Scenario 1 with Hybrid Model 1 and 0.9867 for Scenario 2 with Hybrid Model 2. We also evaluated precision, sensitivity, and F1-score as performance metrics.
Conclusion: This study presents two hybrid models for diabetes diagnosis, demonstrating high accuracy in distinguishing between diabetic and nondiabetic patients and effectively handling outliers. The findings highlight the potential of machine-learning techniques for improving the early diagnosis and treatment of diabetes.
Two Machine-learning Hybrid Models for Predicting Type 2 Diabetes Mellitus. Rahman Farnoosh, Karlo Abnoosian, Rasha Abbas Isewid. DOI: 10.4103/jmss.jmss_29_24. Journal of Medical Signals & Sensors, vol. 15, p. 11 (2025). PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063970/pdf/
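The GNB-with-KDE component of the hybrid models above replaces the usual per-feature Gaussian likelihood with a kernel density estimate. A minimal sketch under the naive independence assumption, with toy two-feature data and a fixed bandwidth — both assumptions for illustration, not the authors' implementation:

```python
import math

def gaussian_kde(x, samples, bandwidth):
    """Gaussian-kernel density estimate at x from 1-D training samples."""
    k = sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
    return k / (len(samples) * bandwidth * math.sqrt(2 * math.pi))

def kde_naive_bayes(x_vec, class_data, bandwidth):
    """Classify a feature vector: for each class, multiply the class prior
    by the KDE likelihood of each feature (naive independence assumption)."""
    total = sum(len(rows) for rows in class_data.values())
    best, best_score = None, -1.0
    for label, rows in class_data.items():
        score = len(rows) / total  # class prior
        for j, x in enumerate(x_vec):
            score *= gaussian_kde(x, [r[j] for r in rows], bandwidth)
        if score > best_score:
            best, best_score = label, score
    return best

# Toy two-feature samples for 'Y' (diabetic) and 'N' (nondiabetic) classes.
data = {"Y": [(8.1, 30.2), (9.4, 33.0), (7.8, 31.5)],
        "N": [(5.0, 22.1), (4.7, 24.0), (5.3, 23.2)]}
print(kde_naive_bayes((8.5, 32.0), data, bandwidth=1.0))  # → Y
```

The bandwidth plays the role the abstract calls "suitable bandwidth": too small and each class density collapses onto its training points, too large and the classes blur together.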