Human Stress Classification Using Cardiovascular and Respiratory Data Based on Machine Learning Techniques.
Pub Date: 2025-08-06 | DOI: 10.4103/jmss.jmss_71_24
Mahdis Yaghoubi, Navid Adib, Abolfazl Rezaei Monfared, Shirin Ashtari Tondashti, Saeed Akhavan
Background: Stress, a widespread mental health concern, significantly impacts people's well-being and performance. This study proposes a novel approach to stress detection by fusing cardiovascular and respiratory data.
Methods: Fifteen participants underwent a mental stress induction task while their electrocardiogram (ECG) and respiration signals were recorded. A real-time peak detection algorithm was developed for ECG signal processing, and both time and frequency domain features were extracted from ECG and respiration signals. Various machine learning models, including Support Vector Machine, K-Nearest Neighbors, bagged decision trees, and random forests, were employed for classification, with accurate labeling achieved through the NASA-TLX questionnaire.
Results: The results demonstrate that combining respiration and cardiovascular features significantly enhances stress classification performance compared to using each modality alone, achieving an accuracy of 95.6% ±1.7%. Forward feature selection identifies key discriminative features from both modalities.
Conclusions: This study demonstrates the efficacy of multimodal physiological data integration for accurate stress detection, outperforming single-modality approaches and comparable studies in the literature. The findings highlight the potential of real-time monitoring systems in enhancing stress and health management.
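To make the classification stage concrete, here is a minimal Python (scikit-learn) sketch, assuming ECG and respiration features have already been extracted into a window-by-feature matrix with NASA-TLX-derived labels; the feature counts, selector settings, and SVM kernel are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): forward feature selection over combined
# ECG + respiration features, then SVM classification with cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))       # placeholder: e.g., 20 ECG + 10 respiration features
y = rng.integers(0, 2, size=200)     # placeholder: NASA-TLX-derived stress labels

base = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
selector = SequentialFeatureSelector(base, n_features_to_select=8,
                                     direction="forward", cv=5)
selector.fit(X, y)

X_sel = selector.transform(X)
acc = cross_val_score(base, X_sel, y, cv=5, scoring="accuracy")
print("selected feature indices:", np.flatnonzero(selector.get_support()))
print(f"cv accuracy on placeholder data: {acc.mean():.3f} +/- {acc.std():.3f}")
```

With real extracted features in place of the placeholders, the same pipeline reports which ECG and respiration features survive forward selection and the cross-validated accuracy of the fused feature set.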
A New Method for Dynamic Brain Connectivity Analysis Based on Tensor Decomposition in Tinnitus Using High-density Electroencephalogram in Source Domain.
Pub Date: 2025-08-06 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_75_24
Moein Bahman, Seyed Saman Sajadi, Iman Ghodrati Toostani, Bahador MakkiAbadi
Background: Functional connectivity (FC), defined as the statistical dependence among different brain regions, has been an effective tool for studying cognitive brain functions. Most existing FC-based research has relied on the premise that networks are temporally stationary. However, a small body of work supports the nonstationarity of FC, which can arise from cognitive functioning. There is still a gap in tracking the dynamics of FC to gain a deeper understanding of how brain networks form and adapt in response to therapeutic interventions, by identifying the change points that signify substantial shifts in network connectivity across participants.
Methods: The proposed approach is based on a tensor representation of the FC networks of the source signals of electroencephalogram (EEG) activity, yielding a multi-mode tensor. Analysis of variance was then used to investigate change points in the connectivity of brain activity in the source domain across task conditions, frequency bands, and subjects over time. High-density EEG signals (256 channels) were acquired from 30 tinnitus patients under visual (positive emotion induction) and transcranial direct current stimulation (tDCS) stimuli.
Results: The proposed method effectively identified significant brain connectivity change points, indicating enhanced effectiveness in capturing connectivity shifts compared with conventional methods. Findings in tinnitus patients suggest that visual stimulation alone may not significantly alter brain connectivity networks.
Conclusion: Based on the results, a combination of visual stimulation with simultaneous High-Definition tDCS is recommended, potentially informing optimal intervention strategies to enhance tinnitus treatment effectiveness.
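As a rough illustration of change-point screening on window-wise connectivity, the sketch below summarizes connectivity strength per window and compares before/after groups at each candidate point with a one-way ANOVA. It uses synthetic data and assumed tensor dimensions, window margins, and significance threshold; it is not the paper's tensor-decomposition pipeline.

```python
# Minimal sketch (placeholder data, assumed parameters): flag candidate connectivity
# change points where a one-way ANOVA over before/after groups is significant.
import numpy as np
from scipy.stats import f_oneway

n_subjects, n_windows, n_sources = 30, 40, 16
rng = np.random.default_rng(1)
# placeholder FC tensor: subjects x windows x sources x sources (e.g., coherence)
fc = rng.uniform(0.0, 1.0, size=(n_subjects, n_windows, n_sources, n_sources))

# summarize each window by mean connectivity strength per subject
strength = fc.mean(axis=(2, 3))                      # (subjects, windows)

change_points = []
for t in range(5, n_windows - 5):                    # skip edge windows
    before = strength[:, t - 5:t].mean(axis=1)       # per-subject mean before t
    after = strength[:, t:t + 5].mean(axis=1)        # per-subject mean after t
    f_stat, p = f_oneway(before, after)
    if p < 0.01:
        change_points.append((t, round(p, 4)))

print("candidate connectivity change points (window index, p):", change_points)
```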
A Nonlinear Method to Identify Seizure Dynamic Trajectory Based on Variance of Recurrence Rate in Human Epilepsy Patients Using EEG.
Pub Date: 2025-07-10 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_73_24
Morteza Farahi, Seyed Saman Sajadi, Fateme Karbasi, Seyed Sohrab Hashemi Fesharaki, Jafar Mehvari Habibabadi, Mohsen Reza Haidari, Amir Homayoun Jafari
Background: Surgery is a well-established treatment for drug-resistant epilepsy, but outcomes are often suboptimal, especially when no lesion is visible on preoperative imaging. A major challenge in determining the seizure's origin and spread is interpreting electroencephalogram (EEG) data. Accurately tracing the seizure's signal trajectory, given the brain's complex behavior, remains a crucial hurdle.
Materials and methods: In this study, EEG data from 17 patients were analyzed, using the clinical interpretations of the epileptogenic region as the gold standard. Recurrence quantification analysis, based primarily on the variance of the recurrence rate, was used to identify the regions involved during seizures by examining recurrence phenomena between regions. This method allowed for a stage-wise analysis across EEG electrodes, highlighting simultaneously involved areas.
Results: The method effectively distinguished involved from noninvolved regions across anterior, posterior, right temporal, and left temporal areas with a macro-averaged F-score of 95.54. For the anterior region, it achieved an overall accuracy (correct predictions out of total predictions) of 86.96%, sensitivity (ability to correctly identify seizure-involved regions) of 82.79%, and specificity (ability to correctly identify noninvolved regions) of 86.96%. For the other regions, accuracy, sensitivity, and specificity values ranged from 66.0% to 89.13%.
Conclusions: This approach could pinpoint brain regions involved in seizures at any stage and could be useful for clinical monitoring and surgical planning. The method's simplicity and strong performance suggest it is promising for real-time application during epilepsy treatment.
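A minimal sketch of the underlying recurrence-rate computation follows, assuming a simple amplitude-distance recurrence criterion and arbitrary window and threshold settings rather than the authors' exact parameters; the variance of the recurrence rate over windows is the quantity tracked per region.

```python
# Minimal sketch (assumed parameters, synthetic data): thresholded recurrence plot
# per EEG window, then variance of the recurrence rate across windows per region.
import numpy as np

def recurrence_rate(x, eps):
    """Fraction of recurrent point pairs in a 1-D signal window."""
    d = np.abs(x[:, None] - x[None, :])        # pairwise distance matrix
    rp = (d < eps).astype(float)               # recurrence plot
    return rp.mean()

rng = np.random.default_rng(2)
eeg = rng.normal(size=(4, 5000))               # placeholder: 4 regional channels
win, step = 500, 250

for ch, sig in enumerate(eeg):
    eps = 0.2 * np.std(sig)                    # threshold as a fraction of signal spread
    rr = [recurrence_rate(sig[s:s + win], eps)
          for s in range(0, len(sig) - win, step)]
    print(f"region {ch}: variance of recurrence rate = {np.var(rr):.5f}")
```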
Designing a Software for Registry of Pregnant Women with Heart Disease in Iran and Preliminary Results.
Pub Date: 2025-07-10 | DOI: 10.4103/jmss.jmss_43_24
Mahdi Kalani, Fateme Mahdikhoshouei, Parvin Bahrami, Amirreza Sajjadieh Khajouei, Minoo Movahedi, Shima Mehdipour, Marzieh Rezvani Habibabadi
Heart disease in pregnancy is an important health issue worldwide that requires precise care to improve the health care of pregnant women and reduce the maternal mortality rate (MMR). Because registries play an important role in improving health care, we designed software as a first step toward a national registry for pregnant women with heart disease in Iran, classifying patients more effectively to reduce mismanagement. A Windows-based application, written in C#, was designed and implemented by a group of specialists comprising two experienced cardiologists, a skilled gynecologist, and a proficient medical doctor programmer. Since the launch of the software, information for 500 pregnant women with heart disease has been entered. The most common types of heart disease, in order, were congenital heart disease, prosthetic heart valves, valvular disease, and cardiomyopathies. The software provides a comprehensive and efficient tool for managing patients with heart disease in pregnancy; its use can help identify high-risk patients early, leading to better patient outcomes and ultimately contributing to the global goal of reducing the MMR. In this field, gathering large and accurate data over time can also support artificial intelligence-based analysis.
Balancing Radiation Dose Reduction and Image Quality in Chest Computed Tomography using Silicon Rubber-barium Sulfate Composite Shield.
Pub Date: 2025-07-10 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_61_24
Mohammad Keshtkar, Saeedeh Yazdanifar
Background: During chest CT examinations, the breasts are exposed to a significant amount of radiation, increasing the risk of radiation-induced cancers. The objective of this study is to develop and evaluate a novel silicon rubber-barium sulfate (BaSO4) composite breast shield for reducing radiation dose in chest computed tomography (CT) examinations while minimizing impact on image quality.
Methods: Four breast shields were fabricated: one with 10% bismuth and three with 10%, 15%, and 20% BaSO4. Dose reduction was assessed using a thorax phantom and ionization chamber. Image quality effects were evaluated in the thorax phantom by measuring noise and CT number changes. The 10% barium shield was further tested on 22 patients undergoing chest CT.
Results: The 10%, 15%, and 20% barium shields reduced breast dose by 36.8%, 38.6%, and 45.6%, respectively, while the 10% bismuth shield achieved a 63.1% reduction. However, the 10% barium shield had minimal impact on image quality, increasing lung noise by only 0.3 Hounsfield units (HU) and shifting CT numbers by 4.7 HU. In patient studies, 81.8% of scans showed no artifacts, with 18.2% showing slight artifacts.
Conclusion: The 10% BaSO4 shield effectively reduced breast dose while maintaining image quality, presenting a viable alternative to bismuth shielding for radiation protection in chest CT examinations.
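For readers unfamiliar with how such image-quality metrics are obtained, here is a minimal sketch of the noise and CT-number comparison, with synthetic Hounsfield-unit arrays standing in for the real shielded and unshielded phantom ROIs.

```python
# Minimal sketch (illustrative only): noise (ROI standard deviation) and CT-number
# shift (ROI mean difference) between unshielded and shielded phantom slices in HU.
import numpy as np

rng = np.random.default_rng(3)
roi_unshielded = rng.normal(loc=-750.0, scale=20.0, size=(40, 40))   # placeholder HU values
roi_shielded = rng.normal(loc=-745.3, scale=20.3, size=(40, 40))     # placeholder HU values

noise_increase = roi_shielded.std() - roi_unshielded.std()           # HU
ct_shift = roi_shielded.mean() - roi_unshielded.mean()               # HU
print(f"noise increase: {noise_increase:.1f} HU, CT-number shift: {ct_shift:.1f} HU")
```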
From Image to Sequence: Exploring Vision Transformers for Optical Coherence Tomography Classification.
Pub Date: 2025-06-09 | DOI: 10.4103/jmss.jmss_58_24
Amirali Arbab, Aref Habibi, Hossein Rabbani, Mahnoosh Tajmirriahi
Background: Optical coherence tomography (OCT) is a pivotal imaging technique for the early detection and management of critical retinal diseases, notably diabetic macular edema and age-related macular degeneration. These conditions are significant global health concerns, affecting millions and leading to vision loss if not diagnosed promptly. Current methods for OCT image classification encounter specific challenges, such as the inherent complexity of retinal structures and considerable variability across different OCT datasets.
Methods: This paper introduces a novel hybrid model that integrates the strengths of convolutional neural networks (CNNs) and vision transformer (ViT) to overcome these obstacles. The synergy between CNNs, which excel at extracting detailed localized features, and ViT, adept at recognizing long-range patterns, enables a more effective and comprehensive analysis of OCT images.
Results: While our model achieves an accuracy of 99.80% on the OCT2017 dataset, its standout feature is its parameter efficiency: it requires only 6.9 million parameters, significantly fewer than larger, more complex models such as Xception and OpticNet-71.
Conclusion: This efficiency underscores the model's suitability for clinical settings, where computational resources may be limited but high accuracy and rapid diagnosis are imperative. Code availability: The code for this study is available at https://github.com/Amir1831/ViT4OCT.
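A minimal PyTorch sketch of this kind of CNN-transformer hybrid is given below, with assumed layer sizes and without positional embeddings; it is not the published architecture (see the linked repository for that), only an illustration of routing CNN feature maps into a transformer encoder as a token sequence.

```python
# Minimal sketch (assumed architecture): a small CNN stem extracts local OCT features,
# which are flattened into tokens for a transformer encoder, then mean-pooled for
# 4-class classification (e.g., CNV/DME/DRUSEN/NORMAL in OCT2017).
import torch
import torch.nn as nn

class HybridCNNViT(nn.Module):
    def __init__(self, num_classes=4, dim=128):
        super().__init__()
        self.stem = nn.Sequential(                      # local feature extractor
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                                   dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        f = self.stem(x)                                # (B, dim, H', W') local features
        tokens = f.flatten(2).transpose(1, 2)           # (B, H'*W', dim) token sequence
        tokens = self.encoder(tokens)                   # long-range interactions
        # positional embeddings omitted in this sketch for brevity
        return self.head(tokens.mean(dim=1))            # mean-pool tokens, classify

logits = HybridCNNViT()(torch.randn(2, 1, 224, 224))    # e.g., grayscale OCT B-scans
print(logits.shape)                                     # torch.Size([2, 4])
```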
A Comprehensive Survey of Brain-Computer Interface Technology in Health care: Research Perspectives.
Pub Date: 2025-06-09 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_49_24
Meenalosini Vimal Cruz, Suhaima Jamal, Sibi Chakkaravarthy Sethuraman
The brain-computer interface (BCI) technology has emerged as a groundbreaking innovation with profound implications across diverse domains, particularly in health care. By establishing a direct communication pathway between the human brain and external devices, BCI systems offer unprecedented opportunities for diagnosis, treatment, and rehabilitation, thereby reshaping the landscape of medical practice. However, despite its immense potential, the widespread adoption of BCI technology in clinical settings faces several challenges, including the need for robust signal acquisition and processing techniques and for optimized user training and adaptation. Overcoming these challenges is crucial to unleashing the full potential of BCI technology in health care and realizing its promise of personalized, patient-centric care. This review underscores the transformative potential of BCI technology in revolutionizing medical practice and offers a comprehensive analysis of medical-oriented BCI applications by exploring the various uses of BCI technology and its potential to transform patient care.
Introducing a Deep Neural Network Model with Practical Implementation for Polyp Detection in Colonoscopy Videos.
Pub Date: 2025-06-09 | DOI: 10.4103/jmss.jmss_23_24
Hajar Keshavarz, Zohreh Ansari, Hossein Abootalebian, Babak Sabet, Mohammadreza Momenzadeh
Background: Deep learning has gained much attention in computer-assisted minimally invasive surgery in recent years. The application of deep-learning algorithms in colonoscopy can be divided into four main categories: surgical image analysis, surgical operations analysis, evaluation of surgical skills, and surgical automation. Analysis of surgical images by deep learning can be one of the main solutions for early detection of gastrointestinal lesions and for taking appropriate actions to treat cancer.
Method: This study investigates a simple and accurate deep-learning model for polyp detection. We address the challenge of limited labeled data through transfer learning and employ multi-task learning to perform both polyp classification and bounding box detection. Choosing the appropriate weight for each task in the total cost function is crucial to achieving the best results. Because datasets with nonpolyp images are scarce, additional data collection was carried out. The proposed deep neural network was applied to the KVASIR-SEG and CVC-CLINIC datasets as polyp images, in addition to nonpolyp images extracted from the LDPolyp videos dataset.
Results: The proposed model demonstrated high accuracy, achieving 100% in polyp/non-polyp classification and 86% in bounding box detection. It also showed fast processing times (0.01 seconds), making it suitable for real-time clinical applications.
Conclusion: The developed deep-learning model offers an efficient, accurate, and cost-effective solution for real-time polyp detection in colonoscopy. Its performance on benchmark datasets confirms its potential for clinical deployment, aiding in early cancer diagnosis and treatment.
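A minimal sketch of a weighted multi-task objective of the kind described appears below; the loss terms, task weights, and tensor layout are assumptions for illustration, not the authors' exact cost function.

```python
# Minimal sketch (assumed names/weights): a weighted multi-task objective combining
# polyp/non-polyp classification with bounding-box regression, where the box loss
# is only applied to frames that actually contain a polyp.
import torch
import torch.nn.functional as F

def multitask_loss(cls_logits, cls_target, box_pred, box_target, w_cls=1.0, w_box=2.0):
    loss_cls = F.binary_cross_entropy_with_logits(cls_logits, cls_target.float())
    polyp_mask = cls_target.bool()                       # boxes defined only for polyp frames
    if polyp_mask.any():
        loss_box = F.smooth_l1_loss(box_pred[polyp_mask], box_target[polyp_mask])
    else:
        loss_box = box_pred.sum() * 0.0                  # keep the graph connected
    return w_cls * loss_cls + w_box * loss_box

# toy batch: 4 frames, 2 with polyps; boxes as normalized (x, y, w, h)
cls_logits = torch.randn(4)
cls_target = torch.tensor([1, 0, 1, 0])
box_pred = torch.rand(4, 4, requires_grad=True)
box_target = torch.rand(4, 4)
print(multitask_loss(cls_logits, cls_target, box_pred, box_target))
```

Tuning w_cls and w_box (or learning them) is the practical knob the abstract refers to when it stresses the weighting of each task in the total cost function.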
Radiomics Analysis on Computed Tomography Images for Prediction of Chemoradiation-induced Heart Failure in Breast Cancer by Machine Learning Models.
Pub Date: 2025-05-01 | eCollection Date: 2025-01-01 | DOI: 10.4103/jmss.jmss_51_24
Farzaneh Ansari, Ali Neshasteh-Riz, Reza Paydar, Fathollah Mohagheghi, Sahar Felegari, Manijeh Beigi, Susan Cheraghi
Background: This study aimed to evaluate the effectiveness of clinical, dosimetric, and radiomic features from computed tomography (CT) scans in predicting the probability of heart failure in breast cancer patients undergoing chemoradiation treatment.
Materials and methods: We selected 54 breast cancer patients who received left-sided chemoradiation therapy and had a low risk of natural heart failure according to the Framingham score. We compared echocardiographic patterns and ejection fraction (EF) measurements before and 3 years after radiotherapy for each patient. Based on these comparisons, we evaluated the incidence of heart failure 3 years postchemoradiation therapy. For machine learning (ML) modeling, we first segmented the heart as the region of interest in CT images using a deep learning technique. We then extracted radiomic features from this region. We employed three widely used classifiers - decision tree, K-nearest neighbor, and random forest (RF) - using a combination of radiomic, dosimetric, and clinical features to predict chemoradiation-induced heart failure. The evaluation criteria included accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (area under the curve [AUC]).
Results: In this study, 46% of the patients experienced heart failure, as indicated by EF. A total of 873 radiomic features were extracted from the segmented area. Out of 890 combined radiomic, dosimetric, and clinical features, 15 were selected. The RF model demonstrated the best performance, with an accuracy of 0.85 and an AUC of 0.98. Patient age and V5 irradiated heart volume were identified as key predictors of chemoradiation-induced heart failure.
Conclusion: Our quantitative findings indicate that employing ML methods and combining radiomic, dosimetric, and clinical features to identify breast cancer patients at risk of cardiotoxicity is feasible.
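A minimal scikit-learn sketch of this feature-selection-plus-classifier evaluation follows, using synthetic placeholders for the 54-patient, 890-feature matrix; the univariate selector and random-forest hyperparameters are assumptions, not the study's settings.

```python
# Minimal sketch (synthetic placeholders, not the study data): select 15 of the
# combined radiomic/dosimetric/clinical features and evaluate a random forest
# with cross-validated accuracy and ROC AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(54, 890))            # placeholder: 54 patients x 890 combined features
y = rng.integers(0, 2, size=54)           # placeholder: heart-failure label from EF change

model = make_pipeline(SelectKBest(f_classif, k=15),
                      RandomForestClassifier(n_estimators=300, random_state=0))
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"placeholder-data accuracy {acc.mean():.2f}, AUC {auc.mean():.2f}")
```

Keeping the selector inside the pipeline ensures feature selection is refit within each cross-validation fold, which avoids optimistic bias when only 54 patients are available.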
Enhanced Joint Heart and Respiratory Rates Extraction from Functional Near-infrared Spectroscopy Signals Using Cumulative Curve Fitting Approximation.
Pub Date: 2025-05-01 | DOI: 10.4103/jmss.jmss_48_24
Navid Adib, Seyed Kamaledin Setarehdan, Shirin Ashtari Tondashti, Mahdis Yaghoubi
Background: Functional near-infrared spectroscopy (fNIRS) is a valuable neuroimaging tool that captures cerebral hemodynamics during various brain tasks. However, fNIRS data usually suffer from physiological artifacts. In fact, these artifacts are themselves rich in valuable physiological information.
Methods: Leveraging this, our study presents a novel algorithm for extracting heart and respiratory rates (RRs) from fNIRS signals using a nonstationary, nonlinear filtering approach called cumulative curve fitting approximation. To enhance the accuracy of heart peak localization, a novel real-time method based on polynomial fitting was implemented, addressing the limitations of the 10 Hz temporal resolution in fNIRS. Simultaneous recordings of fNIRS, electrocardiogram (ECG), and respiration using a chest band strain gauge sensor were obtained from 15 subjects during a respiration task. Two-thirds of the subjects' data were used for the training procedure, employing a 5-fold cross-validation approach, while the remaining subjects were completely unseen and reserved for final testing.
Results: The results demonstrated a strong correlation (r > 0.92, Bland-Altman Ratio <6%) between heart rate variability derived from fNIRS and ECG signals. Moreover, the low mean absolute error (0.18 s) in estimating the respiration period emphasizes the feasibility of the proposed method for RR estimation from fNIRS data. In addition, paired t-tests showed no significant difference between respiration rates estimated from the fNIRS-based measurements and those from the respiration sensor for each subject (P > 0.05).
Conclusion: This study highlights fNIRS as a powerful tool for the noninvasive extraction of heart and respiratory rates alongside brain signals. The findings pave the way for developing lightweight, cost-effective wearable devices that can simultaneously monitor hemodynamic, cardiac, and respiratory activity, enhancing comfort and portability for health monitoring applications.
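A minimal sketch of sub-sample peak refinement on a 10 Hz signal via a local quadratic (polynomial) fit is shown below, using an assumed cardiac band and synthetic data; it illustrates the peak-localization idea rather than the cumulative curve fitting approximation itself.

```python
# Minimal sketch (assumed band and fit order, synthetic data): estimate heart rate
# from a 10 Hz fNIRS-like channel by band-pass filtering, detecting peaks, and
# refining each peak location with a local parabola to mitigate coarse sampling.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 10.0                                              # fNIRS sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.default_rng(5).normal(size=t.size)

b, a = butter(3, [0.8 / (fs / 2), 2.0 / (fs / 2)], btype="band")   # assumed cardiac band
cardiac = filtfilt(b, a, signal)

peaks, _ = find_peaks(cardiac, distance=int(0.4 * fs))
refined = []
for p in peaks[1:-1]:
    # fit a parabola to the 3 samples around the peak; its vertex gives a sub-sample time
    coeffs = np.polyfit([-1, 0, 1], cardiac[p - 1:p + 2], 2)
    offset = -coeffs[1] / (2 * coeffs[0]) if coeffs[0] != 0 else 0.0
    refined.append((p + offset) / fs)

rr_intervals = np.diff(refined)                        # seconds between refined peaks
print(f"mean heart rate on synthetic data: {60.0 / rr_intervals.mean():.1f} bpm")
```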