Pub Date: 2021-12-08 | DOI: 10.1109/BioSMART54244.2021.9677802
Jin Woo Kim, Hyunjae Jeong, Kwangtaek Kim, Dustin P. DeMeo, B. Carroll
We present the Virtual Reality Haptic Surgery Platform (VRHSP), a multimodal haptic virtual reality training simulator for elliptical skin excisions (i.e., skin tumor surgeries). Using a haptic device and a head-mounted display, participants interact with actual skin images mapped to a 3D simulated surgical suite. In this study, the primary aim is to build the VRHSP with an initial narrow focus of simulating the outlining and incision steps of skin tumor surgeries with realistic tactile and visual feedback collocated in a 3D clinical scene. The secondary aim is to investigate the effectiveness of the VRHSP's haptic feedback capability, which we hypothesized would play an important role because skin tumor surgery is a tactile skill. We report the results of user studies conducted with non-medical and medical participants from Kent State University and University Hospitals Cleveland Medical Center, respectively. The qualitative results suggest that the VRHSP has potential for high adoption, especially with haptic feedback. The quantitative results demonstrate the VRHSP's ability to discern experts from non-experts. Finally, the improved performance of participants with feedback suggests that haptic feedback can be used as a teaching tool as well as a realism tool.
Title: Image Based Virtual Reality Haptic Simulation for Multimodal Skin Tumor Surgery Training
Published in: 2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART)
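The abstract above does not publish the simulator's force model, but a common approach in haptic surgery training systems is penalty-based rendering, where the reaction force on the stylus is proportional to the tool's penetration into the virtual skin surface (Hooke's law, F = k·d). The function name and stiffness value below are hypothetical, shown only to illustrate the technique:

```python
# Illustrative penalty-based haptic rendering sketch; not the authors' code.
# The stiffness constant is a hypothetical placeholder value.

def penalty_force(tool_depth_mm: float, stiffness_n_per_mm: float = 0.8) -> float:
    """Return the reaction force (N) pushing the stylus back out of the skin.

    tool_depth_mm: how far the virtual scalpel tip has penetrated the skin
    surface; zero or negative depth means no contact, hence no force.
    """
    penetration = max(0.0, tool_depth_mm)
    return stiffness_n_per_mm * penetration
```

In a simulator loop this would run at haptic rates (commonly ~1 kHz) so the restoring force feels continuous to the user.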
Pub Date: 2021-12-08 | DOI: 10.1109/BioSMART54244.2021.9677747
Anubha Gupta, Jayant Jain, Shubhankar Poundrik, M. Shetty, M. Girish, M. Gupta
COVID-19 has caused immense social and economic losses throughout the world. Subjects who have recovered from COVID are reported to experience complications. Some studies have shown a change in heart rate variability (HRV) in COVID-recovered subjects compared to healthy ones. This change indicates an increased risk of heart problems among survivors of moderate-to-severe COVID. Hence, this study aims to find HRV features that are altered in COVID-recovered subjects compared to healthy subjects. Data from COVID-recovered and healthy subjects were collected from two hospitals in Delhi, India. Seven ML models were built to classify healthy versus COVID-recovered subjects. The best-performing model was further analyzed to explore the ranking of altered heart features in COVID-recovered subjects via AI interpretability. Ranking these features can indicate cardiovascular health status to doctors, who can support COVID-recovered subjects by safeguarding them against heart disorders in a timely manner. To the best of our knowledge, this is the first study with an in-depth analysis of the heart status of COVID-recovered subjects via ECG analysis.
Title: Interpretable AI Model-Based Predictions of ECG changes in COVID-recovered patients
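The abstract does not list which HRV features the models use, but two standard time-domain HRV measures computed from successive RR intervals are SDNN (overall variability) and RMSSD (beat-to-beat variability). The sketch below shows these textbook definitions for illustration; the RR values are synthetic:

```python
import math

# Standard time-domain HRV measures over RR intervals (milliseconds).
# Illustrative only; these are not necessarily the features used in the paper.

def sdnn(rr_ms):
    """Standard deviation of all RR intervals (population SD)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]  # synthetic RR intervals in ms
```

Lower RMSSD generally reflects reduced parasympathetic activity, which is one reason such features are informative for cardiovascular risk screening.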
Pub Date: 2021-12-08 | DOI: 10.1109/BioSMART54244.2021.9677717
Guochang Ye, Mehmet Kaya
Cell segmentation is a critical step in image-based experimental analysis. This study proposes an efficient and accurate cell segmentation method: an image processing pipeline of simple morphological operations that automatically segments cells in phase-contrast images. Manual/visual cell segmentation serves as the control group for evaluating the proposed methodology's performance. Against the manually labeled data (156 images as ground truth), the proposed method achieves an average Dice coefficient of 90.07%, an average intersection over union of 82.16%, and an average relative error of 6.52% in measuring cell growth area. Additionally, similar degrees of segmentation accuracy are observed when training a modified U-Net model (16,848 images) separately on the ground truth and on the data generated by the proposed method. These results demonstrate the good accuracy and high practicality of the proposed cell segmentation method, which can quantify cell growth area and generate labeled data for deep learning cell segmentation techniques.
Title: Automated Cell Segmentation for Phase-Contrast Images of Adhesion Cell Culture
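The entry above reports Dice and intersection-over-union scores; for reference, both metrics on binary masks reduce to simple set arithmetic. The sketch below uses flat lists of 0/1 pixels for illustration (it is not the authors' evaluation code):

```python
# Dice coefficient and intersection over union (IoU) on binary masks,
# represented here as flat lists of 0/1 pixel values.

def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|); 1.0 when both masks are empty."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def iou(pred, truth):
    """IoU = |A ∩ B| / |A ∪ B|; 1.0 when both masks are empty."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0
```

Note that Dice is always at least as large as IoU on the same masks (Dice = 2·IoU / (1 + IoU)), consistent with the 90.07% Dice versus 82.16% IoU reported above.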
Pub Date: 2021-12-08 | DOI: 10.1109/BioSMART54244.2021.9677652
Guochang Ye, Vignesh Balasubramanian, J. Li, M. Kaya
Abnormal elevation of intracranial pressure (ICP) can cause dangerous or even fatal outcomes. Early detection of high intracranial pressure events can be crucial in saving patients' lives in an intensive care unit (ICU). This study proposes an efficient artificial recurrent neural network to predict intracranial pressure elevation for thirteen patients. The learning model is generated individually for each patient to predict the occurrence of an ICP event (classified as high ICP or low ICP) over the upcoming 10 minutes from the preceding 20 minutes of signal. The results showed that the minimum accuracy in predicting intracranial pressure events was 90% across 11 patients, and an accuracy of at least 95% was obtained for five patients. This study introduces an efficient artificial recurrent neural network model for the early prediction of intracranial pressure elevation, supported by the highly adaptive performance of the LSTM model.
Title: Intracranial Pressure Prediction with a Recurrent Neural Network Model
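The windowing scheme described above, pairing a 20-minute input window with a high/low ICP label for the following 10 minutes, can be sketched as below. The sampling rate (one reading per minute) and the 20 mmHg threshold are hypothetical choices for illustration, not values from the paper:

```python
# Sliding-window framing for ICP event prediction. Each sample pairs a
# `past`-minute input window with a binary label: 1 if ICP exceeds the
# high-ICP threshold at any point in the following `future` minutes.
# Sampling rate and threshold here are illustrative assumptions.

def make_windows(icp, past=20, future=10, high_icp=20.0):
    """icp: one reading per minute (mmHg). Returns (window, label) pairs."""
    samples = []
    for t in range(past, len(icp) - future + 1):
        window = icp[t - past:t]                     # previous 20 minutes
        label = int(max(icp[t:t + future]) > high_icp)  # next 10 minutes
        samples.append((window, label))
    return samples
```

Each (window, label) pair would then feed a per-patient LSTM classifier, matching the patient-specific training described in the abstract.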
Pub Date: 2021-11-20 | DOI: 10.1109/BioSMART54244.2021.9677844
Abhishek Srivastava, S. Chanda, Debesh Jha, M. Riegler, P. Halvorsen, Dag Johansen, U. Pal
Medical image segmentation can provide detailed information for clinical analysis, which is useful in scenarios where the precise location of a finding matters. Knowing the location of a disease can play a vital role in treatment and decision-making. Convolutional neural network (CNN) based encoder-decoder techniques have advanced the performance of automated medical image segmentation systems. Several such CNN-based methodologies utilize techniques such as spatial- and channel-wise attention to enhance performance. Another technique that has drawn attention in recent years is residual dense blocks (RDBs). The successive convolutional layers in densely connected blocks are capable of extracting diverse features with varied receptive fields, thus enhancing performance. However, consecutively stacked convolutional operators may not necessarily generate features that facilitate the identification of the target structures. In this paper, we propose a progressive alternating attention network (PAANet). We develop progressive alternating attention dense (PAAD) blocks, which construct a guiding attention map (GAM) after every convolutional layer in the dense blocks using features from all scales. The GAM allows the following layers in the dense blocks to focus on the spatial locations relevant to the target region. Every alternate PAAD block inverts the GAM to generate a reverse attention map, which guides ensuing layers to extract boundary- and edge-related information, refining the segmentation process.
Title: PAANet: Progressive Alternating Attention for Automatic Medical Image Segmentation
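The core mechanism described above, an attention map in (0, 1) that re-weights features and whose inversion (1 − a) emphasises everything the forward map suppressed, can be illustrated with a toy element-wise version. This mirrors only the gating idea, not the PAAD block itself; names and values are hypothetical:

```python
import math

# Toy multiplicative attention gating over a 1-D feature vector.
# A sigmoid squashes raw scores into (0, 1); `reverse=True` inverts the
# map (1 - a), which is the reverse-attention trick used for boundaries.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gate(features, scores, reverse=False):
    """Element-wise gating: features * a, or features * (1 - a) if reversed."""
    attn = [sigmoid(s) for s in scores]
    if reverse:
        attn = [1.0 - a for a in attn]
    return [f * a for f, a in zip(features, attn)]
```

With a strongly positive score the forward gate passes the feature almost unchanged while the reverse gate suppresses it, so alternating the two directs successive layers to complementary regions.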
Pub Date: 2021-11-04 | DOI: 10.1109/BioSMART54244.2021.9677634
Motasem S. Alsawadi, Miguel Rio
There has been a dramatic increase in the volume of videos and related content uploaded to the internet. Accordingly, the need for efficient algorithms to analyse this vast amount of data has attracted significant research interest. This work aims to recognize activities of daily living using the ST-GCN model, providing a comparison between four different partitioning strategies: spatial configuration partitioning, full distance split, connection split, and index split. To achieve this aim, we present the first implementation of the ST-GCN framework on the HMDB-51 dataset. Additionally, we show that our proposals achieve higher accuracy on the UCF-101 dataset with the ST-GCN framework than the state-of-the-art approach.
Title: Skeleton-Split Framework using Spatial Temporal Graph Convolutional Networks for Action Recognition
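The partitioning strategies compared above group a skeleton joint's neighbours into subsets that receive separate graph convolutions. As an illustration of the general idea behind distance-based splits, the sketch below partitions a joint's neighbours by BFS hop distance to a root joint; the skeleton, function names, and two-way split are illustrative assumptions, not the paper's implementation:

```python
from collections import deque

# Illustrative distance-based neighbour partitioning on an undirected
# skeleton graph given as a list of (joint, joint) edges.

def hop_distances(edges, root):
    """BFS hop distance from `root` for every joint reachable in the graph."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    dist = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

def partition_neighbours(edges, joint, root):
    """Split `joint`'s neighbours into closer-to-root vs farther-from-root."""
    dist = hop_distances(edges, root)
    neighbours = [b for a, b in edges if a == joint] + \
                 [a for a, b in edges if b == joint]
    closer = [n for n in neighbours if dist[n] < dist[joint]]
    farther = [n for n in neighbours if dist[n] >= dist[joint]]
    return closer, farther
```

Each subset would then get its own learnable weight matrix in the graph convolution, which is what distinguishes the four partitioning strategies the paper compares.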