Annual International Conference of the IEEE Engineering in Medicine and Biology Society: Latest Publications
Spatiotemporal response analysis to simple and complex stimuli in patients with unilateral spatial neglect: 3D verification using immersive virtual reality.
Akira Koshino, Tomoki Akatsuka, Kazuhiro Yasuda, Saki Takazawa, Shuntaro Kawaguchi, Hiroyasu Iwata
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10782125 | Pages: 1-4
Unilateral spatial neglect (USN) occurs as a sequela of stroke. This study proposes a neglect-identification system to evaluate the ability of patients with USN to process higher-order information. The measurement is performed by varying the complexity of stimuli presented in an immersive virtual-reality space. A clinical study was conducted on three patients with USN using the new system; the results showed that the patients were able to recognize simple objects but neglected complex objects presented on the neglected side. Comparing reaction times between complex and simple objects revealed a delay on the neglected side, assumed to reflect a delay in higher-order information processing. The interval from stimulus presentation to recognition is divided into search time and recognition time, and the cause of the degradation in higher-order information processing is clarified based on eye movement during the recognition time. Furthermore, quantifying the ability to process higher-order information with the proposed higher-order information-processing (HoIP) index shows that this ability varies spatially and deteriorates in the neglected area.
Clinical Relevance: The system developed in this study should enable efficient rehabilitation for each patient because it can evaluate the patient's ability to process higher-order information in a three-dimensional space.
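The split of the stimulus-to-recognition interval into search time and recognition time is central to the analysis above. Below is a minimal sketch of one way to compute that split from eye-tracking data, assuming the first gaze sample on the target marks the end of visual search; the function name, data layout, and example numbers are illustrative and not taken from the paper.

```python
# Illustrative split of reaction time into search and recognition components.
# Assumes gaze samples with timestamps (s) and a boolean flag marking whether
# the gaze falls on the presented object; all names/structures are hypothetical.

def split_reaction_time(stimulus_onset, response_time, gaze_times, gaze_on_target):
    """Return (search_time, recognition_time) for one trial.

    search_time:      stimulus onset -> first gaze sample on the target object
    recognition_time: first gaze on target -> patient's recognition response
    """
    first_fixation = next(
        (t for t, on_target in zip(gaze_times, gaze_on_target)
         if on_target and t >= stimulus_onset),
        None,
    )
    if first_fixation is None:          # object never fixated -> treated as neglected
        return None, None
    search_time = first_fixation - stimulus_onset
    recognition_time = response_time - first_fixation
    return search_time, recognition_time

# Example trial (synthetic numbers):
search, recog = split_reaction_time(
    stimulus_onset=0.0, response_time=2.4,
    gaze_times=[0.1, 0.4, 0.9, 1.3], gaze_on_target=[False, False, True, True],
)
print(search, recog)   # 0.9 s of search, 1.5 s of recognition
```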
Validation of the estimated Effect of Ankle Foot Orthoses on Spinal Cord Injury Gait Using Subject-Adjusted Musculoskeletal Models.
Sergio Galindo-Leon, Inge Eriks-Hoogland, Kenji Suzuki, Diego Paez-Granados
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10782279 | Pages: 1-7
Simulating assistive devices on pathological gait with musculoskeletal models offers the potential to estimate a device's effect on several biomechanical variables, and to assess device characteristics, ahead of manufacturing. In this study, we introduce a novel musculoskeletal modelling approach to simulate the biomechanical impact of ankle-foot orthoses (AFO) on gait in individuals with spinal cord injury (SCI). Leveraging data from the Swiss Paraplegic Center, we constructed anatomically and muscularly scaled models for SCI-AFO users, aiming to predict changes in gait kinematics and kinetics. The importance of this work lies in its potential to enhance rehabilitation strategies and improve quality of life by enabling pre-manufacturing assessment of assistive devices. Although musculoskeletal models have been applied to simulate the effects of walking aids in other conditions, no predictive model currently exists for SCI gait. Evaluation using RMSE showed results comparable to those reported for other pathologies, with kinematic simulation errors ranging from 0.23 to 2.3 degrees. Moreover, the model captured ankle-joint muscular asymmetries and predicted symmetry improvements with AFO use. However, the simulation did not reveal all AFO effects, indicating a need for more personalized model parameters and optimized muscle activation to fully replicate orthosis effects on SCI gait.
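The kinematic accuracy reported above is an RMSE in degrees between simulated and measured joint angles. A minimal sketch of that comparison on synthetic data follows; the variable names and toy trajectories are placeholders, not the authors' pipeline.

```python
import numpy as np

def kinematics_rmse(simulated_deg: np.ndarray, measured_deg: np.ndarray) -> float:
    """Root-mean-square error (degrees) between a simulated and a measured
    joint-angle trajectory sampled at the same time points."""
    simulated_deg = np.asarray(simulated_deg, dtype=float)
    measured_deg = np.asarray(measured_deg, dtype=float)
    return float(np.sqrt(np.mean((simulated_deg - measured_deg) ** 2)))

# Toy example: ankle dorsiflexion over one gait cycle (synthetic data).
t = np.linspace(0, 1, 101)
measured = 10 * np.sin(2 * np.pi * t)
simulated = 10 * np.sin(2 * np.pi * t) + np.random.normal(0, 1.0, t.size)
print(f"Ankle RMSE: {kinematics_rmse(simulated, measured):.2f} deg")
```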
Wearable-oriented Support for Interpretation of Behavioural Effects on Sleep.
Clauirton A Siebra, Jonysberg Quintino, Andre L M Santos, Fabio Q B Da Silva
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10781768 | Pages: 1-4
Daily behaviour directly impacts health in the short and long term. Thus, adopting and maintaining healthy behaviours acts as a preventive measure, avoiding or delaying the emergence of chronic diseases. The process of changing daily routines toward healthy behaviours starts with understanding the current problems. Wearable and deep learning (DL) technologies are important resources for supporting such an understanding. This paper discusses a strategy for interpreting multifeatured longitudinal wearable data to analyse possible causes of health issues. We use the sleep domain as a case example, where the aim is to clarify the reasons for poor sleep quality. A dataset comprising 1874 days of wearable data was used to create an explainable DL model, which indicates the main behaviours on the day preceding a night's sleep that may cause poor sleep quality. As validation, we compare the explanations against a hormone-based framework for sleep control. The results show that the explanations corroborate the literature. However, other datasets with more features should be explored to verify how these features combine and affect the health aspect under study.
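The abstract does not state which explainability technique the DL model uses, so the sketch below uses generic permutation importance as a stand-in for ranking which day-before behavioural features most affect predicted sleep quality; `model_predict`, the feature matrix, and the labels are hypothetical.

```python
import numpy as np

def permutation_importance(model_predict, X, y, n_repeats=20, rng=None):
    """Generic permutation importance: drop in accuracy when one behavioural
    feature (column of X) is shuffled. A stand-in for whatever explainability
    method the paper actually uses."""
    rng = np.random.default_rng(rng)
    baseline = np.mean(model_predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])   # break the link between feature j and sleep quality
            drops.append(baseline - np.mean(model_predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Usage (hypothetical): importances = permutation_importance(clf.predict, X_days, sleep_quality)
```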
Towards Personalized Inhalation Therapy by Correlating Chest CT Imaging and Pulmonary Function Test Features Using Machine Learning.
Ethan O'Connor, Emmanuel Yangue, Yu Feng, Huimin Wu, Chenang Liu
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10781590 | Pages: 1-6
Inhalation therapy is the predominant treatment for a variety of respiratory diseases. The effectiveness of such treatment depends on the accuracy of medication delivery. Thus, personalized inhalation therapy, in which inhaler designs are specifically suited to the patient's needs, is highly desirable. Although computational fluid-particle dynamics (CFPD)-based simulation has demonstrated potential for advancing personalized inhalation therapy, it still requires a 3D model of the patient's respiratory system. Such a model could be constructed from computed tomography (CT) images; however, CT scans are costly and carry a high risk of radiation exposure. This concern motivates this study to bridge chest CT images and pulmonary function test (PFT) data, which are noninvasive and easy to obtain. To achieve this goal, an autoencoder is used to find a lower-dimensional representation of the CT image; PFT data are then mapped to the encoded image using partial least squares (PLS) regression. Using the decoder of the trained autoencoder, a CT image can be reconstructed from the encoded representation predicted from PFT data. This method would allow greater accessibility to chest CT imaging without exposing patients to the potential negative effects of CT scans, significantly advancing personalized inhalation therapy for respiratory diseases. Preliminary experiments on a real-world dataset demonstrate promising performance with the proposed approach.
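The described pipeline (autoencoder latent space plus PLS regression from PFT features) can be sketched as follows, assuming a trained encoder/decoder pair is already available as the callables `encode` and `decode`; these names, the array shapes, and the number of PLS components are placeholders rather than the authors' implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_pft_to_latent(pft_features: np.ndarray, ct_images: np.ndarray, encode, n_components=8):
    """Fit a PLS model mapping PFT feature vectors to autoencoder latent codes.

    pft_features: (n_subjects, n_pft_features); ct_images: iterable of CT volumes/slices;
    encode: trained callable mapping one CT image to a 1-D latent code."""
    latents = np.stack([encode(img) for img in ct_images])   # (n_subjects, latent_dim)
    pls = PLSRegression(n_components=n_components)
    pls.fit(pft_features, latents)
    return pls

def predict_ct(pls, pft_vector: np.ndarray, decode):
    """Predict a latent code from a new PFT measurement and decode it to a CT-like image."""
    latent = pls.predict(pft_vector.reshape(1, -1))
    return decode(latent[0])
```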
Polyp-DDPM: Diffusion-Based Semantic Polyp Synthesis for Enhanced Segmentation.
Zolnamar Dorjsembe, Hsing-Kuo Pao, Furen Xiao
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10782077 | Pages: 1-7
This study introduces Polyp-DDPM, a diffusion-based method for generating realistic images of polyps conditioned on masks, aimed at enhancing the segmentation of gastrointestinal (GI) tract polyps. Our approach addresses the challenges of data limitations, high annotation costs, and privacy concerns associated with medical images. By conditioning the diffusion model on segmentation masks (binary masks that represent abnormal areas), Polyp-DDPM outperforms state-of-the-art methods in terms of image quality (achieving a Fréchet Inception Distance (FID) score of 78.47, compared to scores above 95.82) and segmentation performance (achieving an Intersection over Union (IoU) of 0.7156, versus less than 0.6828 for synthetic images from baseline models and 0.7067 for real data). Our method generates a high-quality, diverse synthetic dataset for training, making polyp segmentation models trained on it comparable to those trained on real images and offering greater data augmentation capability to improve segmentation models. The source code and pretrained weights for Polyp-DDPM are publicly available at https://github.com/mobaidoctor/polyp-ddpm.
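Segmentation quality above is reported as Intersection over Union; here is a minimal sketch of that metric for binary polyp masks (not taken from the Polyp-DDPM code base).

```python
import numpy as np

def binary_iou(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union between two binary polyp masks (1 = polyp, 0 = background)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return float(intersection / (union + eps))

# Example with two toy 4x4 masks:
a = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
b = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(binary_iou(a, b))  # ~0.75 (intersection 3, union 4)
```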
Random Channel Ablation for Robust Hand Gesture Classification with Multimodal Biosignals.
Keshav Bimbraw, Jing Liu, Ye Wang, Toshiaki Koike-Akino
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10782851 | Pages: 1-6
Biosignal-based hand gesture classification is an important component of effective human-machine interaction. In multimodal biosignal sensing, the modalities often face data loss due to missing channels, which can adversely affect gesture classification performance. To make classifiers robust to missing channels, this paper proposes using Random Channel Ablation (RChA) during training. Ultrasound and force myography (FMG) data were acquired from the forearm for 12 hand gestures from 2 subjects. The resulting multimodal data had 16 channels in total, 8 per modality. The proposed method was applied to a convolutional neural network architecture and compared with baseline, imputation, and oracle methods. Using 5-fold cross-validation for the two subjects, average improvements of 12.2% and 24.5% in gesture classification were observed with up to 4 and 8 missing channels, respectively, compared to the baseline. Notably, the proposed method also remains robust as the number of missing channels increases, unlike the other methods. These results show the efficacy of random channel ablation for improving classifier robustness in multimodal, multi-channel biosignal-based hand gesture classification.
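A minimal sketch of what random channel ablation during training can look like for a 16-channel ultrasound+FMG batch follows; the uniform choice of how many channels to drop per sample is an assumption, not necessarily the schedule used in the paper.

```python
import numpy as np

def random_channel_ablation(batch: np.ndarray, max_missing: int = 4, rng=None) -> np.ndarray:
    """Zero out a random subset of channels in each training sample.

    batch: array of shape (n_samples, n_channels, n_timesteps); here 16 channels
    (8 ultrasound + 8 FMG) as in the paper."""
    rng = np.random.default_rng(rng)
    ablated = batch.copy()
    for i in range(batch.shape[0]):
        n_drop = rng.integers(0, max_missing + 1)                  # drop 0..max_missing channels
        drop_idx = rng.choice(batch.shape[1], size=n_drop, replace=False)
        ablated[i, drop_idx, :] = 0.0
    return ablated

# In training (hypothetical loop): feed random_channel_ablation(X_batch) to the CNN each step.
```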
Robust Sequence-to-sequence Voice Conversion for Electrolaryngeal Speech Enhancement in Noisy and Reverberant Conditions.
Ding Ma, Yeonjong Choi, Fengji Li, Chao Xie, Kazuhiro Kobayashi, Tomoki Toda
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10781979 | Pages: 1-4
Electrolaryngeal (EL) speech, an artificial speech produced by an electrolarynx for laryngectomees, lacks essential phonetic features and differs in temporal structure from normal speech, resulting in poor naturalness and intelligibility. To address this deficiency, sequence-to-sequence (seq2seq) voice conversion (VC) models have been applied to convert EL speech to normal speech (EL2SP), showing promising performance. However, previous studies mostly focus on converting clean EL speech, which restricts applicability in real-world scenarios, where EL speech is inevitably corrupted by background noise and reverberation. In light of this, we propose novel training techniques based on seq2seq VC to enhance the robustness of real-world EL2SP. We first pretrain a normal-to-normal seq2seq VC model based on a text-to-speech model. Then, a two-stage fine-tuning is conducted that makes effective use of pseudo noisy and reverberant EL speech data artificially generated from only a small amount of the original clean data available. Several design options are investigated to assess the effectiveness of our method. The significant improvements in the experimental results indicate that our method can non-trivially handle both clean and noisy-reverberant EL speech, enhancing the robustness of EL2SP in real-world scenarios.
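Generating pseudo noisy and reverberant EL speech from clean recordings is the core of the fine-tuning data described above. Below is a minimal sketch under the usual additive-noise-plus-room-impulse-response model; the RIRs, noise sources, and SNR values are placeholders, since the paper's exact corruption settings are not given here.

```python
import numpy as np
from scipy.signal import fftconvolve

def make_noisy_reverberant(clean: np.ndarray, rir: np.ndarray, noise: np.ndarray,
                           snr_db: float = 10.0) -> np.ndarray:
    """Generate a pseudo noisy-reverberant utterance from a clean EL recording.

    clean: clean EL waveform; rir: a room impulse response; noise: a background
    noise waveform at least as long as the clean signal."""
    reverberant = fftconvolve(clean, rir)[: len(clean)]            # add reverberation
    noise = noise[: len(reverberant)]
    speech_power = np.mean(reverberant ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return reverberant + scale * noise                              # add noise at the target SNR
```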
Synthetic ultrasound images to benchmark echocardiography-based biomechanics.
Tanmay Mukherjee, Sunder Neelakantan, Kyle Myers, Carl Tong, Reza Avazmohammadi
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10782447 | Pages: 1-4
Brightness-mode (B-mode) ultrasound is a common imaging modality in the clinical assessment of several cardiovascular diseases. The utility of ultrasound-based functional indices such as ejection fraction (EF) and stroke volume (SV) is widely described in diagnosing advanced-stage cardiovascular diseases. Additionally, structural indices obtained through the analysis of cardiac motion have been found to be important in the early-stage assessment of structural heart diseases, such as hypertrophic cardiomyopathy and myocardial infarction. Estimating heterogeneous variations in cardiac motion through B-mode ultrasound imaging is therefore a crucial component of patient care. Despite the benefits of such imaging techniques, motion estimation algorithms are susceptible to variability between vendors due to the lack of benchmark motion quantities. In contrast, finite element (FE) simulations of cardiac biomechanics leverage well-established constitutive models of the myocardium to ensure reproducibility. In this study, we developed a methodology to create synthetic B-mode ultrasound images from FE simulations. The proposed methodology provides a detailed representation of displacements and strains under complex mouse-specific loading protocols of the left ventricle (LV). A comparison between the synthetic images and FE simulations revealed qualitative similarity in displacement patterns, thereby yielding benchmark quantities to improve the reproducibility of motion estimation algorithms. Thus, the study provides a methodology to create an extensive repository of images describing complex motion patterns to facilitate enhanced reproducibility of cardiac motion analysis.
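The paper's image-generation pipeline is not detailed in this abstract; the sketch below shows a generic, much-simplified convolution-style rendering of a B-mode-like frame from FE-predicted scatterer displacements (bin the displaced scatterers, blur with a Gaussian point-spread function, log-compress). All parameters and the scatterer model are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_bmode(scat_xy, scat_amp, disp_xy, grid=(256, 256), extent=1.0,
                 psf_sigma=(1.0, 3.0), dyn_range_db=40.0):
    """scat_xy: (N, 2) scatterer positions in [0, extent); scat_amp: (N,) amplitudes;
    disp_xy: (N, 2) FE-predicted displacements at the scatterer locations."""
    moved = scat_xy + disp_xy
    img = np.zeros(grid)
    ij = np.clip((moved / extent * np.array(grid)).astype(int), 0, np.array(grid) - 1)
    np.add.at(img, (ij[:, 0], ij[:, 1]), scat_amp)                 # bin scatterers onto the grid
    env = np.abs(gaussian_filter(img, sigma=psf_sigma))            # crude PSF blur -> envelope
    env /= env.max() + 1e-12
    bmode = 20 * np.log10(env + 1e-6)                              # log compression
    return np.clip(bmode, -dyn_range_db, 0)

# Toy usage with random scatterers and a uniform displacement field:
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (20000, 2))
amps = rng.rayleigh(1.0, 20000)
frame = render_bmode(pts, amps, disp_xy=0.01 * np.ones((20000, 2)))
```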
The Effect of Physical Structural Properties on Electrochemical Properties of Ruthenium Oxide for Neural Stimulating and Recording Electrodes.
Yupeng Wu, Miguel Figueroa Hernandez, Tian Lei, Siddarth Jayakumar, Rohan R Lalapet, Alexandra Joshi-Imre, Mark E Orazem, Kevin J Otto, Stuart F Cogan
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10781914 | Pages: 1-4
Recently, there has been growing interest in ruthenium oxide (RuOx) as an alternative mixed-conductor oxide to sputtered iridium oxide film (SIROF) for electrode coatings. RuOx is recognized as a faradaic charge-injection coating with high cathodal charge storage capacity (CSCc), long-term pulsing stability, and low impedance. We examined how the structural properties of sputter-deposited RuOx influence its electrochemical performance as an electrode coating for neural stimulation and recording. Thin-film RuOx was deposited at various pressures (5, 15, 30, and 60 mTorr) on wafer-based planar test structures. Electrochemical characterization included electrochemical impedance spectroscopy (EIS), cyclic voltammetry (CV), and voltage transient (VT) measurements. The structure of the RuOx films was characterized by scanning electron microscopy (SEM). Our findings revealed that the sputtering pressure significantly influences the growth of the RuOx film and, consequently, its electrochemical performance. The results indicate that the electrochemical performance of RuOx can be optimized by adjusting the deposition conditions to achieve a favorable balance between electronic and ionic conductivity.
Clinical Relevance: This research underscores the potential for optimizing the structural properties of RuOx to enhance its electrochemical capabilities for neural stimulation and recording.
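The cathodal charge storage capacity (CSCc) cited above is conventionally computed from a CV cycle. Below is a minimal sketch of that textbook calculation, with the caveat that the paper's own processing (cycle selection, corrections) may differ.

```python
import numpy as np

def cathodal_csc(time_s: np.ndarray, current_A: np.ndarray, area_cm2: float) -> float:
    """Cathodal charge storage capacity (CSCc, C/cm^2) from one CV cycle: the time
    integral of the cathodic (negative) current, normalised by the electrode's
    geometric area. At a constant scan rate v this is equivalent to
    (1 / (v * A)) * integral of |i_c| dE."""
    cathodic_mag = np.where(current_A < 0.0, -current_A, 0.0)
    return float(np.trapz(cathodic_mag, time_s) / area_cm2)

# Usage (hypothetical arrays sampled from a CV at a known scan rate):
# csc_c = cathodal_csc(t, i, area_cm2=2.0e-5)
```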
Toward EEG-Based Objective Assessment of Emotion Intensity.
Pin-Han Ho, Yong-Sheng Chen, Chun-Shu Wei
Pub Date: 2024-07-01 | DOI: 10.1109/EMBC53108.2024.10781662 | Pages: 1-4
Understanding the temporal dynamics of emotion poses a significant challenge due to the lack of methods to measure them objectively. In this study, we propose a novel approach to tracking emotion intensity (EI) based on electroencephalography (EEG) during continuous exposure to affective stimulation. We design selective sampling strategies to validate the association between the prediction outcome of an EEG-based emotion recognition model and the prominence of emotion-related EEG patterns, evidenced by improvements of 2.01% and 1.71% in the classification tasks of discriminating arousal and valence, respectively. This study constitutes a breakthrough in the objective evaluation of the temporal dynamics of emotions, offering a promising avenue for refining EEG-based emotion recognition models through intensity-selective sampling. Furthermore, our findings can contribute to future affective studies by providing a reliable and objective measurement method to profile emotion dynamics.
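The intensity-selective sampling idea can be illustrated by keeping only the EEG segments for which a pretrained emotion recognition model is most confident, using that output as a proxy for the prominence of emotion-related patterns; the selection rule and names below are assumptions, not the study's exact strategy.

```python
import numpy as np

def select_by_intensity(segments: np.ndarray, proba: np.ndarray, top_fraction: float = 0.3):
    """Keep the EEG segments whose predicted emotion probability is highest.

    segments: (n_segments, n_channels, n_samples); proba: (n_segments,) predicted
    probability of the target class (e.g., high arousal) from a pretrained model."""
    k = max(1, int(round(top_fraction * len(proba))))
    keep = np.argsort(proba)[-k:]                     # indices of the most "intense" segments
    return segments[keep], keep

# Usage (hypothetical): selected, idx = select_by_intensity(eeg_segments, model_proba, 0.3)
```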