Automatic Ray-Sum Panoramic Synthesis from Cone-Beam CT Data
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10011585
Duangkamol Banarsarn, W. Narkbuakaew, Kongyot Wangkaoom, Saowanee Iamsiri, S. Thongvigitmanee
A panoramic image can be reconstructed from a cone-beam CT (CBCT) dataset to display the anatomical structures of all teeth in a patient's mouth in a single image. This research proposed a new automatic method to synthesize a ray-sum panoramic image from a dental CBCT dataset. The objective of the proposed method was to create a curve segment over the whole dental arch so that a ray-sum panoramic image could be generated instantly. We applied the proposed algorithm to eighteen datasets, and all processes were computed within the web browser environment. From the results, the curve segments were consistently generated for all datasets, and the edges of anatomical structures in the panoramic image were enhanced. On average, the computational time was under 5 seconds for large volumetric data of 500x500x450 voxels. In conclusion, the proposed method was functional for the web application: a ray-sum panoramic image was quickly synthesized, and anatomical structures were clearly displayed in a web browser.
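As a rough illustration (not the authors' implementation), the sketch below shows how a ray-sum panoramic image can be synthesized once a curve tracing the dental arch is available: rays are cast perpendicular to the curve in each axial slice and voxel intensities are summed along them. All function and parameter names are hypothetical.

```python
import numpy as np

def ray_sum_panoramic(volume, arch_xy, z_range, ray_half_width=40):
    """Synthesize a ray-sum panoramic image from a CBCT volume.

    volume         : 3D array indexed as (z, y, x)
    arch_xy        : (N, 2) array of (x, y) points tracing the dental arch
    z_range        : (z_min, z_max) slab containing the teeth
    ray_half_width : voxels to integrate on each side of the arch
    """
    arch_xy = np.asarray(arch_xy, dtype=float)
    z_min, z_max = z_range
    panorama = np.zeros((z_max - z_min, len(arch_xy)))

    # Tangent of the arch at each sample point, then its in-plane normal.
    tangents = np.gradient(arch_xy, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True) + 1e-9
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

    offsets = np.arange(-ray_half_width, ray_half_width + 1)
    for i, (p, n) in enumerate(zip(arch_xy, normals)):
        # Sample coordinates along the ray perpendicular to the arch.
        xs = np.clip(np.round(p[0] + offsets * n[0]).astype(int), 0, volume.shape[2] - 1)
        ys = np.clip(np.round(p[1] + offsets * n[1]).astype(int), 0, volume.shape[1] - 1)
        # Sum intensities along the ray for every slice in the slab.
        panorama[:, i] = volume[z_min:z_max, ys, xs].sum(axis=1)
    return panorama
```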
{"title":"Automatic Ray-Sum Panoramic Synthesis from Cone-Beam CT Data","authors":"Duangkamol Banarsarn, W. Narkbuakaew, Kongyot Wangkaoom, Saowanee Iamsiri, S. Thongvigitmanee","doi":"10.1109/BMEiCON56653.2022.10011585","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10011585","url":null,"abstract":"A panoramic image can be reconstructed from a cone-beam CT (CBCT) dataset to manifest anatomical structures of all teeth in a patient’s mouth in one image. This research proposed a new automatic method to synthesize a ray-sum panoramic image from a dental CBCT dataset. The objective of the proposed method was to create the curve segment over the whole dental arch to instantly generate a ray-sum panoramic image. We applied the proposed algorithm to eighteen datasets, and all processes were computed under the web browser’s environment. From the results, the curve segments were consistently generated for all datasets. The edges of anatomical structures in the panoramic image were enhanced. On average, the computational time was under 5 seconds to compute the large volumetric data sized 500x500x450 voxels. In conclusion, the proposed method was functional for the web application. A raysum panoramic was quickly synthesized, and anatomical structures were clearly displayed in a web browser.","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123110640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Low-Cost Digital Stethoscope For Normal and Abnormal Heart Sound Classification
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10012113
Sorawit Khoruamkid, S. Visitsattapongse
Heart disease is a leading cause of death. To address this problem, heartbeat sound analysis is a convenient method for diagnosing heart disease. Heartbeat sound classification remains a challenging problem in heart sound segmentation and feature extraction. A stethoscope is a medical device widely used by physicians to listen to the heartbeat. An acoustic stethoscope transmits sound from the chest piece to the ears of the listener. The main problem in listening to heart sounds is that the signal level is low and difficult to analyze. Adding electronic circuitry and software to an acoustic stethoscope can strengthen the heart sound signal and reduce errors in analyzing the state of the patient's heart. Machine learning is used to efficiently analyze and classify heart sounds. Convolutional neural network (CNN) models and a support vector machine (SVM) with feature extractors were the effective methods used in this research. First, the phonocardiogram (PCG) files are fragmented into pieces of equal length. Then, the PCG segments are converted to spectrograms. The spectrogram images are fed into the convolutional neural network and the support vector machine. The best result was obtained with an Inception V3 model as the CNN classifier, which achieved an accuracy of 0.909, with 0.948 sensitivity and 0.869 specificity.
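The minimal sketch below illustrates the described pipeline of segmenting a PCG recording, converting segments to spectrograms, and classifying them, using scipy and scikit-learn's SVM rather than the paper's Inception V3 feature extractor; the segment length and spectrogram parameters are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def segment_signal(pcg, fs, seg_seconds=5):
    """Split a PCG recording into equal-length, non-overlapping segments."""
    seg_len = int(seg_seconds * fs)
    n_segs = len(pcg) // seg_len
    return [pcg[i * seg_len:(i + 1) * seg_len] for i in range(n_segs)]

def spectrogram_features(segment, fs):
    """Log-spectrogram of one segment, flattened into a feature vector."""
    _, _, sxx = spectrogram(segment, fs=fs, nperseg=256, noverlap=128)
    return np.log(sxx + 1e-10).ravel()

def train_pcg_classifier(segments, labels, fs):
    """segments: list of equal-length 1-D arrays; labels: 0 = normal, 1 = abnormal."""
    x = np.array([spectrogram_features(s, fs) for s in segments])
    x_train, x_test, y_train, y_test = train_test_split(
        x, labels, test_size=0.3, stratify=labels, random_state=0)
    clf = SVC(kernel="rbf", C=1.0).fit(x_train, y_train)
    return clf, accuracy_score(y_test, clf.predict(x_test))
```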
{"title":"A Low-Cost Digital Stethoscope For Normal and Abnormal Heart Sound Classification","authors":"Sorawit Khoruamkid, S. Visitsattapongse","doi":"10.1109/BMEiCON56653.2022.10012113","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10012113","url":null,"abstract":"Heart disease is a major problem in most deaths. To conquer this situation, heartbeat sound analysis is a convenient method for diagnosing heart disease. Heartbeat sound classification remains a challenging problem in heart sound division and feature extraction. A stethoscope is a medical device widely used by physicians to listen to the heartbeat. An acoustic stethoscope operates on the chest piece to the ears of the listener. The main problem is in listening to heart sounds that the low signal level and are difficult to be analyzed. Adding electronic circuitry and software to acoustic stethoscopes will strengthen the heart rate signal and can minimize error analysis of the state of the patient's heart. Machine learning is used to efficiently analyze and classify heart sounds. Convolutional Neural Network (CNN) models and Support Vector Machine (SVM) with feature extractors were effective methods and were used in this research. First, the Phonocardiogram (PCG) files are fragmented into pieces of equivalent length. Then, we convert the PCG files to a spectrogram. The spectrogram images are fed into a convolutional neural network and support vector machine. The best result is using an Inception V3 model with the CNN classifier which has an accuracy of 0.909, with 0.948 sensitivity and 0.869 specificity.","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131505975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of Variable-Elasticity Cell Scaffolds Using Magnetic Gels
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10012083
Yuya Shimomura, Zugui Peng, K. Shimba, Y. Miyamoto, T. Yagi
It is well known that cells are influenced by the properties of their extracellular matrix, which is primarily composed of sugars and proteins. In particular, there exists a strong relationship between the elasticity of the extracellular substrate and cell motility. Clarification of the relationship between cells and substrate elasticity is expected to prove useful in both the design of scaffold materials for tissue engineering and the elucidation of the mechanisms of diseases that cause fibrosis. The elasticity of the extracellular matrix is characterized by two features: a partial elastic gradient and day-to-day variation. Most previous studies reproduced either one or the other, and this likely does not reflect the in vivo situation. Thus, a greater understanding of the changes in cells and substrates is needed. In this study, we propose a scaffold based on a magnetic gel with variable elasticity. We describe the preparation of a magnetic gel and cell seeding on its surface. The cells were cultured under the application of magnetic force after seeding, revealing an increase in the cell area due to the hardening of the magnetic gel in the presence of magnetic force.
{"title":"Development of Variable-Elasticity Cell Scaffolds Using Magnetic Gels","authors":"Yuya Shimomura, Zugui Peng, K. Shimba, Y. Miyamoto, T. Yagi","doi":"10.1109/BMEiCON56653.2022.10012083","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10012083","url":null,"abstract":"It is well known that cells are influenced by the properties of their extracellular matrix, which is primarily composed of sugars and proteins. In particular, there exists a strong relationship between the elasticity of the extracellular substrate and cell motility. Clarification of the relationship between cells and substrate elasticity is expected to prove useful in both the design of scaffold materials for tissue engineering and the elucidation of the mechanisms of diseases that cause fibrosis. The elasticity of the extracellular matrix is characterized by two features: a partial elastic gradient and day-to-day variation. Most previous studies reproduced either one or the other, and this likely does not reflect the in vivo situation. Thus, a greater understanding of the changes in cells and substrates is needed. In this study, we propose a scaffold based on a magnetic gel with variable elasticity. We describe the preparation of a magnetic gel and cell seeding on its surface. The cells were cultured under the application of magnetic force after seeding, revealing an increase in the cell area due to the hardening of the magnetic gel in the presence of magnetic force.","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116810874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Preliminary Study of the Relationship Between Age and Gender using Sounds Generated from the Nostrils and Pharynx During Swallowing in Healthy Subjects
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10012075
Naru Sato, T. Igasaki, Chiharu Matsumoto, Tadashi Sakata, Hitomi Maeda
Although many studies have evaluated swallowing using sounds from the pharynx, few have evaluated it using sounds from the nostrils. Therefore, in this study, we placed two microphones near the nostrils and pharynx and recorded the sounds produced while swallowing five grams of solid jelly. The participants were 16 healthy volunteers (eight in their early twenties (22-24 years old; younger group) and eight early-elderly adults (66-74 years old; elder group), with four males and four females in each group), each performing eight trials. We then examined the peak sound pressure levels of the nostril and pharynx waveforms and investigated the time lags between the sounds generated at the nostrils and pharynx in each subject. Although the time lags varied across subjects, the minimum time lags observed in the elder group (0.32–0.78 s) were significantly longer than those of the younger group (0.04–0.44 s; $F_{1,15} = 30.10$, $p < 0.05$, repeated-measures two-way ANOVA), and the maximum time lags observed in the female subjects (1.04–2.30 s) were significantly longer than those of the males (0.20–1.43 s; $F_{1,15} = 5.57$, $p < 0.05$, repeated-measures two-way ANOVA). These results suggest that measuring the sounds from both the pharynx and nostrils during swallowing helps to evaluate swallowing function with aging while taking gender differences into account.
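A minimal sketch of one way to estimate the time lag between swallowing-sound peaks recorded at the nostrils and at the pharynx is shown below; the envelope-peak approach and the prominence threshold are assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def swallow_time_lag(nostril, pharynx, fs, min_prominence=0.2):
    """Estimate the lag (in seconds) between the first swallowing-sound
    peaks recorded at the nostrils and at the pharynx."""
    def first_peak(signal):
        # Normalized amplitude envelope of the sound channel.
        envelope = np.abs(hilbert(signal))
        envelope /= envelope.max() + 1e-12
        peaks, _ = find_peaks(envelope, prominence=min_prominence)
        return peaks[0] if len(peaks) else None

    p_nostril, p_pharynx = first_peak(nostril), first_peak(pharynx)
    if p_nostril is None or p_pharynx is None:
        return None  # no clear swallowing sound detected in one channel
    return abs(p_nostril - p_pharynx) / fs
```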
{"title":"Preliminary Study of the Relationship Between Age and Gender using Sounds Generated from the Nostrils and Pharynx During Swallowing in Healthy Subjects","authors":"Naru Sato, T. Igasaki, Chiharu Matsumoto, Tadashi Sakata, Hitomi Maeda","doi":"10.1109/BMEiCON56653.2022.10012075","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10012075","url":null,"abstract":"Although many studies have evaluated swallowing using sounds from the pharynx, few studies have evaluated it using sounds from the nostrils. Therefore, in this study, we placed two microphones near the nostrils and pharynx and recorded sounds produced while swallowing five grams of solid jelly. The participants included 16 healthy volunteers (eight in their early twenties (22-24 years old; younger group) and eight early elderly (66-74 years old; elder group), four males and four females in each group) over eight trials. We then examined the peak sound pressure levels of the nostrils and pharynx waveforms and investigated the time lags between the sounds generated by the nostrils and pharynx in each subject. As a result, although the time lags varied across the subjects, the minimum time lags observed in the elder group (0.32 – 0.78 s) were significantly longer than those of the younger group (0.04 – 0.44 s. $F_{1:15}$ = 30.10, $p lt 0.05$, repeated two-way ANOVA), and the maximum time lags observed in the female subjects were significantly longer (1.04 – 2.30 s) than those of males (0.20 – 1.43 s. $F_{1:15}$ = 5.57, $p lt 0.05$, repeated two-way ANOVA). It was suggested that measuring the sounds from both the pharynx and nostrils during swallowing helps in evaluating the swallowing function with aging, and considering gender differences.","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115452797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Importance of Gender Specification for Detection of Driver Fatigue using a Single EEG Channel
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10012118
M. Shahbakhti, Matin Beiramvand, Erfan Nasiri, W. Chen, Jordi Solé-Casals, M. Wierzchoń, Anna Broniec-Wójcik, P. Augustyniak, V. Marozas
Although detection of driver fatigue using a single electroencephalography (EEG) channel has been addressed in the literature, the applicability of such models to gender-differentiated data has not been investigated heretofore. Motivated accordingly, we address the detection of driver fatigue on gender-segregated datasets, each containing 8 subjects. After splitting the EEG signal into its sub-bands (delta, theta, alpha, beta, and gamma) using the discrete wavelet transform, the log energy entropy of each band is computed to form the feature vector. Afterwards, the feature vector is randomly split into 50% for training and 50% for unseen testing and fed to a support vector machine model. When comparing the classification results of driver fatigue detection between the gender-segregated and non-gender-segregated datasets, the former achieved accuracies of 78% and 77% for male and female subjects, respectively, versus 71% for the latter. The obtained results show the importance of gender specification for driver fatigue detection.
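A minimal sketch of the described feature pipeline is given below: wavelet decomposition of a single-channel EEG epoch into sub-bands, log energy entropy per band, a 50/50 split, and an SVM. The wavelet family, decomposition level, and SVM kernel are assumptions, not values taken from the paper.

```python
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def log_energy_entropy(x):
    """Log energy entropy of a coefficient vector: sum of log(x_i^2)."""
    return float(np.sum(np.log(x ** 2 + 1e-12)))

def eeg_epoch_features(epoch, wavelet="db4", level=4):
    """Decompose one single-channel EEG epoch with a discrete wavelet
    transform and compute log energy entropy for each sub-band."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    # With level=4 the coefficients correspond roughly to delta (approximation)
    # through gamma (finest detail), depending on the sampling rate.
    return [log_energy_entropy(c) for c in coeffs]

def fatigue_detection_accuracy(epochs, labels):
    """Random 50/50 train/test split followed by SVM classification."""
    x = np.array([eeg_epoch_features(e) for e in epochs])
    x_train, x_test, y_train, y_test = train_test_split(
        x, labels, test_size=0.5, stratify=labels, random_state=0)
    clf = SVC(kernel="rbf").fit(x_train, y_train)
    return accuracy_score(y_test, clf.predict(x_test))
```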
{"title":"The Importance of Gender Specification for Detection of Driver Fatigue using a Single EEG Channel","authors":"M. Shahbakhti, Matin Beiramvand, Erfan Nasiri, W. Chen, Jordi Solé-Casals, M. Wierzchoń, Anna Broniec-Wójcik, P. Augustyniak, V. Marozas","doi":"10.1109/BMEiCON56653.2022.10012118","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10012118","url":null,"abstract":"Although detection of the driver fatigue using a single electroencephalography (EEG) channel has been addressed in literature, the gender differentiation for applicability of the model has not been investigated heretofore. Motivated accordingly, we address the detection of driver fatigue based the gender-segregated datasets, where each of them contains 8 subjects. After splitting the EEG signal into its sub-bands (delta, theta, alpha, beta, and gamma) using discrete wavelet transform, the log energy entropy of each band is computed to form the feature vector. Afterwards, the feature vector is randomly split into 50% for training and 50% for the unseen testing, and fed to a support vector machine model. When comparing the classification results of fatigue driving detection between the gender segregated and non-gender segregated datasets, the former achieved the accuracy 78% and 77% for male and female subjects, respectively, than the latter (71%). The obtained results show the importance of gender-specification for the driver fatigue detection.","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124076130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computerized Medical Device Management System In Luangprabang Provincial Hospital
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10012115
Sengaloun Xayalath, N. Thongpance, Anantasak Wongkamhang, Anuchit Nirapai
This research aims to design and develop a Computerized Medical Device Maintenance Management System (CMDMS) for managing the medical device systems of Luangprabang Provincial Hospital, Lao People's Democratic Republic. The process of designing and developing the CMDMS consists of 3 main components: 1) designing the structure of the system using Visual Studio, 2) programming in PHP, and 3) creating a database and recording system using MySQL. The results showed that the system meets the specific needs of Luangprabang Provincial Hospital. The CMDMS can be used in the hospital's medical device management operations: it solves problems, simplifies and speeds up the procedures for managing medical devices, and saves budget for the organization, in accordance with the objectives of the design and construction, and it follows the Generic Clinical Engineering Maintenance Management System recommended by the Association for the Advancement of Medical Instrumentation (AAMI).
{"title":"Computerized Medical Device Management System In Luangprabang Provincial Hospital","authors":"Sengaloun Xayalath, N. Thongpance, Anantasak Wongkamhang, Anuchit Nirapai","doi":"10.1109/BMEiCON56653.2022.10012115","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10012115","url":null,"abstract":"This research aims to design and develop a Computerized Medical Device Maintenance Management System (CMDMS) for managing medical device systems of Luangprabang Provincial hospital, Lao People’s Democratic Republic. The process of designing and developing such CMDMS consists of 3 main components: 1) the structure design of the system using Visual Studio, 2) programming in PHP and 3) Creating a database and a recording system using My SQL. The results showed that it is designed to meet the specific needs of Luangprabang Provincial hospital, Lao People’s Democratic Republic. The CMDMS can actually be used in the medical device management system of the hospital, solve problems and facilitate, as well as reduce procedures and speed up the management of medical devices, as well as save budgets for the organization in accordance with the objectives of the design and construction and follows the Generic Clinical Engineering Maintenance Management System suggested by Association for the Advancement of Medical Instrumentation (AAMI).","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128307550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effect of visual cognition on the fear caused by pain recall
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10012108
Nina Itagaki, K. Iramina, Yutarou Nakada
In this study, we investigated ‘pain recall’, in which showing a painful image evokes pain without any painful stimulus actually being applied; the associated brain activity is said to be similar to that produced by actual pain. In the experiment, 12 students were shown three short videos in which a child, a woman, or a man was being injected. We measured the degree of emotional change while the participants watched the painful scenes in three ways: emotion estimation from facial expressions, galvanic skin response (GSR), and eye tracking. The results showed that the subjects felt the same fear and tension as when feeling pain. On the other hand, the subjects felt less pain-related emotion when they viewed the scene in which a man with muscular arms was injected. The degree of emotion in pain recall thus varied depending on who received the injection in the short videos. These results suggest that pain may be reduced by presenting certain body images as visual information, and that methods for reducing ‘pain recall’ could be applied to alleviate actual pain.
{"title":"The effect of visual cognition on the fear caused by pain recall","authors":"Nina Itagaki, K. Iramina, Yutarou Nakada","doi":"10.1109/BMEiCON56653.2022.10012108","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10012108","url":null,"abstract":"In this study, we investigated ‘pain recall’ that results from showing a painful image and evoking pain without actually giving any pain, that is said to be similar to the brain activity that actually causes pain. The experiment involved 12 students showing three short videos in which a child, a female, or a male was being injected. We measured the degree of emotional changes by watching the painful scene in three ways: emotion estimation by facial expression, GSR (Galvanic Skin Response) and Eye tracking. The results showed that subjects felt the same fear and tension as when feeling pain. On the other hand, subjects felt less painful emotions when they looked at the scene that a man with solid arms was injected. The degree of emotion in pain recall varied depending on who received the injection in the short videos. These results suggest that pain may be reduced by showing some body images as visual information. It is possible to alleviate actual pain by applying how to reduce ‘pain recall’.","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"751 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114001691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating Improvement on Feature Selection for Classification of Implicit Learning on EEG’s Multiscale Entropy Data using BMNABC
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10012069
Chayapol Chaiyanan, B. Kaewkamnerdpong
Those who are good at implicit learning can learn things faster and are more adaptable in the fast-paced information age. Implicit learning is learning that occurs without being explicitly taught; it is commonly seen in young children, who develop the ability to speak their native language without studying grammar. The human brain can be trained to learn better by training it to enter the state of learning more often. By using neurofeedback to regulate the brain state, educators and learners can help each other train the brain to become a better implicit learner. Our research aims to classify implicit learning events from EEG signals to help identify and moderate such states. This paper analyzed the feature selection process to improve classification performance. We used previously measured EEG signals recorded while participants performed cognitive task experiments, and extracted multiscale entropy features from those signals. Previously, the Artificial Bee Colony (ABC) algorithm was applied to the multiscale entropy features to classify implicit learning events with reasonable success. However, further optimization of the system was required because the features were selected in a binary search space. The Binary Multi-Neighborhood Artificial Bee Colony (BMNABC) algorithm was therefore chosen as an alternative. The comparison indicated that BMNABC increased the accuracy to as high as 90.57% and can be regarded as a promising method for identifying implicit learning events.
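For context, the sketch below computes the multiscale entropy features (coarse-graining followed by sample entropy) that feed the feature-selection step; it is a simplified illustration rather than the BMNABC selector itself, and the parameters m, r, and the number of scales are assumed defaults.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Simplified sample entropy; tolerance r is a fraction of the signal std."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for t in templates:
            dist = np.max(np.abs(templates - t), axis=1)  # Chebyshev distance
            count += np.sum(dist <= tol) - 1              # exclude self-match
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=20):
    """Multiscale entropy curve used as the EEG feature vector."""
    return np.array([sample_entropy(coarse_grain(x, s)) for s in range(1, max_scale + 1)])
```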
{"title":"Evaluating Improvement on Feature Selection for Classification of Implicit Learning on EEG’s Multiscale Entropy Data using BMNABC","authors":"Chayapol Chaiyanan, B. Kaewkamnerdpong","doi":"10.1109/BMEiCON56653.2022.10012069","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10012069","url":null,"abstract":"Those who are good at implicit learning can learn things faster and are more adaptable in the fast pace age of information. Implicit learning is a type of learning without being explicitly taught. It’s commonly seen in younger children when they develop their ability to speak their native language without learning grammar. The human brain can be trained to be good at learning by training the brain to be in the state of learning more often. By using neurofeedback to regulate the human brain state, educators and learners can help each other in training the brain to be better implicit learners. Our research aims to classify implicit learning events from EEG signals to help identify and moderate such states. This paper analyzed the feature selection process section to improve classification performance. We used previously measured participants' EEG signals while performing cognitive task experiments. Those signals were then getting feature extracted into Multiscale Entropy. Previously, Artificial Bee Colony (ABC) was used on the Multiscale Entropy to help classify the implicit learning events with reasonable success. However, an improvement was required to make the entire system more optimized due to how features being selected were in a binary search space. Binary Multi-Neighborhood Artificial Bee Colony (BMNABC) was chosen as an alternative. The comparison indicated that BMNABC increased the accuracy to as high as 90.57% and can be regarded as a promising method for identifying implicit learning events.","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132660325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reconstruction of 3D Abdominal Aorta Aneurysm from Computed Tomographic Angiography Using 3D U-Net Deep Learning Network
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10012097
Siriporn Kongrat, C. Pintavirooj, S. Tungjitkusolmun
(1) Background: An abdominal aortic aneurysm (AAA) is a swelling (aneurysm) of the aorta that occurs when the wall of the aorta weakens. An AAA is a potentially life-threatening condition, especially if it eventually ruptures, causing severe bleeding. (2) Methods: We developed an automated segmentation method for 3D AAA reconstruction from computed tomography angiography (CTA) based on the 3D U-Net deep learning network for AAA and AAA with thrombus, using a training dataset of 8 normal volumes, 14 aneurysm volumes, and 38 thrombus-aneurysm volumes. Data augmentations, i.e., scaling, random crop, grayscale variation, axial y-flip, and shear, were added to the training, achieving better performance. (3) Results: The results confirm that the proposed method provides accurate segmentation, with Dice similarity coefficient (DSC) scores of 0.9669 for training performance and 0.9868 for testing evaluation with the 3D U-Net model.
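The Dice similarity coefficient reported above can be computed from binary 3D segmentation masks as in this short sketch (variable names are illustrative).

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary 3D masks:
    DSC = 2 * |P intersect T| / (|P| + |T|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```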
{"title":"Reconstruction of 3D Abdominal Aorta Aneurysm from Computed Tomographic Angiography Using 3D U-Net Deep Learning Network","authors":"Siriporn Kongrat, C. Pintavirooj, S. Tungjitkusolmun","doi":"10.1109/BMEiCON56653.2022.10012097","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10012097","url":null,"abstract":"(1) Background: An abdominal aortic aneurysm (AAA) is a swelling (aneurysm) of the aorta that occurs when the wall of the aorta weakens. An AAA is a potentially life-threatening condition, especially if it eventually ruptures, causing severe bleeding. (2) Methods: We developed an automated segmentation method for 3D AAA reconstruction from computed tomography angiography (CTA) based on the 3D U-NET deep learning network approaches for AAA and AAA with thrombus on training dataset classified as 8 normal, 14 aneurysm volume, and 38 thrombus aneurysm volume with the data augmentations app, i.e., scaling, random crop, grayscale variation, axial y flip, and shear, were added to the training model, achieving better performance. (3) Results: The results confirm that the proposed method can provide accuracy in terms of the Dice Similar Coefficient (DSC) scores of 0.9669 for training performance and 0.9868 for testing evaluation with the 3D U-Net model.","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129002180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A radiopathomics model for prognosis prediction in patients with gastric cancer
Pub Date: 2022-11-10 | DOI: 10.1109/BMEiCON56653.2022.10012107
Yuanshen Zhao, Jingxian Duan, Zhicheng Li, N. Chai, Longsong Li
Predicting gastric cancer prognosis is imperative for more appropriate clinical treatment planning. Compared with a traditional radiomics model that uses CT images alone, radiopathomics is a novel medical image analysis strategy that combines radiomics features extracted from CT images with pathomics features extracted from pathological images to build a prediction model. In this paper, we developed a radiopathomics model to predict whether patients with gastric cancer survive more than 2 years. Using the LASSO algorithm, two pathomics features, one radiomics feature, and the clinical TNM variables were selected from a total of 1565 features to build the prediction model. To demonstrate the advantage of the radiopathomics model, we compared it with a radiomics-only model and a pathomics-only model. The results showed that the radiopathomics model achieved an AUC of 0.904 and an accuracy of 84.2%, significantly better than the other two models. This demonstrates that integrating microscopic-level and macroscopic-level phenotype information about the tumor can be useful for prognosis prediction.
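A minimal sketch of a radiopathomics-style pipeline is shown below, assuming precomputed radiomics and pathomics feature matrices and TNM stage as inputs (hypothetical variable names); it uses scikit-learn's L1-penalized logistic regression as the LASSO-type feature selector and reports a test AUC, rather than reproducing the authors' exact modeling steps.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def radiopathomics_model(radiomics, pathomics, tnm, survived_2y):
    """Concatenate radiomics, pathomics, and TNM features, select features
    with an L1 (LASSO-type) penalty, and report the test AUC.

    radiomics, pathomics : (n_patients, n_features) arrays
    tnm                  : (n_patients,) array of TNM stage codes
    survived_2y          : (n_patients,) binary labels (1 = survived > 2 years)
    """
    x = np.hstack([radiomics, pathomics, np.asarray(tnm).reshape(-1, 1)])
    x_train, x_test, y_train, y_test = train_test_split(
        x, survived_2y, test_size=0.3, stratify=survived_2y, random_state=0)

    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000),
    )
    model.fit(x_train, y_train)

    # Non-zero coefficients indicate the selected features.
    selected = np.flatnonzero(model.named_steps["logisticregression"].coef_)
    auc = roc_auc_score(y_test, model.predict_proba(x_test)[:, 1])
    return selected, auc
```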
{"title":"A radiopathomics model for prognosis prediction in patients with gastric cancer","authors":"Yuanshen Zhao, Jingxian Duan, Zhicheng Li, N. Chai, Longsong Li","doi":"10.1109/BMEiCON56653.2022.10012107","DOIUrl":"https://doi.org/10.1109/BMEiCON56653.2022.10012107","url":null,"abstract":"Predicting gastric cancer prognosis is imperative for more appropriate clinical treatment plans. Compared with traditional radiomics model adopting CT images alone, the radiopathomics is a novel medical image analysis strategy which employed the radiomcs features extracted from CT image and pathomics features extracted from pathological image to build a prediction model. In this paper, we developed a radiopathomics model to predict whether patients with gastric cancer survive more than 2 years. By using LASSO algorithm, two pathomics features, a radiomics feature and the clinical variables of TNM were selected from totally 1565 features to build the prediction model. For reflecting the advantage of the radiopathomics model, we implemented the comparison tests between the radiopathomics model with radiomics model and pathomics model. The results showed that the radiopathomics model achieved an AUC of 0.904 and an accuracy of 84.2%, which was significantly better than the other two models. It demonstrated that integrated of the microscopic level and macroscopic level phenotype information for tumor could be useful in prediction of prognosis.","PeriodicalId":177401,"journal":{"name":"2022 14th Biomedical Engineering International Conference (BMEiCON)","volume":"16 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132286834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}