Pub Date: 2023-01-01 | DOI: 10.1109/CBMS58004.2023.00254
Austin Ryan English
{"title":"Automated Design of Task-Dedicated Illumination with Particle Swarm Optimization","authors":"Austin Ryan English","doi":"10.1109/CBMS58004.2023.00254","DOIUrl":"https://doi.org/10.1109/CBMS58004.2023.00254","url":null,"abstract":"","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"9 1","pages":"416-421"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78809764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video capsule endoscopy is a hot topic in computer vision and medicine. Deep learning can have a positive impact on the future of video capsule endoscopy technology: it can improve the anomaly detection rate, reduce physicians' screening time, and aid in real-world clinical analysis. Computer-aided diagnosis (CADx) classification systems for video capsule endoscopy have shown great promise for further improvement. For example, detection of cancerous polyps and bleeding can lead to a swift medical response and improve patients' survival rates. To this end, an automated CADx system must have high throughput and decent accuracy. In this study, we propose FocalConvNet, a focal modulation network integrated with lightweight convolutional layers for the classification of small bowel anatomical landmarks and luminal findings. FocalConvNet leverages focal modulation to attain global context and allows global-local spatial interactions throughout the forward pass. Moreover, the convolutional block, with its intrinsic inductive/learning bias and capacity to extract hierarchical features, allows FocalConvNet to achieve favourable results with high throughput. We compare FocalConvNet with other state-of-the-art (SOTA) methods on Kvasir-Capsule, a large-scale VCE dataset with 44,228 frames and 13 classes of different anomalies. We achieved a weighted F1-score, recall and Matthews correlation coefficient (MCC) of 0.6734, 0.6373 and 0.2974, respectively, outperforming SOTA methodologies. Further, we obtained the highest throughput of 148.02 images/second, establishing the potential of FocalConvNet in a real-time clinical environment. The code of the proposed FocalConvNet is available at https://github.com/NoviceMAn-prog/FocalConvNet.
{"title":"Video Capsule Endoscopy Classification using Focal Modulation Guided Convolutional Neural Network.","authors":"Abhishek Srivastava, Nikhil Kumar Tomar, Ulas Bagci, Debesh Jha","doi":"10.1109/CBMS55023.2022.00064","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00064","url":null,"abstract":"<p><p>Video capsule endoscopy is a hot topic in computer vision and medicine. Deep learning can have a positive impact on the future of video capsule endoscopy technology. It can improve the anomaly detection rate, reduce physicians' time for screening, and aid in real-world clinical analysis. Computer-Aided diagnosis (CADx) classification system for video capsule endoscopy has shown a great promise for further improvement. For example, detection of cancerous polyp and bleeding can lead to swift medical response and improve the survival rate of the patients. To this end, an automated CADx system must have high throughput and decent accuracy. In this study, we propose <i>FocalConvNet</i>, a focal modulation network integrated with lightweight convolutional layers for the classification of small bowel anatomical landmarks and luminal findings. FocalConvNet leverages focal modulation to attain global context and allows global-local spatial interactions throughout the forward pass. Moreover, the convolutional block with its intrinsic inductive/learning bias and capacity to extract hierarchical features allows our FocalConvNet to achieve favourable results with high throughput. We compare our FocalConvNet with other state-of-the-art (SOTA) on Kvasir-Capsule, a large-scale VCE dataset with 44,228 frames with 13 classes of different anomalies. We achieved the weighted F1-score, recall and Matthews correlation coefficient (MCC) of 0.6734, 0.6373 and 0.2974, respectively, outperforming SOTA methodologies. Further, we obtained the highest throughput of 148.02 images/second rate to establish the potential of FocalConvNet in a real-time clinical environment. The code of the proposed FocalConvNet is available at https://github.com/NoviceMAn-prog/FocalConvNet.</p>","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"2022 ","pages":"323-328"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9914988/pdf/nihms-1871537.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10708367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The detection and removal of precancerous polyps through colonoscopy is the primary technique for the prevention of colorectal cancer worldwide. However, the miss rate of colorectal polyps varies significantly among endoscopists. It is well known that a computer-aided diagnosis (CAD) system can assist endoscopists in detecting colon polyps and minimize the variation among them. In this study, we introduce a novel deep learning architecture, named MKDCNet, for automatic polyp segmentation that is robust to significant changes in polyp data distribution. MKDCNet is simply an encoder-decoder neural network that uses a pre-trained ResNet50 as the encoder and a novel multiple kernel dilated convolution (MKDC) block that expands the field of view to learn more robust and heterogeneous representations. Extensive experiments on four publicly available polyp datasets and a cell nuclei dataset show that the proposed MKDCNet outperforms state-of-the-art methods when trained and tested on the same dataset, as well as when tested on unseen polyp datasets from different distributions. These results demonstrate the robustness of the proposed architecture. From an efficiency perspective, our algorithm can process approximately 45 frames per second on an RTX 3090 GPU. MKDCNet can be a strong benchmark for building real-time systems for clinical colonoscopies. The code of the proposed MKDCNet is available at https://github.com/nikhilroxtomar/MKDCNet.
{"title":"Automatic Polyp Segmentation with Multiple Kernel Dilated Convolution Network.","authors":"Nikhil Kumar Tomar, Abhishek Srivastava, Ulas Bagci, Debesh Jha","doi":"10.1109/CBMS55023.2022.00063","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00063","url":null,"abstract":"<p><p>The detection and removal of precancerous polyps through colonoscopy is the primary technique for the prevention of colorectal cancer worldwide. However, the miss rate of colorectal polyp varies significantly among the endoscopists. It is well known that a computer-aided diagnosis (CAD) system can assist endoscopists in detecting colon polyps and minimize the variation among endoscopists. In this study, we introduce a novel deep learning architecture, named MKDCNet, for automatic polyp segmentation robust to significant changes in polyp data distribution. MKDCNet is simply an encoder-decoder neural network that uses the pre-trained <i>ResNet50</i> as the encoder and novel <i>multiple kernel dilated convolution (MKDC)</i> block that expands the field of view to learn more robust and heterogeneous representation. Extensive experiments on four publicly available polyp datasets and cell nuclei dataset show that the proposed MKDCNet outperforms the state-of-the-art methods when trained and tested on the same dataset as well when tested on unseen polyp datasets from different distributions. With rich results, we demonstrated the robustness of the proposed architecture. From an efficiency perspective, our algorithm can process at (<i>≈</i> 45) frames per second on RTX 3090 GPU. MKDCNet can be a strong benchmark for building real-time systems for clinical colonoscopies. The code of the proposed MKDCNet is available at https://github.com/nikhilroxtomar/MKDCNet.</p>","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"2022 ","pages":"317-322"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9921313/pdf/nihms-1871530.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10708366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/CBMS49503.2020.00052
I. Moura, Francisco Silva, L. Coutinho, A. Teles
Traditionally, the process of monitoring and evaluating social behavior related to mental health has been based on self-reported information, which is limited by the subjective character of responses and by various cognitive biases. Today, however, computational methods can use ubiquitous devices to monitor social behaviors related to mental health rather than relying on self-reports. These technologies can therefore be used to identify the routine of social activities, which enables the recognition of abnormal behaviors that may be indicative of mental disorders. In this paper, we present a solution for detecting context-enriched sociability patterns. Specifically, we introduce an algorithm capable of recognizing the social routine of monitored people. The proposed algorithm is implemented as a set of Complex Event Processing (CEP) rules, which allow continuous processing of the social data stream derived from ubiquitous devices. The experiments performed indicate that the proposed solution detects sociability patterns similar to those found by a batch algorithm and demonstrate that context-based recognition provides a better understanding of social routine.
{"title":"Mental Health Ubiquitous Monitoring: Detecting Context-Enriched Sociability Patterns Through Complex Event Processing","authors":"I. Moura, Francisco Silva, L. Coutinho, A. Teles","doi":"10.1109/CBMS49503.2020.00052","DOIUrl":"https://doi.org/10.1109/CBMS49503.2020.00052","url":null,"abstract":"Traditionally, the process of monitoring and evaluating social behavior related to mental health has based on self-reported information, which is limited by the subjective character of responses and by various cognitive biases. Today, however, computational methods can use ubiquitous devices to monitor social behaviors related to mental health rather than relying on self-reports. Therefore, these technologies can be used to identify the routine of social activities, which enables the recognition of abnormal behaviors that may be indicative of mental disorders. In this paper, we present a solution for detecting context-enriched sociability patterns. Specifically, we introduced an algorithm capable of recognizing the social routine of monitored people. To implement the proposed algorithm, it was used a set of Complex Event Processing (CEP) rules, which allow the continuous processing of the social data stream derived from ubiquitous devices. The experiments performed indicated that the proposed solution is capable of detecting sociability patterns similar to a batch algorithm and demonstrated that context-based recognition provides a better understanding of social routine.","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"19 1","pages":"239-244"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74119568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/CBMS49503.2020.00011
Edson A. G. Coutinho, B. Carvalho
Remote visualization of medical data is a very attractive way to increase mobility, allowing volumetric data to be accessed even on devices with low processing capability. However, the number of simultaneous accesses and the available bandwidth are natural bottlenecks for any solution in this field. This paper presents a methodology for evaluating 3D volumetric rendering client-server systems, with the goal of determining the maximum load of a specific system based on Quality of Service (QoS). With such input in mind, a system architect could design systems with a better cost-benefit ratio, or even design a cloud system that predicts demand and rents servers based on the number of service requests. To check the viability of the methodology, a stress test was conducted on a client-server system developed to visualize Computed Tomography (CT) scans. Results show that it could handle at least 20 simultaneous remote visualizations, even in scenarios with low bandwidth, reaching its upper limit at around 30 simultaneous visualizations.
{"title":"Evaluation of Real-Time Remote 3D Rendering of Medical Images using GPUs","authors":"Edson A. G. Coutinho, B. Carvalho","doi":"10.1109/CBMS49503.2020.00011","DOIUrl":"https://doi.org/10.1109/CBMS49503.2020.00011","url":null,"abstract":"Remote visualization of medical data is a very attractive alternative to increased mobility, allowing volumetric data to be accessed even in devices with low processing capability. However, the amount of simultaneous accesses and the bandwidth available are natural bottlenecks for any solution in this field. This paper presents a methodology to evaluate 3D volumetric rendering client-servers systems with the goal of determining the maximum load of a specific system based on Quality of Service (QoS). With such input in mind, a system architect could project systems with better cost-benefit ratio, or even design a cloud system that predicts and rents servers based on the number of service requests. In order to check the viability of the methodology, a stress test was conducted in a client-server system developed to visualize Computed Tomography (CT) scans. Results have shown that it could handle at least 20 simultaneous remote visualizations, even in scenarios with low bandwidth, finding its upper limit when dealing with around 30 simultaneous visualizations.","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"102 1","pages":"19-24"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79143955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/CBMS49503.2020.00033
L. W. Oliveira, S. T. Carvalho
This work investigates how gamification is used in self-care applications. Evidence in the literature indicates that the development of gamified mobile health applications has not taken the user's profile into account when choosing the game elements used in the solution; there are also cases in which the use of gamification goes beyond the main purpose of the application, which is to support the user's health. This results in an inefficient use of the gamification strategy. To overcome this problem, this paper presents a gamification-based framework, called Framework L, which incorporates concepts and practices along two dimensions, Self-Care and Gamification, so that a mobile health application developer can design their application. In this context, adaptive gamification experiments were carried out in different ways. The first aims to improve the user experience through a manual test of the player profile. The second experiment uses machine learning to classify the user by player profile. These aspects make up the adaptive gamification cycle. The framework was evaluated with a mixed method composed of a questionnaire and an online interview with experts. The results indicate that the framework helps developers build mobile health applications, primarily by encouraging user engagement.
{"title":"A Gamification-Based Framework for mHealth Developers in the Context of Self-Care","authors":"L. W. Oliveira, S. T. Carvalho","doi":"10.1109/CBMS49503.2020.00033","DOIUrl":"https://doi.org/10.1109/CBMS49503.2020.00033","url":null,"abstract":"This work investigates how gamification is used in self-care applications. Evidence in the literature indicates that the development of gamified mobile health applications has not taken into account the user's profile in order to correctly use the game elements in the solution; there also cases in which the use of gamification goes beyond the main purpose of the application, which is to treat health. This results in inefficiency in the use of the gamification strategy. To overcome this problem, this paper presents a gamification-based Framework, called Framework L, a method which incorporates concepts and practices in terms of two dimensions, Self-Care and Gamification, so that an mobile health application developer can design his application. In this context, adaptive gamification experiments were carried out in different ways. The first aims to improve the user experience when performing a manual test for the player profile. The second experiment uses machine learning to classify the user by player profile. These aspects make up the adaptive gamification cycle. The framework evaluation used the mixed method composed of a questionnaire and an online interview with experts. The results indicate that the framework helps developers marshal mobile health applications, primarily by encouraging user engagement.","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"1 1","pages":"138-141"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73514452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite the major findings made by studying the behaviour of diseases, there are currently many diseases with no cure or treatment, for which only some symptoms can be alleviated. Understanding how diseases behave requires complex analysis; new technologies give researchers greater computational and observational capabilities, as well as novel approaches for observing how diseases behave and relate to one another in different environments and under distinct factors. This research aims to find new ways of characterizing diseases based on their phenotypic manifestations, using knowledge extraction techniques applied to public sources. Characterizing diseases in this way yields a better understanding of the diseases and of how similar they are to one another, which can lead, for example, to finding new drugs that can be applied to different diseases. To carry out the present research we used our own dataset of symptoms and diseases, built with an approach that generates phenotypic knowledge by extracting medical information from several data sources.
{"title":"Characterization of Diseases Based on Phenotypic Information Through Knowledge Extraction using Public Sources","authors":"Gerardo Lagunes García, A. R. González","doi":"10.1109/CBMS.2019.00124","DOIUrl":"https://doi.org/10.1109/CBMS.2019.00124","url":null,"abstract":"Despite the huge findings made by the study of the behaviour of diseases, there are currently many non-cure or non-treatment diseases and only some of their symptoms can be beaten. Understanding how the diseases behave implies a complex analysis that together with the new technologies provide researchers with more calculation and observational capabilities, as well as novel approaches that allow us to observe how the diseases behave and relate in different environments with distinct factors. Current research aims to find new ways of characterizing the diseases based on phenotypic manifestations using knowledge extraction techniques from public sources. With the characterization of the diseases, a better understanding about the diseases and how similar they are can be achieved, leading for example to find new drugs that can be applied to different diseases. In order to carry out the present research we have made use of our own dataset of symptoms and diseases developed using an approach that allows us to generate phenotypic knowledge from the extraction of medical information from several data sources.","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"16 1","pages":"596-599"},"PeriodicalIF":0.0,"publicationDate":"2019-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84403314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gilvan Veras Magalhães Júnior, João Paulo Albuquerque Vieira, Roney L. S. Santos, J. L. N. Barbosa, P. S. Neto, R. Moura
In Brazil, a current health problem is the limited capacity to meet an increasing demand for medical services. As a result, some people have resorted to supplementary health care, which involves private health plans and health insurance. However, many health maintenance organizations (HMOs) face financial difficulties due to unnecessary procedures, fraud or abuse in the use of health services. To avoid unnecessary expenses, HMOs began to use a mechanism called prior authorization, in which each user's need is analyzed in advance in order to authorize or deny the requested procedures. This work studies the influence of textual features on automatic prior authorization evaluation, using text mining, natural language processing and machine learning techniques. Experiments were performed using several machine learning algorithms combined with textual features, increasing the performance of automatic prior authorization. The results indicate that textual features not only influence the evaluation of the automatic prior authorization process but also improve the classifiers' predictions.
{"title":"A Study of the Influence of Textual Features in Learning Medical Prior Authorization","authors":"Gilvan Veras Magalhães Júnior, João Paulo Albuquerque Vieira, Roney L. S. Santos, J. L. N. Barbosa, P. S. Neto, R. Moura","doi":"10.1109/CBMS.2019.00021","DOIUrl":"https://doi.org/10.1109/CBMS.2019.00021","url":null,"abstract":"In Brazil, a current health problem is the low capacity of meeting an increasing demand for medical services. As a result, some people have resorted to supplementary health care, which involves the operation of private health plans and health insurance. However, many health maintenance organizations (HMO) face financial difficulties due to unnecessary procedures, fraud or abuses in the use of health services. In order to avoid unnecessary expenses, the HMO began to use a mechanism called prior authorization, where a prior analysis of each user's need is made to authorize or deny the required requests. This work aims to study the influence of the use of textual features in automatic prior authorization evaluation, by using Text Mining, Natural Language Processing and Machine Learning techniques. Experiments were performed using several machine learning algorithms combined with textual features, increasing the performance of the automatic prior authorization. Results indicate not only the textual features influence to the evaluation of the automatic prior authorization process but also improved the prediction of the classifiers.","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"451 2","pages":"56-61"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/CBMS.2019.00021","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72457359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jefferson Henrique Camelo Soares, J. L. N. Barbosa, L. A. Lopes, Gilvan Veras Magalhães Júnior, R. Rabêlo, E. Passos, P. S. Neto
In a health plan, beneficiaries can cancel their contracts at any time. For that reason, Health Insurance/Plan Providers (HIPs) need to avoid optional contract cancellations to keep their financial operations stable. This work's main purpose is to develop an approach to predict optional contract cancellations in a private HIP and help prevent them.
{"title":"How to Avoid Customer Churn in Health Insurance/Plans? A Machine Learn Approach","authors":"Jefferson Henrique Camelo Soares, J. L. N. Barbosa, L. A. Lopes, Gilvan Veras Magalhães Júnior, R. Rabêlo, E. Passos, P. S. Neto","doi":"10.1109/CBMS.2019.00115","DOIUrl":"https://doi.org/10.1109/CBMS.2019.00115","url":null,"abstract":"In a Health Plan, beneficiaries can cancel their contracts at any given time. For that reason, Health Insurance/Plan Providers (HIP) need to avoid optional contract cancellations to keep their financial operations stable. This work's main purpose is to develop an approach to predict the optional contract cancellation in a Private HIP and help them to prevent those cancelations.","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"3 1","pages":"559-562"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83793528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I. M. Soriano, J. Castro, J. Fernández-breis, I. S. Román, A. A. Barriuso, David Guevara Baraza
Hospital Information Systems (HIS) use Electronic Health Records to store heterogeneous patient data. One important goal in this kind of system is that the information must be normalized and codified with a clinical terminology so that it represents the healthcare meaning exactly. Usually this process requires human experts to identify and map the correct concept, which is a slow and tedious task. One of the most widespread and promising clinical terminologies is SNOMED CT, a multilingual, ontology-based clinical terminology that represents each clinical concept with a unique code. In this paper we introduce Snomed2Vec, a new semantic-search approach for finding the most similar SNOMED CT concepts. It is an ontology-based named entity recognition system that uses word embeddings to suggest the concept most similar to the terms appearing in a text. To evaluate the tool we propose two kinds of validation: one against a gold corpus of diagnoses from clinical reports, and a social validation through free public web access. We provide web access so that the academic community can use, test and validate the tool. The validation results show that this process helps specialists choose the correct concepts from SNOMED CT. The paper illustrates 1) how to create the initial large corpus of texts used to train the word2vec models, 2) how we use this vector space model to create our final Snomed2Vec vector space model, and 3) the use of cosine similarity to obtain the most similar concepts, grouped by the SNOMED CT hierarchies. We publish, at https://github.com/NachusS/Snomed2Vec, access to the public web tool and the notebook used to develop and test this work.
{"title":"Snomed2Vec: Representation of SNOMED CT Terms with Word2Vec","authors":"I. M. Soriano, J. Castro, J. Fernández-breis, I. S. Román, A. A. Barriuso, David Guevara Baraza","doi":"10.1109/CBMS.2019.00138","DOIUrl":"https://doi.org/10.1109/CBMS.2019.00138","url":null,"abstract":"Hospital Information Systems (H.I.S) use Electronic Health Record to store heterogeneous data from the patients. One important goal in this kind of systems is that the information must be, normalized and codify with a clinical terminology to represent exactly the healthcare meaning. Usually this process need human experts to identify and map the correct concept, this is a slow and tedious task. One of the most widespread clinical terminologies with more projection is Snomed-CT. This is an ontology multilingual clinical terminology that represent the clinical concepts with a unique code. We introduce in this paper Snomed2Vec, new approach of semantic search tool to find the most similar concepts using Snomed-CT. This is an ontology based named entity recognition system using word embedding, that suggest what is the most similar concept, that appear in a text. To evaluate the tool we suggest two kind of validations, one against a corpus gold with diagnostic from clinical reports, and a social validation, with a public free web access. We publish an access web to the academic world to use, test and validate the tool. The results of validation shows that this process help to the specialist to the election of choose the correct concepts from Snomed-CT. The paper illustrates 1) how create the initial big corpus of texts, to train the word2vec models, 2) how we use this vector space model to create our final Snomed2Vec vector space model, 3) The use of the cosine similarity distance, to obtain the most similar concepts, grouping by the hierarchies from Snomed-CT. We publish to the academic world: https://github.com/NachusS/Snomed2Vec access to the public web tool, and the notebook, for develop and test this paper.","PeriodicalId":74567,"journal":{"name":"Proceedings. IEEE International Symposium on Computer-Based Medical Systems","volume":"27 1","pages":"678-683"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75239892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}