Graph-based Regional Feature Enhancing for Abdominal Multi-Organ Segmentation in CT
Zefan Yang, Yi Wang
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00029
Automatic segmentation of abdominal organs in CT is of essential importance for radiation therapy and image-guided surgery. However, the development of such automatic solutions remains challenging due to complicated structures and low tissue contrast in abdominal CT images. To address these issues, we propose a novel deep neural network equipped with an edge detection (ED) module and a graph-based regional feature enhancing (GRFE) module for better organ segmentation, achieved by enhancing the long-range representation power of regional features. Specifically, the ED module learns an edge representation by leveraging both fine-grained and structural information. The edge representation is then fused with the segmentation features to provide constraint guidance for better prediction. The GRFE module propagates features to capture contextual information via graph-based voxel-by-voxel connections, and leverages the edge representation to highlight boundary features, building strong contextual dependencies between the features of organ boundaries and central areas. We evaluate the efficacy of the proposed network on two challenging abdominal multi-organ datasets. Experimental results demonstrate that our network outperforms several state-of-the-art methods. The code is publicly available at https://github.com/zefanyang/organseg_dags.
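The graph-based voxel-by-voxel propagation behind modules like GRFE can be illustrated with a minimal 2-D raster-scan sketch. This is not the authors' implementation: the paper operates on 3-D feature volumes with learned weights, while the `alpha` decay and the top/left two-neighbour scheme below are illustrative assumptions.

```python
import numpy as np

def dag_propagate(feat, alpha=0.5):
    """Propagate features along a raster-scan DAG: each position
    accumulates decayed context from its top and left neighbours,
    so a single pass gives every position a long-range receptive field."""
    H, W = feat.shape
    h = np.zeros_like(feat, dtype=float)
    for i in range(H):
        for j in range(W):
            ctx = 0.0
            if i > 0:
                ctx += h[i - 1, j]   # context from the row above
            if j > 0:
                ctx += h[i, j - 1]   # context from the column to the left
            h[i, j] = feat[i, j] + alpha * ctx / 2.0
    return h
```

A second pass in the opposite scan order (bottom-right to top-left) would make the context symmetric; stacking several such directed passes is the usual way DAG-based propagation approximates fully connected context at linear cost.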
Dual Fusion Mass Detector for Mammogram Mass Detection
Shuo Liu, Zhihui Lai, Heng Kong, Linlin Shen
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00033
Mammogram mass detection is a difficult task because masses often occupy tiny areas, have fuzzy boundaries, and may be occluded. To address these problems, this paper proposes a novel detection network for mammogram mass detection. Firstly, we propose a novel feature fusion structure and a Small Target Attention Module (STAM) to improve the model's ability to detect small masses. Secondly, a Results-oriented Loss (ROL) is adopted to obtain better model performance. Finally, Incremental Positive Selection (IPS) is used to divide positive and negative anchors. The scarcity of mammogram images for training aggravates the difficulty of mass detection; we therefore release our collected dataset, which contains 1456 mammogram images from 400 patients. Since the model includes a double feature fusion structure, the proposed network is named the Dual Fusion Mass Detector (DFMD). Experimental results show that DFMD is robust to variations in scale, blur, and occlusion.
Attention-driven Spatial Transformer Network for Abnormality Detection in Chest X-Ray Images
Joana Rocha, Sofia Cardoso Pereira, J. Pedrosa, A. Campilho, A. Mendonça
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00051
Backed by more powerful computational resources and optimized training routines, deep learning models have attained unprecedented performance in extracting information from chest X-ray data. Preceding other tasks, an automated abnormality detection stage can be useful to prioritize certain exams and enable a more efficient clinical workflow. However, the presence of image artifacts such as lettering often introduces a harmful bias in the classifier, leading to an increase in false-positive results. Consequently, health care would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise using an attention-driven and spatially unsupervised Spatial Transformer Network (STN). The results indicate that the STN achieves results similar to using YOLO-cropped images, with lower computational expense and without the need for localization labels. More specifically, the system is able to distinguish between normal and abnormal CheXpert images with a mean AUC of 84.22%.
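The core of a Spatial Transformer is a differentiable grid generator plus sampler that lets the network crop and warp its own input. Below is a minimal NumPy sketch of that core for a single-channel image; the attention-driven localization network that predicts `theta` in the paper is omitted, and the function names are our own.

```python
import numpy as np

def affine_grid(theta, H, W):
    """Grid generator: map each output pixel to normalized source
    coordinates in [-1, 1] using a 2x3 affine matrix theta."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)   # (H, W, 3)
    return coords @ theta.T                                  # (H, W, 2)

def bilinear_sample(img, grid):
    """Sampler: read img at the (fractional) grid coordinates with
    bilinear interpolation, the differentiable step of an STN."""
    H, W = img.shape
    x = (grid[..., 0] + 1) * (W - 1) / 2
    y = (grid[..., 1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x0 + 1]
            + wy * (1 - wx) * img[y0 + 1, x0]
            + wy * wx * img[y0 + 1, x0 + 1])

# Identity theta leaves the image unchanged; a scaled theta such as
# [[0.5, 0, 0], [0, 0.5, 0]] zooms into the central region of interest.
theta = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```

In a trained STN the localization network outputs `theta` per image, so the crop that removes lettering and other off-thorax artifacts is learned without localization labels.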
Investigating the Effectiveness of Color Coding in Multimodal Medical Imaging
G. Placidi, G. Castellano, F. Mignosi, M. Polsinelli, G. Vessio
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00054
In medical imaging, images quantify the interaction between electromagnetic waves and the body and are rendered in grey-scale. In addition, medical imaging often produces multimodal images. However, the analysis and interpretation of these images mostly occur in sequence or, as in the case of automatic tools, the modalities are simply concatenated as independent sources of information. In both cases, color perception and color contrast are not exploited. Color perception and color contrast play a crucial role in human vision, allowing objects to be recognized effectively and efficiently, and this can in principle extend to automatic systems. In this paper we show how color coding, particularly using color opponent models, can become an effective tool for preliminary color-based segmentation. Tests have been conducted on multimodal Magnetic Resonance Imaging (MRI) of the brain collected from a public database, and the results show the importance of color coding in medical imaging analysis.
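The abstract does not spell out the exact coding used, but a standard color-opponent transform gives the flavour: assign three co-registered modalities (e.g. T1, T2, FLAIR) to the R, G and B channels, then split the result into red-green, yellow-blue and luminance components. A minimal sketch, with the modality-to-channel assignment as our assumption:

```python
import numpy as np

def to_opponent(rgb):
    """Convert an RGB-coded multimodal stack into color-opponent
    channels: O1 (red-green), O2 (yellow-blue), O3 (luminance)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2)          # red-green opponency
    o2 = (r + g - 2.0 * b) / np.sqrt(6)  # yellow-blue opponency
    o3 = (r + g + b) / np.sqrt(3)      # achromatic luminance
    return np.stack([o1, o2, o3], axis=-1)
```

Voxels where the modalities agree collapse onto the luminance axis, while tissue that differs across modalities stands out in O1/O2, which is what makes the opponent space convenient for preliminary color-based segmentation.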
The Impact of General Data Protection Regulation on the Australasian Type-1 Diabetes Platform
Zhe Wang, A. Stell, R. Sinnott, ADDN Study Group
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00043
Australia has a high incidence of diabetes, with approximately 1.2 million Australians diagnosed with this condition. In 2012, the Juvenile Diabetes Research Foundation (JDRF - www.jdrf.org.au) provided funding to establish a national registry - the Australasian Diabetes Data Network (ADDN - www.addn.org.au) - populated with extensive longitudinal data on patients with Type-1 Diabetes (T1D). The ADDN registry has evolved over time and now includes data on over 20,000 patients from 22 paediatric centres and 11 adult centres across Australasia, with the data uploaded directly from hospitals rather than manually entered. This data has historically been de-identified at source; however, moving forward, there is increased demand from the clinical research community to link between datasets using fully identifying data. In this context, this paper explores the challenges this poses with regard to the evolving processes that must be incorporated for data collection and use, e.g. e-Consent, and especially the impact of the General Data Protection Regulation (GDPR) on ADDN processes.
Generic Concept for Integrating Voice Assistance Into Smart Therapeutic Interventions
Jens Scheible, Fabian Hofmann, M. Reichert, R. Pryss, Marc Schickler
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00017
Therapeutic Interventions (TIs) play an important role in modern medical and psychological treatments, but their integration into the digital world still shows deficits, e.g., in the integration of the auditory interface. Initiatives to integrate this interface into existing Internet- and Mobile-Based Interventions (IMIs) largely focus on a small group of Voice Assistants (VAs) and their specific capabilities. To mitigate these drawbacks, the presented concept seamlessly integrates arbitrary VAs into the treatment process of TIs. To this end, an architecture - including a discussion of relevant requirements - is presented that, on the one hand, uses VAs as the only point of contact with patients and, on the other hand, provides a comprehensive web-based backend for Healthcare Providers (HCPs). Based on this architecture, a proof-of-concept implementation using Amazon Alexa is presented. Finally, it is discussed that the addressed scenario and the presented solution have great potential, but still require substantial work and technical consideration.
Textural features for automatic detection and categorisation of pneumonia in chest X-ray images
César Antonio Ortiz Toro, Á. García-Pedrero, M. Lillo-Saavedra, C. Gonzalo-Martín
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00011
Pneumonia is an acute lung infection caused by a variety of organisms, such as viruses, bacteria, or fungi, that poses a serious risk to vulnerable populations. The first step in the diagnosis and treatment of pneumonia is a prompt and accurate diagnosis, especially in the context of an epidemic outbreak such as COVID-19, where pneumonia is an important symptom. To provide tools for this purpose, this article evaluates the potential of three textural image characterisation methods (fractal dimension, radiomics, and superpixel-based histon) as biomarkers, both to distinguish between healthy individuals and patients affected by pneumonia and to differentiate between potential causes of pneumonia. The results show the ability of the tested textural characterisation methods to discriminate between non-pathological images and images with pneumonia, and show that some of the generated models have the potential to characterise the general textural patterns that define viral and bacterial pneumonia, as well as the specific features associated with a COVID-19 infection.
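Of the three descriptors, the fractal dimension is the easiest to sketch. Below is a minimal box-counting estimator for a binary (e.g. thresholded lung-field) image, assuming a square side that is a power of two; the paper's actual feature-extraction pipeline is not specified in the abstract.

```python
import numpy as np

def box_counting_dimension(img_bin):
    """Estimate the box-counting (fractal) dimension of a binary image:
    count non-empty s x s boxes at dyadic scales and fit the slope of
    log N(s) versus log s, whose negative approximates D."""
    n = img_bin.shape[0]                       # square image, side 2**k
    sizes = 2 ** np.arange(1, int(np.log2(n)))
    counts = []
    for s in sizes:
        # collapse the image into (n//s, n//s) blocks of side s and
        # mark each block occupied if it contains any foreground pixel
        blocks = img_bin.reshape(n // s, s, n // s, s).max(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A completely filled image yields dimension 2 and an isolated point yields 0; textures of interest, such as parenchymal opacities, fall in between.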
The value of compression for taxonomic identification
Jorge Miguel Silva, João Rafael Almeida
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00055
Advances in DNA sequencing technologies have led to an unprecedented growth of sequenced data. However, when sequencing de-novo genomes, one of the biggest challenges is the classification of DNA sequences that do not match any biological sequence from the literature. The use of compressor-supported, reference-free methods to identify these organisms is one strategy for taxonomic identification. However, given the large number of compressors available and the computational resources required to run them, selecting the best compressors for classification under limited computational resources is a problem. In this paper, we present a two-step pipeline to analyze nine compressors and understand which ones could be the best candidates for taxonomic identification. We use 500 randomly selected sequences from five taxonomic groups to conduct this analysis. The results show that, besides being an excellent representative feature, the Normalized Compression (NC) reflects, depending on the compressor, different aspects concerning the nature of a given sequence and its complexity. Furthermore, we show that neither the compression capability of a compressor nor the compressibility of the file correlates with classification accuracy. The code used in this work is publicly available at https://github.com/bioinformatics-ua/COMPACT.
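The Normalized Compression referred to above divides the compressed size of a sequence by its maximal information content, |x| * log2(|alphabet|) bits. A minimal sketch using gzip as a stand-in compressor (the paper benchmarks nine compressors, typically specialized DNA compressors rather than gzip):

```python
import gzip
import math

def normalized_compression(seq: bytes, alphabet_size: int = 4) -> float:
    """NC(x) = C(x) / (|x| * log2(|alphabet|)): compressed bits divided
    by the maximal number of bits needed for the raw sequence."""
    compressed_bits = len(gzip.compress(seq)) * 8
    max_bits = len(seq) * math.log2(alphabet_size)
    return compressed_bits / max_bits

# A highly repetitive DNA-like sequence compresses well (NC far below 1);
# a random sequence stays near or above 1 due to compressor overhead.
nc_repetitive = normalized_compression(b"ACGT" * 500)
```

Because NC is normalized by sequence length and alphabet size, values are comparable across sequences, which is what makes it usable as a feature for reference-free classification.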
Visualising Time-evolving Semantic Biomedical Data
Arnaldo Pereira, João Rafael Almeida, Rui Pedro Lopes, J. L. Oliveira
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00053
Today, medical studies enable a deeper understanding of health conditions, diseases and treatments, helping to improve medical care services. In observational studies, an adequate selection of datasets is important to ensure the study's success and the quality of the results obtained. During the feasibility study phase, inclusion and exclusion criteria are defined, together with the specific database characteristics used to construct the cohort. However, it is not easy to compare database characteristics and their evolution over time during this selection. Data comparisons can be made using data properties and aggregations, but including temporal information is more complex due to the continuous evolution of concepts over time. In this paper, we propose two visualisation methods aiming for a better description of data evolution in clinical registries using standard biomedical vocabularies.
Magnitude-image based data-consistent deep learning method for MRI super resolution
Ziyan Lin, Zihao Chen
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00060
Magnetic Resonance Imaging (MRI) is important in the clinic for producing high-resolution images for diagnosis, but acquiring high-resolution images requires long scan times. Deep-learning-based MRI super-resolution methods can reduce scan time without complicated sequence programming, but may create additional artifacts due to the discrepancy between training data and testing data. A data consistency layer can improve deep learning results but needs raw k-space data. In this work, we propose a magnitude-image based data consistency deep learning MRI super-resolution method to improve the quality of super-resolution images without raw k-space data. Our experiments show that the proposed method improves the NRMSE and SSIM of super-resolution images compared to the same Convolutional Neural Network (CNN) block without the data consistency module.
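The idea of magnitude-image data consistency can be sketched without the CNN: transform the network's super-resolved magnitude image to k-space and overwrite its central (low-frequency) region with the k-space of the acquired low-resolution magnitude image, which is always available even when raw k-space was never stored. This NumPy sketch assumes even image sizes and single-channel magnitude images; the paper's exact consistency operator may differ.

```python
import numpy as np

def magnitude_data_consistency(sr_img, lr_img):
    """Enforce consistency between a super-resolved magnitude image
    (H, W) and the acquired low-resolution magnitude image (h, w) by
    replacing the central k-space of sr_img with that of lr_img."""
    H, W = sr_img.shape
    h, w = lr_img.shape
    k_sr = np.fft.fftshift(np.fft.fft2(sr_img))
    k_lr = np.fft.fftshift(np.fft.fft2(lr_img))
    k_lr = k_lr * (H * W) / (h * w)      # rescale for the larger FFT grid
    top, left = (H - h) // 2, (W - w) // 2
    k_sr[top:top + h, left:left + w] = k_lr
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_sr)))
```

Because only magnitude images are needed, the correction applies to reconstructed DICOM outputs; the trade-off is that phase information is ignored.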