Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118577
Jingcheng Ye, Yunjie Fang, Xingda Bao
To address the problem that existing network nodes cannot detect untrusted nodes quickly, this paper proposes a fast untrusted node detection model (FUNM) based on a conditional generative adversarial network (CGAN), which greatly improves detection efficiency while maintaining high accuracy. Unlike a traditional generative adversarial network (GAN), this model adds constraints that limit the freedom of convergence of the generator and discriminator, which speeds up convergence and allows untrusted nodes to be detected accurately and quickly. The experimental results show that the CGAN-based fast detection model of untrusted nodes has clear advantages in accuracy, false alarm rate, and real rate, which greatly helps the security of edge networks.
Title: Fast Detection Model of Untrusted Nodes in Fog Computing Based on CGAN
Published in: 2020 5th International Conference on Computer and Communication Systems (ICCCS)
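The abstract does not specify FUNM's architecture or the exact constraint it imposes; as a rough, hypothetical illustration of "limiting the freedom of convergence by adding constraints," here is a toy one-parameter GAN-style loop in which both players' parameters are clipped to a bound (the paper's actual constraint may differ entirely):

```python
import random

# Toy stand-ins for a generator/discriminator pair; the real FUNM networks
# are not described in the abstract, so everything here is an assumption.
CLIP = 0.5  # constraint bound limiting the parameters' freedom of movement

def train_step(g_w, d_w, lr=0.1):
    z = random.uniform(-1, 1)
    real = random.gauss(0.0, 1.0)      # "trusted node" feature sample
    fake = g_w * z                     # generator output
    d_w += lr * (real - fake)          # push D to favour real samples
    d_w = max(-CLIP, min(CLIP, d_w))   # constraint: clip D's parameter
    g_w += lr * d_w * z                # G follows D's (clipped) signal
    g_w = max(-CLIP, min(CLIP, g_w))   # constraint: clip G's parameter
    return g_w, d_w

g_w, d_w = 0.0, 0.0
for _ in range(200):
    g_w, d_w = train_step(g_w, d_w)
```

The clipping keeps both parameters inside a fixed interval throughout training, which is the general mechanism the abstract alludes to: restricting how far each player can wander slows divergence and speeds convergence.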
Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118557
Changdao Du, Iman Firmansyah, Y. Yamaguchi
Lattice Gas Cellular Automata (LGCA) simulations are typical High-Performance Computing (HPC) applications commonly used to simulate fluid flows. Owing to the computational locality and discretization of LGCA, these simulations can achieve high performance on parallel computing devices such as GPUs or multi-core CPUs. Many studies have also shown that state-of-the-art Field Programmable Gate Arrays (FPGAs) offer enormous parallel computing potential and power efficiency for high-performance computation. In this paper, we present an FPGA-based fluid simulation architecture for the LGCA method. Our design exploits both temporal and spatial parallelism inside the LGCA algorithm to scale up performance on the FPGA, and we propose an application-specific cache structure to overcome the memory-bandwidth bottleneck. Furthermore, our development process is based on the High-Level Synthesis (HLS) approach, which increases productivity. Experimental results on a Xilinx VCU1525 FPGA show that our design achieves 17130.2 Million Lattice Updates Per Second (MLUPS).
Title: High-Performance Computation of LGCA Fluid Dynamics on an FPGA-Based Platform
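The abstract does not give the paper's exact rule set or FPGA pipeline, but the core LGCA update, a local collision step followed by a streaming step, can be sketched with the classic HPP lattice gas (four boolean velocity channels per cell, head-on pairs scattering into the perpendicular pair), assuming periodic boundaries:

```python
import random

# Minimal HPP lattice-gas step (a simple LGCA variant; illustrative only).
N_, E_, S_, W_ = 0, 1, 2, 3  # channel indices: north, east, south, west

def collide(cell):
    # head-on pairs (N+S alone, or E+W alone) scatter 90 degrees
    if cell == [True, False, True, False]:
        return [False, True, False, True]
    if cell == [False, True, False, True]:
        return [True, False, True, False]
    return cell

def stream(grid):
    # move every particle one cell in its channel's direction (periodic)
    h, w = len(grid), len(grid[0])
    out = [[[False] * 4 for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            c = grid[y][x]
            if c[N_]: out[(y - 1) % h][x][N_] = True
            if c[S_]: out[(y + 1) % h][x][S_] = True
            if c[E_]: out[y][(x + 1) % w][E_] = True
            if c[W_]: out[y][(x - 1) % w][W_] = True
    return out

def step(grid):
    return stream([[collide(c) for c in row] for row in grid])

random.seed(0)
grid = [[[random.random() < 0.3 for _ in range(4)] for _ in range(8)]
        for _ in range(8)]
before = sum(c.count(True) for row in grid for c in row)
grid = step(grid)
after = sum(c.count(True) for row in grid for c in row)
```

Both phases touch only a cell and its four neighbours, which is the computational locality the abstract credits for LGCA's suitability for deep pipelining on FPGAs; particle count is conserved across a step.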
Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118422
Fahimeh Nezhadalinaei, Lei Zhang, R. Ghaemi, Faezeh Jamshidi
Cancer is considered one of the world's most serious illnesses. There are more than 100 types of cancer, and they impose a major national burden on many countries. MicroRNAs (miRNAs) are a class of small noncoding ribonucleic acids (RNAs) that play a crucial part in cancer tissue formation, and some miRNAs are differentially expressed between normal and cancerous tissue. It is therefore possible to diagnose cancer by analyzing an individual's miRNAs, though this is not an easy process because of the huge number of miRNAs. In this regard, the selection of informative miRNAs plays an important role in cancer diagnosis. The aim of this paper is to improve the performance of miRNA selection by applying different classification methods to representative miRNAs of the normal and cancer classes, determined based on FMIMS, and combining their results with our proposed approach, named Weighted Evidence Accumulation (W-EAC). The performance of this method is evaluated on the Gene Expression Omnibus (GEO) repository, using samples from pancreatic cancer, nasopharyngeal cancer, colorectal cancer, lung cancer, and melanoma.
Title: Data Classification and Weighted Evidence Accumulation to Detect Relevant Pathology
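The abstract names W-EAC but does not give its combination formula, so as a hedged sketch, here is a generic weight-based vote over several classifiers' outputs, the broad family of techniques W-EAC belongs to (the classifier names and weights are invented for illustration):

```python
# Generic weighted vote: each classifier's label contributes its weight
# (e.g. its validation accuracy); the highest-scoring label wins.
def weighted_vote(predictions, weights):
    """predictions: one class label per classifier; weights: one per classifier."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# three hypothetical classifiers (say SVM, RF, k-NN) voting on one sample
preds = ["cancer", "normal", "cancer"]
weights = [0.90, 0.70, 0.85]
combined = weighted_vote(preds, weights)
```

Weighting lets a strong classifier outvote two weak ones, which is the point of accumulating evidence non-uniformly rather than by plain majority.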
Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118498
Qinghua Li, Hailong Ma, Zhao Zhang, Chao Feng
With the development of medical technology, automatic cell analysis systems play an important role in medical diagnosis and medical image processing. Existing nucleus recognition techniques based on the support vector machine (SVM) classifier are mainly optimized from the perspective of the nucleus segmentation algorithm to improve the recognition accuracy of the SVM classifier. Unfortunately, nucleus-overlap processing cannot accurately separate nuclei from the gelled impurities introduced during staining, resulting in low SVM classification accuracy. To solve these image segmentation problems in nucleus image processing, an effective mask-based nucleus extraction method is proposed for the SVM classifier. Compared with related work, the proposed method achieves higher SVM cross-validation accuracy.
Title: An Effective Nuclear Extraction Mask Method for SVM Classification
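The paper's mask construction is not detailed in the abstract; as an assumption-laden sketch of the general idea, here is a binary mask built from a simple intensity threshold (stained nuclei appear dark) and applied to isolate nucleus pixels before any SVM feature extraction:

```python
# Illustrative masking only: the threshold rule and the toy patch below are
# assumptions, not the paper's method.
def make_mask(image, threshold):
    # 1 where the pixel is dark enough to count as nucleus, else 0
    return [[1 if px < threshold else 0 for px in row] for row in image]

def apply_mask(image, mask):
    # zero out everything outside the mask
    return [[px * m for px, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

# toy 3x3 grayscale patch: values < 100 stand in for stained nuclei
patch = [[ 40, 200, 210],
         [ 50,  60, 220],
         [230, 240,  70]]
mask = make_mask(patch, 100)
nuclei = apply_mask(patch, mask)
```

Features computed from `nuclei` would then exclude background and impurity pixels, which is how a mask step can raise downstream classifier accuracy.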
Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118509
Shuang Cao, Yulong Wang, Xiaoxiang Wang, Qi Li
Rapid assessment of disaster information such as the seismic intensity area and the affecting field direction after an earthquake is important for rescue. However, seismic equipment has limited coverage and needs a long time to assess disasters. Compared with seismic equipment, mobile phone base stations have wider coverage, higher density, and faster response to damage, so they can be used to assess earthquake disasters quickly. Existing methods take only damaged base stations into the calculation and treat them as identical, although base stations should contribute differently under different conditions. Our algorithm considers damaged and normal base stations together. To make full use of the information, we increase the sampling points, compute with a kernel density method, and propose the concept of a "damage ratio" to determine the weight of each point. Finally, the weighted standard deviation ellipse algorithm is used to obtain the seismic intensity area and the affecting field direction. A real earthquake case verifies that this method outperforms the traditional method.
Title: A Rapid Assessment Method for Seismic Intensity Area and Affecting Field Direction Using Mobile Phone Base Stations
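The core of a weighted standard deviation ellipse is a weighted mean centre plus weighted spread around it. A simplified, axis-aligned version (the full method also estimates the ellipse's rotation angle, omitted here) can be sketched as follows, with base-station coordinates and hypothetical "damage ratio" weights as the inputs:

```python
import math

# Simplified weighted standard deviation ellipse: weighted centre and
# weighted std-dev along x and y. Station positions and damage ratios
# below are invented for illustration.
def weighted_sde(points, weights):
    tw = sum(weights)
    cx = sum(w * x for (x, _), w in zip(points, weights)) / tw
    cy = sum(w * y for (_, y), w in zip(points, weights)) / tw
    sx = math.sqrt(sum(w * (x - cx) ** 2
                       for (x, _), w in zip(points, weights)) / tw)
    sy = math.sqrt(sum(w * (y - cy) ** 2
                       for (_, y), w in zip(points, weights)) / tw)
    return (cx, cy), (sx, sy)

# base stations at (x, y); weight = that station's damage ratio
stations = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
damage = [0.9, 0.9, 0.2]
centre, axes = weighted_sde(stations, damage)
```

Heavily damaged stations pull the centre and stretch the axes toward themselves, so the ellipse tracks the damage field rather than the raw station layout.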
Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118551
Dongyang Li, Guiying Tang, Li Zhao, Xiaoqin Zhang, X. Ye
Haze is a major degradation factor in outdoor images. Removing haze from a single image is an ill-posed problem, and the performance of existing prior-based dehazing methods is limited by the effectiveness of hand-designed features. In this paper, a new dehazing method is introduced that is refined using a gamma transformation and does not rely on the traditional atmospheric scattering model. The proposed method restores haze-free images without reference to a corresponding clear image and without estimating a depth-dependent transmission map. A novel, simple, and powerful Concentration Scale Prior (CSP) is then used for haze removal from a single hazy image to enhance the gamma transformation, and its performance is verified. Experimental results show that the proposed approach achieves superior dehazing performance compared with current state-of-the-art methods.
Title: Single Image Haze Removal Based on Concentration Scale Prior
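The CSP pipeline itself is not specified in the abstract, but the gamma transformation it is refined with is standard: each pixel is normalized, raised to a power, and rescaled. A minimal sketch for 8-bit values (the sample row is invented):

```python
# Per-pixel gamma transformation for 8-bit intensities.
def gamma_transform(pixels, gamma):
    return [round(255 * (p / 255) ** gamma) for p in pixels]

row = [0, 64, 128, 255]
brighter = gamma_transform(row, 0.5)  # gamma < 1 lifts dark regions
```

With gamma below 1 the curve lifts mid and low intensities while fixing 0 and 255, which is why it is useful for brightening detail that haze removal leaves too dark.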
Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118421
Xinyan Wang, Feng Gao, Jing Zhang, Xiaoke Feng, Xing Hu
As Internet of Things (IoT) technology develops rapidly [1], the complex process and privacy leakage involved in cross-domain authentication of power terminals have become obstacles to improving operational efficiency and user experience. To solve these problems, this paper proposes an identity authentication mechanism based on blockchain. By analyzing the power communication network, three types of processes are designed in detail: identity, in-domain authentication, and cross-domain authentication for terminals. Moreover, to address security issues among domains with different security levels during cross-certification, this paper evaluates identity security and establishes a cross-domain authentication credibility matrix. By optimizing the credibility matrix, identity levels for power terminals can be calculated more accurately. Finally, the proposed cross-domain authentication mechanism is evaluated and analyzed in terms of scenario, algorithmic soundness, scalability, and robustness.
Title: Cross-domain Authentication Mechanism for Power Terminals Based on Blockchain and Credibility Evaluation
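The abstract describes a credibility matrix without giving its entries or update rule, so the following is purely hypothetical: domain names, values, and the lookup rule are all assumptions meant only to show what "a matrix indexed by domain pairs that yields a terminal's identity level" could look like:

```python
# Hypothetical credibility matrix; none of these names or numbers come
# from the paper.
domains = ["grid_A", "grid_B", "grid_C"]
# cred[i][j]: how much domain i trusts identities issued by domain j
cred = [[1.0, 0.6, 0.3],
        [0.7, 1.0, 0.5],
        [0.4, 0.5, 1.0]]

def identity_level(home, visited):
    """Trust the visited domain grants a terminal from its home domain."""
    i, j = domains.index(visited), domains.index(home)
    return cred[i][j]

lvl = identity_level("grid_A", "grid_C")  # terminal from A roams into C
```

The paper's optimization step would then refine the off-diagonal entries so roaming terminals are graded more accurately than a flat accept/reject.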
Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118536
Dangdang Zheng, Bing Han, Geng Liu, R. Tong
Aircraft design involves complex, repetitive modeling work, which leads to heavy workloads and low efficiency. To improve the quality and efficiency of aircraft structure design, an automatic modeling system for the skeleton model of aircraft with complex surfaces is developed based on CATIA VBA. The system can generate the main structural skeleton model of the aircraft, create and output skeleton model properties, and automatically split the aircraft skin. In addition, an aircraft design template is proposed to formalize the standardization of aircraft structure design and the rapid modeling procedure. The system improves the efficiency and accuracy of aircraft structure design and shortens finite element pre-processing time.
Title: Automatic Modeling System for Skeleton Model of Aircraft with Complex Surfaces
Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118506
Chengli Hou
Human Activity Recognition (HAR), which stems from ubiquitous computing, has become an increasingly popular research area, and identifying activities of daily life has become more and more challenging. Many methods can be used to recognize human activities, such as Support Vector Machines (SVM) and Random Forests (RF), which are representatives of Traditional Machine Learning (TML), as well as Deep Learning (DL) methods such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). However, neither TML nor DL suits every situation and dataset, so we explore this question further. In this paper, we find that the size of a collected HAR dataset influences the relative effectiveness of traditional machine learning methods and deep learning architectures. We conduct experiments on two different datasets, USC-HAD and WISDM, with best accuracies of nearly 90% for DL and 87% for TML. From these experiments we draw a conclusion about the individual heterogeneity of HAR datasets: for small-scale HAR datasets, TML structures are more suitable, whereas for large datasets, DL approaches such as CNN and LSTM are the more sensible choice.
Title: A Study on IMU-Based Human Activity Recognition Using Deep Learning and Traditional Machine Learning
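A typical front end for either model family is sliding-window feature extraction over the raw IMU stream; the paper's take-away can then be phrased as a size-based model choice. In this sketch the window size, step, and threshold are illustrative assumptions, not values from the paper:

```python
import statistics

# Windowed mean/std features from a 1-D accelerometer stream (toy data).
def window_features(signal, size, step):
    feats = []
    for start in range(0, len(signal) - size + 1, step):
        win = signal[start:start + size]
        feats.append((statistics.mean(win), statistics.pstdev(win)))
    return feats

# Paper's conclusion as a rule of thumb; the threshold is an assumption.
def choose_family(n_samples, threshold=10_000):
    return "DL (CNN/LSTM)" if n_samples >= threshold else "TML (SVM/RF)"

accel_x = [0.1, 0.3, 0.2, 0.9, 1.1, 1.0, 0.2, 0.1]
feats = window_features(accel_x, size=4, step=2)
family = choose_family(len(feats))
```

With only a handful of windows the rule picks the TML branch, matching the paper's finding that small HAR datasets favour traditional classifiers.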
Pub Date: 2020-05-01 | DOI: 10.1109/ICCCS49078.2020.9118497
Xuhao Jiang, Yifei Xu, Pingping Wei, Zhuming Zhou
CT images are commonly used in clinical medical diagnosis. However, owing to factors such as hardware and scanning time, CT images in real scenes are limited in spatial resolution, so doctors cannot perform accurate disease analysis on tiny lesion areas and pathological features. Image super-resolution (SR) methods based on deep learning are a good way to solve this problem. Although many excellent networks have been proposed, they pay more attention to image quality metrics than to perceptual quality. Unlike networks that focus on image evaluation metrics, the super-resolution generative adversarial network (SRGAN) has achieved tremendous improvements in perceptual image quality. Building on this, this paper proposes a CT image super-resolution algorithm based on an improved SRGAN. To improve the visual quality of CT images, a dilated convolution module is introduced; to improve the overall visual effect, the mean structural similarity (MSSIM) loss is also added to the perceptual loss function. Experimental results on a public CT image dataset demonstrate that our model is better than the baseline SRGAN not only in mean opinion score (MOS) but also in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Title: CT Image Super Resolution Based On Improved SRGAN
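Of the metrics reported, PSNR has a closed form that is easy to show; for 8-bit images it is 10·log10(255² / MSE). A minimal sketch on two toy flattened "images" (the pixel values are invented):

```python
import math

# Standard PSNR for 8-bit images: higher is better, infinite for a
# perfect reconstruction.
def psnr(ref, test, peak=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

ref  = [52, 55, 61, 59]
test = [54, 55, 60, 59]
score = psnr(ref, test)
```

Because PSNR is a pure pixel-error measure, it can disagree with human judgment, which is why the paper also reports MOS and SSIM alongside it.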