Li Liu, Yuanhua Wang, Dongdong Wu, Yongping Zhai, L. Tan, Jingjing Xiao
This paper presents a multitask learning network for pathomorphology recognition of squamous intraepithelial lesions in the ThinPrep Cytologic Test. Detecting pathological cells is quite a challenging task due to large variations in cell appearance and the subtle, hard-to-distinguish changes in pathological cells. In addition, the high resolution of scanned cell images places a further demand on the efficiency of the detection algorithm. We therefore propose a multi-task learning network that aims to keep a good balance between performance and computational efficiency. First, we transfer knowledge from a pre-trained VGG16 network to extract low-level features, which alleviates the problem caused by the small amount of training data. Then, potential regions of interest are generated by our proposed task-oriented anchor network. Finally, a fully convolutional network is applied to accurately estimate the positions of the cells and classify their corresponding labels. To demonstrate the effectiveness of the proposed method, we constructed a dataset that was cross-verified by two pathologists. In our tests, we compare our method to state-of-the-art detection algorithms, i.e., YOLO [1] and Faster R-CNN [2], both re-trained on our dataset. The results show that our method achieves the best detection accuracy with high computational efficiency, taking only half the time of Faster R-CNN.
Title: "Multitask Learning for Pathomorphology Recognition of Squamous Intraepithelial Lesion in Thinprep Cytologic Test". DOI: 10.1145/3285996.3286013. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
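The abstract does not detail how the task-oriented anchor network labels its candidate regions, but region-proposal stages of this kind conventionally match anchors to ground-truth boxes by intersection-over-union (IoU). A minimal sketch of that standard matching step, with illustrative thresholds (the paper's actual criteria and values are assumptions here):

```python
# Hypothetical sketch of IoU-based anchor labeling, the standard matching
# criterion used by region-proposal stages. Boxes are (x1, y1, x2, y2)
# tuples; pos_thr/neg_thr are illustrative values, not from the paper.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_anchors(anchors, gt_boxes, pos_thr=0.7, neg_thr=0.3):
    """Assign +1 (object), 0 (background), or -1 (ignore) to each anchor
    by its best IoU against any ground-truth box."""
    labels = []
    for a in anchors:
        best = max((iou(a, g) for g in gt_boxes), default=0.0)
        labels.append(1 if best >= pos_thr else (0 if best <= neg_thr else -1))
    return labels
```

Anchors whose best overlap falls between the two thresholds are ignored during training, a common choice to avoid ambiguous supervision.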
Intensity inhomogeneity is a common phenomenon in real-world images and inevitably causes difficulties for accurate image segmentation. This paper proposes a novel region-based model, named the Region-Bias Fitting (RBF) model, for segmenting images with intensity inhomogeneity by introducing a constraint term based on region bias. Specifically, we first propose a constraint term that includes both the intensity bias and distance information to constrain the local intensity variance of the image. Then, the constraint term is used to construct the local bias constraint and to determine the contribution of each local region so that the image intensity is fitted accurately. Finally, we use the level set method to construct the final energy functional. By using this novel constraint information, the proposed RBF model can accurately delineate object boundaries, relying on the local statistical intensity bias and local intensity fitting to improve the segmentation results. To validate the effectiveness of the proposed method, we conduct thorough experiments on synthetic and real images. Experimental results show that the proposed RBF model clearly outperforms the compared models.
Title: "A Region-Bias Fitting Model based Level Set for Segmenting Images with Intensity Inhomogeneity". Authors: Hai Min, Wei Jia, Yang Zhao. DOI: 10.1145/3285996.3286015. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
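The abstract does not state the RBF energy functional explicitly. As a hedged illustration only, local-fitting level-set models of this family typically minimize an energy of the following general form, to which the RBF model adds its region-bias constraint (notation assumed: $K_\sigma$ a Gaussian window, $f_1, f_2$ the locally fitted intensities inside/outside the contour, $H$ the Heaviside function, $\phi$ the level set function):

```latex
E(\phi, f_1, f_2) = \sum_{i=1}^{2} \lambda_i \int \!\! \int
    K_\sigma(\mathbf{x}-\mathbf{y})\,
    \bigl| I(\mathbf{y}) - f_i(\mathbf{x}) \bigr|^2
    M_i\bigl(\phi(\mathbf{y})\bigr)\, d\mathbf{y}\, d\mathbf{x}
  \;+\; \mu \int \tfrac{1}{2}\bigl( |\nabla\phi(\mathbf{x})| - 1 \bigr)^2 d\mathbf{x},
\qquad M_1(\phi) = H(\phi),\quad M_2(\phi) = 1 - H(\phi)
```

The second integral is the usual distance-regularization term that keeps $\phi$ close to a signed distance function; the paper's specific bias and distance constraints would enter the data term.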
Gang Wu, Zhaohe Wang, Jialin Li, Z. Yu, Baiyou Qiao
With the rapid development of cities, buildings and their surrounding scenes at a given location have undergone large temporal and spatial changes. At present, the public generally lacks the technical means to learn about the protection of urban architecture, which leads to a lack of publicity and education on the subject; this is why architectural heritage is gradually forgotten. Comparing images of historical buildings from different periods is therefore an effective means of raising public awareness and encouraging protection of urban history. In this paper, based on the typical characteristics of urban building images, a contour-based historical building image matching algorithm is proposed. We improve an edge detection algorithm with a new operator and use an automatic local-threshold adjustment strategy. Before matching, we aggregate short line segments where possible to highlight image features and improve the matching rate. By effectively extracting and matching building contours, the algorithm can accurately match images from different historical periods despite their differences. The experiments show that, compared with the baseline algorithm, our algorithm is more sensitive to gradient changes in multiple directions and extracts detailed edges better.
Title: "Contour-based Historical Building Image Matching". DOI: 10.1145/3285996.3286003. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
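The paper's specific operator is not given in the abstract; to make the local-threshold idea concrete, here is a hedged pure-Python sketch of gradient-magnitude edge detection with a threshold that adjusts itself per window (the window size and scale factor `k` are illustrative assumptions, not the paper's values):

```python
# Illustrative sketch, not the paper's exact operator: central-difference
# gradient magnitude, then a per-window threshold proportional to the local
# mean gradient, in the spirit of automatic local-threshold adjustment.

def grad_mag(img):
    """Central-difference gradient magnitude for a 2-D list of floats."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            g[y][x] = (gx * gx + gy * gy) ** 0.5
    return g

def local_threshold_edges(img, win=8, k=1.5):
    """Mark a pixel as edge when its gradient exceeds k * (window mean)."""
    g = grad_mag(img)
    h, w = len(g), len(g[0])
    edges = [[0] * w for _ in range(h)]
    for y0 in range(0, h, win):
        for x0 in range(0, w, win):
            block = [g[y][x] for y in range(y0, min(y0 + win, h))
                             for x in range(x0, min(x0 + win, w))]
            thr = k * (sum(block) / len(block))
            for y in range(y0, min(y0 + win, h)):
                for x in range(x0, min(x0 + win, w)):
                    edges[y][x] = 1 if thr > 0 and g[y][x] > thr else 0
    return edges
```

A local threshold lets faint but locally prominent contours survive in dark or low-contrast regions where a single global threshold would suppress them.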
Kai Yang, Xianhui Liu, Yufei Chen, Haotian Zhang, W. Zhao
In the point set registration problem, local features are as important as global features. In this paper, a non-rigid point set registration method based on a probability model with local constraints is proposed. First, a Gaussian mixture model (GMM) is used to determine the global relationship between the two point sets. Second, local constraints provided by the k nearest neighbor points help to estimate the transformation better. Third, the transformation between the two point sets is computed in a reproducing kernel Hilbert space (RKHS). Finally, the expectation-maximization (EM) algorithm is used for maximum likelihood estimation of the parameters. Comparative experiments on synthesized data show that our algorithm is more robust to distortions such as deformation, noise, and outliers. Our method is also applied to retinal image registration and obtains very good results.
Title: "Non-Rigid Point Set Registration via Gaussians Mixture Model with Local Constraints". DOI: 10.1145/3285996.3286011. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
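In GMM-based registration, the E-step computes soft correspondences: the posterior probability that each data point was generated by each model point. A minimal sketch of that step for 2-D points, assuming isotropic Gaussians with variance `sigma2` and a uniform term with weight `w` absorbing outliers (both parameter names are illustrative, not from the paper):

```python
# Minimal E-step sketch for GMM-based point registration: posterior
# responsibilities P[m][n] of model point m for data point n, with a
# uniform-outlier component. Parameters sigma2 and w are illustrative.
import math

def soft_correspondences(model_pts, data_pts, sigma2=1.0, w=0.1):
    """Return P with P[m][n] = posterior that data point n matches model point m."""
    M, N = len(model_pts), len(data_pts)
    P = [[0.0] * N for _ in range(M)]
    for n, x in enumerate(data_pts):
        dens = [math.exp(-((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2) / (2 * sigma2))
                for y in model_pts]
        denom = sum(dens) + w  # uniform term absorbs outliers
        for m in range(M):
            P[m][n] = dens[m] / denom
    return P
```

The M-step would then re-estimate the transformation (here, in an RKHS) weighted by these responsibilities, and EM alternates the two steps until convergence.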
Quantitative classification of disease regions in lung tissue obtained from Computed Tomography (CT) scans is one of the key steps in evaluating lesion severity in Cystic Fibrosis Lung Disease (CFLD). In this paper, we propose a deep Convolutional Neural Network (CNN) based framework for automatic classification of lung tissue with CFLD. The core of the framework is the integration of deep CNNs into the classification workflow. To train and validate the deep CNNs, we build separate datasets for inspiration and expiration CT scans. We employ transfer learning techniques to fine-tune the network parameters. Specifically, we train ResNet-18 and ResNet-34 and validate their performance on the built datasets. Experimental results in terms of average precision and the receiver operating characteristic curve demonstrate the effectiveness of deep CNNs for classifying lung tissue with CFLD.
Title: "Classification of Lung Tissue with Cystic Fibrosis Lung Disease via Deep Convolutional Neural Networks". Authors: Xi Jiang, Hualei Shen. DOI: 10.1145/3285996.3286020. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
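The fine-tuning itself requires a deep-learning framework, but the ROC-based evaluation named above is easy to show compactly. A pure-Python sketch of ROC-AUC via the rank (Wilcoxon-Mann-Whitney) formulation, which is mathematically equal to the area under the ROC curve; this is a generic metric implementation, not the authors' evaluation code:

```python
# ROC-AUC via the rank formulation: the probability that a randomly chosen
# positive example receives a higher score than a randomly chosen negative
# one, with ties counted as half. Equivalent to the area under the ROC curve.

def roc_auc(scores, labels):
    """scores: classifier scores; labels: 1 for positive, 0 for negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = sum(1.0 if p > n else (0.5 if p == n else 0.0)
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the classifier ranks every diseased sample above every healthy one; 0.5 corresponds to chance-level ranking.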
In this paper, we focus on automatic detection and segmentation of cell nuclei in histopathology images. Although some methods have been presented to solve these problems, there is still scope to improve efficiency and performance. We propose an end-to-end trainable convolutional neural network that learns both object-level and pixel-level information from image patches. In this way, the output feature map can be applied to the nuclei detection and segmentation tasks concurrently. Weighted patch aggregation and refinement methods are then used to obtain the final segmentation result. Experiments on a standard public dataset demonstrate that our method achieves good performance on nuclei detection and segmentation.
Title: "Simultaneous Detection and Segmentation of Cell Nuclei based on Convolutional Neural Network". Authors: Lipeng Xie, Chunming Li. DOI: 10.1145/3285996.3286024. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
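Patch aggregation means blending overlapping per-patch probability maps back into one full-image map. A hedged sketch with uniform per-pixel weights (the paper's specific weighting scheme is not described in the abstract, so plain averaging stands in for it here):

```python
# Illustrative patch aggregation: overlapping per-patch probability maps
# are averaged into a full-image prediction. Uniform weights are used as a
# placeholder for the paper's (unspecified) weighting scheme.

def aggregate_patches(shape, patches):
    """shape: (height, width) of the full image.
    patches: list of (y0, x0, prob_map) with prob_map a 2-D list of floats.
    Returns the per-pixel weighted average over all covering patches."""
    h, w = shape
    acc = [[0.0] * w for _ in range(h)]
    wgt = [[0.0] * w for _ in range(h)]
    for y0, x0, pm in patches:
        for dy, row in enumerate(pm):
            for dx, p in enumerate(row):
                acc[y0 + dy][x0 + dx] += p
                wgt[y0 + dy][x0 + dx] += 1.0
    return [[acc[y][x] / wgt[y][x] if wgt[y][x] else 0.0 for x in range(w)]
            for y in range(h)]
```

In practice a weight that decays toward patch borders (e.g. Gaussian) is a common refinement, since predictions near patch edges have less context and are less reliable.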
Interactive 3-D volume rendering can be rather complicated with conventional ray casting based on simple sampling and texture mapping. Owing to hardware resource limitations, volume rendering algorithms are considerably time-consuming, so adaptive sampling techniques have been proposed to tackle the problem of excessive computational cost. In this paper, as an optimization of parallelized ray-casting volume rendering, we propose an adaptive sampling method that reduces the number of sampling points through a non-linear sampling function. The method is effective at trading off performance against rendering quality. Our experimental results demonstrate that the proposed adaptive sampling method achieves high computational efficiency and produces high-quality images as measured by the MSE and SSIM metrics.
Title: "Adaptive Sampling for GPU-based 3-D Volume Rendering". Authors: Chun-han Zhang, Hao Yin, Shanghua Xiao. DOI: 10.1145/3285996.3286002. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
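The paper's non-linear sampling function is not given in the abstract; as one hedged example of the idea, a power-law warp of the sample parameter concentrates samples where they matter most (here near the ray entry point) while keeping the total count fixed, and the exponent `gamma` is purely illustrative:

```python
# Illustrative non-linear sample placement along a ray: a power-law warp of
# the normalized parameter concentrates samples near t_near for gamma > 1.
# This is one possible sampling function, not the paper's exact choice.

def adaptive_samples(t_near, t_far, n, gamma=2.0):
    """Return n sample depths in [t_near, t_far], denser near t_near."""
    span = t_far - t_near
    return [t_near + span * (i / (n - 1)) ** gamma for i in range(n)]
```

Compared with uniform sampling at the same quality, fewer samples are spent in regions where the transfer function contributes little, which is where the computational savings come from.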
Fei Xie, Yuchen Ma, Zeting Pan, Xinmin Guo, Jun Liu, G. Gao
Context: Facial paralysis severely affects both the mental and physical health of patients. Most existing research relies on subjective judgments when evaluating the degree of facial paralysis, even though the definition of facial paralysis is ambiguous; this results in low evaluation accuracy and even misdiagnosis. Objective: We propose a method for assessing the degree of facial paralysis that considers both static facial asymmetry and dynamic transformation factors. Method: The method compares differences between corresponding local areas on the two sides of the face, thereby effectively analyzing the asymmetry of an abnormal face. Quantitative assessment of facial asymmetry involves three steps: local facial area localization, extraction of asymmetric features, and quantification of the asymmetry between the two sides. We combine static and dynamic quantification to build facial palsy grading models that assess the extent of facial palsy. Results: We report an empirical study on 320 pictures of 40 patients. Although the accuracy in the experimental tests does not reach the ideal level, it exceeds 80%. Conclusion: On our facial paralysis database of 40 patients, the experiments show that our method is encouragingly effective.
Title: "Degree Evaluation of Facial Nerve Paralysis by Combining LBP and Gabor Features". DOI: 10.1145/3285996.3286028. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
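The title names LBP and Gabor features; only the LBP half is simple enough to sketch here. A hedged pure-Python illustration of 8-neighbour LBP codes and an L1 histogram distance between the two (mirrored) face halves, as one plausible way to quantify the static asymmetry described above (the paper's exact feature pipeline is not specified in the abstract):

```python
# Illustrative static-asymmetry measure: 8-bit local binary pattern (LBP)
# codes per pixel, normalized 256-bin histograms per face half, and the L1
# distance between the two histograms. The paper also uses Gabor features,
# which are omitted from this sketch.

def lbp_codes(img):
    """8-bit LBP code for each interior pixel of a 2-D list of intensities."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c, code = img[y][x], 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            codes.append(code)
    return codes

def hist(codes):
    """Normalized 256-bin histogram of LBP codes."""
    bins = [0.0] * 256
    for c in codes:
        bins[c] += 1.0
    total = len(codes) or 1
    return [b / total for b in bins]

def asymmetry(left_img, right_img):
    """L1 distance between LBP histograms of the two (mirrored) face halves."""
    hl, hr = hist(lbp_codes(left_img)), hist(lbp_codes(right_img))
    return sum(abs(a - b) for a, b in zip(hl, hr))
```

A perfectly symmetric face would score 0; larger values indicate stronger texture asymmetry between the halves, which a grading model can then threshold or learn from.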
The human brain is the most complex and efficient information processing system in nature, and most of the information it receives arrives via the visual and auditory pathways. However, little data has been acquired on the relationship between audiovisual interaction and the spatial locations of audiovisual stimuli. Here, we investigated the multisensory integration of audiovisual stimuli presented at spatially consistent and inconsistent locations. The experiment required subjects to attend to stimuli in both the visual and auditory channels and to respond to the target stimulus. Comparative analysis of the experimental results shows that, under divided attention, multisensory stimuli speeded responses regardless of whether the locations were consistent. However, the integration effect induced by spatially consistent multisensory stimuli was stronger than that induced by spatially inconsistent stimuli; that is, spatial consistency affects multisensory integration. This is of great significance for the development of robotics and human-computer interaction technologies.
Title: "The Effect of Spatial Consistence on Multisensory Integration in a Divided Attention Task". Authors: Jingjing Yang, Jinlong Chu, Xiujun Li, Dan Tong. DOI: 10.1145/3285996.3286026. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
An adaptive image de-noising method based on spatial autocorrelation is proposed to effectively remove image noise while preserving structural information. A smoothed image is obtained by average filtering and subtracted from the original image; the resulting high-pass residual should be a combination of boundaries and noise. The autocorrelation at each pixel is calculated on the residual image, and the image is then adaptively filtered based on the autocorrelation values. The results show that the adaptively filtered Lena image is significantly better in quality than the globally filtered one. The method was also applied to a simulated Hoffman-phantom PET image for validation, with the same outcome. In summary, the spatial autocorrelation is computed on the high-pass residual image and adaptive de-noising is then performed. The proposed method will be further developed and applied to image de-noising and image quality improvement.
Title: "Adaptive Image De-noising Method Based on Spatial Autocorrelation". Authors: Ronghui Lu, Tzong-Jer Chen. DOI: 10.1145/3285996.3286023. International Symposium on Image Computing and Digital Medicine, 2018-10-13.
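The residual-based pipeline above can be sketched compactly: mean-filter, subtract to get the residual, then blend original and filtered values per pixel. The per-pixel weight is passed in as a function because the paper derives it from the residual's spatial autocorrelation, which is not specified in the abstract; the weight functions used below are placeholders:

```python
# Sketch of the residual-based adaptive de-noising pipeline. The weight
# function mapping each residual value to a blend factor in [0, 1] stands in
# for the paper's autocorrelation-derived weighting, which is not specified.

def mean3(img):
    """3x3 mean filter on a 2-D list of floats (borders copied unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

def adaptive_denoise(img, weight):
    """Per pixel: out = w*smoothed + (1-w)*original, with w = weight(residual)."""
    smooth = mean3(img)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            wgt = weight(img[y][x] - smooth[y][x])  # residual drives the blend
            out[y][x] = wgt * smooth[y][x] + (1 - wgt) * img[y][x]
    return out
```

A weight near 1 in flat, noise-dominated regions smooths aggressively, while a weight near 0 at strongly structured residuals (edges) preserves boundaries, which is the stated goal of the method.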