The research and practice of medical image 3D reconstruction platform
Yanqiu Chen, Peili Sun
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982704
In this paper, a medical image 3D reconstruction system is proposed that processes and analyzes a series of 2D CT images to build a 3D model. The system was developed on Windows 8 using Microsoft Visual Studio 2012, with the MFC class library and DirectX, and is written entirely in C++. It provides a set of image-enhancement algorithms to improve the visual quality of the images and the precision of surgery, and renders 3D human organs directly. Experimental results on a prototype demonstrate the system's effectiveness. This research lays a good foundation for the further development of medical image 3D reconstruction.
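The abstract does not specify its enhancement algorithms, so the following is only a hedged illustration of a standard CT pre-processing step of that kind: window/level rescaling of Hounsfield units before surface extraction. The soft-tissue defaults (center 40 HU, width 400 HU) are conventional illustrative values, not taken from the paper.

```python
import numpy as np

def window_level(slice_hu, center=40.0, width=400.0):
    """Clamp CT Hounsfield values to a display window and rescale to
    8-bit intensities. Defaults approximate a soft-tissue window
    (illustrative values, not from the paper)."""
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = np.clip(np.asarray(slice_hu, dtype=np.float64), lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)
```

Each 2D slice would be enhanced this way before the stack is handed to the 3D surface-rendering stage.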
Identifying user behavior on Twitter based on multi-scale entropy
Suiyuan He, Hui Wang, Zhihong Jiang
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982720
Twitter, as an online social network, is used for many purposes, including information dissemination, marketing, political organizing, spamming, promotion, and conversation. Characterizing these activities and categorizing users is a challenging task. Traditional user classification models are based on individual profile information such as age, location, registration time, interests, and tweets, and do not consider the full complexity of posting behavior. In this paper we introduce multi-scale entropy for analyzing and identifying user behavior on Twitter and for separating users into categories. We identify five distinct categories of tweeting activity: individual activity, newsworthy information dissemination, advertising and promotion, automatic/robotic activity, and other activities. In our experiments, multi-scale entropy of users' posting time series achieved good separation of these five categories. The method is computationally efficient and has many applications, including automatic spam detection, trend identification, trust management, and user modeling in online social media.
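The core measure is compact to sketch: multi-scale entropy coarse-grains a time series at several scales and computes sample entropy at each. Below is a plain-NumPy sketch; the parameters m=2 and r=0.2 are common defaults for sample entropy, not necessarily the paper's choices, and the input would be a user's posting time series.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series; tolerance r is a fraction of
    the series' standard deviation, distances are Chebyshev."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        # count template pairs of length mm within tolerance
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        n = len(templ)
        return (np.sum(d <= tol) - n) / 2  # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    """Coarse-grain the series at each scale (non-overlapping means),
    then take sample entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, r))
    return out
```

Regular series (e.g. a bot posting on a fixed schedule) yield low entropy across scales, while bursty human activity yields higher values, which is what makes the vector of per-scale entropies usable as a classification feature.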
Performance evaluation of abrupt motion trackers
Xucheng Li, Fasheng Wang, Mingyu Lu, Yaohua Xiong, Wei Sun
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982650
Abrupt motion tracking has gained special attention in the visual tracking community over the past several years. However, the growing interest has not been accompanied by the development of criteria for evaluating the performance of different tracking algorithms. In this paper, we introduce an evaluation criterion for abrupt motion trackers. The criterion contains a set of trials that test the robustness of trackers on a variety of abrupt motions induced by different real-world conditions. Moreover, a new evaluation measure, the Abrupt Capture Rate (ACR), is proposed to quantitatively evaluate the accuracy of different trackers. We demonstrate the validity of the proposed evaluation criteria experimentally on several trackers.
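The abstract does not reproduce the paper's exact definition of ACR. Purely as an illustration of what such a measure could look like, the sketch below computes the fraction of designated abrupt-motion frames on which the tracker's box overlaps ground truth by at least a threshold IoU; the frame set, box format, and 0.5 threshold are all assumptions, not the paper's definition.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def abrupt_capture_rate(pred, truth, abrupt_frames, thresh=0.5):
    """Hypothetical ACR: fraction of abrupt-motion frames on which the
    tracker's box overlaps ground truth by at least `thresh` IoU."""
    hits = sum(1 for f in abrupt_frames if iou(pred[f], truth[f]) >= thresh)
    return hits / len(abrupt_frames)
```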
Reading numbers in natural scene images with convolutional neural networks
Qiang Guo, Jun Lei, D. Tu, Guohui Li
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982655
Reading text from natural images is a hard computer vision task. We present a method that applies deep convolutional neural networks to recognize numbers in natural scene images, and propose a novel approach that eliminates the need for explicit segmentation in multi-digit number recognition. A convolutional neural network (CNN) requires fixed-dimensional input, while number images contain an unknown number of digits. Our method integrates a CNN with a probabilistic graphical model to handle this: a hidden Markov model (HMM) models the image, and the CNN models digit appearance. The combination draws on the strengths of both models and fits them to the problem, allowing both training and recognition to be performed at the word level. No explicit segmentation is required, which saves considerable labor otherwise spent on sophisticated segmentation algorithm design or fine-grained character labeling. Experiments show that a deep CNN dramatically improves performance compared with a Gaussian mixture model as the digit model. We obtained competitive results on the Street View House Numbers (SVHN) dataset.
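The CNN+HMM coupling comes down to decoding: the CNN supplies per-frame class scores and a Viterbi pass over the HMM picks the best state sequence, so digit boundaries never need to be segmented explicitly. A minimal Viterbi sketch follows; the toy emission and transition probabilities stand in for CNN softmax outputs and learned transitions, and are not the paper's models.

```python
import numpy as np

def viterbi(log_emis, log_trans, log_init):
    """Most likely state path for an HMM.

    log_emis: (T, S) per-frame log-likelihoods (in the paper's setting
    these would come from the CNN); log_trans: (S, S); log_init: (S,).
    """
    T, S = log_emis.shape
    score = log_init + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans          # (prev, next)
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(S)] + log_emis[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Training at the word level would alternate between this kind of alignment and updating the CNN on the aligned frames, in the spirit of classic hybrid NN/HMM systems.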
Touch-sensitive interactive projection system
Ming He, Jun Cheng, Dapeng Tao
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982729
In this paper, we present a vision-based human-computer interaction system consisting of only a projector and a single camera, which is no longer limited to traditional display but allows users to touch any projected surface for interaction. The challenge of bare-hand touch detection in a projector-camera system is to recover the depth from the user's fingertip to the projector with monocular vision. A novel approach is proposed that detects touch actions by matching features from accelerated segment test (FAST) keypoints between the captured image and the projected image. By comparing the Hamming distances of binary robust invariant scalable keypoint (BRISK) descriptors at these features, the 3D information near the fingertips can be probed, for example to decide whether a finger is touching the table surface. Extensive experiments on hand region segmentation and touch detection demonstrate the robust performance of our system.
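BRISK descriptors are binary strings compared by Hamming distance, which is what makes this matching step cheap. The sketch below shows that comparison on raw byte arrays; the 2-byte descriptors and the rejection threshold are illustrative only (a real system would use OpenCV's BRISK, which produces 512-bit descriptors, with a Hamming-distance matcher).

```python
import numpy as np

def hamming_distance(d1, d2):
    """Hamming distance between two binary descriptors stored as
    uint8 byte arrays (the storage format of BRISK descriptors)."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def match_descriptors(query, train, max_dist=90):
    """Greedy nearest-neighbour matching by Hamming distance; pairs
    farther than `max_dist` are rejected (threshold is illustrative)."""
    matches = []
    for i, q in enumerate(query):
        dists = [hamming_distance(q, t) for t in train]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j, dists[j]))
    return matches
```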
Learning large number of local statistical models via variational Bayesian inference for brain voxel classification in magnetic resonance images
Yong Xia, Yanning Zhang
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982657
As an essential step in brain studies, measuring the distribution of the major brain tissues, gray matter, white matter, and cerebrospinal fluid (CSF), with magnetic resonance imaging (MRI) has attracted extensive research effort over the past years. Many of the resulting brain tissue differentiation methods are based on the finite statistical mixture model which, despite its computational efficiency, is not strictly followed by real data owing to the intrinsically limited quality of MRI, and may therefore produce less accurate results. In this paper, a novel large-scale variational Bayesian inference (LS-VBI) learning algorithm is proposed for automated classification of brain MRI voxels. To cope with the complexity and dynamic nature of MRI data, the algorithm uses a large number of local statistical models, in each of which all statistical parameters are treated as random variables sampled from conjugate prior distributions. These models are learned using variational Bayesian inference and combined to predict the class label of each brain voxel. The algorithm has been evaluated against several state-of-the-art brain tissue segmentation methods on both synthetic and clinical brain MRI data sets. Our results show that it classifies brain voxels more effectively and provides a more precise distribution of the major brain tissues.
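The conjugate-prior assumption is what keeps a large number of local models tractable: each posterior update has a closed form. As a minimal stand-in for one such update (not the authors' full variational scheme), here is the conjugate posterior over a Gaussian mean with known noise variance, the simplest member of that family:

```python
import numpy as np

def posterior_normal_mean(data, prior_mean, prior_var, noise_var):
    """Closed-form posterior over a Gaussian mean with known noise
    variance and a Normal(prior_mean, prior_var) prior; the kind of
    cheap conjugate update each local model relies on (a simplified
    stand-in, not the paper's algorithm)."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var
```

With many voxels observed, the posterior mean moves toward the data mean and the posterior variance shrinks, so each local model becomes confident from its own neighborhood alone.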
Dollar bill denomination recognition algorithm based on local texture feature
Xinge You, Qingjiang Hu, Duanquan Xu, Xian-cheng Fu, Qixin Sun
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982697
In this paper, a dollar bill denomination recognition algorithm based on local texture features is proposed. The algorithm first binarizes a local region of the bill image with the between-cluster variance method to accentuate the differences between regions, and then extracts local texture features with a cross algorithm, which enables correct recognition. Simulation results show that the method is fast and accurate, and is suitable for real-time recognition.
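The "between-cluster variance method" the abstract refers to is Otsu's thresholding: choose the gray level that maximizes the variance between the two resulting classes. A NumPy sketch for 8-bit images (the binarization step only; the paper's "cross algorithm" for texture features is not detailed in the abstract and is not attempted here):

```python
import numpy as np

def otsu_threshold(img):
    """Binarization threshold maximizing between-class variance
    (Otsu's method) for a uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # empty classes contribute 0
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold form one class and the rest the other, which is the binarized local image the texture-extraction stage would consume.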
Multi-label learning with co-training based on semi-supervised regression
Meixiang Xu, Fuming Sun, Xiaojun Jiang
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982681
The goal of this paper is to categorize images with multiple labels based on semi-supervised learning. Conventional semi-supervised regression methods are predominantly used to solve single-label problems, yet in many real-world applications it is common for an instance to be associated with a set of labels simultaneously. In this paper, a novel multi-label learning method with co-training based on semi-supervised regression is proposed for multi-label classification. Experimental results on two real-world data sets demonstrate that the proposed method is applicable to multi-label learning problems and outperforms three existing state-of-the-art algorithms.
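The co-training paradigm itself is easy to sketch: two learners trained on different feature views take turns pseudo-labeling the unlabeled pool for each other. The sketch below is a generic single-output illustration, not the authors' algorithm: the learners are plain least-squares fits, "confidence" is approximated by agreement between the two views, and the multi-label extension is omitted. Any label values at unlabeled indices are ignored and overwritten with pseudo-labels.

```python
import numpy as np

def fit_ls(X, y):
    """Least-squares linear model with a bias term (stand-in learner)."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_ls(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

def co_train(X1, X2, y, labeled, unlabeled, rounds=3):
    """Co-training for regression on two feature views: each round,
    the unlabeled point on which the two views agree most is
    pseudo-labeled with their averaged prediction and promoted to
    the labeled pool."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    y = np.asarray(y, dtype=float).copy()
    for _ in range(rounds):
        if not unlabeled:
            break
        w1 = fit_ls(X1[labeled], y[labeled])
        w2 = fit_ls(X2[labeled], y[labeled])
        preds1 = predict_ls(w1, X1[unlabeled])
        preds2 = predict_ls(w2, X2[unlabeled])
        k = int(np.argmin(np.abs(preds1 - preds2)))  # agreement = confidence
        idx = unlabeled.pop(k)
        y[idx] = 0.5 * (preds1[k] + preds2[k])
        labeled.append(idx)
    return fit_ls(X1[labeled], y[labeled]), fit_ls(X2[labeled], y[labeled])
```

A multi-label version would run one such regressor per label (or a joint model) per view, which is where the paper's contribution lies.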
Adaptive intra prediction filtering (AIPF)
Yuanfeng He, Qijun Wang, Xinge You, Duanquan Xu
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982715
Intra prediction is an important coding tool that exploits correlation within one picture in image and video compression. Before the final intra prediction values are generated for the current block along oblique angles, a fixed low-pass 3-tap filter (1, 2, 1) is applied to the three prediction pixel values to suppress impulse noise. In this paper, we use an adaptive intra prediction filter (AIPF) in place of the fixed filter to minimize the prediction error. To obtain the adaptive filter coefficients online, with acceptable accuracy and no coding overhead, we combine AIPF with template matching (TM). After template matching finds the best estimate of the current block, the optimal adaptive filter coefficients are computed by least-squares optimization, treating the best estimate as the `current' block. The adaptive filter then produces the intra prediction values instead of the fixed 3-tap low-pass filter. Experimental results show that AIPF yields a stable coding gain on all test sequences and reduces the bit-rate by up to 1.74% compared with using TM alone.
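The two filters at play can be sketched directly: the fixed (1, 2, 1)/4 smoothing of a line of reference pixels, and a least-squares fit of adaptive 3-tap coefficients against a target line. In the paper the target comes from the template-matched block; in this illustration it is synthetic, and the border handling (edge replication) is an assumption.

```python
import numpy as np

FIXED = np.array([1.0, 2.0, 1.0]) / 4.0  # the standard smoothing filter

def smooth_refs(refs, taps):
    """Filter a line of reference pixels with a 3-tap kernel,
    replicating the border pixels."""
    padded = np.concatenate([refs[:1], refs, refs[-1:]])
    return taps[0] * padded[:-2] + taps[1] * padded[1:-1] + taps[2] * padded[2:]

def fit_taps(refs, target):
    """Least-squares 3-tap coefficients so the filtered references
    best match `target` (standing in for the references of the
    template-matched block)."""
    padded = np.concatenate([refs[:1], refs, refs[-1:]])
    A = np.stack([padded[:-2], padded[1:-1], padded[2:]], axis=1)
    taps, *_ = np.linalg.lstsq(A, target, rcond=None)
    return taps
```

Because the decoder can rerun the same template matching and least-squares fit, the adapted coefficients never need to be signaled, which is the "no coding overhead" claim.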
Multi-scale sparse denoising model based on non-separable wavelet
W. Zeng, Long Zhou, Renhong Xu, Biao Li
Pub Date: 2014-12-15 | DOI: 10.1109/SPAC.2014.6982710
For image denoising, non-separable wavelets were adopted in place of traditional multi-scale sparse representation methods, which use blocks of different sizes as basis functions to represent the image. Their advantages include revealing multi-scale structure, depicting texture at different scales, and, to a certain extent, separating singularity structures of different directions and types. Based on non-separable wavelets, a multi-scale sparse denoising model in the wavelet domain was established, and a collaborative sparse model for sub-bands containing similar structures was designed to enhance the stability and accuracy of the sparse representation. The results show that the denoising effect of the new approach is clearly superior to that of the K-SVD algorithm.
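The wavelet-domain denoising recipe (transform, shrink the detail coefficients, invert) can be illustrated with a one-level Haar transform. One caveat up front: Haar is a separable basis used here only as a familiar stand-in, whereas the paper's point is precisely to replace separable wavelets with non-separable ones; the threshold value is also illustrative.

```python
import numpy as np

def haar_1level(x):
    """One level of the 1-D Haar transform: approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def inv_haar_1level(a, d):
    """Exact inverse of haar_1level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(c, t):
    """Soft-thresholding: the shrinkage step of sparse denoising."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(signal, t):
    """Shrink the detail sub-band, keep the approximation, invert."""
    a, d = haar_1level(signal)
    return inv_haar_1level(a, soft(d, t))
```

The paper's collaborative sparse model would additionally couple the shrinkage across sub-bands that contain similar structures, rather than thresholding each sub-band independently as above.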