Unequal error protection random linear coding for multimedia communications
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662033
D. Vukobratović, V. Stanković
This paper focuses on recent research on unequal error protection random linear coding (UEP RLC) for network coded (NC) multimedia communications. We define a class of UEP RLC called expanding window random linear coding (EW-RLC) and provide an exact decoding probability analysis for the different importance classes of the source data, assuming a Gaussian elimination (GE) decoder at the receiver. Using this analysis, we present a detailed investigation of EW-RLC design for distortion-optimized transmission of scalable H.264/SVC coded video over packet networks with erasures, targeting a range of heterogeneous receivers with varying reception overhead capabilities.
{"title":"Unequal error protection random linear coding for multimedia communications","authors":"D. Vukobratović, V. Stanković","doi":"10.1109/MMSP.2010.5662033","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662033","url":null,"abstract":"This paper focuses on recent research on unequal error protection random linear coding (UEP RLC) for applications in network coded (NC) multimedia communications. We define a class of UEP RLC called expanding window random linear coding (EW-RLC) and provide exact decoding probability analysis for different importance classes of the source data assuming the Gaussian Elimination (GE) decoder applied at the receiver. Using this analysis, we provide a detailed investigation of the EW-RLC design for the distortion optimized scalable H.264/SVC coded video transmission over packet networks with packet erasures over a range of heterogeneous receivers with varying receiver reception overhead capabilities.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124566896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Person recognition using a bag of facial soft biometrics (BoFSB)
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662074
A. Dantcheva, J. Dugelay, P. Elia
This work introduces the idea of using a bag of facial soft biometrics for person verification and identification. The tool inherits the non-intrusiveness and computational efficiency of soft biometrics, which allow fast, enrolment-free biometric analysis even without the consent and cooperation of the surveillance subject. In conjunction with the proposed system design and detection algorithms, we shed light on the statistical properties of the parameters pertinent to the proposed system, and provide insight into general design aspects of soft-biometric systems and into efficient resource allocation.
{"title":"Person recognition using a bag of facial soft biometrics (BoFSB)","authors":"A. Dantcheva, J. Dugelay, P. Elia","doi":"10.1109/MMSP.2010.5662074","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662074","url":null,"abstract":"This work introduces the novel idea of using a bag of facial soft biometrics for person verification and identification. The novel tool inherits the non-intrusiveness and computational efficiency of soft biometrics, which allow for fast and enrolment-free biometric analysis, even in the absence of consent and cooperation of the surveillance subject. In conjunction with the proposed system design and detection algorithms, we also proceed to shed some light on the statistical properties of different parameters that are pertinent to the proposed system, as well as provide insight on general design aspects in soft-biometric systems, and different aspects regarding efficient resource allocation.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124545790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Bayesian image annotation framework integrating search and context
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662072
Rui Zhang, Kui Wu, Kim-Hui Yap, L. Guan
Conventional approaches to image annotation tackle the problem using low-level visual information alone. Given the importance of the constrained interactions among objects in a real-world scene, contextual information has been utilized to recognize scene and object categories. In this paper, we propose a Bayesian approach to region-based image annotation which integrates content-based search and context into a unified framework. The content-based search selects representative keywords by matching an unlabeled image against the labeled ones, followed by weighted keyword ranking; these keywords are in turn used by the context model to calculate the a priori probabilities of the object categories. Finally, a Bayesian framework integrates the a priori probabilities with the visual properties of image regions. The framework was evaluated using two databases and several performance measures, demonstrating its superiority over both purely content-based and purely context-based approaches.
{"title":"A Bayesian image annotation framework integrating search and context","authors":"Rui Zhang, Kui Wu, Kim-Hui Yap, L. Guan","doi":"10.1109/MMSP.2010.5662072","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662072","url":null,"abstract":"Conventional approaches to image annotation tackle the problem based on the low-level visual information. Considering the importance of the information on the constrained interaction among the objects in a real world scene, contextual information has been utilized to recognize scene and object categories. In this paper, we propose a Bayesian approach to region-based image annotation, which integrates the content-based search and context into a unified framework. The content-based search selects representative keywords by matching an unlabeled image with the labeled ones followed by a weighted keyword ranking, which are in turn used by the context model to calculate the a prior probabilities of the object categories. Finally, a Bayesian framework integrates the a priori probabilities and the visual properties of image regions. The framework was evaluated using two databases and several performance measures, which demonstrated its superiority to both visual content-based and context-based approaches.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128989911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive semi-regular remeshing: A Voronoi-based approach
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662045
Aymen Kammoun, F. Payan, M. Antonini
We propose an adaptive semi-regular remeshing algorithm for surface meshes. Our algorithm uses Voronoi tessellations during both the simplification and refinement stages. During simplification, the algorithm constructs a first centroidal Voronoi tessellation of the vertices of the input mesh; the sites of the Voronoi cells become the vertices of the base mesh of the semi-regular output. During refinement, the new vertices added at each resolution level by regular subdivision are treated as new Voronoi sites. We then use the Lloyd relaxation algorithm to update their positions, finally obtaining uniform semi-regular meshes. Our algorithm also enables adaptive remeshing by tuning a threshold based on the mass probability of the Voronoi sites added by subdivision. Experiments show that our technique produces semi-regular meshes of high quality, with significantly fewer triangles than state-of-the-art techniques.
{"title":"Adaptive semi-regular remeshing: A Voronoi-based approach","authors":"Aymen Kammoun, F. Payan, M. Antonini","doi":"10.1109/MMSP.2010.5662045","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662045","url":null,"abstract":"We propose an adaptive semi-regular remeshing algorithm for surface meshes. Our algorithm uses Voronoi tessellations during both simplification and refinement stages. During simplification, the algorithm constructs a first centroidal Voronoi tessellation of the vertices of the input mesh. The sites of the Voronoi cells are the vertices of the base mesh of the semi-regular output. During refinement, the new vertices added at each resolution level by regular subdivision are considered as new Voronoi sites. We then use the Lloyd relaxation algorithm to update their position, and finally we obtain uniform semi-regular meshes. Our algorithm also enables adaptive remeshing by tuning a threshold based on the mass probability of the Voronoi sites added by subdivision. Experimentation shows that our technique produces semi-regular meshes of high quality, with significantly less triangles than state of the art techniques.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125653085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strategies of buffering schedule in P2P VoD streaming
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662056
Zhi Wang, Lifeng Sun, Shiqiang Yang
Compared to live peer-to-peer (P2P) streaming, modern P2P video-on-demand (VoD) systems bring much larger volumes of video and more interactive controls to Internet users. With increasing video bitrates and the full VCR controls of P2P VoD, the “buffering” behavior motivates us to design different scheduling and service strategies for peers, so as to improve playback performance and relieve the dedicated streaming server by making the best use of the bandwidth and cache capacities of buffering peers. In our design, peers strategically decide which segments of the video to download first and which requests to serve first. We conduct extensive simulations to evaluate the strategies, and the results show that our design outperforms the conventional sequential scheme in both improving playback quality and reducing server load.
{"title":"Strategies of buffering schedule in P2P VoD streaming","authors":"Zhi Wang, Lifeng Sun, Shiqiang Yang","doi":"10.1109/MMSP.2010.5662056","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662056","url":null,"abstract":"As compared to live peer-to-peer (P2P) streaming, modern P2P video-on-demand (VoD) systems have brought much larger volumes of videos and more interactive controls to the Internet users. As the increase of bitrate of the videos and the full VCR controls of P2P VoD, the behavior “buffering” motivates us to design different schedule and service strategies for peers, to improve the playback performance, and the alleviation of the dedicated streaming server, by making best use of the bandwidth and cache capacities of these buffering peers. In our design, peers strategically decide which segments in the video to download first, and which requests to serve first. We conduct extended simulations to evaluate the performance of the strategies, and the results show our design outperforms the conventional sequential scheme, with respect to improving the playback quality and reducing the server load.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116668791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Encoder rate control for block-based distributed video coding
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662042
Chen Fu, Joohee Kim
Distributed video coding is a new paradigm for video compression based on the Slepian-Wolf and Wyner-Ziv theorems. Wyner-Ziv video coding, a form of lossy compression with receiver side information, enables low-complexity video encoding at the expense of a complex decoder. Most existing distributed video coding techniques require a feedback channel to determine the number of parity bits for decoding Wyner-Ziv frames at the decoder. However, a feedback channel is not available in some applications, and feedback-based decoder rate control may be precluded by the delay constraints of wireless video sensor networks. In this paper, an encoder-based rate control method for distributed video coding is proposed. The proposed solution consists of a low-complexity side information generation method at the encoder and a rate estimation algorithm that determines the number of parity bits to be transmitted to the decoder. The performance of the proposed algorithm is compared with existing encoder-based rate control methods and with a feedback-channel-based decoder rate control algorithm.
{"title":"Encoder rate control for block-based distributed video coding","authors":"Chen Fu, Joohee Kim","doi":"10.1109/MMSP.2010.5662042","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662042","url":null,"abstract":"Distributed video coding is a new paradigm for video compression based on the Slepian-Wolf and Wyner-Ziv theorems. Wyner-Ziv video coding, a lossy compression with receiver side information, enables low-complexity video encoding at the expense of a complex decoder. Most of the existing distributed video coding techniques require a feedback channel to determine the number of parity bits for decoding Wyner-Ziv frames at the decoder. However, a feedback channel is not available for some applications or a feedback channel-based decoder rate control may not be used due to delay constraints in wireless video sensor network applications. In this paper, an encoder-based rate control method for distributed video coding is proposed. The proposed solution consists of a low complexity side information generation method at the encoder and a rate estimation algorithm that determines the number of parity bits to be transmitted to the decoder. The performance of the proposed algorithm is compared with existing encoder-based rate control methods and a decoder-based rate control algorithm based on a feedback channel.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125250408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio-haptic physically-based simulation of walking on different grounds
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662031
L. Turchet, R. Nordahl, S. Serafin, Amir Berrezag, Smilen Dimitrov, V. Hayward
We describe a system which simulates, in real time, the auditory and haptic sensations of walking on different surfaces. The system is based on a pair of sandals enhanced with pressure sensors and actuators. The pressure sensors detect the interaction force during walking and control several physically based synthesis algorithms, which drive both the auditory and haptic feedback. The hardware and software components of the system are described, together with possible uses and directions for improvement in future design iterations.
{"title":"Audio-haptic physically-based simulation of walking on different grounds","authors":"L. Turchet, R. Nordahl, S. Serafin, Amir Berrezag, Smilen Dimitrov, V. Hayward","doi":"10.1109/MMSP.2010.5662031","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662031","url":null,"abstract":"We describe a system which simulates in realtime the auditory and haptic sensations of walking on different surfaces. The system is based on a pair of sandals enhanced with pressure sensors and actuators. The pressure sensors detect the interaction force during walking, and control several physically based synthesis algorithms, which drive both the auditory and haptic feedback. The different hardware and software components of the system are described, together with possible uses and possibilities for improvements in future design iterations.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132999438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient framework on large-scale video genre classification
Pub Date: 2010-10-01 | DOI: 10.1109/MMSP.2010.5662069
Ning Zhang, L. Guan
Efficient data mining and indexing are important for multimedia analysis and retrieval. In large-scale video analysis, effective genre categorization plays an important role and serves as one of the fundamental steps of data mining. Existing works rely on domain-knowledge-dependent feature extraction, which limits both genre diversity and data volume scalability. In this paper, we propose a systematic framework that classifies video genres automatically, using domain-knowledge-independent descriptors for feature extraction and a bag-of-visual-words (BoW) model for compact video representation. A GPU-accelerated scale-invariant feature transform (SIFT) local descriptor is adopted for feature extraction. A BoW model with an innovative codebook generated by bottom-up two-layer K-means clustering is proposed to abstract the video characteristics. Besides the histogram-based distribution for summarizing video data, a modified latent Dirichlet allocation (mLDA) based distribution is also introduced. At the classification stage, a k-nearest neighbor (k-NN) classifier is employed. Compared with the state-of-the-art large-scale genre categorization in [1], experimental results on a 23-sports dataset demonstrate that our framework achieves comparable classification accuracy with 27% and 64% expansion in data volume and diversity, respectively.
{"title":"An efficient framework on large-scale video genre classification","authors":"Ning Zhang, L. Guan","doi":"10.1109/MMSP.2010.5662069","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662069","url":null,"abstract":"Efficient data mining and indexing is important for multimedia analysis and retrieval. In the field of large-scale video analysis, effective genre categorization plays an important role and serves one of the fundamental steps for data mining. Existing works utilize domain-knowledge dependent feature extraction, which is limited from genre diversification as well as data volume scalability. In this paper, we propose a systematic framework for automatically classifying video genres using domain-knowledge independent descriptors in feature extraction, and a bag-of-visualwords (BoW) based model in compact video representation. Scale invariant feature transform (SIFT) local descriptor accelerated by GPU hardware is adopted for feature extraction. BoW model with an innovative codebook generation using bottom-up two-layer K-means clustering is proposed to abstract the video characteristics. Besides the histogram-based distribution in summarizing video data, a modified latent Dirichlet allocation (mLDA) based distribution is also introduced. At the classification stage, a k-nearest neighbor (k-NN) classifier is employed. Compared with state of art large-scale genre categorization in [1], the experimental results on a 23-sports dataset demonstrate that our proposed framework achieves a comparable classification accuracy with 27% and 64% expansion in data volume and diversity, respectively.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125674929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An N-gram model for unstructured audio signals toward information retrieval
Pub Date: 2010-10-01 | DOI: 10.1109/MMSP.2010.5662068
Samuel Kim, Shiva Sundaram, P. Georgiou, Shrikanth S. Narayanan
An N-gram modeling approach for unstructured audio signals is introduced, with applications to audio information retrieval. The proposed N-gram approach aims to capture local dynamic information in acoustic words within the acoustic topic model framework, which assumes that an audio signal consists of latent acoustic topics, each interpretable as a distribution over acoustic words. Experimental results on classifying audio clips from the BBC Sound Effects Library according to both semantic and onomatopoeic labels indicate that the proposed N-gram approach outperforms a pure bag-of-words approach by providing complementary local dynamic information.
{"title":"An N-gram model for unstructured audio signals toward information retrieval","authors":"Samuel Kim, Shiva Sundaram, P. Georgiou, Shrikanth S. Narayanan","doi":"10.1109/MMSP.2010.5662068","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662068","url":null,"abstract":"An N-gram modeling approach for unstructured audio signals is introduced with applications to audio information retrieval. The proposed N-gram approach aims to capture local dynamic information in acoustic words within the acoustic topic model framework which assumes an audio signal consists of latent acoustic topics and each topic can be interpreted as a distribution over acoustic words. Experimental results on classifying audio clips from BBC Sound Effects Library according to both semantic and onomatopoeic labels indicate that the proposed N-gram approach performs better than using only a bag-of-words approach by providing complementary local dynamic information.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115462063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward realtime side information decoding on multi-core processors
Pub Date: 2010-10-01 | DOI: 10.1109/MMSP.2010.5662040
S. Momcilovic, Yige Wang, S. Rane, A. Vetro
Most distributed source coding schemes involve the application of a channel code to the signal and transmission of the resulting syndromes. For low-complexity encoding with superior compression performance, graph-based channel codes such as LDPC codes are used to generate the syndromes. The encoder performs simple XOR operations, while the decoder uses belief propagation (BP) decoding to recover the signal of interest using the syndromes and some correlated side information. We consider parallelization of BP decoding on general-purpose multi-core CPUs. The motivation is to make BP decoding fast enough for realtime applications. We consider three different BP decoding algorithms: Sum-Product BP, Min-Sum BP and Algorithm E. The speedup obtained by parallelizing these algorithms is examined along with the tradeoff against decoding performance. Parallelization is achieved by dividing the received syndrome vectors among different cores, and by using vector operations to simultaneously process multiple check nodes in each core. While Min-Sum BP has intermediate decoding complexity, a “vectorized” version of Min-Sum BP performs nearly as fast as the much simpler Algorithm E with significantly fewer decoding errors. Our experiments indicate that, for the best compromise between speed and performance, the decoder should use Min-Sum BP when the side information is of good quality and Sum-Product BP otherwise.
{"title":"Toward realtime side information decoding on multi-core processors","authors":"S. Momcilovic, Yige Wang, S. Rane, A. Vetro","doi":"10.1109/MMSP.2010.5662040","DOIUrl":"https://doi.org/10.1109/MMSP.2010.5662040","url":null,"abstract":"Most distributed source coding schemes involve the application of a channel code to the signal and transmission of the resulting syndromes. For low-complexity encoding with superior compression performance, graph-based channel codes such as LDPC codes are used to generate the syndromes. The encoder performs simple XOR operations, while the decoder uses belief propagation (BP) decoding to recover the signal of interest using the syndromes and some correlated side information. We consider parallelization of BP decoding on general-purpose multi-core CPUs. The motivation is to make BP decoding fast enough for realtime applications. We consider three different BP decoding algorithms: Sum-Product BP, Min-Sum BP and Algorithm E. The speedup obtained by parallelizing these algorithms is examined along with the tradeoff against decoding performance. Parallelization is achieved by dividing the received syndrome vectors among different cores, and by using vector operations to simultaneously process multiple check nodes in each core. While Min-Sum BP has intermediate decoding complexity, a “vectorized” version of Min-Sum BP performs nearly as fast as the much simpler Algorithm E with significantly fewer decoding errors. Our experiments indicate that, for the best compromise between speed and performance, the decoder should use Min-Sum BP when the side information is of good quality and Sum-Product BP otherwise.","PeriodicalId":105774,"journal":{"name":"2010 IEEE International Workshop on Multimedia Signal Processing","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117225061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}