Rich representation and ranking for photographic image retrieval in ImageCLEF 2007
Sheng Gao, J. Chevallet, Joo-Hwee Lim
2008 IEEE 10th Workshop on Multimedia Signal Processing
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665139
The task of ad hoc photographic image retrieval in the ImageCLEF 2007 international benchmark is to retrieve images from the database that are relevant to a user query formulated as keywords and image examples. This paper presents the rich representation and indexing technologies exploited in our system, which participated in ImageCLEF 2007. It uses diverse visual content representations, text representation, pseudo-relevance feedback and fusion, which placed our system, with a mean average precision of 0.2833, 4th among 457 automatic runs submitted by 20 participants to photographic ImageCLEF 2007, and 2nd in terms of participants. Our systematic analysis demonstrates that 1) combining diverse low-level visual features and ranking technologies significantly improves the content-based image retrieval (CBIR) system; 2) cross-modality pseudo-relevance feedback improves system performance; and 3) fusion of CBIR and text-based image retrieval (TBIR) outperforms either single-modality system.
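Point 3) above, the fusion of CBIR and TBIR, is commonly realized as a weighted combination of normalized per-modality scores. The sketch below illustrates only that generic idea; the min-max normalization, the weight `alpha`, and the toy scores are assumptions, not the paper's exact fusion rule.

```python
def minmax_normalize(scores):
    """Min-max normalize a dict of {doc_id: score} to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {d: (s - lo) / span for d, s in scores.items()}

def fuse(cbir_scores, tbir_scores, alpha=0.5):
    """Linearly fuse two score lists (alpha is a hypothetical weight)."""
    cbir = minmax_normalize(cbir_scores)
    tbir = minmax_normalize(tbir_scores)
    docs = set(cbir) | set(tbir)
    fused = {d: alpha * cbir.get(d, 0.0) + (1 - alpha) * tbir.get(d, 0.0)
             for d in docs}
    # Return doc ids ranked by fused score, best first.
    return sorted(fused, key=fused.get, reverse=True)

# Toy example: img3 is mediocre in both modalities but best after fusion.
cbir = {"img1": 0.9, "img2": 0.2, "img3": 0.5}
tbir = {"img1": 0.1, "img2": 0.8, "img3": 0.7}
ranking = fuse(cbir, tbir)
```

The same skeleton accommodates other normalizations (z-score, rank-based) by swapping out `minmax_normalize`.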
Generalized framework for reduced precision global motion estimation between digital images
K. Yang, M. Frater, E. Huntington, M. Pickering, J. Arnold
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665052
The efficiency of real-time digital image processing operations has an important impact on the cost and realizability of complex algorithms. Global motion estimation is an example of such a complex algorithm. Most digital image processing is carried out with a precision of 8 bits per pixel; however, there has always been interest in low-complexity algorithms. One way of achieving low complexity is through low precision, such as might be achieved by quantizing each pixel to a single bit. Previous approaches to one-bit motion estimation have achieved quantization through a combination of spatial filtering/averaging and threshold setting. In this paper we present a generalized framework for precision reduction. Motivated by this framework, we show that bit-plane selection provides higher performance, with lower complexity, than conventional approaches to quantization.
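Bit-plane selection reduces each 8-bit pixel to a single bit by keeping one bit plane, after which block matching can use the count of mismatched bits (an XOR-based cost) instead of a sum of absolute differences. The sketch below illustrates this idea; the choice of plane `k=4`, the block size, and the search range are illustrative assumptions, not the paper's framework.

```python
import numpy as np

def bit_plane(img, k=4):
    """Quantize an 8-bit image to one bit by selecting bit plane k
    (k=4 is illustrative; which plane works best is what the paper studies)."""
    return (img >> k) & 1

def binary_block_match(ref, cur, block, search=4):
    """Exhaustive-search motion vector for one n-by-n block of `cur`,
    using the number of mismatched bits (XOR count) as the cost."""
    y, x, n = block
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + n > ref.shape[0] or xx + n > ref.shape[1]:
                continue
            cost = np.count_nonzero(ref[yy:yy+n, xx:xx+n] ^ cur[y:y+n, x:x+n])
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv
```

On hardware, the XOR cost reduces to a popcount, which is where the complexity advantage over 8-bit SAD comes from.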
Automatic video object segmentation using depth information and an active contour model
Yingdong Ma, S. Worrall, A. Kondoz
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665204
Automatic video object segmentation based on spatial-temporal information has been a research topic for many years. Existing approaches can achieve good results in some cases, such as where there is a simple background. However, in the case of cluttered backgrounds or low-quality video input, automatic video object segmentation is still a problem without a general solution. A novel approach is introduced in this work to deal with this problem by using depth information in the algorithm. The proposed approach obtains the initial object masks from a depth map and from motion detection. The object boundaries are obtained by updating the object masks using a simultaneous combination of multiple cues, including spatial location, intensity, and edges, within an active contour model. Experimental results show that this method is effective and produces good output, even with cluttered backgrounds. It is also robust when the quality of the input depth and video is low.
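The initial mask construction can be pictured as intersecting a depth cue with a motion cue. The following is a minimal hypothetical sketch assuming simple global thresholds; the paper's actual depth-map and motion-detection processing is more elaborate.

```python
import numpy as np

def initial_object_mask(depth, prev_frame, cur_frame,
                        depth_thresh=0.5, motion_thresh=10):
    """Hypothetical illustration: mark pixels that are both near the camera
    (depth below a threshold) and moving (large inter-frame difference).
    Both thresholds are assumed values, not the authors' parameters."""
    near = depth < depth_thresh
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > motion_thresh
    return near & moving
```

A mask produced this way would then seed the active contour, which refines the boundary with the spatial, intensity, and edge cues described above.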
Computationally efficient interference detection in videokeratoscopy images
D. Alonso-Caneiro, D. R. Iskander, M. Collins
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665127
An optimal videokeratoscopic image presents a strong, well-oriented pattern over the majority of the measured corneal surface. In the presence of interference, arising from reflections from eyelashes or tear film instability, the pattern's flow is disturbed and the local orientation of the area of interference is no longer coherent with the global flow. Detecting and analysing videokeratoscopic pattern interference is important when assessing tear film surface quality, break-up time and location, as well as when designing tools that provide a more accurate static measurement of corneal topography. In this paper a set of algorithms for detecting interference patterns in videokeratoscopic images is presented. First a frequency-domain approach is used to subtract the background information from the oriented structure, and then a gradient-based analysis is used to obtain the pattern's orientation and coherence. The proposed techniques are compared to a previously reported method based on statistical block normalisation and Gabor filtering. The results indicate that the proposed technique leads, in most cases, to a better videokeratoscopic interference detection system: for a given probability of useful-signal detection (99.7%) it has a significantly lower probability of false alarm, and at the same time it is computationally much more efficient than the previously reported method.
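Gradient-based orientation and coherence analysis of the kind mentioned above is conventionally built on the local structure tensor, whose eigenvalues yield a coherence measure in [0, 1]. The sketch below shows that standard construction over a whole image; the global (rather than windowed) averaging is a simplification for illustration, not the authors' exact algorithm.

```python
import numpy as np

def orientation_coherence(img):
    """Dominant orientation and coherence from the structure tensor
    J = [[<gx*gx>, <gx*gy>], [<gx*gy>, <gy*gy>]], averaged over the image.
    Coherence = (l1 - l2) / (l1 + l2) for eigenvalues l1 >= l2 >= 0:
    1 for a perfectly oriented pattern, 0 for an isotropic region."""
    gy, gx = np.gradient(img.astype(float))
    jxx, jxy, jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()
    # Eigenvalues of the 2x2 symmetric tensor via trace/determinant.
    tr, det = jxx + jyy, jxx * jyy - jxy * jxy
    disc = np.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    theta = 0.5 * np.arctan2(2 * jxy, jxx - jyy)  # dominant orientation (rad)
    coherence = (l1 - l2) / (l1 + l2) if (l1 + l2) > 0 else 0.0
    return theta, coherence
```

In an interference detector, the same computation would run per block, flagging blocks whose local orientation disagrees with the global flow or whose coherence is low.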
SDI: New metric for quantification of speckle noise in ultrasound imaging
K. Joshi, R. Kamathe
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665060
In ultrasound images a special type of acoustic noise, technically known as speckle noise, is the major factor in image quality degradation. In order to improve image quality by means of speckle suppression, and thus to increase the diagnostic potential of medical ultrasound, it is important to quantify the speckle. This paper describes quality metrics for speckle in coherent imaging and their limitations. It also describes a new metric, SDI, its uniqueness in quantifying speckle, and a comparison of its performance with existing metrics. Empirical verification of SDI with a set of test images demonstrates its ability to quantify speckle in ultrasound. A subjective criterion is also taken into account to support the results.
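Since the definition of SDI is not given in the abstract, the sketch below instead shows a common baseline from the speckle literature, the speckle index (average ratio of local standard deviation to local mean). It illustrates what a scalar speckle-quantification metric looks like; it is not the proposed SDI.

```python
import numpy as np

def speckle_index(img, win=3):
    """Generic speckle index: mean of (local std / local mean) over all
    win-by-win neighbourhoods. Higher values indicate stronger speckle.
    This is a standard literature measure, not the paper's SDI."""
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    ratios = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            m = patch.mean()
            ratios[y, x] = patch.std() / m if m > 0 else 0.0
    return ratios.mean()
```

A metric of this shape drops to zero on a uniform region and grows with multiplicative noise, which is the qualitative behaviour any speckle metric, including SDI, is expected to share.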
Tennis video enrichment with content layer separation and real-time rendering in sprite plane
Jui-Hsin Lai, Shao-Yi Chien
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665160
Sport video enrichment can provide viewers with more interaction and richer user experiences. In this paper, with tennis video as an example, two techniques are proposed for video enrichment: content layer separation and real-time rendering. The video content is decomposed into different layers, such as the field, players, and ball, and the enriched video is rendered by re-integrating the information from these layers. Both are executed in the sprite plane to avoid complex 3D model construction and rendering. Experiments show that it can generate natural and seamless edited video according to viewers' requests, and that a real-time processing speed of 30 frames per second at 720×480 resolution can be achieved on a 3 GHz CPU.
A structural method for quality evaluation of desynchronization attacks in image watermarking
Angela D'Angelo, M. Barni
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665175
Geometric transformations are known to be one of the most serious threats against any digital watermarking scheme. The goal of this work is to design an objective measurement scheme for geometric distortions, in order to investigate the perceptual quality impact of geometric attacks on watermarked images. The proposed approach is a full-reference image quality metric focusing on the problem of local geometric attacks, and it is based on the use of Gabor filters. The novelty of the proposed metric is that it considers both the displacement field describing the distortion and the structure of the image. The experimental results show the good performance of the metric.
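The metric builds on Gabor filters, which combine a Gaussian envelope with an oriented sinusoidal carrier. A minimal construction of one real-valued Gabor kernel is sketched below; all parameter values are illustrative, not those used by the authors.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, wavelength=6.0):
    """Real part of a 2D Gabor kernel: an isotropic Gaussian envelope
    modulating a cosine carrier oriented at angle theta (radians).
    All parameters are illustrative defaults."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier oscillates along direction theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier
```

A bank of such kernels at several orientations and wavelengths, convolved with the reference and distorted images, gives the structure responses a metric of this kind compares.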
Intra-frame video coding using an open-loop predictive coding approach
Frederik Verbist, A. Munteanu, J. Cornelis, P. Schelkens
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665088
A novel coding approach, applying open-loop coding principles in predictive coding systems, is proposed in this paper. The proposed approach is instantiated with an intra-frame video codec employing the transform and spatial prediction modes from H.264. Additionally, a novel rate-distortion model for open-loop predictive coding is proposed and experimentally validated. Optimally allocating rate based on the proposed model provides significant gains in comparison to a straightforward rate allocation that does not account for drift. Furthermore, the proposed open-loop predictive codec provides gains of up to 2.3 dB in comparison to an equivalent closed-loop intra-frame video codec employing the transform, prediction modes, and rate allocation from H.264. This indicates that, with appropriate drift compensation, open-loop predictive coding offers the possibility of further improving the compression performance of predictive coding systems.
3-D mesh representation and retrieval using Isomap manifold
Jung-Shiong Chang, A. C. Shih, Hsueh-Yi Sean Lin, Hai-Feng Kao, H. Liao, Wen-Hsien Fang
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665137
We propose a compact 3D object representation scheme that can greatly assist the search/retrieval process in a network environment. A 3D mesh-based object is transformed into a new coordinate frame using the Isomap (isometric feature mapping) method. During the transformation process, not only is the structure of the salient parts of an object kept, but the geometrical relationships are also preserved. From the viewpoint of cognitive psychology, the data distributed on the Isomap manifold can be regarded as a set of significant features of a 3D mesh-based object. To perform efficient matching, we project the Isomap-domain 3D object onto two different 2D maps, and the two 2D feature descriptors are used as the basis to measure the degree of similarity between two 3D mesh-based objects. Experiments demonstrate that the proposed method is very effective in retrieving similar 3D models. Most importantly, the proposed 3D mesh retrieval scheme remains valid even if a 3D mesh undergoes a mesh simplification process.
Fast encoding algorithms for video coding with adaptive interpolation filters
D. Rusanovskyy, K. Ugur, M. Gabbouj
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665096
In order to compensate for the temporally changing effect of aliasing and improve the coding efficiency of video coders, adaptive interpolation filtering schemes have recently been proposed. In such schemes, the encoder computes the interpolation filter coefficients for each frame and then re-encodes the frame with the new adaptive filter. However, the coding efficiency benefit comes at the expense of increased encoding complexity due to this additional encoding pass. In this paper, we present two novel algorithms to reduce the encoding complexity of adaptive interpolation filtering schemes. The first algorithm reduces the complexity of the second encoding pass by using a very lightweight motion estimation algorithm that reuses the data already computed in the first encoding pass. The second algorithm eliminates the second coding pass and reuses the filter coefficients already computed for previous frames. Experimental results show that the proposed methods achieve between 1.5 and 2 times encoding complexity reduction with a practically negligible penalty on coding efficiency.
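The second algorithm's reuse strategy can be pictured as a single-pass loop in which each frame is encoded with the coefficients estimated while processing the previous frame, so no frame is ever encoded twice. The sketch below is a hypothetical skeleton with stub callbacks; `estimate_filter` and `encode_with_filter` are assumed names, not the paper's API.

```python
def encode_sequence(frames, estimate_filter, encode_with_filter):
    """Hypothetical single-pass scheme: frame t is encoded with the filter
    estimated from frame t-1 (None for the first frame, meaning a default
    filter), eliminating the second encoding pass entirely."""
    coeffs = None  # no adaptive filter available yet for the first frame
    out = []
    for frame in frames:
        out.append(encode_with_filter(frame, coeffs))
        coeffs = estimate_filter(frame)  # reused by the next frame
    return out
```

The trade-off is that each frame sees slightly stale coefficients; the reported results suggest the resulting coding-efficiency penalty is practically negligible.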