Reference frame modification methods in scalable video coding (SVC)
A. Naghdinezhad, F. Labeau
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662019
With the rapid development of multimedia technology, video transmission over error-prone channels is widely used. Predictive video coding can lead to temporal and spatial propagation of channel errors, which in turn severely degrades the quality of the received video. Different error-resilient methods have been proposed to address this problem. In this paper, several error-resilient methods based on reference frame modification are briefly reviewed and evaluated with the scalable extension of H.264/AVC (SVC). We propose a new method based on the hierarchical structure used in temporal scalable coding. It achieves average gains of 0.76 dB over the improved generalized source channel prediction (IGSCP) method and 2.26 dB over normal coding.
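The hierarchical structure the abstract refers to is the dyadic temporal-layer arrangement of SVC, in which key frames form layer 0 and each halving of the frame distance adds a layer; frames in higher layers are never referenced by lower layers, which limits error propagation. A minimal sketch of the standard dyadic layer assignment (illustrative only, not the authors' proposed method):

```python
def temporal_layer(frame_idx: int, gop_size: int = 8) -> int:
    """Temporal layer of a frame in a dyadic hierarchical GOP.

    Key frames (multiples of gop_size) sit in layer 0; halving the
    frame distance adds one layer. Higher-layer frames are not used
    as references by lower layers, so their loss does not propagate
    downward.
    """
    if frame_idx % gop_size == 0:
        return 0
    depth = gop_size.bit_length() - 1          # log2(gop_size)
    i = frame_idx % gop_size
    tz = (i & -i).bit_length() - 1             # trailing zeros of i
    return depth - tz

# layers for one GOP of size 8 plus the next key frame
layers = [temporal_layer(i) for i in range(9)]
```

For a GOP of 8 this yields the familiar 0-3-2-3-1-3-2-3-0 pattern of hierarchical B coding.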
Controlling virtual world by the real world devices with an MPEG-V framework
Seungju Han, Jae-Joon Han, Youngkyoo Hwang, Jungbae Kim, Won-Chul Bang, J. D. Kim, Chang-Yeong Kim
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662028
Online networked virtual worlds such as Second Life, World of Warcraft, and Lineage have become increasingly popular. Life-scale presentation of a virtual world and intuitive interaction between users and virtual worlds would offer a more natural and immersive user experience. Emerging interaction technologies, such as sensing users' facial expressions and motion as well as the real-world environment, can provide a strong connection between the real and virtual worlds. For the virtual world to be widely accepted and used, the various types of novel interaction devices need a unified interaction format between the real world and the virtual world, as well as interoperability among virtual worlds. MPEG-V Media Context and Control (ISO/IEC 23005) standardizes such connecting information. This paper provides an overview of MPEG-V from the real world to the virtual world (R2V), together with a usage example of its interfaces for controlling avatars and virtual objects in the virtual world with real-world devices. In particular, we investigate how the MPEG-V framework can be applied to the facial animation of an avatar in various types of virtual worlds.
Fusion of active and passive sensors for fast 3D capture
Qingxiong Yang, K. Tan, Bruce Culbertson, J. Apostolopoulos
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5661996
We envision a conference room of the future where depth sensing systems capture the 3D position and pose of users and enable them to interact with digital media and content shown on immersive displays. The key technical barrier is that current depth sensing systems are noisy, inaccurate, and unreliable. It is well understood that passive stereo fails in non-textured, featureless portions of a scene. Active sensors, on the other hand, are more accurate in these regions but tend to be noisy in highly textured regions. We propose a way to synergistically combine the two, creating a state-of-the-art depth sensing system that runs in near real time. In contrast, the only previously known fusion method is slow and fails to take advantage of the complementary nature of the two types of sensors.
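The complementarity described above — passive stereo reliable where the image is textured, active sensing reliable where it is flat — suggests a confidence-weighted blend. A toy sketch using local gradient magnitude as the texture cue (an illustrative weighting, not the fusion rule from the paper):

```python
import numpy as np

def fuse_depth(passive: np.ndarray, active: np.ndarray,
               image: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Blend two depth maps using local texture as a confidence cue.

    Heuristic: trust passive stereo where the image has texture
    (large local gradient magnitude) and the active sensor in flat,
    featureless regions.
    """
    gy, gx = np.gradient(image.astype(float))
    texture = np.sqrt(gx**2 + gy**2)
    w = texture / (texture.max() + eps)    # 0 = flat, 1 = textured
    return w * passive + (1.0 - w) * active
```

On a completely flat image the weight is zero everywhere, so the result falls back to the active sensor's depth, matching the intuition in the abstract.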
Visual quality of current coding technologies at high definition IPTV bitrates
Christian Keimel, Julian Habigt, Tim Habigt, Martin Rothbucher, K. Diepold
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662052
High-definition video over IP-based networks (IPTV) has become a mainstay in today's consumer environment. In most applications, encoders conforming to the H.264/AVC standard are used, but even within a single standard a wide range of coding tools is often available, and these can deliver vastly different visual quality. In this contribution we therefore evaluate different coding technologies, using different encoder settings of H.264/AVC as well as a completely different encoder, Dirac. We cover a wide range of bitrates, from ADSL to VDSL, and different content placing low and high demands on the encoders. As PSNR is not well suited to describing perceived visual quality, we conducted extensive subjective tests to determine the visual quality. Our results show that, at currently common bitrates, the visual quality can be more than doubled by using different coding tools within the same coding technology.
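The PSNR metric the authors set aside in favor of subjective tests is straightforward to compute, which is precisely why it is so widely (over)used. A minimal implementation for 8-bit frames:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two frames.

    PSNR = 10 * log10(peak^2 / MSE). The paper's point is that this
    number correlates poorly with perceived quality, which is why the
    authors ran subjective tests instead of relying on it.
    """
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / mse)
```

Two encodes with identical PSNR can look very different to viewers, since MSE weights all pixel errors equally regardless of their perceptual visibility.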
Data hiding of motion information in chroma and luma samples for video compression
Jean-Marc Thiesse, Joël Jung, M. Antonini
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662022
2010 appears to be the launching date for new compression activities intended to challenge the current video compression standard, H.264/AVC. Several improvements to this standard are already known, such as competition-based motion vector prediction, but the targeted 50% bitrate saving at equivalent quality has not yet been achieved. In this context, this paper proposes to reduce the signaling information resulting from this vector competition by using data-hiding techniques. As data hiding and video compression traditionally have contradictory goals, a study of data hiding is performed first. Then an efficient way of using data hiding for video compression is proposed. The main idea is to hide the indices in appropriately selected chroma and luma transform coefficients. To minimize the prediction errors, the modification is performed via a rate-distortion optimization. Objective improvements (up to 2.3% bitrate saving) and a subjective assessment of chroma loss are reported and analyzed for several sequences.
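A common way to hide an index bit in transform coefficients — and a simplified stand-in for the rate-distortion-optimized embedding the paper describes — is to force the parity of the quantized coefficient sum to the bit value, nudging a single coefficient by one level only when the parity is wrong. A hedged sketch of that generic trick (not the authors' exact embedding rule):

```python
def hide_index_bit(coeffs: list[int], bit: int) -> list[int]:
    """Embed one bit as the parity of the quantized coefficient sum.

    If the parity already matches, the block is untouched (zero cost);
    otherwise the last nonzero coefficient (highest frequency, least
    visible) is nudged by one quantization level away from zero.
    """
    out = list(coeffs)
    if sum(out) % 2 == bit % 2:
        return out
    for k in range(len(out) - 1, -1, -1):
        if out[k] != 0:
            out[k] += 1 if out[k] > 0 else -1
            return out
    out[-1] = 1        # all-zero block: create a minimal coefficient
    return out

def read_index_bit(coeffs: list[int]) -> int:
    """Decoder side: the hidden bit is simply the sum's parity."""
    return sum(coeffs) % 2
```

The decoder needs no side information, which is what makes the signaling "free"; the cost is the occasional one-level coefficient change, which the paper selects via rate-distortion optimization.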
Private content identification: Performance-privacy-complexity trade-off
S. Voloshynovskiy, O. Koval, F. Beekhof, F. Farhadzadeh, T. Holotyak
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5661994
In light of the recent development of multimedia and networking technologies, an exponentially increasing amount of content is available via various public services, which is why content identification attracts considerable attention. One possible technology for content identification is based on digital fingerprinting. When establishing information-theoretic limits for this application, it is usually assumed that the codewords are of infinite length and that a jointly typical decoder is used in the analysis. These assumptions are an over-generalization for the majority of practical applications, and the impact of finite length on those limits remains an open and largely unexplored problem. Furthermore, leakage of privacy-related information to third parties through the storage, distribution, and sharing of fingerprinting data is an emerging research issue that should be addressed carefully. This paper presents an information-theoretic analysis of finite-length digital fingerprinting under privacy constraints and establishes a link between the considered setup and Forney's erasure/list decoding [1]. Finally, complexity issues of reliable identification in large databases are addressed.
A hierarchical statistical model for object classification
A. Bakhtiari, N. Bouguila
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662071
In many applications it is necessary to classify the images in a database accurately and at acceptable speed. The main problem is to assign images to the right categories, which becomes more challenging for large databases with many categories and subcategories. In this paper we propose a novel classification method based on an adapted hierarchical Dirichlet generative model originally proposed for document (corpus) classification. To adapt the model to image data, we use the bag-of-visual-words representation. We show that, properly applied, the model achieves adequate results for hierarchical image classification. Experimental results are presented and discussed to show the merits of the proposed approach.
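The bag-of-visual-words step is what lets a text-corpus model be reused for images: local descriptors are quantized against a visual codebook and the image becomes a word histogram. A minimal sketch of that quantization (codebook construction, e.g. by k-means, is assumed already done):

```python
import numpy as np

def bovw_histogram(descriptors: np.ndarray,
                   codebook: np.ndarray) -> np.ndarray:
    """Quantize local descriptors to their nearest codeword and return
    a normalized visual-word histogram.

    descriptors: (n, d) array of local features (e.g. SIFT)
    codebook:    (k, d) array of visual words
    """
    # squared distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                 # nearest codeword index
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Each image is then a distribution over "words", exactly the input a hierarchical Dirichlet document model expects.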
Color transfer for complex content images based on intrinsic component
Wan-Chien Chiou, Yi-Lei Chen, Chiou-Ting Hsu
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662011
This paper proposes an automatic color transfer method, based on intrinsic components, for images with complex content. Although several automatic color transfer methods have been proposed that include region information and/or use multiple references, these methods tend to become ineffective on images with complex content and lighting variation. Our goal is to incorporate intrinsic components to better characterize the local organization within an image and to reduce color-bleeding artifacts across complex regions. Using intrinsic information, we first represent each image at the region level and determine the best-matched reference region for each target region. Next, we perform color transfer between the best-matched region pairs, and weighted color transfer for pixels across complex regions, in a de-correlated color space. Both subjective and objective evaluations of our experiments demonstrate that the proposed method outperforms existing methods.
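The per-region transfer step builds on the classic statistics-matching idea of Reinhard et al.: in a de-correlated color space, shift and scale each channel of the target region so its mean and standard deviation match the reference region. A sketch of that base operation (the intrinsic-component matching and cross-region weighting from the paper are not reproduced here):

```python
import numpy as np

def transfer_region(target: np.ndarray,
                    reference: np.ndarray) -> np.ndarray:
    """Per-channel mean/std matching between a matched region pair.

    Both inputs are (h, w, 3) arrays in a de-correlated color space
    (e.g. lab); the target is remapped so its channel statistics
    match the reference region's.
    """
    t_mu, t_sd = target.mean(axis=(0, 1)), target.std(axis=(0, 1))
    r_mu, r_sd = reference.mean(axis=(0, 1)), reference.std(axis=(0, 1))
    return (target - t_mu) * (r_sd / (t_sd + 1e-8)) + r_mu
```

Working in a de-correlated space matters because the channels can then be matched independently without introducing color casts.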
Real-time particle filtering with heuristics for 3D motion capture by monocular vision
David Antonio Gómez Jáuregui, P. Horain, Manoj Kumar Rajagopal, S. S. Karri
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662008
Particle filtering is known as a robust approach to motion tracking by vision, at the cost of heavy computation in a high-dimensional pose space. In this work, we describe a number of heuristics that we demonstrate to jointly improve robustness and real-time performance for motion capture. Marker-less 3D human motion capture by monocular vision can be achieved in real time by registering a 3D articulated model on a video. First, we search the high-dimensional space of 3D poses by generating new hypotheses (or particles) with equivalent 2D projections through kinematic flipping. Second, we use a semi-deterministic particle prediction based on local optimization. Third, we deterministically resample the probability distribution for a more efficient selection of particles. Particles (poses) are evaluated using a match cost function and penalized with a Gaussian pose probability distribution learned off-line. To achieve real time, the measurement step is parallelized on a GPU using the OpenCL API. We present experimental results demonstrating robust real-time 3D motion capture with a consumer computer and webcam.
An improved foresighted resource reciprocation strategy for multimedia streaming applications
Ester Gutiérrez, Hyunggon Park, P. Frossard
Pub Date: 2010-12-10 | DOI: 10.1109/MMSP.2010.5662055
In this paper, we present a solution for efficient multimedia streaming over P2P networks based on the foresighted resource reciprocation strategy. We study several priority functions that explicitly consider the timing constraints and the importance of each data segment in terms of multimedia quality, and we incorporate them into the foresighted resource reciprocation strategy, enabling peers to enhance their streaming capability. Simulation results confirm that the proposed approach outperforms existing algorithms such as tit-for-tat in BitTorrent and the BiToS solution.
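A priority function of the kind studied here combines two ingredients the abstract names: how soon a segment is needed for playback and how much it contributes to decoded quality. A toy illustration of combining them (the functional forms and the `playback_margin_s` parameter are hypothetical, not taken from the paper):

```python
def piece_priority(deadline_s: float, quality_gain: float,
                   playback_margin_s: float = 10.0) -> float:
    """Illustrative priority for a media segment.

    deadline_s:    seconds until the segment's playback deadline
    quality_gain:  its contribution to decoded quality (e.g. dB)

    Segments past their deadline are useless (priority 0); among
    feasible segments, sooner deadlines and larger quality gains win.
    """
    if deadline_s <= 0:
        return 0.0
    urgency = min(1.0, playback_margin_s / deadline_s)
    return urgency * quality_gain
```

This is the key departure from plain tit-for-tat or rarest-first scheduling, which ignore both deadlines and the unequal quality contribution of segments.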