Improved foveation- and saliency-based visual attention prediction under a quality assessment task
Milind S. Gide, Lina Karam
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263871 | Pages: 200-205
Image quality assessment is one of many applications that can be aided by computational saliency models. Existing visual saliency models have not been extensively tested in a quality assessment context, and they are typically geared towards predicting saliency in non-distorted images. Recent work has also focused on mimicking the human visual system in order to predict fixation points from saliency maps. One such technique (GAFFE), which uses foveation, has been found to perform well on non-distorted images. This work extends the foveation framework by integrating it with saliency maps from well-known saliency models. The performance of the foveated saliency models is evaluated by comparison with human ground-truth eye-tracking data. For comparison, the performance of the original non-foveated saliency predictions is also presented. It is shown that integrating saliency models with a foveation-based fixation-finding framework significantly improves the prediction performance of existing saliency models across different distortion types. It is also found that, under this foveation-based framework, the information-maximization-based saliency maps consistently perform best across different distortion types and levels.
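As a rough illustration of the foveation idea (not the GAFFE-based method of the paper; the function names, the Gaussian falloff and all parameter values are assumptions), the sketch below greedily selects fixation points by weighting a saliency map with an eccentricity falloff around the current fixation and suppressing already-visited locations:

import numpy as np

def foveated_fixations(saliency, n_fixations=5, sigma=64.0, inhibit=16):
    """Greedy fixation selection: weight the saliency map by a Gaussian
    foveal falloff centred on the current fixation, jump to the maximum,
    then apply inhibition of return around the visited point."""
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    s = saliency.astype(float).copy()
    fixations = [(h // 2, w // 2)]          # start at the image centre
    for _ in range(n_fixations - 1):
        cy, cx = fixations[-1]
        falloff = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        ny, nx = np.unravel_index(np.argmax(s * falloff), s.shape)
        fixations.append((int(ny), int(nx)))
        s[max(0, ny - inhibit):ny + inhibit, max(0, nx - inhibit):nx + inhibit] = 0.0
    return fixations

print(foveated_fixations(np.random.rand(480, 640)))

In the paper the eccentricity-dependent weighting follows the GAFFE foveation model and the saliency maps come from existing detectors; the Gaussian here merely stands in for that weighting.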
{"title":"Improved foveation- and saliency-based visual attention prediction under a quality assessment task","authors":"Milind S. Gide, Lina Karam","doi":"10.1109/QoMEX.2012.6263871","DOIUrl":"https://doi.org/10.1109/QoMEX.2012.6263871","url":null,"abstract":"Image quality assessment is one application out of many that can be aided by the use of computational saliency models. Existing visual saliency models have not been extensively tested under a quality assessment context. Also, these models are typically geared towards predicting saliency in non-distorted images. Recent work has also focussed on mimicking the human visual system in order to predict fixation points from saliency maps. One such technique (GAFFE) that uses foveation has been found to perform well for non-distorted images. This work extends the foveation framework by integrating it with saliency maps from well known saliency models. The performance of the foveated saliency models is evaluated based on a comparison with human ground-truth eye-tracking data. For comparison, the performance of the original non-foveated saliency predictions is also presented. It is shown that the integration of saliency models with a foveation based fixation finding framework significantly improves the prediction performance of existing saliency models over different distortion types. It is also found that the information maximization based saliency maps perform the best consistently over different distortion types and levels under this foveation based framework.","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"44 1","pages":"200-205"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82847809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating the everyday-life context in subjective video quality experiments
W. V. D. Broeck, An Jacobs, N. Staelens
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263848 | Pages: 19-24
Controlled subjective quality experiments are a well-known method for making decisions on improving the Quality of Service (QoS) of video streams. Recently it has become clear that, from the point of view of the consumer or user, the Quality of Experience (QoE) is more relevant and can influence the optimal QoS as determined in a lab setting. The measurement of QoS parameters in a lab setting does not take into account the specific context of the practice under examination (e.g. watching video content). In this paper we discuss a method of contextualized subjective quality experiments that we applied in different research projects to complement standardized lab experiments. The strength of this method is that video and audio quality are assessed in the real-life context of the user, i.e. his or her natural habitat in which the behavior or practice normally takes place. We provide an overview of how we applied the contextualized research approach in two cases. Next, we discuss the method's strengths and weaknesses and suggest a refinement of the methodology.
{"title":"Integrating the everyday-life context in subjective video quality experiments","authors":"W. V. D. Broeck, An Jacobs, N. Staelens","doi":"10.1109/QoMEX.2012.6263848","DOIUrl":"https://doi.org/10.1109/QoMEX.2012.6263848","url":null,"abstract":"Controlled subjective quality experiments are a well-known method to make decisions on the improvement of Quality of Service (QoS) of video streams. Recently it became clear that from a point of view of the consumer or user, the Quality of Experience (QoE) is more relevant and can influence the optimal QoS as determined in a lab-setting. The measurement of QoS parameters in a lab-setting does not take into account the specific context of the practice that is under examination (e,g. watching video content). In this paper we discuss a method of contextualized subjective quality experiments as we applied in different research projects in complement to the standardized lab-experiments. The strength of this method is that video and audio quality is assessed in the real-life context of the user, i.e. his or her natural habitat in which the behavior or practice normally takes place. We provide an overview of how we applied the contextualized research approach in two cases. Next we discuss the method's strengths and weaknesses and suggest a refinement of the methodology.","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"4 1","pages":"19-24"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77276593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ensuring Quality of Experience for markerless image recognition applied to print media content
S. Davis, E. Cheng, C. Ritz, I. Burnett
Pub Date: 2012-07-05 | DOI: 10.1109/QOMEX.2012.6263857 | Pages: 158-163
This paper investigates how minimal user interaction paradigms and markerless image recognition technologies can be applied to matching print media content to online digital proofs. By linking print material to online content, users can enhance their experience of traditional print media with updated online content, videos, interactive online features, etc. The proposed approach extracts features from images/text captured by a mobile device camera to form 'fingerprints' that are used to find matching images/text within a limited test set. An important criterion for these applications is to ensure that the user Quality of Experience (QoE), particularly in terms of matching accuracy and time, is robust to the variety of conditions typically encountered in practical scenarios. In this paper, the performance of a number of computer vision techniques that extract the image features and form the fingerprints is analysed and compared. Both computer simulation tests and mobile device experiments in realistic user conditions are conducted to study the effectiveness of the techniques under the scale, rotation, blur and lighting variations typically encountered by a user.
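A minimal sketch of the fingerprint-and-match idea, assuming OpenCV's ORB descriptors and a brute-force Hamming matcher (the paper compares several feature-extraction techniques; the file paths, parameter values and the 0.75 ratio test here are illustrative assumptions):

import cv2

def fingerprint(path, orb):
    """Binary descriptors extracted from one page image act as its fingerprint."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = orb.detectAndCompute(img, None)
    return descriptors

def best_match(query_path, reference_paths):
    """Return the reference page whose fingerprint shares the most good matches
    with the camera capture."""
    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    query = fingerprint(query_path, orb)
    scores = {}
    for ref in reference_paths:
        ref_desc = fingerprint(ref, orb)
        if query is None or ref_desc is None:
            scores[ref] = 0
            continue
        pairs = matcher.knnMatch(query, ref_desc, k=2)
        good = [p for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        scores[ref] = len(good)
    return max(scores, key=scores.get), scores

page, scores = best_match("capture.jpg", ["page_01.png", "page_02.png"])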
{"title":"Ensuring Quality of Experience for markerless image recognition applied to print media content","authors":"S. Davis, E. Cheng, C. Ritz, I. Burnett","doi":"10.1109/QOMEX.2012.6263857","DOIUrl":"https://doi.org/10.1109/QOMEX.2012.6263857","url":null,"abstract":"This paper investigates how minimal user interaction paradigms and markerless image recognition technologies can be applied to matching print media content to online digital proofs. By linking print material to online content, users can enhance their experience with traditional forms of print media with updated online content, videos, interactive online features etc. The proposed approach is based on extracting features from images/text from mobile device camera images to form `fingerprints' that are used to find matching images/text within a limited test set. An important criterion for these applications is to ensure that the user Quality of Experience (QoE), particularly in terms of matching accuracy and time, is robust to a variety of conditions typically encountered in practical scenarios. In this paper, the performance of a number of computer vision techniques that extract the image features and form the fingerprints are analysed and compared. Both computer simulation tests and mobile device experiments in realistic user conditions are conducted to study the effectiveness of the techniques when considering scale, rotation, blur and lighting variations typically encountered by a user.","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"29 1","pages":"158-163"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75982417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
QoE in 10 seconds: Are short video clip lengths sufficient for Quality of Experience assessment?
Peter Fröhlich, S. Egger, R. Schatz, M. Mühlegger, Kathrin Masuch, B. Gardlo
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263851 | Pages: 242-247
Standard methodologies for subjective video quality testing are based on very short test clips of 10 seconds. But is this duration sufficient for Quality of Experience assessment? In this paper, we present the results of a comparative user study that tests whether quality perception and rating behavior differ when video clips are longer. We did not find strong overall MOS differences between clip durations, but the three longer durations (60, 120 and 240 seconds) were rated slightly more positively than the three shorter durations under comparison (10, 15 and 30 seconds). This difference was most apparent when high-quality videos were presented. However, we did not find an interaction between content class and the duration effect itself. Finally, methodological implications of these results are discussed.
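For readers who want to reproduce this kind of analysis, a minimal sketch (with entirely hypothetical ratings, not the study's data) of comparing MOS across clip durations with a one-way ANOVA:

import numpy as np
from scipy import stats

# hypothetical 5-point ACR ratings grouped by clip duration in seconds
ratings = {
    10: [3, 4, 3, 4, 3, 3],
    15: [3, 3, 4, 4, 3, 4],
    30: [4, 3, 4, 3, 4, 3],
    60: [4, 4, 4, 3, 4, 4],
    120: [4, 4, 5, 4, 4, 4],
    240: [4, 5, 4, 4, 4, 5],
}

mos = {d: float(np.mean(r)) for d, r in ratings.items()}     # MOS per duration
f_stat, p_value = stats.f_oneway(*ratings.values())          # overall duration effect
print(mos, f_stat, p_value)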
{"title":"QoE in 10 seconds: Are short video clip lengths sufficient for Quality of Experience assessment?","authors":"Peter Fröhlich, S. Egger, R. Schatz, M. Mühlegger, Kathrin Masuch, B. Gardlo","doi":"10.1109/QoMEX.2012.6263851","DOIUrl":"https://doi.org/10.1109/QoMEX.2012.6263851","url":null,"abstract":"Standard methodologies for subjective video quality testing are based on very short test clips of 10 seconds. But is this duration sufficient for Quality of Experience assessment? In this paper, we present the results of a comparative user study that tests whether quality perception and rating behavior may be different if video clip durations are longer. We did not find strong overall MOS differences between clip durations, but the three longer clips (60, 120 and 240 seconds) were rated slightly more positively than the three shorter durations under comparison (10, 15 and 30 seconds). This difference was most apparent when high quality videos were presented. However, we did not find an interaction between content class and the duration effect itself. Furthermore, methodological implications of these results are discussed.","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"8 1","pages":"242-247"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79766199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NAMA3DS1-COSPAD1: Subjective video quality assessment database on coding conditions introducing freely available high quality 3D stereoscopic sequences
Matthieu Urvoy, M. Barkowsky, Romain Cousseau, Yao Koudota, V. Ricordel, P. Callet, Jesús Gutiérrez, N. García
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263847 | Pages: 109-114
Research in stereoscopic 3D coding, transmission and subjective assessment methodology depends largely on the availability of source content that can be used in cross-lab evaluations. While several studies have already been presented using proprietary content, comparisons between them are difficult because different contents are used. Therefore, this paper introduces in detail a freely available dataset of high-quality, Full-HD stereoscopic sequences shot with a semi-professional 3D camera. The content was designed to be suitable for a wide variety of applications, including high-quality studies. A set of depth maps was calculated from the stereoscopic pairs. As an application example, a subjective assessment was performed using coding and spatial degradations, following the Absolute Category Rating with Hidden Reference (ACR-HR) method. The observers were instructed to vote on video quality only. The results of this experiment are also freely available and are presented in this paper as a first step towards objective video quality measurement for 3DTV.
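In the ACR-HR method (ITU-T P.910), per-observer differential scores are formed by subtracting the rating of the hidden reference from the rating of the processed sequence. A minimal sketch, assuming the common convention of crushing out-of-range values back to the 1-5 scale and hypothetical ratings:

def acr_hr_dmos(ratings_pvs, ratings_ref):
    """Differential viewer scores for ACR with Hidden Reference:
    DV = V(PVS) - V(REF) + 5, limited to the 1..5 range, then averaged."""
    dvs = [min(5, max(1, v_pvs - v_ref + 5))
           for v_pvs, v_ref in zip(ratings_pvs, ratings_ref)]
    return sum(dvs) / len(dvs)

# one hypothetical degraded sequence rated by five observers,
# together with their ratings of the corresponding hidden reference
print(acr_hr_dmos([3, 4, 3, 2, 4], [5, 5, 4, 5, 5]))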
{"title":"NAMA3DS1-COSPAD1: Subjective video quality assessment database on coding conditions introducing freely available high quality 3D stereoscopic sequences","authors":"Matthieu Urvoy, M. Barkowsky, Romain Cousseau, Yao Koudota, V. Ricordel, P. Callet, Jesús Gutiérrez, N. García","doi":"10.1109/QoMEX.2012.6263847","DOIUrl":"https://doi.org/10.1109/QoMEX.2012.6263847","url":null,"abstract":"Research in stereoscopic 3D coding, transmission and subjective assessment methodology depends largely on the availability of source content that can be used in cross-lab evaluations. While several studies have already been presented using proprietary content, comparisons between the studies are difficult since discrepant contents are used. Therefore in this paper, a freely available dataset of high quality Full-HD stereoscopic sequences shot with a semiprofessional 3D camera is introduced in detail. The content was designed to be suited for usage in a wide variety of applications, including high quality studies. A set of depth maps was calculated from the stereoscopic pair. As an application example, a subjective assessment has been performed using coding and spatial degradations. The Absolute Category Rating with Hidden Reference method was used. The observers were instructed to vote on video quality only. Results of this experiment are also freely available and will be presented in this paper as a first step towards objective video quality measurement for 3DTV.","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"152 1","pages":"109-114"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86268773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing image quality in MRI: On the evaluation of k-space trajectories for under-sampled MR acquisition
H. Luong, B. Goossens, J. Aelterman, L. Platisa, W. Philips
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263881 | Pages: 25-26
Modern magnetic resonance (MR) applications require high-speed acquisitions. One possible way to accelerate the process is to acquire the data along a k-space trajectory at a sub-Nyquist rate and then reconstruct the image with an iterative non-linear reconstruction algorithm. The choice of k-space trajectory and its parameters has a large influence on image quality. For physicians it is more important to optimize the reconstructed image, and thus the trajectory, for diagnostic tasks than to create aesthetically pleasing images. Task-specific model observers have been proposed to replace time-consuming and costly human observer experiments. Very recently, we developed a novel model observer for signal-known-statistically tasks, which can also measure several image quality factors such as noise, blur and contrast without reference images. In this paper, we discuss the image quality of several k-space trajectories in a pilot study. We find that traditionally used measures such as RMSE or PSNR do not correlate with diagnostic image quality; alternative measures are provided by our newly developed model observers.
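For reference, the full-reference measures that the study found to correlate poorly with diagnostic quality are straightforward to compute; a minimal sketch, assuming magnitude images with a known nominal peak intensity:

import numpy as np

def rmse(reference, reconstruction):
    """Root-mean-square error between a reference and a reconstructed image."""
    diff = reference.astype(float) - reconstruction.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB; 'peak' is the nominal maximum intensity."""
    e = rmse(reference, reconstruction)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)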
{"title":"Optimizing image quality in MRI: On the evaluation of k-space trajectories for under-sampled MR acquisition","authors":"H. Luong, B. Goossens, J. Aelterman, L. Platisa, W. Philips","doi":"10.1109/QoMEX.2012.6263881","DOIUrl":"https://doi.org/10.1109/QoMEX.2012.6263881","url":null,"abstract":"Modern magnetic resonance (MR) applications require high-speed acquisitions. One of the possible ways to accelerate the process is to acquire the data along a k-space trajectory at sub-Nyquist rate and then reconstruct the image by an iterative non-linear reconstruction algorithm. The choice of k-space trajectory and its parameters has a large influence on the image quality. For physicians it is more important to optimize the reconstructed image and thus the trajectory for diagnostic tasks than creating aesthetically pleasing images. Task-specific model observers have been proposed in order to replace the time-consuming and costly human observer experiments. Very recently, we have developed a novel model observer for signal-known-statistically tasks, which can also measure several image quality factors such as noise, blur and contrast without reference images. In this paper, we discuss the image quality for several k-space trajectories in a pilot study. We find that traditionally used measures such as RMSE or PSNR do not correlate with the diagnostic image quality. Alternative measures are brought through our newly developed model observers.","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"230 1","pages":"25-26"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83689092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task-based subject validation: Reliability metrics
L. Janowski
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263863 | Pages: 182-187
The quality obtained from a CCTV or telemedicine system has to be good enough to recognize a face or a medical pathology correctly and with high probability. To measure what “good enough” means, subjective experiments are used. One of the problems in subjective experiment data analysis is the subjects' reliability or, more precisely, each individual subject's reliability. In this paper, two metrics that can be used to detect unreliable subjects in a task-based subjective experiment are presented. A task-based subjective experiment is an experiment in which a subject performs a task (recognizes a person, characters, a pathology, etc.). Metrics enabling measurement of a subject's reliability are referred to here as reliability metrics. Reliability metrics should be considered as part of a standard for task-based subjective experiments such as ITU P.912 [1].
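As a generic illustration of subject screening (not the two metrics proposed in the paper), one common approach correlates each subject's scores with the average of the remaining subjects; low correlations flag candidates for exclusion. A minimal sketch, with the array layout assumed here:

import numpy as np

def subject_consistency(scores):
    """scores: (n_subjects, n_stimuli) array of per-stimulus task outcomes
    or ratings. Returns each subject's Pearson correlation with the
    leave-one-out mean of the other subjects."""
    consistency = []
    for i in range(scores.shape[0]):
        others = np.delete(scores, i, axis=0).mean(axis=0)
        consistency.append(float(np.corrcoef(scores[i], others)[0, 1]))
    return consistency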
{"title":"Task-based subject validation: Reliability metrics","authors":"L. Janowski","doi":"10.1109/QoMEX.2012.6263863","DOIUrl":"https://doi.org/10.1109/QoMEX.2012.6263863","url":null,"abstract":"The quality obtained from a CCTV or a telemedicine system has to be good enough to correctly and with high probability recognize a face or a medical pathology. In order to measure “good enough” term subjective experiments are used. One of the problems related to subjective experiment data analysis is the subjects' reliability or, more precisely, each individual subject's reliability. In this paper two metrics which can be used to detect unreliable subjects in case of a task-based subjective experiment are presented. A task-based subjective experiment is an experiment in which a subject performs a task (recognizes a person, characters, pathology, etc.). In this paper metrics enabling measurement of a subject's reliability are referred to as reliability metrics. Reliability metrics should be considered as a part of a standard of a task based subjective experiment like ITU P.912 [1].","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"1 1","pages":"182-187"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90618537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perception of low-quality videos analyzed by means of electroencephalography
S. Arndt, Jan-Niklas Antons, R. Schleicher, S. Möller, G. Curio
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263836 | Pages: 284-289
The subjective evaluation of video quality mostly relies on opinion tests in which participants judge perceived quality on rating scales. However, these methods provide limited insight into how quality judgments are formed in the brain. In past studies we showed the general feasibility of complementing opinion tests with physiological measures, such as electroencephalography (EEG), for pure video and pure audio experiments. To establish EEG as a reliable complementary measurement method in standard quality rating tests, the next step is to validate the method in the audiovisual domain. For this purpose we conducted an experiment using audiovisual stimuli degraded in both modalities. We show that the more degraded a video is, the earlier and the higher the P300 amplitude rises. In addition, the peak amplitudes are highly correlated with the audiovisual Mean Opinion Score (MOS).
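A rough sketch of this kind of analysis (the 250-500 ms latency window and the data shapes are typical choices assumed here, not necessarily those of the study): extract the P300 peak amplitude from each condition's averaged ERP and correlate the peaks with the corresponding MOS values.

import numpy as np
from scipy.stats import pearsonr

def p300_peak(erp, times, window=(0.25, 0.50)):
    """Peak amplitude of a baseline-corrected ERP within a P300 latency window (s)."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[mask].max())

def p300_mos_correlation(erps, times, mos):
    """erps: (n_conditions, n_samples) averaged ERPs, one per degradation level;
    mos: the corresponding audiovisual MOS values. Returns (r, p)."""
    peaks = [p300_peak(erp, times) for erp in erps]
    return pearsonr(peaks, mos)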
{"title":"Perception of low-quality videos analyzed by means of electroencephalography","authors":"S. Arndt, Jan-Niklas Antons, R. Schleicher, S. Möller, G. Curio","doi":"10.1109/QoMEX.2012.6263836","DOIUrl":"https://doi.org/10.1109/QoMEX.2012.6263836","url":null,"abstract":"The subjective evaluation of video quality mostly relies on opinion tests in which test participants judge perceived quality on rating scales. However, these methods provide limited insight how the quality judgments are being formed in the brain. In past studies we showed the general feasibility to complement opinion tests with physiological measures, as the electroencephalography (EEG), for pure video and audio experiments. To establish EEG as a reliable complement measurement method in standard quality rating tests, the next step is to validate the method in the audiovisual domain. For this purpose we conducted an experiment using audiovisual stimuli and degraded these in both modalities. We show that the more degraded a video is the earlier and higher the P300 amplitude is rising. In addition, the peak amplitudes are highly correlated with the audiovisual Mean Opinion Score (MOS).","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"24 1","pages":"284-289"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79141541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of mobile devices and usage location on perceived multimedia quality
Andrew Catellier, M. Pinson, William Ingram, Arthur A. Webster
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263834 | Pages: 39-44
We explore the impact on quality when audiovisual content is delivered to different mobile devices. Subjects were shown the same sequences on five different mobile devices and a broadcast-quality television. Factors influencing quality ratings include video resolution, viewing distance, and monitor size. The analysis shows how subjects' perception of multimedia quality differs when content is viewed on different mobile devices. In addition, quality ratings from laboratory and simulated living-room sessions were statistically equivalent.
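Statistical equivalence of laboratory and living-room ratings can be checked, for example, with two one-sided tests (TOST); the equivalence margin, the pooled degrees-of-freedom shortcut and the ratings below are assumptions for illustration, not the study's procedure:

import numpy as np
from scipy import stats

def tost_equivalence(a, b, margin=0.5):
    """Two one-sided t-tests: the group means are declared equivalent within
    +/- margin if both one-sided p-values fall below the chosen alpha.
    Returns the larger of the two p-values (simple pooled-df approximation)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    df = len(a) + len(b) - 2
    p_lower = 1.0 - stats.t.cdf((diff + margin) / se, df)   # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)          # H0: diff >= +margin
    return max(p_lower, p_upper)

print(tost_equivalence([4.1, 3.9, 4.3, 4.0, 4.2], [4.0, 4.1, 4.2, 3.8, 4.1]))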
{"title":"Impact of mobile devices and usage location on perceived multimedia quality","authors":"Andrew Catellier, M. Pinson, William Ingram, Arthur A. Webster","doi":"10.1109/QoMEX.2012.6263834","DOIUrl":"https://doi.org/10.1109/QoMEX.2012.6263834","url":null,"abstract":"We explore the quality impact when audiovisual content is delivered to different mobile devices. Subjects were shown the same sequences on five different mobile devices and a broadcast quality television. Factors influencing quality ratings include video resolution, viewing distance, and monitor size. Analysis shows how subjects' perception of multimedia quality differs when content is viewed on different mobile devices. In addition, quality ratings from laboratory and simulated living room sessions were statistically equivalent.","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"5 1","pages":"39-44"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79003872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Content categorization based on implicit and explicit user feedback: Combining self-reports with EEG emotional state analysis
U. Reiter, K. Moor
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263850 | Pages: 266-271
We present a study that combines and compares explicit (questionnaire-generated) and implicit (EEG-based) feedback from test subjects on perceptual dimensions of different types of audiovisual content. We found significant differences in the importance and evaluation of perceptual-, viewer- and clip-related dimensions across a limited set of contents. The results suggest that additional bio-feedback data can help to increase the validity and robustness of user feedback in Quality of Experience (QoE) and content categorization research.
{"title":"Content categorization based on implicit and explicit user feedback: Combining self-reports with EEG emotional state analysis","authors":"U. Reiter, K. Moor","doi":"10.1109/QoMEX.2012.6263850","DOIUrl":"https://doi.org/10.1109/QoMEX.2012.6263850","url":null,"abstract":"We present a study that combines and compares explicit (questionnaire-generated) and implicit (EEG-based) feedback from test subjects on perceptual dimensions of different types of audiovisual content. We found significant differences in importance and evaluation of perceptual-, viewer-and clip-related dimensions across a limited set of contents. The results suggest that additional bio-feedback data can help to increase validity and robustness of user feedback in Quality of Experience (QoE) and content categorization research.","PeriodicalId":6303,"journal":{"name":"2012 Fourth International Workshop on Quality of Multimedia Experience","volume":"32 1","pages":"266-271"},"PeriodicalIF":0.0,"publicationDate":"2012-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82322719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}