Color image database for evaluation of image quality metrics
N. Ponomarenko, V. Lukin, K. Egiazarian, J. Astola, M. Carli, F. Battisti
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665112
In this contribution, a new image database for testing full-reference image quality assessment metrics is presented. It comprises 1700 test images (25 reference images, 17 types of distortion per reference image, and 4 levels per distortion type). Using this database, 654 observers from three countries (Finland, Italy, and Ukraine) carried out about 400,000 individual quality judgments (more than 200 per distorted image). The resulting mean opinion scores can be used to evaluate the performance of visual quality metrics, to compare metrics, and to design new ones. The database, with the testing results, is freely available.
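The standard way to use such mean opinion scores (MOS) is to correlate them with a candidate metric's outputs. A minimal sketch of that evaluation step, using Spearman rank correlation; the metric scores and MOS values below are hypothetical, since the database itself supplies the real MOS:

```python
def rank(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank-order correlation = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical metric scores vs. MOS for five distorted images.
metric = [0.92, 0.81, 0.65, 0.40, 0.33]
mos    = [6.8, 6.1, 5.0, 3.2, 2.9]
print(spearman(metric, mos))  # 1.0 here: the metric ranks the images exactly as observers do
```

A high rank correlation means the metric orders distorted images the same way human observers do, which is the comparison the database is built to support.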
Informed stego-systems in active warden context: Statistical undetectability and capacity
S. Braci, C. Delpha, R. Boyer, Gaëtan Le Guelvouit
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665167
Several authors have studied stego-systems based on the Costa scheme, but few have given both theoretical and experimental justifications of their performance in an active warden context. This paper provides a comparative steganographic study of three informed stego-systems under an active warden: the scalar Costa scheme, trellis-coded quantization, and the spread-transform scalar Costa scheme. Through analytical formulations and experimental evaluations, we show the advantages and limits of each scheme in terms of statistical undetectability and capacity in the active warden case. Undetectability is quantified as the distance between the stego-signal distribution and the cover distribution, measured by the Kullback-Leibler divergence.
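The Kullback-Leibler measure used here compares the distributions of cover and stego signals; a sketch of that computation on normalized histograms (the bin values are hypothetical, not data from the paper):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two discrete distributions, e.g. normalized
    histograms of cover and stego signals. eps guards empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical 4-bin histograms: a cover signal vs. a slightly perturbed stego signal.
cover = [0.25, 0.25, 0.25, 0.25]
stego = [0.24, 0.26, 0.25, 0.25]
d = kl_divergence(cover, stego)
print(d)  # a small positive value: the embedding is nearly undetectable
```

The smaller this divergence, the harder it is for a warden to distinguish stego content from covers statistically, which is exactly the undetectability axis along which the three schemes are compared.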
Image tampering detection by blocking periodicity analysis in JPEG compressed images
Yi-Lei Chen, Chiou-Ting Hsu
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665184
Since JPEG is a widely used image compression standard, tampering detection in JPEG images plays an important role. The artifacts introduced by lossy JPEG compression can be seen as an inherent signature of compressed images. In this paper, we propose a new approach to analyzing blocking periodicity by 1) developing a linear dependency model of pixel differences, 2) constructing a probability map of each pixel's conformance to this model, and 3) extracting a peak window from the Fourier spectrum of the probability map. We show that the peak energy distributions of singly and doubly compressed images behave very differently. We exploit this property and derive statistical features from the peak windows to classify whether an image has been tampered with by cropping and recompression. Experimental results demonstrate the validity of the proposed approach.
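The core observation, that JPEG's 8x8 blocking leaves a periodic signature visible in the Fourier spectrum of pixel differences, can be illustrated in one dimension. This is a toy analogue, not the paper's 2D pipeline: a synthetic signal with a small jump at every 8-sample block boundary, whose difference spectrum peaks at the period-8 bins.

```python
import cmath
import random

def dft_mag(x):
    """Naive DFT magnitudes (fine for short illustrative signals)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

random.seed(0)
N = 64
# Hypothetical 1-D analogue of a JPEG row: smooth content plus a small
# jump at every 8-pixel block boundary.
signal = [0.01 * t + (0.5 if t % 8 == 0 else 0.0) + random.gauss(0, 0.01)
          for t in range(N)]
# Pixel differences emphasise the block-boundary discontinuities.
diffs = [abs(signal[t + 1] - signal[t]) for t in range(N - 1)] + [0.0]
mags = dft_mag(diffs)
# Ignore the DC term; the blocking artifact concentrates energy at
# multiples of N/8 = 8, i.e. the 8-sample periodicity.
peak_bin = max(range(1, N // 2), key=lambda k: mags[k])
print(peak_bin % 8)  # 0: the spectral peak sits at a multiple of 8
```

When an image is cropped and recompressed, the original and new block grids are misaligned, so this periodic structure is disturbed; the paper's features capture exactly that disturbance.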
Modelling an individual's Web search interests by utilizing navigational data
Hao Wen, L. Fang, L. Guan
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665164
An approach to model and quantify a user's Web search interests from the user's navigational data is presented. It is based on the premise that frequently visiting certain types of content indicates interest in that content. The approach consists of three steps: monitoring the user's navigational data; using cumulative weights to determine a Web page's content; and employing a Naive Bayes model to update the user's interest model. To demonstrate its effectiveness, experimental software was developed to analyze a user's interest in sports. The results show that the approach effectively models the user's interests, and the proposed model could be integrated with personalized Web services.
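The Naive Bayes update step can be sketched as follows. The category set and likelihood table are hypothetical stand-ins; the paper's actual features and priors are not given in the abstract.

```python
import math

CATEGORIES = ["sports", "news", "shopping"]
# Hypothetical P(page_topic | user_interest) likelihood table.
LIKELIHOOD = {
    "sports":   {"sports": 0.7, "news": 0.2, "shopping": 0.1},
    "news":     {"sports": 0.2, "news": 0.6, "shopping": 0.2},
    "shopping": {"sports": 0.1, "news": 0.2, "shopping": 0.7},
}

def update_interest(prior, visited_topics):
    """Posterior over interest categories after observing the topics of
    visited pages, via a naive (conditionally independent) Bayes update."""
    log_post = {c: math.log(prior[c]) for c in CATEGORIES}
    for topic in visited_topics:
        for c in CATEGORIES:
            log_post[c] += math.log(LIKELIHOOD[c][topic])
    z = sum(math.exp(v) for v in log_post.values())
    return {c: math.exp(v) / z for c, v in log_post.items()}

prior = {c: 1 / 3 for c in CATEGORIES}
posterior = update_interest(prior, ["sports", "sports", "news"])
print(max(posterior, key=posterior.get))  # sports: repeated visits dominate the posterior
```

Each visited page shifts the posterior toward the interest category that best explains its content, which is the "frequent visits indicate interest" premise in probabilistic form.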
Automated identification of lung nodules
Shu Ling Alycia Lee, A. Kouzani, E. Hu
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665129
A system that automatically detects nodules in lung images may assist expert radiologists in interpreting abnormal patterns as nodules in 2D CT lung images. A system is presented that automatically identifies nodules of various sizes. It is developed with a pattern classification method: a random forest ensemble classifier is formed from many weak learners that grow decision trees, and the forest selects the decision with the most votes. The system consists of two random forest classifiers connected in series. A subset of 5721 CT lung images from the LIDC database is employed to train and test the system; 411 of these images contain nodules identified by expert radiologists. Training sets of nodule, non-nodule, and false-detection patterns are constructed, together with a collection of test images. The first classifier is developed to detect all nodules; the second eliminates the false detections produced by the first. According to the experimental results, a true positive rate of 100% and a false positive rate of 1.4 per lung image are achieved.
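The two-stage cascade described above can be sketched in miniature. To keep the sketch dependency-free, each random forest is replaced by a toy threshold rule; only the cascade wiring is illustrated, with stage 1 tuned for high recall and stage 2 pruning false positives. All feature names and thresholds are hypothetical.

```python
def stage1_candidates(regions):
    # High-recall detector: flag anything remotely nodule-like
    # (stands in for the paper's first random forest).
    return [r for r in regions if r["brightness"] > 0.3]

def stage2_filter(candidates):
    # False-positive eliminator: keep only compact, high-contrast candidates
    # (stands in for the paper's second random forest).
    return [r for r in candidates if r["compactness"] > 0.6 and r["contrast"] > 0.4]

regions = [
    {"id": 1, "brightness": 0.9, "compactness": 0.8, "contrast": 0.7},  # true nodule
    {"id": 2, "brightness": 0.5, "compactness": 0.2, "contrast": 0.6},  # vessel cross-section
    {"id": 3, "brightness": 0.1, "compactness": 0.9, "contrast": 0.1},  # background
]
detected = stage2_filter(stage1_candidates(regions))
print([r["id"] for r in detected])  # [1]: stage 2 removed stage 1's false alarm
```

The design rationale is the same as in the paper: the first stage must miss nothing (100% true positive rate), so it inevitably over-detects, and the second stage's only job is to cut the false-positive count.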
A comparative study of ±1 steganalyzers
Giacomo Cancelli, G. Doërr, M. Barni, I. Cox
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665182
We compare the performance of three steganalysis systems for the detection of ±1 steganography, examining their relative performance on three commonly used image databases. Experimental results clearly demonstrate that both the absolute and the relative performance of all three algorithms vary considerably across databases. This sensitivity suggests that considerably more work is needed to develop databases that are representative of diverse imagery. In addition, we investigate how performance varies under different training and testing assumptions, specifically: (i) training and testing are performed at a fixed and known embedding rate; (ii) training is performed at one embedding rate but testing spans a range of embedding rates; (iii) training and testing are both performed over a range of embedding rates. As expected, experimental results show that performance under (ii) and (iii) is inferior to (i). The results also suggest that test results for different embedding rates should be reported separately rather than consolidated into a single score; otherwise, good performance at high embedding rates may mask poor performance at low embedding rates.
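The reporting recommendation, scoring per embedding rate instead of pooling, is easy to illustrate. The detection outcomes below are hypothetical:

```python
# (embedding_rate, detector_was_correct) for eight hypothetical test images.
results = [
    (0.5, True), (0.5, True), (0.5, True), (0.5, True),
    (0.1, True), (0.1, False), (0.1, False), (0.1, False),
]

# Pooled accuracy hides the failure mode.
pooled = sum(ok for _, ok in results) / len(results)

# Per-rate accuracy exposes it.
by_rate = {}
for rate, ok in results:
    by_rate.setdefault(rate, []).append(ok)
per_rate = {rate: sum(oks) / len(oks) for rate, oks in by_rate.items()}

print(pooled)    # 0.625 -- looks acceptable
print(per_rate)  # {0.5: 1.0, 0.1: 0.25} -- the low rate is barely detected
```

The pooled score averages a perfect high-rate result with a near-chance low-rate one, which is exactly the masking effect the paper warns against.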
Improved side information generation for Distributed Video Coding
Xin Huang, Søren Forchhammer
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665079
As a new coding paradigm, distributed video coding (DVC) addresses lossy source coding with side information, exploiting the source statistics at the decoder to reduce the computational demands on the encoder. The performance of DVC depends heavily on the quality of the side information: with a better side-information generation method, fewer bits are requested from the encoder and more reliably decoded frames are obtained. In this paper, a side-information generation method is introduced to further improve the rate-distortion (RD) performance of transform-domain distributed video coding. The algorithm consists of variable-block-size motion estimation on the Y, U, and V components and an adaptive weighted overlapped block motion compensation (OBMC). The proposal is tested against the executable DVC codec released by the DISCOVER (DIStributed COding for Video sERvices) group, and RD improvements are observed on the set of test sequences.
Small and dim moving target detection in deep space background
Jinqiu Sun, Yanning Zhang, Jiangbin Zheng, Lei Jiang, Si-wei You
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665202
In this article, a selective visual attention mechanism and curve detection based on the Connect The Dots model are introduced for detecting small, dim targets against a deep-space background. Both grayscale saliency and motion continuity are taken into account to obtain a focus-of-attention integration map, and a curve detection method based on the Connect The Dots model is designed to detect the target trajectory. Qualitative and quantitative results show that the proposed algorithm is robust to noise and effectively improves the computational efficiency of the detection system.
An ARMA(1,1) prediction model of first person shooter game traffic
P. Branch, A. Cricenti, G. Armitage
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665172
Modeling traffic generated by Internet-based multiplayer computer games has attracted a great deal of attention in recent years, driven in part by the need to correctly simulate the network impact of highly interactive online game genres such as the first person shooter (FPS). Packet size distributions and autocovariance models are important elements in building realistic traffic generators for network simulators. In this paper we present simple techniques for creating representative models of N-player FPS games based on empirically measured traffic from 2- and 3-player games. The models capture both the packet size distribution and the time-series behaviour of game traffic. We illustrate the likely generality of our approach using data from seven FPS games that have been popular over the past nine years: Half-Life, Half-Life Counterstrike, Half-Life 2, Half-Life 2 Counterstrike, Quake III Arena, Quake 4, and Wolfenstein Enemy Territory.
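An ARMA(1,1) process, the model class named in the title, follows the recursion x_t = phi*x_{t-1} + e_t + theta*e_{t-1}. A minimal generator sketch; the coefficients and the mean packet size below are illustrative, not the fitted values from the paper:

```python
import random

def arma11(n, phi, theta, sigma, seed=0):
    """Generate n samples of the ARMA(1,1) recursion
    x_t = phi * x_{t-1} + e_t + theta * e_{t-1},
    with e_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    x, x_prev, e_prev = [], 0.0, 0.0
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        x_t = phi * x_prev + e + theta * e_prev
        x.append(x_t)
        x_prev, e_prev = x_t, e
    return x

# E.g. mildly autocorrelated packet-size fluctuations around a 300-byte mean
# (hypothetical numbers, for use as a simulator traffic source).
trace = [300 + v for v in arma11(1000, phi=0.6, theta=0.3, sigma=20.0)]
```

The AR term (phi) carries the autocovariance structure of successive packet sizes, while the MA term (theta) shapes the short-range correlation, which is why this family can reproduce both the distribution and the time-series behaviour the abstract emphasises.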
An evaluation of possession information in playfield zones from soccer video using mid-level descriptors
S. Aydın, M. Karsligil
Pub Date: 2008-11-05 | DOI: 10.1109/MMSP.2008.4665162
In this paper, we propose a method to extract possession information for different zones of the playfield from soccer video using view-type and playfield-zone mid-level descriptors. First, each video frame is classified into one of three view types according to a domain-specific feature, the grass area ratio, and a series of classification rules. The classified frames are then used to determine the currently active playfield zone in the match, and the history of active playfield zones is post-processed to obtain the possession information per playfield zone over the course of the game. The efficiency and effectiveness of the proposed method are demonstrated on a large collection of soccer video recorded in different stadiums and conditions.
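The first step, view-type classification from the grass area ratio, can be sketched as follows. The grass test and the thresholds are illustrative stand-ins for the paper's domain-specific rules:

```python
def is_grass(r, g, b):
    # Dominant-green heuristic (a hypothetical stand-in for the paper's rule).
    return g > 90 and g > r * 1.2 and g > b * 1.2

def classify_view(frame):
    """frame: iterable of (r, g, b) pixels -> 'long', 'medium' or 'close-up',
    based on the fraction of grass-coloured pixels."""
    pixels = list(frame)
    ratio = sum(is_grass(*p) for p in pixels) / len(pixels)
    if ratio > 0.6:
        return "long"        # mostly playfield visible: wide shot
    if ratio > 0.2:
        return "medium"
    return "close-up"        # little grass visible: player close-up or crowd

# A hypothetical 10-pixel "frame" that is 80% grass-green.
frame = [(30, 140, 40)] * 8 + [(200, 180, 170)] * 2
print(classify_view(frame))  # long
```

Only the wide "long" views expose enough of the playfield to locate the active zone reliably, which is why the view-type label is computed before any possession statistics are accumulated.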