Fast Video Encoding Algorithm Based on Motion Estimation for H.264/AVC
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.42
Yuan Gao, Pengyu Liu
Motion estimation is the most time-consuming part of H.264/AVC encoding. In this paper, based on the computational redundancy of the UMHexagonS algorithm, a motion vector distribution prediction method is proposed and combined with designed search patterns to achieve adaptive sub-regional searching. Simulation results show that the proposed motion estimation scheme reduces motion estimation encoding time by 21.48% while maintaining good rate-distortion performance compared with the UMHexagonS algorithm in JM18.4. The proposed algorithm improves real-time encoding performance.
Experiments on Genetic Programming Based Image Artefact Detection
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.11
Feng-Cheng Chang, Hsiang-Cheh Huang
Detecting and/or restoring a damaged image is an interesting image processing application. Because image damage can vary in many ways, a straightforward approach is to use a program to represent the damage. The type of artefact can then be identified by applying candidate programs to the original image and comparing the results with the target image. The run-time environment of a program is the structure of its execution resources. In this paper, we define a cellular-automaton-based structure as the run-time environment and use genetic programming (GP) to find a proper program for the given image artefacts. The results show that an effective GP engine requires careful configuration. The important lesson learned from the experiments is also discussed.
Two-Stage Verification Based on Watermarking for Electronic Passport
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.19
Zhifang Wang, Lei Yang, Yue Cheng, Qun Ding
This paper provides a novel two-stage verification scheme based on watermarking for electronic passports. The watermark includes the multimodal biometric feature of the passport owner and the parity check codes of that feature. The first part is used as the template for field certification, which confirms whether the person is the true passport holder. The parity check codes verify the integrity of the passport. The ORL face database and the PolyU palmprint database are selected as experimental subjects. Experimental results show the availability of the scheme and analyze the impact of different embedding capacities on multimodal biometric recognition.
A Cryptographic Approach for Steganography
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.134
J. Bahi, C. Guyeux, Pierre-Cyrille Héam
In this research work, security concepts are formalized for steganography, and the common paradigms based on information theory are replaced by ones inspired by cryptography, which are more practicable and closer to what is usually done in other cryptographic domains. These preliminaries lead to a first proof of a cryptographically secure information hiding scheme.
{"title":"A Cryptographic Approach for Steganography","authors":"J. Bahi, C. Guyeux, Pierre-Cyrille Héam","doi":"10.1109/IIH-MSP.2013.134","DOIUrl":"https://doi.org/10.1109/IIH-MSP.2013.134","url":null,"abstract":"In this research work, security concepts are formalized in steganography, and the common paradigms based on information theory are replaced by another ones inspired from cryptography, more practicable are closer than what is usually done in other cryptographic domains. These preliminaries lead to a first proof of a cryptographically secure information hiding scheme.","PeriodicalId":105427,"journal":{"name":"2013 Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115160509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction-Based Reversible Data Hiding for Medical Images with Genetic Algorithms
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.10
Hsiang-Cheh Huang, Ting-Hsuan Wang, Yueh-Hong Chen, J. Hung
Reversible data hiding is a newly developed topic in watermarking research. At the encoder, it relies on slightly modifying the characteristics of the original image to embed secret information. At the decoder, the original image and the secret information can be separated from the marked image with a slight amount of overhead. In this paper, we propose a scheme that predicts the difference between output and input images to make reversible data hiding possible. By carefully selecting prediction coefficients, which are optimized by a genetic algorithm, the output image quality can be preserved while the embedding capacity is enhanced. We apply the algorithm to medical images to protect patients' cases from possible human errors. With the training of the genetic algorithm, simulation results demonstrate enhanced embedding capacity while keeping the output image quality. Optimized prediction coefficients obtained with the genetic algorithm lead to better performance of our scheme.
{"title":"Prediction-Based Reversible Data Hiding for Medical Images with Genetic Algorithms","authors":"Hsiang-Cheh Huang, Ting-Hsuan Wang, Yueh-Hong Chen, J. Hung","doi":"10.1109/IIH-MSP.2013.10","DOIUrl":"https://doi.org/10.1109/IIH-MSP.2013.10","url":null,"abstract":"Reversible data hiding is a newly developed topic in watermarking researches. At the encoder, it relies on slightly modifying the characteristics of original images for embedding secret information. At the decoder, original image and secret information can be separated from marked image with slight amount of overhead. In this paper, we propose the scheme by predicting the difference between output and input images for making reversible data hiding possible. By carefully selecting prediction coefficients, which are optimized by genetic algorithm, the output image quality can be preserved, while the enhanced amount of embedding capacity can be observed. We apply the algorithm to medical images for protecting patients' cases from possible human errors incurred. With the training of genetic algorithm, simulation results with our algorithm have demonstrated the enhanced embedding capacity, while keeping the output image quality. Optimized prediction coefficients with genetic algorithm lead to better performances with our scheme.","PeriodicalId":105427,"journal":{"name":"2013 Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127206467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-face Recognition at a Distance Using Light-Field Camera
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.93
Ramachandra Raghavendra, K. Raja, Bian Yang, C. Busch
In this paper, we address the problem of identifying multiple faces present at different distances using a Light-Field Camera (LFC). Since an LFC can provide images at different focus (or depth) planes in a single capture, we are motivated to investigate its applicability to identifying multiple faces at a distance by exploring its all-in-focus property. We first collect a new face dataset using the LFC and then carry out extensive experiments to evaluate the merits and demerits of the LFC, especially for identifying multiple faces at a distance. We explore the applicability of the light-field camera to face recognition in an at-a-distance surveillance scenario.
{"title":"Multi-face Recognition at a Distance Using Light-Field Camera","authors":"Ramachandra Raghavendra, K. Raja, Bian Yang, C. Busch","doi":"10.1109/IIH-MSP.2013.93","DOIUrl":"https://doi.org/10.1109/IIH-MSP.2013.93","url":null,"abstract":"In this paper, we address the problem of identifying multiple faces present at different distance using Light-Field Camera (LFC). Since a LFC can provide different focus (or depth) images in single capture, we are motivated to investigate its applicability to identify multiple faces at a distance by exploring its all-in-focus property. We first collect the new face dataset using LFC and then carry out extensive experiments to evaluate the merits and demerits of LFC, especially in identifying multiple faces at a distance. We explore the applicability of light field camera for face recognition applications in at-a-distance surveillance scenario.","PeriodicalId":105427,"journal":{"name":"2013 Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125598217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speech Intelligibility Improvement Using the Constraints on Speech Distortion and Noise Over-estimation
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.155
Na Li, C. Bao, Bingyin Xia, Feng Bao
Existing speech enhancement methods can improve speech quality but not speech intelligibility, especially under low-SNR conditions. To address this problem, an algorithm for improving speech intelligibility using Constraints on Speech Distortion and Noise Over-estimation (CSDNO) is proposed in this paper. Based on the fact that attenuation distortion and amplification distortion have different impacts on speech intelligibility, the gain function and the noise estimation method used in the conventional algorithm are modified. The performance of the proposed method has been evaluated with the Diagnostic Rhyme Test (DRT), the fractional Articulation Index (fAI), and the frequency-weighted segmental SNR (fwSNRseg) for three types of noise. The experimental results show that, with smaller speech distortion, the proposed method improves the intelligibility of the enhanced speech in comparison with the reference method.
{"title":"Speech Intelligibility Improvement Using the Constraints on Speech Distortion and Noise Over-estimation","authors":"Na Li, C. Bao, Bingyin Xia, Feng Bao","doi":"10.1109/IIH-MSP.2013.155","DOIUrl":"https://doi.org/10.1109/IIH-MSP.2013.155","url":null,"abstract":"Existing speech enhancement methods can improve speech quality but not speech intelligibility, especially in low SNR conditions. To solve this problem, an algorithm for improving speech intelligibility using the Constraints on Speech Distortion and Noise Over-estimation (CSDNO) is proposed in this paper. Based on the fact that the attenuation distortion and amplification distortion have different impacts on the speech intelligibility, the gain function and noise estimation method used in conventional algorithm are modified in this paper. The performance of the proposed method has been evaluated by Diagnosis Rhyme Test (DRT), fractional Articulation Index (fAI) and frequency-weighted SNR segmental (fwSNRseg) for three types of noises. The experimental results show that, with smaller speech distortion, the proposed method can improve the intelligibility of the enhanced speech in comparison with the reference method.","PeriodicalId":105427,"journal":{"name":"2013 Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124288527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cyber-physical System Risk Assessment
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.116
Yong Peng, Tianbo Lu, Jingli Liu, Yang Gao, Xiaobo Guo, Feng Xie
A Cyber-Physical System (CPS) is a combination of physical systems and cyber systems with a tight coupling between the two. CPSs are widely used in critical national infrastructure, such as the electric power, petroleum, and chemical industries. Once an attack against a CPS succeeds, the consequences can be unimaginable. A well-designed risk assessment of a CPS provides an overall view of its security status and supports efficient allocation of safeguard resources. Although CPSs have much in common with IT systems, they differ in various aspects, especially in their real-time requirements; therefore, traditional risk assessment methods for IT systems cannot be directly applied to CPSs. New ideas on CPS risk assessment are urgently needed, and one such idea is addressed in this paper. Firstly, it presents a detailed description of a three-level CPS architecture and analyzes the corresponding security features at each level. Secondly, it summarizes traditional risk assessment methods and analyzes the differences between cyber-physical system security and traditional IT system security. Finally, the authors move beyond the restrictions of traditional risk assessment methods and, from this new CPS perspective, propose a risk assessment approach for CPSs.
Visual Cryptography Schemes for Graph Based Access Structures
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.98
S. Cimato
Visual cryptography schemes (VCS) were introduced by Naor and Shamir [NS94] and involve a dealer encoding a secret image into shares that are distributed to a number of participants. In general, the collection of subsets of participants that can recover the secret is organized in an access structure. In this paper we consider graph-based access structures, where participants are nodes of a graph G and only subsets containing an edge are allowed to reconstruct the secret image. We provide some bounds on the pixel expansion of path and cycle graphs, and we also show a simple construction for such graphs for a generic number n of participants.
Subsampling Based Neighborhood Preserving Embedding for Image Classification
Pub Date: 2013-10-16  DOI: 10.1109/IIH-MSP.2013.96
Li-Yan Zhao, Dong Zou, Guanghong Gao
In this paper, a novel image feature extraction algorithm, called Subsampling-based Neighborhood Preserving Embedding (SNPE), is proposed. SNPE aims to preserve the neighborhood structure of the subsampled image samples. The proposed algorithm is applied to image classification on a Finger-Knuckle-Print database. The experimental results confirm the effectiveness of the proposed algorithm.