Title: A Novel Reversible Watermarking Based on an Integer Transform
Authors: S. Weng, Yao Zhao, Jeng-Shyang Pan, R. Ni
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4379291
Abstract: A novel reversible data hiding scheme based on an integer transform is presented in this paper. The invertible integer transform exploits the correlations among the four pixels in a quad. Data embedding is carried out by expanding the differences between one pixel and each of its three neighboring pixels. However, a high hiding capacity cannot be achieved by difference expansion alone, so the companding technique is introduced into the embedding process to further increase the capacity. A series of experiments verifies the feasibility and effectiveness of the proposed approach.
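The paper's integer transform operates on four-pixel quads; the underlying difference-expansion idea is easiest to see in the classic two-pixel (Tian-style) case. A minimal sketch under that simplification, ignoring the overflow/underflow handling and the companding step that the paper adds:

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair by difference expansion."""
    avg = (x + y) // 2        # integer average, preserved by the transform
    diff = x - y
    diff2 = 2 * diff + bit    # expand the difference and append the bit
    x2 = avg + (diff2 + 1) // 2
    y2 = avg - diff2 // 2
    return x2, y2

def de_extract(x2, y2):
    """Recover the embedded bit and restore the original pair exactly."""
    avg = (x2 + y2) // 2      # the integer average survives embedding
    diff2 = x2 - y2
    bit = diff2 & 1
    diff = diff2 // 2         # floor division also handles negative diffs
    x = avg + (diff + 1) // 2
    y = avg - diff // 2
    return x, y, bit
```

Because the integer average of the pair is invariant under the transform, extraction restores the cover pixels bit-exactly, which is what makes the scheme reversible.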
Title: Complexity Constrained Robust Video Transmission for Hand-Held Devices
Authors: Waqar Zia, K. Diepold, T. Stockhammer
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4380004
Abstract: Robust conversational video applications for hand-held devices face numerous challenges, e.g. real-time processing, complexity-constrained devices, and small end-to-end delays. Transmission losses of compressed video data result in spatio-temporal error propagation in the decoded video sequence. To ensure some QoS, the video codec has to be well tuned to combat the degradation resulting from losses. Several feedback-based error-mitigation techniques are assessed in this work. The proposed error-robustness technique, based on reference picture selection (RPS) and error tracking, enhances the overall performance of the target system by more than 4 dB for moderate radio link control (RLC) PDU loss rates of 1.5%. This enhancement is achieved without any additional computational complexity.
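The RPS-with-error-tracking idea can be sketched in a few lines: on negative feedback, predict only from frames known to be uncorrupted. The frame-index bookkeeping and the intra-refresh fallback here are illustrative assumptions, not the authors' exact protocol:

```python
def select_reference(acked, nacked, current_idx):
    """Pick a reference frame for predicting frame `current_idx`.

    acked  : set of frame indices positively acknowledged by the receiver
    nacked : set of frame indices reported lost
    Returns a frame index, or None to signal an intra refresh.
    """
    if nacked:
        # Error tracking: every frame at or after the oldest loss may be
        # corrupted by propagation, so only earlier acked frames are safe.
        oldest_loss = min(nacked)
        safe = [f for f in acked if f < oldest_loss]
        return max(safe) if safe else None  # no safe frame -> intra refresh
    # No reported losses: predict from the previous frame as usual.
    return current_idx - 1
```

Stopping propagation this way costs no extra encoder-side computation, which is consistent with the complexity constraint the paper targets.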
Title: Tracking Persons using Particle Filter Fusing Visual and Wi-Fi Localizations for Widely Distributed Camera
Authors: Takashi Miyaki, T. Yamasaki, K. Aizawa
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4379287
Abstract: This paper describes an object tracking scheme that employs a sensor-fusion approach combining visual information with location information estimated from Wi-Fi signals. The location is calculated from the received signal strengths of beacon packets sent by Wi-Fi access points (APs) around the targets. Unlike conventional approaches that rely on other kinds of sensors, ours can cover wider areas, both indoors and outdoors, at lower cost, owing to the characteristics of Wi-Fi signals. A particle filter is applied to combine these two different kinds of sensory input and track the target continuously: a Wi-Fi observation model is incorporated into a conventional visual particle-filtering scheme to evaluate the importance weight of each particle. By using multiple modalities, robust tracking performance is achieved even if the reliability of one sensory input declines. We present experimental results from an outdoor surveillance-camera environment.
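The fusion step reduces to multiplying per-particle likelihoods from the two modalities before normalization. A minimal sketch of the importance-weight computation, assuming an isotropic Gaussian Wi-Fi observation model (the paper's actual observation model may differ):

```python
import math

def wifi_likelihood(particle_xy, wifi_xy, sigma=5.0):
    """Gaussian likelihood of a particle given the Wi-Fi position estimate."""
    dx = particle_xy[0] - wifi_xy[0]
    dy = particle_xy[1] - wifi_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def fuse_weights(particles, visual_likelihoods, wifi_xy):
    """Importance weight = visual likelihood x Wi-Fi likelihood, normalized."""
    w = [v * wifi_likelihood(p, wifi_xy)
         for p, v in zip(particles, visual_likelihoods)]
    total = sum(w)
    return [wi / total for wi in w]
```

If the visual cue degrades (e.g. occlusion makes all visual likelihoods flat), the Wi-Fi term still concentrates the weights near the radio estimate, which is the robustness the abstract claims.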
Title: Fast Generalized Motion Estimation and Superresolution
Authors: A. Sinha, Xiaolin Wu
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4379853
Abstract: We propose a new superresolution algorithm based on a fast motion estimation technique. Two stages of this algorithm, motion estimation and high-resolution reconstruction, rely on an area-based interpolation scheme that involves intersecting two pixel grids of arbitrary orientation, displacement, and scaling. We develop a fast approximate solution to this problem, whose exact solution is prohibitively expensive. A gradient-descent algorithm is used for fast convergence of the motion estimation. Experimental results demonstrate the good performance of the proposed superresolution algorithm as well as its robustness against noise.
Title: Breast Delineation using Active Contours to Facilitate Coregistration of Serial MRI Studies for Therapy Response Evaluation
Authors: R. Chittineni, M. Su, O. Nalcioglu
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4379571
Abstract: MRI is the most accurate imaging modality for monitoring the response of breast cancer to neoadjuvant chemotherapy: the tumor volume measured in follow-up MRI studies taken during the course of therapy is compared to its baseline value. Because the breast is deformable, its shape varies significantly across MR acquisitions from different studies. If these images can be co-registered, the location of a lesion in each study can be matched. Breast MR images often include large areas outside the breast, such as the thoracic region and surrounding air, which may hinder registration algorithms. In this paper, we describe a segmentation algorithm that delineates the breast region from the chest using the rigid, invariant chest structure, as opposed to the varying breast outlines employed by currently available solutions. This ensures the robustness and reproducibility of our algorithm.
Title: Segmentation of Images on Polar Coordinate Meshes
Authors: K. Hara, R. Kurazume, Kohei Inoue, K. Urahama
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4379138
Abstract: The Chan-Vese level set algorithm has been successfully applied to the segmentation of images on Cartesian coordinate meshes, including ordinary planar images. In this paper we present a Chan-Vese model for segmenting images on polar coordinate meshes, such as topography and remote sensing images. The segmentation is accomplished by formulating the associated evolution equation in the polar coordinate system and then numerically solving the partial differential equation on an overset grid system called the Yin-Yang grid, which is free from the problem of singularity at the poles. We include example segmentations of real Earth data that demonstrate the performance of our method.
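Whatever the coordinate mesh, each Chan-Vese iteration needs the mean intensities c1 and c2 inside and outside the zero level set of phi; the curve then evolves to reduce the fitting error against these two means. A minimal sketch of that step (the polar/Yin-Yang discretization of the evolution PDE itself is beyond a short example):

```python
def region_means(img, phi):
    """c1: mean intensity where phi >= 0 (inside the contour),
    c2: mean intensity where phi < 0 (outside)."""
    inside, outside = [], []
    for img_row, phi_row in zip(img, phi):
        for v, p in zip(img_row, phi_row):
            (inside if p >= 0 else outside).append(v)
    c1 = sum(inside) / len(inside) if inside else 0.0
    c2 = sum(outside) / len(outside) if outside else 0.0
    return c1, c2

def data_force(v, c1, c2):
    """Chan-Vese data term driving phi at a pixel of intensity v:
    positive values push the pixel toward the inside region."""
    return -(v - c1) ** 2 + (v - c2) ** 2
```

A pixel whose intensity is closer to c1 than to c2 gets a positive force and is pulled inside the contour, and vice versa; this is the piecewise-constant fitting that makes Chan-Vese segmentation edge-free.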
Title: Multi-Vector Color-Image Filters
Authors: T. Ell
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4379811
Abstract: Linear filtering of color images using hypercomplex convolution and Fourier transforms provides a holistic treatment of color by representing pixels as 3-space vector quantities within the quaternion algebra. However, this technique is limited to images with at most three channels of information, e.g., RGB images. Linear filtering of color images by representing color pixels as multi-vectors embedded in a geometric algebra is presented. This multi-vector representation admits convolution and Fourier transforms similar to those of the quaternion-based filters, but provides an avenue to multi-spectral images composed of more than three channels.
Title: Rate-Distortion Based Piecewise Planar 3D Scene Geometry Representation
Authors: E. Imre, Aydin Alatan, U. Güdükbay
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4379778
Abstract: This paper proposes a novel 3D piecewise planar reconstruction algorithm to build a 3D scene representation that minimizes the intensity error between a particular frame and its prediction. The 3D scene geometry is exploited to remove the visual redundancy between frame pairs for any predictive coding scheme. This approach ties the rate increase to the quality of the representation, and is shown by experiments to be rate-distortion efficient.
Title: Trained Bilateral Filters and Applications to Coding Artifacts Reduction
Authors: Hao Hu, G. Haan
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4378957
Abstract: Bilateral filtering is a simple, non-linear technique for removing image noise while preserving edges. However, it is difficult to optimize a bilateral filter by supervised training to obtain a desired effect. In this paper, we propose a new type of trained bilateral filter, which possesses the essential characteristics of the original bilateral filter and can be optimized offline by least-mean-square optimization. In JPEG and H.264/MPEG-4 AVC deblocking applications, we compared the proposed filter with the original bilateral filter and other state-of-the-art methods. Experimental results show that the proposed method performs better at artifact reduction and edge preservation.
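For reference, the original (untrained) bilateral filter that the paper generalizes: each output pixel is a weighted average whose weights are the product of a Gaussian spatial kernel and a Gaussian range kernel. A minimal grayscale sketch:

```python
import math

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Classic bilateral filter on a 2-D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # spatial closeness weight
                        gs = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        # range (intensity similarity) weight
                        d = img[ny][nx] - img[y][x]
                        gr = math.exp(-(d * d) / (2 * sigma_r ** 2))
                        acc += gs * gr * img[ny][nx]
                        norm += gs * gr
            out[y][x] = acc / norm
    return out
```

The paper's contribution is to replace these fixed kernel weights with coefficients learned offline by least-mean-square training, while keeping this same weighted-average structure.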
Title: Tensor-Based Filter Design using Kernel Ridge Regression
Authors: C. Bauckhage
Pub Date: 2007-11-12 | DOI: 10.1109/ICIP.2007.4379950
Abstract: Tensor-based approaches to visual object detection can drastically reduce the number of parameters in the training process. Compared to their vector-based counterparts, tensor methods therefore train faster, better handle noisy or corrupted training samples, and are less prone to over-fitting. In this paper, we show how to incorporate the kernel trick into tensor-based filter design. For object detection in cluttered natural environments, the method is shown to cope with substantially varying training data, and a cascade of only two kernel tensor-filters is demonstrated to provide very reliable results.
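The kernel trick in ridge regression amounts to solving (K + λI)α = y on the training set and predicting with kernel evaluations against the training points. A minimal scalar-input sketch with an RBF kernel and a naive linear solver; the paper applies this machinery to tensor-structured filters, so the shapes here are purely illustrative:

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    return math.exp(-gamma * (x - z) ** 2)

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting; returns x with Ax = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-3, gamma=1.0):
    """Solve (K + lam*I) alpha = y for the dual coefficients."""
    n = len(xs)
    K = [[rbf_kernel(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def krr_predict(alpha, xs, x, gamma=1.0):
    return sum(a * rbf_kernel(xi, x, gamma) for a, xi in zip(alpha, xs))
```

Because the data enter only through kernel evaluations, the same code works for any input type once `rbf_kernel` is swapped for a kernel on that type, which is precisely how the kernel trick carries over to tensor-valued filters.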