{"title":"Reconstruction of a High Dynamic Range and High Resolution Image from a Multisampled Image Sequence","authors":"H. B. Haraldsson, Masayuki Tanaka, M. Okutomi","doi":"10.1109/ICIAP.2007.109","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.109","url":null,"abstract":"This research proposes a framework for reconstructing an image of high dynamic range and high spatial resolution from a sequence of multisampled images. The framework involves two key issues: robust motion estimation and appropriate discarding of measurements for reconstruction. An approach to increase the robustness of motion estimation for a multisampled image sequence is presented: by extracting the luminance component from each captured image in the sequence and then enhancing the low-intensity regions, conventional motion estimation methods can be applied with good results. The research also presents a novel method for discarding invalid measurements. The proposed spatio-temporal discarding method prevents over-discarding in the case of extreme exposure differences. Experimental results show the effectiveness of the proposed framework.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134310372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
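The preprocessing step this abstract describes, extracting the luminance component and enhancing low-intensity regions so that conventional motion estimators can handle dark, short-exposure frames, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the Rec. 601 luma weights and the gamma curve are assumptions, since the paper's exact enhancement function is not given here.

```python
import numpy as np

def enhance_luminance(rgb, gamma=0.5):
    """Extract a luminance image from an RGB frame and boost its
    low-intensity regions with a gamma curve (gamma < 1), so that
    dark, short-exposure frames become usable for conventional
    motion estimation.  rgb: float array in [0, 1], shape (H, W, 3)."""
    # Rec. 601 luma weights (an assumption; any luma formula would do)
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # gamma < 1 expands low intensities and compresses highlights
    return np.clip(lum, 0.0, 1.0) ** gamma
```

With gamma = 0.5, a luminance of 0.04 maps to 0.2, so detail in underexposed frames is spread over a wider intensity range before block matching.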
{"title":"Verification of Handwritten Signatures: an Overview","authors":"S. Impedovo, G. Pirlo","doi":"10.1109/ICIAP.2007.131","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.131","url":null,"abstract":"In information and communication systems, personal verification is a crucial aspect. Among the various means of personal verification, the handwritten signature plays a fundamental role, since it is the most widespread means of personal verification in daily life. In this paper, a brief overview of the field of handwritten signature verification is presented and some of the most relevant perspectives are highlighted.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134405618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ball Position and Motion Reconstruction from Blur in a Single Perspective Image","authors":"G. Boracchi, V. Caglioti, A. Giusti","doi":"10.1109/ICIAP.2007.36","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.36","url":null,"abstract":"We consider the problem of localizing a moving ball from a single calibrated perspective image. After showing that ordinary algorithms fail in analyzing motion-blurred scenes, we describe a theoretically sound model for the blurred image of a ball. We then present an algorithm capable of recovering both the ball's 3D position and its velocity. The algorithm is experimentally validated on both real and synthetic images.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133180215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Video-based Analysis of Athlete Action","authors":"Haojie Li, Shouxun Lin, Yongdong Zhang, Kun Tao","doi":"10.1109/ICIAP.2007.35","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.35","url":null,"abstract":"Video analysis of athlete action is becoming an important tool for sports training, since it requires no intervention with the athlete and can exploit abundant archived data. In this paper we present our work on the automatic analysis of complex individual actions in diving video, aimed at providing biometric measurements and visual tools for coaching assistance and performance improvement. The main body joint angles of the athlete are automatically obtained by 2D articulated human body model fitting and shape analysis techniques. Two visual analysis tools, motion panorama and overlay composition, which are particularly suitable for individual sports training, are presented. The encouraging experimental results show the effectiveness of the proposed system.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131376703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cosegmentation for Image Sequences","authors":"D. Cheng, Mário A. T. Figueiredo","doi":"10.1109/ICIAP.2007.48","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.48","url":null,"abstract":"We present a generative model to perform cosegmentation on an arbitrary number of images, where cosegmentation is defined as the task of simultaneously segmenting the common parts of a pair of images. We build upon previous work that introduced a new approach to model-based clustering under prior knowledge, and exploit its simplicity and flexibility to solve the cosegmentation problem. We show experiments performed with datasets as diverse as slices of an MRI scan, frames from a video sequence, images in a database of objects, and a set of 3D range images.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115432437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Predictive Search Algorithm for Video Motion Estimation","authors":"Lei-Chun Chou, ChengJing Ye, Yuan-Chen Liu, Bin-Cheng Jhao","doi":"10.1109/ICIAP.2007.68","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.68","url":null,"abstract":"In this paper, two fast motion estimation algorithms are proposed: the fast predictive search algorithm (FPSA) and the fast predictive search algorithm with early termination (FPSA-ET). These algorithms exploit the temporal and spatial correlation of block-matching (BMA) results across frames together with the prediction of motion vectors. Video coding generally proceeds from top to bottom and left to right, which creates spatial correlation between adjacent blocks, especially those to the left and above; the motion of an object in the image tends to follow a continuous trajectory, so there is also temporal correlation between blocks. Based on the experimental data in this paper, the proposed algorithms are about 20%~25% faster than the conventional algorithm and about 20 times faster than UMHexagonS. The PSNR values of the proposed algorithms are higher than the conventional algorithm's by 0.22 dB~0.32 dB. The proposed algorithms are therefore more appropriate for real-time or high-quality video, or for video with large amounts of motion.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116930252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
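A predictive block-matching search of the kind this abstract describes, searching a small window around a motion vector predicted from neighbouring blocks instead of the full range, with optional early termination on a good-enough match, can be sketched as follows. This illustrates the general idea only, not the authors' FPSA/FPSA-ET: the window radius, SAD cost, and early-stop threshold are assumptions.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def predictive_search(cur, ref, bx, by, bs, predicted_mv, radius=2, early_stop=None):
    """Find the motion vector for the bs x bs block of `cur` at (bx, by)
    by searching `ref` in a small window centred on `predicted_mv`
    (e.g. the median of the left/up neighbours' vectors).  If
    `early_stop` is set, return as soon as the cost drops below it,
    mimicking an FPSA-ET-style early termination."""
    h, w = ref.shape
    block = cur[by:by + bs, bx:bx + bs]
    best_mv, best_cost = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y = by + predicted_mv[1] + dy
            x = bx + predicted_mv[0] + dx
            if not (0 <= y <= h - bs and 0 <= x <= w - bs):
                continue  # candidate falls outside the reference frame
            cost = sad(block, ref[y:y + bs, x:x + bs])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (x - bx, y - by)
                if early_stop is not None and cost <= early_stop:
                    return best_mv, best_cost  # early termination
    return best_mv, best_cost
```

Because the search window is a few pixels around the prediction rather than the whole frame, the candidate count drops from O(W x H) to O(radius squared), which is where the reported speed-up comes from.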
{"title":"Human Appearance Change Detection","authors":"Nagia M. Ghanem, L. Davis","doi":"10.1109/ICIAP.2007.75","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.75","url":null,"abstract":"We present a machine learning approach to detect changes in human appearance between instances of the same person that may be taken with different cameras, but over short periods of time. For each video sequence of the person, we approximately align each frame in the sequence and then generate a set of features that captures the differences between the two sequences. The features are the occupancy difference map, the codeword frequency difference map (based on a vector quantization of the set of colors and frequencies) at each aligned pixel, and the histogram intersection map. A boosting technique is then applied to learn the most discriminative set of features, and these features are used to train a support vector machine classifier to recognize significant appearance changes. We apply our approach to the problem of left package detection. We train the classifiers on a laboratory database of videos in which people are seen with and without common articles that people carry - backpacks and suitcases. We test the approach on real airport video sequences. Moving to real-world videos requires addressing additional problems, including the view selection and frame selection problems.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115208997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
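The histogram intersection feature this abstract lists is a standard similarity measure between two color histograms; a minimal sketch follows, assuming the classic normalized form of Swain and Ballard rather than the authors' exact per-pixel variant.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Normalized histogram intersection: 1.0 for identical
    distributions, smaller values as the two appearances diverge."""
    h1 = h1 / h1.sum()  # normalize so each histogram sums to 1
    h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum())
```

A low intersection between the aligned person regions of two sequences is the kind of cue that would flag an appearance change such as a dropped backpack.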
{"title":"A High Performance Exact Histogram Specification Algorithm","authors":"A. Bevilacqua, Pietro Azzari","doi":"10.1109/ICIAP.2007.8","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.8","url":null,"abstract":"Real-time histogram specification methods aim to find a function that transforms a source image to match a target distribution with the highest possible degree of accuracy. Many approaches privilege exact specification by exploiting multi-valued ordering functions, but incur highly computationally expensive implementations. Histogram specification algorithms can be classified according to computational complexity, image distortion, and accuracy of reproduction of the target histogram. The method we propose permits an exact match of a given target histogram independently of the source image, while introducing negligible image distortion. The simplicity of the method enables fast computation, making the algorithm suitable for real-time applications. Exhaustive experiments and accurate comparisons are carried out against the most representative approaches reported in the literature.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115291497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
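In its simplest form, exact histogram specification assigns output gray levels to pixels by rank, so the output histogram matches the target exactly regardless of the source image. The sketch below uses plain stable-sort tie-breaking; the multi-valued ordering functions the abstract mentions refine exactly this tie-breaking step, so this illustrates the task rather than the authors' algorithm.

```python
import numpy as np

def exact_histogram_specification(src, target_hist):
    """Produce an image with exactly target_hist[g] pixels of each gray
    level g: sort the source pixels, then assign levels by rank so the
    darkest pixels receive the lowest levels.  Ties are broken by pixel
    order (a stable sort), which is the step exact-specification
    methods refine with multi-valued orderings."""
    flat = src.ravel()
    assert target_hist.sum() == flat.size  # target must account for every pixel
    order = np.argsort(flat, kind="stable")        # pixel indices sorted by intensity
    levels = np.repeat(np.arange(len(target_hist)), target_hist)  # level per rank
    out = np.empty_like(flat)
    out[order] = levels
    return out.reshape(src.shape)
```

For example, passing a uniform target histogram performs exact histogram equalization of the source image.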
{"title":"Image Enhancement Using Elastic Manifolds","authors":"V. Ratner, Y. Zeevi","doi":"10.1109/ICIAP.2007.78","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.78","url":null,"abstract":"Image denoising and enhancement problems have many physical analogues that point toward novel solutions. One such solution, based on viewing the image as an elastic sheet, is presented. A processing scheme for grayscale images is outlined and further considered in the context of color images. Preliminary analysis and simulations on noisy images indicate that a multidimensional manifold representation of combined space-color information incorporates the advantages of separate color-channel representations. Experimental analysis reveals the elastic sheet method to be a powerful and robust denoising tool which preserves the most meaningful details.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121549344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic 3D facial segmentation and landmark detection","authors":"Maurício Pamplona Segundo, Chauã C. Queirolo, O. Bellon, Luciano Silva","doi":"10.1109/ICIAP.2007.29","DOIUrl":"https://doi.org/10.1109/ICIAP.2007.29","url":null,"abstract":"This paper presents our methodology for face and facial feature detection to improve 3D face recognition in the presence of facial expression variation. Our goal was to develop an automatic process to be embedded in a face recognition system, using only range images as input. To that end, our approach applies traditional image segmentation techniques for face segmentation and detects facial features by combining an adapted 2D facial feature extraction method with surface curvature information. The experiments were performed on a large, well-known face image database available in the Biometric Experimentation Environment (BEE), including 4,950 images. The results confirm that our method is efficient for the proposed application.","PeriodicalId":118466,"journal":{"name":"14th International Conference on Image Analysis and Processing (ICIAP 2007)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124821515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}