View synthesis from uncalibrated images using parallax
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234041
Andrea Fusiello, Stefano Caldrer, S. Ceglie, N. Mattern, Vittorio Murino
This work deals with the view synthesis problem, i.e., how to generate snapshots of a scene from a "virtual" viewpoint different from all the viewpoints of the real views. Starting from uncalibrated reference images, the geometry of the scene is recovered by means of the relative affine structure. This information is used to extrapolate novel views using planar warping plus parallax correction. The contributions of this paper are twofold. First, we introduce an automatic method for specifying the virtual viewpoint, based on replicating the epipolar geometry that links the two reference views. Second, we present a method for generating synthetic views of a soccer ground starting from a single uncalibrated image. Experimental results on real images are shown.
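As an illustration of the plane-plus-parallax relation referred to above, the following minimal Python/NumPy sketch warps a point from a reference view to a new view as x' ~ Hx + γe', where H is the homography induced by a reference plane, e' is the epipole in the target view and γ is the point's relative affine structure; all numerical values are placeholders, not the paper's data or exact procedure.

```python
import numpy as np

def warp_plane_plus_parallax(x, H, e_prime, gamma):
    """Map a homogeneous image point x to the target view using the
    plane-plus-parallax relation  x' ~ H @ x + gamma * e',
    where H is the homography of the reference plane, e' the epipole in the
    target view and gamma the relative affine structure (parallax) of the point.
    All quantities here are illustrative placeholders."""
    x_prime = H @ x + gamma * e_prime
    return x_prime / x_prime[2]          # back to inhomogeneous pixel coordinates

# toy example with made-up values
H = np.eye(3)                             # identity homography (fronto-parallel plane)
e_prime = np.array([0.1, 0.05, 1e-3])     # hypothetical epipole in the virtual view
x = np.array([120.0, 80.0, 1.0])          # pixel in the reference view
print(warp_plane_plus_parallax(x, H, e_prime, gamma=0.02))
```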
{"title":"View synthesis from uncalibrated images using parallax","authors":"Andrea Fusiello, Stefano Caldrer, S. Ceglie, N. Mattern, Vittorio Murino","doi":"10.1109/ICIAP.2003.1234041","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234041","url":null,"abstract":"This work deals with the view synthesis problem, i.e., how to generate snapshots of a scene taken from a \"virtual\" viewpoint different from all the viewpoints of the real views. Starting from uncalibrated reference images, the geometry of the scene is recovered by means of the relative affine structure. This information is used to extrapolate novel views using planar warping plus parallax correction. The contributions of this paper are twofold. First we introduce an automatic method for specifying the virtual viewpoint based on the replication of the epipolar geometry linking two reference views. Second, we present a method for generating synthetic views of a soccer ground starting from a single uncalibrated image. Experimental results using real images are shown.","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116153642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time human figure control using tracked blobs
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234038
S. Yonemoto, Hiroshi Nakano, R. Taniguchi
This paper describes a vision-based approach to human figure motion control. Our purpose is to seamlessly map human motion in the real world into virtual environments. With the aim of making computing systems better suited to users, we have developed a vision-based method for human motion analysis and synthesis. The analysis stage is implemented by blob tracking, while the synthesis stage focuses on generating realistic motion from a limited number of blobs. Synthesis is realized by combining physical constraints with additional constraints that we introduce to make the synthesized motion more realistic; good constraints are estimated by analyzing real motion-capture data. As a PUI (perceptual user interface) application, we have applied these methods to real-time 3D interaction, such as 3D direct-manipulation interfaces.
{"title":"Real-time human figure control using tracked blobs","authors":"S. Yonemoto, Hiroshi Nakano, R. Taniguchi","doi":"10.1109/ICIAP.2003.1234038","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234038","url":null,"abstract":"This paper describes a vision based human figure motion control. Our purpose is to do seamless mapping of human motion in the real world into virtual environments. With the aim of making computing systems suited for users, we have developed a vision based human motion analysis and synthesis method. The human motion analysis method is implemented by blob tracking, and the motion synthesis method is focused on generating realistic motion from a limited number of blobs. This synthesis method is realized by using physical constraints and the other constraints. In order to realize more realistic motion synthesis, we introduce additional constraints in the synthesis method. We have estimated good constraints by analyzing real motion capture data. As a PUI application, we have applied these methods to real-time 3D interaction such as 3D direct manipulation interfaces.","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125724203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical modeling via optimal context quantization
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234079
A. Krivoulets, Xiaolin Wu
Optimal context quantization with respect to minimum conditional entropy (MCECQ) has proven to be an efficient technique for high-order statistical modeling and for reducing model complexity in data compression systems. MCECQ merges contexts with similar statistics in order to reduce the size of the original model. In this technique, the number of output clusters (the model size) must be set before quantization, yet the optimal model size for the given data is usually not known in advance. We extend MCECQ to a multi-model approach to context modeling, which overcomes this problem and makes it possible to fit the model to the actual data more closely. The method is primarily intended for image compression algorithms. In our experiments, we applied the proposed technique to embedded conditional bit-plane entropy coding of wavelet transform coefficients. We show that the proposed modeling matches the performance of the fixed-size optimal model found individually for the given data using MCECQ, and in most cases is even slightly better.
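To make the merging idea concrete, here is a small Python sketch of a greedy context quantizer: it repeatedly merges the pair of context clusters whose union increases the conditional entropy the least, until a target model size is reached. This is an illustrative stand-in for MCECQ (which is usually solved more efficiently), not the authors' exact algorithm, and the function names are hypothetical.

```python
import numpy as np
from itertools import combinations

def cond_entropy(counts):
    """Conditional entropy H(symbol | context) in bits, from a
    (contexts x symbols) count matrix."""
    counts = np.asarray(counts, dtype=float)
    p_ctx = counts.sum(axis=1) / counts.sum()
    h = 0.0
    for row, pc in zip(counts, p_ctx):
        if row.sum() == 0:
            continue
        p = row / row.sum()
        p = p[p > 0]
        h += pc * -(p * np.log2(p)).sum()
    return h

def greedy_context_quantization(counts, k):
    """Greedily merge the pair of context clusters whose union raises the
    conditional entropy the least, until only k clusters remain."""
    clusters = [row.astype(float) for row in np.asarray(counts)]
    while len(clusters) > k:
        best = None
        for i, j in combinations(range(len(clusters)), 2):
            trial = (clusters[:i] + clusters[i + 1:j] + clusters[j + 1:]
                     + [clusters[i] + clusters[j]])
            h = cond_entropy(np.vstack(trial))
            if best is None or h < best[0]:
                best = (h, i, j)
        _, i, j = best
        merged = clusters[i] + clusters[j]
        clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)] + [merged]
    return np.vstack(clusters)
```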
{"title":"Hierarchical modeling via optimal context quantization","authors":"A. Krivoulets, Xiaolin Wu","doi":"10.1109/ICIAP.2003.1234079","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234079","url":null,"abstract":"Optimal context quantization with respect to the minimum conditional entropy (MCECQ) is proven to be an efficient way for high order statistical modeling and model complexity reduction in data compression systems. The MCECQ merges together contexts that have similar statistics to reduce the size of the original model. In this technique, the number of output clusters (the model size) must be set before quantization. Optimal model size for the given data is not usually known in advance. We extend the MCECQ technique to a multi-model approach for context modeling, which overcomes this problem and gives the possibilities for better fitting the model to the actual data. The method is primarily intended for image compression algorithms. In our experiments, we applied the proposed technique to embedded conditional bit-plane entropy coding of wavelet transform coefficients. We show that the performance of the proposed modeling achieves the performance of the optimal model of fixed size found individually for given data using MCECQ (and in most cases it is even slightly better).","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120864667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent road detection based on local averaging classifier in real-time environments
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234057
P. Jeong, S. Nedevschi
The aim of this paper is to obtain real-time classification for robust road region detection in both highway and rural road environments. The approach uses a local averaging classifier relying on decision trees and, in the case of altered or noisy road regions, a dedicated detection procedure. The local averaging classifier based on the decision tree provides real-time road/non-road classification: the feature vectors in the neighbourhood of a control point are analyzed, and the control point's feature vector is conditioned by the decision tree. However, this classifier performs poorly on noisy road regions, so we use an additional detection method for missing road regions. Two problematic situations arise on highways: in the first, one lane marking is missing; in the second, both lane markings are missing. In the first case, we can predict where the missing lane marking should be and apply ordinary K-means to that region. In the second case, we split the image into six parts and apply ordinary K-means to the four outer (leftmost and rightmost) regions, as sketched below. For rural roads, we likewise split the image into six parts and apply ordinary K-means as in the second highway situation. The merit of the proposed method is that it provides efficient, accurate and low-cost classification in real-time applications.
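As a rough illustration of that fallback, the sketch below splits an image into a 2 x 3 grid of six parts and runs ordinary K-means on the colours of the four outer regions (left and right columns). The grid layout, cluster count and use of scikit-learn are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_outer_regions(image, n_clusters=2):
    """Split an H x W x 3 image into a 2 x 3 grid (six parts) and run ordinary
    K-means on the pixel colours of the four outer regions, skipping the middle
    column; returns a per-pixel label map (-1 where no clustering was run)."""
    h, w, _ = image.shape
    labels = np.full((h, w), -1, dtype=int)
    for r in range(2):
        for c in (0, 2):                              # skip the middle column
            ys = slice(r * h // 2, (r + 1) * h // 2)
            xs = slice(c * w // 3, (c + 1) * w // 3)
            patch = image[ys, xs].reshape(-1, 3).astype(float)
            km = KMeans(n_clusters=n_clusters, n_init=10).fit(patch)
            labels[ys, xs] = km.labels_.reshape(image[ys, xs].shape[:2])
    return labels
```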
{"title":"Intelligent road detection based on local averaging classifier in real-time environments","authors":"P. Jeong, S. Nedevschi","doi":"10.1109/ICIAP.2003.1234057","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234057","url":null,"abstract":"The aim of this paper is to obtain real-time classification for robust road region detection in both highway and rural way environments. This approach uses a local averaging classifier relying on decision trees, and in case of altered or noisy road regions, a special intelligent detection procedure. The local averaging classifier based on the decision tree provides real-time road/nonroad classification. The main idea is that the neighbor feature vectors around the control point are analyzed, and the control point has conditioned feature vector by the decision tree. However, this algorithm performs poorly in case of noisy road regions. To overcome this problem, we use the intelligent detection method for missing road regions. Let us assume that there are two problematic situations in the highways: in the first one, a lane marking is missing. in the second one, both lane markings are missing. In the first case, we can predict where the other line marking is, and apple the ordinary K-means onto that region. In the second case, we split the image into six parts, and the ordinary K-means is applied onto the most left and right four regions. In the case of rural ways, we also split the image into six parts, and apply the ordinary K-means as in the second situation of the highways. The merits of the proposed method are that it provides efficient, accurate, and low cost classification in the real-time application.","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121339607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate spatio-temporal restoration of compact single frame defects in aged motion pictures
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234092
A. Machì, F. Collura
Spikes of brightness often locally affect single frames of aged motion pictures, caused by dust, dirt and scratches that damage the film surface. We present a method for accurate digital detection and restoration of this kind of film defect. The method evaluates both global and local intraframe disparity statistics after motion compensation and uses them to detect abnormal spikes. It recovers image structure from the same frame by linear interpolation of the defect's surroundings and refines details from the temporal neighbourhood. The weights of the blending filter are set according to the local reliability of the motion estimation maps. A texture pattern is also extracted from the spatial support areas and added to regions with limited recovery from the temporal neighbourhood. Experiments performed on synthetic sequences show very high recall and precision rates and low recovery errors. High-quality restoration of severely damaged sequences is shown.
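The following simplified Python sketch shows one way such spike detection could look: a pixel is flagged when it deviates in the same direction from both motion-compensated temporal neighbours by more than a multiple of the global disparity spread. The threshold rule and parameter are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def detect_spikes(prev_mc, cur, next_mc, k=3.0):
    """Flag pixels of frame `cur` whose brightness deviates in the same
    direction from both motion-compensated neighbours (prev_mc, next_mc) by
    more than k standard deviations of the global disparity."""
    d_prev = cur.astype(float) - prev_mc
    d_next = cur.astype(float) - next_mc
    sigma = np.std(np.concatenate([d_prev.ravel(), d_next.ravel()]))
    same_sign = np.sign(d_prev) == np.sign(d_next)
    return same_sign & (np.abs(d_prev) > k * sigma) & (np.abs(d_next) > k * sigma)
```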
{"title":"Accurate spatio-temporal restoration of compact single frame defects in aged motion pictures","authors":"A. Machì, F. Collura","doi":"10.1109/ICIAP.2003.1234092","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234092","url":null,"abstract":"Spikes of brightness often locally affect single frames of aged motion pictures, because of dust, dirt and scratches injuring the film surface. We present a method for accurate digital detection and restoration of such kinds of film defects. The method evaluates both global and local intraframe disparity statistics after motion-compensation and uses them to detect abnormal spikes. It recovers image structure from the same frame by linear interpolation of defect surroundings and refines details from the temporal neighbourhood. Weights of the blending filter are set according to local reliability of the motion estimation maps. A texture pattern is also extracted from spatial support areas, and added to regions with limited recovery from the temporal neighbourhood. Experiments performed on synthetic sequences show very high recall and precision rates, and low recovery errors. High quality restoration of severely damaged sequences is shown.","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"173 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123283742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated detection and segmentation of table of contents page and index pages from document images
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234052
Sekhar Mandal, S. Chowdhury, A. Das, B. Chanda
Identifying and segmenting the table of contents (TOC) and index pages is an obvious requirement in the development of a digital library. A digital document library is created to provide a non-labour-intensive, cheap and flexible way of storing, representing and managing paper documents in electronic form, facilitating indexing, viewing, printing and extraction of the intended portions. Information from the TOC and index pages is extracted for use in a document database, enabling effective retrieval of the required pieces of information. We present fully automatic identification and segmentation of TOC and index pages from scanned documents.
{"title":"Automated detection and segmentation of table of contents page and index pages from document images","authors":"Sekhar Mandal, S. Chowdhury, A. Das, B. Chanda","doi":"10.1109/ICIAP.2003.1234052","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234052","url":null,"abstract":"The requirement of identifying and segmenting the table of contents (TOC) and index pages in the development of a digital library is obvious. A digital document library is created to provide a non-labour intensive, cheap and flexible way of storing, representing and managing paper documents in electronic form to facilitate indexing, viewing, printing and extracting the intended portions. Information from the TOC and index pages is extracted to use in a document database for effective retrieval of the required pieces of information. We present fully automatic identification and segmentation of TOC and index pages from a scanned document.","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134282450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward an automated system for the analysis of cytogenetic abnormalities using fluorescence in situ hybridization technique
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234065
E. Catanzariti, R. Esposito, Roberta Santilli, M. Santoro
In order to build an automatic system that searches for, acquires and analyses chromosomal aberrations, we have coupled a commercial system, Metafer4 (MetaSystems, Germany), with original software (VRAIC) able to separate normal from aberrant metaphase images. Metaphases are first stained with a FISH technique, and images are then acquired in three different channels: a BLUE channel, in which all chromosomes are stained, and a RED channel and a GREEN channel, in which only specific pairs of chromosomes are stained. The analysis takes place in three stages. Images are first segmented in the BLUE channel by an edge detection technique. Edge sets not belonging to chromosomal contours are then eliminated by thresholding techniques. Finally, the segmentation obtained in the first stage is used to facilitate the detection of chromosomal aberrations in the RED and GREEN channels.
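A minimal sketch of the first stage might look as follows, using scikit-image to detect edges in the BLUE channel and discard edge pixels that do not lie on sufficiently bright regions; the specific operators and parameter values are assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.feature import canny
from skimage.filters import threshold_otsu

def segment_blue_channel(blue, sigma=2.0):
    """Detect chromosome edges in the BLUE counterstain channel and keep only
    edge pixels lying on regions brighter than an Otsu threshold.
    `blue` is a 2-D grayscale array; parameter values are illustrative."""
    edges = canny(blue, sigma=sigma)
    bright = blue > threshold_otsu(blue)
    return edges & bright
```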
{"title":"Toward an automated system for the analysis of cytogenetic abnormalities using fluorescence in situ hybridization technique","authors":"E. Catanzariti, R. Esposito, Roberta Santilli, M. Santoro","doi":"10.1109/ICIAP.2003.1234065","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234065","url":null,"abstract":"In order to build an automatic system that searches, acquires and analyses chromosomal aberrations, we have coupled a commercial system, Metafer4 (MetaSystems, Germany), with an original software (VRAIC) that is able to separate normal from aberrant metaphase images. Images of metaphases are first stained with a FISH technique and then acquired in three different channels: a BLUE channel, where all chromosomes are stained, a RED channel and a GREEN channel, where only specific pairs of chromosomes are stained. The analysis takes place in three stages. Images are first segmented in the BLUE channel by an edge detection technique. Edge sets not belonging to chromosomal contours are then eliminated by thresholding techniques. Finally, segmentation obtained in the first stage is used to facilitate detection of chromosomal aberrations in the the RED and GREEN channels.","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133398397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A topological approach for segmenting human body shape
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234029
Yijun Xiao, N. Werghi, P. Siebert
Segmentation of a 3D human body is a very challenging problem in applications that exploit human scan data. To tackle this problem, the paper proposes a topological approach based on the discrete Reeb graph (DRG), an extension of the classical Reeb graph that handles unorganized clouds of 3D points. The essence of the approach is the detection of critical nodes in the DRG, which permits the extraction of branches representing parts of the body. Because the representation of the human body shape is built upon global topological features, which are preserved as long as the overall structure of the body does not change, our approach is quite robust against noise, holes, irregular sampling, frame change and posture variation. Experimental results on real scan data demonstrate the validity of our method.
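For intuition, the following Python sketch builds a crude discrete Reeb graph from a body scan: points are binned by height, each bin is clustered, and clusters in adjacent bins that nearly touch are linked; nodes where the number of incident branches changes are candidate critical nodes. The binning, the use of DBSCAN and all parameters are illustrative assumptions rather than the paper's construction.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def discrete_reeb_graph(points, n_levels=30, eps=0.05):
    """Bin an (N, 3) point cloud by its height coordinate, cluster each bin with
    DBSCAN, and link clusters in adjacent bins that come closer than eps.
    Returns node centroids and an edge list; all parameters are illustrative."""
    z = points[:, 2]
    bins = np.linspace(z.min(), z.max(), n_levels + 1)
    nodes, edges = [], []
    prev = []                                    # clusters of the previous level
    for lo, hi in zip(bins[:-1], bins[1:]):
        level_pts = points[(z >= lo) & (z < hi)]
        if len(level_pts) == 0:
            prev = []
            continue
        labels = DBSCAN(eps=eps, min_samples=3).fit_predict(level_pts)
        cur = [level_pts[labels == lab] for lab in set(labels) if lab != -1]
        for c in cur:
            nodes.append(c.mean(axis=0))
        for i, a in enumerate(cur):
            for j, b in enumerate(prev):
                # link clusters of adjacent levels that nearly touch
                if np.min(np.linalg.norm(a[:, None] - b[None, :], axis=2)) < eps:
                    edges.append((len(nodes) - len(cur) + i,
                                  len(nodes) - len(cur) - len(prev) + j))
        prev = cur
    return np.array(nodes), edges
```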
{"title":"A topological approach for segmenting human body shape","authors":"Yijun Xiao, N. Werghi, P. Siebert","doi":"10.1109/ICIAP.2003.1234029","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234029","url":null,"abstract":"Segmentation of a 3D human body, is a very challenging problem in applications exploiting human scan data. To tackle this problem, the paper proposes a topological approach based on the discrete Reeb graph (DRG) which is an extension of the classical Reeb graph to handle unorganized clouds of 3D points. The essence of the approach concerns detecting critical nodes in the DRG, thereby permitting the extraction of branches that represent parts of the body. Because the human body shape representation is built upon global topological features that are preserved so long as the whole structure of the human body does not change, our approach is quite robust against noise, holes, irregular sampling, frame change and posture variation. Experimental results performed on real scan data demonstrate the validity of our method.","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130187044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Hough transform-based method for radial lens distortion correction
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234047
R. Cucchiara, C. Grana, A. Prati, R. Vezzani
The paper presents an approach for robust (semi-)automatic correction of radial lens distortion in images and videos. The method, based on the Hough transform, can also be applied to videos from unknown cameras that, consequently, cannot be calibrated in advance. We approximate the lens distortion by considering only the lowest-order term of the radial distortion; thus, the method relies on the fact that pure radial distortion maps straight scene lines into curves in the image. The best value of the distortion parameter is computed in a multi-resolution fashion, and the method's precision depends on the multi-resolution scale and on the resolution of the Hough space. Experiments are provided for both an outdoor, uncalibrated camera and an indoor, calibrated one. The stability of the value found across different frames of the same video demonstrates the reliability of the proposed method.
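A compact sketch of the idea, assuming a single-parameter radial model p_u = c + (p_d - c)(1 + k r_d^2): undistort edge chains believed to lie on straight scene lines for candidate values of k, score their straightness, and refine the search coarse-to-fine. For simplicity the straightness score below is a line-fit residual rather than the Hough-based measure used in the paper, and all names and ranges are hypothetical.

```python
import numpy as np

def undistort(points, k, center):
    """Single-parameter radial model: p_u = c + (p_d - c) * (1 + k * r_d^2)."""
    d = points - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d * (1.0 + k * r2)

def line_residual(points):
    """RMS distance of 2-D points from their total-least-squares line."""
    p = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(p, full_matrices=False)
    normal = vt[-1]                     # direction of smallest variance
    return np.sqrt(((p @ normal) ** 2).mean())

def estimate_k(chains, center, k_range=(-1e-6, 1e-6), levels=4, steps=11):
    """Coarse-to-fine search for the distortion parameter that best straightens
    the given edge chains (each an (N, 2) array of pixels on a straight
    scene line)."""
    lo, hi = k_range
    best_k = 0.0
    for _ in range(levels):
        ks = np.linspace(lo, hi, steps)
        scores = [sum(line_residual(undistort(c, k, center)) for c in chains)
                  for k in ks]
        best_k = ks[int(np.argmin(scores))]
        span = (hi - lo) / steps
        lo, hi = best_k - span, best_k + span
    return best_k
```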
{"title":"A Hough transform-based method for radial lens distortion correction","authors":"R. Cucchiara, C. Grana, A. Prati, R. Vezzani","doi":"10.1109/ICIAP.2003.1234047","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234047","url":null,"abstract":"The paper presents an approach for a robust (semi-)automatic correction of radial lens distortion in images and videos. This method, based on the Hough transform, has the characteristics to be applicable also on videos from unknown cameras that, consequently, can not be a priori calibrated. We approximated the lens distortion by considering only the lower-order term of the radial distortion. Thus, the method relies on the assumption that pure radial distortion transforms straight lines into curves. The computation of the best value of the distortion parameter is performed in a multi-resolution way. The method precision depends on the scale of the multi-resolution and on the Hough space's resolution. Experiments are provided for both outdoor, uncalibrated camera and an indoor, calibrated one. The stability of the value found in different frames of the same video demonstrates the reliability of the proposed method.","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132707014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptive visual texture classification and retrieval
Pub Date: 2003-09-17 | DOI: 10.1109/ICIAP.2003.1234103
S. Battiato, G. Gallo, Salvatore Nicotra
We present analysis techniques and indexing strategies aimed at supporting the classification and retrieval of textures using only perceptual features. The goal of this research is to provide a visual system that, starting from graphical cues representing relevant perceptual features of texture, interactively searches for the most similar texture among the set of candidates in the corresponding texture space. To this end, a set of relevant perceptual features used for indexing is proposed: directionality, contrast and coarseness. A graphical representation of the computed characteristics is presented, together with some examples. Finally, texture retrieval experiments using such iconic representations are presented and discussed.
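As an example of the kind of perceptual measures involved, the sketch below computes a Tamura-style contrast value and a gradient-orientation histogram as a simple proxy for directionality; the paper's exact formulations of directionality, contrast and coarseness may differ.

```python
import numpy as np

def tamura_contrast(gray):
    """Tamura-style contrast: standard deviation normalised by the fourth root
    of the kurtosis of the grey-level distribution."""
    g = gray.astype(float)
    sigma2 = g.var()
    if sigma2 == 0:
        return 0.0
    kurtosis = ((g - g.mean()) ** 4).mean() / sigma2 ** 2
    return np.sqrt(sigma2) / kurtosis ** 0.25

def directionality_histogram(gray, bins=16):
    """Histogram of gradient orientations weighted by gradient magnitude, a
    simple proxy for a directionality feature."""
    gy, gx = np.gradient(gray.astype(float))
    theta = np.arctan2(gy, gx)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)
```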
{"title":"Perceptive visual texture classification and retrieval","authors":"S. Battiato, G. Gallo, Salvatore Nicotra","doi":"10.1109/ICIAP.2003.1234103","DOIUrl":"https://doi.org/10.1109/ICIAP.2003.1234103","url":null,"abstract":"We present some analysis techniques and indexing strategies aimed to support classification and retrieval of textures using only perceptual features. The goal of this research is to provide a visual system that, starting from graphical cues representing relevant perceptual features of texture, interactively searches the most similar texture in the set of candidates in the corresponding texture space. Hence, a set of relevant perceptual features, used for indexing, is proposed: directionality, contrast and coarseness. A graphical representation of the computed characteristics is presented together with some examples. Finally, texture retrieval experiments using such iconic representations are presented and discussed.","PeriodicalId":218076,"journal":{"name":"12th International Conference on Image Analysis and Processing, 2003.Proceedings.","volume":"52 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113977131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}