
14th International Conference on Image Analysis and Processing (ICIAP 2007): Latest Publications

Corner Displacement from Motion Blur
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.47
G. Boracchi, V. Caglioti
We propose a novel procedure for estimating blur in a single image corrupted by rigid camera motion during the exposure. This blur is often approximated as space-invariant, even though the assumption holds, for example, only on small image regions in perspective images captured during camera movement. Our algorithm separately analyzes selected image regions containing a corner, and in each region the blur is described by its direction and extent. The algorithm works directly in the spatial domain, exploiting gradient vectors at pixels belonging to the blurred corner edges. It has been successfully tested on both synthetic and real images, showing good performance even on small image regions and in the presence of noise.
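As context for the gradient-based idea, here is a minimal sketch (not the authors' corner-based procedure) of recovering a space-invariant blur direction from image gradients: blur smooths intensity changes along the motion direction, so the structure tensor's weakest eigenvector approximates that direction. The function name and the box-blur test setup are illustrative assumptions.

```python
import numpy as np

def estimate_blur_direction(region):
    """Estimate the dominant motion-blur direction (radians, modulo pi).

    Motion blur smooths intensity changes along the motion direction, so
    gradient energy is weakest along it; the structure tensor's eigenvector
    with the smallest eigenvalue approximates the blur direction.
    """
    gy, gx = np.gradient(region.astype(float))
    # Structure tensor accumulated over the whole region.
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    eigvals, eigvecs = np.linalg.eigh(J)   # eigenvalues in ascending order
    vx, vy = eigvecs[:, 0]                 # smallest-eigenvalue eigenvector
    return np.arctan2(vy, vx) % np.pi
```

On a noise image box-blurred along the rows, the estimate comes out close to 0 (horizontal); the paper's method instead analyzes individual corner regions, which also yields the blur extent.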
Citations: 4
Segmenting Moving Objects in MPEG Videos in the Presence of Camera Motion
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.115
R. Ewerth, M. Schwalb, P. Tessmann, Bernd Freisleben
The distinction between translational and rotational camera motion and the recognition of moving objects are important topics for scientific film studies. In this paper, we present an approach to distinguish between camera and object motion in MPEG videos and to provide a pixel-accurate segmentation of moving objects. Compressed-domain features are used as far as possible in order to reduce computation time. First, camera motion parameters are estimated and translational movements are distinguished from rotational movements based on a three-dimensional (3D) camera model. Then, motion vectors that do not fit the camera motion estimate are assigned to object clusters. The moving-object information is used to refine the camera motion estimate, and a novel compressed-domain tracking algorithm is applied to verify the temporal consistency of detected objects. In contrast to previous approaches, tracking both moving objects and background makes it possible to perform their separation iteratively only once per shot. The object boundary is estimated with pixel accuracy via active contour models. Experimental results demonstrate the feasibility of the proposed algorithm.
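A heavily simplified sketch of the vector-assignment step: here a robust median translation stands in for the paper's full 3D camera model, and motion vectors with large residuals become moving-object candidates. The function name and threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def split_camera_object_vectors(vectors, thresh=2.0):
    """Crudely split block motion vectors into camera vs. object motion.

    A robust (median) global translation approximates the camera motion;
    vectors whose residual exceeds `thresh` pixels are flagged as
    moving-object candidates for subsequent clustering.
    """
    vectors = np.asarray(vectors, dtype=float)
    camera = np.median(vectors, axis=0)              # robust global estimate
    residual = np.linalg.norm(vectors - camera, axis=1)
    is_object = residual > thresh
    return camera, is_object
```

In the paper, the flagged vectors are clustered into objects, which in turn refine the camera motion estimate; this sketch shows only the initial split.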
Citations: 9
Automatic Handwriting Identification on Medieval Documents
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.33
M. Bulacu, Lambert Schomaker
In this paper, we evaluate the performance of text-independent writer identification methods on a handwriting dataset containing medieval English documents. Applicable identification rates are achieved by combining textural features (joint directional probability distributions) with allographic features (grapheme-emission distributions). The aim is to develop an automatic handwriting identification tool that can assist the paleographer in determining the authorship of historical manuscripts.
Citations: 50
Noise versus Facial Expression on 3D Face Recognition
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.94
Chauã C. Queirolo, Maurício Pamplona Segundo, O. Bellon, Luciano Silva
This paper presents a new method for 3D face recognition. The method uses a Simulated Annealing-based approach for image registration with the surface interpenetration measure (SIM) to perform a precise matching between two face images. The recognition score is obtained by combining the SIM scores of four different face regions after their alignment. Experiments were conducted on two databases with a variety of facial expressions. The images from the databases were classified according to noise level and facial expression, allowing the analysis of each particular effect on 3D face recognition. The method achieves a verification rate of 99.9%, at a false acceptance rate (FAR) of 0%, on the FRGC ver 2.0 database when only noiseless, neutral-expression face images are used. The results on face images with expressions and noise show that subjects can still be recognized with a verification rate of 87.5%, at a FAR of 0%.
Citations: 11
Visual feature group matching for autonomous robot localization
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.137
E. Frontoni, P. Zingaretti
The Scale Invariant Feature Transform (SIFT) has been successfully applied to robot vision, object recognition, motion estimation, etc. In this work, we propose a SIFT improvement that makes feature extraction and matching more robust by adding a feature group matching layer, which takes into account mutual spatial relations between features. The feature group matching is very fast to compute and leads to interesting results, above all because of the absence of outliers. Results of vision-based robot localization using the proposed approach are presented.
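The feature-group idea can be illustrated with a toy consistency check, an assumption-laden stand-in for the paper's matching layer: candidate matches vote for each other when their mutual distances are preserved between the two images, and poorly supported matches are rejected as outliers.

```python
import numpy as np
from itertools import combinations

def consistent_match_mask(pts_a, pts_b, tol=0.1):
    """Filter candidate feature matches by mutual spatial relations.

    pts_a[i] is tentatively matched to pts_b[i]; every pair of matches
    votes for both of its members when the inter-feature distance is
    preserved (within a relative tolerance) between the two images.
    Matches supported by fewer than half the others are rejected.
    """
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    n = len(pts_a)
    votes = np.zeros(n, dtype=int)
    for i, j in combinations(range(n), 2):
        da = np.linalg.norm(pts_a[i] - pts_a[j])
        db = np.linalg.norm(pts_b[i] - pts_b[j])
        if abs(da - db) <= tol * max(da, db, 1e-9):
            votes[i] += 1
            votes[j] += 1
    return votes >= (n - 1) // 2
```

Distance preservation assumes a roughly rigid image-to-image motion; the paper's layer reasons about mutual spatial relations between SIFT features more generally.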
Citations: 3
On Genuine Connectivity Relations Based on Logical Predicates
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.96
P. Soille
This paper introduces a framework for the generation of genuine connectivity relations whose equivalence classes (called connected components) define unique partitions of the definition domain of a given grey-tone image. The framework exploits the total ordering relation between the alpha-connected components of a pixel (two pixels are alpha-connected if there exists at least one path joining them such that the intensity differences between successive pixels of the path do not exceed a threshold value alpha). Genuine connectivity relations are then obtained by considering the largest alpha-connected components satisfying one or more logical predicates, such as the variance of the intensity values of the alpha-connected component not exceeding a given threshold. Fine-to-coarse hierarchies of partitions are generated by carefully varying the input threshold values. The proposed framework has the striking property of uniqueness: in contrast to most region-growing procedures, the results do not depend on the pixel processing order and are fully defined by the threshold values.
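The alpha-connected component definition quoted above maps directly onto a breadth-first flood fill; the sketch below implements just that definition (the logical-predicate filtering and the selection of the largest satisfying component are omitted).

```python
from collections import deque

def alpha_connected_component(img, seed, alpha):
    """Alpha-connected component of `seed` in a 2D intensity grid.

    Returns the set of pixels reachable from `seed` via 4-neighbour paths
    whose successive intensity differences never exceed `alpha`.
    """
    rows, cols = len(img), len(img[0])
    comp = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in comp
                    and abs(img[nr][nc] - img[r][c]) <= alpha):
                comp.add((nr, nc))
                queue.append((nr, nc))
    return comp
```

Increasing alpha can only grow a pixel's component, which is the total ordering the paper exploits to build fine-to-coarse hierarchies of partitions.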
Citations: 40
Tensor Voting Fields: Direct Votes Computation and New Saliency Functions
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.124
P. Campadelli, G. Lombardi
The tensor voting framework (TVF), proposed by Medioni et al., has proved its effectiveness in perceptual grouping of arbitrary-dimensional data. In the computer vision and image processing fields, the algorithm has been applied to problems such as stereo matching, 3D reconstruction, and image inpainting. The TVF technique can detect and remove a large percentage of outliers, but unfortunately it does not generate satisfactory results when the data are corrupted by additive noise. In this paper, a new direct votes computation algorithm for high-dimensional spaces is described, and a parametric class of decay functions is proposed to deal with noisy data. Preliminary comparative results between the original TVF and our algorithm are shown on synthetic data.
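For context, a sketch of the classic stick-vote decay from Medioni's framework, the kind of saliency function that parametric classes like the one proposed here generalize; the curvature weight `c` is an illustrative parameter, not a value from the paper.

```python
import numpy as np

def stick_decay(s, kappa, sigma, c=1.0):
    """Gaussian decay of a stick vote's strength with arc length s and
    curvature kappa of the circular path joining voter and receiver."""
    return float(np.exp(-(s**2 + c * kappa**2) / sigma**2))
```

Votes thus weaken with distance and with how sharply the connecting path bends, so smooth, nearby continuations accumulate the most saliency.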
Citations: 3
Calibration and Image Generation of Mobile Projector-Camera Systems
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.39
K. Hamada, J. Sato
The projector-camera system has recently been studied extensively as a new kind of information presentation system. To generate screen images properly, it is important to calibrate projector-camera systems accurately. Existing methods for calibrating projector-camera systems are based on 4 markers on the screen and 4 light projections from the projector, and thus require at least 8 basis points in total in the images. However, it is not easy to track 8 or more basis points reliably in images if the projector-camera system moves arbitrarily. Thus, in this paper we propose a method for generating screen images properly from fewer basis points in the camera images.
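The point counting above reflects that a screen-to-image planar homography is determined, up to scale, by 4 non-degenerate point correspondences (two linear constraints each for 8 unknowns). A standard Direct Linear Transform sketch (not the paper's reduced-point method) makes this concrete:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 planar homography H mapping src points to dst.

    Each correspondence (x, y) -> (u, v) yields two linear equations in the
    9 entries of H, so 4 point pairs in general position determine H up to
    scale; the solution is the null vector of the stacked constraint matrix.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)       # right singular vector of smallest s.v.
    return H / H[2, 2]             # fix the projective scale
```

With one homography per plane (screen markers, projected points), 4 + 4 basis points calibrate the existing systems; the paper's contribution is getting by with fewer.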
Citations: 3
An Uncalibrated View-Synthesis Pipeline
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.24
Andrea Fusiello, L. Irsara
This paper deals with view synthesis based on the relative affine structure. It describes a complete pipeline that, starting from uncalibrated images, produces a virtual sequence with viewpoint control. Experiments illustrate the approach.
Citations: 7
Computing Epipolar Geometry from Unsynchronized Cameras
Pub Date : 2007-09-10 DOI: 10.1093/ietisy/e91-d.8.2171
Ying Piao, J. Sato
Recently, many application systems using a large number of cameras have been developed. If 3D points are observed from synchronized cameras, the multiple-view geometry of these cameras can be computed and a 3D reconstruction of the scene is available. Thus, the synchronization of multiple cameras is essential. In this paper, we propose a method for finding the synchronization of multiple cameras and for computing the epipolar geometry from uncalibrated and unsynchronized cameras. In particular, we use the affine invariance of frame numbers of camera images to find the synchronization. The proposed method is tested on real image sequences taken with uncalibrated and unsynchronized cameras.
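Two free-running cameras with constant (possibly different) frame rates relate their frame numbers affinely, t_b = rho * t_a + delta. Once corresponding events are identified in both sequences, that relation can be fit by least squares; the helper below is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def fit_frame_affine(frames_a, frames_b):
    """Least-squares fit of frames_b ~= rho * frames_a + delta.

    rho models the frame-rate ratio between the two unsynchronized
    cameras and delta the offset between their recording starts.
    """
    frames_a = np.asarray(frames_a, dtype=float)
    frames_b = np.asarray(frames_b, dtype=float)
    A = np.column_stack([frames_a, np.ones_like(frames_a)])
    (rho, delta), *_ = np.linalg.lstsq(A, frames_b, rcond=None)
    return rho, delta
```

With rho and delta known, frames can be paired across cameras and the epipolar geometry estimated from the paired views.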
Citations: 8