
Latest publications: Fourth Canadian Conference on Computer and Robot Vision (CRV '07)

Petri Net-Based Cooperation In Multi-Agent Systems
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.49
Y. Kotb, S. Beauchemin, J. Barron
We present a formal framework for robotic cooperation in which we use an extension to Petri nets, known as workflow nets, to establish a protocol among mobile agents based on the task coverage they maintain. Our choice is motivated by the fact that Petri nets handle concurrency and that goal reachability can be theoretically established. We describe the means by which cooperation is performed with Petri nets and analyze their structural and behavioral characteristics in order to show the correctness of our framework.
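The basic Petri net semantics the framework builds on (places hold tokens; a transition fires when every input place is marked) can be sketched in a few lines. This is a generic illustration, not the paper's workflow-net protocol; the place and transition names are invented.

```python
# Minimal Petri net sketch: a transition fires when every input place has a
# token, consuming one from each input and producing one in each output.
# Workflow nets (the extension used in the paper) add a distinguished source
# and sink place on top of these semantics.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two agents cooperating on a task: agent A's work enables agent B's.
net = PetriNet({"A_ready": 1, "B_ready": 1, "task_done": 0})
net.add_transition("A_works", ["A_ready"], ["handoff"])
net.add_transition("B_works", ["B_ready", "handoff"], ["task_done"])

net.fire("A_works")
net.fire("B_works")
print(net.marking["task_done"])   # 1: the cooperative task completed
```

Because the state space of such a net is finite and explicit, properties like goal reachability can be checked mechanically, which is the theoretical appeal the abstract mentions.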
Citations: 24
Adaptive Appearance Model for Object Contour Tracking in Videos
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.9
M. S. Allili, D. Ziou
In this paper, we propose a novel object tracking algorithm in video sequences. The formulation of the object tracking is based on variational calculus, where an adaptive parametric mixture model is used for object features representation. The tracking is based on matching the object mixture models between successive frames of the sequence by using active contours while adapting the mixture model to varying object appearance changes due to illumination conditions and camera geometry. The implementation of the method is based on level set active contours which allow for automatic topology changes and stable numerical schemes. We validate our approach on examples of object tracking performed on real video sequences.
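The idea of adapting an appearance model to gradual illumination change can be illustrated with a single Gaussian updated by exponential forgetting. This is a simplified stand-in, not the authors' variational parametric-mixture formulation; all values are invented.

```python
# Illustrative sketch (not the paper's exact model): a running Gaussian
# appearance model blended toward new evidence each frame, so the tracked
# object's model follows slow appearance drift.
import math

class AdaptiveGaussian:
    def __init__(self, mean, var, rate=0.1):
        self.mean, self.var, self.rate = mean, var, rate

    def update(self, sample):
        # Exponential forgetting: blend the new sample into mean/variance.
        d = sample - self.mean
        self.mean += self.rate * d
        self.var = (1 - self.rate) * self.var + self.rate * d * d

    def likelihood(self, x):
        return math.exp(-0.5 * (x - self.mean) ** 2 / self.var) \
            / math.sqrt(2 * math.pi * self.var)

model = AdaptiveGaussian(mean=100.0, var=25.0)
for frame_intensity in [102, 104, 107, 110]:   # object slowly brightens
    model.update(frame_intensity)
print(round(model.mean, 1))
```

A mixture version would maintain several such components with weights; the paper couples that model to level-set active contours for the actual tracking.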
Citations: 3
INVICON: A Toolkit for Knowledge-Based Control of Vision Systems
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.41
O. Borzenko, Y. Lespérance, M. Jenkin
To perform as desired in a dynamic environment, a vision system must adapt to a variety of operating conditions by selecting vision modules, tuning their parameters, and controlling image acquisition. Knowledge-based (KB) controller-agents that reason over explicitly represented knowledge and interact with their environment can be used for this task; however, the lack of a unifying methodology and development tools makes KB controllers difficult to create, maintain, and reuse. This paper presents the INVICON toolkit, based on the IndiGolog agent programming language with elements from control theory. It provides a basic methodology, a vision module declaration template, a suite of control components, and support tools for KB controller development. We have evaluated INVICON in two case studies that involved controlling vision-based pose estimation systems. The case studies show that INVICON reduces the effort needed to build KB controllers for challenging domains and improves their flexibility and robustness.
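The core idea of a knowledge-based controller, explicit conditions selecting which vision module to run, can be caricatured with a rule table. This is a toy, not INVICON's API (INVICON builds on IndiGolog); module names and conditions are invented.

```python
# Hedged sketch of knowledge-based module selection: rules over explicitly
# represented operating conditions choose a vision module. All names here
# are hypothetical, for illustration only.

RULES = [
    # (condition over the world state, vision module to activate)
    (lambda s: s["lighting"] == "low",   "ir_pose_estimator"),
    (lambda s: s["target_speed"] > 1.0,  "fast_tracker"),
    (lambda s: True,                     "default_pose_estimator"),
]

def select_module(state):
    """Return the first module whose condition matches the world state."""
    for condition, module in RULES:
        if condition(state):
            return module

print(select_module({"lighting": "low", "target_speed": 0.2}))
print(select_module({"lighting": "good", "target_speed": 2.5}))
```

A real agent-programming approach goes further: the rules are part of a program the agent reasons over, so it can also plan sequences of module activations rather than react one step at a time.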
Citations: 3
Speckle Simulation Based on B-Mode Echographic Image Acquisition Model
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.61
Charles Perreault, M. Auclair-Fortier
This paper introduces a novel method to simulate B-mode medical ultrasound speckle in synthetic images. Our approach takes into account both the ultrasound image formation model and the speckle formation model. The algorithm first modifies the geometry of an ideal noiseless image to match that of a sectoral B-mode ultrasonogram, by subsampling a grid of pixels to simulate the acquisition and quantization steps of image formation. Then, speckle is added by simulating a random walk in the plane of the complex amplitude, according to the Burckhardt speckle formation model. We finally interpolate the noisy subsampled pixels in order to fill the space introduced by the sampling step and recover a complete image, as would a real ultrasonograph. Synthetic speckle images generated by this method are visually and theoretically very close to real ultrasonograms.
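The random walk in the complex-amplitude plane that the abstract refers to can be sketched directly: each scatterer contributes a unit phasor with random phase, and the pixel amplitude is the magnitude of the sum. Parameters are illustrative, not the paper's.

```python
# Sketch of the phasor-sum idea behind the Burckhardt speckle model: the
# echo at a pixel is a sum of many unit scatterer contributions with random
# phases (a 2D random walk), giving fully developed speckle. For N unit
# scatterers the expected intensity of the sum is N.
import cmath, math, random

def speckle_amplitude(n_scatterers, rng):
    total = sum(cmath.exp(1j * rng.uniform(0, 2 * math.pi))
                for _ in range(n_scatterers))
    return abs(total)

rng = random.Random(0)                      # seeded for reproducibility
samples = [speckle_amplitude(100, rng) for _ in range(2000)]
mean_intensity = sum(a * a for a in samples) / len(samples)
print(mean_intensity)                       # close to 100 for 100 scatterers
```

The paper layers this noise model on top of a geometric model of sectoral B-mode acquisition (subsampling, then interpolation), which is what makes the synthetic images resemble real ultrasonograms.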
Citations: 36
Extrinsic Recalibration in Camera Networks
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.31
C. Hermans, Maarten Dumont, P. Bekaert
This work addresses the practical problem of keeping a camera network calibrated during a recording session. When dealing with real-time applications, a robust calibration of the camera network needs to be assured, without the burden of a full system recalibration at every (un)intended camera displacement. In this paper we present an efficient algorithm to detect when the extrinsic parameters of a camera are no longer valid, and reintegrate the displaced camera into the previously calibrated camera network. When the intrinsic parameters of the cameras are known, the algorithm can also be used to build ad-hoc distributed camera networks, starting from three calibrated cameras. Recalibration is done using pairs of essential matrices, based on image point correspondences. Unlike other approaches, we do not explicitly compute any 3D structure for our calibration purposes.
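The epipolar relation the recalibration relies on can be checked numerically: for a camera pair with relative pose (R, t), the essential matrix E = [t]×R satisfies x2ᵀE x1 = 0 for corresponding normalized image points. The pose and point values below are made up for the demo.

```python
# Verify x2^T E x1 = 0 for a synthetic two-camera setup, with E = [t]x R.
# Pure-Python 3x3 arithmetic; all numbers are illustrative.
import math

def cross_matrix(t):
    tx, ty, tz = t
    return [[0, -tz, ty], [tz, 0, -tx], [-ty, tx, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

a = math.radians(10)                      # small rotation about the y-axis
R = [[math.cos(a), 0, math.sin(a)],
     [0, 1, 0],
     [-math.sin(a), 0, math.cos(a)]]
t = [0.5, 0.1, 0.0]                       # baseline between the two cameras
E = matmul(cross_matrix(t), R)

X = [0.3, -0.2, 4.0]                      # a world point seen by both cameras
x1 = [X[0] / X[2], X[1] / X[2], 1.0]      # camera 1 at the origin
Xc2 = [c + d for c, d in zip(matvec(R, X), t)]
x2 = [Xc2[0] / Xc2[2], Xc2[1] / Xc2[2], 1.0]

residual = sum(x2[i] * sum(E[i][j] * x1[j] for j in range(3)) for i in range(3))
print(abs(residual))                      # ~0: the correspondence satisfies E
```

In the paper's setting the direction is reversed: point correspondences between a displaced camera and two still-calibrated ones determine essential matrices, from which the new extrinsics are recovered without any explicit 3D reconstruction.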
Citations: 5
A non-myopic approach to visual search
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.5
Julia Vogel, Kevin P. Murphy
We show how a greedy approach to visual search - i.e., directly moving to the most likely location of the target - can be suboptimal if the target object is hard to detect. Instead, it is more efficient, and leads to higher detection accuracy, to first look for other related objects that are easier to detect. These provide contextual priors for the target that make it easier to find. We demonstrate this in simulation using POMDP models, focusing on two special cases: where the target object is contained within the related object, and where the target object is spatially adjacent to the related object.
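The trade-off can be made concrete with a toy expected-cost calculation. All probabilities here are invented for illustration; the paper's analysis uses POMDP models, not this closed-form sketch.

```python
# Toy arithmetic for the non-myopic argument: searching for an easy related
# object first, then using its contextual prior, can beat staring directly
# at likely target locations with a weak detector.

def expected_fixations(p_detect_per_fixation, max_fixations=50):
    """Expected fixations until detection (truncated geometric series)."""
    p = p_detect_per_fixation
    return sum(k * p * (1 - p) ** (k - 1) for k in range((1), max_fixations + 1))

# Greedy: look directly for a hard-to-detect target (p = 0.15 per fixation).
greedy = expected_fixations(0.15)

# Non-myopic: first find an easy related object (p = 0.8), after which the
# contextual prior boosts target detection to p = 0.5.
non_myopic = expected_fixations(0.8) + expected_fixations(0.5)

print(greedy, non_myopic)
```

With these (invented) numbers the detour through the related object roughly halves the expected number of fixations, which is the qualitative effect the paper demonstrates in simulation.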
Citations: 22
Extracting Salient Objects from Operator-Framed Images
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.30
D. Crevier
In images framed by human operators, as opposed to those taken under computer control, the position of objects can be an important clue to saliency. This paper uses the Berkeley image data set to show how locational and photometric information can be combined to extract a probability of saliency for all image pixels. This probability can then be thresholded and segmented to extract compact image regions with high probability of saliency. A self-assessment procedure allows the algorithm to evaluate the accuracy of its results. The method can extract salient regions of non-uniform color, brightness, or texture against highly variable backgrounds.
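Combining a locational cue with a photometric one can be sketched as a per-pixel product of a centre-weighted prior and local contrast, followed by a threshold. The weighting and threshold below are invented; the paper derives its probabilities from the Berkeley data set rather than from this toy formula.

```python
# Toy saliency sketch: centre prior (operators tend to frame subjects near
# the middle) times contrast against the global mean, then thresholded.
# All parameters are illustrative assumptions.
import math

def saliency_map(image, sigma=0.5):
    h, w = len(image), len(image[0])
    mean = sum(sum(row) for row in image) / (h * w)
    out = []
    for i, row in enumerate(image):
        out_row = []
        for j, v in enumerate(row):
            # Locational cue: Gaussian falloff from the image centre.
            dy, dx = (i - (h - 1) / 2) / h, (j - (w - 1) / 2) / w
            location = math.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))
            # Photometric cue: contrast against the global mean intensity.
            contrast = abs(v - mean) / 255.0
            out_row.append(location * contrast)
        out.append(out_row)
    return out

image = [[10] * 5 for _ in range(5)]
image[2][2] = 250                      # bright object at the frame centre
sal = saliency_map(image)
salient = [(i, j) for i in range(5) for j in range(5) if sal[i][j] > 0.5]
print(salient)                         # only the centred bright pixel survives
```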
Citations: 2
Monocular Range Estimation through a Double-Sided Half-Mirror Plate
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.46
M. Shimizu, M. Okutomi
This paper proposes a novel single-camera range estimation method using images transmitted through a double-sided half-mirror plate. The exit-side half-mirror reflects and transmits the light transmitted through the incident-side half-mirror. The transmitted light reaches the camera directly through the exit-side half-mirror, but some of the light reflected at the exit-side half-mirror is then reflected again at the incident-side half-mirror and also reaches the camera. These multiple paths create a layered image in which the displacement between the component images varies with object distance. The constraint in the layered image is presented. The range to the object can be derived by finding correspondences on constraint lines using the autocorrelation of the layered image with similarity indices. The correspondence position is estimated using a parabola fitting with systematic error cancellation to enhance the accuracy without iteration. Ray tracing enables computation of a rigorous range to the object. This paper presents a theoretical formulation and experimental results obtained using an actual system with a double-sided half-mirror, realized using a transparent acrylic plate and two half-mirrors on thin glass plates.
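The sub-pixel correspondence step can be sketched with the standard three-point parabola fit around a matching-score peak: fit a parabola to the best integer offset and its two neighbours and take the vertex. This is the generic technique the abstract builds on; the paper's contribution adds a systematic-error cancellation on top of it.

```python
# Standard three-point parabola fit for sub-pixel peak localization.

def subpixel_peak(y_left, y_peak, y_right):
    """Vertex offset in (-0.5, 0.5) relative to the integer peak."""
    denom = y_left - 2 * y_peak + y_right
    if denom == 0:
        return 0.0
    return 0.5 * (y_left - y_right) / denom

# Scores sampled from a true parabola peaking at x = 0.3: recovery is exact.
truth = 0.3
scores = [-(x - truth) ** 2 for x in (-1, 0, 1)]
print(subpixel_peak(*scores))
```

When the underlying score curve is not exactly parabolic, this estimator exhibits the periodic systematic bias (pixel locking) that the paper's cancellation step is designed to remove.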
Citations: 10
Establishing Visual Correspondence from Multi-Resolution Graph Cuts for Stereo-Motion
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.28
Joshua Worby, James MacLean
This paper presents the design and implementation of multi-resolution graph cuts (MRGC) for a stereo-motion framework that produces dense disparity maps. Both stereo and motion are estimated simultaneously under the original graph cuts framework. Our framework extends the problem from one to five dimensions, creating a large increase in complexity. Using three different multi-resolution graph cut algorithms, LDNR, EL and SAC, we reduce the number of pixels m and the number of labels n that limit the alpha-beta swap algorithm (with complexity O(mn^2)) required by the definition of our semi-metric smoothness function. This results in a reduction of computation time and the ability to handle larger images and larger label sets. The choice of which of the three MRGC algorithms to use determines the level of accuracy and computation time desired.
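The payoff of the multi-resolution step follows from the O(mn²) swap cost: coarsening shrinks the pixel count m, and refining around the coarse solution shrinks the label count n. The image sizes, label counts, and pyramid schedule below are invented purely to illustrate the arithmetic.

```python
# Back-of-envelope cost comparison for alpha-beta swap, cost ~ m * n^2,
# at full resolution versus a (hypothetical) coarse-to-fine schedule.

def swap_cost(pixels, labels):
    return pixels * labels ** 2

full = swap_cost(640 * 480, 64)       # full resolution, full label set

# Coarse-to-fine: solve with all labels at quarter resolution, then refine
# each finer level with only a few candidate labels around the coarse answer.
pyramid = (swap_cost(160 * 120, 64)
           + swap_cost(320 * 240, 8)
           + swap_cost(640 * 480, 8))

print(full / pyramid)                 # order-of-magnitude speedup
```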
Citations: 4
Version and vergence control of a stereo camera head by fitting the movement into the Hering's law
Pub Date: 2007-05-28 DOI: 10.1109/CRV.2007.69
J. Samarawickrama, S. Sabatini
An active vision system has to enable the implementation of reactive visual processes in real time. Given a stereoscopic vision system, the vergence angle, together with version and tilt angles, describes uniquely the fixation point in space. We interpret vision and motor control, and in particular we focus on developing and testing of a control strategy that fits the Hering's law, by studying the cooperation of vergence and version movements. The analysis of the simulation results confirmed the advantages of the Hering's law to achieve fast system reactions. We show that real-time active vergence and depth estimation become possible when the estimated disparity is reliable and fast. In this framework, the advantage of a simple and fast phase-based technique for depth estimation that allows real-time stereo processing with sub-pixel resolution is also discussed.
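The fixation geometry the abstract refers to, a point in space determined by the two cameras' gaze angles, can be sketched by triangulating the two gaze rays. The baseline and angles below are illustrative; this is basic geometry, not the paper's control law.

```python
# Triangulate the fixation point from a stereo head's gaze angles.
import math

def fixation_point(baseline, theta_left, theta_right):
    """Intersect gaze rays from cameras at (-b/2, 0) and (b/2, 0).

    Angles are measured from the straight-ahead (y) axis, positive toward
    the right camera's side.
    """
    xl, xr = -baseline / 2, baseline / 2
    # Each ray: x = x0 + y * tan(theta); solve for y where the rays meet.
    tl, tr = math.tan(theta_left), math.tan(theta_right)
    y = (xr - xl) / (tl - tr)
    x = xl + y * tl
    return x, y

# A target 1 m straight ahead of a 10 cm baseline: symmetric vergence.
b = 0.1
target = (0.0, 1.0)
theta_l = math.atan2(target[0] + b / 2, target[1])
theta_r = math.atan2(target[0] - b / 2, target[1])
x, y = fixation_point(b, theta_l, theta_r)
print(round(x, 6), round(y, 6))
```

In Hering's-law terms the same two angles are re-parameterized as a version (their mean, shared by both cameras) and a vergence (their difference), which is the coordination the paper's controller exploits.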
Citations: 11