
The 3rd Canadian Conference on Computer and Robot Vision (CRV'06): Latest Publications

Automatic Classification of Outdoor Images by Region Matching
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.15
O. V. Kaick, Greg Mori
This paper presents a novel method for image classification. It differs from previous approaches by computing image similarity based on region matching. Firstly, the images to be classified are segmented into regions or partitioned into regular blocks. Next, low-level features are extracted from each segment or block, and the similarity between two images is computed as the cost of a pairwise matching of regions according to their related features. Experiments are performed to verify that the proposed approach improves the quality of image classification. In addition, unsupervised clustering results are presented to verify the efficacy of this image similarity measure.
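The region-matching similarity described above can be sketched as an optimal assignment between per-region feature vectors. This is a minimal illustration under assumptions: the paper's actual region features and matching cost are not specified here, so the toy 2-D descriptors, the Euclidean distance, and the Hungarian-algorithm solver are all stand-ins.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def region_matching_cost(feats_a, feats_b):
    """Dissimilarity between two images, each summarised by one feature
    vector per region: the cost of the optimal pairwise region matching."""
    # Pairwise Euclidean distances between all region pairs
    cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return cost[rows, cols].sum()

# Toy example: image 2 contains the same regions as image 1, permuted
img1 = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
img2 = np.array([[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
print(region_matching_cost(img1, img2))  # 0.0: the permutation is matched away
```

Because the matching is order-invariant, two images with the same regions in different spatial layouts score as identical, which is the property the classifier exploits.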
Citations: 14
Extracting and tracking Colon’s "Pattern" from Colonoscopic Images
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.35
Hanene Chettaoui, G. Thomann, C. Amar, T. Redarce
In this paper, we propose a new method for "pattern" extraction and tracking from endoscopic images. During a colonoscopic intervention, the endoscope advances slowly, so the displacement of the endoscope tool between two successive images is small. Under this condition, it is possible to predict the set of possible positions of the target. We use this idea to develop two methods. The first method is based on region growing: continuity information is used to extract and track the colon "pattern" while resolving this technique's traditional problem, the identification of the seed point. In the second method, we introduce a notion of distance between two successive images that the "pattern" cannot exceed. We also propose shape criteria to identify diverticula. The proposed approaches are tested on a set of endoscopic images to demonstrate their effectiveness. An interpretation of the results and possible improvements is presented.
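The region-growing step of the first method can be illustrated with a minimal 4-connected grower. This sketch omits the paper's actual contribution (predicting the seed point from inter-frame continuity); the intensity-tolerance acceptance criterion and the toy frame are assumptions.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a 4-connected region from `seed`, accepting pixels whose
    intensity is within `tol` of the seed's intensity."""
    h, w = img.shape
    seen = np.zeros((h, w), dtype=bool)
    seen[seed] = True
    queue, region, ref = deque([seed]), [], float(img[seed])
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not seen[ny, nx]
                    and abs(float(img[ny, nx]) - ref) <= tol):
                seen[ny, nx] = True
                queue.append((ny, nx))
    return region

# Toy frame: a dark "pattern" line inside a bright background
frame = np.full((5, 5), 200)
frame[2, :] = 20                        # dark horizontal line
print(len(region_grow(frame, (2, 0))))  # 5: the whole dark line, nothing else
```

Starting the grower from a seed predicted by the previous frame, rather than from a hand-picked point, is what the temporal-continuity idea buys.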
Citations: 12
Integrating Animated Pedagogical Agent as Motivational Supporter into Interactive System
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.43
P. D. Silva, A. Madurapperuma, A. Marasinghe, M. Osano
In the modern world, children are interested in interacting with computers in many ways, e.g. game playing, e-learning, and chatting. This interest could be effectively exploited to develop their personality by creating interactive systems that adapt to the different emotional states and intensities of the children interacting with them. Many existing games are designed to beat children rather than encourage them to win. Further, many of these systems take neither the emotional state nor the intensity of emotions into consideration. In this paper we present an interactive multi-agent based system that recognizes a child’s emotions. A social agent uses cognitive and non-cognitive factors to estimate a child’s intensity of emotion in real time, and an autonomous/intelligent agent uses an adaptation model based on that intensity to change the game status. An animated pedagogical agent gives motivational help to encourage adaptation of the system in an interactive manner. Results show that the affective gesture recognition model recognizes a child’s emotion at a considerably high rate of over 82.5%, and that the social agent’s estimated intensity of emotion has a strong relationship with observers’ feedback except at low intensity levels.
Citations: 1
Tracking 3D free form object in video sequence
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.79
D. Merad, Jean-Yves Didier, Mihaela Scuturici
In this paper we describe an original method for 3D free-form object tracking in monocular vision. The main contribution of this article is the use of an object's skeleton to recognize, locate and track the object in real time. Indeed, this representation avoids the difficulties caused by the absence of prominent features on free-form objects, which makes the matching process easier. The skeleton is a lower-dimensional, homotopic representation of the object with a graph structure. This allowed us to use powerful tools from graph theory to perform matching between scene objects and models (the recognition step). Thereafter, we used skeleton extremities as interest points for tracking. Keywords: Tracking, 3D free form object, Skeletonization, Graph matching.
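Because a skeleton is a graph, model recognition can lean on graph invariants. As a hedged illustration only, the sketch below compares skeletons by their degree multiset, a crude necessary condition that could serve as a cheap pre-filter; the paper's actual matching uses full graph-theoretic machinery, and the "Y"-shaped toy skeletons are invented.

```python
from collections import Counter

def degree_signature(edges):
    """Coarse graph invariant: the sorted multiset of node degrees.
    Equal signatures are necessary (not sufficient) for two skeleton
    graphs to match."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return tuple(sorted(deg.values()))

# Toy skeletons as edge lists: a "Y"-shaped model vs. a simple chain
y_model = [("a", "c"), ("b", "c"), ("c", "d")]
chain   = [("a", "b"), ("b", "c"), ("c", "d")]
scene   = [("p", "r"), ("q", "r"), ("r", "s")]   # a relabelled "Y"
print(degree_signature(scene) == degree_signature(y_model))  # True
print(degree_signature(scene) == degree_signature(chain))    # False
```

Skeleton extremities correspond to degree-1 nodes in such a graph, which is why they are natural interest points for the tracking stage.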
Citations: 5
Colour-Gradient Redundancy for Real-time Spatial Pose Tracking in Autonomous Robot Navigation
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.22
H. D. Ruiter, B. Benhabib
Mobile-robot interception of, or rendezvous with, a maneuvering target requires the target’s pose to be tracked. This paper presents a novel 6 degree-of-freedom pose-tracking algorithm. The algorithm incorporates an initial-pose estimation scheme to initiate tracking, operates in real time, and is robust to large motions. Initial-pose estimation uses the on-screen position and size of the target to extract 3D position, and Principal Component Analysis (PCA) to extract orientation. Real-time operation is achieved with GPU-based filters and a novel data-reduction algorithm, which exploits an important property of colour images: the gradients of all colour channels are generally aligned. A processing rate of approximately 60 to 85 fps was obtained. Multi-scale optical flow has been adapted for use in the tracker to increase robustness to larger motions.
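The colour-gradient redundancy idea, that per-channel gradients are largely aligned, suggests collapsing three gradient fields into one. The numpy sketch below (keeping, at each pixel, the gradient of the strongest-responding channel) is an assumed reduction scheme for illustration, not the paper's GPU filter pipeline.

```python
import numpy as np

def dominant_gradient(rgb):
    """Collapse the three per-channel gradient fields into one by keeping,
    at each pixel, the gradient of the channel with the strongest response,
    exploiting the alignment of colour-channel gradients."""
    gy = np.stack([np.gradient(rgb[..., c].astype(float), axis=0)
                   for c in range(3)], axis=-1)
    gx = np.stack([np.gradient(rgb[..., c].astype(float), axis=1)
                   for c in range(3)], axis=-1)
    strongest = np.hypot(gx, gy).argmax(axis=-1)   # (H, W) channel index
    yy, xx = np.indices(strongest.shape)
    return gx[yy, xx, strongest], gy[yy, xx, strongest]

# A vertical edge: left half dark, right half bright in every channel
img = np.zeros((4, 6, 3))
img[:, 3:, :] = 1.0
gx, gy = dominant_gradient(img)
print(gx.shape, float(gx[0, 3]))  # one gradient field, strong response at the edge
```

The payoff is a two-thirds reduction in gradient data fed to later stages, at little cost when the alignment assumption holds.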
Citations: 5
Autonomous fish tracking by ROV using Monocular Camera
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.16
Jun Zhou, C. Clark
This paper concerns the autonomous tracking of fish using a Remotely Operated Vehicle (ROV) equipped with a single camera. An efficient image processing algorithm is presented that enables pose estimation of a particular species of fish - a Large Mouth Bass. The algorithm uses a series of filters including the Gabor filter for texture, projection segmentation, and geometrical shape feature extraction to find the fish's distinctive dark lines that mark the body and tail. Feature-based scaling then produces the position and orientation of the fish relative to the ROV. By applying this algorithm to each frame of a video sequence, successive relative state estimates can be obtained, which are fused across time via a Kalman Filter. Video taken from a VideoRay MicroROV operating within Paradise Lake, Ontario, Canada was used to demonstrate off-line fish state estimation. In the future, this approach will be integrated within a closed-loop controller that allows the robot to autonomously follow the fish and monitor its behavior.
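Fusing per-frame estimates across time with a Kalman filter, as the abstract describes, can be sketched in 1-D with a constant-velocity model. The state model, noise values, and measurement sequence below are invented for illustration; the paper's filter operates on the full relative fish pose.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Fuse noisy per-frame position estimates with a constant-velocity
    Kalman filter; returns the smoothed position track."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    H = np.array([[1.0, 0.0]])                # we observe position only
    Q, R = q * np.eye(2), np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])  # state: [position, velocity]
    P = np.eye(2)
    track = []
    for z in measurements:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)          # update
        P = (np.eye(2) - K @ H) @ P
        track.append(float(x[0, 0]))
    return track

# A fish moving at roughly constant speed, observed with small perturbations
zs = [0.0, 1.2, 1.9, 3.1, 3.8, 5.2, 5.9, 7.1, 7.9, 9.0]
print(round(kalman_track(zs)[-1], 1))
```

The velocity component of the state is what lets the filter coast through frames where the image-processing stage momentarily loses the fish.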
Citations: 32
An Enhanced Positioning Algorithm for a Self-Referencing Hand-Held 3D Sensor
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.10
R. Khoury
This study deals with the design of an enhanced self-referencing algorithm for a typical hand-held 3D sensor. The enhancement we propose takes the form of a new algorithm which forms and matches triangles out of the scatter of observed reference points and the sensor’s list of reference points. Three different techniques for selecting which triangles to consider in each scatter of points are evaluated, and theoretical arguments and experimental results are used to determine the best of the three.
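Forming and matching triangles over point scatters can be sketched with a side-length signature, which is invariant to the rotation and translation between the sensor's list and the observed scatter. The exhaustive enumeration and the tolerance value below are assumptions; the paper's contribution is precisely the smarter triangle-selection techniques this sketch omits.

```python
from itertools import combinations
import math

def tri_signature(p, q, r):
    """Sorted side lengths: identical for congruent triangles regardless
    of rotation, translation, or vertex ordering."""
    return tuple(sorted((math.dist(p, q), math.dist(q, r), math.dist(r, p))))

def match_triangles(observed, model, tol=1e-6):
    """Pair observed point triples with model triples whose triangles
    are (near-)congruent."""
    matches = []
    for obs in combinations(observed, 3):
        s_obs = tri_signature(*obs)
        for mod in combinations(model, 3):
            if all(abs(a - b) <= tol
                   for a, b in zip(s_obs, tri_signature(*mod))):
                matches.append((obs, mod))
    return matches

model    = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
observed = [(10.0, 10.0), (14.0, 10.0), (10.0, 13.0)]  # same points, translated
print(len(match_triangles(observed, model)))  # 1
```

Since exhaustive enumeration is cubic in the number of points, which triangles to form is the performance-critical choice the paper's three techniques address.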
Citations: 6
Underwater 3D Mapping: Experiences and Lessons learned
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.80
A. Hogue, A. German, J. Zacher, M. Jenkin
This paper provides details on the development of a tool to aid in 3D coral reef mapping designed to be operated by a single diver and later integrated into an autonomous robot. We discuss issues that influence the deployment and development of underwater sensor technology for 6DOF hand-held and robotic mapping. We describe our current underwater vision-based mapping system, some of our experiences, lessons learned, and discuss how this knowledge is being incorporated into our underwater sensor.
Citations: 23
An Iterative Super-Resolution Reconstruction of Image Sequences using a Bayesian Approach with BTV prior and Affine Block-Based Registration
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.12
V. Patanavijit, S. Jitapunkul
Traditional SR image registration is based on a translational motion model; therefore, super-resolution can be applied only to sequences with simple translational motion. In this paper, we present a novel image registration, the fast affine block-based registration, for performing super-resolution using multiple images. We propose a super-resolution reconstruction that uses this high-accuracy registration algorithm [15] and is based on a maximum a posteriori estimation technique that minimizes a cost function. The L1 norm is used for measuring the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation. Bilateral regularization is used as prior knowledge for removing outliers, producing sharp edges and forcing interpolation along edges rather than across them. The experimental results show that the proposed reconstruction can be applied to real sequences such as Suzie.
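The L1 data term can be minimised by steepest descent on the sign of the residual, which is what makes the estimate robust to outlier frames. The 1-D factor-2 decimation operator below and the omission of the bilateral-TV prior and of the registration step are simplifying assumptions for illustration.

```python
import numpy as np

def decimate(x):
    """D: toy low-resolution operator (factor-2 subsampling)."""
    return x[::2]

def decimate_T(r, n):
    """D^T: adjoint of `decimate`, scattering residuals onto the HR grid."""
    g = np.zeros(n)
    g[::2] = r
    return g

def l1_super_resolve(frames, n, iters=200, step=0.05):
    """Steepest descent on the L1 data term  sum_k ||D x - y_k||_1.
    The *sign* of the residual (not the residual itself) is back-projected,
    which caps the influence of any single outlier frame.  The bilateral-TV
    regularizer of the full method is omitted for brevity."""
    x = np.zeros(n)
    for _ in range(iters):
        g = sum(decimate_T(np.sign(decimate(x) - y), n) for y in frames)
        x -= step * g
    return x

y = np.array([1.0, 2.0, 3.0])          # two identical LR observations
hr = l1_super_resolve([y, y], n=6)
print(hr[::2])                         # approximately [1. 2. 3.]
```

Note that the odd HR samples are unobserved by this toy operator and stay at their initialisation; in the full method the regularizer is what fills in such null-space components.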
Citations: 11
Object Boundary Detection in Ultrasound Images
Pub Date : 2006-06-07 DOI: 10.1109/CRV.2006.51
Moi Hoon Yap, E. Edirisinghe, H. Bez
This paper presents a novel approach to boundary detection of regions of interest (ROI) in ultrasound images, applied specifically to ultrasound breast images. In the proposed method, histogram equalization is used to preprocess the ultrasound images, followed by a hybrid filtering stage that combines a nonlinear diffusion filter with a linear filter. Subsequently, the multifractal dimension is used to analyse the visually distinct areas of the ultrasound image. Finally, using different threshold values, region-growing segmentation is used to partition the image. The partition with the highest Radial Gradient Index (RGI) is selected as the lesion. A total of 200 images were used in the analysis of the presented results. We compare the performance of our algorithm with two well-known methods proposed by Kupinski et al. and Joo et al., and show that the proposed method performs better at solving the boundary-detection problem in ultrasound images.
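A Radial Gradient Index for partition selection can be illustrated as the mean alignment between image gradients and the outward radial direction over a candidate region; blob-like lesions score close to 1. The exact normalisation and the synthetic cone image below are assumptions, not the paper's precise RGI formulation.

```python
import numpy as np

def radial_gradient_index(img, mask):
    """Mean cosine between the image gradient and the outward radial
    direction over the masked pixels; regions whose intensity rises
    radially away from the centre score close to 1."""
    gy, gx = np.gradient(img.astype(float))
    ys, xs = np.nonzero(mask)
    ry, rx = ys - ys.mean(), xs - xs.mean()          # outward radial vectors
    rnorm = np.hypot(ry, rx) + 1e-9
    gnorm = np.hypot(gy[ys, xs], gx[ys, xs]) + 1e-9
    align = (gy[ys, xs] * ry + gx[ys, xs] * rx) / (rnorm * gnorm)
    return float(align.mean())

# Synthetic "lesion": intensity rises radially from the centre
yy, xx = np.mgrid[0:21, 0:21]
dist = np.hypot(yy - 10, xx - 10)
rgi = radial_gradient_index(dist, dist < 6)
print(rgi)  # close to 1: gradients point almost exactly outward
```

Ranking the candidate partitions from the different thresholds by such a score and keeping the maximiser is the selection step the abstract describes.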
Citations: 25