Latest publications: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
3D priors for scene learning from a single view
D. Rother, K. A. Patwardhan, I. Aganj, G. Sapiro
A framework for scene learning from a single still video camera is presented in this work. In particular, the camera transformation and the direction of the shadows are learned using information extracted from pedestrians walking in the scene. The proposed approach poses the scene learning estimation as a likelihood maximization problem, efficiently solved via factorization and dynamic programming, and amenable to an online implementation. We introduce a 3D prior to model the pedestrian's appearance from any viewpoint, and learn it using a standard off-the-shelf consumer video camera and the Radon transform. This 3D prior or "appearance model" is used to quantify the agreement between the tentative parameters and the actual video observations, taking into account not only the pixels occupied by the pedestrian, but also those occupied by his shadows and/or reflections. The presentation of the framework is complemented with an example of a casual video scene showing the importance of the learned 3D pedestrian prior and the accuracy of the proposed approach.
Citations: 8
A methodology for quality assessment in tensor images
E. Muñoz-Moreno, S. Aja‐Fernández, M. Martín-Fernández
Since tensor usage has become increasingly popular in image processing, assessing the quality of tensor images is necessary for evaluating the advanced processing algorithms that deal with this kind of data. In this paper, we describe the methodology that should be followed to extend well-known image quality measures to tensor data. Two of these measures, based on structural comparison, are adapted to tensor images, and their performance is shown on a set of examples. These experiments highlight the advantages of structure-based measures, as well as the need to consider all tensor components in the quality assessment.
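The component-wise extension of a structural-comparison measure can be illustrated with a toy sketch. This is not the authors' measure: the simplified SSIM-style structure term, the function name, and the `(H, W, C)` layout (with `C` holding the tensor components) are all assumptions for illustration.

```python
import numpy as np

def structural_similarity_tensor(t1, t2, eps=1e-9):
    """Toy structural-comparison score between two tensor images.

    t1, t2: arrays of shape (H, W, C), where C holds the tensor
    components (e.g. the 6 unique entries of a 3x3 symmetric tensor).
    Returns the mean over components of a simplified SSIM-style
    structure term  s = (2*cov + eps) / (var1 + var2 + eps).
    """
    scores = []
    for c in range(t1.shape[-1]):
        a, b = t1[..., c].ravel(), t2[..., c].ravel()
        cov = np.mean((a - a.mean()) * (b - b.mean()))
        s = (2 * cov + eps) / (a.var() + b.var() + eps)
        scores.append(s)
    # Averaging over components reflects the point that all tensor
    # components should enter the assessment, not just one channel.
    return float(np.mean(scores))
```

Identical images score 1 and anti-correlated images score near -1, mirroring the behaviour of the scalar structure term in SSIM.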
Citations: 1
Active sampling via tracking
P. Roth, H. Bischof
To learn an object detector, labeled training data is required. Since unlabeled training data is often given as an image sequence, we propose a tracking-based approach to minimize the manual effort when learning an object detector. The main idea is to apply a tracker within an active on-line learning framework for selecting and labeling unlabeled samples. For that purpose, the current classifier is evaluated on a test image and the obtained detection result is verified by the tracker. In this way the most valuable samples can be estimated and used for updating the classifier. Thus, the number of needed samples can be reduced and an incrementally better detector is obtained. To enable efficient learning (i.e., real-time performance) and to assure robust tracking results, we apply on-line boosting for both learning and tracking. If the tracker can be initialized automatically, no user interaction is needed and we have an autonomous learning/labeling system. In the experiments the approach is evaluated in detail for learning a face detector. In addition, to show its generality, results for completely different objects are also presented.
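The verification step can be sketched in miniature, assuming (hypothetically) that detections and the tracker output are axis-aligned boxes and that agreement is measured by intersection-over-union. The function names and threshold are illustrative; the paper's actual on-line boosting machinery is not modeled here.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def select_training_samples(detections, track_box, thresh=0.5):
    """Split detector outputs into tracker-verified positives and
    likely false positives, using the tracker's box as the oracle."""
    positives = [d for d in detections if iou(d, track_box) >= thresh]
    negatives = [d for d in detections if iou(d, track_box) < thresh]
    return positives, negatives
```

In an on-line loop, the verified positives and the rejected detections would both be fed back as updates to the classifier.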
Citations: 10
Adaptive color classification for structured light systems
P. Fechteler, P. Eisert
We present a system that captures high-accuracy 3D models of faces from just one photo, without the need for specialized hardware: just a consumer-grade digital camera and a beamer (projector). The proposed 3D face scanner utilizes structured light techniques: a colored pattern is projected onto the face of interest while a photo is taken. Then, the 3D geometry is calculated from the distortions of the pattern detected in the face. This is performed by triangulating the pattern found in the captured image with the projected one.
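The triangulation step can be sketched as a ray-plane intersection, assuming a pinhole camera at the origin and a calibrated light plane per projected stripe. This is a generic structured-light formulation, not necessarily the authors' exact parametrization.

```python
import numpy as np

def triangulate_ray_plane(ray_dir, plane_normal, plane_point):
    """Intersect a camera ray (through the origin) with the light
    plane cast by one projector stripe; returns the 3D point.

    ray_dir: direction of the back-projected pixel ray.
    plane_normal, plane_point: the calibrated stripe plane n . (x - p0) = 0.
    """
    d = np.asarray(ray_dir, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    p0 = np.asarray(plane_point, dtype=float)
    denom = n @ d
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    t = (n @ p0) / denom      # solve n . (t d) = n . p0 for t
    return t * d
```

Running this once per decoded pixel-to-stripe correspondence yields the dense depth map of the face.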
Citations: 93
Embedded contours extraction for high-speed scene dynamics based on a neuromorphic temporal contrast vision sensor
A. Belbachir, M. Hofstätter, Nenad Milosevic, P. Schön
The paper presents a compact vision system for efficient contours extraction in high-speed applications. By exploiting the ultra-high temporal resolution and the sparse representation of the sensor's data in reacting to scene dynamics, the system fosters efficient embedded computer vision for ultra-high-speed applications. The results reported in this paper show the sensor output quality for a wide range of object velocities (5-40 m/s), and demonstrate the independence of the object data volume from the velocity, as well as the steadiness of the object quality. The influence of object velocity on high-performance embedded computer vision is also discussed.
Citations: 4
Gromov-Hausdorff distances in Euclidean spaces
Facundo Mémoli
The purpose of this paper is to study the relationship between measures of dissimilarity between shapes in Euclidean space. We first concentrate on the pair Gromov-Hausdorff distance (GH) versus Hausdorff distance under the action of Euclidean isometries (EH). Then, we (1) show they are comparable in a precise sense that is not the linear behaviour one would expect and (2) explain the source of this phenomenon via explicit constructions. Finally, (3) by conveniently modifying the expression for the GH distance, we recover the EH distance. This allows us to uncover a connection that links the problem of computing GH and EH and the family of Euclidean Distance Matrix completion problems. The second pair of dissimilarity notions we study is the so-called Lp-Gromov-Hausdorff distance versus the Earth Mover's distance under the action of Euclidean isometries. We obtain results about comparability in this situation as well.
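As a concrete reference point for the EH side of the comparison, here is a brute-force numerical sketch: the symmetric Hausdorff distance between finite point sets, minimized over a sampled set of planar rotations about the origin. Translations and the GH side are omitted for brevity; this is an illustration, not the paper's method.

```python
import numpy as np

def hausdorff(X, Y):
    """Symmetric Hausdorff distance between point sets of shape (n, 2), (m, 2)."""
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def hausdorff_under_rotations(X, Y, steps=360):
    """Crude approximation of the EH idea in the plane: minimize the
    Hausdorff distance over a grid of rotations about the origin."""
    best = np.inf
    for theta in np.linspace(0.0, 2 * np.pi, steps, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        best = min(best, hausdorff(X @ R.T, Y))
    return best
```

For two congruent shapes this quantity drops to (nearly) zero at the aligning rotation, while the plain Hausdorff distance between them can stay large.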
Citations: 83
Entropy-based active learning for object recognition
Alex Holub, P. Perona, M. Burl
Most methods for learning object categories require large amounts of labeled training data. However, obtaining such data can be a difficult and time-consuming endeavor. We have developed a novel, entropy-based "active learning" approach which makes significant progress towards this problem. The main idea is to sequentially acquire labeled data by presenting an oracle (the user) with unlabeled images that will be particularly informative when labeled. Active learning adaptively prioritizes the order in which the training examples are acquired, which, as shown by our experiments, can significantly reduce the overall number of training examples required to reach near-optimal performance. At first glance this may seem counter-intuitive: how can the algorithm know whether a group of unlabeled images will be informative, when, by definition, there is no label directly associated with any of the images? Our approach is based on choosing an image to label that maximizes the expected amount of information we gain about the set of unlabeled images. The technique is demonstrated in several contexts, including improving the efficiency of Web image-search queries and open-world visual learning by an autonomous agent. Experiments on a large set of 140 visual object categories taken directly from text-based Web image searches show that our technique can provide large improvements (up to a 10x reduction in the number of training examples needed) over baseline techniques.
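A common simplification of such entropy-based selection is to query the unlabeled sample whose predicted label distribution has maximal Shannon entropy under the current model. The sketch below illustrates that criterion only, not the paper's full expected-information-gain computation over the whole unlabeled set.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy (in nats) of each row of class probabilities.

    probs: array of shape (n_samples, n_classes), rows summing to 1.
    """
    p = np.clip(probs, 1e-12, 1.0)   # guard against log(0)
    return -(p * np.log(p)).sum(axis=1)

def pick_query(probs):
    """Index of the unlabeled sample the current model is most
    uncertain about (max-entropy query selection)."""
    return int(np.argmax(predictive_entropy(probs)))
```

The selected index is then handed to the oracle for labeling, and the classifier is retrained before the next query.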
Citations: 252
A parallel color-based particle filter for object tracking
Henry Medeiros, Johnny Park, A. Kak
Porting well-known computer vision algorithms to low-power, high-performance computing devices such as SIMD linear processor arrays can be a challenging task. One especially useful such algorithm is the color-based particle filter, which has been applied successfully by many research groups to the problem of tracking non-rigid objects. In this paper, we propose an implementation of the color-based particle filter suitable for SIMD processors. The main focus of our work is on the parallel computation of the particle weights. This step is the major bottleneck of standard implementations of the color-based particle filter since it requires the knowledge of the histograms of the regions surrounding each hypothesized target position. We expect this approach to perform faster in an SIMD processor than an implementation in a standard desktop computer even running at much lower clock speeds.
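The weighting step that the paper parallelizes can be sketched sequentially as follows, assuming the standard ingredients of color-based particle filters: a Bhattacharyya similarity between normalized color histograms and a Gaussian likelihood. The bandwidth `sigma` and the function names are illustrative, not taken from the paper.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def particle_weights(candidate_hists, target_hist, sigma=0.1):
    """Weight each particle by the color similarity between its
    region's histogram and the target histogram; weights sum to 1."""
    d2 = np.array([1.0 - bhattacharyya(h, target_hist)
                   for h in candidate_hists])     # squared Bhattacharyya distance
    w = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian likelihood
    return w / w.sum()
```

Since each particle's histogram and weight are computed independently, this loop is exactly the part that maps naturally onto an SIMD array.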
Citations: 41
Codomain scale space and regularization for high angular resolution diffusion imaging
L. Florack
Regularization is an important aspect in high angular resolution diffusion imaging (HARDI), since, unlike with classical diffusion tensor imaging (DTI), there is no a priori regularity of raw data in the co-domain, i.e. considered as a multispectral signal for fixed spatial position. HARDI preprocessing is therefore a crucial step prior to any subsequent analysis, and some insight into regularization paradigms and their interrelations is compulsory. In this paper we posit a codomain scale space regularization paradigm that has hitherto not been applied in the context of HARDI. Unlike previous (first- and second-order) schemes it is based on infinite-order regularization, yet can be fully operationalized. We furthermore establish a closed-form relation with first-order Tikhonov regularization via the Laplace transform.
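For orientation, the generic first-order Tikhonov scheme referred to above is the standard one (this is the textbook formulation, not the paper's infinite-order functional): it minimizes

```latex
E[u] \;=\; \int \Big( \big(u(x) - f(x)\big)^{2} \;+\; \alpha\,\lVert \nabla u(x) \rVert^{2} \Big)\, dx ,
\qquad
\hat{u}(\omega) \;=\; \frac{\hat{f}(\omega)}{1 + \alpha\,\lVert \omega \rVert^{2}} ,
```

where the second expression is the minimizer written in the Fourier domain, a low-pass filter of the data f. The paper's contribution is a closed-form link between this first-order filter family and its infinite-order codomain scale-space scheme via the Laplace transform.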
Citations: 21
Rotational flows for interpolation between sampled surfaces
J. Levy, M. Foskey, S. Pizer
We introduce a locally defined shape-maintaining method for interpolating between corresponding oriented samples (vertices) from a pair of surfaces. We have applied this method to interpolate synthetic data sets in two and three dimensions and to interpolate medially represented shape models of anatomical objects in three dimensions. In the plane, each oriented vertex follows a circular arc as if it was rotating to its destination. In three dimensions, each oriented vertex moves along a helical path that combines in-plane rotation with translation along the axis of rotation. We show that our planar method provides shape-maintaining interpolations when the reference and target objects are similar. Moreover, the interpolations are size-maintaining when the reference and target objects are congruent. In three dimensions, similar objects are interpolated by an affine transformation. We use measurements of the fractional anisotropy of such global affine transformations to demonstrate that our method is generally more shape-preserving than the alternative of interpolating vertices along linear paths irrespective of changes in orientation. In both two and three dimensions we have experimental evidence that when non-shape-preserving deformations are applied to template shapes, the interpolation tends to be visually satisfying, with each intermediate object appearing to belong to the same class of objects as the end points.
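In the plane, the circular-arc motion of one oriented vertex can be sketched as a fractional rigid rotation about the unique fixed point that carries the start pose to the end pose. This is a minimal 2D illustration under that reading of the abstract; the function names are ours.

```python
import numpy as np

def rot(a):
    """2x2 rotation matrix for angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def interpolate_pose(p0, theta0, p1, theta1, t):
    """Interpolate an oriented 2D vertex along a circular arc.

    The rigid motion taking (p0, theta0) to (p1, theta1) is a rotation
    by dtheta about a fixed center c; applying the fraction t of that
    rotation traces the arc. Falls back to linear interpolation when
    the turn angle vanishes (pure translation)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    dtheta = theta1 - theta0
    if abs(dtheta) < 1e-9:
        return (1 - t) * p0 + t * p1, theta0
    R = rot(dtheta)
    # Fixed point: p1 = c + R (p0 - c)  =>  (I - R) c = p1 - R p0.
    c = np.linalg.solve(np.eye(2) - R, p1 - R @ p0)
    p = c + rot(t * dtheta) @ (p0 - c)
    return p, theta0 + t * dtheta
```

Because the vertex moves rigidly about c, distances to the rotation center (and hence local shape, for vertices sharing the motion) are preserved along the path.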
Citations: 5