Latest publications from the 2010 2nd International Conference on Image Processing Theory, Tools and Applications
Co-parent selection for fast region merging in pyramidal image segmentation
M. Stojmenovic, Andrés Solís Montero, A. Nayak
The goal of image segmentation is to partition an image into regions that are internally homogeneous and heterogeneous with respect to neighbouring regions. We build on the pyramid image segmentation work proposed in [3] and [9] with a more efficient method by which children choose parents within the pyramid structure. Instead of considering only four immediate parents as in [3], in [9] each child node considers the neighbours of its candidate parent, and the candidate parents of its neighbouring nodes in the same level. In this paper, we also introduce the concept of a co-parent node for possible region merging at the end of each iteration. The new parents of the former children are co-parent candidates if they are similar. The co-parent is chosen as the one with the largest receptive field among candidate co-parents. Each child then additionally considers one more candidate, the co-parent of its previous parent. Other steps in the algorithm, and its overall layout, were also improved. The new algorithm is tested on a set of images. Our algorithm is fast (it produces segmentations within seconds), correctly segments elongated and large regions, is very simple compared to the plethora of existing algorithms, and appears competitive in segmentation quality with the best publicly available implementations. The major improvement over [9] is that it produces visually appealing results at earlier levels of the pyramid segmentation, not only at the top one.
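The co-parent rule described above (among similar merge candidates, keep the one with the largest receptive field) can be sketched as follows. The `mean` and `field` region attributes and the absolute-difference similarity test are illustrative assumptions, not the paper's exact data structures:

```python
def choose_co_parent(parent, candidates, threshold):
    """Among candidate co-parents whose mean value is close enough to the
    parent's, return the one with the largest receptive field, or None if
    no candidate is similar.  `mean` and `field` are hypothetical fields."""
    similar = [c for c in candidates
               if abs(parent["mean"] - c["mean"]) <= threshold]
    return max(similar, key=lambda c: c["field"]) if similar else None
```

A child node would then append the co-parent of its previous parent to its list of candidate parents before re-selecting.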
DOI: 10.1109/IPTA.2010.5586811 (published 2010-07-07)
Citations: 2
Human visual system based mammogram enhancement and analysis
Yicong Zhou, K. Panetta, S. Agaian
This paper introduces a new mammogram enhancement algorithm using human visual system (HVS) based image decomposition. A new enhancement measure based on the second derivative is also introduced to measure and assess the enhancement performance. Experimental results show that the presented algorithm can improve the visual quality of fine details in mammograms. The HVS-based image decomposition can segment regions and objects from their surroundings. It offers users the flexibility to enhance either the sub-images containing only significant illumination information or all the sub-images of the original mammograms. The algorithm can be used in computer-aided diagnosis systems for breast cancer detection.
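The paper does not give the formula for its second-derivative enhancement measure; as a minimal stand-in, the mean absolute response of a discrete Laplacian captures the idea of scoring enhancement by second-derivative energy:

```python
import numpy as np

def second_derivative_measure(img):
    """Mean absolute discrete-Laplacian response over the image interior --
    a simple second-derivative statistic of the kind such a measure builds on
    (illustrative; not the paper's exact definition)."""
    img = np.asarray(img, dtype=float)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    # crop one pixel of border to discard np.roll's wrap-around artefacts
    return np.abs(lap[1:-1, 1:-1]).mean()
```

A flat image scores zero; sharper fine detail raises the score.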
DOI: 10.1109/IPTA.2010.5586759 (published 2010-07-07)
Citations: 33
Improved two-bit transform-based motion estimation via extension of matching criterion
Changryoul Choi, Jechang Jeong
An improved two-bit transform-based motion estimation algorithm is proposed in this paper. By extending the typical two-bit transform (2BT) matching criterion, the proposed algorithm enhances the motion estimation accuracy at almost the same computational complexity, while preserving the binary matching characteristic. Experimental results show that the proposed algorithm achieves peak signal-to-noise ratio (PSNR) gains of 0.29 dB on average compared with conventional 2BT-based motion estimation.
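A two-bit transform maps each block to two bit-planes so that block matching reduces to XOR-and-count instead of a sum of absolute differences. The thresholds below (block mean, and mean plus one standard deviation) are one common 2BT construction, not necessarily the exact criterion extended in this paper:

```python
import numpy as np

def two_bit_transform(block):
    """Map a block to two bit-planes by thresholding against its mean and
    its mean + one standard deviation (one common 2BT construction)."""
    m, s = block.mean(), block.std()
    return block > m, block > m + s

def nnmp(block_a, block_b):
    """Number of non-matching points: XOR the bit-planes and count mismatches,
    the binary matching criterion used in place of SAD."""
    a1, a2 = two_bit_transform(np.asarray(block_a, float))
    b1, b2 = two_bit_transform(np.asarray(block_b, float))
    return int(np.count_nonzero(a1 ^ b1) + np.count_nonzero(a2 ^ b2))
```

A motion search would evaluate `nnmp` over candidate displacements and keep the minimum.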
DOI: 10.1109/IPTA.2010.5586808 (published 2010-07-07)
Citations: 1
People re-identification by classification of silhouettes based on sparse representation
D. T. Cong, C. Achard, L. Khoudour
The research presented in this paper consists of developing an automatic system for people re-identification across multiple cameras with non-overlapping fields of view. We first propose a robust algorithm for silhouette extraction based on an adaptive spatio-colorimetric background and foreground model coupled with a dynamic decision framework. Such a method is able to deal with the difficult conditions of outdoor environments, where lighting is not stable and distracting motions are very numerous. A robust classification procedure, which exploits the discriminative nature of sparse representation, is then presented to perform the people re-identification task. The complete system is tested on two real data sets recorded in very difficult environments. The experimental results show that the proposed system leads to very satisfactory results compared to other approaches in the literature.
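The classification step assigns a silhouette to the class whose training samples reconstruct it best. The paper uses an l1-sparse code; the sketch below substitutes a per-class least-squares fit to stay dependency-free while keeping the same minimum-residual decision rule:

```python
import numpy as np

def residual_classify(dicts_per_class, x):
    """Assign x to the class whose training dictionary (columns = samples)
    reconstructs it with the smallest residual.  A least-squares stand-in
    for the paper's l1 sparse coding."""
    best_label, best_res = None, np.inf
    for label, D in dicts_per_class.items():
        coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
        res = float(np.linalg.norm(x - D @ coeffs))
        if res < best_res:
            best_label, best_res = label, res
    return best_label
```

With a true l1 solver the coefficients would also be sparse across classes, which sharpens the residual gap.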
DOI: 10.1109/IPTA.2010.5586809 (published 2010-07-07)
Citations: 22
Detecting potential human activities using coherent change detection
N. Milisavljevic, D. Closson, I. Bloch
This paper describes detection and interpretation of temporal changes in an area of interest using coherent change detection in repeat-pass Synthetic Aperture Radar imagery, with the main goal of detecting subtle scene changes such as potential human activities. Possibilities of introducing knowledge sources in order to improve the final result are also presented.
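Coherent change detection rests on the sample coherence of two co-registered complex SAR acquisitions: values near 1 mean the scene is unchanged, while a local drop flags subtle disturbance. A minimal window estimator (a standard formula, not code from the paper):

```python
import numpy as np

def coherence(s1, s2, eps=1e-12):
    """Sample coherence of two co-registered complex SAR windows:
    |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2), in [0, 1]."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return float(num / (den + eps))
```

Sliding this estimator over the repeat-pass pair produces the coherence map that the change interpretation works from.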
DOI: 10.1109/IPTA.2010.5586772 (published 2010-07-07)
Citations: 8
Covariance-based adaptive deinterlacing method using edge map
Sang-Jun Park, Gwanggil Jeon, Jechang Jeong
The purpose of this article is to discuss deinterlacing in a computationally constrained and varied environment. The proposed covariance-based adaptive deinterlacing method using edge map (CADEM) consists of two methods: the modified edge-based line averaging (MELA) method for plain regions and the covariance-based adaptive deinterlacing (CAD) method along the edges. The proposed CADEM uses the edge map of the interlaced input image to assign the appropriate method between the MELA and the modified CAD (MCAD) methods. We first introduce the MCAD method. The principal idea of the MCAD is based on the correspondence between the high-resolution covariance and the low-resolution covariance. The MCAD estimates the local covariance coefficients from an interlaced image using Wiener filtering theory and then uses these optimal minimum mean squared error interpolation coefficients to obtain a deinterlaced image. However, the MCAD method, though more robust than most known methods, is not very fast compared with the others. To alleviate this issue, we propose an adaptive selection approach rather than using only the MCAD algorithm: a hybrid approach that switches between the MELA and MCAD methods to reduce the overall computational load. A reliable condition for switching between the schemes is established by the edge map, a binary image.
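The MELA path interpolates each missing line by averaging along the direction of least difference between the lines above and below. A minimal edge-based line-averaging sketch (the "modified" details of MELA are not specified in the abstract, so this shows the classic three-direction rule):

```python
import numpy as np

def ela_interpolate(above, below):
    """Interpolate one missing scan line: at each pixel, pick the direction
    (vertical, 45-degree, 135-degree) with the smallest absolute difference
    between the line above and the line below, and average along it."""
    a = np.asarray(above, float); b = np.asarray(below, float)
    out = (a + b) / 2.0                       # plain average at the borders
    for i in range(1, len(a) - 1):
        pairs = [(abs(a[i] - b[i]),     (a[i] + b[i]) / 2),      # vertical (preferred on ties)
                 (abs(a[i-1] - b[i+1]), (a[i-1] + b[i+1]) / 2),  # 45-degree edge
                 (abs(a[i+1] - b[i-1]), (a[i+1] + b[i-1]) / 2)]  # 135-degree edge
        out[i] = min(pairs, key=lambda p: p[0])[1]
    return out
```

The CADEM switch would route plain-region pixels here and edge pixels to the MCAD interpolator.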
DOI: 10.1109/IPTA.2010.5586741 (published 2010-07-07)
Citations: 17
Intravascular Ultrasound image segmentation: A helical active contour method
M. Jourdain, J. Meunier, J. Sequeira, G. Cloutier, J. Tardif
During an Intravascular Ultrasound (IVUS) examination, a catheter with an ultrasound transducer is introduced into the body through a blood vessel and then pulled back to image a sequence of vessel cross-sections. An IVUS exam results in several hundred noisy images that are often hard to analyze. Hence, developing powerful automatic analysis tools would facilitate the interpretation of structures in IVUS images. In this paper we present a new IVUS segmentation method based on an original active contour model. The contour has a helical geometry and evolves like a spiral shape that is distorted until it reaches the artery lumen boundaries. Despite the use of a simple statistical model and a very sparse initialization of the snake, the algorithm converges to satisfying solutions that can be compared with much more sophisticated segmentation methods. To validate the method, we compared our results to manually traced contours and obtained a Hausdorff distance < 0.61 mm (n = 540 images), indicating the robustness of the method.
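The Hausdorff distance used for validation is the larger of the two directed maximum point-to-set distances between contours, computable directly from the pairwise distance matrix:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (e.g. contours):
    max over each set of the distance from a point to the nearest point of
    the other set."""
    A = np.asarray(A, float); B = np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all pairwise distances
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Applied to an automatic contour and its manual tracing, this yields the per-image error the abstract reports.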
DOI: 10.1109/IPTA.2010.5586803 (published 2010-07-07)
Citations: 8
Multi-level visual alphabets
Menno Israël, J. Schaar, E. V. D. Broek, M. D. Uyl, P. V. D. Putten
A central debate in visual perception theory is the argument for indirect versus direct perception; i.e., the use of intermediate, abstract, and hierarchical representations versus direct semantic interpretation of images through interaction with the outside world. We present a content-based representation that combines both approaches. The previously developed Visual Alphabet method is extended with a hierarchy of representations, each level feeding into the next one, but based on features that are not abstract but directly relevant to the task at hand. Explorative benchmark experiments are carried out on face images to investigate and explain the impact of key parameters such as pattern size, number of prototypes, and the distance measures used. Results show that adding a middle layer, which encodes the spatial co-occurrence of lower-level pattern prototypes, improves performance.
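One way to realise a middle layer that encodes "spatial co-occurrence of lower-level pattern prototypes" is a histogram over prototype labels at adjacent grid positions; this sketch is illustrative, not the paper's exact encoding:

```python
import numpy as np

def cooccurrence_histogram(labels, n_proto):
    """Histogram of prototype labels co-occurring at horizontally and
    vertically adjacent grid cells.  `labels` is a 2D grid of prototype
    indices (one per image patch)."""
    L = np.asarray(labels)
    H = np.zeros((n_proto, n_proto), dtype=int)
    for a, b in zip(L[:, :-1].ravel(), L[:, 1:].ravel()):   # horizontal pairs
        H[a, b] += 1
    for a, b in zip(L[:-1, :].ravel(), L[1:, :].ravel()):   # vertical pairs
        H[a, b] += 1
    return H
```

Flattened, such a matrix becomes a feature vector that the next level of the hierarchy can cluster or classify.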
DOI: 10.1109/IPTA.2010.5586757 (published 2010-07-07)
Citations: 1
An algorithm for iris extraction
Tomáš Fabián, Jan Gaura, Petr Kotas
In this paper, we describe a new method for detecting the iris in digital images. Our method is simple yet effective. It takes a statistical point of view when searching for the limbic boundary and a more analytical approach when detecting the pupillary boundary. It can be described in three simple steps: first, the bright point inside the pupil is detected; second, the outer limbic boundary is found via statistical measurements of outer boundary points; and third, inner boundary points are found by maximizing a defined cost function. The performance of the presented method is evaluated on a series of iris close-up images and compared with the traditional Hough method.
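The abstract does not define the cost function for the pupillary boundary; a generic radial cost in the same spirit scores each candidate radius by the jump in mean intensity along the circle around the detected bright seed point:

```python
import numpy as np

def best_radius(img, cx, cy, radii, n_angles=64):
    """Return the radius (from `radii`) at which the mean intensity along a
    circle centred at (cx, cy) jumps the most as the radius grows by one
    pixel -- a generic radial cost maximization, not the paper's exact one."""
    img = np.asarray(img, float)
    th = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    def ring_mean(r):
        xs = np.clip(np.round(cx + r * np.cos(th)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + r * np.sin(th)).astype(int), 0, img.shape[0] - 1)
        return img[ys, xs].mean()
    jumps = [ring_mean(r + 1) - ring_mean(r) for r in radii]
    return radii[int(np.argmax(jumps))]
```

On a dark pupil against a brighter iris, the maximizing radius lands on the pupillary boundary.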
DOI: 10.1109/IPTA.2010.5586756 (published 2010-07-07)
Citations: 4
Support vector machine fusion of multisensor imagery in tropical ecosystems
R. Pouteau, B. Stoll, S. Chabrier
One of the major stakes of image fusion is being able to process the most complex images at the finest possible integration level and with the most reliable accuracy. The use of support vector machine (SVM) fusion for the classification of multisensor images representing a complex tropical ecosystem is investigated. First, SVMs are trained individually on a set of complementary sources: multispectral and synthetic aperture radar (SAR) images and a digital elevation model (DEM). Then an SVM-based decision fusion is performed on the three sources. SVM fusion outperforms all monosource classifications, with the same accuracy as the majority of other comparable studies on cultural landscapes. SVM-based hybrid consensus classification not only balances successful and misclassified results, it also uses misclassification patterns as information. Such a successful approach is partially due to the integration of DEM-extracted indices, which are relevant to land cover mapping in non-cultural and topographically complex landscapes.
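Decision fusion here means training one SVM per source and combining their outputs. The sketch below uses a tiny Pegasos-style linear SVM and sums the signed margins across sources; the paper's kernel, features, and exact fusion rule are not specified in the abstract, so every detail of this implementation is an assumption:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=300):
    """Minimal Pegasos-style linear SVM (labels in {-1, +1}); a stand-in
    for the per-source SVMs, since the abstract fixes no solver."""
    w = np.zeros(X.shape[1]); b = 0.0; t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)           # decaying step size
            if yi * (xi @ w + b) < 1:       # hinge loss violated
                w = (1 - eta * lam) * w + eta * yi * xi
                b += eta * yi
            else:                           # regularisation shrinkage only
                w = (1 - eta * lam) * w
    return w, b

def svm_decision_fusion(sources_train, y, sources_test):
    """Decision-level fusion: sum the signed margins of one SVM per source
    (e.g. multispectral, SAR, DEM features) and take the sign."""
    y = np.asarray(y, float)
    score = np.zeros(len(sources_test[0]))
    for Xtr, Xte in zip(sources_train, sources_test):
        w, b = train_linear_svm(np.asarray(Xtr, float), y)
        score += np.asarray(Xte, float) @ w + b
    return np.sign(score)
```

Summing margins rather than hard votes lets a confident source outweigh an uncertain one, which is one common rationale for decision-level fusion.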
DOI: 10.1109/IPTA.2010.5586788 (published 2010-07-07)
Citations: 1