
Latest publications: 2010 2nd International Conference on Image Processing Theory, Tools and Applications

Improved two-bit transform-based motion estimation via extension of matching criterion
Changryoul Choi, Jechang Jeong
An improved two-bit transform-based motion estimation algorithm is proposed in this paper. By extending the typical two-bit transform (2BT) matching criterion, the proposed algorithm enhances motion estimation accuracy at almost the same computational complexity, while preserving the binary matching characteristic. Experimental results show that the proposed algorithm achieves peak signal-to-noise ratio (PSNR) gains of 0.29 dB on average compared with conventional 2BT-based motion estimation.
DOI: 10.1109/IPTA.2010.5586808 (published 2010-07-07)
Citations: 1
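As an illustration of the binary matching the abstract refers to, here is a minimal sketch of a two-bit transform and its number-of-non-matching-points (NNMP) criterion. The thresholds used here (block mean and mean plus one standard deviation) are a common simplification, not necessarily the paper's exact 2BT definition:

```python
import numpy as np

def two_bit_transform(block):
    """Simplified two-bit transform: threshold each pixel against the
    block mean and mean + std to produce two binary bit-planes.
    (Illustrative thresholds; the paper's exact variant may differ.)"""
    mu = block.mean()
    sigma = block.std()
    b0 = (block > mu).astype(np.uint8)
    b1 = (block > mu + sigma).astype(np.uint8)
    return b0, b1

def nnmp(cur, ref):
    """Number of non-matching points between two 2BT representations,
    computed with XOR so the matching stays purely binary."""
    c0, c1 = two_bit_transform(cur)
    r0, r1 = two_bit_transform(ref)
    return int(np.sum(c0 ^ r0) + np.sum(c1 ^ r1))
```

A block search would evaluate `nnmp` over candidate displacements and keep the minimum, exactly as SAD-based search does but with bit operations.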
Improvement of the space resolution of the optical remote sensing image by the principle of CCD imaging
Qing Liu, Sun'an Wang, Xiaohui Zhang, Yun Hou
Based on the characteristics of CCD image formation, the internal principle of image formation is analyzed, and the loss from charge transfer is calculated with the Shockley-Read-Hall equation, from which the distribution function of the transferred charge is reconstructed. A rational polynomial interpolation algorithm is then used to determine unknown pixel values from adjacent pixels unaffected by charge-transfer loss, enhancing the image. It is a self-adaptive interpolation algorithm in which the interpolation function is adjusted automatically according to the electric potential difference between adjoining pixels and their energy bands, so that the image can be magnified adaptively. Tests on remote sensing images show that the algorithm not only improves image quality but also preserves clear edge and contour information, making the processed images easier to inspect with the naked eye.
DOI: 10.1109/IPTA.2010.5586774 (published 2010-07-07)
Citations: 1
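The rational polynomial interpolation step can be illustrated with a minimal sketch: fit a first-order rational function through three neighboring samples and evaluate it at the unknown pixel position. The function form and sample count are assumptions for illustration, since the abstract does not specify them:

```python
import numpy as np

def rational_interp(xs, ys, x):
    """Fit f(t) = (a + b*t) / (1 + c*t) through three known samples and
    evaluate at x. Linearizing y_i*(1 + c*x_i) = a + b*x_i gives a
    3x3 linear system in (a, b, c). Illustrative only; the paper's
    exact interpolant is not specified in the abstract."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    A = np.column_stack([np.ones(3), xs, -ys * xs])
    a, b, c = np.linalg.solve(A, ys)
    return (a + b * x) / (1 + c * x)
```

Unlike a quadratic polynomial, such a rational interpolant can track sharp intensity transitions with less overshoot, which is the usual motivation for rational over polynomial interpolation near edges.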
Human visual system based mammogram enhancement and analysis
Yicong Zhou, K. Panetta, S. Agaian
This paper introduces a new mammogram enhancement algorithm using human visual system (HVS)-based image decomposition. A new enhancement measure based on the second derivative is also introduced to assess enhancement performance. Experimental results show that the presented algorithm can improve the visual quality of fine details in mammograms. The HVS-based image decomposition can segment regions and objects from their surroundings, giving users the flexibility to enhance either only the sub-images containing significant illumination information or all sub-images of the original mammogram. The algorithm can be used in computer-aided diagnosis systems for breast cancer detection.
DOI: 10.1109/IPTA.2010.5586759 (published 2010-07-07)
Citations: 33
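A second-derivative enhancement measure of the kind the abstract mentions can be sketched with a discrete Laplacian; the paper's exact formula is not given here, so this is only a stand-in that captures the idea (more second-derivative energy after enhancement means more visible fine detail):

```python
import numpy as np

def laplacian_measure(img):
    """Mean absolute discrete Laplacian (second derivative) of an image,
    used as a simple proxy for an enhancement measure: a well-enhanced
    mammogram should score higher than its unprocessed version."""
    img = img.astype(float)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(np.abs(lap).mean())
```

Because the Laplacian is linear, doubling local contrast doubles the measure, which makes before/after comparisons straightforward.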
People re-identification by classification of silhouettes based on sparse representation
D. T. Cong, C. Achard, L. Khoudour
The research presented in this paper consists of developing an automatic system for people re-identification across multiple cameras with non-overlapping fields of view. We first propose a robust silhouette-extraction algorithm based on an adaptive spatio-colorimetric background and foreground model coupled with a dynamic decision framework. Such a method can cope with the difficult conditions of outdoor environments, where lighting is unstable and distracting motions are numerous. A robust classification procedure, which exploits the discriminative nature of sparse representation, is then presented to perform the people re-identification task. The complete system is tested on two real data sets recorded in very difficult environments. The experimental results show that the proposed system gives very satisfactory results compared with other approaches in the literature.
DOI: 10.1109/IPTA.2010.5586809 (published 2010-07-07)
Citations: 22
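The residual-based decision rule behind sparse-representation classification can be sketched as follows. The paper solves a sparse (l1) coding problem over one joint dictionary; to keep the sketch dependency-free this version uses per-class least squares, which preserves the minimum-reconstruction-residual decision but not the sparsity:

```python
import numpy as np

def src_classify(x, dicts):
    """Assign x to the class whose training dictionary reconstructs it
    with the smallest residual. dicts[i] is a (features, samples) matrix
    of training silhouettes descriptors for class i. (Least-squares
    stand-in for the paper's sparse solver.)"""
    residuals = []
    for D in dicts:
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ coef))
    return int(np.argmin(residuals))
```

With an l1 solver the coefficients concentrate on a few training samples of the correct identity, which is what makes the residual gap between classes discriminative.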
Detecting potential human activities using coherent change detection
N. Milisavljevic, D. Closson, I. Bloch
This paper describes detection and interpretation of temporal changes in an area of interest using coherent change detection in repeat-pass Synthetic Aperture Radar imagery, with the main goal of detecting subtle scene changes such as potential human activities. Possibilities of introducing knowledge sources in order to improve the final result are also presented.
DOI: 10.1109/IPTA.2010.5586772 (published 2010-07-07)
Citations: 8
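Coherent change detection rests on the sample coherence between two co-registered complex SAR acquisitions; a minimal sketch of the standard estimator (window size here is an arbitrary choice):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def coherence(s1, s2, win=5):
    """Sample coherence magnitude between two co-registered complex SAR
    images, estimated over a sliding win x win window. Values near 1
    mean the scene is unchanged; low values flag subtle change, e.g.
    surface disturbance from human activity."""
    def boxsum(a):
        v = sliding_window_view(a, (win, win))
        return v.sum(axis=(-2, -1))
    num = np.abs(boxsum(s1 * np.conj(s2)))
    den = np.sqrt(boxsum(np.abs(s1) ** 2) * boxsum(np.abs(s2) ** 2))
    return num / np.maximum(den, 1e-12)
```

Thresholding the resulting coherence map (low coherence = candidate change) is the usual first step before any knowledge-based interpretation.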
Support vector machine fusion of multisensor imagery in tropical ecosystems
R. Pouteau, B. Stoll, S. Chabrier
One of the major challenges in image fusion is processing the most complex images at the finest possible integration level and with the most reliable accuracy. The use of support vector machine (SVM) fusion for the classification of multisensor images representing a complex tropical ecosystem is investigated. First, SVMs are trained individually on a set of complementary sources: multispectral images, synthetic aperture radar (SAR) images and a digital elevation model (DEM). Then an SVM-based decision fusion is performed on the three sources. SVM fusion outperforms all monosource classifications, producing results with accuracy comparable to the majority of other studies on cultural landscapes. SVM-based hybrid consensus classification not only balances correctly and incorrectly classified results, it also uses misclassification patterns as information. This success is partly due to the integration of DEM-extracted indices, which are relevant to land cover mapping in non-cultural, topographically complex landscapes.
DOI: 10.1109/IPTA.2010.5586788 (published 2010-07-07)
Citations: 1
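The decision-fusion structure can be sketched simply. The paper feeds the per-source SVM decision values into a second SVM; this dependency-free stand-in combines the stacked per-source class scores with a weighted sum, which shows the same stacking of monosource decisions into one fused decision:

```python
import numpy as np

def fuse_decisions(scores, weights=None):
    """Decision-level fusion sketch: scores is (n_sources, n_classes),
    one row of class scores per source (multispectral, SAR, DEM).
    Combine rows with an optionally weighted sum and return the fused
    class index. (The paper uses a second SVM as the combiner.)"""
    scores = np.asarray(scores, float)
    w = np.ones(len(scores)) if weights is None else np.asarray(weights, float)
    fused = np.tensordot(w, scores, axes=1)
    return int(np.argmax(fused))
```

Using a trained combiner instead of fixed weights is what lets the fusion exploit systematic misclassification patterns of individual sources.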
Eye tracking for foveation video coding and simple scene description
Mauritz Panggabean, Stig Salater, L. A. Rønningen
This paper investigates the use of eye tracking to define the important objects in a scene, i.e. those on which viewers focus. Such objects, for example human faces, are rich in information. They should be coded with higher quality than unfocused ones, as in foveated video coding, for which eye tracking can serve as a preliminary step. We show that human gaze points often cover only a very small area of the screen, even less than 10%, providing great opportunities for bit-rate savings. Gaze points from a number of subjects watching a scene can be represented compactly in the proposed box diagram, another contribution of this work, which gives a simple description of the scene in a single diagram.
DOI: 10.1109/IPTA.2010.5586762 (published 2010-07-07)
Citations: 2
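The screen-coverage statistic the abstract reports (gaze often covering under 10% of the screen) can be computed with a small sketch; the foveal disc radius is an assumed parameter, not a value from the paper:

```python
import numpy as np

def gaze_coverage(points, shape, radius):
    """Fraction of the screen covered by foveal discs around gaze
    points. points: iterable of (x, y) pixel coordinates;
    shape: (height, width); radius: assumed foveal radius in pixels."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    for x, y in points:
        mask |= (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    return float(mask.mean())
```

A coverage well below 1 is precisely what motivates spending bits on the covered region and coarsening the rest in foveated coding.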
Hyperspectral image compression based on Tucker Decomposition and Discrete Cosine Transform
A. Karami, M. Yazdi, A. Z. Asli
In this paper, an efficient method for hyperspectral image compression based on the Tucker Decomposition (TD) and the three-dimensional Discrete Cosine Transform (3D-DCT) is proposed. The core idea behind the proposed technique is to apply TD to the 3D-DCT coefficients of the hyperspectral image, exploiting not only the redundancies between bands but also the spatial correlations within each band. As simulation results on real hyperspectral images demonstrate, this leads to a remarkable compression ratio with improved quality.
DOI: 10.1109/IPTA.2010.5586739 (published 2010-07-07)
Citations: 15
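The two ingredients can be sketched with numpy: a separable 3-D DCT via an orthonormal DCT-II matrix applied along each mode, followed by a Tucker decomposition of the coefficient tensor via truncated HOSVD. This is a generic sketch of both transforms, not the paper's rank selection or quantization:

```python
import numpy as np

def mode_mult(T, M, axis):
    """Mode-n product: multiply tensor T by matrix M along `axis`."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, axis)), 0, axis)

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so C @ x is the 1-D DCT of x."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def dct3(cube):
    """Separable 3-D DCT along every mode of the datacube."""
    out = cube.astype(float)
    for axis in range(3):
        out = mode_mult(out, dct_matrix(out.shape[axis]), axis)
    return out

def tucker_hosvd(T, ranks):
    """Tucker decomposition via truncated HOSVD: one factor per mode
    from the leading left singular vectors of each unfolding, then the
    core tensor by projecting T onto those factors. Compression comes
    from choosing ranks smaller than the tensor dimensions."""
    factors = []
    for axis, r in enumerate(ranks):
        unfold = np.moveaxis(T, axis, 0).reshape(T.shape[axis], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for axis, U in enumerate(factors):
        core = mode_mult(core, U.T, axis)
    return core, factors
```

Reconstruction multiplies the core back by each factor; with full ranks it is exact, and truncating the ranks trades reconstruction error for compression.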
Covariance-based adaptive deinterlacing method using edge map
Sang-Jun Park, Gwanggil Jeon, Jechang Jeong
The purpose of this article is to discuss deinterlacing in a computationally constrained and varied environment. The proposed covariance-based adaptive deinterlacing method using an edge map (CADEM) combines two methods: the modified edge-based line averaging (MELA) method for plain regions and the covariance-based adaptive deinterlacing (CAD) method along edges. CADEM uses the edge map of the interlaced input image to choose between the MELA and modified CAD (MCAD) methods. We first introduce the MCAD method. Its principal idea is the correspondence between the high-resolution covariance and the low-resolution covariance. The MCAD estimates the local covariance coefficients from an interlaced image using Wiener filtering theory and then uses these optimal minimum mean squared error interpolation coefficients to obtain a deinterlaced image. However, although the MCAD method is more robust than most known methods, it is not very fast compared with the others. To alleviate this issue, we propose an adaptive selection approach rather than using only the MCAD algorithm: a hybrid scheme that switches between MELA and MCAD to reduce the overall computational load. A reliable switching condition is established from a binary edge map. Computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
DOI: 10.1109/IPTA.2010.5586741 (published 2010-07-07)
Citations: 17
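The building block that MELA refines, plain edge-based line averaging (ELA), can be sketched in a few lines: for each missing pixel, pick the direction (left diagonal, vertical, right diagonal) with the smallest absolute difference between the lines above and below, and average along it. This is the classic ELA, not the paper's modified version:

```python
import numpy as np

def ela_interpolate(above, below):
    """Edge-based line averaging for one missing scan line.
    above/below: the existing lines adjacent to the missing one."""
    a = np.pad(above.astype(float), 1, mode='edge')
    b = np.pad(below.astype(float), 1, mode='edge')
    out = np.empty(len(above))
    for i in range(len(above)):
        j = i + 1  # index into the padded lines
        # direction d in {-1, 0, +1}: diagonal-left, vertical, diagonal-right
        diffs = {d: abs(a[j + d] - b[j - d]) for d in (-1, 0, 1)}
        d = min(diffs, key=diffs.get)
        out[i] = (a[j + d] + b[j - d]) / 2.0
    return out
```

CADEM's contribution is to run this cheap interpolator only in plain regions (per the edge map) and reserve the costlier covariance-based MCAD interpolation for edge regions.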
An algorithm for iris extraction
Tomáš Fabián, Jan Gaura, Petr Kotas
In this paper, we describe a new method for detecting the iris in digital images. Our method is simple yet effective. It takes a statistical approach when searching for the limbic boundary and a more analytical approach when detecting the pupillary boundary. It can be described in three simple steps: first, the bright point inside the pupil is detected; second, the outer limbic boundary is found via statistical measurements of outer boundary points; and third, the inner boundary points are found by maximizing a defined cost function. The performance of the presented method is evaluated on a series of iris close-up images and compared with the traditional Hough method.
DOI: 10.1109/IPTA.2010.5586756 (published 2010-07-07)
Citations: 4
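The cost-function-maximization step for the pupillary boundary can be illustrated with a Daugman-style radial criterion: sample circles of increasing radius around the detected bright point and pick the radius where the mean circle intensity jumps the most. The paper defines its own cost function; this is only an illustrative stand-in:

```python
import numpy as np

def pupil_radius(img, cx, cy, radii):
    """Estimate the pupillary boundary radius by maximizing a simple
    cost: the jump in mean intensity between consecutive circles
    centered on (cx, cy). (Illustrative criterion, not the paper's.)"""
    theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    means = []
    for r in radii:
        xs = np.clip(np.rint(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.rint(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        means.append(float(img[ys, xs].mean()))
    jumps = np.diff(means)
    return radii[int(np.argmax(jumps)) + 1]
```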