
2010 2nd International Conference on Image Processing Theory, Tools and Applications: Latest Publications

A Markov Random Field description of fuzzy color segmentation
Angela D'Angelo, J. Dugelay
Image segmentation is a fundamental task in many computer vision applications. In this paper, we describe a new unsupervised color image segmentation algorithm that exploits the color characteristics of the image. The introduced system is based on a color quantization of the image in the Lab color space using the popular eleven culture colors, in order to avoid the well-known problem of oversegmentation. To partially overcome the problem of highlights and shadows in the image, one of the main aspects affecting the performance of color segmentation systems, the proposed approach uses a fuzzy classifier trained on an ad-hoc designed dataset. A Markov Random Field description of the full algorithm is moreover provided, which helps to remove persistent errors through the use of an iterative strategy. The experimental results show the good performance of the proposed approach, which is comparable to state-of-the-art systems even though it relies only on the color information of the image.
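The culture-colour quantization step above can be illustrated with a hard nearest-centroid baseline. A minimal sketch, assuming purely illustrative Lab centroids for the eleven culture colours (the paper's actual classifier is fuzzy and trained on an ad-hoc dataset):

```python
import math

# Illustrative Lab centroids for the eleven culture colours; these values
# are assumptions for the sketch, not the ones used in the paper.
CULTURE_COLOURS = {
    "black":  (10, 0, 0),    "white":  (95, 0, 0),    "grey":   (55, 0, 0),
    "red":    (50, 65, 50),  "green":  (50, -50, 45), "yellow": (85, -5, 80),
    "blue":   (40, 10, -55), "brown":  (35, 20, 30),  "purple": (35, 45, -35),
    "pink":   (75, 30, 5),   "orange": (65, 40, 60),
}

def quantize_lab(pixel):
    """Assign a Lab pixel to the nearest culture colour (Euclidean distance)."""
    return min(CULTURE_COLOURS,
               key=lambda name: math.dist(pixel, CULTURE_COLOURS[name]))
```

A fuzzy classifier, as used in the paper, would replace the hard argmin with graded memberships derived from such distances, which is what allows highlight and shadow pixels to be handled softly.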
DOI: https://doi.org/10.1109/IPTA.2010.5586796 (published 2010-07-07)
Cited by: 3
The computation of the Bhattacharyya distance between histograms without histograms
Séverine Dubuisson
In this paper we present a new method for fast histogram computation and its extension to bin-to-bin histogram distance computation. The idea consists in using the information of spatial differences between images, or between regions of images (a current one and a reference one), and encoding it into a specific data structure: a tree. The Bhattacharyya distance between two histograms is then computed using an incremental approach that avoids recomputing histograms: we only need the histograms of the reference image, and the spatial differences between the reference and the current image, to compute this distance via an updating process. We compare our approach with the well-known Integral Histogram approach, and obtain better results in terms of processing time while reducing the memory footprint. We show, theoretically and with experimental results, the superiority of our approach in many cases. Finally, we demonstrate the advantages of our approach on a real visual tracking application using a particle filter framework, by improving the computation time of its correction step.
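For reference, the quantity that the incremental scheme maintains is the standard Bhattacharyya distance between normalised histograms; a direct, non-incremental evaluation looks like this (the tree-based update of the paper is not reproduced here):

```python
import math

def bhattacharyya_distance(h1, h2):
    """Bhattacharyya distance D = -ln(BC) between two normalised histograms,
    where BC = sum_i sqrt(h1[i] * h2[i]) is the Bhattacharyya coefficient.
    D = 0 for identical histograms; D = +inf for disjoint supports."""
    bc = sum(math.sqrt(p * q) for p, q in zip(h1, h2))
    return -math.log(bc) if bc > 0.0 else math.inf
```

The paper's contribution is precisely to keep this value up to date from pixel-level differences instead of rebuilding `h1` and `h2` for every frame.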
DOI: https://doi.org/10.1109/IPTA.2010.5586745 (published 2010-07-07)
Cited by: 31
Image database categorization using robust modeling of finite Generalized Dirichlet mixture
M. Ismail, H. Frigui
We propose a novel image database categorization approach using a possibilistic clustering algorithm. The proposed algorithm is based on robust data modeling using the Generalized Dirichlet (GD) finite mixture and generates two types of membership degrees. The first one is a posterior probability that indicates the degree to which the point fits the estimated distribution. The second membership represents the degree of “typicality” and is used to identify and discard noise points. The algorithm minimizes one objective function to optimize the GD mixture parameters and the possibilistic membership values. This optimization is done iteratively by dynamically updating the density mixture parameters and the membership values in each iteration. The performance of the proposed algorithm is illustrated by using it to categorize a collection of 500 color images. The results are compared with those obtained by the Fuzzy C-means algorithm.
DOI: https://doi.org/10.1109/IPTA.2010.5586778 (published 2010-07-07)
Cited by: 0
Empirical mode decomposition based visual enhancement of underwater images
A. Çelebi, S. Ertürk
Most underwater vehicles are nowadays equipped with vision sensors. However, underwater images captured using optic cameras can be of poor quality due to underwater lighting conditions. In such cases it is necessary to apply image enhancement methods to underwater images in order to improve visual quality as well as interpretability. In this paper, an Empirical Mode Decomposition (EMD) based image enhancement algorithm is applied to underwater images for this purpose. EMD has been shown in the literature to be particularly suitable for non-linear and non-stationary signals, and therefore proves very useful in real-life applications. In the approach presented in this paper, each R, G and B channel of the color underwater image is first separately decomposed into Intrinsic Mode Functions (IMFs) using EMD. Then, the enhanced image is constructed by combining the IMFs of each channel with different weights, so as to obtain a new image with increased visual quality. It is shown that the proposed approach provides superior results compared to conventional image enhancement methods such as contrast stretching.
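The recombination step described above reduces to a per-channel weighted sum of IMFs. A minimal sketch, assuming the IMFs (including the final residue) have already been produced by an EMD routine that is not shown:

```python
def enhance_channel(imfs, weights):
    """Recombine the IMFs of one colour channel with per-IMF weights.

    With all weights equal to 1 the original channel is recovered (EMD
    components sum to the signal); raising the weights of the first,
    high-frequency IMFs emphasises fine detail.
    """
    length = len(imfs[0])
    return [sum(w * imf[i] for w, imf in zip(weights, imfs))
            for i in range(length)]
```

How the weights are chosen per channel is the enhancement design decision the paper addresses; this sketch only fixes the reconstruction arithmetic.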
DOI: https://doi.org/10.1109/IPTA.2010.5586758 (published 2010-07-07)
Cited by: 13
Watermarking ancient documents schema using wavelet packets and convolutional code
M. Maatouk, Majd Bellaj, N. Amara
Ancient documents are of major importance in the history of every people and every nation. These documents contain important information that many people need. As a consequence, it is necessary to preserve these documents in order to build a digital library in the service of the public. Digitizing these documents permits simultaneous access to the same documents and provides the possibility of reproducing documents that, most of the time, exist in only one copy. This task is considered an important step in the research domain. In fact, much research has been invested in the processing, compression, segmentation and indexing of these documents. Nevertheless, in digital form there is a threat of these documents being hacked, stored, copied, modified and finally diffused illegally, without any loss of quality. As a consequence, we face the problem of losing intellectual property because of the lack of methods for protecting the data. In order to prevent such fraud, watermarking represents a promising method to protect these images. In this context, our work essentially forms part of the protection of ancient documents. In this paper, we propose a method for watermarking ancient documents. This method is based on the Wavelet Packet Transform (WPT); it provides good robustness against different attacks, such as signal processing operations (noise, filtering and compression), together with good signature invisibility.
DOI: https://doi.org/10.1109/IPTA.2010.5586787 (published 2010-07-07)
Cited by: 3
Mojette reconstruction from noisy projections
B. Recur, P. Desbarats, J. Domenger
Apart from the usual methods based on the Radon theorem, the Mojette transform proposes a specific algorithm called Corner Based Inversion (CBI) to reconstruct an image from its projections. Contrary to other transforms, it offers two interesting properties. First, the acquisition follows discrete image geometry and resolves the well-known irregular sampling problem. Second, it updates projection values during the reconstruction such that the sinogram contains only data for not yet reconstructed pixels. Unfortunately, the CBI algorithm is noise sensitive and reconstruction from corrupted data fails. In this paper, we develop a new noise-robust CBI algorithm based on data redundancy and noise modelling in the projections. This algorithm is applied in discrete tomography from a Radon acquisition. Reconstructed image results are discussed and applications in usual tomography are detailed.
DOI: https://doi.org/10.1109/IPTA.2010.5586740 (published 2010-07-07)
Cited by: 8
An efficient vision system to measure granule velocity and mass flow distribution in fertiliser centrifugal spreading
S. Villette, C. Gée, E. Piron, R. Martin, D. Miclet, M. Paindavoine
This article reports a new approach to measure the velocity and the mass flow distribution of granules in the vicinity of a spinning disc, in order to improve fertiliser spreading in agriculture. In this approach, the acquisition system consists of a digital camera placed above the disc so that its view axis corresponds to the disc axle. This provides useful geometrical properties for developing simple and efficient image processing. A specific Hough transform is implemented to extract relevant data (polar coordinates of granule trajectories with respect to the disc centre) from granule streaks deduced from “motion-blurred images”. The Hough space directly provides the mean radius of the polar coordinates of the trajectories, from which the mean value of the outlet velocity is deduced. The Hough space also provides the angular distribution of the trajectories, from which an estimation of the mass flow distribution is deduced. Results are compared with those obtained with reference methods.
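The two coordinates accumulated in the Hough space are the polar parameters of each streak's supporting line relative to the disc centre. Assuming the centre sits at the origin and a streak is summarised by two endpoints (a simplification of the paper's blur-based extraction), the underlying geometry is:

```python
import math

def streak_polar(p1, p2):
    """Polar parameters (r, theta) of the line through two streak endpoints,
    taken relative to the disc centre at the origin: r is the distance from
    the centre to the line, theta the angle of the perpendicular foot."""
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = x2 - x1, y2 - y1
    # Parameter of the foot of the perpendicular from the origin.
    t = -(x1 * dx + y1 * dy) / (dx * dx + dy * dy)
    fx, fy = x1 + t * dx, y1 + t * dy
    return math.hypot(fx, fy), math.atan2(fy, fx)
```

Accumulating these (r, theta) pairs over all streaks yields the mean radius and the angular distribution that the abstract converts into outlet velocity and mass flow estimates.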
DOI: https://doi.org/10.1109/IPTA.2010.5586738 (published 2010-07-07)
Cited by: 0
An adaptive algorithm for phase retrieval from high intensity images
Gilad Avidor, E. Gur
In this paper, we present an adaptive Gerchberg-Saxton algorithm for phase retrieval. One of the drawbacks of the original Gerchberg-Saxton algorithm is the poor results it yields for very bright images. In this paper we demonstrate how a dynamic phase retrieval approach can improve the correlation between the required image and the reconstructed image by up to 10 percent. The paper gives explicit explanations of the principle behind the algorithm and shows experimental results supporting the dynamic approach.
DOI: https://doi.org/10.1109/IPTA.2010.5586791 (published 2010-07-07)
Cited by: 1
A new descriptor for textured image segmentation based on fuzzy type-2 clustering approach
Lotfi Tlig, M. Sayadi, Farhat Fnaeich
In this paper we present a novel segmentation approach that combines fuzzy clustering and feature extraction. The proposed method consists in forming a new descriptor combining a set of texture sub-features derived from the Grating Cell Operator (GCO) responses of an optimized Gabor filter bank, and Local Binary Pattern (LBP) outputs. The new feature vector offers two advantages. First, it only considers the optimized filters. Second, it aims to characterize both micro- and macro-textures. In addition, an extended version of a type-2 fuzzy c-means clustering algorithm is proposed. The extension is based on the integration of spatial information into the membership function (MF). The performance of this method is demonstrated by several experiments on natural textures.
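For orientation, the type-1 fuzzy c-means membership update underlying such an extension can be sketched as follows (1-D points for brevity; an interval type-2 variant would typically maintain upper and lower memberships, and the paper additionally folds spatial information into the MF, both of which this sketch omits):

```python
def fcm_memberships(points, centers, m=2.0):
    """Type-1 fuzzy c-means membership update:
    u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)),
    with crisp membership when a point coincides with a centre."""
    exp = 2.0 / (m - 1.0)
    U = []
    for x in points:
        d = [abs(x - c) for c in centers]
        if 0.0 in d:
            # Point sits exactly on a centre: full membership there.
            row = [1.0 if di == 0.0 else 0.0 for di in d]
        else:
            row = [1.0 / sum((di / dj) ** exp for dj in d) for di in d]
        U.append(row)
    return U
```

Each row sums to 1, which is the constraint the clustering objective enforces; the fuzzifier `m` controls how soft the partition is.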
DOI: https://doi.org/10.1109/IPTA.2010.5586746 (published 2010-07-07)
Cited by: 7
Adaptive feature selection for heterogeneous image databases
R. Kachouri, K. Djemal, H. Maaref
Discriminative classification based on various visual characteristics has become a standard technique for image recognition tasks in heterogeneous databases. Nevertheless, the problem encountered is choosing the most relevant features depending on the content of the considered image database. To this end, feature selection methods are used to remove the effect of outlier features, thereby reducing the cost of extracting features and improving classification accuracy. We propose, in this paper, an original feature selection method that we call Adaptive Feature Selection (AFS). The proposed method combines Filter and Wrapper approaches. From an extracted feature set, AFS ensures multiple learning of Support Vector Machine (SVM) classifiers. Based on Fisher Linear Discrimination (FLD), it then automatically removes redundant and irrelevant features depending on their corresponding discrimination power. Using a large number of features, extensive experiments are performed on the heterogeneous COREL image database. A comparison with an existing selection method is also provided. The results prove the efficiency and the robustness of the proposed AFS method.
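FLD-based pruning scores each feature by its discrimination power. A minimal two-class sketch of such a Fisher ratio; the exact criterion, thresholds and the SVM wrapper stage of AFS are the paper's own and are not reproduced here:

```python
def fisher_scores(class_a, class_b):
    """Per-feature Fisher discrimination ratio for two classes:
    F_f = (mean_a - mean_b)^2 / (var_a + var_b).
    Features with low scores are the redundant/irrelevant candidates
    a selection step would discard."""
    scores = []
    for f in range(len(class_a[0])):
        a = [row[f] for row in class_a]
        b = [row[f] for row in class_b]
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        va = sum((v - ma) ** 2 for v in a) / len(a)
        vb = sum((v - mb) ** 2 for v in b) / len(b)
        denom = va + vb
        if denom == 0.0:
            # Zero within-class variance: perfectly discriminative if the
            # class means differ, useless if they coincide.
            scores.append(float("inf") if ma != mb else 0.0)
        else:
            scores.append((ma - mb) ** 2 / denom)
    return scores
```

Ranking features by this score and cutting the tail is the filter half of a filter-plus-wrapper pipeline; the wrapper half would re-train the classifier on candidate subsets.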
DOI: https://doi.org/10.1109/IPTA.2010.5586751 (published 2010-07-07)
Cited by: 9