Latest publications from the 2011 IEEE 10th IVMSP Workshop: Perception and Visual Signal Analysis
Classification with invariant scattering representations
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970362
Joan Bruna, S. Mallat
A scattering transform defines a signal representation which is invariant to translations and Lipschitz continuous with respect to deformations. It is implemented with a non-linear convolution network that iterates over wavelet and modulus operators. Lipschitz continuity locally linearizes deformations. Complex classes of signals and textures can be modeled with low-dimensional affine spaces, computed with a PCA in the scattering domain. Classification is performed with a penalized model selection. State-of-the-art results are obtained for handwritten digit recognition over small training sets, and for texture classification.
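The wavelet-modulus-average cascade described in the abstract can be sketched in one dimension. The sketch below is my illustration, not the authors' code: `psi` is a toy Haar-like high-pass filter rather than a proper wavelet family, and only the zeroth and first scattering orders are computed. It shows why averaging the modulus of a wavelet response yields a translation-invariant descriptor.

```python
def circ_conv(x, h):
    """Circular convolution of signal x with filter h."""
    n = len(x)
    return [sum(x[(i - k) % n] * h[k] for k in range(len(h))) for i in range(n)]

def scattering_order1(x, psi):
    """Zeroth- and first-order scattering coefficients: the global average
    of x, and the global average of |x * psi| (wavelet response followed
    by the modulus non-linearity, then averaging)."""
    n = len(x)
    s0 = sum(x) / n
    u1 = [abs(v) for v in circ_conv(x, psi)]  # wavelet + modulus
    s1 = sum(u1) / n                          # averaging -> invariance
    return s0, s1

psi = [0.5, -0.5]  # toy Haar-like high-pass "wavelet" (illustration only)
x = [0, 1, 3, 2, 0, 0, 1, 0]
shifted = x[3:] + x[:3]  # circular translation of the same signal

print(scattering_order1(x, psi))
print(scattering_order1(shifted, psi))  # same coefficients: translation invariance
```

Because circular convolution commutes with circular shifts and the modulus acts pointwise, the averaged coefficients are exactly equal for the shifted copy.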
Citations: 7
Color fidelity of chromatic distributions by triad illuminant comparison
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970345
M. Lucassen, T. Gevers, A. Gijsenij
Performance measures for quantifying human color constancy and computational color constancy are very different. The former relate to measurements on individual object colors whereas the latter relate to the accuracy of the estimated illuminant. To bridge this gap, we propose a psychophysical method in which observers judge the global color fidelity of the visual scene rendered under different illuminants. In each experimental trial, the scene is rendered under three illuminants, two chromatic test illuminants and one neutral reference illuminant. Observers indicate which of the two test illuminants leads to better color fidelity in comparison to the reference illuminant. Here we study multicolor scenes with chromatic distributions that are differently oriented in color space, while having the same average chromaticity. We show that when these distributions are rendered under colored illumination they lead to different perceptual estimates of the color fidelity.
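To make the triad comparison concrete, here is a minimal numeric sketch of the setup. Everything here is an assumption for illustration: the diagonal (von Kries-style) rendering model, the invented patch reflectances and illuminants, and the RGB-distance stand-in for the human observer's fidelity judgement, which the paper obtains psychophysically.

```python
def render(reflectances, illuminant):
    """Diagonal (von Kries-style) rendering: per-channel product of
    surface reflectance and illuminant power."""
    return [[r * l for r, l in zip(refl, illuminant)] for refl in reflectances]

def fidelity_error(test_scene, reference_scene):
    """Mean per-patch RGB distance to the neutral-illuminant rendering --
    a crude numeric stand-in for the observer's fidelity judgement."""
    d = 0.0
    for a, b in zip(test_scene, reference_scene):
        d += sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return d / len(test_scene)

# Hypothetical multicolor scene (per-patch RGB reflectances).
patches = [(0.2, 0.5, 0.8), (0.7, 0.7, 0.1), (0.4, 0.4, 0.4)]
neutral = (1.0, 1.0, 1.0)   # reference illuminant
test_a  = (1.1, 1.0, 0.9)   # mildly yellowish test illuminant
test_b  = (1.4, 1.0, 0.6)   # strongly yellowish test illuminant

ref = render(patches, neutral)
err_a = fidelity_error(render(patches, test_a), ref)
err_b = fidelity_error(render(patches, test_b), ref)
print("preferred:", "A" if err_a < err_b else "B")
```

In each experimental trial the observer plays the role of `fidelity_error`, choosing which of the two test renderings better matches the neutral reference.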
Citations: 3
A new subjective procedure for evaluation and development of texture similarity metrics
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970366
J. Zujovic, T. Pappas, D. Neuhoff, R. Egmond, H. Ridder
In order to facilitate the development of objective texture similarity metrics and to evaluate their performance, one needs a large texture database accurately labeled with perceived similarities between images. We propose ViSiProG, a new Visual Similarity by Progressive Grouping procedure for conducting subjective experiments that organizes a texture database into clusters of visually similar images. The grouping is based on visual blending, and greatly simplifies pairwise labeling. ViSiProG collects subjective data in an efficient and effective manner, so that a relatively large database of textures can be accommodated. Experimental results and comparisons with structural texture similarity metrics demonstrate both the effectiveness of the proposed subjective testing procedure and the performance of the metrics.
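The clustering bookkeeping behind such a progressive-grouping procedure can be sketched as a single greedy pass. This is hypothetical scaffolding only: in ViSiProG the similarity judgement comes from human observers via visual blending, not from the numeric `similarity` function used here, and the 1-D "features" are invented.

```python
def progressive_group(items, similarity, threshold):
    """Greedy grouping: each item joins the first existing cluster whose
    seed it resembles above `threshold`, otherwise it starts a new
    cluster. (Stand-in for grouping driven by human similarity calls.)"""
    clusters = []
    for item in items:
        for cluster in clusters:
            if similarity(item, cluster[0]) >= threshold:
                cluster.append(item)
                break
        else:
            clusters.append([item])
    return clusters

# Hypothetical scalar "texture features"; close values = visually similar.
feats = [0.10, 0.12, 0.90, 0.11, 0.88, 0.50]
sim = lambda a, b: 1.0 - abs(a - b)
print(progressive_group(feats, sim, 0.9))  # -> [[0.1, 0.12, 0.11], [0.9, 0.88], [0.5]]
```

The payoff of grouping is the same as in the paper: labeling n items against a handful of cluster seeds replaces O(n^2) pairwise comparisons.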
Citations: 9
Manipulating attention in computer games
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970371
M. Bernhard, Ling Zhang, M. Wimmer
In computer games, a user's attention is focused on the current task, and task-irrelevant details remain unnoticed. This behavior, known as inattentional blindness, is a main problem for the optimal placement of information or advertisements. We propose a guiding principle based on Wolfe's theory of Guided Search, which predicts the saliency of objects during a visual search task. Assuming that computer games elicit visual search tasks frequently, we applied this model in a “reverse” direction: Given a target item (e.g., advertisement) which should be noticed by the user, we choose a frequently searched game item and modify it so that it shares some perceptual features (e.g., color or orientation) with the target item. A memory experiment with 36 participants showed that in an action video game, advertisements were more noticeable to users when this method is applied.
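The "reverse" application of guided search described above can be caricatured in a few lines. The item structure, names, and the choice of color as the shared feature are invented for illustration; the paper's actual manipulation operates on perceptual features of objects inside a rendered game.

```python
def boost_ad_noticeability(game_items, ad_color):
    """'Reverse' guided-search heuristic: take the most frequently
    searched game item and give it the advertisement's colour, so that
    attention guided toward the item also selects the shared feature.
    Items are (name, search_count, color) tuples -- a made-up structure."""
    name, count, _ = max(game_items, key=lambda it: it[1])
    return (name, count, ad_color)

items = [("health pack", 120, "red"), ("ammo", 45, "grey"), ("key", 8, "gold")]
print(boost_ad_noticeability(items, "blue"))  # ('health pack', 120, 'blue')
```

Guided Search predicts that features shared with the current search target gain saliency, which is why recoloring the frequently searched item toward the advertisement's color should make the advertisement harder to miss.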
Citations: 12
Selective rendering with graphical saliency model
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970372
L. Dong, Weisi Lin, Ce Zhu, S. H. Soon
In this work, we first identify the shortcomings of existing work on selective image rendering. In order to remedy the identified problems, we put forward the concept and formulation of a graphical saliency model (GSM) for selective image rendering applications, in which the sampling rate is determined adaptively according to the resultant saliency map under a computation budget. Different from existing visual attention (VA) models, which have been devised for natural image/video processing and applied to image rendering, the GSM considers the characteristics of the rendering process and aims to detect regions whose rendering requires heavy computation, so that the budget is put to good use. The proposed GSM improves a VA model by incorporating a metric of rendering complexity. Experimental results show that, under a limited computation budget, selective rendering guided by the proposed GSM can achieve better perceived graphic quality than rendering based merely upon a VA model.
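One plausible reading of the adaptive sampling rule is to weight each region by visual saliency times rendering complexity and split the sample budget proportionally. The sketch below is an assumption about the formulation, not the paper's exact equation, and the per-region numbers are invented.

```python
def allocate_samples(saliency, complexity, budget):
    """Graphical-saliency sketch: weight each region by VA-style saliency
    times estimated rendering complexity, then divide the total sample
    budget in proportion to those weights."""
    weights = [s * c for s, c in zip(saliency, complexity)]
    total = sum(weights)
    return [round(budget * w / total) for w in weights]

saliency   = [0.9, 0.5, 0.1]   # visual-attention saliency per region
complexity = [0.8, 0.2, 0.9]   # estimated rendering cost per region
print(allocate_samples(saliency, complexity, 1000))  # [791, 110, 99]
```

Note how the third region, nearly invisible but expensive to render, still receives a non-trivial share under this combined weighting, whereas a pure VA model would starve it.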
Citations: 4
A novel multifocus image fusion scheme based on pixel significance using wavelet transform
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970354
Parul Shah, T. V. Srikanth, S. N. Merchant, U. Desai
In this paper, we propose a novel fusion rule for combining multifocus images of a scene by taking their weighted average in the wavelet domain. The weights are decided adaptively by computing the significance of each pixel from information available in the finer resolution bands; the significance depends on edge strength, giving more weight to pixels with sharper neighborhoods. The performance has been extensively tested on several pairs of multifocus images and compared quantitatively with various existing methods. The analysis shows that the proposed method significantly increases the quality of the fused image, both visually and in terms of quantitative parameters, by achieving a major reduction in artefacts.
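A pixel-domain analogue of the weighting rule looks as follows. This is a simplification for illustration: the paper applies significance-based weights to wavelet subband coefficients, whereas this sketch uses an L1 gradient magnitude as the "significance" and fuses pixels directly; border pixels are simply copied from the first image.

```python
def local_edge_strength(img, i, j):
    """L1 gradient magnitude at a pixel -- a simple 'significance'
    measure standing in for finer-band wavelet energy."""
    gx = img[i][j + 1] - img[i][j - 1]
    gy = img[i + 1][j] - img[i - 1][j]
    return abs(gx) + abs(gy)

def fuse(img_a, img_b):
    """Weighted average of two registered images, with weights from local
    edge strength so the sharper (in-focus) source dominates each pixel.
    Borders are kept from img_a for simplicity."""
    h, w = len(img_a), len(img_a[0])
    out = [row[:] for row in img_a]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            wa = local_edge_strength(img_a, i, j) + 1e-6  # avoid 0/0
            wb = local_edge_strength(img_b, i, j) + 1e-6
            out[i][j] = (wa * img_a[i][j] + wb * img_b[i][j]) / (wa + wb)
    return out

sharp = [[0, 0, 0], [0, 8, 4], [0, 4, 4]]  # in-focus: strong local edges
soft  = [[2, 2, 2], [2, 4, 3], [2, 3, 3]]  # out-of-focus: weak edges
print(fuse(sharp, soft)[1][1])  # close to the sharp image's value
```

At the center pixel the sharp image has edge strength 8 versus 2 for the soft one, so the fused value (8*8 + 2*4)/10 = 7.2 leans heavily toward the in-focus source.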
Citations: 16
Local correction with global constraint for image enhancement
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970351
Z. Hou, H. Eng, T. Koh
This paper presents a method to improve image contrast adaptively, taking into account both local and global image context. First, the image is analyzed to find the region containing meaningful content with good contrast and the region containing meaningful content but with poor contrast. The analysis is based on the different responses from two edge detectors: the Canny detector and the zero-crossing detector. Then statistics of the gradient field in the former region are used to correct the gradient field in the latter region. Reconstruction of the content in the latter region is accomplished by solving a Poisson equation with Dirichlet boundary conditions. Throughout the process, objects with poor visibility are automatically detected and adaptively enhanced without sacrificing the contrast of image content that is properly illuminated. Experiments show the advantages of the proposed method over conventional contrast enhancement methods.
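A one-dimensional analogue makes the gradient-correction idea concrete. Two pieces are assumptions for illustration: matching gradient standard deviations as the "statistics" being transferred, and reintegration by cumulative summation, which is the 1-D counterpart of solving the Poisson equation with fixed boundary values.

```python
import math

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def correct_region(signal, good, poor):
    """Rescale the gradients of the low-contrast region so their spread
    matches the well-contrasted region, then reintegrate by cumulative
    summation from the region's left boundary."""
    grad = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    g_good = [grad[i] for i in range(*good)]
    g_poor = [grad[i] for i in range(*poor)]
    gain = std(g_good) / (std(g_poor) + 1e-12)
    out = signal[:]
    for i in range(*poor):
        out[i + 1] = out[i] + grad[i] * gain  # boosted gradient, reintegrated
    return out

# Left half: good contrast (swing 4); right half: same structure, swing 0.5.
signal = [0, 4, 0, 4, 0, 0.5, 0, 0.5, 0]
print(correct_region(signal, good=(0, 4), poor=(4, 8)))
```

After correction the right half oscillates with the same amplitude as the left half, which is the 1-D version of enhancing a poorly lit region to match a well-lit one.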
Citations: 1
A comparative study on the local-pyramid approach for Content-Based Image Retrieval
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970363
Lin Feng, Anand Bilas Ray
The local-pyramid approach for image representation and feature extraction is studied for Content-Based Image Retrieval (CBIR). Lazebnik's pyramid matching kernels and K-means clustering are used. The SIFT descriptor is deployed for feature extraction from the images, resulting in an efficient image representation scheme and a reduction of computational complexity. Histogram intersection is used to compute the similarity between the query image and the database images. The local-pyramid approach with a 3-level pyramid and a dictionary size of 100 achieves an average precision of 86.5% in retrieving images from the benchmark database, COREL 1K, and 77.35% with a random image database.
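The histogram intersection used for retrieval is itself a one-liner: the similarity of two normalized histograms is the mass they share, i.e. the sum over bins of min(h1[i], h2[i]). The toy query and database below are invented; the paper intersects bag-of-SIFT pyramid histograms.

```python
def histogram_intersection(h1, h2):
    """Similarity between two normalized histograms: the total
    probability mass the two distributions share."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = [0.5, 0.3, 0.2]
db    = [[0.5, 0.3, 0.2],   # identical to the query
         [0.1, 0.1, 0.8],   # very different
         [0.4, 0.4, 0.2]]   # close
scores = [histogram_intersection(query, h) for h in db]
best = max(range(len(db)), key=lambda i: scores[i])
print(scores, "-> best match:", best)
```

An identical histogram scores 1.0 and disjoint histograms score 0, so ranking database images by this score directly yields the retrieval order.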
Citations: 0
Detection of repetitive patterns in near regular texture images
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970355
Yunliang Cai, G. Baciu
Detection of repetitive patterns in texture images is a longstanding problem in texture analysis. In the textile industry, this is particularly useful in isolating repeats in woven fabric designs. Based on repetitive patterns, textile designers can identify and classify complex textures. In this paper, we propose a new method for detecting, locating, and grouping repetitive patterns, particularly for near regular textures (NRT), based on a mid-level patch descriptor. An NRT is parameterized as a vector-valued function representing a texton unit together with a set of geometric transformations. We perform shape alignment by image congealing and correlation matching. Our experiments demonstrate that our patch-based method significantly improves the performance and the versatility of repetitive pattern detection in NRT images.
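The core of repeat detection, finding the lag at which a texture best matches a shifted copy of itself, can be shown in one dimension. This toy circular autocorrelation is only a stand-in: the paper works on 2-D patches with congealing-based alignment and a mid-level descriptor, none of which appears here.

```python
def find_period(signal, max_lag):
    """Smallest lag at which the signal best matches a shifted copy of
    itself (circular autocorrelation) -- a 1-D stand-in for locating the
    repeat length of a near regular texture."""
    n = len(signal)
    best_lag, best_score = 1, float("-inf")
    for lag in range(1, max_lag + 1):
        score = sum(signal[i] * signal[(i + lag) % n] for i in range(n))
        if score > best_score:  # strict >, so the smallest best lag wins
            best_lag, best_score = lag, score
    return best_lag

pattern = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # repeats every 3 samples
print(find_period(pattern, 6))  # 3
```

On a real fabric image the analogous step is a 2-D correlation over candidate translations, with congealing used first so that near regular (slightly deformed) repeats still align.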
Citations: 14
Perceptual curve extraction
Pub Date: 2011-06-16 DOI: 10.1109/IVMSPW.2011.5970361
Baptiste Magnier, Daniel Diep, P. Montesinos
In this paper we propose a new perceptual curve detection method in images based on the difference of half rotating Gaussian filters. The novelty of this approach resides in the mixing of ideas from directional filters, perceptual organization, and the DoG method. We obtain a new anisotropic DoG detector enabling very precise detection of perceptual curve points. Moreover, this detector performs correctly on perceptual curves even when they are highly bent, and is precise at perceptual junctions. The detector has been tested successfully on various image types that present genuinely difficult problems for classical detection methods.
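The DoG ingredient the method builds on is easy to exhibit. The sketch below uses an ordinary isotropic 1-D difference of Gaussians and does not reproduce the paper's half rotating, orientation-selective filters; the sigmas and the test step signal are arbitrary choices for illustration.

```python
import math

def gaussian(sigma, radius):
    g = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(g)
    return [v / s for v in g]  # normalized to unit sum

def dog_kernel(sigma1, sigma2, radius):
    """Difference of two normalized Gaussians: a zero-sum band-pass
    kernel that responds to intensity transitions. (The paper replaces
    this isotropic DoG with differences of half rotating Gaussians to
    gain orientation selectivity; that extension is not shown here.)"""
    g1, g2 = gaussian(sigma1, radius), gaussian(sigma2, radius)
    return [a - b for a, b in zip(g1, g2)]

def filter1d(signal, kernel):
    """Correlate signal with kernel, clamping indices at the borders."""
    r, n = len(kernel) // 2, len(signal)
    return [sum(signal[min(max(i + k - r, 0), n - 1)] * kernel[k]
                for k in range(len(kernel))) for i in range(n)]

step = [0.0] * 8 + [1.0] * 8            # a single intensity edge
response = filter1d(step, dog_kernel(1.0, 2.0, 4))
print(max(range(len(step)), key=lambda i: abs(response[i])))  # index near the edge
```

Because the kernel sums to zero, flat regions produce no response; the response magnitude peaks in the neighborhood of the step, which is the behavior the anisotropic version sharpens along curve directions.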
Citations: 2