
2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG): Latest Publications

Recursive Binary Particle Swarm Optimization based Face Localization
N. Sanket, K. Manikantan, S. Ramachandran
Face localization on frontal-pose grayscale images under varying conditions of illumination, background and gender is challenging. Developing a robust technique to handle all the aforementioned variations requires a lot of training time and hardware to obtain a good localization rate. In this paper, a novel Recursive Binary Particle Swarm Optimization is proposed to create a generic template of the face. This template is then used for template matching in the Block DCT Signal Space to obtain the position of the face in the test image. Experimental results, obtained by applying the proposed algorithm on the CalTech, FERET and Extended Yale B face databases, show that the proposed system provides good localization rates with a low training time.
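As a rough illustration of the matching stage, the sketch below slides a pre-built face template over the test image in an 8x8 block-DCT representation and returns the position with the smallest squared coefficient difference. It is only a minimal sketch, not the authors' implementation: the block size, step and distance score are assumptions, and the recursive BPSO training that produces `face_template` is omitted.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct(img, block=8):
    """2-D DCT computed independently on non-overlapping block x block tiles."""
    h, w = img.shape
    h, w = h - h % block, w - w % block           # crop to a multiple of the block size
    out = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block].astype(float)
            out[y:y + block, x:x + block] = dct(dct(tile, axis=0, norm='ortho'),
                                                axis=1, norm='ortho')
    return out

def locate_face(test_img, face_template, block=8):
    """Slide the block-DCT of the template over the block-DCT of the test image
    and return the top-left corner with the smallest squared coefficient distance."""
    T = block_dct(face_template, block)
    I = block_dct(test_img, block)
    th, tw = T.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(0, I.shape[0] - th + 1, block):    # step one block at a time
        for x in range(0, I.shape[1] - tw + 1, block):
            d = np.sum((I[y:y + th, x:x + tw] - T) ** 2)
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos
```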
DOI: 10.1109/NCVPRIPG.2013.6776227
Citations: 2
Kernel estimation from blurred edge profiles using Radon Transform for shaken images
C. Fasil, C. Jiji
Motion blur due to camera shake during exposure often leads to noticeable artifacts in images. In this paper, we address the problem of recovering the true image from its blurred version. The problem is challenging since both the blur kernel and the sharp image are unknown. The quality of a deblurred image is closely related to the correctness of the estimated blur kernel. In this work we focus on the use of the Radon Transform for blur kernel estimation. This is done by analyzing edges in the blurred image and thereby constructing the projections of the blur kernel. Estimation of the blur kernel from its projections is done by incorporating the sparse nature of the blur kernel. The problem is solved through l1 minimization making use of the estimated projections. After building the kernel, we use a non-blind deconvolution algorithm to produce the sharp image. Results show that this approach is well suited for blurred images having significant edges.
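The sparse-recovery step can be pictured with a generic sketch: build a crude discrete Radon operator for a small kernel grid and recover the kernel from its measured projections by l1-regularised ISTA. This is not the authors' code; the operator construction, the angle set, the regularisation weight and the non-negativity constraint are assumptions, and the edge-analysis step that produces the projection vector `b` is omitted.

```python
import numpy as np
from scipy.ndimage import rotate

def projection_matrix(ksize, angles):
    """Crude discrete Radon operator for a (ksize x ksize) kernel: each block of
    ksize rows sums the kernel along image rows after rotating it by one angle."""
    rows = []
    for theta in angles:
        for basis in np.eye(ksize * ksize):
            rot = rotate(basis.reshape(ksize, ksize), theta, reshape=False, order=1)
            rows.append(rot.sum(axis=1))          # projection of one basis kernel
    A = np.stack(rows).reshape(len(angles), ksize * ksize, ksize)
    return A.transpose(0, 2, 1).reshape(len(angles) * ksize, ksize * ksize)

def ista_l1(A, b, lam=1e-3, iters=500):
    """Plain ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 with x >= 0
    (a blur kernel is non-negative and sums to one)."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        x = np.maximum(x - g / L - lam / L, 0.0)  # gradient step + soft threshold
    s = x.sum()
    return x / s if s > 0 else x

# `b` would hold the concatenated edge-derivative profiles measured at the same
# angles in the blurred image (the paper's edge-analysis step, omitted here).
```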
DOI: 10.1109/NCVPRIPG.2013.6776254
Citations: 0
Lip tracking under varying expressions utilizing domain knowledge
Swapna Agarwal, D. Mukherjee
In recent years the need for robust facial component tracking, and lip tracking in particular, has increased dramatically. We implement an active contour (snake) model inspired by human perception for lip tracking. In addition to the conventional energy terms for tension, rigidity (internal energy) and gradient magnitude (external energy), we propose to include energy terms derived from domain knowledge for a lip shape constraint and a local region profile constraint. A generalized deterministic annealing (GDA) update of the energy functional helps the solution escape suboptimal local minima in the energy space and gives better tracking results. Experimental results show that the proposed method efficiently adapts to highly deformable lip boundaries, even for lips with indistinct edges and colored (adorned) lips, where gradient-magnitude-based and local-region-based tracking methods respectively fail. We have performed a number of experiments to evaluate the performance of our method in comparison with existing state-of-the-art methods.
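A minimal sketch of the kind of energy functional being minimised is given below; it combines the conventional tension, rigidity and gradient-magnitude terms with a placeholder shape-prior penalty standing in for the paper's domain-knowledge terms. The weights, the quadratic shape penalty and the function names are assumptions, and the GDA update itself is omitted.

```python
import numpy as np
from scipy import ndimage

def snake_energy(pts, grad_mag, alpha=0.1, beta=0.1, gamma=1.0, delta=0.5,
                 shape_prior=None):
    """Total energy of a closed discrete contour `pts` (N x 2 array of (row, col)).

    alpha: tension weight (first-difference term)
    beta:  rigidity weight (second-difference term)
    gamma: image weight; strong gradients lower the energy
    delta: weight of a placeholder shape term, here a squared distance to a
           reference lip contour (stand-in for the paper's domain-knowledge terms)
    """
    d1 = np.roll(pts, -1, axis=0) - pts
    d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)
    e_int = alpha * np.sum(d1 ** 2) + beta * np.sum(d2 ** 2)

    # sample the gradient-magnitude image at the (possibly sub-pixel) contour points
    e_ext = -gamma * np.sum(ndimage.map_coordinates(grad_mag, pts.T, order=1))

    e_shape = 0.0
    if shape_prior is not None:
        e_shape = delta * np.sum((pts - shape_prior) ** 2)
    return e_int + e_ext + e_shape
```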
DOI: 10.1109/NCVPRIPG.2013.6776201
Citations: 1
Search based Video Recommendations
Abhranil Chatterjee, Bijoy Sarkar, Prateeksha Chandraghatgi, K. Seal, Girish Ananthakrishnan
In this paper, we present a search-powered approach we have used in building a Video Recommendations Engine for Yahoo hosted videos and Yahoo Video Search. The aim is to increase user engagement by recommending related videos and hence increase revenue by being able to show more advertisements as the user keeps consuming more videos. The system accepts an input context that provides information about the user and the video consumed, and returns a set of related videos as recommendations. We view this as a multi-faceted problem since the intent of the user at a particular point in time cannot be known deterministically. We therefore generate the candidate set of recommendations using an ensemble of algorithms and available search signals. We discuss these algorithms and the mechanisms for retrieving related videos in detail, along with an explore-exploit strategy for learning a near-optimal ranking of the candidate recommendations, and provide the performance results. This system has increased the number of video plays at Yahoo by 66%.
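The paper does not spell out the exact bandit scheme, so the snippet below only illustrates the explore-exploit idea with a plain epsilon-greedy re-ranking of the candidate set; the CTR estimates, the epsilon value and the function names are assumptions.

```python
import random

def rank_candidates(candidates, ctr_estimate, epsilon=0.1):
    """Order candidate videos by estimated click-through rate, but with
    probability `epsilon` promote one arbitrary candidate to the top so that
    items with little feedback still get exposure (a plain epsilon-greedy
    stand-in for an explore-exploit ranking)."""
    ranked = sorted(candidates, key=lambda v: ctr_estimate.get(v, 0.0), reverse=True)
    if ranked and random.random() < epsilon:
        i = random.randrange(len(ranked))
        ranked.insert(0, ranked.pop(i))      # explore: surface one under-played video
    return ranked

# Example: rank_candidates(["v1", "v2", "v3"], {"v1": 0.04, "v2": 0.11}) usually
# returns ["v2", "v1", "v3"], occasionally with a random video moved to the front.
```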
DOI: 10.1109/NCVPRIPG.2013.6776190
Citations: 2
Shape recognition based on shape-signature identification and condensibility: Application to underwater imagery
J. Banerjee, R. Ray, S. R. K. Vadali, R. Layek, S. N. Shome
In this paper, a shape recognition method is proposed for a few common geometrical shapes including the straight line, circle, ellipse, triangle, quadrilateral, pentagon and hexagon. In the present work, two indices, namely Unique Shape Signature (USS) and Condensibility (C), are employed for shape recognition of an object. Using the USS index, all the above-mentioned non-circular shapes are neatly recognized, whereas the C index recognizes circular objects. An added advantage of the proposed method is that it can further differentiate triangles, quadrilaterals and both symmetric and non-symmetric pentagons and hexagons using the distance-variance parameter (Var(dsi)) calculated from the USS. Applying the proposed method to the above-mentioned shapes, an overall recognition rate of 98.80% is achieved on several simulated and real objects of different shapes. The proposed method has also been compared with two existing methods and presents better results. The performance of the proposed method is illustrated by applying it to underwater images, where it is observed to perform satisfactorily on all the images under test.
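USS and Condensibility are the paper's own indices; as a loose stand-in, the sketch below computes a classic centroid-distance shape signature, a standard circularity ratio for the circular/non-circular split, and a crude corner count from the signature's local maxima. All names, thresholds and sampling choices are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def centroid_distance_signature(contour, n_samples=64):
    """Classic shape signature: distances from the centroid to n_samples points
    taken evenly along the contour, normalised by the maximum distance."""
    contour = np.asarray(contour, dtype=float)
    idx = np.linspace(0, len(contour) - 1, n_samples).astype(int)
    d = np.linalg.norm(contour[idx] - contour.mean(axis=0), axis=1)
    return d / (d.max() + 1e-12)

def circularity(area, perimeter):
    """4*pi*A / P^2 equals 1 for a perfect circle and drops for other shapes;
    a common proxy for a circular / non-circular decision."""
    return 4.0 * np.pi * area / (perimeter ** 2)

def count_corners(signature, margin=0.05):
    """Crude polygon test: local maxima of the signature roughly correspond to
    corners, so their count separates triangles, quadrilaterals, pentagons, ..."""
    s = np.asarray(signature)
    peaks = (s > np.roll(s, 1)) & (s > np.roll(s, -1)) & (s > s.mean() + margin)
    return int(peaks.sum())
```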
DOI: 10.1109/NCVPRIPG.2013.6776224
Citations: 5
Dual Objective Feature Selection and Scaled Euclidean Classification for face recognition
Siddharth Srivatsa, Prajwal Shanthakumar, K. Manikantan, S. Ramachandran
The statistical description of the face varies drastically with changes in pose, illumination and expression. These variations make face recognition (FR) even more challenging. In this paper, two novel techniques are proposed, viz., Dual Objective Feature Selection to learn and select only discriminant features and Scaled Euclidean Classification to exploit within-class information for smarter matching. The 1-D discrete cosine transform (DCT) is used for efficient feature extraction. A complete FR system for enhanced recognition performance is presented. Experimental results on three benchmark face databases, namely, Color FERET, CMU PIE and ORL, illustrate the promising performance of the proposed techniques for face recognition.
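One plausible reading of Scaled Euclidean Classification is a nearest-class-mean rule in which each feature is scaled by that class's own variance; the sketch below implements that reading on already-selected DCT feature vectors. The exact scaling used in the paper may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

def fit_class_stats(features, labels):
    """Per-class mean and per-feature variance of the (already selected) DCT
    feature vectors; `features` is (N, D), `labels` is length N."""
    stats = {}
    for c in np.unique(labels):
        X = features[labels == c]
        stats[c] = (X.mean(axis=0), X.var(axis=0) + 1e-8)   # floor avoids divide-by-zero
    return stats

def scaled_euclidean_classify(x, stats):
    """Assign x to the class whose mean is nearest under a Euclidean distance
    scaled, feature by feature, by that class's own variance."""
    best, best_c = np.inf, None
    for c, (mu, var) in stats.items():
        d = np.sum((x - mu) ** 2 / var)
        if d < best:
            best, best_c = d, c
    return best_c
```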
DOI: 10.1109/NCVPRIPG.2013.6776153
Citations: 2
Psychovisual saliency in color images
Soumyajit Gupta, Rahul Agrawal, R. Layek, J. Mukhopadhyay
Visual attention is an indispensable component of complex vision tasks. When looking at a complex scene, our ocular perception is confronted with a large amount of data that needs to be broken down for processing by our psychovisual system. Selective visual attention provides a mechanism for serializing the visual data, allowing for sequential processing of the content of the scene. A Bottom-Up computational model is described that simulates the psycho-visual model of saliency based on features of intensity and color. The method gives sequential priorities to objects which other computational models cannot account for. The results demonstrate a fast execution time, full resolution maps and high detection accuracy. The model is applicable on both natural and artificial images.
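A very small bottom-up saliency sketch in the same spirit is shown below: centre-surround contrast on intensity and on two colour-opponency channels, averaged into one map. The Gaussian scales, the opponency definitions and the combination rule are assumptions and do not reproduce the paper's model.

```python
import numpy as np
from scipy import ndimage

def saliency_map(rgb):
    """Bottom-up saliency sketch: centre-surround contrast of intensity and of
    two colour-opponency channels, averaged. `rgb` is (H, W, 3) float in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g                        # red / green opponency
    by = b - (r + g) / 2.0            # blue / yellow opponency

    def center_surround(ch, c_sigma=2, s_sigma=8):
        return np.abs(ndimage.gaussian_filter(ch, c_sigma)
                      - ndimage.gaussian_filter(ch, s_sigma))

    maps = [center_surround(ch) for ch in (intensity, rg, by)]
    return sum(m / (m.max() + 1e-8) for m in maps) / len(maps)
```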
DOI: 10.1109/NCVPRIPG.2013.6776158
Citations: 1
CHILD: A robust Computationally-Efficient Histogram-based Image Local Descriptor
Sai Hareesh Anamandra, V. Chandrasekaran
Designing a robust image local descriptor for the purpose of pattern recognition and classification has been an active area of research. Towards this end, a number of local descriptors based on Weber's law have been proposed recently. Notable among them are Weber Local Descriptor (WLD), Weber Local Binary Pattern (WLBP) and Gabor Weber Local Descriptor (GWLD). Experiments reveal their inability to classify patterns under noisy environments. Our analysis indicates that the components of the WLD: differential excitation and orientation are to be redesigned for robustness and computational efficiency.
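For reference, the two WLD components the paper critiques can be computed densely as below; this follows the commonly cited baseline WLD formulation (differential excitation as the arctangent of the Weber ratio, orientation from neighbour differences) rather than the redesigned CHILD descriptor, and the kernels and epsilon are assumptions.

```python
import numpy as np
from scipy import ndimage

def wld_components(img, eps=1e-6):
    """Differential excitation and orientation of the baseline Weber Local
    Descriptor, computed densely on a grayscale float image."""
    img = img.astype(float)
    # sum of differences between each centre pixel and its 8 neighbours
    k = np.array([[1, 1, 1],
                  [1, -8, 1],
                  [1, 1, 1]], dtype=float)
    diff_sum = ndimage.convolve(img, k, mode='reflect')
    excitation = np.arctan(diff_sum / (img + eps))        # arctan of the Weber ratio

    # orientation from vertical vs. horizontal neighbour differences
    dv = ndimage.convolve(img, np.array([[0, 1, 0], [0, 0, 0], [0, -1, 0]], float))
    dh = ndimage.convolve(img, np.array([[0, 0, 0], [1, 0, -1], [0, 0, 0]], float))
    orientation = np.arctan2(dv, dh + eps)
    return excitation, orientation
```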
DOI: 10.1109/NCVPRIPG.2013.6776154
Citations: 2
Mesh denoising by improved 3D geometric bilateral filter
Somnath Dutta, Sumandeep Banerjee, P. Biswas, Partha Bhowmick
We present an improved mesh denoising method based on 3D geometric bilateral filtering. Its novelty is that it can preserve the details of the object as well as reduce the noise in an effective manner. The previous approach of geometric bilateral filtering for 3D-scan points has a limitation that it reduces the point density, thereby losing the details present in the object. The approach proposed by us, on the contrary, works on the surface mesh obtained after triangulating the 3D-scan points without any data downsampling. Each vertex of the mesh is repositioned appropriately based on the estimated centroid of the vertices in its local neighborhood and a Gaussian weight function. Experimental results demonstrate its strength, efficiency, and robustness.
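A generic bilateral-style vertex smoothing pass along these lines is sketched below: each vertex moves toward a weighted average of its 1-ring, with one Gaussian on spatial distance and one on distance to the local centroid. The exact weighting, parameters and iteration scheme in the paper may differ; `neighbors` is assumed to be a precomputed 1-ring adjacency list.

```python
import numpy as np

def denoise_vertices(vertices, neighbors, sigma_s=1.0, sigma_r=0.5, iters=3):
    """Bilateral-style smoothing pass over a triangle-mesh vertex list.

    vertices  : (N, 3) float array of mesh vertex positions
    neighbors : list where neighbors[i] is the index list of vertex i's 1-ring
    sigma_s   : falloff on distance to the vertex
    sigma_r   : falloff on distance to the local centroid
    """
    V = np.asarray(vertices, dtype=float).copy()
    for _ in range(iters):
        newV = V.copy()
        for i, nbrs in enumerate(neighbors):
            if not nbrs:
                continue
            P = V[nbrs]
            centroid = P.mean(axis=0)
            w = (np.exp(-np.linalg.norm(P - V[i], axis=1) ** 2 / (2 * sigma_s ** 2))
                 * np.exp(-np.linalg.norm(P - centroid, axis=1) ** 2 / (2 * sigma_r ** 2)))
            if w.sum() > 0:
                newV[i] = (w[:, None] * P).sum(axis=0) / w.sum()
        V = newV
    return V
```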
DOI: 10.1109/NCVPRIPG.2013.6776193
Citations: 2
Outdoor scene classification using invariant features
R. Raja, S. Roomi, D. Dharmalakshmi
Scene classification using semantic description has gained much attention in automatic image retrieval. In many cases, the visual appearance of images is affected by environmental conditions such as low lighting and viewing conditions. Such problems in semantic scenes pose difficult challenges for the classification of sceneries. To address this issue, a new outdoor scene classification method using low-level features is proposed in this work. To support automatic scene classification at the concept level, efficient illumination- and rotation-invariant low-level features such as color, texture and edge-like features are used in conjunction with a multiclass Support Vector Machine (SVM). In this work, we take scene categories such as mountains, forests, highways, rivers and buildings from outdoor scenes for the classification experiments. The experimental results demonstrate that the proposed method provides better classification on large-scale image databases such as the Eight Scene Category, upright scene and COREL datasets, and gives better performance in terms of classification accuracy.
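As a toy illustration of the pipeline, the sketch below builds two cheap global histograms (a saturation-like colour cue and a gradient-magnitude cue) and feeds them to a multiclass SVM via scikit-learn; the specific features, histogram sizes and SVM hyper-parameters are assumptions and are much simpler than the paper's feature set.

```python
import numpy as np
from sklearn.svm import SVC

def scene_features(rgb, bins=8):
    """Two cheap global histograms: a saturation-like colour cue and a
    gradient-magnitude (edge/texture) cue, concatenated into one vector."""
    rgb = rgb.astype(float) / 255.0
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    sat = np.where(mx > 0, (mx - mn) / (mx + 1e-8), 0.0)
    gray = rgb.mean(axis=-1)
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)
    h_col, _ = np.histogram(sat, bins=bins, range=(0, 1), density=True)
    h_grad, _ = np.histogram(grad, bins=bins, range=(0, grad.max() + 1e-8), density=True)
    return np.concatenate([h_col, h_grad])

def train_scene_svm(images, labels):
    """Fit a multiclass RBF SVM (scikit-learn handles one-vs-one internally)."""
    X = np.stack([scene_features(im) for im in images])
    return SVC(kernel='rbf', C=10.0, gamma='scale').fit(X, labels)
```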
DOI: 10.1109/NCVPRIPG.2013.6776188
Citations: 7