
2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG): latest publications

An optimized derivative projection warping approach for moving platform video stabilization
Deepika Shukla, R. K. Jha
This paper presents an optimized and efficient video stabilization technique based on projection curve warping. In most recorded videos, the relative displacement between two consecutive frames ranges from 3-4 pixels for hand-held recordings to 25-30 pixels for moving-platform applications. Based on this experimental data, the use of a Sakoe-Chiba band with a fixed window size is proposed for constraining the distance-matrix estimation in the dynamic time warping algorithm. Existing projection-based stabilization techniques match intensity values for motion estimation, so any change in local intensity values, whether induced by intensity variation, moving objects, or scene variation, causes error in the estimated motion. To overcome this problem, a higher-level feature, the shape of the projection curve, is incorporated by matching the local derivative of the curve instead of the intensity values themselves. The robustness and time efficiency of the proposed technique are measured in terms of interframe transformation fidelity and processing time, respectively.
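The band-constrained matching described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `dtw_band`, the half-width parameter `w`, and the absolute-difference cost are all assumptions. Matching `np.diff` of the projection curves, rather than the curves themselves, mirrors the derivative-based feature the abstract describes.

```python
import numpy as np

def dtw_band(x, y, w):
    # Dynamic time warping with a Sakoe-Chiba band of half-width w:
    # only cells of the distance matrix with |i - j| <= w are filled,
    # cutting the cost from O(n*m) to O(n*w).
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Match the shape of two projection curves via their local derivatives:
# a constant intensity offset between frames disappears after np.diff.
curve_a = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
curve_b = np.array([2.0, 3.0, 6.0, 11.0, 18.0])  # same shape, offset intensity
dist = dtw_band(np.diff(curve_a), np.diff(curve_b), w=4)
```

With intensity matching these two curves would disagree everywhere; on the derivative they align perfectly.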
DOI: 10.1109/NCVPRIPG.2013.6776218
Cited by: 4
Tracking based depth-guided video inpainting
Saroj Hatheele, M. Zaveri
In this paper, we propose a novel technique for tracking-based video inpainting using depth information. Depth information obtained from structure from motion is refined by an extended voting-based algorithm. The refined depth map is used to extract the moving foreground object from the tracked moving object, which is then inserted into other video frames using inpainting based on integrated color and depth information. We compare color-based video inpainting with inpainting based on integrated color and depth information. Our method enables special effects by incorporating tracking and depth information into video inpainting, and the inclusion of depth information increases the quality of the inpainted video. Finally, we present experimental results of depth refinement and video inpainting for monocular video sequences captured with a static camera and containing moving objects.
DOI: 10.1109/NCVPRIPG.2013.6776217
Cited by: 2
A new approach for terrain analysis of lunar surface by Chandrayaan-1 data using open source libraries
Hardik Acharya, Amitabh, T. Srinivasan, B. Gopalakrishna
Chandrayaan-1, India's first moon mission, was launched by ISRO in October 2008. SAC (Space Applications Centre) is responsible for developing software to process data from HySI (Hyper Spectral Imager) and TMC (Terrain Mapping Camera). The present work discusses the technique and methodology for generating terrain parameters, i.e., slope, aspect, relief shade, and contours, using a Digital Elevation Model (DEM) generated from Chandrayaan-1 TMC datasets. An algorithm and corresponding desktop application software have been developed and implemented, and preliminary testing of the application using Chandrayaan-1 DEM data indicates promising results. Creating the environment for executing the code with open-source technology is a challenging task, as it includes building the open-source libraries with Visual Studio. This paper describes the generation methods for slope, aspect, relief shade, painted slope, painted aspect, and painted DEM, and discusses the results achieved for terrain evaluation.
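As an illustration of the slope and aspect products mentioned above, here is a minimal sketch using central differences on an elevation grid. The function name and the choice of `np.gradient` are assumptions for illustration; the paper's own implementation builds on open-source libraries.

```python
import numpy as np

def slope_aspect(dem, cell_size=1.0):
    # Elevation gradients by finite differences (rows = y, cols = x).
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    # Slope: steepest inclination angle at each cell, in degrees.
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Aspect: compass direction of the downslope, folded into [0, 360).
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# A plane rising 1 unit per cell eastward has a uniform 45-degree slope.
dem = np.tile(np.arange(5.0), (5, 1))
slope, aspect = slope_aspect(dem)
```

Relief shading would follow the same pattern, combining these gradients with a chosen sun azimuth and elevation.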
DOI: 10.1109/NCVPRIPG.2013.6776166
Cited by: 2
Near real-time face parsing
A. Minocha, Digvijay Singh, Nataraj Jammalamadaka, C. V. Jawahar
Commercial applications such as driver-assistance programs in cars and smile-detection software in cameras typically require reliable facial landmark points (e.g., the locations of the eyes and lips) and the face pose at near real-time rates. Current methods are often unreliable, cumbersome, or computationally intensive. In this work, we focus on implementing a reliable, real-time method that parses an image, detects faces, estimates their pose, and locates landmark points on each face. Our method builds on the existing literature and works for both images and videos.
DOI: 10.1109/NCVPRIPG.2013.6776192
Cited by: 1
Time-frequency analysis based motion detection in perfusion weighted MRI
M. Sushma, Anubha Gupta, J. Sivaswamy
In this paper, we present a novel automated method to detect motion in perfusion weighted imaging (PWI), a type of magnetic resonance imaging (MRI). In PWI, blood perfusion is measured by injecting an exogenous tracer, called a bolus, into the patient's bloodstream and then tracking it through the brain. PWI requires a long data acquisition time to form a time series of volumes, so the patient's unavoidable movements during a scan corrupt the data with motion. These motion artifacts must be detected in the captured data for correct disease diagnosis. In PWI, the intensity profile is disturbed both by motion and by the passage of the bolus through the blood vessels, and the two disturbances cannot be distinguished from intensity alone. We therefore propose an efficient motion detection method based on time-frequency analysis and show that it is computationally inexpensive and fast. The method is evaluated on a DSC-MRI sequence with simulated motion of different degrees, and it detects motion within a few seconds.
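A toy illustration of the time-frequency idea: motion produces an abrupt, broadband disturbance in a voxel's time course, whereas bolus passage is smooth and low-frequency. The window length, the band split, and the z-score threshold below are all assumptions for the sketch, not the paper's parameters.

```python
import numpy as np

def motion_windows(signal, win=8, thresh=2.0):
    # Slide a short window along the time course and measure the energy
    # in its higher-frequency bins (skipping DC and the slowest bin);
    # flag windows whose energy is an outlier for the whole sequence.
    n = len(signal) - win + 1
    energy = np.empty(n)
    for t in range(n):
        spec = np.abs(np.fft.rfft(signal[t:t + win]))
        energy[t] = spec[2:].sum()
    z = (energy - energy.mean()) / (energy.std() + 1e-12)
    return np.where(z > thresh)[0]

# Smooth baseline with a single abrupt jolt at frame 30: only the
# windows covering frame 30 acquire high-frequency energy.
sig = np.ones(64)
sig[30] += 5.0
flagged = motion_windows(sig)
```

A smooth bolus dip spreads its energy into the lowest bins, so it would not trip this high-frequency test.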
DOI: 10.1109/NCVPRIPG.2013.6776215
Cited by: 0
Correlation based object-specific attentional mechanism for target localization in high resolution satellite images
Phool Preet, P. Chowdhury, G. S. Malik
The attentional mechanism, or focus of attention, is the front end of an object recognition system, tasked with rapidly reducing the search area in the image. In this paper we present correlation-based template matching as an attentional mechanism for high-resolution satellite images. We show experimentally that, despite intra-class variations and object transformations, correlation-based template matching can serve as an attentional mechanism. Different image variants, such as gradient magnitude and gradient orientation, are also compared for correlation matching. Based on the experiments, a threshold selection mechanism is given.
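A minimal sketch of the correlation stage (brute-force normalized cross-correlation over every valid offset). The function name and the zero-mean normalization are assumptions; production code would use an FFT-based or library implementation, and the same routine could be run on gradient-magnitude or gradient-orientation images as the abstract compares.

```python
import numpy as np

def ncc_map(image, template):
    # Normalized cross-correlation score at every valid offset; peaks in
    # this map (above a chosen threshold) are candidate target locations.
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

# The template embedded verbatim in the image scores exactly 1.0 there.
image = np.zeros((6, 6))
image[2:4, 2:4] = [[1.0, 2.0], [3.0, 4.0]]
scores = ncc_map(image, np.array([[1.0, 2.0], [3.0, 4.0]]))
```

Because the score is normalized to [-1, 1], a single fixed threshold can be selected across images, which is the role of the threshold selection mechanism above.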
DOI: 10.1109/NCVPRIPG.2013.6776221
Cited by: 0
Single image super-resolution using compressive sensing with learned overcomplete dictionary
B. Deka, Kanchan Kumar Gorain, Navadeep Kalita, B. Das
This paper proposes a novel framework that unifies the sparsity of a signal over a properly chosen basis set with the theory of signal reconstruction via compressed sensing, in order to obtain a high-resolution image from a single down-sampled version of the same image. First, we enforce sparse overcomplete representations on the low-resolution patches of the input image. Then, using the sparse coefficients obtained above, we reconstruct a high-resolution output image. A blurring matrix is introduced to enhance the incoherence between the sparsifying dictionary and the sensing matrices, which also results in better preservation of image edges and other textures. Compared with similar techniques, the proposed method yields much better results, both visually and quantitatively.
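The "sparse coefficients" step can be illustrated with a greedy pursuit. Orthogonal Matching Pursuit below is a stand-in (the abstract does not state which sparse solver is used), and the identity dictionary in the demo is a toy; a learned overcomplete dictionary would have more columns than rows.

```python
import numpy as np

def omp(D, y, k):
    # Orthogonal Matching Pursuit: greedily pick the dictionary atom most
    # correlated with the residual, re-fit all picked atoms jointly by
    # least squares, and repeat until k atoms are used.
    support, x = [], np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    for _ in range(k):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - D @ x
    return x

# Toy dictionary (identity): the 2-sparse signal is recovered exactly.
code = omp(np.eye(4), np.array([0.0, 3.0, 0.0, 1.0]), k=2)
```

In the super-resolution setting, `y` would be a low-resolution patch, `D` the learned dictionary (composed with the blur/down-sampling operator), and the recovered `code` applied to a high-resolution dictionary to synthesize the output patch.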
DOI: 10.1109/NCVPRIPG.2013.6776176
Cited by: 6
Real-time approximate and exact CSG of implicit surfaces on the GPU
Jag Mohan Singh
We present a simple and powerful scheme for CSG of implicit surfaces on the GPU. We decompose the boolean expression of the surfaces into sum-of-products form. Our algorithm then renders each product term; the sum of products is obtained automatically by enabling the depth test. Our approximate CSG uses the adaptive marching points algorithm to find ray-surface intersections: once root isolation yields an interval containing a root, that interval indicates the presence of an intersection, and we perform root refinement only for the uncomplemented terms in the product. Exact CSG instead uses the discriminant of the ray-surface intersection to test for the presence of a root. The product expression is then evaluated by checking that all uncomplemented terms are true and all complemented terms are false; if this condition is met, the solution is the maximum of the roots among the uncomplemented terms. Our algorithm is linear in the number of terms, O(n). We achieve real-time rates for products of 4-5 terms with approximate CSG, and better than real-time rates with exact CSG. Because our primitives are implicit surfaces, fairly complex results can be achieved with few terms.
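The acceptance test on a candidate hit can be sketched directly from the description above, here in point-membership form. The sphere primitives and names (`inside_sphere`, `csg_accept`) are illustrative stand-ins for the paper's implicit surfaces, not its GPU shader code.

```python
import math

def inside_sphere(center, r):
    # Implicit-surface membership test for a sphere primitive.
    return lambda p: math.dist(p, center) < r

def csg_accept(terms, p):
    # A candidate ray-surface hit at p survives a product term only if
    # every uncomplemented surface contains p and no complemented one
    # does, i.e. each term's truth value differs from its complement flag.
    return all(comp != f(p) for f, comp in terms)

# Difference A \ B expressed as the product term A * ~B.
terms = [(inside_sphere((0.0, 0.0, 0.0), 2.0), False),   # A, uncomplemented
         (inside_sphere((1.0, 0.0, 0.0), 1.0), True)]    # B, complemented
```

Rendering each such product term and letting the depth test keep the nearest surviving hit realizes the sum of products, as described above.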
DOI: 10.1109/NCVPRIPG.2013.6776199
Cited by: 0
Geometric invariant Target classification using 2D Mellin cepstrum with modified grid formation
B. Sathyabama, S. Roomi, R. EvangelineJenitaKamalam
The classification of targets in synthetic aperture radar (SAR) images is greatly affected by scale, rotation, and translation. This paper proposes a geometric-invariant algorithm to classify military targets based on cepstral features extracted over a modified grid selection on the spectral components of the Fourier-Mellin transform. The proposed non-uniform grid is formed by a window with a 2×2-pixel cell at the center, surrounded by 4×4-pixel cells, and so on, with overlapping to extract more representative features. Each cell is further divided into upper and lower triangular bins. The energy of each bin forms the down-sampled M×M data, taking the larger value of the two triangles so that the information is enhanced. Experiments are carried out on a total of 700 SAR images collected from the MSTAR database with different combinations of rotation, scale, and translation. The proposed method has been tested against existing methods such as region covariance, co-differencing, and the 2D Mellin cepstrum with non-overlapping grids. The 2D Mellin cepstrum with the proposed grid formation achieves 92% detection accuracy, compared with 86% for the region covariance method and 89% for the non-uniform grid formation method.
DOI: 10.1109/NCVPRIPG.2013.6776260
Cited by: 0
Recognition and identification of target images using feature based retrieval in UAV missions
Shweta Singh, D. V. Rao
With the introduction of unmanned air vehicles as force multipliers in defense services worldwide, automatic recognition and identification of ground-based targets has become an important area of research in the defense community. Because of the inherent instabilities of smaller unmanned platforms, image blur and distortion must be addressed for successful target recognition. In this paper, an image enhancement technique is proposed to improve the quality of images acquired by an unmanned system: an image deblurring technique based on a blind deconvolution algorithm that adaptively enhances edges and effectively removes blur. A content-based image retrieval technique, based on feature extraction to generate an image description and a compact feature vector representing the visual information (color, texture, and shape), is used with a minimum-distance algorithm to retrieve plausible target images from a library of images stored in a target folder. This methodology was implemented for planning and gaming UAV/UCAV missions in the Air Warfare Simulation System.
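The minimum-distance retrieval step might look like the following sketch: Euclidean distance over pre-extracted color/texture/shape feature vectors. The function and the three-dimensional toy descriptors are illustrative assumptions, not the paper's feature set.

```python
import numpy as np

def retrieve(query, library, k=3):
    # Rank library entries by Euclidean distance between feature vectors
    # and return the k closest names (a minimum-distance matcher).
    names = list(library)
    dists = np.array([np.linalg.norm(query - library[n]) for n in names])
    return [names[i] for i in np.argsort(dists)[:k]]

# Toy library of pre-extracted descriptors keyed by target name.
library = {"tank":   np.array([0.9, 0.1, 0.4]),
           "truck":  np.array([0.5, 0.6, 0.3]),
           "bunker": np.array([0.1, 0.9, 0.8])}
ranked = retrieve(np.array([0.85, 0.15, 0.45]), library)
```

In the described pipeline, the query vector would come from the deblurred frame's extracted features and the library from the images stored in the target folder.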
DOI: 10.1109/NCVPRIPG.2013.6776165
Cited by: 5