
Latest publications in Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing

Robust segmentation of corneal fibers from noisy images
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010051
Jia Chen, J. Jester, M. Gopi
Corneal collagen structure, which plays an important role in determining visual acuity, has drawn considerable research attention to its geometric properties. Advances in nonlinear optical (NLO) imaging provide a potential way to capture the fiber-level structure of the cornea; however, the artifacts introduced by the NLO imaging process make segmentation of such images a bottleneck for further analysis. In particular, existing methods fail to preserve the branching points that are important for mechanical analysis. In this paper, we propose a hybrid image segmentation method that integrates seeded region growing and iterative voting. Results show that our algorithm outperforms state-of-the-art techniques in segmenting fibers from the background while preserving branching points. Finally, we show that, based on the segmentation result, branching points and fiber widths can be determined more accurately than with other methods, which is critical for mechanical analysis of corneal structure.
{"title":"Robust segmentation of corneal fibers from noisy images","authors":"Jia Chen, J. Jester, M. Gopi","doi":"10.1145/3009977.3010051","DOIUrl":"https://doi.org/10.1145/3009977.3010051","url":null,"abstract":"Corneal collagen structure, which plays an important role in determining visual acuity, has drawn a lot of research attention to exploring its geometric properties. Advancement of nonlinear optical (NLO) imaging provides a potential way for capturing fiber-level structure of cornea, however, the artifacts introduced by the NLO imaging process make image segmentation on such images a bottleneck for further analysis. Especially, the existing methods fail to preserve the branching points which are important for mechanical analysis. In this paper, we propose a hybrid image segmentation method, which integrates seeded region growing and iterative voting. Results show that our algorithm outperforms state-of-the-art techniques in segmenting fibers from background while preserving branching points. Finally, we show that, based on the segmentation result, branching points and the width of fibers can be determined more accurately than the other methods, which is critical for mechanical analysis on corneal structure.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"128 1","pages":"58:1-58:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82784025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
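The abstract above names seeded region growing as one half of the hybrid method but gives no code. As a rough, pure-Python illustration of that component only (the function name, tolerance rule, and toy image below are invented for illustration, not taken from the paper):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected pixels whose
    intensity differs from the seed's intensity by at most `tol`."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region = {(sy, sx)}
    frontier = deque([(sy, sx)])
    while frontier:                      # breadth-first flood of the region
        y, x = frontier.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - base) <= tol):
                region.add((ny, nx))
                frontier.append((ny, nx))
    return region

# A bright 2x2 "fiber" patch on a dark background:
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(sorted(region_grow(img, (1, 1), tol=1)))
# -> [(1, 1), (1, 2), (2, 1), (2, 2)]
```

The paper's second ingredient, iterative voting for preserving branching points, is not sketched here.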
Robust pedestrian tracking using improved tracking-learning-detection algorithm
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3009999
Ritika Verma, I. Sreedevi
Manual analysis of pedestrians for surveillance of large crowds in real-time applications is not practical. Tracking-Learning-Detection (TLD), proposed by Kalal, Mikolajczyk and Matas [1], is one of the most prominent automatic object tracking systems. TLD can track a single object and can handle occlusion and appearance change, but it suffers from limitations. In this paper, tracking of multiple objects and estimation of their trajectories using an improved TLD is proposed. Feature tracking is suggested in place of grid-based tracking to overcome the failure of tracking during out-of-plane rotation; this also optimizes the algorithm. The proposed algorithm further achieves auto-initialization by detecting pedestrians in the first frame, which makes it suitable for real-time pedestrian tracking.
{"title":"Robust pedestrian tracking using improved tracking-learning-detection algorithm","authors":"Ritika Verma, I. Sreedevi","doi":"10.1145/3009977.3009999","DOIUrl":"https://doi.org/10.1145/3009977.3009999","url":null,"abstract":"Manual analysis of pedestrians for surveillance of large crowds in real time applications is not practical. Tracking-Learning-Detection suggested by Kalal, Mikolajczyk and Matas [1] is one of the most prominent automatic object tracking system. TLD can track single object and can handle occlusion and appearance change but it suffers from limitations. In this paper, tracking of multiple objects and estimation of their trajectory is suggested using improved TLD. Feature tracking is suggested in place of grid based tracking to solve the limitation of tracking during out of plane rotation. This also leads to optimization of algorithm. Proposed algorithm also achieves auto-initialization with detection of pedestrians in the first frame which makes it suitable for real time pedestrian tracking.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"08 1","pages":"35:1-35:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85950954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
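The paper replaces grid-based tracking with feature tracking, but the abstract does not specify the tracker. As a generic, pure-Python stand-in (not the paper's method, and far simpler than the pyramidal optical-flow trackers TLD variants typically use), here is an exhaustive-search patch tracker that estimates a feature's displacement between two frames by minimising the sum of absolute differences (all names and the toy frames are invented):

```python
def track_patch(prev, curr, top, left, size, radius):
    """Locate the `size`x`size` patch of `prev` anchored at (top, left)
    inside `curr` by exhaustive search within +/-`radius` pixels,
    minimising the sum of absolute differences (SAD)."""
    template = [row[left:left + size] for row in prev[top:top + size]]
    best, best_pos = float('inf'), (top, left)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ty, tx = top + dy, left + dx
            if ty < 0 or tx < 0 or ty + size > len(curr) or tx + size > len(curr[0]):
                continue                    # candidate window leaves the frame
            sad = sum(abs(curr[ty + i][tx + j] - template[i][j])
                      for i in range(size) for j in range(size))
            if sad < best:
                best, best_pos = sad, (ty, tx)
    return best_pos

prev = [[0] * 5 for _ in range(5)]
prev[1][1] = prev[1][2] = prev[2][1] = prev[2][2] = 8   # bright 2x2 feature
curr = [[0] * 5 for _ in range(5)]
curr[2][3] = curr[2][4] = curr[3][3] = curr[3][4] = 8   # shifted by (+1, +2)
print(track_patch(prev, curr, 1, 1, 2, 2))   # -> (2, 3)
```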
A stratified registration framework for DSA artifact reduction using random walker
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010066
Manivannan Sundarapandian, K. Ramakrishnan
In Digital Subtraction Angiography (DSA), non-rigid registration of the mask and contrast images to reduce motion artifacts is a challenging problem. In this paper, we propose a novel stratified registration framework for DSA artifact reduction. We use quad-trees to generate a non-uniform grid of control points and obtain sub-pixel displacement offsets using Random Walker (RW). We also propose a sequencing logic for the control points and an incremental LU decomposition approach that enables reuse of computations in the RW step. We have tested our approach on clinical data sets and found that our registration framework performs comparably to graph-cuts (at the same partition level) in regions where 95% artifact reduction was achieved. The optimization step is 4.2 times faster than graph-cuts.
{"title":"A stratified registration framework for DSA artifact reduction using random walker","authors":"Manivannan Sundarapandian, K. Ramakrishnan","doi":"10.1145/3009977.3010066","DOIUrl":"https://doi.org/10.1145/3009977.3010066","url":null,"abstract":"In Digital Subtraction Angiography (DSA), non-rigid registration of the mask and contrast images to reduce the motion artifacts is a challenging problem. In this paper, we have proposed a novel stratified registration framework for DSA artifact reduction. We use quad-trees to generate the non-uniform grid of control points and obtain the sub-pixel displacement offsets using Random Walker (RW). We have also proposed a sequencing logic for the control points and an incremental LU decomposition approach that enables reuse of the computations in the RW step. We have tested our approach using clinical data sets, and found that our registration framework has performed comparable to the graph-cuts (at the same partition level), in regions wherein 95% artifact reduction was achieved. The optimization step achieves a speed improvement of 4.2 times with respect to graph-cuts.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"8 1","pages":"68:1-68:7"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85034710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
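The quad-tree step above adaptively places control points: uniform regions get one large cell, busy regions get many small ones. A minimal pure-Python sketch of that subdivision (the splitting criterion here, intensity range versus a threshold, is an assumption for illustration; the paper does not state its criterion):

```python
def quadtree_cells(image, top, left, size, thresh, min_size):
    """Recursively split a square block until its intensity range is
    at most `thresh` or the block reaches `min_size`; return the cells
    as (top, left, size) triples."""
    block = [row[left:left + size] for row in image[top:top + size]]
    vals = [v for row in block for v in row]
    if size <= min_size or max(vals) - min(vals) <= thresh:
        return [(top, left, size)]
    half = size // 2
    cells = []
    for dy in (0, half):
        for dx in (0, half):
            cells += quadtree_cells(image, top + dy, left + dx, half, thresh, min_size)
    return cells

# Uniform everywhere except one bright pixel in the top-left quadrant:
img = [[0] * 4 for _ in range(4)]
img[0][0] = 10
cells = quadtree_cells(img, 0, 0, 4, thresh=2, min_size=1)
print(cells)
```

The busy top-left quadrant splits down to four 1x1 cells while the three uniform quadrants each stay as one 2x2 cell, giving seven cells in total: a non-uniform grid of control points.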
Iris recognition using partial sum of second order Taylor series expansion
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010065
B. H. Shekar, S. S. Bhat
The iris is presently one of the most sought-after traits in biometric research, and extracting well-suited features from it has been a favourite topic of researchers. This paper proposes a novel iris feature extraction technique based on the partial sum of a second-order Taylor series expansion (TSE). The finite sum of the TSE, computed on arbitrarily small neighbourhoods at multiple scales, can approximate the underlying function extremely well and hence provides a powerful mechanism to extract the complex, localised features of iris structure. To compute the higher-order derivatives in the TSE, we propose kernel structures that extend the Sobel operators. Extensive multi-scale experiments are conducted on the IITD, MMU v-2 and CASIA v-4 Distance databases, and comparative analysis with existing algorithms substantiates the performance of the proposed method.
{"title":"Iris recognition using partial sum of second order Taylor series expansion","authors":"B. H. Shekar, S. S. Bhat","doi":"10.1145/3009977.3010065","DOIUrl":"https://doi.org/10.1145/3009977.3010065","url":null,"abstract":"Iris is presently one among the most sought after traits in biometric research. Extracting well-suited features from iris has been a favourite topic of the researchers. This paper proposes a novel iris feature extraction technique based on partial sum of second order Taylor series expansion (TSE). The finite sum of TSE computed on an arbitrary small neighbourhood on multiple scales can approximate the function extremely well and hence provides a powerful mechanism to extract the complex natured localised features of iris structure. To compute the higher order derivatives of TSE, we propose kernel structures by extending the Sobel operators. Extensive experiments are conducted with multiple scales on IITD, MMU v-2 and CASIA v-4 distance databases and comparative analysis is performed with the existing algorithms to substantiate the performance of the proposed method.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"11 1","pages":"81:1-81:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82900181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
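To see why the second-order partial sum approximates a function well on a small neighbourhood, here is a sketch of the quantity involved. The paper estimates the derivatives on images with Sobel-extended kernels; this illustration instead uses central finite differences on a continuous function (the function `g` and step `h` are invented for the example):

```python
def taylor2(f, x, y, dx, dy, h=1e-3):
    """Second-order Taylor partial sum of f around (x, y), evaluated at
    the offset (dx, dy), with all derivatives estimated by central
    finite differences of step h."""
    fx  = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy  = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return (f(x, y) + fx * dx + fy * dy
            + 0.5 * (fxx * dx ** 2 + 2 * fxy * dx * dy + fyy * dy ** 2))

# A quadratic function is reproduced (near-)exactly by the 2nd-order sum:
g = lambda x, y: x * x + x * y + y * y
print(taylor2(g, 1.0, 2.0, 0.1, -0.2), g(1.1, 1.8))
```

For a quadratic the partial sum is exact up to floating-point error; for a general image patch it is only a local approximation, which is why the paper computes it on small neighbourhoods at multiple scales.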
Fast frontier detection in indoor environment for monocular SLAM
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010063
Sarthak Upadhyay, K. Krishna, S. Kumar
Frontier detection is a critical component of autonomous exploration, wherein the robot decides the next best location to move to in order to continue its mapping process. Existing frontier detection methods require dense reconstruction, which is difficult to attain in a poorly textured indoor environment using a monocular camera. In this effort, we present an alternate method of detecting frontiers during the course of robot motion that circumvents the requirement of dense mapping. Based on the observation that frontiers typically occur around areas with a sudden change in texture (zero-crossings), we propose a novel linear-chain Conditional Random Field (CRF) formulation that detects the presence or absence of frontier regions around such areas. We use cues such as the spread of 3D points and scene change around these areas as observations for the CRF. We demonstrate that this method yields more relevant frontiers than other monocular-camera-based methods in the literature. Finally, we present results in an indoor environment, wherein frontiers are reliably detected around walls leading to new corridors, doors leading to new rooms or corridors, and tables and other objects that open up to a new space in rooms.
{"title":"Fast frontier detection in indoor environment for monocular SLAM","authors":"Sarthak Upadhyay, K. Krishna, S. Kumar","doi":"10.1145/3009977.3010063","DOIUrl":"https://doi.org/10.1145/3009977.3010063","url":null,"abstract":"Frontier detection is a critical component in autonomous exploration, wherein the robot decides the next best location to move in order to continue its mapping process. The existing frontier detection methods require dense reconstruction which is difficult to attain in a poorly textured indoor environment using a monocular camera. In this effort, we present an alternate method of detecting frontiers during the course of robot motion that circumvents the requirement of dense mapping. Based on the observation that frontiers typically occur around areas with sudden change in texture (zero-crossings), we propose a novel linear chain Conditional Random Field(CRF) formulation that is able to detect the presence or absence of frontier regions around such areas. We use cues like spread of 3D points and scene change around these areas as an observation to CRF. We demonstrate that this method gives us more relevant frontiers compared to other monocular camera based methods in the literature. Finally, we present results in an indoor environment, wherein frontiers are reliably detected around walls leading to new corridors, doors leading to new rooms or corridors and tables and other objects that open up to a new space in rooms.","PeriodicalId":93806,"journal":{"name":"Proceedings. 
Indian Conference on Computer Vision, Graphics & Image Processing","volume":"75 1","pages":"39:1-39:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83794189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
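A linear-chain CRF as above labels a sequence of candidate areas (frontier / not frontier) using per-node observation scores plus a transition term that encourages coherent labelings. The paper's features and learned weights are not given; the sketch below shows only the standard MAP-decoding step (Viterbi) for such a chain, with made-up scores where state 0 = "no frontier" and 1 = "frontier":

```python
def viterbi(unary, pairwise):
    """Most probable label sequence for a linear chain given per-node
    scores `unary[t][s]` and transition scores `pairwise[s][s']`."""
    n_states = len(unary[0])
    score = list(unary[0])
    back = []
    for t in range(1, len(unary)):
        prev = score
        score, ptr = [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda p: prev[p] + pairwise[p][s])
            score.append(prev[best] + pairwise[best][s] + unary[t][s])
            ptr.append(best)
        back.append(ptr)
    path = [max(range(n_states), key=lambda s: score[s])]
    for ptr in reversed(back):            # follow back-pointers to the start
        path.append(ptr[path[-1]])
    return path[::-1]

# Four candidate areas; the smoothness term favours staying in one state:
unary = [[2, 0], [1.5, 1], [0, 2], [0, 2]]
pairwise = [[1, 0], [0, 1]]
print(viterbi(unary, pairwise))   # -> [0, 0, 1, 1]
```

The pairwise term suppresses isolated label flips, which is the usual motivation for a chain model over independent per-area classification.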
An image analysis approach for transcription of music played on keyboard-like instruments
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010007
Souvik Deb, Ajit V. Rajwade
Music transcription refers to the process of analyzing a piece of music to generate a sequence of its constituent notes and their durations. Transcription from audio signals is fraught with problems due to auditory interference such as ambient noise, multiple instruments playing simultaneously, accompanying vocals, or polyphonic sounds. For several instruments, additional information for transcription can be derived from a video sequence of the instrument as it is being played. This paper proposes a method that exploits this visual information for keyboard-like instruments to generate a transcript automatically by analyzing the video frames. We present encouraging results under varying lighting conditions on different song sequences played on a keyboard.
{"title":"An image analysis approach for transcription of music played on keyboard-like instruments","authors":"Souvik Deb, Ajit V. Rajwade","doi":"10.1145/3009977.3010007","DOIUrl":"https://doi.org/10.1145/3009977.3010007","url":null,"abstract":"Music transcription refers to the process of analyzing a piece of music to generate a sequence of constituent notes and their duration. Transcription of music from audio signals is fraught with problems due to auditory interference such as ambient noise, multiple instruments playing simultaneously, accompanying vocals or polyphonic sounds. For several instruments, there exists added information for music transcription which can be derived from a video sequence of the instrument as it is being played. This paper proposes a method to utilize this visual information for the case of keyboard-like instruments to generate a transcript automatically, by analyzing the video frames. We present encouraging results under varying lighting conditions on different song sequences played out on a keyboard.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"41 1","pages":"5:1-5:6"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80556385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
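One simple way to exploit video frames for a keyboard, consistent with but not taken from the paper, is to flag keys whose image region changes appreciably between consecutive frames (a pressed key is occluded or shadowed by a finger). A minimal sketch; the key names, regions, and threshold are all invented:

```python
def pressed_keys(prev, curr, key_regions, thresh):
    """Flag keys whose region's mean absolute intensity change between
    consecutive frames exceeds `thresh`."""
    pressed = []
    for key, (top, left, h, w) in key_regions.items():
        diff = sum(abs(curr[top + i][left + j] - prev[top + i][left + j])
                   for i in range(h) for j in range(w)) / (h * w)
        if diff > thresh:
            pressed.append(key)
    return pressed

prev = [[200] * 6 for _ in range(2)]          # all keys bright (unpressed)
curr = [row[:] for row in prev]
curr[0][2] = curr[0][3] = curr[1][2] = curr[1][3] = 40   # one key darkened
regions = {'B3': (0, 0, 2, 2), 'C4': (0, 2, 2, 2), 'D4': (0, 4, 2, 2)}
print(pressed_keys(prev, curr, regions, thresh=50))   # -> ['C4']
```

Mapping detected key events across frames to notes and durations would complete the transcript.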
Mosaicing deep underwater imagery
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010029
Kuldeep Purohit, Subeesh Vasu, A. Rajagopalan, V. Jyothi, Ramesh Raju
Numerous sources of distortion render mosaicing of underwater (UW) images an immensely challenging task. Methods that can process conventional (terrestrial/aerial) photographs fail to deliver the desired results on UW images; taking the sources of underwater degradation into account is central to ensuring quality performance. The work described in this paper specifically deals with the problem of mosaicing deep UW images captured by Remotely Operated Vehicles (ROVs). These images are mainly degraded by haze, color changes, and non-uniform illumination. We propose a framework that restores these images in accordance with a suitably derived degradation model. Furthermore, our scheme harnesses the scene geometry information present in each image to construct a mosaic that is free from artifacts such as local blurring, ghosting, double contouring and visible seams. Several experiments on real underwater image sequences demonstrate the performance of our mosaicing pipeline, along with comparisons.
{"title":"Mosaicing deep underwater imagery","authors":"Kuldeep Purohit, Subeesh Vasu, A. Rajagopalan, V. Jyothi, Ramesh Raju","doi":"10.1145/3009977.3010029","DOIUrl":"https://doi.org/10.1145/3009977.3010029","url":null,"abstract":"Numerous sources of distortions render mosaicing of underwater (UW) images an immensely challenging effort. Methods that can process conventional photographs (terrestrial/aerial) fail to deliver the desired results on UW images. Taking the sources of underwater degradations into account is central to ensuring quality performance. The work described in this paper specifically deals with the problem of mosaicing deep UW images captured by Remotely Operated Vehicles (ROVs). These images are mainly degraded by haze, color changes, and non-uniform illumination. We propose a framework that restores these images in accordance with a suitably derived degradation model. Furthermore, our scheme harnesses the scene geometry information present in each image to aid in constructing a mosaic that is free from artifacts such as local blurring, ghosting, double contouring and visible seams. Several experiments on real underwater images sequences have been carried out to demonstrate the performance of our mosaicing pipeline along with comparisons.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"33 1","pages":"74:1-74:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83641796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
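The paper's restoration model is not reproducible from the abstract, but the "visible seams" artifact it mentions has a standard generic remedy: feathered (linear-ramp) blending across the overlap between adjacent images. A minimal 1-row sketch under the assumption of two equal-width, horizontally aligned strips (function name and toy data invented):

```python
def feather_blend(left_img, right_img, overlap):
    """Blend two horizontally overlapping strips of equal width with a
    linear ramp so no hard seam remains in the shared band."""
    w = len(left_img[0])
    out = []
    for lrow, rrow in zip(left_img, right_img):
        row = lrow[:w - overlap]                 # left-only part
        for k in range(overlap):                 # blended band
            a = (k + 1) / (overlap + 1)          # weight ramps left -> right
            row.append((1 - a) * lrow[w - overlap + k] + a * rrow[k])
        row += rrow[overlap:]                    # right-only part
        out.append(row)
    return out

left  = [[10.0, 10.0, 10.0, 10.0]]
right = [[30.0, 30.0, 30.0, 30.0]]
mosaic = feather_blend(left, right, overlap=2)
print(mosaic[0])
```

Instead of a 10-to-30 intensity jump at the seam, the overlap band ramps smoothly between the two exposures.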
Automatic detection of Malaria infected RBCs from a focus stack of bright field microscope slide images
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010024
G. Gopakumar, M. Swetha, G. S. Siva, G. R. S. Subrahmanyam
Malaria is a deadly infectious disease affecting red blood cells in humans, caused by the protozoan parasite Plasmodium. In 2015, an estimated 438,000 of the 214 million malaria cases reported worldwide were fatal. Thus, building an accurate automatic system for detecting malarial cases is beneficial and has huge medical value. This paper addresses the detection of Plasmodium falciparum-infected RBCs from Leishman-stained microscope slide images. Unlike the traditional approach of examining a single focused image to detect the parasite, we make use of a focus stack of images collected using a bright-field microscope. Rather than extracting specific hand-engineered features in the conventional way, we opt for a Convolutional Neural Network that can operate directly on images. We work with image patches at the suspected parasite locations, thereby avoiding the need for cell segmentation. We report and compare the detection rates obtained when only a single focused image is used and when operating on the focus stack of images. Altogether, the proposed novel approach results in highly accurate malaria detection.
{"title":"Automatic detection of Malaria infected RBCs from a focus stack of bright field microscope slide images","authors":"G. Gopakumar, M. Swetha, G. S. Siva, G. R. S. Subrahmanyam","doi":"10.1145/3009977.3010024","DOIUrl":"https://doi.org/10.1145/3009977.3010024","url":null,"abstract":"Malaria is a deadly infectious disease affecting red blood cells in humans due to the protozoan of type Plasmodium. In 2015, there is an estimated death toll of 438, 000 patients out of the total 214 million malaria cases reported world-wide. Thus, building an accurate automatic system for detecting the malarial cases is beneficial and has huge medical value. This paper addresses the detection of Plasmodium Falciparum infected RBCs from Leishman's stained microscope slide images. Unlike the traditional way of examining a single focused image to detect the parasite, we make use of a focus stack of images collected using a bright field microscope. Rather than the conventional way of extracting the specific features we opt for using Convolutional Neural Network that can directly operate on images bypassing the need for hand-engineered features. We work with image patches at the suspected parasite location there by avoiding the need for cell segmentation. We experiment, report and compare the detection rate received when only a single focused image is used and when operated on the focus stack of images. Altogether the proposed novel approach results in highly accurate malaria detection.","PeriodicalId":93806,"journal":{"name":"Proceedings. 
Indian Conference on Computer Vision, Graphics & Image Processing","volume":"94 1","pages":"16:1-16:7"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74408271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
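The paper's CNN architecture is not given in the abstract, so no attempt is made to reproduce it here; the sketch below only illustrates the basic building block such a network applies to each image patch, a 2-D convolution (valid-mode correlation), in pure Python on an invented toy patch:

```python
def conv2d(image, kernel):
    """Valid-mode 2-D correlation: the elementary operation a CNN
    layer applies to an image patch (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[sum(image[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(ow)] for y in range(oh)]

# A ring-shaped kernel responds to ring-like structures in a patch:
patch = [[0, 1, 1, 0],
         [1, 0, 0, 1],
         [1, 0, 0, 1],
         [0, 1, 1, 0]]
ring = [[0, 1, 0],
        [1, 0, 1],
        [0, 1, 0]]
print(conv2d(patch, ring))   # -> [[2, 2], [2, 2]]
```

In a trained CNN such kernels are learned rather than hand-designed, which is exactly the point the abstract makes about avoiding hand-engineered features.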
User guided generation of corroded objects
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010031
N. Jain, P. Kalra, R. Ranjan, Subodh Kumar
Rendering of corrosion often requires painstaking modeling and texturing. On the other hand, there exist techniques for stochastic modeling of corrosion that can automatically perform simulation and rendering under the control of user-specified parameters. Unfortunately, these parameters are non-intuitive and have a global impact, and it is hard to determine the values that produce a desired look. In real life, corrosion is influenced both by internal, object-specific geometric factors, such as sharp corners and curvature, and by external interventions such as scratches and blemishes. Further, a graphics designer may want to corrode areas selectively to obtain a particular scene. We present a technique for user-guided spread of corrosion whose framework encapsulates both structural and aesthetic factors. Given the material properties of an object and its surrounding environmental conditions, we employ a physico-chemically based stochastic model to deduce the decay of different points on the object. Our system equips the user with a platform where imperfections can be introduced by either manual or systematic interference on a rendering of the three-dimensional object. We demonstrate several user-guided simulations encompassing varied influences, including material, object characteristics and environmental conditions. Our results are visually validated to understand the impact of imperfections over elapsed time.
{"title":"User guided generation of corroded objects","authors":"N. Jain, P. Kalra, R. Ranjan, Subodh Kumar","doi":"10.1145/3009977.3010031","DOIUrl":"https://doi.org/10.1145/3009977.3010031","url":null,"abstract":"Rendering of corrosion often requires pain-staking modeling and texturing. On the other hand, there exist techniques for stochastic modeling of corrosion, which can automatically perform simulation and rendering under control of some user-specified parameters. Unfortunately, these parameters are non-intuitive and have a global impact. It is hard to determine the values of these parameters to obtain a desired look. For example, in real life corrosion gets influenced by both internal object-specific geometric factors, like sharp corners and curvatures, and external interventions like scratches, blemishes etc. Further, a graphics designer may want to selectively corrode areas to obtain a particular scene. We present a technique for user guided spread of corrosion. Our framework encapsulates both structural and aesthetic factors. Given the material properties and the surrounding environmental conditions of an object, we employ a physio-chemically based stochastic model to deduce the decay of different points on that object. Our system equips the user with a platform where the imperfections can be provided by either manual or systematic interference on a rendering of the three dimensional object. We demonstrate several user guided characteristic simulations encompassing varied influences including material, object characteristics and environment conditions. Our results are visually validated to understand the impact of imperfections with elapsed time.","PeriodicalId":93806,"journal":{"name":"Proceedings. 
Indian Conference on Computer Vision, Graphics & Image Processing","volume":"5 1","pages":"89:1-89:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82299752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
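To make the idea of a user-seeded stochastic spread concrete, here is a toy cellular model, not the paper's physico-chemical one: the user places a scratch on a grid, and corrosion spreads to neighbouring cells with a fixed probability each step (all names and the spread rule are invented for illustration):

```python
import random

def corrode(grid, steps, p_spread, rng):
    """Each step, every corroded cell (1) independently corrodes each
    4-connected intact neighbour (0) with probability `p_spread`."""
    h, w = len(grid), len(grid[0])
    for _ in range(steps):
        new = [(y + dy, x + dx)
               for y in range(h) for x in range(w) if grid[y][x]
               for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if 0 <= y + dy < h and 0 <= x + dx < w
               and not grid[y + dy][x + dx] and rng.random() < p_spread]
        for y, x in new:          # apply all of this step's spread at once
            grid[y][x] = 1
    return grid

rng = random.Random(0)            # seeded for repeatable simulation
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1                    # a user-placed scratch seeds the decay
corrode(grid, steps=3, p_spread=0.5, rng=rng)
print(sum(v for row in grid for v in row))
```

In the paper's framework, the spread probability would instead be driven by material, curvature, and environmental factors, and by the user's manual interventions.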
Analyzing object categories via novel category ranking measures defined on visual feature embeddings
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010037
Ravi Kiran Sarvadevabhatla, Raviteja Meesala, Manjunath Hegde, R. Venkatesh Babu
Visualizing 2-D/3-D embeddings of image features can help one gain an intuitive understanding of the image-category landscape. However, popular methods of visualizing such embeddings (e.g. color-coding by category) are impractical when the number of categories is large. To address this and other shortcomings, we propose novel quantitative measures defined on image feature embeddings. Each measure produces a ranked ordering of the categories and provides an intuitive vantage point from which to view the entire set of categories. As an experimental testbed, we use deep features obtained from category-epitomes, a recently introduced minimalist visual representation, across 160 object categories. We embed the features in a visualization-friendly yet similarity-preserving 2-D manifold and analyze the inter-/intra-category distributions of these embeddings using the proposed measures. Our analysis demonstrates that the category-ordering methods enable new insights into the domain of large-category object representations. Moreover, our ordering-measure approach is general in nature and can be applied to any feature-based representation of categories.
{"title":"Analyzing object categories via novel category ranking measures defined on visual feature embeddings","authors":"Ravi Kiran Sarvadevabhatla, Raviteja Meesala, Manjunath Hegde, R. Venkatesh Babu","doi":"10.1145/3009977.3010037","DOIUrl":"https://doi.org/10.1145/3009977.3010037","url":null,"abstract":"Visualizing 2-D/3-D embeddings of image features can help gain an intuitive understanding of the image category landscape. However, popular visualization methods of visualizing such embeddings (e.g. color-coding by category) are impractical when the number of categories is large. To address this and other shortcomings, we propose novel quantitative measures defined on image feature embeddings. Each measure produces a ranked ordering of the categories and provides an intuitive vantage point from which to view the entire set of categories. As an experimental testbed, we use deep features obtained from category-epitomes, a recently introduced minimalist visual representation, across 160 object categories. We embed the features in a visualization-friendly yet similarity-preserving 2-D manifold and analyze the inter/intra-category distributions of these embeddings using the proposed measures. Our analysis demonstrates that the category ordering methods enable new insights for the domain of large-category object representations. Moreover, our ordering measure approach is general in nature and can be applied to any feature-based representation of categories.","PeriodicalId":93806,"journal":{"name":"Proceedings. 
Indian Conference on Computer Vision, Graphics & Image Processing","volume":"53 1","pages":"79:1-79:6"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83263374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
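The pipeline described in the abstract — embed deep features in a similarity-preserving 2-D manifold, then compute per-category measures over the inter/intra-category distributions to produce a ranked ordering — can be sketched as follows. This is a minimal illustrative sketch only: it assumes t-SNE as the 2-D embedding and uses a simple intra/inter centroid-distance ratio as a stand-in ranking measure, since the paper's actual measures are not reproduced here; the feature data is synthetic.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Synthetic stand-in for deep features: 3 categories, 30 samples each, 64-D.
feats = np.vstack([rng.normal(loc=i * 3.0, scale=1.0, size=(30, 64))
                   for i in range(3)])
labels = np.repeat(np.arange(3), 30)

# Similarity-preserving 2-D embedding (t-SNE is one common choice).
emb = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(feats)

def rank_categories(emb, labels):
    """Rank categories by intra-category spread relative to inter-category
    distance in the embedding. Lower score = tighter, better-separated
    category. An illustrative stand-in for the paper's ranking measures."""
    scores = {}
    for c in np.unique(labels):
        pts = emb[labels == c]
        rest = emb[labels != c]
        centroid = pts.mean(axis=0)
        intra = np.linalg.norm(pts - centroid, axis=1).mean()
        inter = np.linalg.norm(rest - centroid, axis=1).mean()
        scores[int(c)] = intra / inter
    # Ranked ordering of category labels, best-separated first.
    return sorted(scores, key=scores.get)

print(rank_categories(emb, labels))
```

Any feature-based category representation could be plugged in for `feats`, consistent with the abstract's claim that the ordering approach is general.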