
Latest publications from Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing

Crowd motion analysis for group detection
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010071
Neha Bhargava, S. Chaudhuri
Understanding crowd dynamics is an interesting problem in computer vision owing to its wide range of applications. We propose a dynamical system to model the collective motion of a crowd. The model learns the spatio-temporal interaction pattern of the crowd from track data captured over a period of time. It is trained under a least-squares formulation with spatial and temporal constraints: the spatial constraint restricts the model to the neighbors of a given agent, and the temporal constraint enforces temporal smoothness. We also propose an effective group detection algorithm that utilizes the eigenvectors of the model's interaction matrix, casting group detection as a spectral clustering problem. Extensive experiments show that our group detection algorithm outperforms state-of-the-art methods.
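The eigenvector-based grouping step lends itself to a short illustration. The sketch below is not the authors' exact formulation: it fits a ridge-regularized least-squares interaction matrix from toy track data restricted to spatial neighbours, then clusters agents by k-means on the leading eigenvectors of the symmetrised matrix as a stand-in for the spectral clustering step. The function names, neighbourhood radius and toy data are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def learn_interaction_matrix(tracks, radius=2.0, lam=1e-3):
    """Least-squares fit of x_{t+1} ~ A x_t, restricted to spatial neighbours.

    tracks: array of shape (T, N, 2) -- N agents tracked over T frames.
    Returns an N x N interaction matrix A (one weight per agent pair).
    """
    T, N, _ = tracks.shape
    A = np.zeros((N, N))
    for i in range(N):
        # spatial constraint: agent i only interacts with nearby agents
        dists = cdist(tracks[0, i:i + 1], tracks[0]).ravel()
        nbrs = np.where(dists < radius)[0]
        # stack neighbour positions over time as regressors
        X = tracks[:-1, nbrs, :].reshape(T - 1, -1)   # (T-1, 2*|nbrs|)
        y = tracks[1:, i, :].reshape(T - 1, -1)       # (T-1, 2)
        # ridge-regularised least squares (crude proxy for temporal smoothness)
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        # collapse the 2-D coupling weights to one scalar per neighbour
        A[i, nbrs] = np.linalg.norm(W.reshape(len(nbrs), 2, 2), axis=(1, 2))
    return A

def detect_groups(A, n_groups=3):
    """Spectral-clustering-style grouping on leading eigenvectors of the symmetrised A."""
    S = 0.5 * (A + A.T)
    vals, vecs = np.linalg.eigh(S)
    emb = vecs[:, -n_groups:]                 # top eigenvectors as an embedding
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(emb)

# toy usage: 12 agents, 30 frames of random-walk tracks
rng = np.random.default_rng(0)
tracks = np.cumsum(rng.normal(scale=0.05, size=(30, 12, 2)), axis=0) \
         + rng.uniform(0, 5, size=(1, 12, 2))
print(detect_groups(learn_interaction_matrix(tracks)))
```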
Citations: 0
TraCount: a deep convolutional neural network for highly overlapping vehicle counting
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010060
Shiv Surya, R. Venkatesh Babu
We propose a novel deep framework, TraCount, for counting highly overlapping vehicles in congested traffic scenes. TraCount uses multiple fully convolutional (FC) sub-networks to predict the density map for a given static image of a traffic scene. The different FC sub-networks provide a range of receptive field sizes, enabling us to count vehicles whose perspective-induced scale varies significantly within a scene due to the large visual field of surveillance cameras. The predictions of the different FC sub-networks are fused by weighted averaging to obtain a final density map. We show that TraCount outperforms state-of-the-art methods on the challenging TRANCOS dataset, which has a total of 46796 vehicles annotated across 1244 images.
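The fusion and counting steps are simple enough to show directly. Below is a hedged sketch of weighted averaging of per-sub-network density maps and counting by integrating the fused map; the helper names, weights and toy predictions are assumptions, not TraCount's actual code.

```python
import numpy as np

def fuse_density_maps(density_maps, weights=None):
    """Weighted average of per-sub-network density maps (each H x W)."""
    maps = np.stack(density_maps, axis=0)          # (K, H, W)
    if weights is None:
        weights = np.ones(len(density_maps))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return np.tensordot(weights, maps, axes=1)     # (H, W)

def count_from_density(density_map):
    """The count is the integral (sum) of the density map."""
    return float(density_map.sum())

# toy usage with three fake sub-network outputs
rng = np.random.default_rng(1)
preds = [np.clip(rng.normal(0.01, 0.005, size=(60, 80)), 0, None) for _ in range(3)]
fused = fuse_density_maps(preds, weights=[0.5, 0.3, 0.2])
print(round(count_from_density(fused), 1))
```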
Citations: 9
Generic TV advertisement detection using progressively balanced perceptron trees
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3009995
Raghvendra Kannao, P. Guha
Automatic detection of TV advertisements is of paramount importance for various media monitoring agencies. Existing works in this domain have mostly focused on news channels, using news-specific features, while most commercial products use near-copy detection algorithms instead of generic advertisement classification. A generic detector needs to handle the inter-class and intra-class imbalances present in the data, caused by the variability of content aired across channels and the frequent repetition of advertisements. Imbalances in the data make classifiers biased towards one of the classes and thus require special treatment. We propose a tree of perceptrons to solve this problem. The training data available at each perceptron node is balanced using cluster-based over-sampling and Tomek link cleaning as we traverse the tree downwards. The trained perceptron node then passes the original unbalanced data to its children, and this process is repeated recursively until we reach the leaf nodes. We call this new algorithm the "Progressively Balanced Perceptron Tree". We have also contributed a TV advertisement dataset consisting of 250 hours of video recorded from five non-news TV channels of different genres. Experiments on this dataset show that the proposed approach achieves comparatively superior and balanced performance with respect to six baseline methods. Our proposal generalizes well across channels with varying training data sizes and achieves a top F1-score of 97% in detecting advertisements.
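A rough sketch of the node-level balancing idea follows, assuming a simple k-means-based duplication scheme as a stand-in for the paper's cluster-based over-sampling and a standard Tomek-link cleaning rule; the function names, thresholds and toy data are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Perceptron
from sklearn.neighbors import NearestNeighbors

def balance_node_data(X, y, n_clusters=3, random_state=0):
    """Cluster-based over-sampling of the minority class, then Tomek-link cleaning."""
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    minority, majority = classes[np.argmin(counts)], classes[np.argmax(counts)]
    deficit = counts.max() - counts.min()

    # over-sample the minority class proportionally within k-means clusters
    Xm = X[y == minority]
    km = KMeans(n_clusters=min(n_clusters, len(Xm)), n_init=10,
                random_state=random_state).fit(Xm)
    extra = []
    for c in np.unique(km.labels_):
        members = Xm[km.labels_ == c]
        n_new = int(round(deficit * len(members) / len(Xm)))
        if n_new:
            extra.append(members[rng.integers(0, len(members), n_new)])
    if extra:
        X = np.vstack([X] + extra)
        y = np.concatenate([y, np.full(sum(len(e) for e in extra), minority)])

    # Tomek-link cleaning: drop majority samples that form a mutual
    # nearest-neighbour pair with a sample of the other class
    nn = NearestNeighbors(n_neighbors=2).fit(X)
    nbr = nn.kneighbors(X, return_distance=False)[:, 1]
    is_link = (y != y[nbr]) & (nbr[nbr] == np.arange(len(X)))
    keep = ~(is_link & (y == majority))
    return X[keep], y[keep]

# toy usage: train one perceptron node on balanced data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([0] * 200 + [1] * 20)
Xb, yb = balance_node_data(X, y)
node = Perceptron().fit(Xb, yb)
print(Xb.shape, node.score(X, y))
```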
Citations: 3
On improved CS-SS image watermark detection over radio mobile channel
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010049
A. Bose, S. Maity
Recently, compressed sensing or compressive sampling (CS), apart from its intrinsic application to sub-sampled signal reconstruction, has been explored extensively in the design of bandwidth-preserving, energy-efficient wireless networks. At the same time, due to the open nature of the wireless channel, digital data (media) transmission needs protection from unauthorized access, and digital watermarking has been devised over the years as one form of potential solution. Among the various methods, spread spectrum (SS) watermarking is found to be efficient owing to its improved robustness and imperceptibility. SS watermarking of digital images in the presence of additive and multiplicative noise has been studied extensively; to the best of our knowledge, however, CS-SS watermarking in the presence of both multiplicative (fading channel) and additive noise has not been explored much in the existing literature. To address this problem, a wireless communication theoretic model is proposed here to develop an improved detection scheme for an additive SS image watermarking framework. The system model considers sub-sampled (CS) transmission of the watermarked image over both non-fading and fading channels. A diversity-assisted weighted combining scheme is then developed for improved watermark detection. An optimization problem is formulated in which the weight of each link is calculated through an eigenfilter approach to maximize the watermark detection probability for a fixed false alarm rate under the constraint of an embedding power (strength). A large set of simulation results validates the mathematical model of the diversity-assisted compressive watermark detector.
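The weighted-combining idea can be illustrated with a toy detector. The sketch below is not the paper's detector: it omits the CS sub-sampling stage, uses a plain correlation statistic per diversity branch, and picks combining weights as the dominant eigenvector of an eigenfilter-style matrix built from training statistics; all names, the crude fading model and the parameters are assumptions.

```python
import numpy as np

def branch_statistic(received, pn_sequence):
    """Correlation detector for one diversity branch of a spread-spectrum watermark."""
    return float(received @ pn_sequence) / len(pn_sequence)

def combining_weights(stats_h1, stats_h0):
    """Eigenfilter-style weights: dominant eigenvector of C^{-1} s s^T, where s is the
    mean separation between hypotheses and C the noise covariance under H0."""
    s = stats_h1.mean(axis=0) - stats_h0.mean(axis=0)
    C = np.cov(stats_h0, rowvar=False) + 1e-6 * np.eye(len(s))
    M = np.linalg.solve(C, np.outer(s, s))
    vals, vecs = np.linalg.eig(M)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)

# toy usage: 3 diversity branches, watermark strength alpha, crude fading gains
rng = np.random.default_rng(2)
N, K, alpha = 1024, 3, 0.8
pn = rng.choice([-1.0, 1.0], size=N)
host = rng.normal(0, 5, size=N)
gains = np.abs(rng.normal(0, 1, size=K))

# training statistics under watermark-present (H1) and absent (H0) hypotheses
stats_h1 = np.array([[branch_statistic(g * (host + alpha * pn) + rng.normal(0, 1, N), pn)
                      for g in gains] for _ in range(200)])
stats_h0 = np.array([[branch_statistic(g * host + rng.normal(0, 1, N), pn)
                      for g in gains] for _ in range(200)])
w = combining_weights(stats_h1, stats_h0)

rx = [g * (host + alpha * pn) + rng.normal(0, 1, N) for g in gains]
T = w @ np.array([branch_statistic(r, pn) for r in rx])
print("combined statistic:", round(T, 3), "weights:", np.round(w, 3))
```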
Citations: 0
DeepFly: towards complete autonomous navigation of MAVs with monocular camera
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010047
Utsav Shah, Rishabh Khawad, K. Krishna
Recently, interest in Micro Aerial Vehicles (MAVs) and their autonomous flight has increased tremendously, and significant advances have been made. The monocular camera has turned out to be the most popular sensing modality for MAVs, as it is light-weight, consumes little power, and encodes rich information about the surrounding environment. In this paper, we present DeepFly, our framework for autonomous navigation of a quadcopter equipped with a monocular camera. Navigable-space detection and waypoint selection are fundamental components of an autonomous navigation system, and they mean more than just detecting and avoiding immediate obstacles: finding the navigable space places equal emphasis on avoiding obstacles and on detecting ideal regions to move to next. An ideal region can be defined by two properties: 1) all the points in the region have approximately the same, high depth value, and 2) the area covered by the points of the region in the disparity map is considerably large. The waypoints selected from these navigable spaces assure a collision-free path, which is safer than paths obtained from other waypoint selection methods that do not consider neighboring information. In our approach, we obtain a dense disparity map by performing a translation maneuver. This disparity map is input to a deep neural network which predicts bounding boxes for multiple navigable regions. Our deep convolutional neural network with shortcut connections regresses a variable number of outputs without any complex architectural add-on. Our autonomous navigation approach has been successfully tested in both indoor and outdoor environments and in a range of lighting conditions.
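A minimal sketch of waypoint selection from a disparity map, following the two stated properties (roughly uniform high depth, i.e. small disparity, and large area); the thresholds, helper names and synthetic map are assumptions, not DeepFly's implementation, and the deep-network region proposal step is omitted.

```python
import numpy as np
from scipy import ndimage

def select_waypoint(disparity, max_disp=8.0, disp_tol=2.0, min_area=500):
    """Pick a waypoint pixel from a dense disparity map.

    A candidate region is far away (small disparity), roughly uniform in
    disparity, and large; the waypoint is the centroid of the biggest one.
    """
    far_mask = disparity < max_disp
    labels, n = ndimage.label(far_mask)
    best_label, best_area = None, 0
    for lab in range(1, n + 1):
        region = labels == lab
        area = int(region.sum())
        if area < min_area:
            continue
        # uniformity check: disparity spread inside the region must be small
        if disparity[region].std() > disp_tol:
            continue
        if area > best_area:
            best_label, best_area = lab, area
    if best_label is None:
        return None                       # no navigable region found
    cy, cx = ndimage.center_of_mass(labels == best_label)
    return int(round(cx)), int(round(cy))

# toy usage: synthetic disparity map with a far, open corridor on the right
disp = np.full((120, 160), 30.0)
disp[:, 100:] = 4.0
print(select_waypoint(disp))              # a pixel inside the right corridor
```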
Citations: 20
Robust registration of Mouse brain slices with severe histological artifacts
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010053
Nitin Agarwal, Xiangmin Xu, M. Gopi
Brain mapping research is facilitated by first aligning digital images of mouse brain slices to a standardized atlas framework such as the Allen Reference Atlas (ARA). However, conventional processing of these brain slices introduces many histological artifacts, such as tears and missing regions in the tissue, which make the automatic alignment process extremely challenging. We present an end-to-end, fully automatic registration pipeline for aligning digital images of mouse brain slices, which may contain histological artifacts, to a standardized atlas space. We use a geometric approach in which we first align the bounding boxes of the convex hulls of the brain slice contours and the atlas template contours, extracted using a variant of the Canny edge detector. We then detect the artifacts using Constrained Delaunay Triangulation (CDT) and remove them from the contours before performing global alignment of the points using the iterative closest point (ICP) algorithm. This is followed by a final non-linear registration obtained by solving Laplace's equation with Dirichlet boundary conditions. We tested our algorithm on 200 mouse brain slice images, including slices acquired with conventional processing techniques that have major histological artifacts and slices from serial two-photon tomography (STPT) with no major artifacts. We show significant improvement over other registration techniques, both qualitatively and quantitatively, on all slices, especially on those with significant histological artifacts.
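The first geometric step, aligning the bounding boxes of the convex hulls of the two contour sets, can be sketched as follows. This is an illustrative scale-and-translate fit on toy contours only, not the authors' full pipeline (no artifact removal, ICP or Laplace refinement); the function names and toy data are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_bbox(points):
    """Axis-aligned bounding box of the convex hull of a 2-D contour."""
    hull_pts = points[ConvexHull(points).vertices]
    return hull_pts.min(axis=0), hull_pts.max(axis=0)

def bbox_align(slice_contour, atlas_contour):
    """Per-axis scale + translation mapping the bounding box of the slice hull
    onto the bounding box of the atlas hull."""
    s_min, s_max = hull_bbox(slice_contour)
    a_min, a_max = hull_bbox(atlas_contour)
    scale = (a_max - a_min) / (s_max - s_min)

    def transform(pts):
        return (pts - s_min) * scale + a_min
    return transform

# toy usage: a shifted, shrunken circle aligned to a reference circle
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
atlas = np.c_[np.cos(theta), np.sin(theta)] * 50 + 100
slice_c = np.c_[np.cos(theta), np.sin(theta)] * 20 + np.array([310, 240])
aligned = bbox_align(slice_c, atlas)(slice_c)
print(np.abs(aligned - atlas).max() < 1e-6)   # bounding boxes now coincide
```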
Citations: 5
First quantization matrix estimation for double compressed JPEG images utilizing novel DCT histogram selection strategy
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010067
N. Dalmia, M. Okade
The double JPEG problem in image forensics has been gaining importance, since it involves two compression cycles and tampering may have taken place after the first cycle, calling for accurate methods to detect and localize the introduced tamper. First quantization matrix estimation, which essentially recovers the missing quantization table of the first cycle, is one way of authenticating double-compressed JPEG images. This paper presents a robust method for first quantization matrix estimation for double-compressed JPEG images by improving the selection strategy that chooses the quantization estimate from the filtered DCT histograms. The selection strategy is made robust by increasing the available statistics, utilizing the DCT coefficients of the double-compressed image under investigation, and by performing a relative comparison between the obtained histograms followed by a novel priority assignment and selection step, which accurately estimates the first quantization value. Experimental testing and comparative analysis against two state-of-the-art methods show the robustness of the proposed method for accurate first quantization estimation. The proposed method finds application in image forensics as well as in steganalysis.
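As a hedged illustration of the general idea of recovering a first quantization step from a DCT-coefficient histogram, the sketch below simulates double quantization of a Laplacian coefficient model for each candidate step and keeps the candidate whose histogram is closest to the observed one. This is a generic simulation-matching stand-in, not the paper's histogram selection strategy; the Laplacian scale, candidate range and function names are assumptions.

```python
import numpy as np

def double_quantize(coeffs, q1, q2):
    """Quantize with step q1, dequantize, then quantize with step q2."""
    return np.round(np.round(coeffs / q1) * q1 / q2)

def estimate_q1(observed_bins, q2, candidates=range(1, 31), n_sim=200000, scale=12.0, seed=0):
    """Pick the candidate first step q1 whose simulated double-quantization
    histogram is closest (L1) to the observed histogram of one DCT frequency."""
    rng = np.random.default_rng(seed)
    model = rng.laplace(0.0, scale, n_sim)      # crude model of unquantized DCT coefficients
    lo, hi = observed_bins.min(), observed_bins.max()
    obs_hist = np.bincount(observed_bins - lo, minlength=hi - lo + 1).astype(float)
    obs_hist /= obs_hist.sum()
    best_q1, best_err = None, np.inf
    for q1 in candidates:
        sim = double_quantize(model, q1, q2).astype(int)
        sim = sim[(sim >= lo) & (sim <= hi)]
        sim_hist = np.bincount(sim - lo, minlength=hi - lo + 1).astype(float)
        sim_hist /= max(sim_hist.sum(), 1.0)
        err = np.abs(obs_hist - sim_hist).sum()
        if err < best_err:
            best_q1, best_err = q1, err
    return best_q1

# toy usage: coefficients double-compressed with q1=7 then q2=5
rng = np.random.default_rng(1)
true_coeffs = rng.laplace(0.0, 12.0, 50000)
observed = double_quantize(true_coeffs, 7, 5).astype(int)
print(estimate_q1(observed, q2=5))              # expected to recover 7
```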
Citations: 6
Hierarchical spectral clustering based large margin classification of visually correlated categories
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010064
Digbalay Bose, S. Chaudhuri
Object recognition is one of the challenging tasks in computer vision, and the problem becomes increasingly difficult when the image categories are visually correlated, i.e. they are visually similar and only fine differences exist between them. This paper has a two-fold objective: the image categories are first organized in a hierarchical tree-like structure using self-tuning spectral clustering to exploit the correlations among them; the organization phase is then followed by a node-specific large margin nearest neighbor classification scheme, in which a Mahalanobis distance metric is learnt for each non-leaf node. Further, a procedure for hyperparameter selection is discussed with respect to two strategies, namely grid search and Bayesian optimization. The proposed algorithm's effectiveness is tested on selected classes of the popular ImageNet dataset.
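The first phase can be illustrated with one level of the category tree: a self-tuning (locally scaled) affinity over class-mean features followed by spectral clustering. The sketch below is an assumption-laden toy, not the paper's implementation, and it omits the per-node large-margin metric learning.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.neighbors import NearestNeighbors

def self_tuning_affinity(X, k=3):
    """Locally scaled affinity A_ij = exp(-||xi - xj||^2 / (sigma_i * sigma_j)),
    with sigma_i the distance to the k-th neighbour (self-tuning style)."""
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(X))).fit(X)
    dists, _ = nn.kneighbors(X)
    sigma = dists[:, -1] + 1e-12
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-D2 / np.outer(sigma, sigma))
    np.fill_diagonal(A, 0.0)
    return A

def split_categories(class_means, n_groups=2):
    """One level of the category tree: spectral clustering of visually
    correlated categories using their mean feature vectors."""
    A = self_tuning_affinity(class_means)
    sc = SpectralClustering(n_clusters=n_groups, affinity="precomputed", random_state=0)
    return sc.fit_predict(A)

# toy usage: 8 category means forming two visually correlated super-groups
rng = np.random.default_rng(3)
means = np.vstack([rng.normal(0, 0.3, (4, 16)), rng.normal(3, 0.3, (4, 16))])
print(split_categories(means))            # e.g. [0 0 0 0 1 1 1 1] up to label permutation
```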
Citations: 1
Overlapping cell nuclei segmentation in microscopic images using deep belief networks
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010043
Rahul Duggal, Anubha Gupta, Ritu Gupta, Manya Wadhwa, Chirag Ahuja
This paper proposes a method for segmenting the nuclei of single/isolated and overlapping/touching immature white blood cells in microscopic images of B-lineage acute lymphoblastic leukemia (ALL) prepared from peripheral blood and bone marrow aspirate. We propose a deep belief network approach for the segmentation of these nuclei. Simulation results and comparison with some existing methods demonstrate the efficacy of the proposed method.
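As a rough illustration of patch-wise nucleus segmentation with a deep-belief-style model, the sketch below stacks two scikit-learn BernoulliRBM layers (greedy unsupervised feature learning) under a logistic output, a common stand-in for a DBN; the patch size, toy image and labels are assumptions, not the authors' network or data.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def extract_patches(image, patch=9):
    """One flattened patch per pixel (reflect-padded borders)."""
    r = patch // 2
    padded = np.pad(image, r, mode="reflect")
    H, W = image.shape
    return np.array([padded[i:i + patch, j:j + patch].ravel()
                     for i in range(H) for j in range(W)])

def make_dbn_patch_classifier():
    """Stacked RBMs (greedy unsupervised layers) + logistic output,
    a scikit-learn stand-in for a deep belief network."""
    return Pipeline([
        ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# toy usage: a bright blob on a dark background stands in for a stained nucleus
rng = np.random.default_rng(4)
img = rng.uniform(0.0, 0.2, (32, 32))
img[8:16, 8:16] += 0.7
img = np.clip(img, 0, 1)
labels = (img > 0.4).astype(int)              # weak labels, just for the toy example

X = extract_patches(img)                      # (1024, 81) patch features in [0, 1]
y = labels.ravel()
model = make_dbn_patch_classifier().fit(X, y)
mask = model.predict(X).reshape(img.shape)    # pixel-wise nucleus/background mask
print((mask == labels).mean())
```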
Citations: 56
Event recognition in egocentric videos using a novel trajectory based feature
Pub Date : 2016-12-18 DOI: 10.1145/3009977.3010011
Vinodh Buddubariki, Sunitha Gowd Tulluri, Snehasis Mukherjee
This paper proposes an approach for event recognition in egocentric videos using dense trajectories over a Gradient Flow - Space Time Interest Point (GF-STIP) feature. We focus on recognizing events of diverse categories (including indoor and outdoor activities, sports, social activities and adventures) in egocentric videos. We introduce a dataset with diverse egocentric events, as the existing egocentric activity recognition datasets consist of indoor videos only. The dataset introduced in this paper contains 102 videos covering 9 different events (with indoor and outdoor videos under varying lighting conditions). We extract Space Time Interest Points (STIP) from each frame of the video. The interest points are taken as the lead pixels, and the Gradient-Weighted Optical Flow (GWOF) feature is calculated at each lead pixel by multiplying the optical flow measure and the gradient magnitude at that pixel, to obtain the GF-STIP feature. We construct pose descriptors with the GF-STIP feature. We use the GF-STIP descriptors for recognizing events in egocentric videos with three different approaches: a Bag of Words (BoW) model, Fisher vectors, and dense trajectories computed for the videos. We show that dense trajectory features based on the proposed GF-STIP descriptors enhance the efficacy of the event recognition system for egocentric videos.
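The GWOF computation at lead pixels can be sketched with standard OpenCV calls, using corner points as a stand-in for STIP detections. The sketch below multiplies the dense optical-flow magnitude by the image gradient magnitude at each detected point; the detector choice, parameters and toy frames are assumptions, not the paper's pipeline.

```python
import cv2
import numpy as np

def gwof_at_points(prev_gray, curr_gray, max_points=200):
    """Gradient-weighted optical flow at detected interest points:
    |flow| * |gradient| at each lead pixel (corners as a stand-in for STIP)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_mag = np.linalg.norm(flow, axis=2)
    gx = cv2.Sobel(curr_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(curr_gray, cv2.CV_64F, 0, 1, ksize=3)
    grad_mag = np.hypot(gx, gy)
    pts = cv2.goodFeaturesToTrack(curr_gray, max_points, 0.01, 5)
    if pts is None:
        return np.empty((0, 3))
    feats = []
    for x, y in pts.reshape(-1, 2):
        xi, yi = int(round(x)), int(round(y))
        feats.append((xi, yi, flow_mag[yi, xi] * grad_mag[yi, xi]))
    return np.array(feats)

# toy usage: a bright square shifting two pixels to the right between frames
f1 = np.zeros((120, 160), np.uint8)
f1[40:80, 40:80] = 255
f2 = np.zeros((120, 160), np.uint8)
f2[40:80, 42:82] = 255
feats = gwof_at_points(f1, f2)
print(feats.shape, float(feats[:, 2].max()) if len(feats) else 0.0)
```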
Citations: 6