
2016 IEEE International Conference on Image Processing (ICIP): Latest Publications

End-to-end crowd counting via joint learning local and global count
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7532551
C. Shang, H. Ai, Bo Bai
Crowd counting in crowded scenes is very challenging due to heavy occlusions, appearance variations and perspective distortions. Current crowd counting methods typically operate on overlapping image patches and then sum over the patches to obtain the final count. In this paper, we propose an end-to-end convolutional neural network (CNN) architecture that takes a whole image as its input and directly outputs the counting result. While sharing computations over overlapping regions, our method takes advantage of contextual information when predicting both local and global counts. In particular, we first feed the image to a pre-trained CNN to obtain a set of high-level features. The features are then mapped to local counts using recurrent network layers with memory cells. Experiments on several challenging crowd counting datasets achieve state-of-the-art results and demonstrate the effectiveness of our method.
Pages: 1215-1219
Citations: 169
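The local-to-global relationship the abstract describes — per-region local counts whose sum gives the global count — can be sketched as follows. The function names and the joint-loss weighting `lam` are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np

def global_count(local_counts):
    # Final stage of the pipeline described above: recurrent layers predict a
    # count per local region, and the global count is simply their sum.
    return float(np.sum(local_counts))

def joint_count_loss(pred_local, true_local, lam=1.0):
    # Hypothetical joint objective over local and global counts; penalizing
    # both terms encourages the network to be consistent at both scales.
    # The weighting `lam` is an assumption for illustration.
    pred_local = np.asarray(pred_local, dtype=float)
    true_local = np.asarray(true_local, dtype=float)
    local_term = np.mean((pred_local - true_local) ** 2)
    global_term = (pred_local.sum() - true_local.sum()) ** 2
    return local_term + lam * global_term
```

Note that errors on individual regions can cancel in the global term, which is why the local term is needed as well.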
A multimedia gesture dataset for human robot communication: Acquisition, tools and recognition results
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7532923
I. Rodomagoulakis, N. Kardaris, Vassilis Pitsikalis, A. Arvanitakis, P. Maragos
Motivated by recent advances in human-robot interaction, we present a new dataset, a suite of tools to handle it, and state-of-the-art work on visual gesture and audio command recognition. The dataset was collected with an integrated annotation and acquisition web interface that facilitates on-the-fly temporal ground-truth annotation for fast acquisition. The dataset includes gesture instances in which the subjects are not in strict setup positions, and contains multiple scenarios rather than a single static configuration. We accompany it with a suite of tools serving as the practical interface for acquiring audio-visual data in the robot operating system, a state-of-the-art learning pipeline for training visual gesture and audio command models, and an online gesture recognition system. Finally, we include an evaluation of the dataset, providing rich and insightful experimental recognition results.
Pages: 3066-3070
Citations: 4
Scale-invariant anomaly detection with multiscale group-sparse models
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7533089
Diego Carrera, G. Boracchi, A. Foi, B. Wohlberg
The automatic detection of anomalies, defined as patterns not encountered in a representative set of normal images, is an important problem in industrial control and biomedical applications. We have shown that this problem can be successfully addressed by the sparse representation of individual image patches using a dictionary learned from a large set of patches extracted from normal images. Anomalous patches are detected as those whose sparse representation on this dictionary exceeds sparsity or error tolerances. Unfortunately, this solution is not suitable for many real-world visual inspection systems because it is not scale invariant: since the dictionary is learned at a single scale, patches in normal images acquired at a different magnification level might be detected as anomalous. We present an anomaly-detection algorithm that learns a dictionary invariant to a range of scale changes and overcomes this limitation through an appropriate sparse coding stage. The algorithm was successfully tested in an industrial application by analyzing a dataset of Scanning Electron Microscope (SEM) images, which typically exhibit different magnification levels.
Pages: 3892-3896
Citations: 15
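The detection rule described above — flag a patch when even its best sparse code on the learned dictionary leaves a large reconstruction error — can be sketched with a plain Orthogonal Matching Pursuit coder. The sparsity level `k` and tolerance `tol` are illustrative choices, and the dictionary here is a toy one rather than a learned, scale-invariant one:

```python
import numpy as np

def omp(D, x, k):
    # Orthogonal Matching Pursuit: greedily select up to k atoms of D, then
    # least-squares fit x on the selected atoms and update the residual.
    residual = x.astype(float).copy()
    selected = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in selected:
            selected.append(j)
        coef, *_ = np.linalg.lstsq(D[:, selected], x, rcond=None)
        residual = x - D[:, selected] @ coef
    return selected, residual

def is_anomalous(D, patch, k=2, tol=0.1):
    # A patch is anomalous when its best k-sparse representation on the
    # dictionary still leaves a large *relative* reconstruction error.
    _, residual = omp(D, patch, k)
    return np.linalg.norm(residual) > tol * np.linalg.norm(patch)
```

A patch well explained by the dictionary yields a near-zero residual; a patch outside the span of the normal atoms keeps most of its energy in the residual and is flagged.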
Patch similarity based edge-preserving background estimation for single frame infrared small target detection
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7532343
Kun Bai, Yuehuang Wang, Qiong Song
Edges in infrared images often cause serious false alarms in single-frame infrared small target detection. We therefore propose a novel edge-preserving background estimation method for small target detection that mitigates this problem. We first introduce the patch similarity feature of infrared images. Patch similarity is then used to formulate an edge-preserving estimate of the infrared background. Finally, the estimated background is subtracted from the original infrared image to suppress edges. The edge-preserving ability of our approach is demonstrated through experiments and comparisons with state-of-the-art background estimation methods.
Pages: 181-185
Citations: 11
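The idea of a patch-similarity-weighted background estimate can be sketched with a non-local-means-style filter — an illustrative stand-in for the paper's exact formulation. Each pixel's background value is a similarity-weighted average over pixels with look-alike neighborhoods, so edges are preserved while an isolated small target is averaged away; subtracting the estimate then leaves the target standing out. The parameters `patch`, `search`, and `h` are assumptions:

```python
import numpy as np

def patch_similarity_background(img, patch=3, search=7, h=10.0):
    # Background estimate: weight each candidate pixel by the similarity of
    # its surrounding patch to the reference patch, then average.
    pad = patch // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    H, W = img.shape
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            ref = padded[i:i + patch, j:j + patch]
            num = den = 0.0
            i0, i1 = max(0, i - search // 2), min(H, i + search // 2 + 1)
            j0, j1 = max(0, j - search // 2), min(W, j + search // 2 + 1)
            for u in range(i0, i1):
                for v in range(j0, j1):
                    cand = padded[u:u + patch, v:v + patch]
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    num += w * img[u, v]
                    den += w
            out[i, j] = num / den
    return out
```

On a flat region the estimate reproduces the input; at a small hot spot the estimate is pulled down by the many dissimilar-but-cool neighbors, so the residual `img - background` highlights the target.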
Automatic vehicle counting method based on principal component pursuit background modeling
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7533075
Jorge Quesada, P. Rodríguez
Estimating the number of vehicles present in traffic video sequences is a common task in applications such as active traffic management and automated route planning. Several vehicle counting methods exist, such as Particle Filtering and Headlight Detection. Although Principal Component Pursuit (PCP) is considered the state of the art for video background modeling, it has not previously been exploited for this task, mainly because most existing PCP algorithms are batch methods with a high computational cost that makes them unsuitable for real-time vehicle counting. In this paper, we propose a novel incremental PCP-based algorithm to estimate the number of vehicles present in top-view traffic video sequences in real time. We test our method against several challenging datasets, achieving results that compare favorably with state-of-the-art methods in both accuracy and speed: an average accuracy of 98% when counting vehicles passing through a virtual door, 91% when estimating the total number of vehicles in the scene, and processing speeds of up to 26 fps.
Pages: 3822-3826
Citations: 35
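The PCP decomposition underlying this approach splits the frame matrix `M` (one vectorized frame per column) into a low-rank background `L` plus a sparse foreground `S` containing the moving vehicles. Below is a didactic *batch* Robust PCA via the standard inexact augmented Lagrange multiplier scheme — not the incremental algorithm the paper proposes, just the decomposition it builds on:

```python
import numpy as np

def rpca_ialm(M, lam=None, iters=300, tol=1e-7):
    # Solve min ||L||_* + lam * ||S||_1  s.t.  L + S = M (inexact ALM).
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_two = np.linalg.norm(M, 2)
    Y = M / max(norm_two, np.abs(M).max() / lam)  # dual variable init
    mu, rho = 1.25 / norm_two, 1.5
    S = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft thresholding.
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = M - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, 1e10)
        if np.linalg.norm(Z) <= tol * np.linalg.norm(M):
            break
    return L, S
```

A virtual-door count can then be obtained by thresholding `|S|` in the door region frame by frame and counting crossings; the paper's contribution is doing the decomposition incrementally so this runs in real time.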
Automated blood vessel extraction in two-dimensional breast thermography
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7532383
S. Kakileti, K. Venkataramani
In this paper, we present an automated algorithm for the detection of blood vessels in 2D thermographic images for breast cancer screening. Vessel extraction from breast thermal images helps in the classification of malignancy, since cancer causes increased blood flow at warmer temperatures, additional vessel formation, and tortuosity of the vessels feeding the cancerous growth. The proposed algorithm uses three enhanced images to detect possible vessel regions based on their intensity and shape, and the final vessel detection combines these three outputs. Unlike many standard algorithms, ours does not depend on absolute pixel intensities in the images but only on their relative variation. On a dataset of over 40 subjects with high-resolution thermographic images, we are able to extract the vessels accurately while eliminating diffuse heat regions. Future studies will involve extracting features from the detected vessels and using them for malignancy classification.
Pages: 380-384
Citations: 11
Robust calibration of broadcast cameras based on ellipse and line contours
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7532377
S. Croci, N. Stefanoski, A. Smolic
Professional TV studio footage often poses specific challenges to camera calibration due to a lack of features and complex camera operation. As available algorithms often fail, we propose a novel approach based on robust tracking of the ellipse and line features of a predefined logo. We further devise a predictive, iterative estimation algorithm that incorporates confidence measures and filtering. Our results validate the accuracy and reliability of our approach, demonstrated on challenging professional footage.
Pages: 350-354
Citations: 0
Projective non-negative matrix factorization for unsupervised graph clustering
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7532559
C. Bampis, P. Maragos, A. Bovik
We develop an unsupervised graph clustering and image segmentation algorithm based on non-negative matrix factorization. We consider arbitrarily represented visual signals (in 2D or 3D) and use a graph embedding approach for image or point cloud segmentation. We extend a Projective Non-negative Matrix Factorization variant to include local spatial relationships over the image graph. Using properly defined region features, our method of unsupervised graph clustering can be applied to object and image segmentation. To demonstrate this, we apply our ideas to many graph-based segmentation tasks, such as 2D pixel and superpixel segmentation and 3D point cloud segmentation. Finally, we show results comparable to those of the only existing work on pixel-based texture segmentation using Non-negative Matrix Factorization, deploying a simple yet effective extension that is parameter-free. We provide a detailed convergence proof of our spatially regularized method and various demonstrations as supplementary material. This novel work brings together graph clustering and image segmentation.
Pages: 1255-1258
Citations: 8
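The projective NMF core — approximating `X` by `W Wᵀ X` with nonnegative `W`, so that row-wise argmax of `W` yields cluster labels — can be sketched with the standard multiplicative update attributed to Yuan and Oja. This is an illustrative baseline only; the paper's variant adds spatial regularization over the image graph, which is omitted here:

```python
import numpy as np

def pnmf(X, r, iters=200, eps=1e-9, seed=0):
    # Projective NMF: find nonnegative W (n x r) that approximately
    # minimizes ||X - W W^T X||_F^2 via multiplicative updates.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = rng.random((n, r)) + 0.1          # positive random init
    A = X @ X.T                           # Gram matrix of the samples
    for _ in range(iters):
        AW = A @ W
        # Multiplicative update keeps W nonnegative by construction.
        W = W * (2.0 * AW) / (W @ (W.T @ AW) + AW @ (W.T @ W) + eps)
        W /= np.linalg.norm(W, 2) + eps   # stabilize the scale of W
    return W

def cluster_labels(W):
    # Each sample is assigned to the component with the largest loading.
    return np.argmax(W, axis=1)
```

Because `W Wᵀ` acts approximately as a projection onto `r` nonnegative components, the same factor serves both for reconstruction and for cluster assignment — the property that makes PNMF attractive for unsupervised graph clustering.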
Context-aware event-driven stereo matching
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7532523
Dongqing Zou, Ping Guo, Qiang Wang, Xiaotao Wang, Guangqi Shao, Feng Shi, Jia Li, P. Park
Similarity measurement plays an important role in stereo matching, whether for visual data from standard cameras or from novel sensors such as Dynamic Vision Sensors (DVS). Generally speaking, robust feature descriptors contribute to designing a powerful similarity measurement, as demonstrated by classic stereo matching methods. However, the variety and representational power of feature descriptors for DVS data are so limited that accurate stereo matching on DVS data is very challenging. In this paper, a novel feature descriptor is proposed to improve the accuracy of DVS stereo matching. Our descriptor captures the local context and distribution of the DVS data, contributing to an effective similarity measurement for DVS data matching and yielding accurate stereo matching results. We evaluate our method on ground-truth data and compare it with various standard stereo methods. Experiments demonstrate its efficiency and effectiveness.
Pages: 1076-1080
Citations: 19
A novel classification system for dysplastic nevus and malignant melanoma
Pub Date: 2016-09-01 DOI: 10.1109/ICIP.2016.7532993
Mutlu Mete, N. Sirakov, John Griffin, A. Menter
Melanoma is a potentially deadly form of skin cancer; however, if detected early, it is curable. A dysplastic nevus (atypical mole) is not cancerous but may represent a precursor to malignancy, as nearly 40% of melanomas arise from a preexisting mole. In this study, we propose a system to classify a skin lesion image as melanoma (M), dysplastic nevus (D), or benign (B). For this purpose we develop a new two-layered system. The first layer consists of three binary Support Vector Machine (SVM) classifiers, one for each pair of classes: M vs B, M vs D, and B vs D. The second layer is a novel decision-maker function that uses the probability memberships derived from the first layer. Each lesion is characterized by five features, which mostly overlap with the ABCD rule of dermatology. The dataset we used has 112 lesions: 54 M, 38 D, and 20 B cases. In melanoma detection experiments, we obtained 98% specificity, 76% sensitivity, and 85% F-measure accuracy.
Pages: 3414-3418
Citations: 6
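The two-layer scheme can be sketched as follows. The abstract does not spell out the exact combination rule of the second-layer decision maker, so summing each class's pairwise probability memberships and taking the maximum is our illustrative assumption:

```python
def decide(p_m_vs_b, p_m_vs_d, p_b_vs_d):
    # Second-layer decision maker over three first-layer binary SVM outputs.
    # Each input is the probability that the *first* class of the pair wins,
    # e.g. p_m_vs_b = P(melanoma) from the M-vs-B classifier.
    scores = {
        "M": p_m_vs_b + p_m_vs_d,
        "B": (1 - p_m_vs_b) + p_b_vs_d,
        "D": (1 - p_m_vs_d) + (1 - p_b_vs_d),
    }
    # Pick the class with the largest summed membership (illustrative rule).
    return max(scores, key=scores.get)
```

Unlike hard pairwise voting, summing soft memberships avoids three-way ties and lets a confident classifier outweigh two uncertain ones.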