
2020 IEEE International Conference on Image Processing (ICIP): Latest Publications

Kinship Verification From Gait?
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9190787
Salah Eddine Bekhouche, A. Chergui, A. Hadid, Y. Ruichek
Kinship verification aims to determine whether two persons are kin-related or not. This is an emerging topic in computer vision due to its potential practical applications such as family album management. Most previous works are based on checking kinship from face patterns and, more recently, from voices. In this paper we provide the first investigation in the literature of kinship verification from gait. The main purpose is to study whether family members share some gait patterns. As this is a new topic, we started by collecting a new dataset for kinship verification from human gait, containing several pairs of video sequences of celebrities and their relatives. The database will be released to the research community for research purposes. Along with the database, we provide baseline results using silhouette-based and video-based analysis. Moreover, we also propose a two-stream 3DCNN to tackle the problem. The preliminary experimental results point to the potential usefulness of gait information for kinship verification.
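The abstract does not describe the two-stream 3DCNN in detail, so the following is only a rough sketch, assuming a Siamese-style setup in which each stream embeds one gait clip (e.g. a silhouette volume) with 3D convolutions and the two embeddings are concatenated and classified as kin or not-kin. All layer sizes, the input resolution, and the fusion strategy are assumptions, and PyTorch is used purely for illustration.

import torch
import torch.nn as nn

class GaitStream(nn.Module):
    """One 3D-CNN stream over a gait clip of shape (B, 1, T, H, W)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling over time and space
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, clip):
        return self.fc(self.features(clip).flatten(1))

class TwoStreamKinshipNet(nn.Module):
    """Embeds two gait clips and scores whether the two subjects are kin."""
    def __init__(self):
        super().__init__()
        self.stream_a = GaitStream()
        self.stream_b = GaitStream()
        self.classifier = nn.Sequential(
            nn.Linear(2 * 128, 64), nn.ReLU(),
            nn.Linear(64, 1),          # logit for kin / not-kin
        )

    def forward(self, clip_a, clip_b):
        emb = torch.cat([self.stream_a(clip_a), self.stream_b(clip_b)], dim=1)
        return self.classifier(emb)

# Example: a batch of two pairs of 16-frame 64x44 silhouette clips.
model = TwoStreamKinshipNet()
logit = model(torch.randn(2, 1, 16, 64, 44), torch.randn(2, 1, 16, 64, 44))

Such a model would typically be trained with a binary cross-entropy loss on kin/non-kin pairs drawn from the collected dataset.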
Citations: 3
B-Spline Level Set For Drosophila Image Segmentation
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9191177
Rim Rahali, Yassine Ben Salem, Noura Dridi, H. Dahman
Segmentation of biological images is a challenging task due to non-convex shapes, intensity inhomogeneity, and clustered cells. To address these issues, a new algorithm is proposed based on the B-spline level set method. The implicit function of the level set is modelled as a continuous parametric function represented with the B-spline basis, which differs from the discrete formulation associated with conventional level sets. The proposed framework takes the properties of biological images into account. The algorithm is applied to Drosophila images and compared to the conventional level set and Marker Controlled Watershed (MCW). Results show good performance in terms of the Dice coefficient for both noisy and noiseless images.
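The abstract only states that the implicit function is a continuous function expressed in a B-spline basis; a common parameterization in the B-spline level-set literature is phi(x) = sum_k c[k] * beta^n(x/h - k), with coefficients c[k] on a coarse grid of spacing h. The NumPy sketch below is an assumption-laden illustration of that idea only: it evaluates a 2D level set from a small coefficient grid with a separable cubic B-spline basis and takes the positive region as the segmentation; the grid size and spacing are made up.

import numpy as np

def cubic_bspline(t):
    """Centered cubic B-spline basis beta^3(t), nonzero on |t| < 2."""
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = 2.0 / 3.0 - t[m1] ** 2 + 0.5 * t[m1] ** 3
    out[m2] = ((2.0 - t[m2]) ** 3) / 6.0
    return out

def levelset_from_coeffs(coeffs, shape, h):
    """phi(y, x) = sum_{i,j} c[i, j] * beta(y/h - i) * beta(x/h - j)."""
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    phi = np.zeros(shape)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            phi += coeffs[i, j] * cubic_bspline(yy / h - i) * cubic_bspline(xx / h - j)
    return phi

# Toy example: a coarse 6x6 coefficient grid controlling a 48x48 level set.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((6, 6))
phi = levelset_from_coeffs(coeffs, shape=(48, 48), h=8)
mask = phi > 0   # interior of the zero level set (the segmentation)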
Citations: 1
Interactive Separation Network For Image Inpainting
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9191263
Siyuan Li, Luanhao Lu, Zhiqiang Zhang, Xin Cheng, Kepeng Xu, Wenxin Yu, Gang He, Jinjia Zhou, Zhuo Yang
Image inpainting, also known as image completion, is the process of filling in the missing region of an incomplete image to make the repaired image visually plausible. Strided convolutional layers learn high-level representations while reducing computational complexity, but they fail to preserve existing detail from the original image (e.g., texture, sharp transients), which degrades the generative model in the image inpainting task. To reduce the erosion of high-resolution image components while maintaining the semantic representation, this paper designs a new network called the Interactive Separation Network, which progressively decomposes the features into two streams and fuses them. Besides, the rationality of the network design and the efficiency of the proposed network are demonstrated in an ablation study. To the best of our knowledge, the experimental results of the proposed method are superior to those of state-of-the-art inpainting approaches.
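The actual architecture of the Interactive Separation Network is not given in the abstract; the block below is only a speculative PyTorch sketch of the general idea of separating features into a strided, semantic stream and a full-resolution, detail-preserving stream and then fusing them back at the input resolution. The channel counts, kernel sizes, and fusion operator are all assumptions, not the authors' design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparationBlock(nn.Module):
    """Splits features into a strided 'semantic' stream and a full-resolution
    'detail' stream, then fuses them back at the input resolution."""
    def __init__(self, channels):
        super().__init__()
        self.semantic = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.detail = nn.Conv2d(channels, channels, 3, stride=1, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        sem = F.relu(self.semantic(x))                    # low-res, high-level
        sem = F.interpolate(sem, size=x.shape[-2:], mode="bilinear",
                            align_corners=False)          # back to input size
        det = F.relu(self.detail(x))                      # keeps fine detail
        return F.relu(self.fuse(torch.cat([sem, det], dim=1)))

# Example: refine a 64-channel feature map from a partially masked image.
block = SeparationBlock(64)
out = block(torch.randn(1, 64, 128, 128))   # same spatial size as the input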
Citations: 5
Self-Training Of Graph Neural Networks Using Similarity Reference For Robust Training With Noisy Labels
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9191054
Hyoungseob Park, Minki Jeong, Youngeun Kim, Changick Kim
Filtering noisy labels is crucial for robust training of deep neural networks. To train networks with noisy labels, sampling methods have been introduced, which select reliable instances and update the networks using only the sampled data. Since they rarely employ the non-sampled data for training, these methods have a fundamental limitation: they reduce the amount of training data. To alleviate this problem, our approach aims to fully utilize the whole dataset by leveraging the information of the sampled data. To this end, we propose a novel graph-based learning framework that enables networks to propagate the label information of the sampled data to adjacent data, whether they are sampled or not. We also propose a novel self-training strategy that utilizes the non-sampled data without labels and regularizes the network update using the information of the sampled data. Our method outperforms state-of-the-art sampling methods.
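The abstract does not specify how label information is propagated over the graph; the NumPy sketch below illustrates the general idea with a generic similarity-graph label propagation, where soft labels of the sampled (reliable) instances are spread to their neighbors and the reliable nodes are clamped at each iteration. The graph construction, clamping rule, and all hyperparameters are assumptions and not the authors' method.

import numpy as np

def propagate_labels(features, labels, sampled_mask, n_iters=10, sigma=1.0):
    """Spread soft labels from reliable (sampled) nodes to all nodes over a
    similarity graph; non-sampled nodes start from uniform soft labels."""
    n, c = len(labels), labels.max() + 1
    # Similarity graph from pairwise feature distances (row-normalized).
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    w /= w.sum(axis=1, keepdims=True)

    soft = np.full((n, c), 1.0 / c)
    soft[sampled_mask] = np.eye(c)[labels[sampled_mask]]
    for _ in range(n_iters):
        soft = w @ soft                                        # propagate to neighbors
        soft[sampled_mask] = np.eye(c)[labels[sampled_mask]]   # clamp reliable nodes
    return soft

# Toy example: 6 instances, 2 classes, only the first 3 treated as reliable.
feats = np.random.default_rng(0).standard_normal((6, 4))
labels = np.array([0, 1, 0, 0, 1, 1])
sampled = np.array([True, True, True, False, False, False])
pseudo = propagate_labels(feats, labels, sampled).argmax(1)   # pseudo-labels for all nodes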
Citations: 0
Upright Adjustment With Graph Convolutional Networks
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9190715
Raehyuk Jung, Sungmin Cho, Junseok Kwon
We present a novel method for the upright adjustment of 360° images. Our network consists of two modules: a convolutional neural network (CNN) and a graph convolutional network (GCN). The input 360° image is processed with the CNN for visual feature extraction, and the extracted feature map is converted into a graph that provides a spherical representation of the input. We also introduce a novel loss function to address the issue of discrete probability distributions defined on the surface of a sphere. Experimental results demonstrate that our method outperforms fully connected-based methods.
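The GCN module itself is not specified in the abstract; as a point of reference only, the sketch below implements a single standard graph-convolution layer (symmetrically normalized adjacency with self-loops, as in Kipf and Welling) over node features that would come from the CNN feature map. The graph size, connectivity, and feature dimensions are arbitrary assumptions.

import numpy as np

def gcn_layer(adj, x, w):
    """One graph-convolution layer: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ x @ w, 0.0)

# Toy example: 8 nodes sampled on a sphere, connected in a ring, 16-dim features.
rng = np.random.default_rng(0)
adj = np.zeros((8, 8))
for i in range(8):
    adj[i, (i + 1) % 8] = adj[(i + 1) % 8, i] = 1.0
feats = rng.standard_normal((8, 16))
weights = rng.standard_normal((16, 8))
node_out = gcn_layer(adj, feats, weights)         # (8, 8) updated node features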
Citations: 4
Channel–Spatial fusion aware net for accurate and fast object Detection
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9191058
Linhuang Wu, Xiujun Yang, Zhenjia Fan, Chunjun Wang, Z. Chen
A major challenge of object detection is that accurate detectors are limited in speed by their large networks, while lightweight detectors can run in real time but their weak representation ability comes at the expense of accuracy. To overcome this issue, we propose a channel–spatial fusion awareness module (CSFA) that improves accuracy by enhancing the feature representation of the network at a negligible cost in complexity. Given a feature map, our method sequentially exploits two parts, channel awareness and spatial awareness, to reconstruct the feature map without deepening the network. Because CSFA can easily be integrated into any layer of a CNN architecture, we assemble this module into ResNet-18 and DLA-34 in CenterNet to form a CSFA detector. Results consistently show that CSFA-Net runs at a fairly fast speed and achieves state-of-the-art results, i.e., an mAP of 81.12% on VOC2007 and an AP of 43.2% on COCO.
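The internals of the CSFA module are not described in the abstract; the sketch below is a CBAM-style stand-in used only to illustrate the sequential "channel awareness then spatial awareness" refinement of a feature map that leaves the input shape unchanged, which is what makes such a module easy to drop into any layer. The reduction ratio, kernel size, and pooling choices are assumptions.

import torch
import torch.nn as nn

class ChannelSpatialAware(nn.Module):
    """Sequential channel attention then spatial attention over a feature map.
    (CBAM-style stand-in; the actual CSFA design is not given in the abstract.)"""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel awareness: weight each channel from its global average response.
        ch = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * ch
        # Spatial awareness: weight each location from channel-wise max and mean maps.
        sp = torch.cat([x.max(dim=1, keepdim=True).values,
                        x.mean(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(sp))

# Example: refine a 64-channel backbone feature map without changing its shape.
module = ChannelSpatialAware(64)
refined = module(torch.randn(2, 64, 56, 56))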
Citations: 1
CGO: Multiband Astronomical Source Detection With Component-Graphs
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9191276
T. X. Nguyen, G. Chierchia, Laurent Najman, A. Venhola, C. Haigh, R. Peletier, M. Wilkinson, Hugues Talbot, B. Perret
Component-graphs provide powerful and complex structures for multi-band image processing. We propose a multiband astronomical source detection framework built on component-graphs and relying on a new set of component attributes. We propose two modules: one to differentiate nodes belonging to distinct objects and one to detect partial-object nodes. Experiments demonstrate an improved capacity for detecting faint objects on a multi-band astronomical dataset.
Citations: 2
KRF-SLAM: A Robust AI Slam Based On Keypoint Resampling And Fusion
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9191192
Wai Mun Wong, Christopher Lim, Chia-Da Lee, Lilian Wang, Shih-Che Chen, Pei-Kuei Tsung
Artificial Intelligence (AI) based feature extractors provide new possibilities for the localization problem because of their trainable characteristics. In this paper, the confidence information from the AI learning process is used to further improve accuracy. By resampling interest points based on different confidence thresholds, we are able to pixel-stack highly confident interest points to increase their bias in pose optimization. Then, complementary descriptors are used to describe the pixel-stacked interest points. As a result, the proposed Keypoint Resampling and Fusion (KRF) method improves the absolute trajectory error by 40% over a state-of-the-art visual SLAM algorithm on the TUM Freiburg dataset. It is also more robust against tracking loss and is compatible with existing optimizers.
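As a loose illustration of the resampling idea, the NumPy sketch below duplicates ("pixel-stacks") each interest point once for every confidence threshold it exceeds, so that more confident points contribute more terms to a downstream pose optimization. The threshold values, and the choice to keep every point at least once, are assumptions rather than the paper's exact scheme.

import numpy as np

def resample_keypoints(points, confidences, thresholds=(0.5, 0.7, 0.9)):
    """Keep every keypoint once and duplicate it for each confidence threshold
    it exceeds, so more confident points get more weight in pose optimization."""
    stacked = []
    for pt, conf in zip(points, confidences):
        copies = 1 + sum(conf >= t for t in thresholds)
        stacked.extend([pt] * copies)
    return np.asarray(stacked)

# Toy example: 4 detected keypoints with confidences from an AI feature extractor.
pts = np.array([[10.0, 22.0], [31.5, 8.0], [5.0, 40.0], [17.0, 17.0]])
conf = np.array([0.95, 0.72, 0.55, 0.30])
weighted_pts = resample_keypoints(pts, conf)   # 4 + 3 + 2 + 1 = 10 rows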
Citations: 0
Blind Image Deblurring With Joint Extreme Channels And L0-Regularized Intensity And Gradient Priors
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9191010
Kai Zhou, Peixian Zhuang, J. Xiong, Jin Zhao, Muyao Du
The extreme channels prior (ECP) relies on the bright and dark channels of an image, and the corresponding ECP-based methods perform well in blind image deblurring. However, we experimentally observe that the pixel values of the dark and bright channels of some images are not concentrated at 0 and 1, respectively. Based on this observation, we develop a model with a joint prior that combines the extreme channels prior with the $L_0$-regularized intensity and gradient prior for blind image deblurring; previous image deblurring approaches based on the dark channel prior, $L_0$-regularized intensity and gradient, and the extreme channels prior can be seen as particular cases of our model. We then derive an efficient optimization algorithm using the half-quadratic splitting method to address the non-convex $L_0$ minimization problem. A large number of experiments demonstrate the superiority of the proposed model in detail restoration and artifact removal, and our model outperforms several leading deblurring approaches in terms of both subjective results and objective assessments. In addition, our method is more applicable to deblurring natural, text, and face images that do not contain many bright or dark pixels.
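For reference, the dark and bright channels that the extreme channels prior relies on are the per-patch minimum of the per-pixel channel minimum and the per-patch maximum of the per-pixel channel maximum, respectively. A minimal NumPy/SciPy sketch is given below; the 15x15 patch size is an assumed value, not one taken from the paper.

import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def extreme_channels(image, patch=15):
    """Dark and bright channels of an RGB image with values in [0, 1].
    dark(x)   = min over a patch around x of the per-pixel channel minimum
    bright(x) = max over a patch around x of the per-pixel channel maximum"""
    per_pixel_min = image.min(axis=2)
    per_pixel_max = image.max(axis=2)
    dark = minimum_filter(per_pixel_min, size=patch)
    bright = maximum_filter(per_pixel_max, size=patch)
    return dark, bright

# Toy example on a random "image"; for a sharp natural image most dark-channel
# values are expected to lie near 0 and most bright-channel values near 1.
img = np.random.default_rng(0).random((64, 64, 3))
dark, bright = extreme_channels(img, patch=15)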
Citations: 5
Egok360: A 360 Egocentric Kinetic Human Activity Video Dataset
Pub Date : 2020-10-01 DOI: 10.1109/ICIP40778.2020.9191256
Keshav Bhandari, Mario A. DeLaGarza, Ziliang Zong, Hugo Latapie, Yan Yan
Recently, there has been growing interest in wearable sensors, which provide new research perspectives for 360° video analysis. However, the lack of 360° datasets in the literature hinders research in this field. To bridge this gap, in this paper we propose a novel Egocentric (first-person) 360° Kinetic human activity video dataset (EgoK360). The EgoK360 dataset contains annotations of human activities with different sub-actions, e.g., the activity Ping-Pong with four sub-actions: pickup-ball, hit, bounce-ball, and serve. To the best of our knowledge, EgoK360 is the first dataset in the domain of first-person activity recognition with a 360° environmental setup, which will facilitate egocentric 360° video understanding. We provide experimental results and a comprehensive analysis of variants of the two-stream network for 360° egocentric activity recognition. The EgoK360 dataset can be downloaded from https://egok360.github.io/.
Citations: 3