
Latest publications from the 2020 IEEE International Conference on Image Processing (ICIP)

Learning Image Aesthetics by Learning Inpainting
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9191130
June Hao Ching, John See, L. Wong
Due to their high capability of learning robust features, convolutional neural networks (CNNs) are becoming a mainstay solution for many computer vision problems, including aesthetic quality assessment (AQA). However, learning with CNNs requires time-consuming and expensive data annotation, especially for a task like AQA. In this paper, we present a novel approach to AQA that incorporates self-supervised learning (SSL) by learning how to inpaint images according to photographic rules such as the rule-of-thirds and visual saliency. We conduct extensive quantitative experiments on a variety of pretext tasks and on different ways of masking patches for inpainting, reporting fairer distribution-based metrics. We also show the suitability and practicality of the inpainting task, which yielded comparably good benchmark results with much lighter model complexity.
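To make the pretext task concrete, here is a minimal sketch (our illustration under stated assumptions, not the authors' code) of a rule-of-thirds-guided mask generator: it drops a square patch at one of the four rule-of-thirds power points, so the inpainting network is forced to reconstruct compositionally salient regions. The patch size and the random choice among the four intersections are assumptions.

```python
import numpy as np

def thirds_mask(h: int, w: int, patch: int = 32, rng=None) -> np.ndarray:
    """Binary mask (1 = region to inpaint) centred on a rule-of-thirds point."""
    rng = np.random.default_rng() if rng is None else rng
    cy = int(rng.choice([h // 3, 2 * h // 3]))    # pick one of the four
    cx = int(rng.choice([w // 3, 2 * w // 3]))    # power-point intersections
    mask = np.zeros((h, w), dtype=np.uint8)
    y0 = min(max(cy - patch // 2, 0), h - patch)  # clamp patch inside the image
    x0 = min(max(cx - patch // 2, 0), w - patch)
    mask[y0:y0 + patch, x0:x0 + patch] = 1
    return mask

image = np.random.rand(240, 320, 3)                       # stand-in for a photograph
masked = image * (1 - thirds_mask(240, 320))[..., None]   # input to the inpainter
```

A saliency-guided variant would simply replace the fixed intersection points with the argmax of a saliency map.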
Citations: 3
3d Imaging For Thermal Cameras Using Structured Light
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9191297
Jack Erdozain, Kazuto Ichimaru, Tomohiro Maeda, Hiroshi Kawasaki, R. Raskar, A. Kadambi
Optical 3D sensing technologies are exploited for many applications in autonomous vehicles, manufacturing, and consumer products. However, existing techniques may suffer in certain challenging conditions where scattering occurs due to particles. While light in the visible and near-IR spectrum is affected by such scattering, long-wave IR (LWIR) tends to experience less of it, especially when the particles are much smaller than the wavelength of the incident radiation. We propose and demonstrate the extension of structured light scanning approaches into the LWIR spectrum using a thermal camera and a black-body radiation source. We then validate the results against ground-truth scans from traditional structured light scanners. Additional means for projecting these scanning patterns are also discussed, alongside potential drawbacks and challenges of this technique associated with future adoption.
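As background for how such a scanner recovers geometry, the sketch below decodes a stack of Gray-code stripe captures into per-pixel projector column indices; this is generic structured light decoding under assumed conventions (per-pixel mean thresholding, most significant pattern first), not the authors' pipeline. Depth then follows from triangulating each camera ray against the decoded projector stripe plane.

```python
import numpy as np

def decode_gray(images: np.ndarray) -> np.ndarray:
    """images: (N, H, W) captures of N Gray-code patterns -> (H, W) column index."""
    bits = (images > images.mean(axis=0, keepdims=True)).astype(np.uint32)
    binary = bits.copy()                  # Gray -> binary: b[i] = b[i-1] XOR g[i]
    for i in range(1, len(bits)):
        binary[i] = binary[i - 1] ^ bits[i]
    weights = 2 ** np.arange(len(bits) - 1, -1, -1)   # MSB-first bit weights
    return (binary * weights[:, None, None]).sum(axis=0)

captures = np.random.rand(8, 120, 160)    # 8 hypothetical LWIR captures
columns = decode_gray(captures)           # (120, 160) projector column map
```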
Citations: 2
Foldmatch: Accurate and High Fidelity Garment Fitting Onto 3D Scans
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9190730
Sk Aziz Ali, Sikang Yan, W. Dornisch, D. Stricker
In this paper, we propose a new template fitting method that can capture fine details of garments in target 3D scans of dressed human bodies. Matching the high-fidelity details of such loose/tight-fit garments is a challenging task, as they exhibit intricate folds, creases, wrinkle patterns, and other fine surface details. Our proposed method of non-rigid shape fitting, FoldMatch, uses physics-based particle dynamics to explicitly model the deformation of loose-fit garments, together with wrinkle vector fields for capturing clothing details. The 3D scan point cloud behaves as a collection of astrophysical particles that attract the points of the template mesh and define the template motion model. We use this point-based motion model to derive regularized deformation gradients for the template mesh. We show that the parameterization of the wrinkle vector fields helps achieve accurate shape fitting. Our method performs better than state-of-the-art methods. We define several deformation and shape-matching quality metrics to evaluate FoldMatch on synthetic and real data sets.
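The particle-dynamics idea can be illustrated with a toy update (a sketch under our own assumptions; the paper's regularized deformation gradients and wrinkle vector fields are omitted): scan points act as fixed attractors that pull nearby template vertices with an inverse-square, softened force.

```python
import numpy as np
from scipy.spatial import cKDTree

def attract(template: np.ndarray, scan: np.ndarray, k: int = 8,
            step: float = 0.1, eps: float = 1e-3) -> np.ndarray:
    """One explicit step pulling template vertices (V, 3) toward scan points (P, 3)."""
    dist, idx = cKDTree(scan).query(template, k=k)  # k nearest scan "particles"
    diff = scan[idx] - template[:, None, :]         # (V, k, 3) offsets to attractors
    # inverse-square attraction, softened so tiny distances do not blow up
    force = (diff / (dist[..., None] ** 3 + eps)).sum(axis=1)
    return template + step * force

verts = attract(np.random.rand(500, 3), np.random.rand(2000, 3))
```

In the full method this motion model is only the data-attraction term; the deformation is additionally regularized so the mesh stays smooth and fold structure is preserved.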
Citations: 3
Generalized Fast Iteratively Reweighted Soft-Thresholding Algorithm for Sparse Coding Under Tight Frames in the Complex-Domain
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9190686
P. Pokala, Satvik Chemudupati, C. Seelamantula
We present a new method for fast magnetic resonance image (MRI) reconstruction in the complex domain under tight frames. We propose a generalized problem formulation that allows for different weight-update strategies for iteratively reweighted ℓ1-minimization under tight frames. Further, we impose sufficient conditions on the function of the weights that leads to the reweighting strategy, which follows the interpretation originally given by Candès et al. but is more efficient. Since the objective function in the complex-domain compressive sensing MRI (CS-MRI) reconstruction problem is nonholomorphic, we resort to Wirtinger calculus for deriving the update strategies. We develop an algorithm called the generalized iteratively reweighted soft-thresholding algorithm (GIRSTA) and its fast variant, the generalized fast iteratively reweighted soft-thresholding algorithm (GFIRSTA). We provide convergence guarantees for GIRSTA and empirical convergence results for GFIRSTA. Our experiments show remarkable performance of the proposed algorithms for complex-domain CS-MRI reconstruction under both random and radial sampling strategies. GFIRSTA outperforms state-of-the-art techniques in terms of peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM).
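For intuition, here is a compact sketch of a fast reweighted iteration for min_x ½‖y − Ax‖² + ‖w ⊙ x‖₁ with a tight frame (A Aᴴ = I), using the Candès-style weight rule w = 1/(|x| + ε). The FFT stand-in for A, the unit step size, and all parameter values are assumptions for illustration, not the paper's operators or constants.

```python
import numpy as np

def soft(x, t):                                   # complex soft-thresholding
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

def gfirsta(y, A, AH, n, lam=0.05, eps=1e-3, iters=100):
    x = z = np.zeros(n, complex)
    w, t = np.ones(n), 1.0                        # first pass is plain FISTA
    for _ in range(iters):
        x_new = soft(z - AH(A(z) - y), lam * w)   # gradient step + weighted prox
        w = 1.0 / (np.abs(x_new) + eps)           # Candès-style reweighting
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
x_true = np.zeros(256, complex)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
A  = lambda v: np.fft.fft(v, norm="ortho")        # toy tight frame
AH = lambda v: np.fft.ifft(v, norm="ortho")
x_hat = gfirsta(A(x_true) + 0.01 * rng.standard_normal(256), A, AH, 256)
```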
Citations: 0
Visual Tracking Via Temporally-Regularized Context-Aware Correlation Filters
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9191027
Jiawen Liao, C. Qi, Jianzhong Cao, He Bian
The classical discriminative correlation filter (DCF) model suffers from boundary effects. Several modified DCF models have been proposed to mitigate this drawback using an enlarged search region, and remarkable performance improvements have been reported. However, model deterioration is still not well addressed when facing occlusion and other challenging scenarios. In this work, we propose a novel Temporally-regularized Context-aware Correlation Filters (TCCF) model to represent the target appearance more robustly. We take advantage of the enlarged search region to obtain more negative samples so that the filter is sufficiently trained, and a temporal regularizer, which restricts variation in filter models between frames, is seamlessly integrated into the original formulation. Our model is derived from a new discriminative learning loss formulation; a closed-form solution for multidimensional features is provided, which is solved efficiently using the Alternating Direction Method of Multipliers (ADMM). Extensive experiments on the standard OTB-2015, TempleColor-128, and VOT-2016 benchmarks show that the proposed approach performs favorably against many state-of-the-art methods, with real-time performance of 28 fps on a single CPU.
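A heavily simplified, single-channel version of the idea is sketched below: the filter is trained in the Fourier domain with context patches entering the denominator as extra negative energy, and a temporal term with weight mu pulling the new filter toward the previous one. The closed form shown holds for this scalar toy case only; the paper solves the full multichannel problem (with its new loss) via ADMM, and all parameter values here are assumptions.

```python
import numpy as np

def train_filter(x, contexts, y, h_prev, lam=1e-2, mu=1e-1):
    """x, each contexts[i], y, h_prev: equal-shape 2-D arrays (spatial domain)."""
    X, Y, Hp = np.fft.fft2(x), np.fft.fft2(y), np.fft.fft2(h_prev)
    den = np.abs(X) ** 2 + lam + mu
    for c in contexts:                    # context patches act as negative samples
        den = den + np.abs(np.fft.fft2(c)) ** 2
    H = (np.conj(X) * Y + mu * Hp) / den  # per-frequency closed-form solution
    return np.real(np.fft.ifft2(H))

def respond(h, z):                        # detection: correlate filter with search patch
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(h)) * np.fft.fft2(z)))
```

The mu-term is what the temporal regularization contributes: with mu = 0 the solution reduces to a plain context-aware DCF, while a larger mu damps frame-to-frame filter drift.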
Citations: 0
Attention Boosted Deep Networks For Video Classification
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9190996
Junyong You, J. Korhonen
Video classification can be performed by summarizing the image contents of individual frames into one class with deep neural networks, e.g., CNNs and LSTMs. Human interpretation of video content is influenced by the attention mechanism: the class of a video is decided more by certain salient information than by the rest. In this paper, we propose to integrate the attention mechanism into deep networks for video classification. The proposed framework employs 2D CNN networks with ImageNet-pretrained weights to extract features of video frames, which are then fed to a bidirectional LSTM network for video classification. An attention block has been developed that can be added after the LSTM network in the proposed framework. Several different 2D CNN architectures have been tested in the experiments. The results on two publicly available datasets demonstrate that integrating attention can boost the performance of deep networks in video classification compared to not applying the attention block. We also found that applying attention to the LSTM outputs on the VGG19 architecture provides the highest classification accuracy in the proposed framework.
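The attention block can be made concrete with a short PyTorch sketch (a plausible reading of the description with assumed dimensions, not the authors' released code): per-frame CNN features go through a bidirectional LSTM, and a learned softmax over time replaces plain average pooling before the classifier.

```python
import torch
import torch.nn as nn

class AttnVideoClassifier(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 256, n_classes: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)         # attention energy per time step
        self.head = nn.Linear(2 * hidden, n_classes)  # video-level classifier

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(feats)                       # (B, T, feat_dim) -> (B, T, 2*hidden)
        alpha = torch.softmax(self.score(h), dim=1)   # (B, T, 1) weights over time
        pooled = (alpha * h).sum(dim=1)               # attention-weighted temporal pooling
        return self.head(pooled)

logits = AttnVideoClassifier()(torch.randn(2, 16, 512))  # 2 clips of 16 frame features
```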
Citations: 6
Precise Statistical Approach for Leaf Segmentation
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9190754
M. Ghazal, Ali M. Mahmoud, A. Shalaby, Shams Shaker, A. Khelifi, A. El-Baz
Leaf segmentation is a key aid to automatic environmental monitoring: by segmenting a leaf, image-based leaf health assessment can be performed, which is crucial for maintaining the effectiveness of the environmental balance. This paper presents an accurate framework for diseased-leaf segmentation from colored images. In other words, the method uses information generated from RGB images stored in our database to represent the current input image. The technique consists of four main steps: 1) using contrast variations to characterize the region of interest (ROI) of a given leaf, which enhances segmentation accuracy in minimal time; 2) using a linear combination of discrete Gaussians (LCDG) to represent the visual appearance of the input image and to estimate the marginal probability distributions of the three classes of interest; 3) using the information generated from the RGB images stored in our database to calculate, on a per-pixel basis, the probabilities of the three classes from step two; 4) lastly, refining the labels with a generalized Gauss-Markov random field (GGMRF) model to maintain continuity. Experimental validation of these steps shows high accuracy.
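Steps 2 and 3 amount to per-pixel Bayesian classification with mixture-modeled intensity likelihoods; the numpy sketch below illustrates that core with toy single-Gaussian mixtures, while the LCDG refinement of the mixtures and the GGMRF smoothing of step 4 are omitted and all parameters are placeholders.

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def classify(gray, mixtures, priors):
    """gray: (H, W) image; mixtures[c]: list of (weight, mu, sigma) for class c."""
    post = np.stack([
        priors[c] * sum(w * gauss(gray, m, s) for w, m, s in comps)
        for c, comps in enumerate(mixtures)
    ])                                    # (3, H, W) unnormalized posteriors
    return post.argmax(axis=0)            # per-pixel MAP label in {0, 1, 2}

mixtures = [[(1.0, 60, 15)], [(1.0, 120, 20)], [(1.0, 200, 25)]]   # toy class models
labels = classify(np.random.randint(0, 256, (64, 64)).astype(float),
                  mixtures, priors=[0.3, 0.4, 0.3])
```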
Citations: 0
Information Hiding In Image Enhancement
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9191093
Simying Ong, Koksheik Wong
This paper proposes an information hiding method that embeds data while executing image enhancement steps. The 2D median filter is adapted and re-engineered to demonstrate the feasibility of this concept. In particular, the filtering-embedding steps are performed for each pixel in a sliding-window manner. Pixels enclosed within the predefined window (neighborhood) are gathered, linearized, and sorted. Then, the linearized pixels are divided into partitions, where each partition is assigned to represent a certain sequence of bits. The performance of the proposed method is evaluated on the BSD300 dataset for various settings. The embedding capacity, image quality, and data extraction error rate are reported and analyzed. Besides, the robustness of the proposed method against brute-force attack is also discussed. In the best-case scenario, when the window size is $7 \times 7$, $\sim 0.97$ bpp is achieved with acceptable image quality while having a $\sim 3.5\%$ data extraction error rate.
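A toy grayscale version of the filtering-embedding loop might look as follows; the two-partition split around the median and the choice of order statistics are our assumptions for illustration, and the real scheme's partition-to-bit-sequence mapping and extraction procedure are not reproduced.

```python
import numpy as np

def median_filter_embed(img: np.ndarray, bits) -> np.ndarray:
    """Filter a grayscale image with a 3x3 window, embedding one bit per pixel."""
    out, it = img.copy(), iter(bits)
    H, W = img.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            win = np.sort(img[y - 1:y + 2, x - 1:x + 2].ravel())  # 9 sorted pixels
            b = next(it, None)
            if b is None:
                out[y, x] = win[4]                    # payload exhausted: plain median
            else:                                     # near-median value from the
                out[y, x] = win[3] if b == 0 else win[5]  # partition chosen by the bit
    return out

img = np.random.randint(0, 256, (32, 32)).astype(np.uint8)
stego = median_filter_embed(img, [1, 0, 1, 1, 0])
```

Because the output stays close to the true median, the result still behaves like a denoising filter while carrying up to one bit per interior pixel, consistent with the sub-1 bpp capacity reported above.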
Citations: 2
Light Field Stitching Based On Concentric Spherical Modeling
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9190965
Siyao Zhou, Xin Jin, Pei Wang
VR images in the form of spherical panoramas are already widely available, while enhancing their immersive experience with six degrees of freedom (6-DoF) is fundamentally required. The spherical panoramic light field (LF) is a potential solution because it records the spatial and angular information of light rays in the 360° spherical space. In this paper, a novel method is proposed to generate a spherical panoramic LF by stitching LFs captured at different rotational angles. First, concentric spherical modeling is proposed to parameterize the recorded rays and eliminate projection biases in registration. Then, concentric spherical model-based LF registration, which is insensitive to ordering, is introduced to transform each 4D LF mesh accurately. Finally, the stitching result is projected to two-parallel-plane (TPP) coordinates for viewing. Experimental results show that the proposed method outperforms existing methods in terms of subjective quality and continuity of the stitched LF.
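The reparameterization at the heart of the concentric spherical model can be sketched as mapping each recorded ray from two-parallel-plane (TPP) form to direction angles on a viewing sphere; the plane gap d and the sphere placement below are assumptions for illustration.

```python
import numpy as np

def tpp_to_spherical(u, v, s, t, d=1.0):
    """Ray through (u, v, 0) and (s, t, d) -> (theta, phi) on the unit sphere."""
    direction = np.array([s - u, t - v, d], dtype=float)
    direction /= np.linalg.norm(direction)
    theta = np.arccos(direction[2])                  # polar angle from +z
    phi = np.arctan2(direction[1], direction[0])     # azimuth in [-pi, pi]
    return theta, phi

print(tpp_to_spherical(0.0, 0.0, 0.1, -0.2))
```

Registering LFs on the sphere rather than on planes is what removes the projection bias: rays from differently rotated captures share one angular parameterization, so a rotation of the rig becomes a simple transform of (theta, phi).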
Citations: 0
Broadband Hyperspectral Phase Retrieval From Noisy Data
Pub Date: 2020-10-01 · DOI: 10.1109/ICIP40778.2020.9191204
V. Katkovnik, I. Shevkunov, K. Egiazarian
Hyperspectral (HS) imaging retrieves information from data obtained across a wide range of spectral channels. The object to reconstruct is a 3D cube, where two coordinates are spatial and the third is spectral. We assume that this cube is complex-valued, i.e., characterized by spatially and frequency-varying amplitude and phase. The observations are squared magnitudes, measured as intensities summed over the spectrum. The HS phase retrieval problem is formulated as the reconstruction of the HS complex-valued object cube from Gaussian noisy intensity observations. The derived iterative algorithm includes an original proximal spectral analysis operator and sparsity modeling for complex-valued 3D cubes. The efficiency of the algorithm is confirmed by simulation tests.
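One spectral channel of such a reconstruction can be sketched as a proximal, Wirtinger-flow-style iteration: a gradient step on the squared-magnitude data term followed by a sparsifying shrinkage. The FFT forward model, the step size, and plain soft-thresholding (standing in for the paper's proximal spectral analysis operator over the 3D cube) are all assumptions.

```python
import numpy as np

def soft(x, t):                                    # complex soft-thresholding
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

def retrieve_channel(I, iters=200, step=0.1, tau=1e-3):
    """I: observed intensities |F u|^2 -> estimate of the complex-valued u."""
    F  = lambda v: np.fft.fft2(v, norm="ortho")
    FH = lambda v: np.fft.ifft2(v, norm="ortho")
    u = FH(np.sqrt(I))                             # crude spectral initialization
    for _ in range(iters):
        U = F(u)
        grad = FH((np.abs(U) ** 2 - I) * U)        # Wirtinger gradient of the data term
        u = soft(u - step * grad, tau)             # proximal (sparsity) step
    return u
```

In the full algorithm this shrinkage is replaced by joint sparsity modeling over the whole complex-valued spectral cube, which couples the channels instead of treating them independently as here.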
Citations: 2