
Latest publications: 2016 IEEE International Conference on Image Processing (ICIP)

Hyperbolic wavelet leaders for anisotropic multifractal texture analysis
Pub Date : 2016-09-25 DOI: 10.1109/ICIP.2016.7533022
S. Roux, P. Abry, B. Vedel, S. Jaffard, H. Wendt
Scale invariance has proven to be a crucial concept in texture modeling and analysis. Isotropic, self-similar fractional Brownian fields (2D-fBf) are often used as the natural reference process for modeling scale-free textures, and their analysis is standardly conducted with the 2D discrete wavelet transform. Generalizations of the 2D-fBf have been considered independently in two respects: anisotropy in the texture can be allowed while preserving exact self-similarity, in which case analysis must rely on the 2D hyperbolic wavelet transform; and multifractality enables more versatile scale-free models but requires isotropy, with analysis then achieved using wavelet leaders. The present paper proposes a first unifying extension, enabled by two key contributions: the definition of a 2D process that jointly incorporates anisotropy and multifractality, and the definition of the corresponding analysis tool, the hyperbolic wavelet leaders. Their relevance is studied by numerical simulations on synthetic scale-free textures.
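The wavelet-leader construction can be illustrated with a plain 2D Haar transform: a leader at scale j is the supremum of coefficient magnitudes over a spatial neighborhood at scale j and over the corresponding dyadic cells of all finer scales. The numpy sketch below uses the standard isotropic (square-neighborhood) form, not the authors' hyperbolic, anisotropically scaled variant or their implementation:

```python
import numpy as np

def haar2_details(img, levels):
    """Non-normalized 2D Haar analysis: one detail-magnitude map per scale."""
    a = img.astype(float)
    details = []
    for _ in range(levels):
        a00, a01 = a[0::2, 0::2], a[0::2, 1::2]
        a10, a11 = a[1::2, 0::2], a[1::2, 1::2]
        dh = (a00 + a01 - a10 - a11) / 4.0   # horizontal detail
        dv = (a00 - a01 + a10 - a11) / 4.0   # vertical detail
        dd = (a00 - a01 - a10 + a11) / 4.0   # diagonal detail
        details.append(np.maximum.reduce([abs(dh), abs(dv), abs(dd)]))
        a = (a00 + a01 + a10 + a11) / 4.0    # approximation for next scale
    return details

def wavelet_leaders(details):
    """Leader at scale j: sup of magnitudes over a 3x3 neighbourhood at scale j
    and over the corresponding dyadic cells of all finer scales."""
    out, running = [], None
    for d in details:                        # finest -> coarsest
        if running is None:
            running = d
        else:
            fine = np.maximum.reduce([running[0::2, 0::2], running[0::2, 1::2],
                                      running[1::2, 0::2], running[1::2, 1::2]])
            running = np.maximum(d, fine)    # carry finer-scale maxima down
        p = np.pad(running, 1, mode='edge')
        h, w = running.shape
        out.append(np.maximum.reduce([p[i:i + h, j:j + w]
                                      for i in range(3) for j in range(3)]))
    return out

rng = np.random.default_rng(0)
texture = rng.random((64, 64))
dets = haar2_details(texture, 3)
leads = wavelet_leaders(dets)
```

In multifractal analysis, log-moments of such leaders across scales estimate the scaling exponents; the hyperbolic variant would additionally index scales by an anisotropy ratio.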
{"title":"Hyperbolic wavelet leaders for anisotropic multifractal texture analysis","authors":"S. Roux, P. Abry, B. Vedel, S. Jaffard, H. Wendt","doi":"10.1109/ICIP.2016.7533022","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533022","url":null,"abstract":"Scale invariance has proven a crucial concept in texture modeling and analysis. Isotropic and self-similar fractional Brownian fields (2D-fBf) are often used as the natural reference process to model scale free textures. Its analysis is standardly conducted using the 2D discrete wavelet transform. Generalizations of 2D-fBf were considered independently in two respects: Anisotropy in the texture is allowed while preserving exact self-similarity, analysis then needs to be conducted using the 2D-Hyperbolic wavelet transform; Multifractality enables more versatile scale free models but requires isotropy, analysis is then achieved using wavelet leaders. The present paper proposes a first unifying extension, which is enabled through the following two key contributions: The definition of 2D process that incorporates jointly anisotropy and multi-fractality : The definition of the corresponding analysis tool, the hyperbolic wavelet leaders. Their relevance are studied by numerical simulations using synthetic scale free textures.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"49 1","pages":"3558-3562"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85398767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Multi-view semantic temporal video segmentation
Pub Date : 2016-09-25 DOI: 10.1109/ICIP.2016.7533100
T. Theodoridis, A. Tefas, I. Pitas
In this work, we propose a multi-view temporal video segmentation approach that employs a Gaussian scoring process to determine the best segmentation positions. By exploiting the semantic action information offered by the dense-trajectories video description, this method can also detect intra-shot actions, unlike shot boundary detection approaches. On the IMPART multi-view action data set, we compare the temporal segmentation results of the proposed method with those of single-view and multi-view methods, and we compare action recognition results obtained on ground-truth video segments with those obtained on the proposed multi-view segments.
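The abstract does not spell out the scoring function, so as a toy stand-in one can score each candidate cut by the distance between Gaussian-weighted feature averages on either side of it. The weighting and parameter choices below are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def gaussian_cut_scores(feats, sigma=3.0, win=8):
    """Score each candidate cut t by the distance between Gaussian-weighted
    feature averages just before and just after t."""
    n = len(feats)
    w = np.exp(-0.5 * (np.arange(1, win + 1) / sigma) ** 2)
    w /= w.sum()                     # normalized weights, nearest frame heaviest
    scores = np.zeros(n)
    for t in range(win, n - win):
        left = (w[::-1, None] * feats[t - win:t]).sum(axis=0)
        right = (w[:, None] * feats[t:t + win]).sum(axis=0)
        scores[t] = np.linalg.norm(left - right)
    return scores

# two synthetic "actions" with constant per-frame features
feats = np.vstack([np.zeros((30, 2)), np.full((30, 2), 5.0)])
scores = gaussian_cut_scores(feats)
best_cut = int(np.argmax(scores))
```

Local maxima of the score curve would then serve as the proposed segment boundaries.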
{"title":"Multi-view semantic temporal video segmentation","authors":"T. Theodoridis, A. Tefas, I. Pitas","doi":"10.1109/ICIP.2016.7533100","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533100","url":null,"abstract":"In this work, we propose a multi-view temporal video segmentation approach that employs a Gaussian scoring process for determining the best segmentation positions. By exploiting the semantic action information that the dense trajectories video description offers, this method can detect intra-shot actions as well, unlike shot boundary detection approaches. We compare the temporal segmentation results of the proposed method to both single-view and multi-view methods, and also compare the action recognition results obtained on ground truth video segments to the ones obtained on the proposed multi-view segments, on the IMPART multi-view action data set.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"25 1","pages":"3947-3951"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89304766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Learning clustering-based linear mappings for quantization noise removal
Pub Date : 2016-09-25 DOI: 10.1109/ICIP.2016.7533151
Martin Alain, C. Guillemot, D. Thoreau, P. Guillotel
This paper describes a novel scheme to reduce the quantization noise of compressed videos and improve overall coding performance. The proposed scheme first clusters noisy patches of the compressed sequence. Then, at the encoder side, a linear mapping is learned for each cluster between the noisy patches and the corresponding source patches. The linear mappings are then transmitted to the decoder, where they are applied to perform de-noising. The method has been tested with the HEVC standard, leading to a bitrate saving of up to 9.63%.
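The pipeline lends itself to a compact sketch: cluster the noisy patches, fit one affine least-squares map per cluster against the source patches (encoder side), then reassign and apply the maps (decoder side). A toy numpy version on synthetic patch vectors, with a plain k-means standing in for whatever clustering the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(axis=0)
    lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
    return C, lab

def learn_mappings(noisy, clean, k=2):
    """Encoder side: one affine map per cluster of noisy patches."""
    C, lab = kmeans(noisy, k)
    maps = {}
    for j in range(k):
        X1 = np.hstack([noisy[lab == j], np.ones(((lab == j).sum(), 1))])
        maps[j], *_ = np.linalg.lstsq(X1, clean[lab == j], rcond=None)
    return C, maps

def apply_mappings(noisy, C, maps):
    """Decoder side: assign to nearest centroid, apply that cluster's map."""
    lab = np.argmin(((noisy[:, None] - C[None]) ** 2).sum(-1), axis=1)
    out = np.empty_like(noisy)
    for j, W in maps.items():
        X1 = np.hstack([noisy[lab == j], np.ones(((lab == j).sum(), 1))])
        out[lab == j] = X1 @ W
    return out

# synthetic 4-D "patches" from two structures, plus quantization-like noise
clean = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(4, 1, (200, 4))])
noisy = clean + rng.normal(0, 0.5, clean.shape)
C, maps = learn_mappings(noisy, clean, k=2)
restored = apply_mappings(noisy, C, maps)
mse_before = ((noisy - clean) ** 2).mean()
mse_after = ((restored - clean) ** 2).mean()
```

Because the identity map is in the hypothesis class, per-cluster least squares can only reduce the training error relative to the compressed patches.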
{"title":"Learning clustering-based linear mappings for quantization noise removal","authors":"Martin Alain, C. Guillemot, D. Thoreau, P. Guillotel","doi":"10.1109/ICIP.2016.7533151","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533151","url":null,"abstract":"This paper describes a novel scheme to reduce the quantization noise of compressed videos and improve the overall coding performances. The proposed scheme first consists in clustering noisy patches of the compressed sequence. Then, at the encoder side, linear mappings are learned for each cluster between the noisy patches and the corresponding source patches. The linear mappings are then transmitted to the decoder where they can be applied to perform de-noising. The method has been tested with the HEVC standard, leading to a bitrate saving of up to 9.63%.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"1 1","pages":"4200-4204"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86184540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Speeding-up a convolutional neural network by connecting an SVM network
Pub Date : 2016-09-25 DOI: 10.1109/ICIP.2016.7532766
J. Pasquet, M. Chaumont, G. Subsol, Mustapha Derras
Deep neural networks yield positive object detection results in aerial imaging. To cope with the massive computation time required, we propose to connect an SVM network to the different feature maps of a CNN. After training this SVM network, we use an activation path to cross the network in a predefined order, stopping the crossing as early as possible. This early exit from the CNN reduces the computational burden. Experimental results are reported for an industrial application in urban object detection: the computation cost could potentially be reduced by 98%, while performance is slightly improved; for example, at 55% recall, precision increases by 5%.
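The early-exit idea can be sketched independently of the actual architecture: attach one linear classifier per feature-map stage and let a sample exit as soon as its score clears a confidence threshold. In the sketch below, ordinary least squares stands in for the SVM training and the two "feature maps" are synthetic (a weak early stage, a strong late one); none of this is the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear(X, y):
    # least-squares stand-in for a linear SVM on one feature-map stage
    X1 = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

def cascade_predict(stages, feat_stack, tau=0.5):
    """Cross the stages in order; a sample exits at the first confident score."""
    n = feat_stack[0].shape[0]
    pred = np.zeros(n)
    used = np.zeros(n, dtype=int)        # stages consumed per sample
    active = np.ones(n, dtype=bool)
    for s, (w, X) in enumerate(zip(stages, feat_stack), start=1):
        score = np.hstack([X, np.ones((n, 1))]) @ w
        done = active & ((np.abs(score) > tau) | (s == len(stages)))
        pred[done] = np.sign(score[done])
        used[done] = s
        active &= ~done
    return pred, used

y = rng.choice([-1.0, 1.0], 200)
early = y[:, None] * 0.5 + rng.normal(0, 0.5, (200, 3))   # weak early features
late = y[:, None] * 2.0 + rng.normal(0, 0.3, (200, 3))    # strong late features
stages = [fit_linear(early, y), fit_linear(late, y)]
pred, used = cascade_predict(stages, [early, late])
```

The average of `used` over many samples is the quantity the speed-up claim is about: most samples should be decided before the deepest, most expensive stages are reached.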
{"title":"Speeding-up a convolutional neural network by connecting an SVM network","authors":"J. Pasquet, M. Chaumont, G. Subsol, Mustapha Derras","doi":"10.1109/ICIP.2016.7532766","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532766","url":null,"abstract":"Deep neural networks yield positive object detection results in aerial imaging. To deal with the massive computational time required, we propose to connect an SVM Network to the different feature maps of a CNN. After the training of this SVM Network, we use an activation path to cross the network in a predefined order. We stop the crossing as quickly as possible. This early exit from the CNN allows us to reduce the computational burden. Experimental results are obtained for an industrial application in urban object detection. We show that potentially the computation cost could be reduced by 98%. Additionally, performance is slightly improved; for example, for a 55% recall, precision increases by 5%.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"18 1","pages":"2286-2290"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76357481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Multiple features learning via rotation strategy
Pub Date : 2016-09-25 DOI: 10.1109/ICIP.2016.7532750
J. Xia, L. Bombrun, Y. Berthoumieu, C. Germain
Images are usually represented by different groups of features, such as color, shape, and texture attributes. In this paper, we propose a classification approach that integrates multiple features, such as spectral and spatial information. We refer to this approach as the multiple feature learning via rotation (MFL-R) strategy; it adopts a rotation-based ensemble method built on a data transformation step. Five data transformation methods are used in the MFL-R framework: principal component analysis (PCA), neighborhood preserving embedding (NPE), linear local tangent space alignment (LLTSA), linearity preserving projection (LPP), and multiple feature combination via manifold learning and patch alignment (MLPA). Experimental results on two hyperspectral remote sensing images demonstrate that MFL-R with MLPA achieves better performance and is not sensitive to the tuning parameters.
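The rotation idea, familiar from rotation forests, is easy to sketch: split the features into groups, fit a transform (here PCA, one of the five listed) per group on a bootstrap sample, assemble a block-diagonal rotation, and train a base learner on the rotated data. A nearest-centroid classifier stands in for the base learners below; this is a toy illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def pca_rotation(X, groups):
    """Block-diagonal rotation: one PCA basis per feature group."""
    R = np.zeros((X.shape[1], X.shape[1]))
    for g in groups:
        Xg = X[:, g] - X[:, g].mean(axis=0)
        _, _, Vt = np.linalg.svd(Xg, full_matrices=False)
        R[np.ix_(g, g)] = Vt.T
    return R

def fit_ensemble(X, y, n_members=5, n_groups=2):
    members = []
    for _ in range(n_members):
        groups = np.array_split(rng.permutation(X.shape[1]), n_groups)
        boot = rng.choice(len(X), len(X), replace=True)
        R = pca_rotation(X[boot], groups)
        Z = X @ R
        centroids = {c: Z[y == c].mean(axis=0) for c in np.unique(y)}
        members.append((R, centroids))
    return members

def predict(members, X):
    votes = []
    for R, centroids in members:
        Z = X @ R
        labels = np.array(sorted(centroids))
        D = np.stack([np.linalg.norm(Z - centroids[c], axis=1) for c in labels])
        votes.append(labels[np.argmin(D, axis=0)])
    votes = np.stack(votes)
    return np.array([np.bincount(col).argmax() for col in votes.T])

X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
y = np.repeat([0, 1], 100)
members = fit_ensemble(X, y)
y_hat = predict(members, X)
```

Diversity comes from the random feature grouping and the bootstrap, while each rotation preserves all information (it is invertible), which is the usual argument for rotation-based ensembles.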
{"title":"Multiple features learning via rotation strategy","authors":"J. Xia, L. Bombrun, Y. Berthoumieu, C. Germain","doi":"10.1109/ICIP.2016.7532750","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532750","url":null,"abstract":"Images are usually represented by different groups of features, such as color, shape and texture attributes. In this paper, we propose a classification approach that integrates multiple features, such as spectral and spatial information. We refer this approach to multiple feature learning via rotation (MFL-R) strategy, which adopt a rotation-based ensemble method by using a data transformation approach. Five data transformation methods, including principal component analysis (PCA), neighborhood preserving embedding (NPE), linear local tangent space alignment (LLTSA), linearity preserving projection (LPP) and multiple feature combination via manifold learning and patch alignment (MLPA) are used in the MFL-R framework. Experimental results over two hyperspectral remote sensing images demonstrate that MFL-R with MLPA gains better performances and is not sensitive to the tuning parameters.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"9 6 1","pages":"2206-2210"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86449357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
On the performance of 3D just noticeable difference models
Pub Date : 2016-09-25 DOI: 10.1109/ICIP.2016.7532511
Yu Fan, M. Larabi, F. A. Cheikh, C. Fernandez-Maloigne
The just noticeable difference (JND) notion reflects the maximum tolerable distortion. It has been extensively used for the optimization of 2D applications. For stereoscopic 3D (S3D) content, this notion differs because it relies on distinct mechanisms linked to our binocular vision. Unlike in 2D, 3D-JND models have appeared only recently and the related literature is rather limited. These models can be used to improve compression and quality assessment for S3D content. In this paper, we propose a deep comparative study of the existing 3D-JND models. Additionally, to analyze their performance, the 3D-JND models have been integrated into a recent metric dedicated to stereoscopic image quality assessment (SIQA). Results are reported on two widely used S3D image databases.
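As background for what a JND model computes, here is a minimal 2D luminance-adaptation threshold in the spirit of classic models such as Chou and Li's; the exact constants vary across papers, and nothing here reproduces the 3D models surveyed in the paper:

```python
import numpy as np

def luminance_jnd(bg):
    """Visibility threshold vs. background luminance (Chou-Li-style curve;
    the constants are illustrative and differ slightly across the literature)."""
    bg = np.asarray(bg, dtype=float)
    return np.where(bg <= 127,
                    17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                    3.0 / 128.0 * (bg - 127.0) + 3.0)

def perceptible(ref, dist, k=3):
    """Flag pixels whose error exceeds the local JND threshold."""
    pad = np.pad(ref.astype(float), k // 2, mode='edge')
    h, w = ref.shape
    # background luminance = local mean over a k x k window (box filter)
    bg = np.mean([pad[i:i + h, j:j + w] for i in range(k) for j in range(k)],
                 axis=0)
    return np.abs(dist.astype(float) - ref) > luminance_jnd(bg)
```

Distortions flagged `False` are, under this model, invisible and therefore free from a rate-distortion point of view; 3D-JND models extend such thresholds with binocular masking terms.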
{"title":"On the performance of 3D just noticeable difference models","authors":"Yu Fan, M. Larabi, F. A. Cheikh, C. Fernandez-Maloigne","doi":"10.1109/ICIP.2016.7532511","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532511","url":null,"abstract":"The just noticeable difference (JND) notion reflects the maximum tolerable distortion. It has been extensively used for the optimization of 2D applications. For stereoscopic 3D (S3D) content, this notion is different since it relies on different mechanisms linked to our binocular vision. Unlike 2D, 3D-JND models appeared recently and the related literature is rather limited. These models can be used for the sake of compression and quality assessment improvement for S3D content. In this paper, we propose a deep and comparative study of the existing 3D-JND models. Additionally, in order to analyze their performance, the 3D-JND models have been integrated in recent metric dedicated to stereoscopic image quality assessment (SIQA). The results are reported on two widely used S3D image databases.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"95 1","pages":"1017-1021"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83183427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Perceptually-adaptive quantization for stereoscopic video coding
Pub Date : 2016-09-25 DOI: 10.1109/ICIP.2016.7533160
Sami Jaballah, M. Larabi, J. B. Tahar
In this paper, we present a novel perceptually-based optimization to improve stereoscopic video coding efficiency. The main idea of the proposed scheme is to adaptively adjust the quantization parameter by taking the perceptual characteristics of the human visual system into account. For this, a saliency map is generated from both views and then segmented into salient and non-salient regions. To make the scheme effective, and inspired by binocular suppression theory, asymmetry is ensured by altering the saliency map rather than the view. As a result, the proposed perceptual coding scheme effectively reduces the bit budget without affecting perceptual quality, using an asymmetric video coding optimization that takes the saliency map of each view into account. Experimental results on HEVC-MV show that the proposed algorithm can achieve over 20% bit-rate savings while preserving the perceived image quality.
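A minimal version of saliency-driven QP adjustment: average the saliency map per coding block, then offset the base QP down in salient blocks and up elsewhere. Block size, offset, and threshold below are illustrative assumptions; the paper's scheme additionally treats the two views asymmetrically:

```python
import numpy as np

def block_mean(sal_map, block=16):
    """Mean saliency per coding block (map size assumed a multiple of `block`)."""
    h, w = sal_map.shape
    return sal_map.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def adaptive_qp(block_sal, base_qp=32, max_offset=4, thresh=0.5):
    """Finer quantization (lower QP) where the block is salient."""
    return np.where(block_sal >= thresh,
                    base_qp - max_offset,
                    base_qp + max_offset).astype(int)

sal = np.zeros((32, 32))
sal[:, :16] = 0.9          # salient left half of the frame
sal[:, 16:] = 0.1
qp = adaptive_qp(block_mean(sal))
```

Keeping the offsets symmetric around the base QP keeps the average rate roughly constant while redistributing bits toward salient regions.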
{"title":"Perceptually-adaptive quantization for stereoscopic video coding","authors":"Sami Jaballah, M. Larabi, J. B. Tahar","doi":"10.1109/ICIP.2016.7533160","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533160","url":null,"abstract":"In this paper, we present a novel perceptually-based optimization for the improvement of stereoscopic video coding efficiency. The main idea of this proposed scheme is to adaptively adjust the quantization parameter by taking into account the Human Visual System perceptual characteristics. For this, a saliency map is generated from both views and then segmented into salient and non-salient regions. To make the proposed scheme effective, and inspired from the binocular suppression theory, the asymmetry is ensured by altering the saliency map and not the view. As a result, the proposed perceptual coding scheme effectively reduces the bit-budget without affecting the perceptual quality based on an optimization approach with asymmetric video coding taking into account the saliency map of each view. Experimental results on HEVC-MV show that the proposed algorithm can achieve over 20% bit-rate saving while preserving the perceived image quality.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"33 1","pages":"4245-4249"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80336014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Texture image classification with Riemannian Fisher vectors
Pub Date : 2016-09-25 DOI: 10.1109/ICIP.2016.7533019
Ioana Ilea, L. Bombrun, C. Germain, R. Terebeș, M. Borda, Y. Berthoumieu
This paper introduces a generalization of Fisher vectors to the Riemannian manifold. The proposed descriptors, called Riemannian Fisher vectors, are first defined based on a mixture model of Riemannian Gaussian distributions. Next, their expressions are derived and applied in the context of texture image classification. The results are compared to those given by two recently proposed algorithms, the bag of Riemannian words and R-VLAD. In addition, the most discriminant Riemannian Fisher vectors are identified.
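The geometric ingredient underneath Riemannian Gaussian mixtures (and hence these descriptors, when the local features are covariance matrices) is the affine-invariant distance on the manifold of symmetric positive definite (SPD) matrices. A numpy sketch of just that distance, not of the full Fisher-vector derivation:

```python
import numpy as np

def spd_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    w, V = np.linalg.eigh(A)
    A_isqrt = (V / np.sqrt(w)) @ V.T          # A^{-1/2} via eigendecomposition
    lam = np.linalg.eigvalsh(A_isqrt @ B @ A_isqrt)
    return float(np.sqrt((np.log(lam) ** 2).sum()))
```

With this distance, a Riemannian Gaussian has density proportional to exp(-d²(X, X̄) / 2σ²), and Fisher-vector entries are gradients of the mixture log-likelihood with respect to the (X̄, σ) parameters; the distance is invariant under congruence X ↦ G X Gᵀ for any invertible G.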
{"title":"Texture image classification with Riemannian fisher vectors","authors":"Ioana Ilea, L. Bombrun, C. Germain, R. Terebeș, M. Borda, Y. Berthoumieu","doi":"10.1109/ICIP.2016.7533019","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533019","url":null,"abstract":"This paper introduces a generalization of the Fisher vectors to the Riemannian manifold. The proposed descriptors, called Riemannian Fisher vectors, are defined first, based on the mixture model of Riemannian Gaussian distributions. Next, their expressions are derived and they are applied in the context of texture image classification. The results are compared to those given by the recently proposed algorithms, bag of Riemannian words and R-VLAD. In addition, the most discriminant Riemannian Fisher vectors are identified.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"16 1","pages":"3543-3547"},"PeriodicalIF":0.0,"publicationDate":"2016-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91216307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Multi-source domain adaptation using C^1-smooth subspaces interpolation
Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7532879
Jorge Batista, K. Krakowski, Luís Machado, P. Martins, F. Leite
Manifold-based domain adaptation algorithms are receiving increasing attention in computer vision for modeling distribution shifts between source and target domains. In contrast to early works, which mainly explore intermediate subspaces along geodesics, in this work we propose to interpolate subspaces through C^1-smooth curves on the Grassmann manifold. The new method is based on the geometric Casteljau algorithm, which generates smooth interpolating polynomial curves on non-Euclidean spaces and can be extended to generate polynomial splines that interpolate a given set of data on the Grassmann manifold. To evaluate the usefulness of the proposed interpolating curves on vision-related problems, several experiments were conducted. We show the advantage of smooth subspace interpolation in multi-source unsupervised domain adaptation problems and in object recognition problems across datasets.
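The Casteljau algorithm itself is a pairwise-interpolation recursion; the manifold version is obtained by replacing the Euclidean straight-line step with a geodesic step. A sketch with the interpolation step pluggable (the Grassmann geodesic formula is omitted here, so with the default Euclidean line this only illustrates the recursion, not the paper's method):

```python
import numpy as np

def de_casteljau(points, t, interp=None):
    """Evaluate a Bezier-type curve at t by repeated pairwise interpolation.
    `interp(p, q, t)` is the interpolation step: a straight line by default,
    to be replaced by the geodesic of the manifold (e.g. Grassmann) in general."""
    if interp is None:
        interp = lambda p, q, t: (1.0 - t) * p + t * q
    pts = [np.asarray(p, dtype=float) for p in points]
    while len(pts) > 1:
        pts = [interp(p, q, t) for p, q in zip(pts[:-1], pts[1:])]
    return pts[0]
```

With n + 1 control points the recursion performs n levels of interpolation, and in the Euclidean case reproduces the degree-n Bezier polynomial; the C^1 spline construction of the paper stitches such segments with matched endpoint velocities.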
{"title":"Multi-source domain adaptation using C⁁1-smooth subspaces interpolation","authors":"Jorge Batista, K. Krakowski, Luís Machado, P. Martins, F. Leite","doi":"10.1109/ICIP.2016.7532879","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532879","url":null,"abstract":"Manifold-based domain adaptation algorithms are receiving increasing attention in computer vision to model distribution shifts between source and target domain. In contrast to early works, that mainly explore intermediate subspaces along geodesics, in this work we propose to interpolate subspaces through C1-smooth curves on the Grassmann manifold. The new methodis based on the geometric Casteljau algorithm that is used to generate smooth interpolating polynomial curves on non-euclidean spaces and can be extended to generate polynomial splines that interpolate a given set of data on the Grassmann manifold. To evaluate the usefulness of the proposed interpolating curves on vision related problems, several experiments were conducted. We show the advantage of using smooth subspaces interpolation in multi-source unsupervised domain adaptation problems and in object recognition problems across datasets.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"1 1","pages":"2846-2850"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75870341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Vascular network formation in silico using the extended cellular Potts model
Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7532946
D. Svoboda, V. Ulman, Peter Kovác, B. Salingova, L. Tesarová, I. Koutná, P. Matula
Cardiovascular diseases are among the most widespread illnesses in developed countries. Regenerative medicine and tissue modeling applications are therefore highly interested in studying the ability of endothelial cells, derived from human stem cells, to form vascular networks. Several characteristics can be measured on images of these networks to describe the quality of the endothelial cells. With advances in image processing, automatic analysis of these complex images is becoming increasingly common. In this study, we introduce a new graph structure and additional constraints into the cellular Potts model, a framework commonly utilized in computational biology. Our extension allows the generation of visually plausible synthetic image sequences of evolving fluorescently labeled vascular networks, together with ground truth data. Such generated datasets can subsequently be used for testing and validating methods employed for the analysis and measurement of images of real vascular networks.
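A cellular Potts model in its basic form is a lattice of cell IDs evolved by Metropolis spin-copy attempts against an adhesion plus volume-constraint energy; the paper's extension adds a graph structure and further constraints on top of something like this minimal 2D version (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def cpm_energy(lattice, target_vol, lam_vol=1.0, j_adh=1.0):
    """Adhesion (unlike 4-neighbour pairs) + quadratic volume constraint."""
    E = j_adh * float((lattice[:, :-1] != lattice[:, 1:]).sum()
                      + (lattice[:-1, :] != lattice[1:, :]).sum())
    for cid, vol in target_vol.items():
        E += lam_vol * (np.sum(lattice == cid) - vol) ** 2
    return E

def metropolis_step(lattice, target_vol, T=1.0):
    """One spin-copy attempt: copy a random neighbour's ID into a random site."""
    h, w = lattice.shape
    i, j = rng.integers(h), rng.integers(w)
    di, dj = rng.choice(np.array([(-1, 0), (1, 0), (0, -1), (0, 1)]))
    ni, nj = (i + di) % h, (j + dj) % w
    if lattice[ni, nj] == lattice[i, j]:
        return False
    old, E0 = lattice[i, j], cpm_energy(lattice, target_vol)
    lattice[i, j] = lattice[ni, nj]
    dE = cpm_energy(lattice, target_vol) - E0
    if dE > 0 and rng.random() >= np.exp(-dE / T):
        lattice[i, j] = old                  # reject the copy
        return False
    return True

lattice = np.zeros((12, 12), dtype=int)
lattice[4:8, 4:8] = 1                        # one cell with target volume 16
for _ in range(200):
    metropolis_step(lattice, {1: 16}, T=0.5)
```

Recomputing the full energy each step keeps the sketch short; real implementations update only the local energy difference around the changed site.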
{"title":"Vascular network formation in silico using the extended cellular potts model","authors":"D. Svoboda, V. Ulman, Peter Kovác, B. Salingova, L. Tesarová, I. Koutná, P. Matula","doi":"10.1109/ICIP.2016.7532946","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532946","url":null,"abstract":"Cardiovascular diseases belong to the most widespread illnesses in the developed countries. Therefore, the regenerative medicine and tissue modeling applications are highly interested in studying the ability of endothelial cells, derived from human stem cells, to form vascular networks. Several characteristics can be measured on images of these networks and hence describe the quality of the endothelial cells. With advances in the image processing, automatic analysis of these complex images becomes increasingly common. In this study, we introduce a new graph structure and additional constraints to the cellular Potts model, a framework commonly utilized in computational biology. Our extension allows to generate visually plausible synthetic image sequences of evolving fluorescently labeled vascular networks with ground truth data. Such generated datasets can be subsequently used for testing and validating methods employed for the analysis and measurement of the images of real vascular networks.","PeriodicalId":6521,"journal":{"name":"2016 IEEE International Conference on Image Processing (ICIP)","volume":"158 1","pages":"3180-3183"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75113460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7