
2014 IEEE International Conference on Image Processing (ICIP): Latest Publications

Compressed image quality assessment: Application to an interactive upper limb radiology atlas
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7025100
Y. Gaudeau, Julien Lambert, N. Labonne, J. Moureaux
It is accepted that lossy compression can be used on medical images under the control of experts. Lossy compression offers a substantial reduction in the volume of medical images and is thus an efficient solution to both the storage and the transmission problems in the medical context. Furthermore, the use of touchpads in medicine has grown, and many medical applications are now available on this kind of device. Since the storage capacity of such terminals is limited, lossy compression is a good way to enable storage-hungry medical applications on them. In this work, we address the problem of quality assessment of MRI scans from an interactive upper limb radiology atlas (Monster Anatomy Upper Limb). The quality assessment protocol is adapted from the International Telecommunication Union recommendation ITU-R BT.500-11. In this paper, we propose to determine compression thresholds that are acceptable with respect to the quality required for the proper use of this radiology atlas. We show that this application (using a simple JPEG encoder) has a lossy compression threshold ranging from 13:1 for the majority of the atlas images up to 27:1 for the hand images. Finally, several objective image quality assessment (IQA) algorithms are also linked to the subjective ratings of the panel of health professionals.
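As a concrete illustration of this kind of threshold study, the following Python sketch (not the authors' protocol; the input file name is hypothetical) sweeps the quality setting of a standard JPEG encoder and reports the resulting compression ratio and PSNR, from which an acceptability limit such as 13:1 or 27:1 could be read off against subjective scores:

```python
import io
import numpy as np
from PIL import Image

def jpeg_ratio_psnr(img: Image.Image, quality: int):
    """Compress `img` as JPEG at `quality`; return (compression ratio, PSNR in dB)."""
    raw_bytes = np.asarray(img).nbytes
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    ratio = raw_bytes / buf.tell()
    buf.seek(0)
    decoded = np.asarray(Image.open(buf), dtype=np.float64)
    mse = np.mean((np.asarray(img, dtype=np.float64) - decoded) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    return ratio, psnr

if __name__ == "__main__":
    img = Image.open("mri_slice.png").convert("L")   # hypothetical 8-bit MRI slice
    for q in range(95, 4, -10):
        ratio, psnr = jpeg_ratio_psnr(img, q)
        print(f"quality={q:3d}  ratio={ratio:6.1f}:1  PSNR={psnr:5.1f} dB")
```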
Citations: 7
Model based clustering for 3D directional features: Application to depth image analysis
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7025765
A. Hasnat, O. Alata, A. Trémeau
Model Based Clustering (MBC) is a method that estimates a model for the data and produces probabilistic clustering. In this paper, we propose a novel MBC method to cluster three dimensional directional features. We assume that the features are generated from a finite statistical mixture model based on the von Mises-Fisher (vMF) distribution. The core elements of our proposed method are: (a) generate a set of vMF Mixture Models (vMFMM) and (b) select the optimal model using a parsimony based approach with information criteria. We empirically validate our proposed method by applying it on simulated data. Next, we apply it to cluster image normals in order to perform depth image analysis.
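A minimal sketch of the clustering loop under simplifying assumptions: 3-D unit vectors, Banerjee's closed-form approximation of the concentration parameter, and BIC standing in for the parsimony-based criteria used in the paper:

```python
import numpy as np

def log_vmf3(x, mu, kappa):
    # log density of a 3-D von Mises-Fisher: C_3(k) = k / (2*pi*(exp(k) - exp(-k)))
    log_c = np.log(kappa) - np.log(2 * np.pi) - (kappa + np.log1p(-np.exp(-2 * kappa)))
    return log_c + kappa * (x @ mu)

def fit_vmf_mixture(x, k, n_iter=50, seed=0):
    """EM for a k-component vMF mixture on unit vectors x of shape (n, 3)."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    mu = x[rng.choice(n, k, replace=False)].copy()
    kappa = np.full(k, 10.0)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibilities
        logp = np.stack([np.log(pi[j]) + log_vmf3(x, mu[j], kappa[j]) for j in range(k)], axis=1)
        gamma = np.exp(logp - logp.max(axis=1, keepdims=True))
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: mean directions, concentrations (Banerjee et al. approximation), weights
        for j in range(k):
            r = gamma[:, j] @ x
            r_norm = np.linalg.norm(r)
            mu[j] = r / r_norm
            rbar = r_norm / gamma[:, j].sum()
            kappa[j] = rbar * (d - rbar ** 2) / (1.0 - rbar ** 2)
        pi = gamma.mean(axis=0)
    logp = np.stack([np.log(pi[j]) + log_vmf3(x, mu[j], kappa[j]) for j in range(k)], axis=1)
    loglik = np.logaddexp.reduce(logp, axis=1).sum()
    n_params = k * (d - 1) + k + (k - 1)      # mean directions + concentrations + mixing weights
    bic = -2.0 * loglik + n_params * np.log(n)
    return bic, mu, kappa, pi

# model selection: keep the K with the lowest BIC (a stand-in for the paper's criteria)
rng = np.random.default_rng(1)
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)    # stand-in for depth-image normals
bic, mu, kappa, pi = min((fit_vmf_mixture(normals, k) for k in range(1, 6)), key=lambda t: t[0])
print("selected K:", len(pi))
```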
Citations: 0
Fast Newton active appearance models
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7025284
Jean Kossaifi, Georgios Tzimiropoulos, M. Pantic
Active Appearance Models (AAMs) are statistical models of shape and appearance widely used in computer vision to detect landmarks on objects like faces. Fitting an AAM to a new image can be formulated as a non-linear least-squares problem which is typically solved using iterative methods. Owing to its efficiency, Gauss-Newton optimization has been the standard choice over more sophisticated approaches like Newton. In this paper, we show that the AAM problem has structure which can be used to solve efficiently the original Newton problem without any approximations. We then make connections to the original Gauss-Newton algorithm and study experimentally the effect of the additional terms introduced by the Newton formulation on both fitting accuracy and convergence. Based on our derivations, we also propose a combined Newton and Gauss-Newton method which achieves promising fitting and convergence performance. Our findings are validated on two challenging in-the-wild data sets.
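To make the Gauss-Newton versus Newton contrast concrete, here is a toy nonlinear least-squares sketch (Rosenbrock-style residuals, not an AAM cost) that exposes the curvature term, the sum of residual-weighted Hessians, that the full Newton step keeps and Gauss-Newton drops:

```python
import numpy as np

def residual(p):
    # toy residuals; in an AAM these would be appearance differences
    return np.array([10.0 * (p[1] - p[0] ** 2), 1.0 - p[0]])

def jacobian(p):
    return np.array([[-20.0 * p[0], 10.0],
                     [-1.0, 0.0]])

def residual_hessians(p):
    # Hessian of each residual component, needed only by the full Newton step
    return [np.array([[-20.0, 0.0], [0.0, 0.0]]),
            np.zeros((2, 2))]

def step(p, newton=False):
    """One update for the cost 0.5 * ||r(p)||^2."""
    r, J = residual(p), jacobian(p)
    H = J.T @ J
    if newton:  # second-order term that Gauss-Newton ignores
        H = H + sum(ri * Hi for ri, Hi in zip(r, residual_hessians(p)))
    return p - np.linalg.solve(H, J.T @ r)

p0 = np.array([-1.2, 1.0])
print("Gauss-Newton step ->", step(p0, newton=False))
print("full Newton step  ->", step(p0, newton=True))
```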
Citations: 13
Learning visual categories through a sparse representation classifier based cross-category knowledge transfer
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7025032
Ying Lu, Liming Chen, A. Saidi, Zhaoxiang Zhang, Yunhong Wang
To address the challenging task of learning effective visual categories from limited training samples, we propose a new sparse representation classifier based transfer learning method, SparseTL, which propagates cross-category knowledge from multiple source categories to the target category. Specifically, we enhance the target classification task by learning a sparse representation based classifier that is both generative and discriminative, using the pairs of source categories most positively and most negatively correlated with the target category. We further improve the discriminative ability of the classifier by choosing the most discriminative bins of the feature vector with a feature selection process. The experimental results show that the proposed method achieves competitive performance on the NUS-WIDE Scene database compared to several state-of-the-art transfer learning algorithms while keeping a very efficient runtime.
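A minimal sketch of the sparse representation classifier that the method builds on (a plain ISTA solver for the l1 coding step; the cross-category transfer and the feature-bin selection stages are not reproduced here):

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - y) / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

def src_predict(D, labels, y):
    """Assign y to the class whose training atoms best reconstruct it."""
    a = ista(D, y)
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, a, 0.0))
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

# toy usage: two classes, five training vectors each, 20-D features
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 10))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms, one per training sample
labels = np.array([0] * 5 + [1] * 5)
query = D[:, 2] + 0.01 * rng.normal(size=20)  # close to a class-0 atom
print(src_predict(D, labels, query))          # expected: 0
```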
Citations: 2
Dimensionality reduction of visual features using sparse projectors for content-based image retrieval
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7025444
Romain Negrel, David Picard, P. Gosselin
In web-scale image retrieval, the most effective strategy is to aggregate local descriptors into a high-dimensional signature and then reduce it to a small dimensionality. Thanks to this strategy, web-scale image databases can be represented with a small index and explored using fast visual similarities. However, computing this index is very costly because of the high dimensionality of the signature projectors. In this work, we propose a new efficient method that greatly reduces the signature dimensionality at low computational and storage cost. Our method is based on the linear projection of the signature onto a small subspace using a sparse projection matrix. We report several experimental results on two standard datasets (Inria Holidays and Oxford) with 100k image distractors. We show that our method reduces both the projector storage cost and the computational cost of the projection step while incurring only a very slight loss in the mAP (mean Average Precision) performance of the resulting signatures.
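The storage and compute argument can be illustrated with a sparse random projector standing in for the learned projectors of the paper: only the nonzero weights have to be stored, and the projection cost scales with the number of nonzeros rather than with the full matrix size.

```python
import numpy as np
from scipy import sparse

d_in, d_out, density = 100_000, 128, 0.01     # signature dim, reduced dim, fraction of nonzeros
rng = np.random.default_rng(0)

# sparse projector: only ~density * d_out * d_in weights to store and multiply
P = sparse.random(d_out, d_in, density=density, format="csr",
                  random_state=0, data_rvs=rng.standard_normal)

signature = rng.standard_normal(d_in)         # stand-in for an aggregated local-descriptor signature
reduced = P @ signature                       # cost O(nnz) instead of O(d_out * d_in)

print("stored weights:", P.nnz, "vs dense:", d_out * d_in)
print("reduced signature:", reduced.shape)
```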
Citations: 14
Iterative Poisson-Gaussian noise parametric estimation for blind image denoising
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7025570
A. Jezierska, J. Pesquet, Hugues Talbot, C. Chaux
This paper deals with noise parameter estimation from a single image under Poisson-Gaussian noise statistics. The problem is formulated within a mixed discrete-continuous optimization framework. The proposed approach jointly estimates the signal of interest and the noise parameters. This is achieved by introducing an adjustable regularization term inside an optimized criterion, together with a data fidelity error measure. The optimal solution is sought iteratively by alternating the minimization of a label field and of a noise parameter vector. The noise parameters are updated at each iteration using an Expectation-Maximization approach. The proposed algorithm is inspired by a spatial regularization approach for vector quantization. We illustrate the usefulness of our approach on macroconfocal images. The identified noise parameters are applied to a denoising algorithm, yielding a complete denoising scheme.
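As a rough sanity check of the same noise model (far simpler than the paper's alternating discrete-continuous scheme), the Poisson-Gaussian parameters can be read off a patchwise moment fit, since under y = a*Poisson(x/a) + N(0, b) the local variance is affine in the local mean:

```python
import numpy as np

def estimate_poisson_gaussian(img, patch=8):
    """Fit var = a*mean + b over per-patch statistics (assumes roughly homogeneous patches)."""
    h, w = img.shape
    means, variances = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = img[i:i + patch, j:j + patch]
            means.append(block.mean())
            variances.append(block.var(ddof=1))
    a, b = np.polyfit(means, variances, deg=1)
    return a, b

# synthetic check: a = 2, b = 25 (sigma = 5), smooth ramp so patches are nearly homogeneous
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(20.0, 200.0, 256), (256, 1))
noisy = 2.0 * rng.poisson(clean / 2.0) + rng.normal(0.0, 5.0, clean.shape)
print(estimate_poisson_gaussian(noisy))       # roughly (2.0, 25.0)
```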
Citations: 10
Reduced-reference metric based on the quaternionic wavelet coefficients modeling by information criteria
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7025105
A. Traoré, P. Carré, C. Olivier
This paper proposes a new reduced-reference metric based on modeling Quaternionic Wavelet Transform (QWT) coefficients with Information Criteria (IC). To obtain the reduced references, we model the QWT coefficients using probability density functions (pdf) whose parameters serve as the reduced references. IC are used to build optimal histograms of the QWT coefficients and thus obtain their most likely pdfs. In the mixture model, IC are also used to determine the number of distributions. From these models, we propose a measure of degradation by comparing the probability density functions of the reference image with the distributions of the degraded image over the QWT subbands. We demonstrate that one phase of the QWT provides relevant information for image quality assessment. Tests confirmed the potential of this information and showed that the QWT yields a better correlation coefficient with the Human Visual System than the Discrete Wavelet Transform.
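A rough sketch of the reduced-reference pipeline, with an ordinary discrete wavelet transform (PyWavelets) standing in for the QWT and a generalized Gaussian fit standing in for the IC-selected pdfs: the sender transmits only the per-subband parameters, and the receiver scores the degraded image by a divergence between subband models.

```python
import numpy as np
import pywt
from scipy import stats

def subband_params(img, wavelet="db2", level=2):
    """Reduced reference: one (shape, scale) pair per detail subband."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    params = []
    for details in coeffs[1:]:                       # skip the approximation band
        for band in details:
            beta, _, scale = stats.gennorm.fit(band.ravel(), floc=0.0)
            params.append((beta, scale))
    return params

def rr_distortion(ref_params, degraded, wavelet="db2", level=2):
    """Sum of symmetrised KL divergences between reference and degraded subband models."""
    grid = np.linspace(-50.0, 50.0, 501)
    score = 0.0
    for (b1, s1), (b2, s2) in zip(ref_params, subband_params(degraded, wavelet, level)):
        p = stats.gennorm.pdf(grid, b1, scale=s1) + 1e-12
        q = stats.gennorm.pdf(grid, b2, scale=s2) + 1e-12
        p, q = p / p.sum(), q / q.sum()
        score += 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
    return score

# toy usage with a synthetic reference and a noisier "degraded" version
rng = np.random.default_rng(0)
ref = rng.normal(size=(128, 128))
print(rr_distortion(subband_params(ref), ref + 0.5 * rng.normal(size=(128, 128))))
```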
Citations: 3
Maximum likelihood extension for non-circulant deconvolution
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7025868
J. Portilla
Directly applying circular deconvolution to real-world blurred images usually results in boundary artifacts. Classic boundary extension techniques fail to provide likely results in terms of a circular boundary-condition observation model. Boundary reflection gives rise to non-smooth features, especially when obliquely oriented features encounter the image boundaries. Tapering the boundaries of the image support, or similar strategies (such as constrained diffusion), provides smoothness on the toroidal support; however, this does not guarantee consistency with the spectral properties of the blur (in particular, with its zeros). Here we propose a simple yet effective, model-derived method for extending real-world blurred images so that they become likely in terms of a Gaussian circular boundary-condition observation model. We achieve artifact-free results, even under highly unfavorable conditions where other methods fail.
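A minimal sketch of why boundary handling matters for circular (FFT-based) deconvolution; the raised-cosine taper below is a simple stand-in, not the maximum-likelihood extension proposed in the paper:

```python
import numpy as np

def wiener_deconv(blurred, psf, nsr=1e-3):
    """Circular (FFT) Wiener deconvolution; `psf` is assumed origin-centred for the FFT."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(G * np.fft.fft2(blurred)))

def taper_extend(img, pad=16):
    """Reflect-pad, then ease the borders toward the image mean with a raised cosine,
    so that the periodic extension implied by the FFT is smooth."""
    ext = np.pad(img, pad, mode="reflect").astype(float)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(pad) / pad))
    wy = np.ones(ext.shape[0]); wy[:pad] = ramp; wy[-pad:] = ramp[::-1]
    wx = np.ones(ext.shape[1]); wx[:pad] = ramp; wx[-pad:] = ramp[::-1]
    return img.mean() + (ext - img.mean()) * (wy[:, None] * wx[None, :])

# usage sketch (hypothetical `blurred` image and `psf` kernel):
# restored = wiener_deconv(taper_extend(blurred, 16), psf)[16:-16, 16:-16]
```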
Citations: 5
Asymmetric coding of stereoscopic 3D based on perceptual significance
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7026144
Sid Ahmed Fezza, M. Larabi, K. Faraoun
Asymmetric stereoscopic coding is a very promising technique for decreasing the bandwidth required for stereoscopic 3D delivery. However, one major obstacle is determining the limit of asymmetric coding, i.e., the just-noticeable threshold of asymmetry below which the 3D viewing experience is not altered. Recent works have attempted to identify this asymmetry threshold by way of subjective experiments. However, a fixed threshold, which is highly dependent on the experiment design, cannot adapt to variations in image quality and content. In this paper, we propose a new non-uniform asymmetric stereoscopic coding that dynamically adjusts the level of asymmetry for each image region to ensure unaltered binocular perception. This is achieved by exploiting several HVS-inspired models; specifically, we use the Binocular Just Noticeable Difference (BJND) combined with a visual saliency map and depth information to quantify the asymmetry threshold precisely. Simulation results show that the proposed method yields up to 44% bitrate savings and provides better 3D visual quality compared to state-of-the-art asymmetric coding methods.
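A hypothetical sketch of the region-wise asymmetry idea: each block of the auxiliary view receives its own quantization offset, computed here from generic saliency and masking maps rather than from the BJND model of the paper.

```python
import numpy as np

def block_qp_offsets(saliency, masking, base_offset=8, block=16):
    """Per-block quantisation offsets for the auxiliary view from saliency and
    masking-tolerance maps in [0, 1]; salient or fragile blocks keep near-reference quality."""
    rows, cols = saliency.shape[0] // block, saliency.shape[1] // block
    offsets = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            sl = saliency[r * block:(r + 1) * block, c * block:(c + 1) * block].mean()
            mk = masking[r * block:(r + 1) * block, c * block:(c + 1) * block].mean()
            # high saliency or low masking tolerance -> little extra quantisation
            offsets[r, c] = int(round(base_offset * (1.0 - sl) * mk))
    return offsets

# hypothetical per-pixel maps; in the paper these roles are played by saliency, depth and BJND
rng = np.random.default_rng(0)
sal, mask = rng.random((144, 176)), rng.random((144, 176))
print(block_qp_offsets(sal, mask)[:3, :3])
```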
Citations: 4
Inverse problem formulation for regularity estimation in images
Pub Date : 2014-10-27 DOI: 10.1109/ICIP.2014.7026227
N. Pustelnik, P. Abry, H. Wendt, N. Dobigeon
The identification of texture changes is a challenging problem that can be addressed by considering local regularity fluctuations in an image. This work develops a procedure for local regularity estimation that combines a convex optimization strategy with wavelet leaders, specific wavelet coefficients recently introduced in the context of multifractal analysis. The proposed procedure is formulated as an inverse problem that combines the joint estimation of both local regularity exponent and of the optimal weights underlying regularity measurement. Numerical experiments using synthetic texture indicate that the performance of the proposed approach compares favorably against other wavelet based local regularity estimation formulations. The method is also illustrated with an example involving real-world texture.
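A rough 1-D sketch of the baseline estimator being regularized: wavelet leaders are running maxima of (normalized) detail coefficients over finer scales and a small neighbourhood, and the local regularity exponent is the pointwise slope of log2(leader) against scale. The paper's contribution, the convex inverse-problem formulation with jointly estimated weights, is not shown.

```python
import numpy as np
import pywt

def local_regularity(signal, wavelet="db3", levels=6):
    """Pointwise slope of log2(wavelet leader) against scale (a crude local exponent)."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)      # [cA, cD_levels, ..., cD_1]
    n = len(signal)
    log_leaders, running_max = [], None
    for j in range(1, levels + 1):                            # j = 1 is the finest scale
        d = np.abs(coeffs[-j]) / 2.0 ** (j / 2.0)             # L1-normalise pywt's orthonormal coefficients
        d = np.maximum(d, np.maximum(np.roll(d, 1), np.roll(d, -1)))   # 3-neighbourhood sup
        up = np.repeat(d, int(np.ceil(n / len(d))))[:n]       # map coefficients back to sample positions
        running_max = up if running_max is None else np.maximum(running_max, up)
        log_leaders.append(np.log2(running_max + 1e-12))      # leader: sup over this and all finer scales
    scales = np.arange(1, levels + 1)
    slope = np.polyfit(scales, np.array(log_leaders), deg=1)[0]
    return slope                                              # one exponent estimate per sample

rw = np.cumsum(np.random.default_rng(0).normal(size=4096))    # Brownian-motion-like test signal
print(local_regularity(rw).mean())                            # roughly 0.5 for this signal
```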
Citations: 6