
Latest publications: Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)

A novel Bayesian method for fitting parametric and non-parametric models to noisy data
M. Werman, D. Keren
We offer a simple paradigm for fitting models, parametric and non-parametric, to noisy data, which resolves some of the problems associated with classic MSE algorithms. This is done by considering each point on the model as a possible source for each data point. The paradigm also allows one to solve problems that are not defined in the classical MSE approach, such as fitting a segment (as opposed to a line). It is shown to be unbiased, and to achieve excellent results for general curves, even in the presence of strong discontinuities. Results are shown for a number of fitting problems, including lines, circles, segments, and general curves, contaminated by Gaussian and uniform noise.
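The point-as-source idea can be made concrete for segment fitting. The sketch below is my own illustration under a Gaussian-noise assumption, not the authors' implementation: a candidate segment is scored by averaging Gaussian densities centred at points sampled along it, so an over-long segment spreads its probability mass and scores worse than one that covers just the data.

```python
# Minimal sketch (not the authors' code): score a candidate segment by treating
# every point on it as a possible Gaussian source for every data point.
import numpy as np

def segment_log_likelihood(data, p0, p1, sigma=0.05, n_samples=200):
    """Log-likelihood of 2-D data under a segment from p0 to p1, with each
    data point's likelihood averaged over sources sampled along the segment."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    sources = p0[None, :] * (1 - t) + p1[None, :] * t          # points on the segment
    d2 = ((data[:, None, :] - sources[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return np.log(dens.mean(axis=1) + 1e-300).sum()

# noisy samples from the segment (0,0)-(1,1): the true segment beats a longer one
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 100)
data = np.stack([t, t], axis=1) + rng.normal(0, 0.05, (100, 2))
good = segment_log_likelihood(data, np.array([0., 0.]), np.array([1., 1.]))
too_long = segment_log_likelihood(data, np.array([0., 0.]), np.array([2., 2.]))
print(good > too_long)   # True: the over-long segment dilutes its probability mass
```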
{"title":"A novel Bayesian method for fitting parametric and non-parametric models to noisy data","authors":"M. Werman, D. Keren","doi":"10.1109/CVPR.1999.784964","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784964","url":null,"abstract":"We offer a simple paradigm for fitting models, parametric and non-parametric, to noisy data, which resolves some of the problems associated with classic MSE algorithms. This is done by considering each point on the model as a possible source for each data point. The paradigm also allows to solve problems which are not defined in the classical MSE approach, such as fitting a segment (as opposed to a line). It is shown to be non-biased, and to achieve excellent results for general curves, even in the presence of strong discontinuities. Results are shown for a number of fitting problems, including lines, circles, segments, and general curves, contaminated by Gaussian and uniform noise.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"12 1","pages":"552-558 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73069392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Color image segmentation
Yining Deng, B. S. Manjunath, H. Shin
In this work, a new approach to fully automatic color image segmentation, called JSEG, is presented. First, colors in the image are quantized to several representative classes that can be used to differentiate regions in the image. Then, image pixel colors are replaced by their corresponding color class labels, thus forming a class-map of the image. A criterion for "good" segmentation using this class-map is proposed. Applying the criterion to local windows in the class-map results in the "J-image", in which high and low values correspond to possible region boundaries and region centers, respectively. A region growing method is then used to segment the image based on the multi-scale J-images. Experiments show that JSEG provides good segmentation results on a variety of images.
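As a rough illustration of the criterion (my reading of the J measure, not the authors' code): for a window of the class-map, J compares the total spatial scatter of pixel positions with their within-class scatter, so a window straddling a boundary between two label populations scores high and a uniformly mixed window scores near zero.

```python
import numpy as np

def j_value(class_map):
    """class_map: 2-D integer array of colour-class labels for one window."""
    ys, xs = np.indices(class_map.shape)
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    labels = class_map.ravel()
    s_total = ((pos - pos.mean(axis=0)) ** 2).sum()       # total spatial scatter
    s_within = 0.0
    for k in np.unique(labels):                           # scatter within each class
        p = pos[labels == k]
        s_within += ((p - p.mean(axis=0)) ** 2).sum()
    return (s_total - s_within) / s_within

# labels split into two halves -> high J (likely boundary);
# labels mixed like a checkerboard -> low J (region interior)
half = np.zeros((16, 16), dtype=int); half[:, 8:] = 1
mixed = np.indices((16, 16)).sum(0) % 2
print(j_value(half), j_value(mixed))
```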
{"title":"Color image segmentation","authors":"Yining Deng, B. S. Manjunath, H. Shin","doi":"10.1109/CVPR.1999.784719","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784719","url":null,"abstract":"In this work, a new approach to fully automatic color image segmentation, called JSEG, is presented. First, colors in the image are quantized to several representing classes that can be used to differentiate regions in the image. Then, image pixel colors are replaced by their corresponding color class labels, thus forming a class-map of the image. A criterion for \"good\" segmentation using this class-map is proposed. Applying the criterion to local windows in the class-map results in the \"J-image\", in which high and low values correspond to possible region boundaries and region centers, respectively. A region growing method is then used to segment the image based on the multi-scale J-images. Experiments show that JSEG provides good segmentation results on a variety of images.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"11 1","pages":"446-451 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74720926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 687
Radiometric self calibration
T. Mitsunaga, S. Nayar
A simple algorithm is described that computes the radiometric response function of an imaging system, from images of an arbitrary scene taken using different exposures. The exposure is varied by changing either the aperture setting or the shutter speed. The algorithm does not require precise estimates of the exposures used. Rough estimates of the ratios of the exposures (e.g. F-number settings on an inexpensive lens) are sufficient for accurate recovery of the response function as well as the actual exposure ratios. The computed response function is used to fuse the multiple images into a single high dynamic range radiance image. Robustness is tested using a variety of scenes and cameras as well as noisy synthetic images generated using 100 randomly selected response curves. Automatic rejection of image areas that have large vignetting effects or temporal scene variations make the algorithm applicable to not just photographic but also video cameras.
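The fusion step can be sketched independently of the calibration itself. The snippet below assumes the inverse response has already been recovered (a gamma curve stands in for it here) and averages per-image irradiance estimates with a hat weight that discounts pixels near black or saturation; it is an illustration, not the paper's algorithm.

```python
import numpy as np

def fuse_hdr(images, exposures, inv_response):
    """images: list of float arrays in [0,1]; exposures: relative exposure values;
    inv_response: maps measured intensity to (scene irradiance * exposure)."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, e in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)       # hat weight: trust mid-range pixels
        num += w * inv_response(img) / e
        den += w
    return num / np.maximum(den, 1e-6)

# toy usage with an assumed gamma-2.2 camera (for illustration only)
rng = np.random.default_rng(1)
radiance = rng.uniform(0.05, 2.0, (4, 4))
exposures = [0.25, 1.0, 4.0]
images = [np.clip(radiance * e, 0, 1) ** (1 / 2.2) for e in exposures]
hdr = fuse_hdr(images, exposures, inv_response=lambda m: m ** 2.2)
print(np.allclose(hdr, radiance))   # saturated pixels get zero weight, so this holds
```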
{"title":"Radiometric self calibration","authors":"T. Mitsunaga, S. Nayar","doi":"10.1109/CVPR.1999.786966","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786966","url":null,"abstract":"A simple algorithm is described that computes the radiometric response function of an imaging system, from images of an arbitrary scene taken using different exposures. The exposure is varied by changing either the aperture setting or the shutter speed. The algorithm does not require precise estimates of the exposures used. Rough estimates of the ratios of the exposures (e.g. F-number settings on an inexpensive lens) are sufficient for accurate recovery of the response function as well as the actual exposure ratios. The computed response function is used to fuse the multiple images into a single high dynamic range radiance image. Robustness is tested using a variety of scenes and cameras as well as noisy synthetic images generated using 100 randomly selected response curves. Automatic rejection of image areas that have large vignetting effects or temporal scene variations make the algorithm applicable to not just photographic but also video cameras.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"117 1","pages":"374-380 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79383802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 840
The quotient image: Class based recognition and synthesis under varying illumination conditions
Tammy Riklin-Raviv, A. Shashua
The paper addresses the problem of "class-based" recognition and image synthesis with varying illumination. The class-based synthesis and recognition tasks are defined as follows: given a single input image of an object, and a sample of images with varying illumination conditions of other objects of the same general class, capture the equivalence relationship (by generation of new images or by invariants) among all images of the object corresponding to new illumination conditions. The key result in our approach is based on the definition of an illumination-invariant signature image, which we call the "quotient" image; it enables an analytic generation of the image space with varying illumination from a single input image and a very small sample of other objects of the class (in our experiments, as few as two objects). In many cases the recognition results far outperform conventional methods, and the image synthesis is of remarkable quality considering the size of the database of example images and the mild pre-processing required for making the algorithm work.
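A much-simplified, single-reference-object sketch of the quotient idea follows (the paper estimates the lighting coefficients jointly over a bootstrap set of several objects; all names and the Lambertian toy data here are illustrative). With one reference object the albedo ratio is recovered only up to a global scale, which is exactly the ambiguity the bootstrap set resolves.

```python
import numpy as np

def quotient_relight(ref_images, novel_image):
    """ref_images: (n_pixels, 3) reference object under 3 lighting conditions.
    novel_image: (n_pixels,) novel object of the same class, unknown lighting.
    Returns the quotient (albedo-ratio) image and the novel object re-rendered
    under the three reference lightings, up to a global scale."""
    x, *_ = np.linalg.lstsq(ref_images, novel_image, rcond=None)   # lighting coeffs
    ref_same_light = ref_images @ x          # reference rendered under that lighting
    Q = novel_image / ref_same_light         # pixelwise "quotient" image
    relit = Q[:, None] * ref_images          # relit novel object
    return Q, relit

# toy Lambertian data: shared geometry (normals), different albedos
rng = np.random.default_rng(2)
N = rng.normal(size=(500, 3)); N[:, 2] = np.abs(N[:, 2]) + 2.0
N /= np.linalg.norm(N, axis=1, keepdims=True)
S = np.array([[0.2, 0.0, 1.0], [0.0, 0.2, 1.0], [-0.1, -0.1, 1.0]])  # 3 lightings
albedo_ref = rng.uniform(0.2, 1.0, 500)
albedo_new = 2.0 * albedo_ref                         # constant ratio in this toy case
ref_images = albedo_ref[:, None] * (N @ S.T)
novel = albedo_new * (N @ (S.T @ np.array([0.4, 0.3, 0.3])))   # mixture lighting
Q, relit = quotient_relight(ref_images, novel)
truth = albedo_new[:, None] * (N @ S.T)
print(np.allclose(relit / relit[0, 0], truth / truth[0, 0]))   # equal up to scale
```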
{"title":"The quotient image: Class based recognition and synthesis under varying illumination conditions","authors":"Tammy Riklin-Raviv, A. Shashua","doi":"10.1109/CVPR.1999.784968","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784968","url":null,"abstract":"The paper addresses the problem of \"class-based\" recognition and image-synthesis with varying illumination. The class-based synthesis and recognition tasks are defined as follows: given a single input image of an object, and a sample of images with varying illumination conditions of other objects of the same general class, capture the equivalence relationship (by generation of new images or by invariants) among all images of the object corresponding to new illumination conditions. The key result in our approach is based on a definition of an illumination invariant signature image, we call the \"quotient\" image, which enables an analytic generation of the image space with varying illumination from a single input image and a very small sample of other objects of the class-in our experiments as few as two objects. In many cases the recognition results outperform by far conventional methods and the image-synthesis is of remarkable quality considering the size of the database of example images and the mild pre-process required for making the algorithm work.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"1 1","pages":"566-571 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79248673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 83
Stochastic image segmentation by typical cuts
Yoram Gdalyahu, D. Weinshall, M. Werman
We present a stochastic clustering algorithm which uses pairwise similarity of elements, based on a new graph theoretical algorithm for the sampling of cuts in graphs. The stochastic nature of our method makes it robust against noise, including accidental edges and small spurious clusters. We demonstrate the robustness and superiority of our method for image segmentation on a few synthetic examples where other recently proposed methods (such as normalized-cut) fail. In addition, the complexity of our method is lower. We describe experiments with real images showing good segmentation results.
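The sampling idea can be pictured with a weighted, Karger-style contraction (my simplification, not necessarily the paper's exact procedure): contract the similarity graph down to r components many times, and treat two elements as belonging together if they land on the same side of the cut in most samples.

```python
import numpy as np

def sample_partition(W, r, rng):
    """One random contraction of a weighted similarity graph to r components.
    Edges are processed in a weight-proportional random order; edges internal
    to an already-merged component are skipped."""
    n = W.shape[0]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if W[i, j] > 0]
    w = np.array([W[i, j] for i, j in edges])
    order = rng.choice(len(edges), size=len(edges), replace=False, p=w / w.sum())
    components = n
    for k in order:
        if components == r:
            break
        i, j = edges[k]
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
    return np.array([find(i) for i in range(n)])

def cooccurrence(W, r=2, n_samples=100, seed=0):
    """Fraction of sampled r-way cuts in which each pair ends up together."""
    rng = np.random.default_rng(seed)
    co = np.zeros_like(W, dtype=float)
    for _ in range(n_samples):
        labels = sample_partition(W, r, rng)
        co += labels[:, None] == labels[None, :]
    return co / n_samples

# two well-separated 1-D blobs; similarity decays with squared distance
pts = np.concatenate([np.linspace(0, 1, 10), np.linspace(5, 6, 10)])
W = np.exp(-(pts[:, None] - pts[None, :]) ** 2)
np.fill_diagonal(W, 0)
co = cooccurrence(W)
print(co[:10, :10].mean() > 0.9, co[:10, 10:].mean() < 0.1)   # together vs. apart
```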
{"title":"Stochastic image segmentation by typical cuts","authors":"Yoram Gdalyahu, D. Weinshall, M. Werman","doi":"10.1109/CVPR.1999.784979","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784979","url":null,"abstract":"We present a stochastic clustering algorithm which uses pairwise similarity of elements, based on a new graph theoretical algorithm for the sampling of cuts in graphs. The stochastic nature of our method makes it robust against noise, including accidental edges and small spurious clusters. We demonstrate the robustness and superiority of our method for image segmentation on a few synthetic examples where other recently proposed methods (such as normalized-cut) fail. In addition, the complexity of our method is lower. We describe experiments with real images showing good segmentation results.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"46 1","pages":"596-601 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79675451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 55
Simultaneous image classification and restoration using a variational approach
Christophe Samson, L. Blanc-Féraud, J. Zerubia, G. Aubert
Herein, we present a variational model devoted to image classification coupled with an edge-preserving regularization process. In the last decade, the variational approach has proven its efficiency in the field of edge-preserving restoration. In this paper, we add a classification capability which helps produce images composed of homogeneous regions with regularized boundaries. The soundness of this model rests on work on phase transition theory in mechanics. The proposed algorithm is fast, easy to implement, and efficient. We compare our results on both synthetic and satellite images with the ones obtained by a stochastic model using a Potts regularization.
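One way to picture a functional of this kind (a toy of my own, not the authors' exact model, which couples classification and restoration through a phase-transition term): a fidelity term to the noisy image, a quadratic smoothing term, and a double-well potential that pulls each pixel toward one of two class means.

```python
import numpy as np

def classify_restore(f, lam=0.2, mu=4.0, dt=0.02, steps=1000):
    """Gradient descent on  E(u) = (u-f)^2 + lam*|grad u|^2 + mu*u^2(1-u)^2,
    i.e. two classes with means 0 and 1 (a multi-well potential generalizes this)."""
    u = f.copy()
    for _ in range(steps):
        up = np.pad(u, 1, mode='edge')
        lap = up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u
        dW = 2 * u * (1 - u) * (1 - 2 * u)          # derivative of u^2 (1-u)^2
        u -= dt * (2 * (u - f) - 2 * lam * lap + mu * dW)
    return u

# noisy two-region test image: left half near 0, right half near 1
rng = np.random.default_rng(5)
f = np.zeros((32, 32)); f[:, 16:] = 1.0
f += rng.normal(0, 0.2, f.shape)
u = classify_restore(f)
print(np.abs(u[:, :16]).mean(), np.abs(u[:, 16:] - 1).mean())   # both close to 0
```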
{"title":"Simultaneous image classification and restoration using a variational approach","authors":"Christophe Samson, L. Blanc-Féraud, J. Zerubia, G. Aubert","doi":"10.1109/CVPR.1999.784985","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784985","url":null,"abstract":"Herein, we present a variational model devoted to image classification coupled with an edge-preserving regularization process. In the last decade, the variational approach has proven its efficiency in the field of edge-preserving restoration. In this paper, we add a classification capability which contributes to provide images compound of homogeneous regions with regularized boundaries. The soundness of this model is based on the works developed on the phase transition theory in mechanics. The proposed algorithm is fast, easy to implement and efficient. We compare our results on both synthetic and satellite images with the ones obtained by a stochastic model using a Potts regularization.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"8 1","pages":"618-623 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80834356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
The tensors of three affine views
T. Thórhallsson, D. W. Murray
In this paper we specialize the projective unifocal, bifocal, and trifocal tensors to the affine case, and show how the tensors obtained relate to the registered tensors encountered in previous work. This enables us to obtain an affine specialization of known projective relations connecting points and lines across two or three views. In the simpler case of affine cameras we give necessary and sufficient constraints on the components of the trifocal tensor together with a simple geometric interpretation. Finally, we show how the estimation of the tensors from point correspondences is achieved through factorization, and discuss the estimation from line correspondences.
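The factorization step can be pictured with a generic affine, Tomasi-Kanade style rank-3 factorization over three views; the snippet below is that generic sketch only, not the paper's tensor construction.

```python
import numpy as np

def affine_factorize(points):
    """points: (n_views, n_points, 2) image coordinates under affine cameras.
    Returns (motion, structure) with centred points ~= motion @ structure."""
    n_views, n_pts, _ = points.shape
    centred = points - points.mean(axis=1, keepdims=True)        # remove translations
    W = centred.transpose(0, 2, 1).reshape(2 * n_views, n_pts)   # measurement matrix
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    motion = U[:, :3] * s[:3]      # stacked 2x3 affine cameras, up to a 3x3 ambiguity
    structure = Vt[:3]             # 3 x n_points shape, up to the inverse ambiguity
    return motion, structure

# toy usage: random 3-D points seen by three random affine cameras
rng = np.random.default_rng(6)
X = rng.normal(size=(3, 40))
cams = rng.normal(size=(3, 2, 3))
pts = np.stack([(C @ X).T for C in cams])                        # (3, 40, 2)
motion, structure = affine_factorize(pts)
W = (pts - pts.mean(axis=1, keepdims=True)).transpose(0, 2, 1).reshape(6, 40)
print(np.allclose(motion @ structure, W))   # the rank-3 model reproduces the data
```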
{"title":"The tensors of three affine views","authors":"T. Thórhallsson, D. W. Murray","doi":"10.1109/CVPR.1999.786977","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786977","url":null,"abstract":"In this paper we specialize the projective unifocal, bifocal, and trifocal tensors to the affine case, and show how the tensors obtained relate to the registered tensors encountered in previous work. This enables us to obtain an affine specialization of known projective relations connecting points and lines across two or three views. In the simpler case of affine cameras we give neccessary and sufficient constraints on the components of the trifocal tensor together with a simple geometric interpretation. Finally, we show how the estimation of the tensors from point correspondences is achieved through factorization, and discuss the estimation from line correspondences.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"259 1","pages":"450-456 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77111453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
A volumetric stereo matching method: application to image-based modeling
Qian Chen, G. Medioni
We formulate stereo matching as an extremal surface extraction problem. This is made possible by embedding the disparity surface inside a volume where the surface is composed of voxels with locally maximal similarity values. This formulation naturally implements the coherence principle, and allows us to incorporate most known global constraints. Time efficiency is achieved by executing the algorithm in a coarse-to-fine fashion, and only populating the full volume at the coarsest level. To make the system more practical, we present a rectification algorithm based on the fundamental matrix, avoiding full camera calibration. We present results on standard stereo pairs, and on our own data set. The results are qualitatively evaluated in terms of both the generated disparity maps and the 3-D models.
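The volume itself is easy to picture. The sketch below (assuming already-rectified inputs and SciPy for the box filter) stacks a window-averaged similarity score per candidate disparity into a 3-D array; the paper extracts a coherent extremal surface from such a volume in coarse-to-fine fashion, whereas here only a per-pixel winner-take-all is shown for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def similarity_volume(left, right, max_disp, win=5):
    """left, right: rectified 2-D grayscale arrays of the same shape.
    Returns a (max_disp+1, H, W) volume of window-averaged negative absolute
    differences; higher values mean a better match at that disparity."""
    H, W = left.shape
    vol = np.full((max_disp + 1, H, W), -1e9)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:] - right[:, :W - d])
        vol[d, :, d:] = -uniform_filter(diff, size=win, mode='nearest')
    return vol

# toy usage: the right image is the left image shifted by 3 pixels
rng = np.random.default_rng(7)
left = rng.uniform(size=(40, 60))
true_disp = 3
right = np.roll(left, -true_disp, axis=1)
vol = similarity_volume(left, right, max_disp=8)
disparity = vol.argmax(axis=0)                         # winner-take-all, for illustration
print(np.median(disparity[:, true_disp:-true_disp]))   # prints 3.0
```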
{"title":"A volumetric stereo matching method: application to image-based modeling","authors":"Qian Chen, G. Medioni","doi":"10.1109/CVPR.1999.786913","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786913","url":null,"abstract":"We formulate stereo matching as an extremal surface extraction problem. This is made possible by embedding the disparity surface inside a volume where the surface is composed of voxels with locally maximal similarity values. This formulation naturally implements the coherence principle, and allows us to incorporate most known global constraints. Time efficiency is achieved by executing the algorithm in a coarse-to-fine fashion, and only populating the full volume at the coarsest level. To make the system more practical, we present a rectification algorithm based on the fundamental matrix, avoiding full camera calibration. We present results on standard stereo pairs, and on our own data set. The results are qualitatively evaluated in terms of both the generated disparity maps and the 3-D models.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"13 1","pages":"29-34 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74372463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 91
Applying perceptual grouping to content-based image retrieval: building images
Qasim Iqbal, J. Aggarwal
This paper presents an application of perceptual grouping rules to content-based image retrieval. The semantic interrelationships between different primitive image features are exploited by perceptual grouping to detect the presence of manmade structures. A methodology based on these principles, within a Bayesian framework for the retrieval of building images, is described, and the results obtained are presented. The image database consists of monocular grayscale outdoor images taken from a ground-level camera.
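As a schematic of the Bayesian combination step only (the binary features and likelihood values below are invented for illustration; the paper derives its evidence from perceptual grouping of lines and junctions):

```python
import numpy as np

def posterior_building(features, likelihoods, prior=0.5):
    """features: dict of binary grouping evidence, e.g. {'many_long_edges': True}.
    likelihoods: dict mapping feature -> (P(f|building), P(f|non-building)).
    Returns P(building | features) under a naive-Bayes independence assumption."""
    log_odds = np.log(prior / (1 - prior))
    for name, present in features.items():
        p_b, p_nb = likelihoods[name]
        if present:
            log_odds += np.log(p_b / p_nb)
        else:
            log_odds += np.log((1 - p_b) / (1 - p_nb))
    return 1.0 / (1.0 + np.exp(-log_odds))

# hypothetical likelihood table and query
likelihoods = {'many_long_edges': (0.9, 0.3),
               'parallel_line_groups': (0.8, 0.2),
               'l_junctions': (0.7, 0.25)}
print(posterior_building({'many_long_edges': True,
                          'parallel_line_groups': True,
                          'l_junctions': False}, likelihoods))   # about 0.83
```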
{"title":"Applying perceptual grouping to content-based image retrieval: building images","authors":"Qasim Iqbal, J. Aggarwal","doi":"10.1109/CVPR.1999.786915","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786915","url":null,"abstract":"This paper presents an application of perceptual grouping rules for content-based image retrieval. The semantic interrelationships between different primitive image features are exploited by perceptual grouping to detect the presence of manmade structures. A methodology based on these principles in a Bayesian framework for the retrieval of building images, and the results obtained are presented. The image database consists of monocular grayscale outdoor images taken from a ground-level camera.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"3 1","pages":"42-48 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73471931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 93
Statistics of natural images and models
Jinggang Huang, D. Mumford
Large calibrated datasets of 'random' natural images have recently become available. These make possible precise and intensive statistical studies of the local nature of images. We report results ranging from the simplest single pixel intensity to joint distribution of 3 Haar wavelet responses. Some of these statistics shed light on old issues such as the near scale-invariance of image statistics and some are entirely new. We fit mathematical models to some of the statistics and explain others in terms of local image features.
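The kind of statistics reported can be sketched as follows; a synthetic log-normal image stands in for the calibrated natural-image data, and the Haar-like response used here is simply the difference of adjacent 2x2 block means (the paper goes up to joint statistics of three responses).

```python
import numpy as np

def haar_like_horizontal(img):
    """Horizontal Haar-like response: difference of adjacent 2x2 block means."""
    H, W = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    m = img[:H, :W].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))
    return m[:, 1:] - m[:, :-1]

rng = np.random.default_rng(8)
img = rng.lognormal(mean=0.0, sigma=0.5, size=(256, 256))     # stand-in "image"
marginal, _ = np.histogram(np.log(img), bins=64)              # single-pixel statistic
h = haar_like_horizontal(img)
joint, _, _ = np.histogram2d(h[:, :-1].ravel(), h[:, 1:].ravel(), bins=32)
# adjacent responses share a block, so they are anti-correlated (about -0.5 here)
print(marginal.shape, joint.shape,
      np.corrcoef(h[:, :-1].ravel(), h[:, 1:].ravel())[0, 1])
```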
{"title":"Statistics of natural images and models","authors":"Jinggang Huang, D. Mumford","doi":"10.1109/CVPR.1999.786990","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786990","url":null,"abstract":"Large calibrated datasets of 'random' natural images have recently become available. These make possible precise and intensive statistical studies of the local nature of images. We report results ranging from the simplest single pixel intensity to joint distribution of 3 Haar wavelet responses. Some of these statistics shed light on old issues such as the near scale-invariance of image statistics and some are entirely new. We fit mathematical models to some of the statistics and explain others in terms of local image features.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"8 1","pages":"541-547 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81949857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 636