
Latest publications in Computer analysis of images and patterns : proceedings of the ... International Conference on Automatic Image Processing. International Conference on Automatic Image Processing

An efficient boundary encoding scheme which is optimal in the rate distortion sense
G. Schuster, A. Katsaggelos
A major problem in object oriented video coding is the efficient encoding of the shape information of arbitrarily shaped objects. Efficient shape coding schemes are also needed in encoding the shape information of video object planes (VOP) in the MPEG-4 standard. In this paper, we present an efficient method for the lossy encoding of object shapes which are given as 8-connect chain codes (Meier et al., 1997). We approximate the object shape by a second order B-spline curve and consider the problem of finding the curve with the lowest bit rate for a given distortion. The presented scheme is optimal, efficient, and offers complete control over the trade-off between bit rate and distortion. We present results with the proposed scheme using object shapes of different sizes.
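The curve family being searched is a second-order (quadratic) B-spline fitted to the chain-coded boundary. Below is a minimal sketch of evaluating a closed uniform quadratic B-spline from a set of control points; the rate-distortion search over control points is not shown, and the function name and sampling density are illustrative assumptions, not the authors' code.

```python
import numpy as np

def quadratic_bspline_closed(control_pts, samples_per_span=16):
    """Evaluate a closed uniform quadratic B-spline defined by control_pts,
    a (K, 2) set of (x, y) points. Returns sampled curve points."""
    P = np.asarray(control_pts, dtype=float)
    K = len(P)
    t = np.linspace(0.0, 1.0, samples_per_span, endpoint=False)
    # Uniform quadratic B-spline basis functions on one span.
    b0 = 0.5 * (1.0 - t) ** 2
    b1 = 0.5 + t - t ** 2
    b2 = 0.5 * t ** 2
    spans = []
    for i in range(K):  # one span per control point for a closed curve
        p0, p1, p2 = P[i], P[(i + 1) % K], P[(i + 2) % K]
        spans.append(np.outer(b0, p0) + np.outer(b1, p1) + np.outer(b2, p2))
    return np.vstack(spans)

# Example: a coarse control polygon approximating an object boundary.
boundary = quadratic_bspline_closed([(0, 0), (10, 0), (12, 8), (5, 12), (-2, 7)])
```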
DOI: 10.1109/ICIP.1997.638660 · Pages: 9-12 vol.2 · Published: 1997-10-26
Cited by: 20
Detecting multiple moving targets using deformable contours
N. Paragios, R. Deriche
This paper presents a framework for detecting multiple moving objects in a sequence of images. Using a statistical approach, in which the inter-frame difference is modeled by a mixture of two Laplacian distributions, together with a deformable contour-based energy minimization approach, we reformulate the motion detection problem as a front propagation problem. Following the work on geodesic active contours, we transform the moving object detection problem into an equivalent problem of geodesic computation, which is solved using a level set formulation scheme. To reduce the computational cost required by a direct implementation of the formulation scheme, the narrow band technique is used. In order to further reduce the CPU time, a multi-scale approach has also been considered. Very promising experimental results are provided using real video sequences.
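The statistical part of the framework models the inter-frame difference with a mixture of two Laplacian densities. Here is a minimal EM sketch under the simplifying assumption of two zero-mean components, a narrow one for static pixels and a wide one for moving pixels; the initialization values are assumptions and this is not the authors' implementation.

```python
import numpy as np

def fit_two_laplacian_mixture(diff, iters=50, eps=1e-8):
    """EM fit of a two-component zero-mean Laplacian mixture to inter-frame
    differences. Returns (weight of static component, b_static, b_moving)."""
    d = np.abs(np.asarray(diff, dtype=float).ravel())
    pi, b1, b2 = 0.5, 1.0, 10.0                  # initial guesses (assumed)
    for _ in range(iters):
        # E-step: responsibility of the narrow "static" component.
        p1 = pi * np.exp(-d / b1) / (2.0 * b1)
        p2 = (1.0 - pi) * np.exp(-d / b2) / (2.0 * b2)
        r = p1 / (p1 + p2 + eps)
        # M-step: the weighted mean absolute deviation is the Laplacian scale MLE.
        pi = r.mean()
        b1 = (r * d).sum() / (r.sum() + eps)
        b2 = ((1.0 - r) * d).sum() / ((1.0 - r).sum() + eps)
    return pi, b1, b2

# A pixel would then be labelled "moving" when the wide component is more
# likely for its difference value, before the contour-based refinement.
```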
DOI: 10.1109/ICIP.1997.638713 · Pages: 183-186 vol.2 · Published: 1997-10-26
Cited by: 15
Segmentation of compressed documents
R. Queiroz, R. Eschbach
We present a novel technique for segmentation of a JPEG-compressed document based on block activity. The activity is measured as the number of bits spent to encode each block. Each number is mapped to a pixel brightness value in an auxiliary image which is then used for segmentation. We introduce the use of such an image and show an example of a simple segmentation algorithm, which was successfully applied to test documents. The desired region can be identified and cropped (or replaced) from the compressed data without decompressing the image.
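The activity measure is simple to sketch: the bit count of each 8x8 block is mapped to a brightness value in a small auxiliary image. The sketch below assumes the per-block bit counts are already available from the entropy decoder; the bits_per_block interface and the 0..255 scaling are illustrative assumptions.

```python
import numpy as np

def activity_image(bits_per_block, blocks_y, blocks_x):
    """Map the number of bits spent on each 8x8 JPEG block to a pixel
    brightness in a small auxiliary image used for segmentation."""
    bits = np.asarray(list(bits_per_block), dtype=float).reshape(blocks_y, blocks_x)
    # Busy blocks (text, halftones) cost many bits and appear bright;
    # smooth background blocks cost few bits and appear dark.
    lo, hi = bits.min(), bits.max()
    return ((bits - lo) / (hi - lo + 1e-9) * 255.0).astype(np.uint8)

# A simple segmentation could then threshold this image, e.g.
# mask = activity_image(block_bit_counts, rows, cols) > 128
```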
DOI: 10.1109/ICIP.1997.631984 · Pages: 70-73 vol.3 · Published: 1997-10-26
Cited by: 7
Zerotree design for image compression: toward weighted universal zerotree coding
M. Effros
We consider the problem of optimal, data-dependent zerotree design for use in weighted universal zerotree codes for image compression. A weighted universal zerotree code (WUZC) is a data compression system that replaces the single, data-independent zerotree of Said and Pearlman (see IEEE Transactions on Circuits and Systems for Video Technology, vol.6, no.3, p.243-50, 1996) with an optimal collection of zerotrees for good image coding performance across a wide variety of possible sources. We describe the weighted universal zerotree encoding and design algorithms but focus primarily on the problem of optimal, data-dependent zerotree design. We demonstrate the performance of the proposed algorithm by comparing, at a variety of target rates, the performance of a Said-Pearlman style code using the standard zerotree to the performance of the same code using a zerotree designed with our algorithm. The comparison is made without entropy coding. The proposed zerotree design algorithm achieves, on a collection of combined text and gray-scale images, up to 4 dB performance improvement over a Said-Pearlman zerotree.
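For readers unfamiliar with the underlying structure, a zerotree in Said-Pearlman style coders is a wavelet coefficient that is insignificant together with all of its descendants at a given threshold. The recursive check below is a minimal illustration of that idea only; the pyramid layout is an assumed simplification and the sketch has nothing to do with the paper's data-dependent design algorithm.

```python
def is_zerotree_root(pyramid, level, i, j, threshold):
    """pyramid: nested lists of coefficients for one wavelet orientation,
    coarsest level first; the children of coefficient (i, j) at one level are
    the 2x2 block at rows 2i..2i+1, cols 2j..2j+1 on the next finer level."""
    if abs(pyramid[level][i][j]) >= threshold:
        return False                      # the root itself is significant
    if level + 1 == len(pyramid):
        return True                       # finest level: no descendants left
    return all(
        is_zerotree_root(pyramid, level + 1, ci, cj, threshold)
        for ci in (2 * i, 2 * i + 1)
        for cj in (2 * j, 2 * j + 1)
    )

# Toy two-level pyramid: one coarse coefficient with four children.
pyr = [[[3.0]], [[1.0, -2.0], [0.5, 0.0]]]
print(is_zerotree_root(pyr, 0, 0, 0, threshold=4.0))   # True: all magnitudes < 4
```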
DOI: 10.1109/ICIP.1997.647988 · Pages: 616-619 vol.1 · Published: 1997-10-26
Cited by: 18
A parallel algorithm for a very fast 2D velocity field estimation
F. Coat, E. Pissaloux, P. Bonnin, T. Garié, F. Durbin, A. Tissot
This paper proposes a parallel algorithm, based upon dynamic programming, for velocity field estimation. It has O(N) complexity, N being the number of elements involved in the process. This low complexity is very attractive for many real-time applications (autonomous robot navigation, trajectory matching, etc.). The algorithm has been implemented on the CM-5, and the functional specifications of our parallel dynamic programming circuit have thereby been validated.
DOI: 10.1109/ICIP.1997.638712 · Pages: 179-182 vol.2 · Published: 1997-10-26
Cited by: 9
An axiomatic approach to image interpolation
V. Caselles, J. Morel, Catalina Sbert
We discuss possible algorithms for interpolating data given in a set of curves and/or points in the plane. We propose a set of basic assumptions to be satisfied by the interpolation algorithms which lead to a set of models in terms of possibly degenerate elliptic partial differential equations. The absolute minimal Lipschitz extension model (AMLE) is singled out and studied in more detail. We show experiments suggesting a possible application, the restoration of images with poor dynamic range.
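The AMLE interpolant is the solution of the infinity-Laplace equation with the given curves and points as boundary data. One common discrete sketch, assuming a 4-neighbour grid, iterates every free pixel toward the midpoint of its neighbourhood maximum and minimum; this is a generic discretization under those assumptions, not necessarily the authors' numerical scheme.

```python
import numpy as np

def amle_interpolate(values, known_mask, iters=2000):
    """Fill the unknown pixels of `values` (2-D array) given the data on
    `known_mask` by iterating toward the discrete infinity-harmonic solution:
    each free pixel moves to the midpoint of its neighbourhood max and min."""
    u = values.astype(float).copy()
    u[~known_mask] = values[known_mask].mean()   # simple initialization
    for _ in range(iters):
        p = np.pad(u, 1, mode="edge")
        neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        update = 0.5 * (neigh.max(axis=0) + neigh.min(axis=0))
        u[~known_mask] = update[~known_mask]
    return u
```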
DOI: 10.1109/ICIP.1997.632125 · Pages: 376-379 vol.3 · Published: 1997-10-26
Cited by: 388
Algorithm for automatically producing layered sprites by detecting camera movement
K. Jinzenji, S. Ishibashi, H. Kotera
We clarify the relationship between motion vectors (two parameters) and all camera motion, namely scaling, rotation and translation. All camera motions are described as Hermart transform coefficients. After the scaling and rotation factors of the camera motion are removed, the translation factors represent the depth of the real three-dimensional world. We therefore propose a new sprite model, which consists of planes perpendicular to the depth direction. Using this sprite model, sprites can be created from any background under any camera motion. Our simulation results show clear sprites and a clear synthesized image. The correlation between the original image and the synthesized image is more than 0.7, which indicates that the created sprites are suitable for prediction.
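What the abstract calls Hermart transform coefficients appears to describe a four-parameter transform covering scale, rotation and translation. As a hedged illustration only, such a transform could be fitted to block motion vectors by linear least squares as sketched below; the parameterization, function name and fitting procedure are assumptions, not the paper's method.

```python
import numpy as np

def fit_similarity_transform(src, dst):
    """Least-squares fit of x' = a*x - b*y + tx, y' = b*x + a*y + ty from
    corresponding points, e.g. block centres (src) and the same centres
    displaced by their motion vectors (dst). Returns scale, angle, (tx, ty)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 4))
    rhs = np.zeros(2 * n)
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    rhs[0::2], rhs[1::2] = dst[:, 0], dst[:, 1]
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a), (tx, ty)
```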
DOI: 10.1109/ICIP.1997.648075 · Pages: 767-770 vol.1 · Published: 1997-10-26
Cited by: 18
Coding artifacts removal using biased anisotropic diffusion
Seungjoon Yang, Y. Hu
Biased anisotropic diffusion is applied to the removal of coding artifacts produced by DCT based codecs. It is formulated as a cost minimization problem. The weighting factors of the cost function are controlled such that the solution removes the blocking effect and conceals block losses. It has an advantage over other postprocessing schemes because it handles the discontinuities of the image, smoothes the image selectively, and takes visual masking into account. The features needed for the weighting factors are extracted directly from the DCT coefficients to reduce the computational complexity.
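Biased anisotropic diffusion augments Perona-Malik style diffusion with a bias (data-fidelity) term that pulls the estimate back toward the decoded image. The sketch below uses assumed parameter values and leaves out the paper's DCT-domain feature extraction and spatially varying weights; it is a generic instance of the technique, not the authors' postprocessor.

```python
import numpy as np

def biased_anisotropic_diffusion(img, n_iter=30, kappa=20.0, step=0.1, bias=0.05):
    """Perona-Malik style diffusion with a bias term that pulls the estimate
    back toward the decoded image u0, so genuine edges are preserved while
    block discontinuities are smoothed away."""
    u0 = img.astype(float)
    u = u0.copy()

    def conductance(d):
        # Close to 1 in smooth areas, small across strong edges.
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")
        dn, ds = p[:-2, 1:-1] - u, p[2:, 1:-1] - u
        dw, de = p[1:-1, :-2] - u, p[1:-1, 2:] - u
        flow = (conductance(dn) * dn + conductance(ds) * ds +
                conductance(dw) * dw + conductance(de) * de)
        u += step * flow - bias * (u - u0)   # diffusion plus data-fidelity bias
    return u
```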
DOI: 10.1109/ICIP.1997.638767 · Pages: 346-349 vol.2 · Published: 1997-10-26
Cited by: 14
Adaptive partitionings for fractal image compression
M. Ruhl, H. Hartenstein, D. Saupe
In fractal image compression a partitioning of the image into ranges is required. Saupe and Ruhl (1996) proposed to find good partitionings by means of a split-and-merge process guided by evolutionary computing. In this approach ranges are connected sets of small square image blocks. Far better rate-distortion curves can be obtained than with traditional quadtree partitionings, however at the expense of increased computing time. In this paper we show how conventional acceleration techniques and a deterministic version of the evolution reduce the time complexity of the method without degrading the encoding quality. Furthermore, we report on techniques to improve the rate-distortion performance and evaluate the results visually.
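For context, the traditional quadtree partitioning that the paper compares against can be sketched as a recursive split driven by a per-block criterion; the variance test and thresholds below are illustrative assumptions, and this baseline is not the paper's evolutionary split-and-merge scheme.

```python
import numpy as np

def quadtree_ranges(img, x=0, y=0, size=None, min_size=4, var_thresh=50.0):
    """Recursively split a square region into range blocks until the block
    variance drops below var_thresh or min_size is reached.
    Returns a list of (x, y, size) blocks. Assumes a square, power-of-two image."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        return [(x, y, size)]
    half = size // 2
    ranges = []
    for dy in (0, half):
        for dx in (0, half):
            ranges += quadtree_ranges(img, x + dx, y + dy, half,
                                      min_size, var_thresh)
    return ranges

# Example: partition a random 64x64 image into ranges.
blocks = quadtree_ranges(np.random.rand(64, 64) * 255.0)
```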
DOI: 10.1109/ICIP.1997.638753 · Pages: 310-313 vol.2 · Published: 1997-10-26
Cited by: 41
3-D SAR imaging via high-resolution spectral estimation methods: experiments with XPATCH
M. W. Castelloe, D. Munson
We explore 3-D image reconstruction from limited synthetic aperture radar (SAR) data, using a previously developed approach based on high-resolution spectral estimation. Existing 3-D SAR imaging techniques, such as stereo and interferometry, present difficulties that may be overcome by this method. We use XPATCH to generate simulated SAR data of a military tank and an aircraft, and we then show image reconstructions using both conventional Fourier inversion and high-resolution spectral estimation. The latter approach is seen to provide superior imagery.
DOI: 10.1109/ICIP.1997.648100 · Pages: 853-856 vol.1 · Published: 1997-10-26
Cited by: 10