A finite parameterization and iterative algorithms for constrained minimum norm signal reconstruction
K. Arun, L. Potter. DOI: 10.1109/MDSP.1989.97075

Summary form only given. Signal reconstruction from a limited set of linear measurements, using prior knowledge of signal characteristics expressed as convex constraint sets, was treated. The problem was posed in Hilbert space as the determination of the minimum-norm element in the intersection of the convex constraint sets and a linear variety of finite codimension. A finite parameterization for the optimal solution was derived, and the optimal parameter vector was shown to satisfy a system of nonlinear equations in a finite-dimensional Euclidean space. Iterative algorithms for determining the parameters were obtained, and convergence was shown to be quadratic for many applications. The results were applied to example multidimensional reconstruction problems.

Focussed partially adaptive broadband beamforming via spatial resampling
J. Krolik, D. Swingler. DOI: 10.1109/MDSP.1989.97067

Summary form only given. A focusing technique has been developed that can reduce each wideband interferer in multigroup scenarios to an essentially rank-one representation without preliminary estimates of the group locations. The method is based on adjusting the spatial sampling rate, or equivalently spatially resampling the array outputs, as a function of temporal frequency. Resampling has the effect of rescaling the spatial frequency axis at each temporal frequency. For proper focusing, the rescaling factors are selected so that the spatial frequency of each wideband arrival in the resampled sequences is the same for all temporal frequencies in the receiver band. A linear shift-variant transformation that can perform the required resampling operation has been designed. The array gain of the focused minimum variance distortionless response (MVDR) beamformer has been compared with that of its fully adaptive counterpart in some common broadband interference-dominated scenarios.

Combining motion estimation and segmentation in digital subtracted angiograms analysis
J. Rong, J. Coatrieux, R. Collorec. DOI: 10.1109/MDSP.1989.97013

Summary form only given, as follows. An efficient tool for assessing ventricular dynamic function through coronary angiogram analysis is described. The temporal image sequences are recorded by means of an X-ray digital device, and the vessels are enhanced by injection of a contrast medium. Up to now, these sequences have allowed quantitative diagnosis of certain cardiac diseases (morphological and structural features) or have guided surgical operations. The authors propose to enlarge the scope of this imaging technique to include the automatic handling of the kinetic properties of vascular branches. They report a new scheme for such automatic analysis. It combines motion estimation and feature extraction and makes the two interact.

Optimization of filters for subband coding of images
T. Kronander. DOI: 10.1109/MDSP.1989.97134

Summary form only given. Recent results on the optimization of filter kernels for subband coding of images, for both infinite-impulse-response (IIR) and finite-impulse-response (FIR) filters, are reviewed. Aspects of the orthogonality of filter banks, as well as the choice between odd-order and even-order filters, are discussed. As the optimization function, a weighted sum of the quantized-band step-response error (e.g. overshoot) and the frequency-response error has been used. Because such a target function is difficult to optimize (it tends to have a large number of local minima), a version of simulated annealing has been used. The subjective difference between two images coded using filters designed with and without consideration of the step response has been examined.

Real-time decision analysis: algorithms, architectures and implementation
C. Shung, W. Blanz, D. Petkovic. DOI: 10.1109/MDSP.1989.97033

Summary form only given, as follows. Decision analysis is the process of making an optimal decision (classification) on the basis of extracted features. Although it is an important task in supervised statistical pattern recognition, decision analysis is often the speed bottleneck of such a system. Most of the work on real-time pattern recognition has been done in the area of feature extraction; very little has addressed decision analysis. The design of a real-time decision analyzer that can operate at image-sensor speeds is presented. The real-time performance is achieved by selecting a class of classifiers that is amenable to VLSI implementation yet has considerable discriminatory power. A flexible system architecture for the decision analyzer is proposed. It can be tailored to particular user specifications and is based on a two-chip set as a building block. An application of the design to a low-level image segmentation system, called LISA, which is currently being built, is also reported.

Photon noise bias in computed bispectra
D. Dudgeon, J. Beletic, M. Lane. DOI: 10.1109/MDSP.1989.97100

Summary form only given. Recent work on techniques for the removal of photon noise (shot noise) from two-dimensional power spectra and bispectra is reported. The general problem of speckle imaging (i.e. imaging through atmospheric turbulence) has been addressed. The general approach is to take a set of very-short-exposure frames (1-10 ms), compute some quantity such as the power spectrum or bispectrum for each frame, and average this quantity over all the frames. These averaged quantities can be used to reconstruct the image. However, when the object being imaged is very dim, or the exposure time for a single frame is very short, the computed bispectrum will exhibit a photon noise component due to the Poisson statistical nature of the photon detection process. The photon noise contribution has been derived, taking into account that some pixels are brighter than others when the camera is exposed to uniform illumination.

Multistatic echo imaging in remote sensing and diagnostic medicine
M. Soumekh. DOI: 10.1109/MDSP.1989.97117

Summary form only given. The problem of echo imaging when the imaging system is composed of a group of phased arrays with arbitrary coordinates in the spatial domain (a multistatic configuration) has been addressed. Each phased array has a finite aperture and is used for transmitting, receiving, or both. A receiving phased array may perform synchronous or asynchronous detection of the backscattered signal. The task in this imaging problem is to integrate the data collected by the phased arrays and relate them to the object under study. The imaging problem is first formulated for a plane-wave source in a bistatic configuration. These results are then extended to the radiation patterns of a multistatic imaging system. Methods have been developed for processing the backscattered signals to reduce the artifacts in the reconstructed image caused by the finite size of the phased arrays. Phase-processing techniques have been examined for the case in which the backscattered signal is detected in a noncoherent environment. It has been shown that these array processing principles can be used to formulate a system model and inversion for synthetic aperture radar imaging that incorporates wavefront curvature.

Spatial domain decomposition for coding image sequences
R. Clarke, P. Cordell. DOI: 10.1109/MDSP.1989.97131

Summary form only given. A spatial technique for the efficient adaptive coding of image sequences at low data rates is reported. The technique has the advantage of simple implementation for real-time operation and allows coding resolution to be matched to variations in image detail. The basis of the technique is a recursive quadtree division of image frames into successively smaller blocks until the original picture detail can be adequately represented by bilinear interpolation from the four block corner points. At that point subdivision ceases, and information is transmitted about the structure of the quadtree division and about any new corner points needed for interpolation (for most blocks, some corner points will already have been coded to reconstruct previous blocks). Initial problems in applying the algorithm to an image sequence stemmed from the effects of noise, which caused random failures of the threshold test and errors in corner-point values that were then spread over the whole block by the reconstruction interpolation. Methods of counteracting these effects, as well as techniques for optimizing the quantization of sample values, particularly for small blocks, have been developed.

Applications of phase gradient autofocus to aperture synthesis imaging
P. Eichel, D. Ghiglia, C. V. Jakowatz, G. Mastin, L. A. Romero, D. Wahl. DOI: 10.1109/MDSP.1989.97023

Summary form only given. A recently developed synthetic aperture radar (SAR) autofocus technique, the phase gradient autofocus (PGA) algorithm, is considered. It was developed to mitigate the problem of phase-error compensation, which is common to all aperture-synthesis imaging systems. The phase errors manifest themselves as redundant information in the reconstructed image, which invites the use of a data-driven algorithm to estimate the phase-error function and perform the restorative deconvolution. The PGA algorithm exploits this redundancy to obtain a linear minimum variance estimator of the phase error. It has been demonstrated to be robust, computationally efficient, and easily implemented in standard digital signal processing hardware.

Diffraction tomography for geophysical imaging in hydrocarbon reservoirs
J. Justice, A. Vassiliou. DOI: 10.1109/MDSP.1989.97019

Summary form only given. Clastic reservoirs saturated with heavy oils have been observed to exhibit a marked relationship between the velocity of propagation of acoustic waves and the temperature of the oil-saturated sediments. This observation forms the basis of a method for monitoring the changes that occur in a reservoir when thermal enhanced oil recovery (EOR) procedures are used. An algebraic formulation of the diffraction reconstruction problem provides a sound basis for algorithm development, and a variety of error criteria may then be considered. These considerations have led to highly accurate full-wave reconstruction algorithms, which are now in use for imaging hydrocarbon reservoirs. In addition, resolution analyses developed for nonlinear inverse problems, such as the diffraction tomographic reconstruction problem, allow confidence limits to be placed on the accuracy of the reconstruction at each point in the processed tomogram.
