{"title":"Digital filters as L1-norm regularizers","authors":"S. Alliney","doi":"10.1109/MDSP.1989.97057","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97057","url":null,"abstract":"Summary form only given. It is observed that classical filtering theory (both in 1-D and 2-D cases) can be viewed as a particular solution to the minimum problem for a certain Tikhonov functional and that the underlying functional setting is strongly related to Sobolev space theory. The author is attempting to generalize that approach by considering Tikhonov functionals of a particular type, defined over discrete signals in terms of the L1-norm.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116518865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
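The abstract above only names the idea. As a minimal numerical illustration (not Alliney's actual formulation), one can pose a discrete Tikhonov-type functional with L1 terms, J(u) = Σ|u_i − f_i| + λ·Σ|u_{i+1} − u_i|, and minimize it by exact coordinate descent; the candidate set used below is a hypothetical but valid choice, since each single-coordinate subproblem is piecewise linear with breakpoints at f_i and the two neighbors.

```python
def l1_tikhonov(f, lam=1.0, sweeps=50):
    """Coordinate descent on J(u) = sum_i |u_i - f_i| + lam * sum_i |u_{i+1} - u_i|.
    Illustrative sketch only: each coordinate subproblem is piecewise linear in
    u_i, so a minimizer lies in the breakpoint set {f_i, u_{i-1}, u_{i+1}}."""
    def J(u):
        fid = sum(abs(a - b) for a, b in zip(u, f))
        tv = sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))
        return fid + lam * tv

    u = list(f)
    for _ in range(sweeps):
        changed = False
        for i in range(len(u)):
            cands = {f[i], u[i]}
            if i > 0:
                cands.add(u[i - 1])
            if i < len(u) - 1:
                cands.add(u[i + 1])
            best = min(cands, key=lambda c: J(u[:i] + [c] + u[i + 1:]))
            if best != u[i]:
                u[i] = best
                changed = True
        if not changed:
            break
    return u, J(u)
```

On a step signal with a single outlier, e.g. `[0, 0, 5, 1, 1]` with `lam=1.0`, the minimizer pulls the outlier onto a neighboring plateau, which is the median-like, edge-preserving behavior that distinguishes L1 penalties from the quadratic (Sobolev-type) ones.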
{"title":"Segmentation of multi-sensor images","authors":"Rae H. Lee, Richard Leahy","doi":"10.1109/MDSP.1989.96998","DOIUrl":"https://doi.org/10.1109/MDSP.1989.96998","url":null,"abstract":"Summary form only given. Regions of the images observed by each sensor have been modeled as noncausal Gaussian Markov random fields (GMRFs), and labeled images have been assumed to follow a Gibbs distribution. The region labeling algorithms then become functions of model parameters, and the multisensor image segmentation problems become inference problems, given multisensor parameter measurements and local spatial interaction evidence. Two different multisensor image segmentation algorithms, maximum a posteriori (MAP) estimation and the Dempster-Shafer evidential reasoning technique, have been developed and evaluated. The Bayesian MAP approach uses an independent opinion pool for data fusion and a deterministic relaxation to obtain the MAP solution. The Dempster-Shafer approach uses Dempster's rule of combination for data fusion, belief intervals and ignorance to represent confidence of labeling, and a deterministic relaxation scheme that updates the belief intervals. Simulations with mosaic images of real textures and with anatomical magnetic resonance images have been carried out.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121077483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
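The two fusion rules named in this abstract are standard and can be sketched compactly; the sketch below is generic (it is not the authors' implementation, and the relaxation step is omitted). The independent opinion pool multiplies per-sensor posteriors labelwise; Dempster's rule combines basic mass assignments over subsets of the label set, renormalizing away the conflict mass.

```python
from itertools import product


def opinion_pool(per_sensor_probs):
    """Independent opinion pool: multiply per-sensor label posteriors
    elementwise, then renormalize to a proper distribution."""
    fused = [1.0] * len(per_sensor_probs[0])
    for probs in per_sensor_probs:
        fused = [f * p for f, p in zip(fused, probs)]
    z = sum(fused)
    return [f / z for f in fused]


def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of labels; mass on empty intersections is the
    conflict K, and surviving masses are divided by (1 - K)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}
```

For example, two sensors both leaning toward label 0 reinforce each other under the opinion pool, while in Dempster's rule mass assigned to the full label set ({a, b} below) expresses ignorance rather than support for any single label.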
{"title":"Extracting lines using a modified Hough transformation","authors":"Yeon Kim, Sung-Pil Lyu","doi":"10.1109/MDSP.1989.97002","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97002","url":null,"abstract":"Summary form only given, as follows. An efficient algorithm that extracts lines from an edge image using point and direction parameters is discussed. In this algorithm, a line equation is derived for every pair of edge elements in the image, and lines are extracted using a one-dimensional accumulator (instead of a two-dimensional accumulator, as in the conventional Hough transform methods). The advantages of this algorithm over the conventional Hough transform methods are a fast processing time and less chance of multiple detection for a single line. These advantages come mainly from the facts that the edge elements included in a line are removed from the image when the line is extracted, and the direction parameter of a line is computed only once for a pair of edge elements that form the line. This means that the processing time does not increase proportionally to the accuracy of lines to be extracted. The algorithm has been implemented on an IBM-PC/AT using the Pascal programming language, and synthetic and real images have been used to show the performance of the algorithm.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130208988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
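The pairwise, one-dimensional-accumulator idea described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' Pascal implementation: a reference edge point is paired with every other point, each pair votes its direction into a 1-D angle histogram, and once a line is accepted its supporting points are removed from the image, exactly the removal step the abstract credits for avoiding multiple detections.

```python
import math
from collections import Counter


def extract_lines(points, angle_bins=180, min_support=3, tol=1.0):
    """Extract lines from a set of (x, y) edge points using a 1-D angle
    accumulator over point pairs; supporting points are peeled off after
    each accepted line. Sketch of the pairwise voting idea."""
    points = list(points)
    lines = []
    while len(points) >= min_support:
        p0 = points[0]
        votes = Counter()
        for q in points[1:]:
            theta = math.atan2(q[1] - p0[1], q[0] - p0[0]) % math.pi
            votes[int(theta * angle_bins / math.pi) % angle_bins] += 1
        bin_idx, support = votes.most_common(1)[0]
        if support + 1 < min_support:      # p0 supports no line: discard it
            points.pop(0)
            continue
        theta = (bin_idx + 0.5) * math.pi / angle_bins
        nx, ny = math.sin(theta), -math.cos(theta)   # unit normal to the line
        c = nx * p0[0] + ny * p0[1]                  # line: nx*x + ny*y = c
        members = [p for p in points if abs(nx * p[0] + ny * p[1] - c) <= tol]
        lines.append((p0, theta))
        points = [p for p in points if p not in members]
    return lines
```

Because the accumulator is indexed only by angle, its size is independent of image resolution, which is consistent with the abstract's claim that processing time does not grow with the accuracy of the extracted lines.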
{"title":"Model-based 2D edge detection using bottom-up strategy","authors":"P. Besslich, E. Forgber","doi":"10.1109/MDSP.1989.97004","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97004","url":null,"abstract":"Summary form only given. It has been shown that a bottom-up strategy using an improved version of the optimized 2D edge filter fits the requirements of edge detection in real-world 2D images better than the top-down approach. The results are obtained by first operating on the full resolution, gradually restricting it to improve detection capabilities. The 2D filter has been applied to a bottom-up multiresolution edge detection scheme. All contour segments are registered. The behavior is observed while a fine-to-coarse tracing is performed, and the segments are classified as to whether they carry relevant information or not. The optimal edge detector has been tested using two strategies: a top-down procedure and the bottom-up scale-space scheme. While the computational burden is the same in both cases, the bottom-up approach provides considerable improvement in noise suppression and always detects the full length of a contour as obtained at the finest scale.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129884511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New constraints for set-theoretic image restoration with artifact suppression","authors":"M. Sezan, H. Trussell","doi":"10.1109/MDSP.1989.97102","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97102","url":null,"abstract":"Summary form only given. The use of two new constraints in image restoration using set-theoretic algorithms has been investigated. The constraints aim to reduce the signal-dependent and filtered noise artifacts in the resulting restoration. The first constraint proposed is the 'bounded variation from the Wiener solution' constraint. The second is a continuously varying smoothness constraint.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"282 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129994431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
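Set-theoretic restoration of the kind this abstract refers to is usually carried out by alternating projections onto the constraint sets. The sketch below is a toy stand-in, not the authors' method: it projects onto an amplitude box and onto a ball of fixed radius around a reference solution, the latter loosely echoing the "bounded variation from the Wiener solution" constraint.

```python
import numpy as np


def pocs_restore(x0, lo, hi, ref, radius, iters=100):
    """Alternating projections onto two convex sets: the amplitude box
    [lo, hi] and the ball {x : ||x - ref|| <= radius} around a reference
    (e.g. Wiener) solution. Hypothetical illustration of POCS iteration."""
    x = np.array(x0, dtype=float)
    ref = np.asarray(ref, dtype=float)
    for _ in range(iters):
        x = np.clip(x, lo, hi)            # project onto the box
        d = x - ref
        n = np.linalg.norm(d)
        if n > radius:                    # project onto the ball if outside
            x = ref + d * (radius / n)
    return x
```

Since both sets are convex and the projections are exact, the iterates converge to a point in the intersection whenever it is nonempty.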
{"title":"Spherical harmonic analysis of the electroencephalogram","authors":"G. Shaw, Z. Koles","doi":"10.1109/MDSP.1989.97020","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97020","url":null,"abstract":"Summary form only given. The theoretical and practical problems of performing spherical harmonic analysis (SHA) on the electroencephalogram (EEG) have been investigated. One problem considered is related to sampling theory, that is, determining the effect of sample positions and detecting the presence of spatial aliasing. Another involves constraining the solution of the spherical harmonic coefficients to prevent instabilities. It has been found that the unconstrained solution does not always provide a realistic global solution. Some of the practical problems that have been investigated are spatial bandwidth estimation and placement errors. Spatial bandwidth estimation is important for determining the minimum number of electrodes required to sample the EEG adequately and to determine the order in an SHA. The sensitivity of the results to errors in determining the electrode positions on the scalp has also been studied.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"49 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130886227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
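Fitting spherical harmonic coefficients from scalp samples reduces to a linear least-squares problem, and the instability the abstract mentions arises when the design matrix is ill-conditioned for a given electrode layout. A minimal sketch, assuming only degree-≤1 real harmonics (constant, x, y, z on the unit sphere) and a simple ridge term as a stand-in for the authors' constraint:

```python
import numpy as np


def fit_low_order_harmonics(points, values, lam=1e-6):
    """Ridge-regularized least-squares fit of real spherical harmonics up to
    degree 1, using the basis (1, x, y, z) on the unit sphere. The lam*I term
    is a hypothetical stabilizer, not the constraint used in the paper."""
    x, y, z = np.asarray(points, dtype=float).T
    A = np.column_stack([np.ones_like(x), x, y, z])
    lhs = A.T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(lhs, A.T @ np.asarray(values, dtype=float))
```

With well-distributed sample positions the coefficients are recovered accurately; clustering all electrodes on one patch of the sphere makes `A.T @ A` nearly singular, which is the position-dependent sampling problem the abstract raises.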
{"title":"Trends in array signal processing","authors":"D.H. Johnson","doi":"10.1109/MDSP.1989.97025","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97025","url":null,"abstract":"Summary form only given, as follows. Arrays and associated signal processing algorithms have been in use for many years. In recent years, a new wave of algorithms has swept upon the scene promising superior performance at the expense of computational load. Rather than a litany of unrelated algorithms, these new approaches are based on a small number of key ideas drawn from rather diverse mathematical concepts: linear algebra (especially eigenanalysis), optimization theory, and robust statistics. These modern techniques will be surveyed in a unified way so that their relationships can be understood. A critical aspect of evaluating these algorithms will be the tradeoff between performance and computation.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134022615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The design of multidimensional FIR perfect reconstruction filter banks for arbitrary sampling lattices","authors":"J. Allebach, E. Viscito","doi":"10.1109/MDSP.1989.97095","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97095","url":null,"abstract":"Summary form only given. The design of multidimensional perfect reconstruction filter banks (PRFBs) for arbitrary sampling lattices has been addressed. Necessary and sufficient conditions have been formulated for perfect reconstruction within this general context, and the design of multidimensional finite-impulse-response (FIR) PRFBs has been shown using a method that optimizes directly over the impulse response coefficients of the analysis and synthesis filters, expressing the perfect reconstruction condition as a set of equality constraints involving the impulse response coefficients. Symmetries among various filters in the filter bank or within a single filter not only serve to reduce the number of variables in the design problem, but also manifest themselves in the form of automatically satisfied constraints and redundancies among the constraints. In both cases, the total number of constraints in the design problem is reduced. The multidimensional filter banks share some desirable properties with their one-dimensional counterparts: the analysis and synthesis filters are equal complexity FIR filters, and it is possible to design systems with any number of channels.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134523232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
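The paper treats multidimensional banks on arbitrary lattices; the simplest instance of the perfect reconstruction property it builds on is the 1-D two-channel case, shown here with the Haar pair (this example is for orientation only and is far simpler than the designs in the paper):

```python
import numpy as np


def haar_analysis(x):
    """Two-channel Haar analysis bank: split an even-length signal into
    half-rate lowpass (sum) and highpass (difference) subbands."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return s, d


def haar_synthesis(s, d):
    """Synthesis bank that inverts haar_analysis exactly -- the perfect
    reconstruction condition holds with equality, not approximately."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x
```

In the paper's formulation, the identity `haar_synthesis(haar_analysis(x)) == x` generalizes to a set of equality constraints on the FIR coefficients of all analysis and synthesis filters, over whatever sampling lattice is chosen.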
{"title":"Environmental signal processing (ESP): an application of three-dimensional matched field processing in the ocean","authors":"W. Kuperman, J. Perkins","doi":"10.1109/MDSP.1989.97039","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97039","url":null,"abstract":"Summary form only given. Simulations of matched field processing (MFP) in a complex three-dimensional ocean environment indicate that the environment itself can be exploited for the purpose of array processing. Work discussed in a preceding poster paper has been extended to the range-dependent 3-D environment case. Self-consistent MFP simulations in a noisy 3-D environment have been produced using the same numerical model for constructing the signal and correlated noise field. The approach has been applied to an ocean environment that has a portion of the Gulf Stream running through an area of bathymetric variability together with a storm that produces an anisotropic (horizontally and vertically correlated) noise field. Linear and nonlinear MFP processing is considered for vertical and horizontal arrays. The complexity of the environment has been shown to enhance MFP rather than degrade it.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132150989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
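The linear processing this abstract refers to is conventionally the Bartlett matched field processor: the measured array snapshot is correlated against modeled replica fields, one per candidate source location. A minimal sketch (the ocean propagation model that generates the replicas is outside its scope, and this is not the authors' specific processor):

```python
import numpy as np


def bartlett_mfp(replicas, data):
    """Bartlett matched field processor: normalized correlation of the
    measured complex array snapshot against each candidate replica field;
    returns the index of the best match and all ambiguity-surface powers."""
    data = np.asarray(data, dtype=complex)
    data = data / np.linalg.norm(data)
    powers = []
    for r in replicas:
        w = np.asarray(r, dtype=complex)
        w = w / np.linalg.norm(w)
        powers.append(float(np.abs(np.vdot(w, data)) ** 2))
    return int(np.argmax(powers)), powers
```

The paper's point is that a richer 3-D environment makes the replicas for different locations more distinguishable, sharpening the peak of this ambiguity surface rather than degrading it.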
{"title":"Adaptive filter for processing of multichannel nonstationary seismic data","authors":"K. Srivastava, V. Dimri","doi":"10.1109/MDSP.1989.97018","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97018","url":null,"abstract":"Summary form only given, as follows. There are several types of time-varying filters designed for processing of nonstationary data. The old method of dividing the nonstationary sequence into piecewise stationary sections has been modified to design an adaptive filter for multichannel time-varying seismic data. The design principle is illustrated through some synthetic seismic traces. It is found that the adaptive multichannel filter suppresses the coherent noise present in the seismogram.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114219212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
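The piecewise-stationary idea in this abstract, dividing the nonstationary trace into sections and designing a filter per section, can be illustrated with a deliberately simplified single-channel sketch. The scalar per-segment Wiener-style gain below is a toy stand-in for the per-section multichannel design; `noise_var` is an assumed known noise level:

```python
import numpy as np


def piecewise_wiener(x, seg_len, noise_var):
    """Split the trace into fixed-length segments, treat each as stationary,
    and shrink each segment toward its mean with the scalar Wiener-style gain
    max(0, 1 - noise_var / segment_var). Toy illustration only."""
    y = np.array(x, dtype=float)
    for start in range(0, len(y), seg_len):
        seg = y[start:start + seg_len]
        var = seg.var()
        gain = max(0.0, 1.0 - noise_var / var) if var > 0 else 0.0
        y[start:start + seg_len] = seg.mean() + gain * (seg - seg.mean())
    return y
```

Segments whose variance is at or below the noise level are flattened to their mean, while high-energy segments pass almost unchanged, which is the basic adaptation-to-local-statistics behavior that a per-section design provides.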