Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.911246
P. Thanyasrisung, I. Reed, X. Yu, J. S. Goldstein, P. Zulch
The linear feature mapping detector (LFMD) developed by Yu and Reed (1995) yields excellent results for detecting a 2-D signal with limited prior information about the signal waveform and the statistical properties of the clutter. However, a direct implementation of the original version of the LFMD criterion in real time for high-resolution data may not be practical at the present time. In this paper, rank-reduction techniques for signal processing were studied in both theory and practice in order to improve the LFMD for real-time target detection in X-band SAR imagery. It is demonstrated that the proposed reduced-rank detector can lower the computational complexity and decrease the amount of sample support for parameter estimation while providing excellent performance.
"Reduced-rank automatic target detection and recognition," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 1530-1534 vol. 2.
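A minimal sketch of the rank-reduction idea the abstract describes: project data onto the dominant eigenvectors of the estimated clutter covariance and run a whitened matched-filter statistic in that low-dimensional subspace. This is a generic eigendecomposition-based reduced-rank detector, not the authors' LFMD; the dimensions, rank, signal template, and synthetic clutter model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples, r = 32, 200, 4          # dimension, sample support, reduced rank

# Synthetic clutter with a low-rank-dominant covariance (illustrative only).
A = rng.standard_normal((n, r))
clutter = (A @ rng.standard_normal((r, n_samples))
           + 0.1 * rng.standard_normal((n, n_samples)))

R_hat = clutter @ clutter.T / n_samples      # sample covariance estimate
w, V = np.linalg.eigh(R_hat)                 # eigenvalues in ascending order
T = V[:, -r:]                                # dominant-eigenvector basis

s = rng.standard_normal(n)
s /= np.linalg.norm(s)                       # assumed signal template
x = s + 0.1 * rng.standard_normal(n)         # test vector containing the signal

# Reduced-rank whitened matched-filter statistic, computed in the r-dim subspace,
# so only an r x r (here diagonal) covariance must be estimated and inverted.
R_r = np.diag(w[-r:])
s_r, x_r = T.T @ s, T.T @ x
stat = (s_r @ np.linalg.solve(R_r, x_r)) ** 2 / (s_r @ np.linalg.solve(R_r, s_r))
print(float(stat))
```

The computational saving shows up in the solve step: the full-rank detector inverts an n x n covariance estimated from many samples, while the reduced-rank version works with an r x r matrix.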
Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.910929
J. N. Coleman, J. Kadlec
We present a technique with which arithmetic implemented in the logarithmic number system may be performed at considerably higher precision than normally available at 32 bits, with little additional hardware or execution time. Use of the technique requires that all data lie in a restricted range, and relies on scaling each such value into the maximum range of the number system. We illustrate the procedure using a recursive least squares algorithm. We show that the restriction is easily accommodated, and that the technique can yield very substantial gains in accuracy and numerical stability over 32-bit floating-point.
"Extended precision logarithmic arithmetic," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 124-129 vol. 1.
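A toy illustration of the two ingredients the abstract names: representing values by their base-2 logarithms (so multiplication becomes addition) and scaling every value into a favorable part of the number system's range. The single global `SCALE` factor is a hypothetical stand-in for the paper's per-value scaling, and this floating-point sketch says nothing about the hardware precision gains themselves.

```python
import math

SCALE = 2.0 ** 20   # hypothetical factor scaling data toward the top of the range

def to_lns(x):
    # store the sign separately and the base-2 log of the scaled magnitude
    return (math.copysign(1.0, x), math.log2(abs(x) * SCALE))

def lns_mul(a, b):
    (sa, la), (sb, lb) = a, b
    # multiplication is addition of logs; one copy of SCALE must be divided out
    return (sa * sb, la + lb - math.log2(SCALE))

def from_lns(v):
    s, l = v
    return s * 2.0 ** l / SCALE

x, y = 3.5, -0.002
z = from_lns(lns_mul(to_lns(x), to_lns(y)))
print(z)   # close to 3.5 * -0.002 = -0.007
```

The restriction the abstract mentions corresponds here to requiring that `abs(x) * SCALE` stay within the representable range of the log domain.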
Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.910994
J. L. Sullivan, J.W. Adams
In previous papers we introduced the generalized multiple exchange (GME) algorithm to design linear phase FIR filters, and the recursive GME (RGME) algorithm to design digital IIR filters, analog filters, nonlinear phase FIR filters, and wavelet scaling filters. The algorithms are based on the peak-constrained least-squares (PCLS) optimality criterion. In this paper we introduce four new algorithms to design specialized multirate filters and other cascaded filters. The new algorithms are based on the GME and RGME algorithms. The filters designed are specialized multirate filters, interpolated FIR (IFIR) filters, hybrid analog-digital anti-aliasing filters and prefilter/equalizer linear phase FIR filters.
"New optimization algorithms for multirate and cascaded filters," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 445-449 vol. 1.
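To make the interpolated FIR (IFIR) structure mentioned in the abstract concrete, the sketch below builds the classic cascade: a prototype lowpass expanded by zero insertion, `G(z^M)`, followed by an image-suppressing interpolator. The windowed-sinc prototypes are crude placeholders for illustration only; the paper's designs are PCLS-optimal, which this sketch makes no attempt at. The expansion factor and cutoffs are assumed values.

```python
import numpy as np

M = 4                                   # expansion factor (assumed)

def lowpass(num_taps, cutoff):
    # crude windowed-sinc lowpass (cutoff in cycles/sample); not PCLS-optimal
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff * n) * np.hamming(num_taps)
    return h / h.sum()

g = lowpass(31, 0.10)                   # prototype filter
g_up = np.zeros(len(g) * M - (M - 1))
g_up[::M] = g                           # G(z^M): response compressed, images appear
i = lowpass(23, 0.06)                   # interpolator suppresses the images
h_ifir = np.convolve(g_up, i)           # cascade = effective narrowband IFIR filter

# the cascade needs far fewer nonzero multipliers than a direct design of this length
print(np.count_nonzero(g_up) + len(i), "multipliers for", len(h_ifir), "effective taps")
```

This is why IFIR filters are attractive targets for the paper's cascade-aware optimization: the two short stages must be designed jointly for the overall response to meet its specification.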
Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.911230
A. Erdem Ertan, T. Barnwell
Speech and audio processing algorithms, which are based on the processing of features and signals, are often written using poor programming styles. Understanding the existing source code and extending it is thus a time-consuming process that forces researchers to deal with programming problems instead of speech and audio processing innovations. We have developed a new system in C++ to overcome these problems. The programming techniques used in this environment allow a researcher to concentrate on innovations in an environment that still allows the rapid implementation of efficient real-time speech and audio processing applications.
"A C++ research and development environment for speech and audio processing applications," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 1449-1453 vol. 2.
Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.911053
H. Sampath, P. Stoica, A. Paulraj
We derive a jointly optimum space-time linear precoder and decoder for a multi-input multi-output (MIMO) channel with delay spread, using the weighted MMSE criterion. We show that our solution provides a unified framework from which several well-known designs, as well as new designs, can be developed. As an example, we show how to design a QoS-based precoder and decoder that can optimally transmit and receive independent data streams, each with different coding, modulation, and target-BER requirements.
"A generalized space-time linear precoder and decoder design using the weighted MMSE criterion," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 753-758 vol. 1.
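One well-known member of the design family the abstract alludes to is the SVD-based eigenmode transceiver: precoding with the right singular vectors and decoding with the left singular vectors turns a flat MIMO channel into parallel independent streams. The sketch below shows only this unweighted, noiseless special case, not the authors' general weighted-MMSE solution for delay-spread channels; the antenna counts and channel model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
nt = nr = 4
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))

U, s_vals, Vh = np.linalg.svd(H)
F = Vh.conj().T          # linear precoder: right singular vectors of H
G = U.conj().T           # linear decoder: left singular vectors of H

x = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)   # data streams
y = G @ (H @ (F @ x))    # noiseless transmit-channel-receive cascade

# G H F = diag(singular values), so each stream arrives decoupled and scaled
print(np.allclose(y, s_vals * x))   # True
```

Per-stream decoupling is what enables the QoS design the abstract mentions: each independent stream can then carry its own coding, modulation, and target BER.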
Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.910663
M. Yan, B. Rao
In this paper, we make use of our general results on tracking fast fading channels to study several important special cases, providing insights and exploring connections with well-known existing results. The following special cases are considered: quasi-static Rayleigh fading channels, fast Rician fading channels, and fully correlated fast Rayleigh fading channels. In particular, the roles of channel estimation errors, antenna correlation, and user signal correlation in detector performance are quantified. In addition, we quantify the effect of the number of training symbols on the performance of our MAP receiver with a Kalman filter. Finally, we compare our MAP receiver with two other adaptive receivers, the adaptive channel predictor and the adaptive MMSE combiner. The study uses a combination of analysis and simulations.
"Performance of antenna array receivers in autoregressive flat fading channels," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 995-999 vol. 2.
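The Kalman-filter channel tracking mentioned in the abstract can be sketched in its simplest form: a scalar real-valued AR(1) flat-fading gain observed through known training symbols. This is a minimal single-antenna caricature under assumed parameter values, not the paper's receiver, which handles antenna arrays and complex fading.

```python
import numpy as np

rng = np.random.default_rng(2)
a = 0.98                      # AR(1) fading coefficient (assumed)
q = 1 - a ** 2                # process noise variance, unit-power fading
r = 0.05                      # measurement noise variance (assumed)
T = 500

# simulate a scalar flat-fading gain h_t = a * h_{t-1} + w_t (real-valued sketch)
h = np.zeros(T)
for t in range(1, T):
    h[t] = a * h[t - 1] + np.sqrt(q) * rng.standard_normal()
pilots = np.ones(T)           # known training symbols (assumed all-ones)
y = h * pilots + np.sqrt(r) * rng.standard_normal(T)

# scalar Kalman filter tracking h_t from the pilot observations
h_hat, P = 0.0, 1.0
err = np.empty(T)
for t in range(T):
    h_pred, P_pred = a * h_hat, a ** 2 * P + q                  # time update
    k = P_pred * pilots[t] / (pilots[t] ** 2 * P_pred + r)      # Kalman gain
    h_hat = h_pred + k * (y[t] - pilots[t] * h_pred)            # measurement update
    P = (1 - k * pilots[t]) * P_pred
    err[t] = (h_hat - h[t]) ** 2

print(err[50:].mean())        # tracking MSE after the initial transient
```

Increasing the number of training symbols tightens the filter's steady-state error, which is the effect the paper quantifies for its MAP receiver.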
Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.910681
M. Gupta, A. Gilbert
We explore the use of multiresolution analysis for vector signals, such as multispectral images or stock market portfolio time series. These signals often contain local correlations among components that are overlooked in a component-by-component analysis. We show that defining a coarse signal by taking local arithmetic averages is equivalent to analyzing the signal component by component, whereas using the average that minimizes the L^2 distance to the local points results in a non-separable vector multiresolution analysis. We propose using the vector multiresolution representation for signal processing tasks such as compression and denoising. We prove some results in denoising and present color image examples.
"Nonlinear vector multiresolution analysis," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 1077-1081 vol. 2.
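To contrast separable and vector-valued coarsening, the sketch below computes one coarse level two ways: component-wise block averaging, and a nonlinear rule that keeps a whole vector sample from each block. The second rule is an illustrative reading of "vector" coarsening, not the authors' exact construction; the signal length, block size, and component count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal((8, 3))   # length-8 signal of 3-component vectors

# separable (component-by-component) coarsening: block arithmetic means
coarse_sep = x.reshape(2, 4, 3).mean(axis=1)

# a nonseparable alternative (illustrative assumption, not the paper's exact rule):
# keep, from each block, the whole vector sample closest in L2 norm to the block
# mean, so the coarse signal is built from actual vectors of the input rather
# than from independently averaged components
def nearest_to_mean(block):
    d = np.linalg.norm(block - block.mean(axis=0), axis=1)
    return block[np.argmin(d)]

coarse_vec = np.stack([nearest_to_mean(x[4 * i:4 * i + 4]) for i in range(2)])
print(coarse_sep.shape, coarse_vec.shape)
```

The point of the contrast is structural: the first rule factors across components, while the second couples them, which is what makes the resulting analysis non-separable.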
Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.910966
A. Kasapi, S.B. Da Torre, A. Roger, A. Kerr, A. Nolan
Adaptive antenna array methods in cellular applications are typically constrained by cost to the basestation. In order to achieve high network capacity, however, both the uplink and downlink channels must be improved; the basestation therefore must make use of reciprocity to transmit minimal interference to interfering co-channel users, while sending maximal power to desired users. In PHS, the primary limitation to reciprocity is channel motion, due either to user motion or environmental motion. This paper introduces statistics to quantify this motion on a network-wide level. The results show great repeatability and are directly relevant to the assessment of network performance with adaptive array antennas.
"Massive scale air interface reciprocity (motion) survey of a PHS network," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 297-300 vol. 1.
Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.910913
S. Hayward, P. D. Baxter, T. Shepherd
A system employing a narrow receiver bandwidth can achieve high range resolution through 'stretch' processing of linear frequency modulated (LFM) waveforms. At the output of the stretch processor the spatial statistics of interferers are time-varying, and consequently time-varying beamforming weights are required to achieve adequate interference cancellation. In this paper we describe a family of time-varying adaptive beamforming algorithms and assess their performance in terms of the quality of the resulting target range profiles. We show that there are performance advantages in using methods that constrain the weight vector to be smoothly time-varying.
"Adaptive interference cancellation using time-varying beamforming weights for wideband LFM waveforms," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 30-35 vol. 1.
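The stretch processing step the abstract builds on can be sketched directly: mixing a delayed LFM echo with the reference chirp (deramping) converts round-trip delay into a constant beat frequency, so a narrowband FFT recovers range. The pulse parameters and target delay below are assumed values, and the sketch covers only the deramp itself, not the time-varying adaptive beamforming that is the paper's contribution.

```python
import numpy as np

fs = 1e6                      # ADC rate: the narrow receiver bandwidth (assumed)
Tp = 1e-3                     # pulse length (assumed)
B = 100e6                     # swept bandwidth, far wider than fs (assumed)
k = B / Tp                    # chirp rate
t = np.arange(int(fs * Tp)) / fs

tau = 2e-6                    # round-trip delay of a hypothetical point target
ref = np.exp(1j * np.pi * k * t ** 2)            # reference LFM
rx = np.exp(1j * np.pi * k * (t - tau) ** 2)     # delayed echo (unit amplitude)

beat = rx * ref.conj()        # deramp: delay maps to a constant beat frequency
spec = np.abs(np.fft.fft(beat))
f = np.fft.fftfreq(len(beat), 1 / fs)
f_peak = abs(f[np.argmax(spec)])
print(f_peak, k * tau)        # the spectral peak sits near k * tau
```

Because deramping smears an interferer's energy across the pulse in a delay-dependent way, the interference statistics at the stretch-processor output vary with time, which is exactly why the paper needs time-varying beamforming weights.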
Pub Date: 2000-10-29 | DOI: 10.1109/ACSSC.2000.911001
A. Tsai, A. Yezzi, A. Willsky
We first address the problem of simultaneous image segmentation and smoothing by approaching the Mumford-Shah (1989) paradigm from a curve evolution perspective. In particular, we let a set of deformable contours define the boundaries between regions in an image, where we model the data via piecewise smooth functions and employ a gradient flow to evolve these contours. Next, we generalize the data fidelity term of the original Mumford-Shah functional to incorporate a spatially varying penalty. This more general model leads us to a novel partial differential equation (PDE) based approach for simultaneous image magnification, segmentation, and smoothing.
"A PDE approach to image smoothing and magnification using the Mumford-Shah functional," in Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers (Cat. No.00CH37154), pp. 473-477 vol. 1.
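The smoothing half of the approach can be illustrated with the degenerate Mumford-Shah case that drops the edge set entirely: gradient flow on a fidelity-plus-smoothness energy, E(u) = ||u - f||^2 + lambda * ||grad u||^2, discretized in 1-D. This sketch shows only the smoothing gradient flow under assumed step-size and weight parameters, not the contour evolution or spatially varying penalty of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
# noisy piecewise-constant test signal (a 1-D stand-in for image data)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)

lam, dt = 5.0, 0.05           # smoothness weight and time step (assumed values)
u = f.copy()
for _ in range(500):
    lap = np.zeros_like(u)
    lap[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]       # 1-D Laplacian, Neumann ends
    lap[0], lap[-1] = u[1] - u[0], u[-2] - u[-1]
    u = u + dt * (lam * lap - (u - f))             # explicit gradient-flow step

print(np.var(np.diff(u)), np.var(np.diff(f)))      # smoothed vs. noisy roughness
```

Without an edge set this flow blurs across the step as well as the noise; the Mumford-Shah contours exist precisely to stop the smoothing at region boundaries.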