Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369562
J. Tuthill, G. Hampson, J. Bunton, F. Harris, A. Brown, R. Ferris, T. Bateman
In order to maximize science returns in radio astronomy there is a constant drive to process ever wider instantaneous bandwidths. A key function of a radio telescope signal processing system is to divide a wide input bandwidth into a number of narrow sub-bands for further processing and analysis. The polyphase filter-bank channelizer has become the primary technique for performing this function due to its flexibility and suitability for very efficient implementation in FPGA hardware. Furthermore, oversampling polyphase filter-banks are gaining popularity in this role due to their ability to reduce spectral image components in each sub-band to very low levels for a given prototype filter response. A characteristic of the oversampling operation in a polyphase filter-bank, however, is that the resulting sub-band outputs are in general no longer centered on DC (as is the case for a maximally decimated filter-bank) but are shifted by an amount that depends on the index of the sub-band. In this paper we present the structure of the oversampled polyphase filter-bank used for the new Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope and describe a technique to correct for the sub-band frequency shift brought about by oversampling.
Title: "Compensating for oversampling effects in polyphase channelizers: A radio astronomy application" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 255-260.
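The correction the abstract describes rests on the DFT circular-shift theorem: in an M-channel filter-bank decimated by D < M, each frame's input buffer is circularly offset by (m*D) mod M, which appears at the channel outputs as a channel-dependent phase rotation. The sketch below only verifies that identity numerically with hypothetical M, D, and frame index; it is not the ASKAP implementation itself.

```python
import numpy as np

# DFT circular-shift identity underlying sub-band derotation (example sizes):
# an M-channel filter-bank decimated by D < M advances its input buffer by
# D samples per frame, leaving frame m with a circular offset s = (m*D) % M.
M, D, m = 8, 6, 3                       # hypothetical channels, decimation, frame
rng = np.random.default_rng(0)
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # one frame's buffer
s = (m * D) % M

# Option 1: circularly rotate the buffer before the FFT
X_rot = np.fft.fft(np.roll(x, -s))

# Option 2: FFT first, then derotate channel k by exp(+j*2*pi*k*s/M)
k = np.arange(M)
X_derot = np.fft.fft(x) * np.exp(1j * 2 * np.pi * k * s / M)

assert np.allclose(X_rot, X_derot)
```

Either form removes the frame-dependent shift; hardware designs typically prefer the buffer rotation because it avoids per-channel complex multipliers.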
Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369530
A. G. Klein
We study the use of an inquiry-based lab to introduce communication systems to undergraduate electrical engineering students with prior knowledge of signals and systems. The students are not given an explicit list of procedures to follow; instead, they are prompted to design and build a complete end-to-end wireless acoustic digital transceiver on their own, using inexpensive off-the-shelf components, before they have had any exposure to analog or digital radio concepts. Qualitative evaluation suggests this process of discovery, problem solving, and experimentation provides context for students when theoretical and abstract communication systems concepts are subsequently introduced in lecture. Survey results suggest this open-ended, hands-on approach is an effective teaching and learning technique for introducing communication systems, and several possible extensions of the approach are discussed.
Title: "An inquiry-based acoustic signal processing lab module for introducing digital communications" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 71-76.
Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369522
M. Rahmani, George K. Atia
Robust Principal Component Analysis (PCA), or robust subspace recovery, is a particularly important problem in unsupervised learning with a broad range of applications. In this paper, we analyze a randomized robust subspace recovery algorithm and show that its complexity is independent of the size of the data matrix. Exploiting the intrinsic low-dimensional geometry of the low-rank matrix, the large data matrix is first reduced to a smaller compressed form. This is accomplished by selecting a small random subset of the columns of the given data matrix, which is then projected into a random low-dimensional subspace. In the next step, a convex robust PCA algorithm is applied to the compressed data to learn the column subspace of the low-rank matrix. We derive new sufficient conditions showing that the number of linear observations and the complexity of the randomized algorithm do not depend on the size of the given data.
Title: "Analysis of randomized robust PCA for high dimensional data" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 25-30.
Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369565
Yifeng Wu, R. Black, B. Jeffs, K. Warnick
Radio astronomy instrumentation uses phased array feeds to provide radio telescopes with wider fields of view and enhanced beam control for detection and interference suppression. The standard assumption in radio astronomy is that receiver amplifiers operate in a linear region. In the presence of strong radio-frequency interference (RFI), however, it is possible to drive the amplifiers into a non-linear regime. This can cause out-of-band RFI to mix harmonics and intermodulation products into the filter passband. In this scenario, classical RFI-mitigating beamformers may fail to suppress the interference effectively. This paper analyzes the performance of several beamformers in suppressing interference produced by non-linear amplifiers. Experimental results show that a subspace projection beamformer is able to suppress the interference despite the non-linear RFI.
Title: "Cancelling non-linear processing products due to strong out-of-band interference in radio astronomical arrays" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 272-277.
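A subspace projection beamformer of the kind the experiments evaluate can be sketched for a hypothetical uniform linear array: estimate the interference subspace from the dominant eigenvector of the sample covariance, then project it out of a conventional beamformer. Every model parameter below (array size, directions, interference power) is an illustrative assumption, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ant, n_snap = 8, 2000

# Hypothetical narrowband uniform-linear-array model, half-wavelength spacing
def steer(theta):
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta))

a_sig = steer(0.0)                  # look direction (boresight)
a_rfi = steer(0.6)                  # assumed interferer direction

rfi = 10 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = (rng.standard_normal((n_ant, n_snap))
         + 1j * rng.standard_normal((n_ant, n_snap))) / np.sqrt(2)
X = np.outer(a_rfi, rfi) + noise    # snapshots: strong interference + noise

# Subspace projection: the dominant eigenvector of the sample covariance spans
# the interference subspace; project it out of a fixed beamformer.
R = X @ X.conj().T / n_snap
_, V = np.linalg.eigh(R)            # eigenvalues in ascending order
U = V[:, -1:]                       # strongest eigenvector ~ RFI subspace
P = np.eye(n_ant) - U @ U.conj().T
w0 = a_sig / n_ant                  # conventional (fixed) beamformer
w = P @ w0                          # projected beamformer

# The projection nulls the interferer while nearly preserving the look direction
assert abs(np.vdot(w, a_rfi)) < 0.1 * abs(np.vdot(w0, a_rfi))
assert abs(np.vdot(w, a_sig)) > 0.9 * abs(np.vdot(w0, a_sig))
```

The same projection applies whether the interference arrives linearly or as non-linear products, as long as its spatial signature is captured by the dominant eigenvectors, which is the property the paper exploits.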
Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369581
Trace A. Griffiths, G. Ware, T. Moon
Digital multispectral imaging (MSI) has been widely adopted to aid in the study of ancient artefacts including paintings and documents. MSI captures views of the subject at multiple narrowband wavelengths ranging from the ultraviolet through the infrared. Stacking the imagery data in three dimensions creates a large image data cube which can be processed using statistical signal and image processing techniques. This paper briefly reviews how signal processing can aid in addressing three general problem areas that arise in MSI data sets of ancient documents: image fusion, ink identification, and bleed-through removal.
Title: "Signal processing techniques for enhancing multispectral images of ancient documents" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 364-369.
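As a minimal illustration of the kind of statistical processing applied to an MSI data cube, the sketch below stacks the bands into a matrix and extracts the leading principal-component image, a common building block for image fusion. The data are synthetic and this is not the paper's specific pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)
n_bands, h, w = 12, 32, 32                       # example cube dimensions

# Synthetic cube: one spatial pattern shared across all bands, plus noise
pattern = rng.standard_normal(h * w)
loadings = rng.standard_normal(n_bands)
cube = 3 * np.outer(loadings, pattern) + rng.standard_normal((n_bands, h * w))
cube = cube.reshape(n_bands, h, w)

# Stack bands into (bands x pixels), remove per-band means, and take the SVD;
# the leading right singular vector is the first principal-component image.
X = cube.reshape(n_bands, -1)
X = X - X.mean(axis=1, keepdims=True)
_, S, Vt = np.linalg.svd(X, full_matrices=False)
fused = Vt[0].reshape(h, w)                      # candidate fused image

# PC1 recovers the shared spatial structure (up to sign)
assert abs(np.corrcoef(fused.ravel(), pattern)[0, 1]) > 0.9
```

In real document imagery the same decomposition tends to separate shared structure (substrate, dominant ink) from band-specific features, which is why PCA-style transforms appear in fusion and bleed-through work.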
Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369537
T. Ghirmai
In this paper, we present a convenient way of generating a Laplace process with a desired autocorrelation. Our approach is based on the fact that the real or imaginary component of the product of two independent complex Gaussian random variables has a Laplace marginal probability density function (pdf). We therefore generate a Laplace process by multiplying two independent complex Gaussian autoregressive (AR) processes. By establishing the relationship between the autocorrelation of the complex Gaussian AR processes and the autocorrelation of the resulting Laplace process, we obtain a convenient and simple method for selecting the parameters of the Gaussian AR processes to achieve desired autocorrelation values of the Laplace process. To verify the method, we provide computer simulations that generate Laplace processes using illustrative examples and compare their statistical characteristics to theoretical values.
Title: "Generating Laplace process with desired autocorrelation from Gaussian AR processes" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 113-117.
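The construction is easy to reproduce numerically: generate two independent unit-variance complex Gaussian AR(1) processes and take the real part of their product. The checks below assume (as a derived property of this sketch, with example AR coefficients) that the result has variance 1/2 and normalized autocorrelation (a1*a2)^|m|.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
a1, a2 = 0.9, 0.7                    # example AR(1) coefficients

def cgauss_ar1(a, n):
    # Unit-variance complex Gaussian AR(1): x[k] = a*x[k-1] + w[k],
    # with innovation variance (1 - a^2) split across real/imag parts.
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt((1 - a**2) / 2)
    x = np.empty(n, dtype=complex)
    x[0] = w[0]
    for k in range(1, n):
        x[k] = a * x[k - 1] + w[k]
    return x

x1, x2 = cgauss_ar1(a1, N), cgauss_ar1(a2, N)
y = np.real(x1 * x2)[1000:]          # discard transient; Laplace-distributed samples

# Marginal variance 1/2 and lag-1 normalized autocorrelation a1*a2
assert abs(np.var(y) - 0.5) < 0.05
r1 = np.mean(y[:-1] * y[1:]) / np.var(y)
assert abs(r1 - a1 * a2) < 0.05
```

Because the output autocorrelation is the product of the two input autocorrelations, a target Laplace autocorrelation can be split between the two AR processes, which is the design freedom the paper exploits.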
Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369560
J. Schatzman
The difference between continuous-time and discrete-time Cross-Ambiguity Functions (CAFs) can be significant. Both narrowband and wideband CAFs can be computed exactly with discretization, but the usual implementation of the narrowband CAF introduces an error that increases with frequency difference of arrival (FDOA). The error is largest for signal modulations with non-symmetric CAF-plane signatures and for large FDOA values. The wideband CAF is unaffected by this deficiency whether or not a variable-delay/variable-rate filter is employed. Simple and relatively low-cost post-processing can largely correct the discretization error for the narrowband CAF.
Title: "The Cross-Ambiguity Function for emitter location and radar - practical issues for time discretization" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 243-248.
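The discrete narrowband CAF the paper discusses can be prototyped directly: for each trial delay, form the lag product of the two captures and FFT across time, then locate the surface peak over delay (TDOA) and frequency (FDOA). A small self-check with a synthetic delayed, Doppler-shifted copy (circular shifts and bin-aligned Doppler for simplicity; a real emitter-location system faces the off-grid effects the paper analyzes):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1024
s1 = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # reference capture

# Synthetic second capture: circularly delayed and Doppler-shifted copy
true_tau, true_bin = 7, 12                 # TDOA in samples, FDOA in FFT bins
n = np.arange(N)
s2 = np.roll(s1, true_tau) * np.exp(1j * 2 * np.pi * true_bin * n / N)

# Narrowband CAF: for each trial delay, lag product followed by an FFT over time
taus = np.arange(16)
caf = np.empty((len(taus), N))
for i, tau in enumerate(taus):
    caf[i] = np.abs(np.fft.fft(s2 * np.roll(s1, tau).conj()))

# The peak of the CAF surface locates the true delay/Doppler pair
i_peak, k_peak = np.unravel_index(np.argmax(caf), caf.shape)
assert taus[i_peak] == true_tau and k_peak == true_bin
```
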
Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369583
Shawn Higbee
This paper proposes a method of partitioning large data libraries into smaller sub-partitions such that a least-squares-based identification process is numerically better behaved. An example from a well-known remote sensing spectral library is used to illustrate various seed strategies for the partitioning as well as various assignment strategies. In the example shown, the seed strategy is relatively unimportant for a library of this size, but there is a substantial improvement in least-squares performance with SVD-based partitioning for both point and interval estimates. Several context-dependent variants of this strategy are also proposed.
Title: "A practical strategy for spectral library partitioning and least-squares identification" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 376-379.
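The identify-by-sub-library idea can be sketched generically. The partition below is an arbitrary split of a synthetic library, not the paper's SVD-based seeding; the SVD only appears here through each sub-library's condition number, which is the quantity such partitioning aims to improve.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bands, n_spectra = 100, 60
A = rng.standard_normal((n_bands, n_spectra))    # stand-in spectral library

# Hypothetical partition: three sub-libraries of 20 spectra each
parts = np.array_split(np.arange(n_spectra), 3)
# Per-sub-library conditioning, exposed by the SVD (via np.linalg.cond)
conds = [np.linalg.cond(A[:, p]) for p in parts]

# A measurement mixing two members of sub-library 1
x_true = np.zeros(n_spectra)
x_true[parts[1][0]], x_true[parts[1][3]] = 0.7, 0.3
y = A @ x_true

# Identify by solving least squares per sub-library and keeping the best fit
best = min(parts, key=lambda p: np.linalg.lstsq(A[:, p], y, rcond=None)[1].sum())
assert np.array_equal(best, parts[1])
```

Smaller sub-libraries keep each least-squares problem overdetermined and better conditioned, which is the numerical motivation the abstract states.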
Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369557
S. Wood, E. Fontenla, Christopher A. Metzler, W. Chiu, Richard Baraniuk
Cryo-electron tomography (cryo-ET), which produces three-dimensional images at molecular resolution, is one of many applications that require image reconstruction from projection measurements acquired with irregular measurement geometry. Although Fourier-transform-based reconstruction methods have been widely and successfully used in medical imaging for over 25 years, assumptions of regular measurement geometry and a band-limited source cause direction-sensitive artifacts when applied to cryo-ET. Iterative space-domain methods such as compressed sensing could be applied to this severely underdetermined system with a limited range of projection angles and projection length, but progress has been hindered by the computational and storage requirements of the very large projection matrix of observation partials. In this paper we derive a method of dynamically computing the elements of the projection matrix accurately for continuous basis functions of limited extent with arbitrary beam width. Storage requirements are reduced by a factor on the order of 10^7 and there is no access overhead. This approach for limited-angle and limited-view measurement geometries is poised to enable dramatically improved reconstruction performance and is easily adapted to parallel computing architectures.
Title: "Dynamic model generation for application of compressed sensing to cryo-electron tomography reconstruction" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 226-231.
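The core idea, computing projection-matrix entries on the fly instead of storing them, can be shown with a deliberately tiny toy: two axis-aligned parallel-beam views of an n x n image. The paper's operator handles continuous basis functions and arbitrary beam widths; this sketch only demonstrates the matrix-free pattern.

```python
import numpy as np

n = 16                                  # toy image is n x n
img = np.arange(n * n, dtype=float)

# Matrix-free forward projector: two axis-aligned parallel-beam views,
# computed directly from the image instead of through a stored matrix.
def forward(x):
    im = x.reshape(n, n)
    return np.concatenate([im.sum(axis=0), im.sum(axis=1)])

# For comparison, materialize the same operator as an explicit 2n x n^2 matrix
A = np.zeros((2 * n, n * n))
for j in range(n * n):
    e = np.zeros(n * n)
    e[j] = 1.0
    A[:, j] = forward(e)

# Identical results; the dense matrix stores 2n * n^2 entries, the function none
assert np.allclose(A @ img, forward(img))
```

Iterative solvers such as compressed-sensing reconstructions only ever need the products A @ x and A.T @ y, so replacing the stored matrix with such functions trades a large memory footprint for recomputation, which also parallelizes naturally.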
Pub Date: 2015-08-01 · DOI: 10.1109/DSP-SPE.2015.7369527
A. Spanias
Signal processing algorithms and DSP chips are embedded in nearly every application that involves natural signal or data analysis and/or synthesis. Applications of digital signal processing (DSP) in engineering include electrical, mechanical, chemical, industrial, and biomedical systems. Applications in other areas include entertainment, finance, health, computing, and manufacturing, to name a few. At ASU we developed an elective course for an undergraduate program called Digital Culture that includes gaming, smart stages, computer music, visualization, and other applications. We offered the course online to arts majors in 2013. We began adding multidisciplinary application content to this course and offered it again in 2015 as a hybrid online course with compulsory weekly on-campus sessions. Arrangements are being made to include it as an elective course in information management systems, computer informatics, mechanical engineering, and biomedical informatics. The course now includes several introductory topics in signal processing, covered mostly at a qualitative and block-diagram level; we added several simulations in MATLAB and in Java-DSP. The course covers the basics of DSP, starting from time- and frequency-domain analysis and sampling. It then covers digital FIR and IIR filters and the FFT. About one third of the course covers applications, which introduce qualitative descriptions of some advanced topics. For example, linear prediction and coding of speech are described at the block-diagram level with MATLAB and Java simulations. Extensions to 2-D signal processing are covered as well, with a focus on JPEG and MPEG applications. The syllabus, simulations, and preliminary assessments of this course are presented in the paper.
Title: "An introductory signal processing course offered across the curriculum" · 2015 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE), pp. 55-58.