Bidirectional Motion Estimation Approach Using Warping Mesh Combined to Frame Interpolation
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775707
V. Muñoz-Jiménez, A. Mokraoui-Zergainoh, J. Astruc
This paper concentrates on the bidirectional motion estimation problem for applications at very low bit rates. The missing frames in the original video sequence are predicted by the decoder using only the received frames, with no additional information. The selected moving objects in each decoded frame are initially meshed with quadrilateral blocks, which are then deformed using specific warping functions. The positions of the mesh nodes are adapted to the objects' edges so that the reconstruction error is as small as possible. The displacements of these nodes are then used to predict those of the moving objects in the missing frames. Finally, the meshed objects are reconstructed from the predicted nodes. The proposed approach is integrated into the H.264/AVC video coding standard. Simulation results demonstrate the performance of the proposed bidirectional motion estimation.
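As a minimal sketch of the interpolation step only, the snippet below linearly interpolates tracked mesh-node positions between two decoded frames to predict their positions in a missing frame; the node tracking and warping functions themselves are assumed to have been computed elsewhere, and the function name and coordinates are illustrative, not taken from the paper.

```python
# Hypothetical sketch: predict mesh-node positions in a missing frame by
# linear interpolation between the two surrounding decoded frames.
import numpy as np

def interpolate_mesh_nodes(nodes_prev, nodes_next, alpha):
    """Predict node positions at temporal position alpha in [0, 1] between the
    previous and next decoded frames, assuming a linear motion model."""
    nodes_prev = np.asarray(nodes_prev, dtype=float)  # shape (N, 2): (x, y)
    nodes_next = np.asarray(nodes_next, dtype=float)
    return (1.0 - alpha) * nodes_prev + alpha * nodes_next

# Example: nodes of one quadrilateral patch tracked across two decoded frames.
prev_nodes = [(10, 10), (26, 10), (10, 26), (26, 26)]
next_nodes = [(14, 12), (30, 12), (14, 28), (30, 28)]
print(interpolate_mesh_nodes(prev_nodes, next_nodes, 0.5))
```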
{"title":"Bidirectional Motion Estimation Approach Using Warping Mesh Combined to Frame Interpolation","authors":"V. Muñoz-Jiménez, A. Mokraoui-Zergainoh, J. Astruc","doi":"10.1109/ISSPIT.2008.4775707","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775707","url":null,"abstract":"This paper concentrates on the bidirectional motion estimation problem for applications at very low bit rate. The missing frames in the original video sequence are predicted by the decoder using only the received frames with not additional information. The same selected moving objects on each decoded frames, are initially meshed from quadrilateral blocks which are then deformed using specific warping functions. The positions of the mesh nodes are adapted to the object's edges in such a way that the reconstruction error is as small as possible. Afterwards, the position displacements of these nodes are used to predict those of the moving objects in the missing frames. Finally, the meshed objects are reconstructed thanks to the predicted nodes. The proposed approach is integrated in the H.264/AVC video coding standard. Simulation results present the performance of the proposed bidirectional motion estimation.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131974623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Watermarking Via Bspline Expansion and Natural Preserving Transforms
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775687
M. Fahmy, G. Raheem, O. S. Mohammed, O. Fahmy, G. Fahmy
In this paper, two approaches are proposed for digital image watermarking. In the first approach, we embed all the watermark information in the approximation coefficients of the host image's wavelet decomposition. This is achieved by combining a weighted least-squares B-spline coefficient expansion of the watermark image with the host's approximation coefficients. To make the size of the B-spline expansion less than or equal to the size of the host's approximation matrix, the watermark image has to be decimated. The second approach relies on applying natural preserving transforms (NPT) in a symmetrical manner to the host image. In this case, the logo or the secret key replaces some of the host image's bottom lines. After applying the NPT, the original bottom lines replace the watermarked ones so that the host image looks natural. A novel fast least-squares algorithm is proposed for watermark extraction. Illustrative examples are given to show the effectiveness of these methods. The results show that the proposed B-spline data hiding technique is robust to compression, and that the watermark can be extracted from any NPT-watermarked image.
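The sketch below illustrates only the general idea of embedding a decimated watermark into the approximation sub-band of a wavelet decomposition, using a plain scaled addition in place of the paper's weighted least-squares B-spline expansion; the function name, the weight alpha, and the crop used as a stand-in for decimation are assumptions.

```python
# Hedged sketch: additive embedding of a (cropped) watermark into the
# approximation coefficients of a wavelet decomposition of the host image.
import numpy as np
import pywt

def embed_in_approximation(host, watermark, alpha=0.05, wavelet="haar"):
    cA, details = pywt.dwt2(host.astype(float), wavelet)
    # Crop the watermark to the approximation-matrix size (a crude stand-in
    # for the decimation step described in the paper).
    wm = watermark[:cA.shape[0], :cA.shape[1]].astype(float)
    cA_marked = cA + alpha * wm
    return pywt.idwt2((cA_marked, details), wavelet)

host = np.random.rand(128, 128)
logo = np.random.rand(64, 64)
marked = embed_in_approximation(host, logo)
print(marked.shape)
```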
{"title":"Watermarking Via Bspline Expansion and Natural Preserving Transforms","authors":"M. Fahmy, G. Raheem, O. S. Mohammed, O. Fahmy, G. Fahmy","doi":"10.1109/ISSPIT.2008.4775687","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775687","url":null,"abstract":"In this paper, two approaches are proposed for digital image watermarking. In the first approach, we rely on embedding all the watermarking information in the approximation coefficients of the host's image wavelet decomposition. This is achieved by combining a weighted least squares Bspline coefficient expansion of the watermarking image, to the host's approximation coefficients. In order to make the size of Bspline expansion less or equal to the size of the host's approximation matrix, the watermarking image has to be decimated. The second approach relies on applying natural preserving transforms NPT, in a symmetrical manner to the host's image. In this case, the logo or the secret key replaces some of the host's image bottom lines. After applying NPT, the original host image bottom lines, replace the watermarked ones to make the host image looks natural. A novel fast least squares algorithm is proposed for watermark extraction. Illustrative examples are given, to show the effectiveness of these methods. Thes results show that the proposed Bspline data hiding technique is robust to compression, as well as the abilities of watermark extraction of any NPT watermarked images.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133834277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Fast Symbolic Image Indexing and Retrieval Method Based On TSR and Linear Hashing
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775701
M. S. Yazdi, K. Najafzade, M. Moghaddam
In recent years, image databases have grown rapidly; hence there is a real need for fast indexing and retrieval methods for image databases. In this paper, we propose an approach for fast image indexing and retrieval in symbolic image databases using triangular spatial relations (TSR). The indexing data structure is based on a newly introduced structure and hash function. To obtain O(1) time complexity, linear hashing with a constant load factor is used. The experimental results were promising.
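A minimal sketch of the idea follows: symbolic images are keyed by a triangle-based spatial signature and stored in a hash structure for constant-time lookup. A Python dict stands in for the paper's linear-hashing structure, and the signature below is an illustrative simplification, not the paper's exact TSR definition.

```python
# Hedged sketch: index symbolic images by a quantized triangle signature of
# their labelled objects; lookup is expected O(1) via hashing.
from itertools import combinations
import math

def triangle_signature(objects):
    """objects: dict label -> (x, y). Builds a hashable signature from sorted
    label triples and coarsely quantized triangle side-length ratios."""
    sig = []
    for (la, pa), (lb, pb), (lc, pc) in combinations(sorted(objects.items()), 3):
        d = sorted([math.dist(pa, pb), math.dist(pb, pc), math.dist(pa, pc)])
        ratios = (round(d[0] / d[2], 1), round(d[1] / d[2], 1))
        sig.append((la, lb, lc, ratios))
    return tuple(sig)

index = {}  # signature -> list of image ids

def insert(image_id, objects):
    index.setdefault(triangle_signature(objects), []).append(image_id)

def query(objects):
    return index.get(triangle_signature(objects), [])

insert("img1", {"tree": (0, 0), "house": (4, 0), "car": (2, 3)})
print(query({"tree": (0, 0), "house": (4, 0), "car": (2, 3)}))
```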
{"title":"A Fast Symbolic Image Indexing and Retrieval Method Based On TSR and Linear Hashing","authors":"M. S. Yazdi, K. Najafzade, M. Moghaddam","doi":"10.1109/ISSPIT.2008.4775701","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775701","url":null,"abstract":"In recent years, image databases have grown faster; hence there are a real need for fast indexing and retrieval methods in image databases. In this paper, we proposed an approach for fast image indexing and retrieval in symbolic image databases using triangular spatial relations (TSR). The indexing data structure is based on a new introduced structure and hash function. To obtain the time complexity O(1); the linear hashing was used that has constant load factor. The experimental results were great.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124093554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Network's k-means Distance-Based Nodes-Clustering for Enhanced RDMAR Protocol in a MANET
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775666
O. F. Hamad, Mikyung Kang, Jin-Han Jeon, Ji-Seung Nam
A k-means distance-based node-clustering technique is proposed to enhance the performance of the RDMAR protocol in a Mobile Ad-hoc NETwork (MANET). To limit the flood search to a circular local area around the source, the Relative Distance Micro-discovery Ad Hoc Routing (RDMAR) protocol uses the Relative Distance (RD). If the flood-discovery distance is further limited by clustering nodes with similar characteristics into one group, separate from the group of dissimilar nodes, the performance of the RDMAR implementation can be improved. The k-means algorithm, similar to the one used for unsupervised learning in pattern classification, can be applied recursively to re-classify the clusters as the MANET environment, resource availability, and node demands change. This technique can be especially effective in a MANET with comparatively moderate dynamicity, slowly changing node demands, and dense groups of nodes concentrated in given sub-areas.
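As an illustration of the clustering step only (the RDMAR relative-distance mechanics are not modelled), the sketch below groups node positions with k-means and restricts a hypothetical route-discovery flood to the source's own cluster; the node count, area size, and cluster count are placeholders.

```python
# Hedged sketch: cluster MANET node positions with k-means and confine the
# route-discovery flood to the cluster containing the source node.
import numpy as np
from sklearn.cluster import KMeans

node_positions = np.random.rand(50, 2) * 1000.0     # 50 nodes in a 1 km square
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(node_positions)

source = 7                                           # index of the source node
same_cluster = np.where(kmeans.labels_ == kmeans.labels_[source])[0]
print("flood route discovery only to nodes:", same_cluster)
```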
{"title":"Neural Network's k-means Distance-Based Nodes-Clustering for Enhanced RDMAR Protocol in a MANET","authors":"O. F. Hamad, Mikyung Kang, Jin-Han Jeon, Ji-Seung Nam","doi":"10.1109/ISSPIT.2008.4775666","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775666","url":null,"abstract":"k-means distance-based nodes clustering technique proposed enhance the performance of RDMAR protocol in a Mobile Ad-hoc NETwork (MANET). To limit the flood search to just a circular local area around the source, the Relative Distance Micro-discovery Ad Hoc Routing (RDMAR) protocol uses the Relative Distance (RD). If the distance of flood discovery is further limited by clustering the nodes with similar characters in to one group, different from the dissimilar characters' group, the performance of the RDMAR implementation can be elevated. The k-means algorithm, similar to the one in unsupervised learning in pattern classification, can be recursively applied to re-classify the clusters as the MANET environment, resource availability, and node demands change. This technique can be more effective in a MANET with comparatively moderate change of the dynamicity and slow change in nodes' demands plus highly accumulated groups of nodes at given sub-areas.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128481706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Golomb Codes For Level Binarization In The H.264/AVC FRExt Lossless Mode
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775706
D. Bardone, Elias S. G. Carotti, J. de Martin
Fidelity Range Extensions (FRExt) is an H.264/AVC amendment that provides enhanced coding tools and the possibility to perform high-resolution and lossless video encoding. However, most efforts on lossless coding in the H.264/AVC framework have concentrated on improving the prediction step while leaving the entropy coder, CABAC, which was originally designed for lossy coding, unaltered. If transformation and quantization of the corresponding coefficients are not performed, as is the case in the FRExt lossless coding mode, CABAC becomes sub-optimal. In this paper we show how considerable improvements in compression ratio can be achieved with simple modifications of the CABAC engine. The proposed technique was tested on a set of 4:4:4 test sequences, achieving gains of up to 12.80% with respect to the original unmodified H.264/AVC algorithm.
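To make the binarization idea concrete, here is a small sketch of Golomb-Rice coding of residual levels with a simple running-mean adaptation of the Rice parameter. The adaptation rule and the signed-to-unsigned mapping are generic illustrations, not the paper's exact modification of the CABAC binarizer.

```python
# Hedged sketch: adaptive Golomb-Rice binarization of residual levels.
def zigzag(v):
    """Map a signed level to an unsigned integer (0, -1, 1, -2 -> 0, 1, 2, 3, ...)."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_code(value, k):
    """Unary-coded quotient followed by a k-bit binary remainder, as a bit string."""
    q, r = value >> k, value & ((1 << k) - 1)
    remainder = format(r, f"0{k}b") if k > 0 else ""
    return "1" * q + "0" + remainder

def encode_levels(levels):
    bits, mean = [], 1.0
    for v in levels:
        m = zigzag(v)
        k = max(int(mean).bit_length() - 1, 0)   # Rice parameter from the running mean
        bits.append(rice_code(m, k))
        mean = 0.9 * mean + 0.1 * max(m, 1)      # simple exponential adaptation
    return "".join(bits)

print(encode_levels([0, 3, -2, 7, 1, 0, -5]))
```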
{"title":"Adaptive Golomb Codes For Level Binarization In The H.264/AVC FRExt Lossless Mode","authors":"D. Bardone, Elias S. G. Carotti, J. de Martin","doi":"10.1109/ISSPIT.2008.4775706","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775706","url":null,"abstract":"Fidelity Range Extensions (FRExt) is an H.264/AVC amendment which provides enhanced coding tools and the possibility to perform high resolution and lossless video encoding. However, most of the efforts for lossless coding in the H.264/AVC framework have been concentrated on improving the prediction step while leaving the entropy coder; CABAC, originally designed for lossy coding, unaltered. However, if transformation and quantization of the corresponding coefficients are not performed, as is the case of the lossless coding mode for FRExt, CABAC becomes sub-optimal. In this paper we show how considerable improvements in compression ratios can be achieved with simple modifications of the CABAC engine. The proposed technique was tested on a set of 4:4:4 test sequences, achieving gains of up to 12.80% with respect to the original unmodified H.264/AVC algorithm.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117034481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incremental Pattern Recognition on EEG Signal
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775709
Kam Swee Ng, Hyung-Jeong Yang, Sun-Hee Kim, Jong-Mun Jeong
EEG-based brain-computer interfaces provide a new communication pathway between the human brain and the computer. They can be used by disabled users to interact through a computer interface, and can also be used to control muscle movement. In this paper, we show that meaningful information can be extracted from EEG signals through an incremental approach. We apply principal component analysis incrementally to recognize patterns in series of EEG data consisting of actual and imagined limb movements. Our experiments show that the approach is promising, especially for time-series data, because it works incrementally.
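As an illustration of the incremental PCA idea, the sketch below updates its components batch by batch with scikit-learn's IncrementalPCA instead of requiring the whole recording at once; the channel count, batch size, component count, and random data are placeholders, not the paper's dataset or parameters.

```python
# Hedged sketch: incremental PCA feature extraction on streaming EEG batches.
import numpy as np
from sklearn.decomposition import IncrementalPCA

n_channels, batch_size = 32, 200
ipca = IncrementalPCA(n_components=8)

for _ in range(10):                          # 10 incoming batches of EEG samples
    batch = np.random.randn(batch_size, n_channels)
    ipca.partial_fit(batch)                  # update the principal components

new_samples = np.random.randn(50, n_channels)
features = ipca.transform(new_samples)       # 8-dimensional features per sample
print(features.shape)
```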
{"title":"Incremental Pattern Recognition on EEG Signal","authors":"Kam Swee Ng, Hyung-Jeong Yang, Sun-Hee Kim, Jong-Mun Jeong","doi":"10.1109/ISSPIT.2008.4775709","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775709","url":null,"abstract":"EEG based brain computer interface has provided a new communication pathway between the human brain and the computer. It can be used for handicap or disabled users to interact with human using the computer interface. It can also be used in controlling human's muscles movement. In this paper, we show that meaningful information can be extracted from EEG signal through incremental approach. We applied principal component analysis incrementally which recognizes patterns in the series of EEG data that consists of actual and imaginary limb movements. Our experiments have proven that the approach is promising especially in time series data because it works incrementally.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121652642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Click Fraud Detection by Real Time Data Fusion
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775655
M. Kantardzic, C. Walgampaya, B. Wenerstrom, O. Lozitskiy, S. Higgins, D. King
Click fraud is a type of Internet crime that occurs in pay-per-click online advertising when a person, automated script, or computer program imitates a legitimate user of a Web browser clicking on an ad, for the purpose of generating a charge per click without having actual interest in the target of the ad's link. Most of the available commercial solutions are merely click fraud reporting systems, not real-time click fraud detection and prevention systems. A new solution is proposed in this paper that analyzes detailed user click activity based on data collected from different sources. More information about each click enables better evaluation of the quality of the click traffic. We utilize multi-source data fusion to merge client-side and server-side activities. The proposed solution is integrated into our CCFDP V1.0 system for real-time detection and prevention of click fraud. We have tested the system with real-world data from an actual ad campaign; the results show that additional real-time information about clicks improves the quality of click fraud analysis.
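The sketch below shows only the general fusion idea: a server-side click record is merged with its client-side counterpart and flagged with simple illustrative rules (missing client events, near-zero dwell time). The field names, thresholds, and scoring are assumptions, not the CCFDP system's actual schema or rules.

```python
# Hedged sketch: merge client-side and server-side click data and flag
# suspicious clicks with toy rules.
server_clicks = {
    "c1": {"ip": "10.0.0.1", "ts": 100.0},
    "c2": {"ip": "10.0.0.2", "ts": 101.0},
}
client_events = {
    "c1": {"mouse_moves": 14, "dwell_seconds": 9.2},
    # "c2" never sent client-side telemetry
}

def fuse_and_score(click_id):
    record = dict(server_clicks[click_id])
    record.update(client_events.get(click_id, {}))       # multi-source fusion
    record["suspicious"] = (
        click_id not in client_events
        or record.get("dwell_seconds", 0.0) < 1.0
        or record.get("mouse_moves", 0) == 0
    )
    return record

for cid in server_clicks:
    print(cid, fuse_and_score(cid))
```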
{"title":"Improving Click Fraud Detection by Real Time Data Fusion","authors":"M. Kantardzic, C. Walgampaya, B. Wenerstrom, O. Lozitskiy, S. Higgins, D. King","doi":"10.1109/ISSPIT.2008.4775655","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775655","url":null,"abstract":"Click fraud is a type of Internet crime that occurs in pay per click online advertising when a person, automated script, or computer program imitates a legitimate user of a Web browser clicking on an ad, for the purpose of generating a charge per click without having actual interest in the target of the ad's link. Most of the available commercial solutions are just click fraud reporting systems, not real-time click fraud detection and prevention systems. A new solution is proposed in this paper that will analyze the detailed user click activities based on data collected form different sources. More information about each click enables better evaluation of the quality of click traffic. We utilize the multi source data fusion to merge client side and server side activities. Proposed solution is integrated in our CCFDP V1.0 system for a real-time detection and prevention of click fraud. We have tested the system with real world data from an actual ad campaign where the results show that additional real-time information about clicks improve the quality of click fraud analysis.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"279 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122504398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirical Mode Decomposition In Epileptic Seizure Prediction
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775729
A. Tafreshi, A. Nasrabadi, Amir H. Omidvarnia
In this paper, we analyze the effectiveness of Empirical Mode Decomposition (EMD) for discriminating preictal periods from interictal periods. EMD is a general signal processing method for analyzing nonlinear and nonstationary time series. Its main idea is to decompose a time series into a finite and often small number of intrinsic mode functions (IMFs). EMD is an adaptive decomposition method, since the extracted information is obtained directly from the original signal. Using this method to obtain features of interictal and preictal signals, we compare these features with traditional features such as AR model coefficients, as well as their combination, through a self-organizing map (SOM). Our results confirm that the proposed features could potentially be used to distinguish interictal from preictal data with an average success rate of up to 89.68% over 19 patients.
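A small sketch of the decomposition step follows, assuming the PyEMD package (installed as EMD-signal): an EEG segment is split into IMFs and per-IMF energies are taken as features. The per-IMF energy feature and the synthetic signal are illustrative choices; the paper additionally combines such features with AR coefficients and a self-organizing map, which is not shown here.

```python
# Hedged sketch: EMD of an EEG segment and per-IMF energy features (PyEMD).
import numpy as np
from PyEMD import EMD

fs = 256                                   # assumed sampling rate, Hz
t = np.arange(0, 4, 1.0 / fs)
segment = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

imfs = EMD().emd(segment)                  # rows are IMFs (last row may be the residue)
energies = np.sum(imfs ** 2, axis=1)       # one energy feature per IMF
print(energies / energies.sum())           # normalized IMF energy distribution
```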
{"title":"Empirical Mode Decomposition In Epileptic Seizure Prediction","authors":"A. Tafreshi, A. Nasrabadi, Amir H. Omidvarnia","doi":"10.1109/ISSPIT.2008.4775729","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775729","url":null,"abstract":"In this paper, we attempt to analyze the effectiveness of the Empirical Mode Decomposition (EMD) for discriminating epilepticl periods from the interictal periods. The Empirical Mode Decomposition (EMD) is a general signal processing method for analyzing nonlinear and nonstationary time series. The main idea of EMD is to decompose a time series into a finite and often small number of intrinsic mode functions (IMFs). EMD is an adaptive decomposition method since the extracted information is obtained directly from the original signal. By utilizing this method to obtain the features of interictal and preictal signals, we compare these features with traditional features such as AR model coefficients and also the combination of them through self-organizing map (SOM). Our results confirmed that our proposed features could potentially be used to distinguish interictal from preictal data with average success rate up to 89.68% over 19 patients.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131299606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simple Computer Vision System for Chess Playing Robot Manipulator as a Project-based Learning Example
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775676
E. Sokic, M. Ahic-Djokic
This paper presents an example of project-based learning (PBL) in an undergraduate course on image processing. The design of a simple, low-cost computer vision system for implementation on a chess-playing robot is discussed. The system is based on a standard CCD camera and a personal computer. The project is a good tool for learning most of the course material that would otherwise be mastered through homework problems and study before an exam. An algorithm that detects chess moves is proposed. It compares two or more frames captured before, during, and after a played chess move, and finds the differences between them, which are used to identify the played move. Further image processing is required to eliminate false readings, recognize the direction of chess moves, and eliminate image distortion. Many image processing problems and solutions can be introduced to students through the proposed algorithm. The results are encouraging: students without any previous knowledge of image processing or advanced topics such as artificial intelligence (e.g., neural networks) can attain a chess-move recognition success rate greater than 95% in controlled lighting environments.
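A minimal sketch of the frame-differencing idea follows: the board images before and after a move are differenced, the change is averaged inside each of the 64 squares, and the two most-changed squares are taken as source and destination. The lighting compensation and distortion correction described in the paper are omitted, and the image sizes are placeholders.

```python
# Hedged sketch: detect a chess move by per-square frame differencing.
import numpy as np

def detect_move(before, after, board_size=8):
    """before/after: grayscale board images of identical shape, already cropped
    and rectified so the 8x8 squares tile the image evenly."""
    diff = np.abs(after.astype(float) - before.astype(float))
    h, w = diff.shape[0] // board_size, diff.shape[1] // board_size
    change = np.array([[diff[r*h:(r+1)*h, c*w:(c+1)*w].mean()
                        for c in range(board_size)] for r in range(board_size)])
    rows, cols = np.unravel_index(np.argsort(change, axis=None)[-2:], change.shape)
    return list(zip(rows.tolist(), cols.tolist()))   # the two squares of the move

before = np.zeros((160, 160)); after = before.copy()
before[0:20, 0:20] = 255       # piece on its source square before the move
after[40:60, 40:60] = 255      # the same piece on its destination square afterwards
print(detect_move(before, after))
```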
{"title":"Simple Computer Vision System for Chess Playing Robot Manipulator as a Project-based Learning Example","authors":"E. Sokic, M. Ahic-Djokic","doi":"10.1109/ISSPIT.2008.4775676","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775676","url":null,"abstract":"This paper presents an example of project-based learning (PBL) in an undergraduate course on Image processing. The design of a simple, low-cost computer vision system for implementation on a chess-playing capable robot is discussed. The system is based on a standard CCD camera and a personal computer. This project is a good tool for learning most of the course material that would otherwise be mastered by homework problems and study before an exam. An algorithm which detects chess moves is proposed. It compares two or more frames captured before, during and after a played chess move, and finds differences between them, which are used to define a played chess move. Further image processing is required to eliminate false readings, recognize direction of chess moves, end eliminate image distortion. Many Image processing problems and solutions can be introduced to students, through the proposed algorithm. The results are encouraging - students without any previous knowledge in image processing and advanced topics, such as artificial intelligence (neural networks etc.), may attain a chess move recognition success rate greater than 95%, in controlled light environments.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128302692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Approach to Minimum Phase Prefiltering of Short Length Filters
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775693
S. Krishnamurthy
Motivations are presented for performing root-finding-based prefiltering in an interference-canceling receiver employing a reduced-state equalizer such as DFSE or RSSE. Since the interference-canceling filter (ICF) has an MMSE-DFE structure that shortens the channel impulse response (CIR), a low-complexity minimum-phase prefilter can be applied before equalization. Root-finding-based prefiltering for second-order filters is of particular interest, since closed-form solutions can be obtained with fewer computations. For such second-order filters, the CIR can be classified as minimum, mixed, or maximum phase based on a few inequalities that directly use the complex-valued channel coefficients. The proposed inequalities help retain maximum accuracy at low complexity by avoiding some of the approximation algorithms involved in root identification on a DSP. While samples corresponding to minimum- and maximum-phase channels are processed directly, root finding is employed only to transform mixed-phase channels into their minimum-phase equivalents.
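For reference, the sketch below shows the explicit root-based way to classify a second-order channel as minimum, mixed, or maximum phase and to reflect the out-of-circle zero of a mixed-phase channel inside the unit circle; the paper's contribution is a set of coefficient inequalities that avoids this explicit root finding, which is not reproduced here, and the gain scaling of the converted filter is omitted.

```python
# Hedged sketch: root-based phase classification of a 2nd-order FIR channel
# h(z) = h0 + h1*z^-1 + h2*z^-2, with a minimum-phase conversion for the
# mixed-phase case.
import numpy as np

def classify_and_minphase(h):
    """h = [h0, h1, h2]: complex channel taps."""
    h = np.asarray(h, dtype=complex)
    zeros = np.roots(h)                       # zeros of h0*z^2 + h1*z + h2
    inside = np.abs(zeros) < 1.0
    if inside.all():
        return "minimum phase", h
    if not inside.any():
        return "maximum phase", h
    # Mixed phase: reflect the out-of-circle zero to 1/conj(zero), then rebuild
    # the taps (scaling to match the original magnitude response is omitted).
    zeros = np.where(inside, zeros, 1.0 / np.conj(zeros))
    return "mixed phase (converted)", h[0] * np.poly(zeros)

print(classify_and_minphase([1.0, -2.5 + 0.5j, 1.0]))
```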
{"title":"An Efficient Approach to Minimum Phase Prefiltering of Short Length Filters","authors":"S. Krishnamurthy","doi":"10.1109/ISSPIT.2008.4775693","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775693","url":null,"abstract":"Motivations for performing prefiltering based on root finding are presented, for an interference canceling receiver employing a reduced-state equalizer such as DFSE or RSSE. Since the interference canceling filter (ICF) has a MMSE-DFE structure, that shortens the channel impulse response (CIR), a low complexity minimum phase prefilter can be applied before equalization. Root-finding based prefiltering for second order filters are of particular interest, since closed form solutions can be obtained with less computations. For such second order filters, CIR can be classified as minimum, mixed or maximum phase, based on few inequalities which directly use the complex-valued channel coefficients. Proposed inequalities help in retaining maximum accuracy with low complexity, by avoiding some approximation algorithms involved in root identification on DSP. While samples corresponding to minimum and maximum phase channels are processed directly, root-finding is employed only to transform the mixed phase channels to their minimum phase equivalents.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130400740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}