Anisotropic Diffusion for Preservation of Line-edges
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775703
HyeSuk Kim, Gihong Kim, Gueesang Lee, June-Young Chang, Hanjin Cho
In existing approaches, diffusion is performed in four directions (north, south, east, west) without further conditions; as a result, these methods distort edges in the presence of impulse noise. In this paper, a new anisotropic diffusion based on the directions of line-edges is proposed to preserve line-edges while removing noise. In the proposed method, an edge detection mask is used to find the direction of a line-edge: when the magnitude of the edge response is large enough, a line-edge is present. In that case, the diffusion weights are selected adaptively according to the direction of the line-edge. The diffusion operates over eight directions, with emphasis on the line-edge direction. Experimental results show that the proposed method eliminates noise while preserving the contours of line-edges.
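As a concrete illustration of direction-weighted diffusion, the following Python sketch damps diffusion across strong edges so that smoothing runs along the detected line-edge direction. It is not the authors' exact scheme: the Perona-Malik conductance, the gradient-based edge test, and the parameters `kappa`, `lam`, and `edge_thresh` are all illustrative assumptions.

```python
import numpy as np

def edge_adaptive_diffusion(img, iters=10, kappa=20.0, lam=0.1, edge_thresh=50.0):
    """Direction-weighted 8-neighbour diffusion (illustrative sketch)."""
    img = img.astype(np.float64)
    # The 8 neighbour offsets (N, S, W, E, and the four diagonals).
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1),
               (-1, -1), (-1, 1), (1, -1), (1, 1)]
    for _ in range(iters):
        gy = np.gradient(img, axis=0)            # gradient = edge normal
        gx = np.gradient(img, axis=1)
        mag = np.hypot(gx, gy)
        update = np.zeros_like(img)
        for dy, dx in offsets:
            diff = np.roll(img, (-dy, -dx), axis=(0, 1)) - img
            c = np.exp(-(diff / kappa) ** 2)     # Perona-Malik conductance
            # How strongly this neighbour direction crosses the edge
            # (1 = along the gradient, i.e. across the edge; 0 = along it).
            n = np.array([dy, dx], float)
            n /= np.linalg.norm(n)
            across = np.abs(gy * n[0] + gx * n[1]) / (mag + 1e-9)
            # On strong edges, favour flow along the line-edge direction.
            w = np.where(mag > edge_thresh, 1.0 - across, 1.0)
            update += c * w * diff
        img += lam * update
    return img
```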
{"title":"Anisotropic Diffusion for Preservation of Line-edges","authors":"HyeSuk Kim, Gihong Kim, Gueesang Lee, June-Young Chang, Hanjin Cho","doi":"10.1109/ISSPIT.2008.4775703","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775703","url":null,"abstract":"In existing approaches, diffusion is performed in four directions (North, South, East, West) without specific conditions. Therefore, these methods have shortcomings of distorted with the existence of impulse noises. In this paper, a new anisotropic diffusion based on directions of line-edges is proposed to enhance preservation of line-edges together with removal of noises. In the proposed method, an edge detection mask is used to find the direction of a line-edge. As a result, when the magnitude of edge detection is large enough, there exists a line-edge. In the case of a line-edge, the weight of diffusion is selected adaptively according to the direction of the line-edge. The diffusion is based on 8-directions diffusion with emphasis on the line-edge direction. Experimental results show that the proposed method can eliminate noise while preserving contour of line-edges.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125640704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Implementation of the Blowfish Cryptosystem
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775664
R. Meyers, A. Desoky
The Blowfish cryptosystem is a fast and useful scheme, even though it was introduced over a decade ago. The cryptosystem consists of two parts: a subkey and S-box generation phase, and an encryption phase. A short introduction to both algorithms is given, along with a few notes about the Cipher Block Chaining (CBC) mode. General information about attacks is presented, along with an overview of work on analyzing and attempting to break Blowfish. A Windows tool for encrypting files with Blowfish is also examined in this paper. The results of the encryption tool clearly demonstrate how fast the encryption is compared with the subkey and S-box generation. The secrecy of the cryptosystem is examined using several test files of different types, together with a study of security with respect to the number of rounds. Finally, possible extensions to the software tool that build on the strength of Blowfish are suggested.
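The paper's Windows tool is not reproduced here, but a minimal sketch of Blowfish file encryption in CBC mode using the PyCryptodome library shows the shape of the encryption phase; the key size, the PKCS#7 padding choice, and the file layout (IV prepended to the ciphertext) are illustrative assumptions.

```python
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

BLOCK = Blowfish.block_size  # 8 bytes

def encrypt_file(src: str, dst: str, key: bytes) -> None:
    # Sketch: IV-prepended layout and PKCS#7 padding are assumptions.
    iv = get_random_bytes(BLOCK)
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
    with open(src, "rb") as f:
        plaintext = f.read()
    with open(dst, "wb") as f:
        f.write(iv + cipher.encrypt(pad(plaintext, BLOCK)))

def decrypt_file(src: str, dst: str, key: bytes) -> None:
    with open(src, "rb") as f:
        iv, ciphertext = f.read(BLOCK), f.read()
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
    with open(dst, "wb") as f:
        f.write(unpad(cipher.decrypt(ciphertext), BLOCK))

# Example: key = get_random_bytes(16)   # Blowfish accepts 4-56 byte keys
#          encrypt_file("report.txt", "report.enc", key)
```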
{"title":"An Implementation of the Blowfish Cryptosystem","authors":"R. Meyers, A. Desoky","doi":"10.1109/ISSPIT.2008.4775664","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775664","url":null,"abstract":"The Blowfish cryptosystem is a very fast and useful scheme, even though it was introduced over a decade ago. This cryptosystem consists of two parts, a subkey and S-box generation phase, and an encrypiton phase. A short introduction to both algorithms are given, along with a few notes about the Ciphertext Block Chaining (CBC) mode. Some general information about attacks are explained, along with information about some of the people who have worked to analyze and attempt to break Blowfish. An implementation of a Windows tool for encrypting files which uses Blowfish is also examined in this paper. The results of the encryption tool clearly demonstrate how fast the encryption is compared to the subkey and S-box generation. The secrecy of the cryptosystem is explained by using several test files of different types, as well as a study of the security with respect to the number of rounds. Finally, some possible extensions to the software tool to improve its usefulness based on the strength of Blowfish are given.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114282274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Fast Symbolic Image Indexing and Retrieval Method Based On TSR and Linear Hashing
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775701
M. S. Yazdi, K. Najafzade, M. Moghaddam
In recent years, image databases have grown rapidly; hence there is a real need for fast indexing and retrieval methods for them. In this paper, we propose an approach for fast image indexing and retrieval in symbolic image databases using triangular spatial relations (TSR). The indexing data structure is based on a newly introduced structure and hash function. To obtain O(1) time complexity, linear hashing, which has a constant load factor, is used. The experimental results are promising.
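As a rough sketch of hash-based symbolic indexing, the following Python uses a sorted triple of object labels as the bucket key. This is a strong simplification of the paper's TSR keys, which also encode the triangle's spatial relationships, and Python's built-in dict stands in for the linear hashing scheme with constant load factor.

```python
from collections import defaultdict
from itertools import combinations

class SymbolicImageIndex:
    """Toy index: each image is a set of symbolic object labels."""

    def __init__(self):
        self._buckets = defaultdict(set)          # key -> image ids

    @staticmethod
    def _keys(labels):
        # One key per triple of objects; a real TSR key would also
        # encode the triangle's spatial relations, not just the labels.
        return [tuple(sorted(t)) for t in combinations(set(labels), 3)]

    def add(self, image_id, labels):
        for key in self._keys(labels):
            self._buckets[key].add(image_id)

    def query(self, labels):
        keys = self._keys(labels)
        if not keys:
            return set()
        # Each probe is an average O(1) hash lookup.
        return set.intersection(*(self._buckets.get(k, set()) for k in keys))

idx = SymbolicImageIndex()
idx.add("img1", ["lake", "tree", "house"])
idx.add("img2", ["car", "tree", "house"])
print(idx.query(["lake", "tree", "house"]))       # {'img1'}
```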
{"title":"A Fast Symbolic Image Indexing and Retrieval Method Based On TSR and Linear Hashing","authors":"M. S. Yazdi, K. Najafzade, M. Moghaddam","doi":"10.1109/ISSPIT.2008.4775701","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775701","url":null,"abstract":"In recent years, image databases have grown faster; hence there are a real need for fast indexing and retrieval methods in image databases. In this paper, we proposed an approach for fast image indexing and retrieval in symbolic image databases using triangular spatial relations (TSR). The indexing data structure is based on a new introduced structure and hash function. To obtain the time complexity O(1); the linear hashing was used that has constant load factor. The experimental results were great.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124093554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Network's k-means Distance-Based Nodes-Clustering for Enhanced RDMAR Protocol in a MANET
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775666
O. F. Hamad, Mikyung Kang, Jin-Han Jeon, Ji-Seung Nam
A k-means distance-based node-clustering technique is proposed to enhance the performance of the RDMAR protocol in a Mobile Ad hoc NETwork (MANET). To limit the flood search to a circular local area around the source, the Relative Distance Micro-discovery Ad Hoc Routing (RDMAR) protocol uses the Relative Distance (RD). If the flood-discovery distance is further limited by clustering nodes with similar characteristics into one group, separate from groups with dissimilar characteristics, the performance of the RDMAR implementation can be improved. The k-means algorithm, as used in unsupervised learning for pattern classification, can be applied recursively to re-classify the clusters as the MANET environment, resource availability, and node demands change. The technique is most effective in a MANET with comparatively moderate dynamicity, slowly changing node demands, and highly concentrated groups of nodes in given sub-areas.
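A minimal k-means sketch over node coordinates illustrates the clustering step. The 2-D position features, the value of k, and the synthetic node positions are illustrative assumptions; the paper additionally folds in resource availability and node demands.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's algorithm over node feature vectors."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each node to its nearest cluster center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            points[labels == j].mean(axis=0) if (labels == j).any() else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Synthetic node (x, y) positions for illustration.
nodes = np.random.default_rng(1).uniform(0, 100, size=(40, 2))
labels, centers = kmeans(nodes, k=4)   # re-run as the topology changes
```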
{"title":"Neural Network's k-means Distance-Based Nodes-Clustering for Enhanced RDMAR Protocol in a MANET","authors":"O. F. Hamad, Mikyung Kang, Jin-Han Jeon, Ji-Seung Nam","doi":"10.1109/ISSPIT.2008.4775666","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775666","url":null,"abstract":"k-means distance-based nodes clustering technique proposed enhance the performance of RDMAR protocol in a Mobile Ad-hoc NETwork (MANET). To limit the flood search to just a circular local area around the source, the Relative Distance Micro-discovery Ad Hoc Routing (RDMAR) protocol uses the Relative Distance (RD). If the distance of flood discovery is further limited by clustering the nodes with similar characters in to one group, different from the dissimilar characters' group, the performance of the RDMAR implementation can be elevated. The k-means algorithm, similar to the one in unsupervised learning in pattern classification, can be recursively applied to re-classify the clusters as the MANET environment, resource availability, and node demands change. This technique can be more effective in a MANET with comparatively moderate change of the dynamicity and slow change in nodes' demands plus highly accumulated groups of nodes at given sub-areas.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128481706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Golomb Codes For Level Binarization In The H.264/AVC FRExt Lossless Mode
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775706
D. Bardone, Elias S. G. Carotti, J. de Martin
Fidelity Range Extensions (FRExt) is an H.264/AVC amendment that provides enhanced coding tools and the possibility of high-resolution and lossless video encoding. However, most efforts on lossless coding in the H.264/AVC framework have concentrated on improving the prediction step while leaving the entropy coder, CABAC, originally designed for lossy coding, unaltered. If transformation and quantization of the coefficients are not performed, as is the case in the FRExt lossless coding mode, CABAC becomes sub-optimal. In this paper we show how considerable improvements in compression ratio can be achieved with simple modifications of the CABAC engine. The proposed technique was tested on a set of 4:4:4 test sequences, achieving gains of up to 12.80% with respect to the original, unmodified H.264/AVC algorithm.
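To make the binarization idea concrete, the sketch below encodes residual levels with Golomb-Rice codes whose parameter k adapts to a running mean of the mapped magnitudes. The zigzag signed mapping and the specific adaptation rule are illustrative assumptions, not the paper's exact scheme.

```python
def zigzag(v: int) -> int:
    # Map signed levels to non-negative ints: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4.
    return (v << 1) if v >= 0 else -(v << 1) - 1

def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice code: unary quotient, then k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    code = "1" * q + "0"                  # unary part, terminated by '0'
    if k:
        code += format(r, f"0{k}b")
    return code

def encode_levels(levels):
    bits, mean = [], 2.0                  # running estimate of E[zigzag(level)]
    for v in levels:
        u = zigzag(v)
        k = max(0, int(mean).bit_length() - 1)   # Rice parameter from mean
        bits.append(rice_encode(u, k))
        mean = 0.95 * mean + 0.05 * u            # illustrative adaptation rule
    return "".join(bits)

print(encode_levels([0, -1, 3, -2, 5]))   # short codes for small levels
```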
{"title":"Adaptive Golomb Codes For Level Binarization In The H.264/AVC FRExt Lossless Mode","authors":"D. Bardone, Elias S. G. Carotti, J. de Martin","doi":"10.1109/ISSPIT.2008.4775706","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775706","url":null,"abstract":"Fidelity Range Extensions (FRExt) is an H.264/AVC amendment which provides enhanced coding tools and the possibility to perform high resolution and lossless video encoding. However, most of the efforts for lossless coding in the H.264/AVC framework have been concentrated on improving the prediction step while leaving the entropy coder; CABAC, originally designed for lossy coding, unaltered. However, if transformation and quantization of the corresponding coefficients are not performed, as is the case of the lossless coding mode for FRExt, CABAC becomes sub-optimal. In this paper we show how considerable improvements in compression ratios can be achieved with simple modifications of the CABAC engine. The proposed technique was tested on a set of 4:4:4 test sequences, achieving gains of up to 12.80% with respect to the original unmodified H.264/AVC algorithm.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117034481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incremental Pattern Recognition on EEG Signal
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775709
Kam Swee Ng, Hyung-Jeong Yang, Sun-Hee Kim, Jong-Mun Jeong
EEG-based brain-computer interfaces provide a new communication pathway between the human brain and the computer. They can enable handicapped or disabled users to interact through a computer interface, and can also be used to control muscle movement. In this paper, we show that meaningful information can be extracted from EEG signals through an incremental approach. We applied principal component analysis incrementally to recognize patterns in series of EEG data consisting of actual and imagined limb movements. Our experiments show that the approach is promising, especially for time-series data, because it works incrementally.
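scikit-learn's IncrementalPCA gives a compact way to reproduce the incremental flavour of the approach (the paper predates the library); the channel count, window length, batch size, and random placeholder data below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

N_CHANNELS, WIN = 32, 256                 # assumed montage and window length
ipca = IncrementalPCA(n_components=8)

def stream_windows(n_batches=10, batch=40, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_batches):
        # Each row is one flattened EEG window (random placeholder data).
        yield rng.standard_normal((batch, N_CHANNELS * WIN))

for X in stream_windows():
    ipca.partial_fit(X)                   # update the subspace, no full refit

features = ipca.transform(next(stream_windows(n_batches=1)))
print(features.shape)                     # (40, 8) low-dimensional features
```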
{"title":"Incremental Pattern Recognition on EEG Signal","authors":"Kam Swee Ng, Hyung-Jeong Yang, Sun-Hee Kim, Jong-Mun Jeong","doi":"10.1109/ISSPIT.2008.4775709","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775709","url":null,"abstract":"EEG based brain computer interface has provided a new communication pathway between the human brain and the computer. It can be used for handicap or disabled users to interact with human using the computer interface. It can also be used in controlling human's muscles movement. In this paper, we show that meaningful information can be extracted from EEG signal through incremental approach. We applied principal component analysis incrementally which recognizes patterns in the series of EEG data that consists of actual and imaginary limb movements. Our experiments have proven that the approach is promising especially in time series data because it works incrementally.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121652642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Click Fraud Detection by Real Time Data Fusion
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775655
M. Kantardzic, C. Walgampaya, B. Wenerstrom, O. Lozitskiy, S. Higgins, D. King
Click fraud is a type of Internet crime that occurs in pay-per-click online advertising when a person, automated script, or computer program imitates a legitimate Web browser user clicking on an ad, for the purpose of generating a charge per click without having actual interest in the target of the ad's link. Most available commercial solutions are click fraud reporting systems, not real-time click fraud detection and prevention systems. A new solution is proposed in this paper that analyzes detailed user click activity based on data collected from different sources. More information about each click enables better evaluation of the quality of click traffic. We use multi-source data fusion to merge client-side and server-side activities. The proposed solution is integrated into our CCFDP V1.0 system for real-time detection and prevention of click fraud. We tested the system with real-world data from an actual ad campaign; the results show that additional real-time information about clicks improves the quality of click fraud analysis.
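A toy fusion step keyed on a click identifier shows the merge of client-side and server-side records. The field names, the click-id join, and the bot heuristic are illustrative assumptions, not CCFDP's actual feature set.

```python
from dataclasses import dataclass

@dataclass
class FusedClick:
    click_id: str
    ip: str
    mouse_events: int      # client-side activity before the click
    dwell_ms: int          # time on the landing page (server-side)

def fuse(server_log: dict, client_log: dict) -> list:
    """Join server records with client records on the click id."""
    fused = []
    for cid, s in server_log.items():
        c = client_log.get(cid, {"mouse_events": 0})  # client data may be absent
        fused.append(FusedClick(cid, s["ip"], c["mouse_events"], s["dwell_ms"]))
    return fused

def suspicious(click: FusedClick) -> bool:
    # Illustrative rule: no client-side activity plus an instant bounce.
    return click.mouse_events == 0 and click.dwell_ms < 500
```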
{"title":"Improving Click Fraud Detection by Real Time Data Fusion","authors":"M. Kantardzic, C. Walgampaya, B. Wenerstrom, O. Lozitskiy, S. Higgins, D. King","doi":"10.1109/ISSPIT.2008.4775655","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775655","url":null,"abstract":"Click fraud is a type of Internet crime that occurs in pay per click online advertising when a person, automated script, or computer program imitates a legitimate user of a Web browser clicking on an ad, for the purpose of generating a charge per click without having actual interest in the target of the ad's link. Most of the available commercial solutions are just click fraud reporting systems, not real-time click fraud detection and prevention systems. A new solution is proposed in this paper that will analyze the detailed user click activities based on data collected form different sources. More information about each click enables better evaluation of the quality of click traffic. We utilize the multi source data fusion to merge client side and server side activities. Proposed solution is integrated in our CCFDP V1.0 system for a real-time detection and prevention of click fraud. We have tested the system with real world data from an actual ad campaign where the results show that additional real-time information about clicks improve the quality of click fraud analysis.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"279 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122504398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirical Mode Decomposition In Epileptic Seizure Prediction
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775729
A. Tafreshi, A. Nasrabadi, Amir H. Omidvarnia
In this paper, we analyze the effectiveness of Empirical Mode Decomposition (EMD) for discriminating preictal periods from interictal periods. EMD is a general signal processing method for analyzing nonlinear and nonstationary time series. Its main idea is to decompose a time series into a finite and often small number of intrinsic mode functions (IMFs). EMD is an adaptive decomposition method, since the extracted information is obtained directly from the original signal. Using this method to obtain features of interictal and preictal signals, we compare these features with traditional features such as AR model coefficients, and with combinations of the two, through a self-organizing map (SOM). Our results confirm that the proposed features can potentially distinguish interictal from preictal data, with an average success rate of up to 89.68% over 19 patients.
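A compact sifting loop conveys the mechanics of EMD; real implementations (e.g., the PyEMD package) add careful boundary handling and stopping criteria, and the tolerances and test signal here are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, max_iter=50, tol=1e-3):
    """Extract one intrinsic mode function (IMF) by sifting."""
    t = np.arange(len(x))
    h = x.astype(float).copy()
    for _ in range(max_iter):
        d = np.diff(h)
        maxima = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
        minima = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
        if len(maxima) < 3 or len(minima) < 3:     # too few extrema to sift
            break
        upper = CubicSpline(maxima, h[maxima])(t)  # envelope of the maxima
        lower = CubicSpline(minima, h[minima])(t)  # envelope of the minima
        mean = 0.5 * (upper + lower)
        if np.sum(mean ** 2) / (np.sum(h ** 2) + 1e-12) < tol:  # illustrative
            return h - mean
        h = h - mean
    return h

def emd(x, n_imfs=5):
    """Decompose x into IMFs plus a residue."""
    imfs, residue = [], np.asarray(x, float).copy()
    for _ in range(n_imfs):
        imf = sift(residue)
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imfs, residue = emd(x, n_imfs=3)   # the fast component separates out first
```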
{"title":"Empirical Mode Decomposition In Epileptic Seizure Prediction","authors":"A. Tafreshi, A. Nasrabadi, Amir H. Omidvarnia","doi":"10.1109/ISSPIT.2008.4775729","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775729","url":null,"abstract":"In this paper, we attempt to analyze the effectiveness of the Empirical Mode Decomposition (EMD) for discriminating epilepticl periods from the interictal periods. The Empirical Mode Decomposition (EMD) is a general signal processing method for analyzing nonlinear and nonstationary time series. The main idea of EMD is to decompose a time series into a finite and often small number of intrinsic mode functions (IMFs). EMD is an adaptive decomposition method since the extracted information is obtained directly from the original signal. By utilizing this method to obtain the features of interictal and preictal signals, we compare these features with traditional features such as AR model coefficients and also the combination of them through self-organizing map (SOM). Our results confirmed that our proposed features could potentially be used to distinguish interictal from preictal data with average success rate up to 89.68% over 19 patients.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131299606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simple Computer Vision System for Chess Playing Robot Manipulator as a Project-based Learning Example
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775676
E. Sokic, M. Ahic-Djokic
This paper presents an example of project-based learning (PBL) in an undergraduate course on image processing. The design of a simple, low-cost computer vision system for a chess-playing robot is discussed. The system is based on a standard CCD camera and a personal computer. The project is a good vehicle for learning most of the course material that would otherwise be mastered through homework problems and pre-exam study. An algorithm that detects chess moves is proposed. It compares two or more frames captured before, during, and after a played chess move, and finds differences between them that are used to define the move. Further image processing is required to eliminate false readings, recognize the direction of chess moves, and eliminate image distortion. Many image processing problems and solutions can be introduced to students through the proposed algorithm. The results are encouraging: students without any previous knowledge of image processing or advanced topics such as artificial intelligence (e.g., neural networks) can attain a chess-move recognition success rate greater than 95% in controlled lighting environments.
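A toy frame-differencing step shows how changed squares can be mapped to a move. It assumes each frame has already been rectified to a top-down 256x256 grayscale board image with white at the bottom, and the change threshold is illustrative; castling, captures, occlusion by the arm, and lighting changes all need the extra filtering the paper describes.

```python
import numpy as np

SQUARE = 32                      # pixels per square on a 256x256 board image

def square_means(frame):
    # Mean intensity of each of the 64 squares -> (8, 8) array.
    return frame.reshape(8, SQUARE, 8, SQUARE).mean(axis=(1, 3))

def detect_move(before, after, thresh=12.0):
    diff = np.abs(square_means(after) - square_means(before))
    changed = np.argwhere(diff > thresh)
    if len(changed) != 2:
        return None              # noise, castling, occlusion: handled elsewhere

    def to_name(rc):
        # Row 0 is the top of the image, i.e. rank 8 (assumed orientation).
        return "abcdefgh"[rc[1]] + str(8 - rc[0])

    # Distinguishing source from destination needs piece/colour logic;
    # here we simply report the two changed squares.
    return to_name(changed[0]), to_name(changed[1])
```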
{"title":"Simple Computer Vision System for Chess Playing Robot Manipulator as a Project-based Learning Example","authors":"E. Sokic, M. Ahic-Djokic","doi":"10.1109/ISSPIT.2008.4775676","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775676","url":null,"abstract":"This paper presents an example of project-based learning (PBL) in an undergraduate course on Image processing. The design of a simple, low-cost computer vision system for implementation on a chess-playing capable robot is discussed. The system is based on a standard CCD camera and a personal computer. This project is a good tool for learning most of the course material that would otherwise be mastered by homework problems and study before an exam. An algorithm which detects chess moves is proposed. It compares two or more frames captured before, during and after a played chess move, and finds differences between them, which are used to define a played chess move. Further image processing is required to eliminate false readings, recognize direction of chess moves, end eliminate image distortion. Many Image processing problems and solutions can be introduced to students, through the proposed algorithm. The results are encouraging - students without any previous knowledge in image processing and advanced topics, such as artificial intelligence (neural networks etc.), may attain a chess move recognition success rate greater than 95%, in controlled light environments.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128302692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Approach to Minimum Phase Prefiltering of Short Length Filters
Pub Date: 2008-12-01 | DOI: 10.1109/ISSPIT.2008.4775693
S. Krishnamurthy
Motivations for performing prefiltering based on root finding are presented for an interference-canceling receiver employing a reduced-state equalizer such as DFSE or RSSE. Since the interference canceling filter (ICF) has an MMSE-DFE structure that shortens the channel impulse response (CIR), a low-complexity minimum-phase prefilter can be applied before equalization. Root-finding-based prefiltering for second-order filters is of particular interest, since closed-form solutions can be obtained with fewer computations. For such second-order filters, the CIR can be classified as minimum, mixed, or maximum phase based on a few inequalities that directly use the complex-valued channel coefficients. The proposed inequalities help retain maximum accuracy with low complexity by avoiding the approximation algorithms otherwise involved in root identification on a DSP. While samples corresponding to minimum- and maximum-phase channels are processed directly, root finding is employed only to transform mixed-phase channels into their minimum-phase equivalents.
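The sketch below carries out the classification and the mixed-phase fix in closed form for a second-order filter: the quadratic formula gives the roots, magnitude tests classify the phase, and any root outside the unit circle is reflected to 1/conj(r) with a |r| gain factor so the magnitude response is preserved. The tolerance and the test taps are illustrative, and the paper's inequalities avoid computing roots at all in the minimum- and maximum-phase cases.

```python
import numpy as np

def to_minimum_phase(h, tol=1e-9):
    """h = (h0, h1, h2): taps of H(z) = h0 + h1 z^-1 + h2 z^-2, h0 != 0."""
    h0, h1, h2 = (complex(c) for c in h)
    disc = np.sqrt(h1 * h1 - 4 * h0 * h2 + 0j)        # closed-form roots
    roots = [(-h1 + disc) / (2 * h0), (-h1 - disc) / (2 * h0)]
    mags = [abs(r) for r in roots]
    if all(m <= 1 + tol for m in mags):
        phase = "minimum"
    elif all(m > 1 - tol for m in mags):
        phase = "maximum"
    else:
        phase = "mixed"
    gain, new_roots = h0, []
    for r, m in zip(roots, mags):
        if m > 1 + tol:
            gain *= m                  # keeps |H(e^jw)| unchanged
            r = 1 / np.conj(r)         # reflect root inside the unit circle
        new_roots.append(r)
    # Rebuild taps from the (possibly reflected) roots.
    h_min = gain * np.poly(new_roots)
    return phase, h_min

phase, h_min = to_minimum_phase([1.0, -2.5, 1.0])     # roots at 2 and 0.5
print(phase, np.round(h_min.real, 3))                 # mixed [ 2.  -2.   0.5]
```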
{"title":"An Efficient Approach to Minimum Phase Prefiltering of Short Length Filters","authors":"S. Krishnamurthy","doi":"10.1109/ISSPIT.2008.4775693","DOIUrl":"https://doi.org/10.1109/ISSPIT.2008.4775693","url":null,"abstract":"Motivations for performing prefiltering based on root finding are presented, for an interference canceling receiver employing a reduced-state equalizer such as DFSE or RSSE. Since the interference canceling filter (ICF) has a MMSE-DFE structure, that shortens the channel impulse response (CIR), a low complexity minimum phase prefilter can be applied before equalization. Root-finding based prefiltering for second order filters are of particular interest, since closed form solutions can be obtained with less computations. For such second order filters, CIR can be classified as minimum, mixed or maximum phase, based on few inequalities which directly use the complex-valued channel coefficients. Proposed inequalities help in retaining maximum accuracy with low complexity, by avoiding some approximation algorithms involved in root identification on DSP. While samples corresponding to minimum and maximum phase channels are processed directly, root-finding is employed only to transform the mixed phase channels to their minimum phase equivalents.","PeriodicalId":213756,"journal":{"name":"2008 IEEE International Symposium on Signal Processing and Information Technology","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130400740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}