Implementation of Image Processing Technique in Real Time Vision System for Automatic Weeding Strategy
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458197
M. Mustafa, A. Hussain, K. Ghazali, S. Riyadi
A weed can be thought of as any plant growing in the wrong place at the wrong time and doing more harm than good. Weeds compete with the crop for water, light, nutrients and space, and therefore reduce crop yields and also affect the efficient use of machinery. The most widely used method for weed control is the application of agricultural chemicals (herbicides and fertilizer products). This heavy reliance on chemicals raises many environmental and economic concerns, causing many farmers to seek alternatives for weed control in order to reduce chemical use in farming. Since hand labor is costly, an automated weed control system may be economically feasible; a real-time precision automated weed control system could also reduce or eliminate the need for chemicals. In this research, an intelligent real-time automatic weed control system using image processing has been developed to identify and discriminate between two weed types, narrow-leaf and broad-leaf. The core component of the vision system is the image processing used to recognize the weed type. Two image processing techniques, the gray-level co-occurrence matrix (GLCM) and the fast Fourier transform (FFT), have been applied and compared to find the better solution for weed recognition and classification. The developed machine vision system consists of a mechanical structure that includes a sprayer, a Logitech web camera, a 12 V motor coupled with a pump system, and a small-form-factor CPU as the processor. Offline images and recorded video were fed to the system, and the weed classification success rate is above 80%.
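As a concrete illustration of the texture route, the sketch below computes a GLCM and three classic Haralick-style features in plain numpy. The offset, number of gray levels, and feature set are illustrative assumptions; the paper does not specify its GLCM configuration.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """GLCM texture features for a 2-D uint8 grayscale image.

    dx, dy define the co-occurrence offset (an assumed choice,
    not taken from the paper).
    """
    # Quantize the image to a small number of gray levels.
    q = (img.astype(np.float64) / 256.0 * levels).astype(np.int64)
    h, w = q.shape
    glcm = np.zeros((levels, levels), dtype=np.float64)
    # Count co-occurring gray-level pairs at the given offset.
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize to a joint probability table
    i, j = np.indices((levels, levels))
    return {
        "contrast": np.sum(glcm * (i - j) ** 2),
        "energy": np.sum(glcm ** 2),
        "homogeneity": np.sum(glcm / (1.0 + np.abs(i - j))),
    }
```

Feature vectors such as these would then be fed to a classifier to separate narrow-leaf from broad-leaf weeds, with the FFT route supplying an alternative, frequency-domain feature set.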
{"title":"Implementation of Image Processing Technique in Real Time Vision System for Automatic Weeding Strategy","authors":"M. Mustafa, A. Hussain, K. Ghazali, S. Riyadi","doi":"10.1109/ISSPIT.2007.4458197","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458197","url":null,"abstract":"A weed can be thought of as any plant growing in the wrong place at the wrong time and doing more harm than good. Weeds compete with the crop for water, light, nutrients and space, and therefore reduce crop yields and also affect the efficient use of machinery. The most widely used method for weed control is to use agricultural chemicals (herbicides and fertilizer products). This heavy reliance on chemicals raises many environmental and economic concerns, causing many farmers to seek alternatives for weed control in order to reduce chemical use in farming. Since hand labor is costly, an automated weed control system may be economically feasible. A real-time precision automated weed control system could also reduce or eliminate the need for chemicals. In this research, an intelligent real-time automatic weed control system using image processing has been developed to identify and discriminate the weed types namely as narrow and broad. The core component of vision technology is the image processing to recognize type of weeds. Two techniques of image processing, GLCM and FFT have been used and compared to find the best solution of weed recognition for classification. The developed machine vision system consists of a mechanical structure which includes a sprayer, a Logitech web-digital camera, 12v motor coupled with a pump system and a small size CPU as a processor. Offline images and recorded video has been tested to the system and classification result of weed shows the successful rate is above 80%.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117235182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Analysis of VS-APA, QR-RLS and Approximated APA on Feedback Cancellation in Hearing Aids
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458111
S. Mohammad Nikjoo, S.M.A. Moghadas, A. Salmanpour, A.S. Tehrani
Different adaptive filter algorithms are used for feedback cancellation in hearing aids. In this paper we first explain three algorithms for feedback cancellation that are robust against colored input signals: the approximated affine projection algorithm (APA) using Gram-Schmidt prediction, the variable step-size APA, and the QR-RLS algorithm. We then apply these algorithms to feedback cancellation in hearing aids, using two kinds of input signals in our implementation, a sound-shaped signal and a music-shaped signal. The performance of the algorithms in each case is analyzed, and simulation results are presented for comparison.
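All three variants build on the basic affine projection update, sketched below for a single iteration. The step size and regularization are assumed values, and none of the paper's specific refinements (Gram-Schmidt prediction, variable step size, QR decomposition) are implemented here.

```python
import numpy as np

def apa_step(w, X, d, mu=0.5, delta=1e-6):
    """One standard affine projection (APA) update.

    w: (L,) current estimate of the feedback path
    X: (L, P) matrix whose columns are the last P input vectors
    d: (P,) desired samples (microphone signal at the same instants)
    mu, delta: step size and regularization (assumed values)
    """
    e = d - X.T @ w  # a-priori errors over the P-sample window
    # Project the error through the regularized input correlation matrix.
    g = np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * X @ g, e
```

The projection order P trades off convergence speed against complexity; P = 1 reduces this update to NLMS.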
{"title":"Performance Analysis of VS-APA, QR-RLS and Approximated APA on Feedback Cancellation in Hearing Aids","authors":"S. Mohammad Nikjoo, S.M.A. Moghadas, A. Salmanpour, A.S. Tehrani","doi":"10.1109/ISSPIT.2007.4458111","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458111","url":null,"abstract":"Different adaptive filter algorithms are used for feedback cancellation in hearing aids. In this paper first, we will explain three different algorithms for feedback cancellation, which are robust against colored input signals. These algorithms are including the Approximated Affine Projection using Gram-Schmidt prediction, the Variable Step-size APA algorithm, and the QR-RLS algorithm. Then, we will use these algorithms in feedback cancellation in hearing aids. We use two kinds of input signals in our implementation, the sound shape signal and the music shape signal. Then, the performance of these algorithms in each case is analyzed, simulation results are presented for a better comparison.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117178277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PIRS: Pseudo Inversion Based Recovery of Speech Signals
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458131
H. Ajorloo, A. Lakdashti, M. Manzuri-Shalmani
Speech communication over error-prone channels such as wireless links and the Internet often suffers from the loss of large numbers of adjacent samples. In this paper, we propose introducing artificial correlation between speech samples, which distorts the signal; by choosing appropriate parameters, one can keep this distortion within acceptable ranges. Using this correlation, the receiver can recover lost samples up to a certain limit with our proposed algorithm. Experimental results show that our solution outperforms a previous one reported in the literature, especially when the number of lost samples is below the mentioned limit.
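The sketch below illustrates the underlying idea with a generic correlating transform: redundancy injected at the transmitter lets a pseudo-inverse at the receiver solve for the lost samples, as long as the loss count stays below the redundancy limit. The random matrix H and the block sizes are stand-ins, not the paper's actual transform.

```python
import numpy as np

rng = np.random.default_rng(0)

K, N = 64, 80                    # assumed sizes: K source samples, N correlated samples
H = rng.standard_normal((N, K))  # stand-in for the paper's correlating transform

x = rng.standard_normal(K)       # original speech block
y = H @ x                        # transmitted block with artificial correlation

# The channel erases some samples of the transmitted block.
lost = rng.choice(N, size=12, replace=False)
kept = np.setdiff1d(np.arange(N), lost)

# Pseudo-inversion of the surviving rows recovers the block exactly
# while the number of losses stays below the N - K redundancy limit.
x_hat = np.linalg.pinv(H[kept]) @ y[kept]
print(np.allclose(x_hat, x))     # True for this loss count
```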
{"title":"PIRS: Pseudo Inversion Based Recovery of Speech Signals","authors":"H. Ajorloo, A. Lakdashti, M. Manzuri-Shalmani","doi":"10.1109/ISSPIT.2007.4458131","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458131","url":null,"abstract":"Communication of speech over error prone channels such as wireless channels and internet usually suffers from loss of large number of adjacent samples. In this paper, we propose to make artificial correlation between speech samples which distorts it. By choosing appropriate parameters, one can control this distortion to lie below acceptable ranges. Using this correlation, the receiver can recover lost samples up to a certain limit using our proposed algorithm. Experimental results show that our solution overcomes a previous one reported in the literature specially when the amount of lost samples are below the mentioned limit.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"47 29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126699756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Use of Simulation and Random Matrix Theory to Identify the State of Network Traffic
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458024
V. Rojkova, Y. Khalil, Adel Said Elmaghraby, M. Kantardzic
The traffic behavior of the University of Louisville network, with its interconnected backbone routers and a number of Virtual Local Area Network (VLAN) subnets, is investigated using the Random Matrix Theory (RMT) approach. We employ a system of equal-interval time series of traffic counts at all router-to-router and router-to-subnet connections as a representation of the inter-domain traffic. The cross-correlation matrix C of the real and simulated traffic-rate changes between different traffic time series is calculated and tested against the null hypothesis of random interactions. The majority of the eigenvalues λ_i of matrix C fall within the bounds predicted by RMT for the eigenvalues of random correlation matrices. The inverse participation ratio (IPR) of congested traffic shows a higher level of localization (a smaller number of randomly interacting network nodes); in other words, the IPR level signifies the onset of congestion or correlated traffic. Hence, the RMT-based model for multiple-input multiple-output (MIMO) systems, widely accepted in the wireless communication domain, is quite applicable to the analysis and modeling of traffic dynamics in wired systems. In particular, the IPR relative to the RMT-predicted boundaries in real traffic can be used as a congestion indicator in network congestion control mechanisms.
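A minimal sketch of the test: build the correlation matrix from standardized traffic series, compare its spectrum with the Marchenko-Pastur bounds for random correlation matrices, and compute the IPR of each eigenvector. The bounds follow from standard RMT; the windowing and standardization choices are an assumed setup.

```python
import numpy as np

def rmt_analysis(R):
    """R: (T, N) array of traffic-rate changes, T time steps x N links."""
    T, N = R.shape
    # Cross-correlation matrix of standardized time series.
    Z = (R - R.mean(axis=0)) / R.std(axis=0)
    C = (Z.T @ Z) / T
    lam, vecs = np.linalg.eigh(C)
    # Marchenko-Pastur bounds for a purely random correlation matrix.
    q = N / T
    lam_min, lam_max = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
    deviating = lam[(lam < lam_min) | (lam > lam_max)]
    # Inverse participation ratio per eigenvector: a large IPR means a
    # localized eigenvector, i.e. few strongly interacting nodes --
    # the congestion cue described in the abstract.
    ipr = np.sum(vecs ** 4, axis=0)
    return deviating, ipr
```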
{"title":"Use of Simulation and Random Matrix Theory to Identify the State of Network Traffic","authors":"V. Rojkova, Y. Khalil, Adel Said Elmaghraby, M. Kantardzic","doi":"10.1109/ISSPIT.2007.4458024","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458024","url":null,"abstract":"The traffic behavior of the University of Louisville network with the interconnected backbone routers and the number of Virtual Local Area Network (VLAN) subnets is investigated using the Random Matrix Theory (RMT) approach. We employ the system of equal interval time series of traffic counts at all router to router and router to subnet connections as a representation of the inter-domain traffic. The cross-correlation matrix C of the real and simulated traffic rate changes between different traffic time series is calculated and tested against null- hypothesis of random interactions. The majority of the eigenvalues lambdai of matrix C fall within the bounds predicted by the RMT for the eigenvalues of random correlation matrices. The inverse participation ratio (IPR) of congested traffic shows the higher level of localization (fewer number of randomly interacting network nodes). In other words, the IPR level signifies the start of congestion or correlated traffic. Hence, the RMT based model for multiple input multiple output (MIMO) system widely accepted in wireless communication domain is quite applicable for analysis and modeling of traffic dynamics of wired systems. In particular, the IPR of the RMT predicted boundaries in real traffic can be used as a congestion indicator in network congestion control mechanisms.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121628855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experiments on Sensitivity of Template Matching for Lung Nodule Detection in Low Dose CT Scans
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458213
S. Elhabian, Hossam Abd El Munim, S. Elshazly, Alya Farag, Mohamed Aboelghar
Template matching is a common approach for the detection of lung nodules in CT scans. Templates may take different shapes, sizes, and intensity distributions. The process of nodule detection essentially involves two steps: isolation of candidate nodules and elimination of false-positive nodules. The processes of outlining the detected nodules and classifying them (i.e., assigning a pathology to each nodule) complete the CAD system for early detection of lung nodules. This paper is concerned with the template design and with evaluating the effectiveness of the first step in the nodule detection process; it addresses neither the problem of reducing false positives nor nodule segmentation and classification. Only parametric templates are considered. Modeling of the gray-scale distribution for the templates is based on prior knowledge of typical nodules extracted by radiologists. The effectiveness of the template matching is investigated by cross-validation with respect to the ground truth and is described by hit-rate curves indicating the probability of detection as a function of the shape, size, and, where applicable, orientation of the templates. We used synthetic images and sample real CT scans in our experiments. In tests on synthetic data, template matching is found to be more sensitive to additive noise than to image blurring. On the sample CT scans, small circular and hollow-circular templates provided results comparable to those of human experts.
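The candidate-isolation step reduces to sliding a parametric template over the image and thresholding a similarity map. Below is a brute-force normalized cross-correlation sketch with a simple circular (disc) template; the template size and the correlation measure are assumptions, and a real system would use FFT-based correlation for speed.

```python
import numpy as np

def circular_template(radius, size):
    """Simple parametric template: bright disc on a dark background."""
    yy, xx = np.indices((size, size)) - (size - 1) / 2.0
    return (xx ** 2 + yy ** 2 <= radius ** 2).astype(np.float64)

def ncc_map(image, template):
    """Normalized cross-correlation of a template with every image patch."""
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            out[y, x] = np.mean(p * t)
    return out  # threshold this map to obtain candidate nodules
```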
{"title":"Experiments on Sensitivity of Template Matching for Lung Nodule Detection in Low Dose CT Scans","authors":"S. Elhabian, Hossam Abd, El Munim, S. Elshazly, Alya Farag, Mohamed Aboelghar","doi":"10.1109/ISSPIT.2007.4458213","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458213","url":null,"abstract":"Template matching is a common approach for detection of lung nodules from CT scans. Templates may take different shapes, size and intensity distribution. The process of nodule detection is essentially two steps: isolation of candidate nodules, and elimination of false positive nodules. The processes of outlining the detected nodules and their classification (i.e., assigning pathology for each nodule) complete the CAD system for early detection of lung nodules. This paper is concerned with the template design and evaluating the effectiveness of the first step in the nodule detection process. The paper will neither address the problem of reducing false positives nor would it deal with nodule segmentation and classification. Only parametric templates are considered. Modeling the gray scale distribution for the templates is based on the prior knowledge of typical nodules extracted by radiologists. The effectiveness of the template matching is investigated by cross validation with respect to the ground truth and is described by hit rate curves indicating the probability of detection as function of shape, size and orientation, if applicable, of the templates. We used synthetic and sample real CT scan images in our experiments. It is found that template matching is more sensitive to additive noise than image blurring when tests conducted on synthetic data. On the sample CT scans small size circular and hollow-circular templates provided comparable results to human experts.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123089310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzzy design of a video-on-demand network
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458005
A. Abadpour, A. Alfa, Jeff Diamond
Designing a video-on-demand (VoD) system is in essence an optimization task aimed at minimizing the cost of communication and storage in the corresponding network. The decision variables of this problem are the locations of the nodes plus the content to be cached in each node. Furthermore, an assignment strategy is needed to determine, for each customer, which node should be contacted for each video file. While this problem falls into the general group of network optimization problems, its specific characteristics demand that a new solution be sought. In this paper, inspired by the success of fuzzy optimization on similar problems in coding, a fuzzy objective function is derived that is heuristically shown to minimize the communication cost in a VoD network while also controlling the storage cost. An iterative algorithm is then proposed to find an optimal solution to the resulting problem. After the mathematical details of the proposed method are addressed, a sample problem is presented, followed by the solution the proposed method produces for it; this solution is then analyzed extensively.
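To make the fuzzy-assignment idea concrete, the sketch below computes fuzzy c-means-style memberships of customers to nodes from a communication-cost matrix. This is only an illustration of soft assignment; the paper derives its own objective function, which also couples in the storage cost and node placement.

```python
import numpy as np

def fuzzy_assignment(cost, m=2.0):
    """Fuzzy memberships u[i, j] of customer i to node j, FCM style.

    cost[i, j]: positive communication cost between customer i and
    candidate node j (plays the role of squared distance in FCM).
    m: fuzzifier; m -> 1 recovers hard nearest-node assignment.
    """
    # u_ij proportional to cost_ij^(-1/(m-1)), each row normalized to 1.
    w = cost ** (-1.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)
```

Alternating such membership updates with re-optimization of node locations and cached content is the shape of the iterative algorithm the abstract describes.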
{"title":"Fuzzy design of a video-on-demand network","authors":"A. Abadpour, A. Alfa, Jeff Diamond","doi":"10.1109/ISSPIT.2007.4458005","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458005","url":null,"abstract":"Designing a video-on-demand (VoD) system is in essence an optimization task aimed at minimizing the cost of communication and storage in the corresponding network. The decision variables of this problem are the locations of the nodes plus the content which should be cached in each node. Furthermore, an assignment strategy is needed to determine, for each customer, which node should be contacted for each video file. While this problem is categorized in the general group of network optimization problems, its specific characteristics demand a new solution to be sought for it. In this paper, inspired by the success of fuzzy optimization for similar problems in coding, a fuzzy objective function is derived which is heuristically shown to minimize the communication cost in a VoD network, while controlling the storage cost as well. Then, an iterative algorithm is proposed to find an optimum solution to the proposed problem. After addressing the mathematical details of the proposed method, a sample problem is presented followed by the solution produced for it by the proposed method. This solution is then extensively analyzed.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"474 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121251869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frequency-Domain Analysis of the Human Brain for Studies of Autism
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458177
Hossam Abd El Munim, Alya Farag, Manuel F. Casanova
Geometric analysis of normal and autistic human subjects reveals distinctions in deformations of the corpus callosum (CC) that may be used for image-analysis-based studies of autism. Preliminary studies showed that the CC of autistic patients is quite distinct from that of normal controls. We use an implicit vector representation of the CC to carry out the registration process, which reduces the pose differences between the CC models. Complex Fourier descriptor analysis is then used to extract a feature vector from each CC model, and this feature vector is used to build a criterion for discriminating between normal and autistic subjects. This paper introduces a new method for the 2D shape registration problem by matching vector distance functions. A variational framework is proposed for the global and local registration of CCs, using a gradient descent optimization that can efficiently handle the rigid and non-rigid operations together. The registration of real CCs extracted from MRI data sets demonstrates the potential of the proposed approach, and discrimination results are also presented to show the efficiency of the discrimination technique.
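A minimal sketch of the descriptor step: treat the registered CC boundary as a complex signal and keep low-order Fourier coefficient magnitudes. The normalization choices (drop the DC term for translation, divide by the first harmonic for scale, keep magnitudes for rotation and start point) are standard conventions assumed here, not details taken from the paper.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Complex Fourier descriptors of a closed 2-D contour.

    contour: (N, 2) array of (x, y) boundary points of a CC shape,
    sampled in order around the boundary.
    """
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as a complex signal
    Z = np.fft.fft(z)
    Z[0] = 0.0                              # translation invariance
    Z = Z / np.abs(Z[1])                    # scale invariance
    return np.abs(Z[1:n_coeffs + 1])        # rotation/start-point invariance
```

The resulting fixed-length vectors can be compared directly (e.g., by distance to class means) to separate the two subject groups.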
{"title":"Frequency-Domain Analysis of the Human Brain for Studies of Autism","authors":"Hossam Abd, El Munim, Alya Farag, Manuel F. Casanova","doi":"10.1109/ISSPIT.2007.4458177","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458177","url":null,"abstract":"Geometric analysis of normal and autistic human subjects reveal distinctions in deformations in the corpus callosum (CC) that may be used for image analysis-based studies of autism. Preliminary studies showed that the CC of autistic patients is quite distinct from normal controls. We use an implicit vector representation of CC to carry out the registration process which reduces the pose differences between the CC's models. Then the complex Fourier descriptor analysis is used to extract a feature vector of each CC model. This feature is used to build a criteria of discrimination between the normal and autistic subjects. This paper introduces a new method for the 2D shape registration problem by matching vector distance functions. A variational frame work is proposed for the global and local registration of CC's. A gradient descent optimization is used which can efficiently handle both the rigid and the non-rigid operations together. The registration of real CC extracted from MRI data sets demonstrates the potential of the proposed approach. Discrimination results will be demonstrated as well to show the efficiency of the discrimination technique.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123775907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-class classification with a probabilistic localized decoder
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458004
Takashi Takenouchi, Shin Ishii
Based on the framework of error-correcting output coding (ECOC), we formerly proposed a multi-class classification method in which the misclassification of each binary classifier is regarded as a bit-inversion error under a probabilistic model of a noisy channel. In this article, we propose a modification of the method, based on localized likelihood, to deal with the discrepancy between the metric assumed by the binary classifiers and the one underlying the dataset. Experiments using a synthetic dataset are performed, and we observe an improvement from the localized method.
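The sketch below shows the noisy-channel decoding the earlier method builds on: each class codeword is scored by the likelihood of the observed classifier bits under independent bit inversions. The flip probability eps is a single assumed constant here; the paper's localized modification, which adapts the likelihood to the input, is not implemented.

```python
import numpy as np

def ecoc_decode(code_matrix, bits, eps=0.1):
    """Decode ECOC outputs as a noisy channel with bit-inversion errors.

    code_matrix: (n_classes, n_bits) array of {0, 1} class codewords
    bits: (n_bits,) observed binary classifier outputs
    eps: assumed per-classifier bit-inversion probability
    """
    mismatch = (code_matrix != bits).sum(axis=1)  # Hamming distance per class
    n_bits = code_matrix.shape[1]
    # Log-likelihood of each codeword under independent bit flips.
    logp = mismatch * np.log(eps) + (n_bits - mismatch) * np.log(1.0 - eps)
    p = np.exp(logp - logp.max())
    return p / p.sum()  # posterior over classes (uniform prior)
```

With a constant eps this reduces to minimum-Hamming-distance decoding; the probabilistic form is what makes the localized, input-dependent variant possible.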
{"title":"A multi-class classification with a probabilistic localized decoder","authors":"Takashi Takenouchi, Shin Ishii","doi":"10.1109/ISSPIT.2007.4458004","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458004","url":null,"abstract":"Based on the framework of error-correcting output coding (ECOC), we formerly proposed a multi-class classification method in which mis-classification of each binary classifier is regarded as a bit inversion error based on a probabilistic model of the noisy channel. In this article, we propose a modification of the method, based on localized likelihood, to deal with the discrepancy of metric between assumed by binary classifiers and underlying the dataset. Experiments using a synthetic dataset are performed, and we observe the improvement by the localized method.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127555329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Speaker Recognition Using Both Vocal Source and Vocal Tract Features Estimated from Noisy Input Utterances
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458157
Ning Wang, P. C. Ching, Nengheng Zheng, Tan Lee
Motivated by the mechanism of speech production, we present a novel idea of using source-tract features to train speaker models for recognition. Considering the severe degradation that occurs when a speaker recognition system operates in a noisy environment, which could well be due to the loss of speaker-distinctive information, we propose a robust feature estimation method that can capture the source- and tract-related speech properties from noisy input utterances. As a simple yet useful speech enhancement technique, a spectral-subtraction-type algorithm is employed to remove the additive noise prior to the feature extraction process. It is shown through analytical derivation as well as simulation that the proposed feature estimation method leads to robust recognition performance, especially at very low signal-to-noise ratios. In the context of Gaussian mixture model-based speaker recognition with additive white Gaussian noise present in the input utterances, the new approach produces a consistent reduction of both the identification error rate and the equal error rate at signal-to-noise ratios ranging from 0 dB to 15 dB.
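The enhancement front end can be as simple as magnitude spectral subtraction over analysis frames, sketched below. The noise estimate from leading frames, the over-subtraction factor, and the spectral floor are common defaults assumed here, not the paper's settings.

```python
import numpy as np

def spectral_subtract(frames, n_noise_frames=10, alpha=1.0, floor=0.01):
    """Magnitude spectral subtraction over windowed time-domain frames.

    frames: (n_frames, frame_len) array; the leading frames are assumed
    to contain noise only, from which the noise spectrum is estimated.
    """
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    noise = mag[:n_noise_frames].mean(axis=0)        # noise magnitude estimate
    # Subtract the noise estimate, keeping a small spectral floor to
    # avoid negative magnitudes and musical-noise artifacts.
    clean = np.maximum(mag - alpha * noise, floor * mag)
    return np.fft.irfft(clean * np.exp(1j * phase), n=frames.shape[1], axis=1)
```

Source and tract features would then be extracted from the enhanced frames rather than the raw noisy input.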
{"title":"Robust Speaker Recognition Using Both Vocal Source and Vocal Tract Features Estimated from Noisy Input Utterances","authors":"Ning Wang, P. C. Ching, Nengheng Zheng, Tan Lee","doi":"10.1109/ISSPIT.2007.4458157","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458157","url":null,"abstract":"Motivated by the mechanism of speech production, we present a novel idea of using source-tract features in training speaker models for recognition. By considering the severe degradation occurring when a speaker recognition system operates under noisy environment, which could well be due to the missing of speaker-distinctive information, we propose a robust feature estimation method that can capture the source and tract related speech properties from noisy input speech utterances. As a simple yet useful speech enhancement technique, spectral subtractive-type algorithm is employed to remove the additive noise prior to feature extraction process. It is shown through analytical derivation as well as simulation that the proposed feature estimation method leads to robust recognition performance, especially for very low signal-to-noise ratios. In the context of Gaussian mixture model-based speaker recognition with the presence of additive white Gaussian noise in the input utterances, the new approach produces consistent reduction of both identification error rate and equal error rate at signal-to-noise ratios ranging from 0 dB to 15 dB.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115096472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Affine Projection Algorithm with an Adaptive Step-Size Equation
Pub Date: 2007-12-01 | DOI: 10.1109/ISSPIT.2007.4458078
Khaled Mayyas
An affine projection (AP) adaptive algorithm employing a new adaptive step-size control equation is proposed. The variable step size (VSS) is an efficient estimate of a theoretically optimal one, based on the minimization of the mean-square error (MSE) at each time instant. As a result, improved convergence speed is attained in the early stages of convergence, with small misadjustment near the optimum. The algorithm's enhanced performance characteristics are verified through simulation examples.
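Building on the standard APA step sketched earlier in this section, the snippet below adds a simple error-energy-driven step-size rule as a stand-in for the idea: large steps while the error is large, small steps near the optimum. The specific control equation the paper derives from MSE minimization is not reproduced here; this rule is purely illustrative.

```python
import numpy as np

def vss_apa_step(w, X, d, p, delta=1e-6, beta=0.99, c=1e-4):
    """APA update with a hypothetical variable step-size rule.

    w: (L,) filter estimate; X: (L, P) recent input vectors as columns;
    d: (P,) desired samples; p: smoothed error energy carried between
    calls; beta, c: smoothing and scaling constants (assumed values).
    """
    e = d - X.T @ w
    p = beta * p + (1.0 - beta) * (e @ e)  # smoothed error energy
    mu = p / (p + c)                       # near 1 early on, -> 0 near the optimum
    g = np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * X @ g, p
```

The qualitative behavior matches the abstract's claim: fast initial convergence from a large step size, with the shrinking step keeping misadjustment small once the error energy decays.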
{"title":"An Affine Projection Algorithm with an Adaptive Step-Size Equation","authors":"Khaled Mayyas","doi":"10.1109/ISSPIT.2007.4458078","DOIUrl":"https://doi.org/10.1109/ISSPIT.2007.4458078","url":null,"abstract":"An Affine Projection (AP) adaptive algorithm employing a new adaptive step-size control equation is proposed. The variable step-size (VSS) is an efficient esti mation of a theoretical optimal one based on the minimization of the mean-square error (MSE) at each time instant. As a result, improvement in convergence speed is attained in early stages of convergence with small misadjustment near the optimum. The algorithm enhanced performance characteristics are verified by simulation examples.","PeriodicalId":299267,"journal":{"name":"2007 IEEE International Symposium on Signal Processing and Information Technology","volume":"199 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115888188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}