Pub Date: 2007-12-04 | DOI: 10.1109/ISCIT.2007.4392109
Pyung-soo, Yong-Jin Kim
In this paper, a new fast vertical handover scheme is proposed for hierarchical Mobile IPv6 (HMIPv6) to optimize and enhance the existing fast vertical handover HMIPv6 (FVH-HMIPv6) in heterogeneous wireless access networks. The recently standardized IEEE 802.21 Media Independent Handover Function (MIHF) is adopted for the proposed FVH-HMIPv6. First, the Media Independent Information Service (MIIS) is extended with new L3 information providing the domain prefixes of heterogeneous neighbouring mobility anchor points (MAPs), which is critical to the handover performance of the proposed FVH-HMIPv6 with MIHF. Second, the operation procedure of the proposed scheme is described in detail. Finally, a handover performance analysis of the proposed and existing schemes shows that the proposed FVH-HMIPv6 with MIHF optimizes and enhances the handover performance of the existing scheme.
Title: Hierarchical Mobile IPv6 based fast vertical handover scheme for heterogeneous wireless networks
Pub Date: 2007-12-04 | DOI: 10.37936/ECTI-CIT.201041.54220
G. Dolecek
A modification of the conventional CIC (cascaded integrator-comb) filter for rational sample rate conversion (SRC) in software radio (SWR) systems is presented. The conversion factor is a ratio of two mutually prime numbers, where the decimation factor M can be expressed as a product of two integers. The overall filter realization is based on a stepped triangular form of the CIC impulse response, the corresponding expanded cosine filter, and a sine-based compensation filter. The filter performs sample rate conversion efficiently using only additions and subtractions, making it attractive for SWR applications.
Title: Modified CIC filter for rational sample rate conversion
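The additions-and-subtractions-only structure the abstract refers to can be sketched with a plain integer-factor CIC decimator. This is a generic illustration, not the paper's rational-SRC modification (which builds a stepped-triangular response and compensation filters on top of this core); the function name and parameters are illustrative:

```python
import numpy as np

def cic_decimate(x, M, N=3):
    """Decimate signal x by integer factor M with an N-stage CIC filter.

    A CIC filter is N cascaded integrators at the high rate, a
    downsampler by M, then N cascaded combs at the low rate. It uses
    only additions/subtractions, which is what makes it attractive
    for software-radio front ends.
    """
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):            # integrator stages (running sums)
        y = np.cumsum(y)
    y = y[M - 1::M]               # downsample by M
    for _ in range(N):            # comb stages: y[m] - y[m-1]
        y = np.diff(y, prepend=0)
    return y / M**N               # normalize out the DC gain of M**N
```

For a constant input the steady-state output is the input value, since each integrator/comb pair acts as a length-M moving average.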
Pub Date: 2007-12-04 | DOI: 10.1109/ISCIT.2007.4392213
W.V. Siricharoen
The Internet is already the primary source of destination information for travelers, and digital business has grown out of conventional business through the Internet and e-commerce. E-tourism software, adapted from general e-commerce software, provides online reservation and booking: in effect, a complete e-commerce solution for travel, available around the clock. E-tourism is a perfect candidate for the Semantic Web because it is information-based and depends on the World Wide Web both as a marketing medium and as a transaction channel. Ontologies can assist organization, browsing, parametric search and, in general, more intelligent access to online information and services, and ontology-based information retrieval handles the known challenges of Web-based information systems more efficiently. This paper discusses ontological trends that support the growing domain of online tourism. The first part introduces e-tourism implementations and the use of ontologies in e-tourism. The second part describes the fundamental background of ontologies: definitions, modeling, languages, etc. The third part discusses example concepts from existing ontology-based e-tourism systems and briefly summarizes the important e-tourism ontologies. The last part concludes the paper.
Title: E-commerce adaptation using ontologies for e-tourism
Pub Date: 2007-12-04 | DOI: 10.1109/ISCIT.2007.4391975
Eduardo I. Silva, Daniel E. Quevedo, Graham C. Goodwin
When employing multibit data converters, the need arises to compensate for digital-to-analog converter (DAC) element mismatch. The most widespread compensation techniques are based on dynamic element matching (DEM) and, if properly designed, can achieve almost arbitrary shaping of the DAC mismatch noise. This paper gives a closed-form expression for the optimal DEM noise-shaping profile. It depends upon the spectrum of the analog signal to be quantized and may also include a frequency-weighting filter reflecting perceptual criteria.
Title: Optimal multibit Digital to Analog Conversion
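To make the DEM idea concrete, data-weighted averaging (DWA) is one of the simplest DEM schemes: it rotates which unit elements implement each code so every element is used equally often over time, first-order shaping static mismatch error. This is a generic textbook sketch, not the optimal profile derived in the paper:

```python
def dwa_select(codes, n_elements):
    """Data-Weighted Averaging: a simple DEM element-selection scheme.

    For each input code c (0..n_elements), select c unit elements
    starting from a rotating pointer, so usage is spread evenly over
    the elements and static mismatch is first-order noise shaped.
    """
    ptr = 0
    selections = []
    for c in codes:
        sel = [(ptr + k) % n_elements for k in range(c)]
        selections.append(sel)
        ptr = (ptr + c) % n_elements   # advance pointer past used elements
    return selections
```

Feeding the code sequence [3, 3, 2] into an 8-element DAC uses elements 0-2, then 3-5, then 6-7, so each element is exercised exactly once per full rotation.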
Pub Date: 2007-12-04 | DOI: 10.1109/ISCIT.2007.4392263
D. Farrokhi, R. Togneri, A. Zaknich
A pre- and post-processing technique is proposed to enhance speech signals corrupted by highly non-stationary noise. The purpose of this research has been to build on current speech enhancement algorithms to produce an improved algorithm for enhancing speech contaminated with non-stationary babble-type noise. The pre-processing involves two stages. In stage one, the variance of the noisy speech spectrum is reduced by utilizing the Discrete Prolate Spheroidal Sequence (DPSS) multitaper algorithm together with a Controlled Forward Moving Average (CFMA) technique, introduced here to smooth and reduce the variance of the estimated non-stationary noise spectrum. In the second stage, the noisy speech power spectrum is de-noised by applying Stein's Unbiased Risk Estimator (SURE) wavelet thresholding. The third stage uses a noise estimation algorithm with rapid adaptation to highly non-stationary noise environments: the noise estimate is updated in three frequency sub-bands by averaging the noisy speech power spectrum with a frequency-dependent smoothing factor, adjusted according to a signal presence probability. In the fourth stage, a spectral subtraction algorithm enhances the speech signal by subtracting the estimated noise from the original noisy speech. The proposed post-processing is then applied to the complete signal once the segmental speech enhancement has been performed: the enhanced signal is further improved by applying soft wavelet thresholding to the un-segmented enhanced speech in the final processing stage. The results show improvements, both quantitative and qualitative, over speech enhancement without the CFMA algorithm.
Title: Speech enhancement of non-stationary noise based on controlled forward moving average
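The spectral subtraction core of the fourth stage can be sketched as below. This is a generic magnitude-domain version with an assumed spectral-floor parameter `beta`; the paper's full pipeline (multitaper variance reduction, CFMA, SURE thresholding, adaptive sub-band noise tracking) is not reproduced here:

```python
import numpy as np

def spectral_subtract(noisy, noise_psd_est, frame=256, beta=0.01):
    """Frame-wise power spectral subtraction (basic form).

    Subtracts an estimated noise power spectrum from each frame of the
    noisy signal's spectrum, floors the result at beta times the noisy
    power to avoid negative power, and resynthesizes with the noisy
    phase.
    """
    out = np.zeros_like(noisy, dtype=float)
    for start in range(0, len(noisy) - frame + 1, frame):
        seg = noisy[start:start + frame]
        spec = np.fft.rfft(seg)
        power = np.abs(spec) ** 2
        clean_power = np.maximum(power - noise_psd_est, beta * power)  # spectral floor
        spec_hat = np.sqrt(clean_power) * np.exp(1j * np.angle(spec))  # keep noisy phase
        out[start:start + frame] = np.fft.irfft(spec_hat, n=frame)
    return out
```

With a zero noise estimate the transform pair is an identity, which makes the routine easy to sanity-check before plugging in a real noise tracker.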
Pub Date: 2007-12-04 | DOI: 10.1109/ISCIT.2007.4392096
Lu Li-hua, Ma Xiao-lei, Li Yuan-An
The NEMO basic support protocol enables mobile networks to preserve communication with other nodes while changing their point of attachment to the Internet, but in nested mobile networks the pinball routing problem arises. In this paper, a route optimization solution for mobile networks based on a local mobility management framework is proposed to improve performance. By defining the operation of the local mobility anchor, the forward routing path, the reverse routing path and the routing inside the local mobility domain are optimized, reducing the tunnel path and improving performance. Analytical and simulation results demonstrate that our solution is a viable route optimization scheme for nested NEMO.
Title: Route optimization solution for nested mobile network in local mobility domain with multiple local mobility anchors
Pub Date: 2007-12-04 | DOI: 10.1109/ISCIT.2007.4392053
Wei Peng, Shaodan Ma, T. Ng, Jiang Wang
The QR-decomposition-based M algorithm (QRD-M) is a sub-optimal detection algorithm that can achieve a tradeoff between bit error rate (BER) performance and computational complexity for multiple-input multiple-output (MIMO) systems. In this paper, an adaptive QRD-M algorithm with a variable number of surviving paths is proposed for MIMO systems. The number of surviving paths at each detection stage is adaptively determined according to the instantaneous value and the statistics of the channel conditions. The required statistics of the channel conditions are derived directly and given in closed form, without a large number of training observations. The proposed algorithm is simple to implement, and computer simulations verify that it has lower computational complexity than fixed QRD-M algorithms and can thus offer a better tradeoff between BER performance and computational complexity.
Title: Adaptive QRD-M detection with variable number of surviving paths for MIMO systems
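A minimal fixed-M QRD-M detector can be sketched as follows; the paper's contribution is precisely to make M adaptive per stage from channel statistics, whereas here M is a constant and all names are illustrative:

```python
import numpy as np

def qrd_m_detect(y, H, constellation, M=4):
    """QRD-M detection sketch with a fixed number M of surviving paths.

    For y = Hx + n, compute H = QR and z = Q^H y, then search the
    symbol tree from the last layer upward, keeping at each stage only
    the M partial candidates with the smallest accumulated Euclidean
    metric.
    """
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n = H.shape[1]
    paths = [([], 0.0)]                    # (symbols decided so far, metric)
    for i in range(n - 1, -1, -1):         # detect x[n-1] down to x[0]
        cand = []
        for syms, m in paths:
            # interference from already-decided symbols x[i+1:]
            interf = sum(R[i, i + 1 + k] * s for k, s in enumerate(syms))
            for s in constellation:
                e = abs(z[i] - interf - R[i, i] * s) ** 2
                cand.append(([s] + syms, m + e))
        cand.sort(key=lambda p: p[1])
        paths = cand[:M]                   # keep the M best surviving paths
    return np.array(paths[0][0])           # best full-length candidate
```

In the noiseless case the true symbol vector accumulates a zero metric, so even a small M recovers it exactly.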
Pub Date: 2007-12-04 | DOI: 10.1109/ISCIT.2007.4392094
P. Sanguansat, W. Asdornwised, S. Marukatat, S. Jitapunkul
In this paper, we propose a novel technique for face recognition, Two-Dimensional Random Subspace Analysis (2DRSA), based on the Two-Dimensional Principal Component Analysis (2DPCA) technique and the Random Subspace Method (RSM). In conventional 2DPCA, the image covariance matrix is calculated directly from the 2D images in matrix form, following the definition of the covariance of a random variable. However, 2DPCA reduces the dimension of the original image matrix in only one direction, normally the row direction, so it needs many more coefficients for image representation than PCA. Many methods have been proposed to solve this problem by considering both the row and column directions. We develop another technique that reduces the dimension of the 2DPCA feature matrix in the column direction by randomly selecting rows of the feature matrix, and an ensemble classification method is used to classify these feature subspaces. Experimental results on the Yale, ORL and AR face databases show the improvement of the proposed technique over conventional 2DPCA.
Title: Two-Dimensional Random Subspace Analysis for face recognition
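The 2DPCA baseline the abstract builds on can be sketched directly from its definition; this is standard 2DPCA, not the proposed 2DRSA, and the function name is illustrative:

```python
import numpy as np

def two_dpca(images, d):
    """2DPCA feature extraction sketch.

    The image covariance matrix G is computed directly from the image
    matrices A_j without vectorization:
        G = mean_j (A_j - Abar)^T (A_j - Abar).
    Each image is projected onto the d leading eigenvectors of G, which
    compresses only the row direction - the limitation that motivates
    methods working in both directions, including the paper's 2DRSA.
    """
    A = np.asarray(images, dtype=float)    # shape (n_images, h, w)
    Abar = A.mean(axis=0)
    G = np.zeros((A.shape[2], A.shape[2]))
    for Aj in A:
        D = Aj - Abar
        G += D.T @ D
    G /= len(A)
    vals, vecs = np.linalg.eigh(G)         # eigenvalues in ascending order
    X = vecs[:, -d:]                       # d leading eigenvectors
    return np.array([Aj @ X for Aj in A])  # feature matrices, shape (h, d)
```

Note that an h-by-w image yields an h-by-d feature matrix: the row count h is untouched, which is why 2DPCA needs more coefficients than vector PCA.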
Pub Date: 2007-12-04 | DOI: 10.1109/ISCIT.2007.4392077
Xingyu Zhang, Xiaohong Peng
As one of the most challenging aspects of streaming video over lossy networks, the technology for controlling packet losses has attracted more and more attention, and erasure coding is one of the ideal choices for dealing with this problem. In most cases, researchers need an effective method or tool to validate the erasure codes used against different packet loss patterns. Although some previous work has employed erasure codes in video streaming systems, few actual builds and experiments involving the implementation of erasure codes against real packet loss in streaming systems have been reported. In this paper, we focus on constructing a testbed that integrates loss pattern generation and erasure coding into video streaming services over lossy networks. With this approach, we are able to assess the capability of erasure coding for packet loss control and compare the performance of video streaming systems with and without erasure coding. As an example, we have implemented the Reed-Solomon (7, 5) code for protecting MPEG streaming data under random packet losses. Experimental results show that the replay quality can be improved significantly by using erasure coding in video streaming systems, and that the testbed can suggest appropriate erasure code parameters for different loss environments.
Title: A testbed of erasure coding on video streaming system over lossy networks
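The encode / drop / recover flow such a testbed exercises can be shown with the simplest possible erasure code. The paper uses Reed-Solomon (7, 5), which tolerates two losses per group; this single-XOR-parity stand-in tolerates one, but the packet-level flow is the same:

```python
def xor_parity_encode(packets):
    """Append one XOR parity packet to a group of equal-length packets.

    The XOR of all coded packets (data + parity) is zero, so any single
    missing packet equals the XOR of the ones that arrived.
    """
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def xor_parity_recover(received):
    """Rebuild the single missing packet by XOR-ing all received ones."""
    rec = bytes(len(received[0]))
    for p in received:
        rec = bytes(a ^ b for a, b in zip(rec, p))
    return rec
```

A loss-pattern generator in the testbed would simply delete packets from the coded group before calling the recovery routine.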
Pub Date: 2007-12-04 | DOI: 10.1109/ISCIT.2007.4392188
Xue Wang, Sheng Wang, Daowei Bi
Dynamic sensor node selection seeks an optimal tradeoff between energy consumption and effective coverage rate, enhancing energy efficiency, enlarging the effective coverage and prolonging the lifetime of wireless sensor networks (WSNs). This paper proposes a dynamic sensor node selection strategy, called HN-GA, which uses a genetic algorithm (GA) for global search and adopts a Hopfield network (HN) to reduce the GA search space and ensure the validity of each chromosome. To evaluate the selection results, a combined metric based on several practically feasible measures of energy consumption and effective coverage rate is introduced. The simulation results verify that the proposed HN-GA algorithm performs well in dynamic sensor node selection; with HN-GA based selection, the lifetime and effective coverage of a WSN can be significantly improved. Compared to GA and HN alone, HN-GA performs better in regional convergence and global search, and achieves dynamic sensor node selection optimization efficiently, robustly and rapidly.
Title: Dynamic sensor nodes selection strategy for wireless sensor networks
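The GA half of the hybrid can be sketched as a plain genetic search over node subsets with a coverage-minus-energy fitness; the Hopfield-network chromosome repair, the paper's combined metric and all parameter values below are not from the paper and are purely illustrative:

```python
import random

def ga_select_nodes(coverage, energy, n_active, pop=30, gens=60, seed=1):
    """Plain GA sketch for the sensor-selection tradeoff.

    coverage[i] is the set of regions node i covers; energy[i] its cost.
    Each chromosome selects n_active distinct nodes; fitness rewards
    distinct regions covered and penalizes summed energy cost.
    """
    rng = random.Random(seed)
    n = len(coverage)

    def fitness(sel):
        covered = set().union(*(coverage[i] for i in sel))
        return len(covered) - 0.1 * sum(energy[i] for i in sel)

    popn = [rng.sample(range(n), n_active) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop // 2]              # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = list(set(a) | set(b))     # crossover: merge parents...
            rng.shuffle(child)
            child = child[:n_active]          # ...and trim back to validity
            if rng.random() < 0.2:            # mutation: swap one node
                out, inn = rng.choice(child), rng.randrange(n)
                if inn not in child:
                    child[child.index(out)] = inn
            children.append(child)
        popn = elite + children
    return max(popn, key=fitness)
```

The trim-to-validity step is exactly what HN-GA replaces with a Hopfield network, which both repairs invalid chromosomes and shrinks the space the GA has to search.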