Multistage Recognition Approach for Handwritten Devanagari Script Recognition
P. Rahul, A. Gaikwad
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409156
This paper focuses on handwritten Devanagari script recognition. A scanned word image is taken as the input; it is preprocessed and segmented, and features are extracted from the segments. The resulting feature vector is applied to an artificial neural network, which is trained on different sets of numerals and alphabets. The output of a Self-Organizing Map is then applied to Learning Vector Quantization, and the recognition accuracy is calculated.
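As a rough illustration of the SOM-to-LVQ stage mentioned above, the sketch below refines a set of SOM codebook (prototype) vectors with a plain LVQ1 pass and classifies a feature vector by its nearest prototype. The data, feature dimension, and learning-rate schedule are hypothetical placeholders, not the authors' setup.

```python
# Minimal sketch (assumptions, not the paper's code): LVQ1 refinement of SOM
# prototypes followed by nearest-prototype classification.
import numpy as np

def lvq1_refine(prototypes, proto_labels, X, y, lr=0.05, epochs=10):
    """Nudge the winning prototype toward (same label) or away from
    (different label) each training feature vector."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            w = np.argmin(np.linalg.norm(P - x, axis=1))   # winning prototype
            sign = 1.0 if proto_labels[w] == label else -1.0
            P[w] += sign * lr * (x - P[w])
        lr *= 0.9                                           # simple decay
    return P

def classify(prototypes, proto_labels, x):
    return proto_labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]

# Toy usage with random vectors standing in for the extracted features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16)); y = rng.integers(0, 10, size=200)
protos, proto_labels = rng.normal(size=(10, 16)), np.arange(10)
protos = lvq1_refine(protos, proto_labels, X, y)
print(classify(protos, proto_labels, X[0]))
```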
{"title":"Multistage Recognition Approach for Handwritten Devanagari Script Recognition","authors":"P. Rahul, A. Gaikwad","doi":"10.1109/WICT.2012.6409156","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409156","url":null,"abstract":"This paper is focused on Devanagari Handwritten Script Recognition. The scanned word image is taken as an input image. An Input image is preprocessed and segmented. The features are extracted. Feature vector is applied to an artificial Neural Network. The Network is trained for the different set of numerals and alphabets. Output of Self Organizing Map applied to Learning Vector Quantization and the accuracy is calculated.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115553776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulated annealing based algorithm for multiobjective fuzzy time-cost optimization problem
M. O. Suliman, V. S. Kumar
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409149
The time-cost trade-off optimization problem is important in both construction management and combinatorial optimization. In this paper, we propose a Pareto approach to the multi-objective time-cost optimization problem in a fuzzy environment. Activity durations and costs are represented by trapezoidal fuzzy numbers to capture the uncertainty associated with them. The objectives are to minimize the overall project completion time and the total project cost. An effective simulated annealing algorithm based on the proposed approach is presented to solve the multi-objective fuzzy time-cost optimization problem. An external memory of non-dominated solutions is maintained to save and update the non-dominated solutions found during the search. Numerical examples are used to evaluate the performance of the proposed algorithm.
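The external memory of non-dominated solutions can be pictured with the small sketch below: a Pareto-dominance test and an archive-update rule of the kind a multi-objective annealer would call after each accepted move. The (time, cost) tuples are illustrative only; the paper works with fuzzy durations and costs, which are not modelled here.

```python
# Minimal sketch (assumptions, not the paper's code): Pareto dominance and
# external-archive update for minimized (time, cost) objectives.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate unless dominated; drop archive members it dominates."""
    if any(dominates(s, candidate) for s in archive):
        return archive
    return [s for s in archive if not dominates(candidate, s)] + [candidate]

# Toy usage: objectives are (project duration, total cost), both minimized.
archive = []
for sol in [(120, 9500), (110, 9800), (125, 9000), (110, 9400)]:
    archive = update_archive(archive, sol)
print(archive)   # only mutually non-dominated solutions remain
```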
{"title":"Simulated annealing based algorith for multiobjective fuzzy time-cost optimization problem","authors":"M. O. Suliman, V. S. Kumar","doi":"10.1109/WICT.2012.6409149","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409149","url":null,"abstract":"Time-cost trade off optimization problem is very important in both fields of construction management and combinatorial optimization. In this paper, we propose a Pareto approach to solve multi-objective time-cost optimization problem in a fuzzy environment. Activity duration and cost are represented by trapezoidal fuzzy numbers to assess the uncertainty associate with them. The objective considered is to minimize the project overall completion time and total project cost. An effective simulated annealing algorithm based on proposed approach is presented to solve multi-objective fuzzy time-cost optimization problem. An external memory of non-dominated solutions is considered to save and update the non-dominated solutions during the problem solving process. Numerical examples are used to evaluate and study the performance of the proposed algorithm.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115648658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Goal based methodology for Web specific Requirements Engineering
S. Chawla, S. Srivastava
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409070
The Web application domain requires a specialized focus in requirements engineering because of unique features such as dynamic content, varied presentation styles, a large and heterogeneous audience, and navigational access to information. We propose the WebURN notation, an extension to the User Requirements Notation, a goal- and scenario-based notation for early requirements analysis. An extended meta-model for the Goal-oriented Requirement Language and Use Case Maps has been created for the Web-specific context. The paper also describes construction algorithms for creating requirements analysis models from textual information entered by the requirements engineer. The goal-based approach works closely with Web-specific functional and non-functional requirements, delivers models with fewer conflicts and a better choice among alternatives, and handles crosscutting concerns corresponding to personalization and non-functional requirements.
{"title":"A Goal based methodology for Web specific Requirements Engineering","authors":"S. Chawla, S. Srivastava","doi":"10.1109/WICT.2012.6409070","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409070","url":null,"abstract":"Web application domain requires specialized focus in Requirements Engineering because of their unique features like dynamic nature, presentation styles, large and heterogeneous audience, and navigational access to information. We propose WebURN notation, an extension to User Requirements Notation that is a goal and scenario based notation for early requirements analysis. An extended meta model for Goal Requirements Language and Use case Maps has been created for Web specific context. Also, the paper describes construction algorithms for creating the Requirements Analysis models from textual information entered by the Requirements Engineer. The Goal based approach, works closely with the Web specific functional and non-functional Requirements and delivers models with lesser conflicts, better choice amongst alternatives and handles crosscutting concerns corresponding to personalization and non-functional requirements.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114340745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Character recognition of Kannada text in low resolution display board images using zone wise statistical features
S. Angadi, M. Kodabagi, M. Jerabandi
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409051
Automated systems for understanding text in low-resolution natural scene images of display boards facilitate several applications such as blind assistants, traffic guidance systems, tour guide systems, and location-aware systems. Character-level text recognition is one of the important processing steps in the development of such systems. In this work, a novel method for recognizing basic Kannada characters using zone-wise statistical features is proposed. The method works in two phases: in the first phase, zone-wise statistical features are obtained from training samples and a knowledge base is constructed; during testing, the test image is processed to obtain zone-wise statistical features and the character is recognized using a nearest neighbour classifier. The method has been evaluated on 1043 samples and achieves an average recognition accuracy of 83.49%. The method is robust and insensitive to noise, blur, variations in font size and style, uneven thickness, and varying lighting conditions.
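As an illustration of the two-phase method, the sketch below computes zone-wise pixel-density features from a binarized character image and matches them against a knowledge base with a nearest-neighbour classifier. The grid size, image size, and labels are assumptions for illustration; the paper's exact statistical features may differ.

```python
# Minimal sketch (assumptions, not the authors' code): zone-wise pixel-density
# features plus nearest-neighbour matching against a training knowledge base.
import numpy as np

def zone_features(binary_img, grid=(4, 4)):
    """Split the image into grid zones and return each zone's foreground-pixel
    density as the feature vector."""
    h, w = binary_img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            zone = binary_img[i * h // gh:(i + 1) * h // gh,
                              j * w // gw:(j + 1) * w // gw]
            feats.append(zone.mean() if zone.size else 0.0)
    return np.array(feats)

def nearest_neighbour(knowledge_base, labels, feat):
    dists = np.linalg.norm(knowledge_base - feat, axis=1)
    return labels[int(np.argmin(dists))]

# Toy usage with random binary "character" images standing in for real samples.
rng = np.random.default_rng(1)
train_imgs = rng.integers(0, 2, size=(5, 32, 32)).astype(float)
kb = np.stack([zone_features(img) for img in train_imgs])
labels = ["ka", "kha", "ga", "gha", "nga"]
test_img = rng.integers(0, 2, size=(32, 32)).astype(float)
print(nearest_neighbour(kb, labels, zone_features(test_img)))
```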
{"title":"Character recognition of Kannada text in low resolution display board images using zone wise statistical features","authors":"S. Angadi, M. Kodabagi, M. Jerabandi","doi":"10.1109/WICT.2012.6409051","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409051","url":null,"abstract":"Automated systems for understanding text in low resolution natural scene images of display boards are facilitating several applications such as blind assistants, traffic guidance systems, tour guide systems, location aware systems and many more. The text recognition at character level is one the important processing steps for development of such systems. In this work, a novel method for recognition of Kannada basic characters using zone wise statistical features is proposed. The method works in two phases; In the first phase, the zone wise statistical features are obtained from training samples and knowledge base is constructed. During testing, the test image is processed to obtain zone wise statistical features and character is recognized using nearest neighbor classifier. The method has been evaluated for 1043 samples and achieves an average recognition accuracy of 83.49%. The method is robust and insensitive to noise, blur, variations in font size and style, uneven thickness and varying lightning conditions.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114458878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint power and subcarrier allocation for layered multicast streaming with proportional rate ratio constraint in OFDMA wireless networks
Ying Wang, Xianfeng Liu, B. Lin, Fuwen Pang
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409245
This paper addresses the optimal multicast resource allocation problem in an orthogonal frequency division multiple access (OFDMA) system, where fine granularity scalability (FGS) video coding is exploited for source coding. With FGS, the multicast information is coded into base-layer and enhancement-layer data, and a predefined proportional rate ratio between the transmission rates of the base layer and the enhancement layer must be maintained so that an approximately synchronized playback progress can be kept at the receiver. The multicast resource allocation problem is formulated as a mixed binary integer programming problem whose objective is to maximize the total network throughput while maintaining the required proportional rate ratio under a total transmit power constraint at the base station. The problem is solved using the Lagrangian dual decomposition method, yielding an optimal joint subcarrier and power allocation algorithm whose computational complexity is linear in the number of subcarriers. Numerical simulations evaluate the proposed algorithm in terms of convergence and throughput; the results show that it converges quickly and significantly outperforms the available suboptimal schemes.
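For concreteness, here is a hedged sketch of the kind of mixed binary integer programme described above; the notation is assumed for illustration and is not taken from the paper. Let $x_{k,n} \in \{0,1\}$ indicate that subcarrier $n$ carries layer $k \in \{B, E\}$ (base or enhancement), let $r_{k,n}(p_{k,n})$ be the achievable rate on that subcarrier at transmit power $p_{k,n}$, let $P_{\max}$ be the base-station power budget, and let $\gamma$ be the required proportional rate ratio between the two layers.

```latex
\begin{aligned}
\max_{x,\,p}\quad & \sum_{k\in\{B,E\}}\sum_{n} x_{k,n}\, r_{k,n}(p_{k,n})\\
\text{s.t.}\quad  & \sum_{k}\sum_{n} x_{k,n}\, p_{k,n} \le P_{\max},\qquad
                    \sum_{k} x_{k,n} \le 1 \;\;\forall n,\\
                  & \sum_{n} x_{B,n}\, r_{B,n}(p_{B,n})
                    = \gamma \sum_{n} x_{E,n}\, r_{E,n}(p_{E,n}),\\
                  & x_{k,n}\in\{0,1\},\qquad p_{k,n}\ge 0 .
\end{aligned}
```

In a formulation of this shape, relaxing the coupling constraints (total power and rate ratio) via Lagrangian duality lets the assignment decouple per subcarrier, which is consistent with the linear-in-subcarriers complexity quoted above.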
{"title":"Joint power and subcarrier allocation for layered multicast streaming with proportional rate ratio constraint in OFDMA wireless networks","authors":"Ying Wang, Xianfeng Liu, B. Lin, Fuwen Pang","doi":"10.1109/WICT.2012.6409245","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409245","url":null,"abstract":"This paper addresses the optimal multicast resource allocation problem in orthogonal frequency division multiple access (OFDMA) system, where the fine granularity scalability (FGS) video coding is exploited to perform source coding. With FGS, the multicast information is coded into base layer data and enhancement layer data, and there is a predefined proportional rate ratio between the transmission rate of the base layer and the enhancement layer so that an approximate playing progress can be maintained at the receiver. The multicast resource allocation problem is formulated as a mixed binary integer programming problem with the objective of maximizing the network total throughput while maintaining the required proportional rate ratio under a total transmit power constraint for the base station. The problem is solved by using Lagrangian dual decomposition method, resulting in an optimal joint subcarrier and power allocation algorithm with computational complexity being linear with the number of subcarriers. Numerical simulations are conducted to evaluate the performance of the proposed algorithm in terms of convergence and throughput. Simulation results show that the proposed optimal algorithm converges very fast and significantly outperforms the available suboptimal schemes.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117096464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software effort prediction using unsupervised learning (clustering) and functional link artificial neural networks
Tirimula Rao Benala, R. Mall, Satchidanada Dehuri, Koradda Chinna Babu
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409060
Software cost estimation continues to be an area of concern for the management of the software development industry. We use unsupervised learning (clustering) combined with functional link artificial neural networks (FLANNs) for software effort prediction. The clustering stage divides the input space into the required number of partitions on its own, eliminating the need for ad-hoc selection of the number of clusters. The FLANN, in turn, is a powerful computational model; Chebyshev polynomials are used in the FLANN as the choice of functional expansion, and their performance is studied exhaustively. Three real-life datasets related to software cost estimation are used for the empirical evaluation of the proposed method. The experimental results show that our method significantly improves the prediction accuracy of the conventional FLANN and has the potential to become an effective method for software cost estimation.
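To make the functional-expansion step concrete, the sketch below applies a Chebyshev expansion to an input feature vector, which is the non-linear mapping a FLANN feeds to its single trainable linear layer. The expansion order and the toy input are assumptions, not taken from the paper.

```python
# Minimal sketch (an assumption, not the paper's implementation): Chebyshev
# functional expansion of an input feature vector for a FLANN.
import numpy as np

def chebyshev_expand(x, order=4):
    """Return [T1(x_i), ..., T_order(x_i)] for every feature x_i, using the
    recurrence T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x); features are assumed to be
    scaled into [-1, 1] beforehand."""
    terms = [np.ones_like(x), x]              # T0, T1
    for _ in range(2, order + 1):
        terms.append(2 * x * terms[-1] - terms[-2])
    return np.concatenate(terms[1:])          # drop the repeated constant T0

# Toy usage: a 3-feature effort-driver vector expanded to order 4.
x = np.array([0.2, -0.5, 0.9])
print(chebyshev_expand(x))                    # 12 expanded features
```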
{"title":"Software effort prediction using unsupervised learning (clustering) and functional link artificial neural networks","authors":"Tirimula Rao Benala, R. Mall, Satchidanada Dehuri, Koradda Chinna Babu","doi":"10.1109/WICT.2012.6409060","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409060","url":null,"abstract":"Software cost estimation continues to be an area of concern for managing of software development industry. We use unsupervised learning (e.g., clustering algorithms) combined with functional link artificial neural networks for software effort prediction. The unsupervised learning (clustering) indigenously divide the input space into the required number of partitions thus eliminating the need of ad-hoc selection of number of clusters. Functional link artificial neural networks (FLANNs), on the other hand is a powerful computational model. Chebyshev polynomial has been used in the FLANN as a choice for functional expansion to exhaustively study the performance. Three real life datasets related to software cost estimation have been considered for empirical evaluation of this proposed method. The experimental results show that our method could significantly improve prediction accuracy of conventional FLANN and has the potential to become an effective method for software cost estimation.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115878781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of high performance and low leakage half subtractor circuit using AVL technique
S. Akashe, G. Sharma, V. Rajak, R. Pandey
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409045
In this paper, we propose a leakage reduction technique, since high leakage currents in deep submicron regimes are becoming a major contributor to the total power dissipation of CMOS circuits. Subthreshold leakage current plays a major role in power dissipation, so to reduce it we propose an adaptive voltage level (AVL) technique, which optimizes the overall voltage across the half-subtractor circuit in standby mode. The AVL technique employs two schemes: AVLS (adaptive voltage level at supply), in which the supply voltage is reduced, and AVLG (adaptive voltage level at ground), in which the ground potential is raised. By applying this technique we reduce the leakage current from 9.274 × 10^-12 A to 5.428 × 10^-12 A, i.e., the technique reduces the leakage current by 41.4%. The circuit is simulated in Cadence Virtuoso using 45 nm CMOS technology. Simulation results reveal a significant reduction in leakage current for the proposed cell when the AVL circuit reduces the supply voltage.
{"title":"Implementation of high performance and low leakage half subtractor circuit using AVL technique","authors":"S. Akashe, G. Sharma, V. Rajak, R. Pandey","doi":"10.1109/WICT.2012.6409045","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409045","url":null,"abstract":"In this paper, we propose a leakage reduction technique as high leakage currents in deep submicron regimes are becoming a major contributor to total power dissipation of CMOS circuits. Sub threshold leakage current plays a very important role in power dissipation so to reduce the sub threshold leakage current we proposed an adaptive voltage level (AVL) technique. Which optimize the overall voltage across the half subtractor circuit in standby mode. In this AVL technique, two schemes are employed, one is AVLS (adaptive voltage level at supply) in which the supply voltage is reduced and the other is AVLG (adaptive voltage level at ground) in which the ground potential is increased. By applying this technique we have reduced the leakage current from 9.274*10-12ampere) to 5.428*10-12amp. That means this technique the leakage current 41.4%. The circuit is simulated on Cadence(R) Virtuoso(R) in 45nano meter CMOS technology. Simulation results reveal that there is a significant reduction in leakage current for this proposed cell with the AVL circuit reducing the supply voltage.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124898268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IPv4 and IPv6 performance comparison for simulated DNS and VoIP traffic in Windows 2007 and Windows 2008 client server environment
S. Kharche, A. Mahajan
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409112
In this paper, a comparative analysis of the two versions of the Internet Protocol, IPv4 and IPv6, is presented using throughput as the network performance metric. Throughput is measured for application-layer Domain Name System (DNS) traffic over the transport-layer protocols, and the measurements are repeated for voice traffic with different voice samples and for computer-gaming traffic characteristics. The experiments are performed on two machines connected by a crossover cable in a Windows 2007 and Windows 2008 client-server environment. The measurements use the simulated DNS and software-coded voice samples available in the latest version of the D-ITG 2.8.0rcl tool. The results show that network performance varies not only with the Internet Protocol (IP) version but also with the transport-layer protocol and the voice sample. The maximum throughput difference, observed for the G711.2 voice sample, is 86.09 percent.
{"title":"IPv4 and IPv6 performance comparison for simulated DNS and VoIP traffic in Windows 2007 and Windows 2008 client server environment","authors":"S. Kharche, A. Mahajan","doi":"10.1109/WICT.2012.6409112","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409112","url":null,"abstract":"In this paper the comparative analysis of the two versions of Internet Protocol viz. IPv4 and IPv6 is presented based on throughput as the network performance metric. The throughput is measured for the application layer Domain Name System (DNS) traffic over the transport layer protocols. The throughput measurements are repeated for Voice traffic with different voice samples and computer gaming traffic characteristics. The experiments are performed on two machines connected via crossover cable in Windows 2007 and Windows 2008 client server environment. The measurements are facilitated by the simulated DNS and software coded voice samples present in the latest version of D-ITG 2.8.0rcl tool. The results show that network performance varies not only with Internet protocol (IP) version but also with the transport layer protocols and the voice samples. The maximum throughput difference observed for G711.2 Voice sample is 86.09 percent.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123297272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fusion of 4-slap fingerprint images with their qualities for human recognition
N. Singh, K. Tiwari, A. Nigam, P. Gupta
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409207
This paper presents an efficient multimodal biometric system based on 4-slap fingerprint images. The system uses a 4-slap fingerprint scanner to simultaneously capture the fingerprints of multiple fingers of a hand in one image. The acquired multi-finger images are first segmented to obtain the individual fingers; the quality of each finger is estimated and its minutiae points are extracted. The minutiae points of each finger extracted from the gallery 4-slap fingerprint image are compared with the corresponding finger of the query 4-slap fingerprint image to obtain a matching score for that finger. The matching score between two 4-slap fingerprint images is obtained by fusing the per-finger matching scores together with their respective image qualities and relative accuracies, and the matching decision is taken on the fused score. The system has been tested on two 4-slap fingerprint databases, IITK-student and IITK-rural, containing 1007 and 991 subjects respectively; both databases were acquired in two sessions. The correct recognition rate is 91.00% for the IITK-rural database and 99.64% for the IITK-student database, with respective EER values of 5.64% and 0.94%.
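As a rough sketch of the fusion rule described above (the exact weighting in the paper may differ), the snippet below combines per-finger matching scores using each finger's estimated image quality and an assumed relative accuracy as weights.

```python
# Minimal sketch (an assumption, not the authors' exact rule): quality- and
# accuracy-weighted fusion of per-finger matching scores.
def fuse_scores(scores, qualities, accuracies):
    """Weighted-sum fusion; weights combine image quality and the finger's
    relative recognition accuracy, normalised to sum to 1."""
    weights = [q * a for q, a in zip(qualities, accuracies)]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * s for w, s in zip(weights, scores)) / total

# Toy usage: four fingers (index..little) from one hand.
scores     = [0.82, 0.74, 0.91, 0.60]   # per-finger match scores in [0, 1]
qualities  = [0.90, 0.70, 0.95, 0.40]   # estimated image quality per finger
accuracies = [0.99, 0.97, 0.98, 0.90]   # assumed relative accuracy per finger
fused = fuse_scores(scores, qualities, accuracies)
print(f"fused score = {fused:.3f}")     # compare against a decision threshold
```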
{"title":"Fusion of 4-slap fingerprint images with their qualities for human recognition","authors":"N. Singh, K. Tiwari, A. Nigam, P. Gupta","doi":"10.1109/WICT.2012.6409207","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409207","url":null,"abstract":"This paper presents an efficient multimodel bio-metric system based on 4 slap fingerprint images. The system utilizes 4 slap fingerprint scanner to simultaneously collect fingerprints of multiple fingers on a hand in one image. The acquired multi-finger images are first segmented to get individual fingers. Quality of each individual finger is estimated and its minutiae points are extracted. The minutiae points of each individual finger extracted from gallery 4 slap fingerprint image is compared with the corresponding individual finger of the query 4 slap fingerprint image to get matching score of that finger. Matching score between two 4 slap fingerprint images is obtained by fusing matching scores of various fingers along with their respective image quality and relative accuracies. Decision of matching has been taken based on the fused matching score. The system has been tested on two 4 slap fingerprint databases viz IITK-student and IITK-rural containing 1007 and 991 subjects respectively. Both databases are acquired in 2 sessions. The correct recognition rate obtained is 91.00% for IITK-rural database and 99.64% for IITK-student database. Respective EER values are 5.64% and 0.94%.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122592747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Achieving energy efficiency by self-adjusting sensing range and positioning in wireless sensor networks
B. Misra, A. K. Nayak, S. Rai
Pub Date: 2012-10-01 | DOI: 10.1109/WICT.2012.6409135
Wireless sensor nodes are mostly battery operated, tiny in size, and deployed over a specified area. The limited battery capacity hinders the performance of the node as well as of the network, so efficient management of battery power is required for optimal utilization of sensor network technology. In this work we propose a novel method to extend the lifetime of the sensor network by organizing the sensors into a maximal number of non-disjoint set covers with non-uniform sensing ranges. The sensors in one set cover remain active at any instant while the other sensors stay in sleep mode, consuming negligible battery power; each set cover is activated in turn, thereby increasing the total lifetime of the network. Each sensor can also move towards the centre of the set of nodes it covers to reduce its sensing range and further diminish energy consumption. Extensive simulations show that the network lifetime can be increased by adjusting the sensing ranges, compared with sensors using fixed sensing ranges. The method employs a genetic algorithm (GA) and a greedy heuristic to determine the optimal sensing range for efficient energy management in the sensor network, and the simulation results show its efficiency over fixed-range strategies.
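The range-adjustment step can be illustrated with the small sketch below: a sensor in an active set cover moves towards the centre of the points it covers and shrinks its sensing range to just reach the farthest of them. Coordinates and covered points are illustrative assumptions; the GA and greedy selection of set covers are not shown.

```python
# Minimal sketch (an assumption, not the paper's GA): move a sensor to the
# centroid of its covered points and shrink its sensing range accordingly.
import math

def adjust_sensor(sensor_xy, covered_points):
    """Return (new_position, new_range) for one sensor in a set cover."""
    cx = sum(x for x, _ in covered_points) / len(covered_points)
    cy = sum(y for _, y in covered_points) / len(covered_points)
    new_range = max(math.dist((cx, cy), p) for p in covered_points)
    return (cx, cy), new_range

# Toy usage: one sensor responsible for three covered points.
points = [(1.0, 1.0), (2.0, 3.0), (4.0, 1.5)]
pos, r = adjust_sensor((0.0, 0.0), points)
print(pos, round(r, 3))   # centroid and the reduced sensing range
```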
{"title":"Achieving energy efficiency by self-adjusting sensing range and positioning in wireless sensor networks","authors":"B. Misra, A. K. Nayak, S. Rai","doi":"10.1109/WICT.2012.6409135","DOIUrl":"https://doi.org/10.1109/WICT.2012.6409135","url":null,"abstract":"Wireless sensor nodes are mostly battery operated, tiny in size, and deployed in some specified area. The limited capacity of the battery power hinders the performance of the node as well as of the network. This requires efficient management of battery power for optimal utilization of the sensor network technology. In this work we propose a novel method to extend the life time of the sensor network by organizing the sensors into a maximal number of non-disjoint set covers with non-uniform sensing ranges. Sensors present in one set cover remain active at any instant of time while other sensors are in sleep mode, so they consume negligible battery power. Each set cover is activated successively, thereby increasing the total lifetime of the network. Each sensor can move towards the center of the set of nodes it covers to further reduce the sensing range and further diminishes the consumption of energy. It is observed through extensive simulation work that the network lifetime can be increased by adjusting their sensing ranges as compared to sensors with fixed sensing ranges. This method employs genetic algorithm (GA) and greedy heuristic approach to determine optimal sensing range for efficient energy management in sensor network. Simulation results show its efficiency over the fixed-range strategies.","PeriodicalId":445333,"journal":{"name":"2012 World Congress on Information and Communication Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2012-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124235038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}