S. Selvi, M. Rath, N. Sinha, S. Singh, N. Hemrom, A. Bhattacharya, A. Biswal
Every organization is information driven, and it is the employees who drive and carry out its day-to-day activities. The P&A department trains and organizes people so that employees can perform these activities effectively. This requires viewing people as human assets, not costs to the organization. Looking at people as assets is part of human resource management and human capital management. To manage and automate HR processes and maximize organizational productivity, the organization has to implement an HRMS, a Human Resource Management System. An HRMS helps reduce costs, save time, and integrate and align HR efforts with the rest of the organization. Employees are empowered and engaged, with more input and control over their work life. Through an HRMS one can quickly build workflows and processes, and its flexibility keeps employees current and compliant even as rules and regulations change. For competent management of business processes, computerization is a must in today's scenario. RDCIS (Research and Development Centre for Iron & Steel) is a research unit of SAIL in the area of iron and steel. The organization hierarchy is a two-tier architecture: the top level is the Area and the bottom level is the Department, with each area comprising various departments. The P&A (Personnel & Administration) department carries out different activities for managing various human resource functions, including manpower planning, succession plans, redeployment/job rotation, career planning, compensation revision, employee profiles, manpower statistics, age/skill/qualification matrices, employee turnover, utilization of perks (LTC, company-leased housing, etc.), facilities (residential phone, housing loan, etc.), employee performance/appraisal analysis, training program details, and stagnation analysis.
Without a computerized system, it is very difficult to drive these HR functions, adjust personnel systems to meet current and future requirements, and manage change. The project comprises database design, application design, and development of software for the storage, retrieval, and maintenance of HR data through user-friendly interfaces. The developed software also has mechanisms to prevent data tampering. The software has been developed with a 3-tier approach using Oracle Designer, Oracle Database, and JSP, and has been deployed on an Apache Tomcat server running on the Windows operating system.
{"title":"HR e-Leave Tour Management System at RDCIS, SAIL","authors":"S. Selvi, M. Rath, N. Sinha, S. Singh, N. Hemrom, A. Bhattacharya, A. Biswal","doi":"10.1109/ICIT.2014.31","DOIUrl":"https://doi.org/10.1109/ICIT.2014.31","url":null,"abstract":"Every organization is information driven and it's the employee who drives and carries out day to day activities. The P&A department train the people, organizes them, so that employees can effectively perform these activities. This requires viewing people as human assets, not costs to the organization. Looking at people as assets is part of human resource management and human capital management. For managing and automating the HR Process to maximize the productivity of the organization, the organization has to implement HRMS, a Human Resource Management System. HRMS system will help in reducing costs, saving time, integrating and aligning HR efforts with the rest of the organization. Employees will be empowered and engage with more input and control over their work life. Through HRMS one can quickly build the workflows and processes. The powerful flexibility features keep employees current and compliant, even as rules and regulations change. For competent management of business process, computerization is must in today's scenario. RDCIS (Research and Development Centre for Iron & Steel), is a research unit of SAIL in the area of Iron Steel. The organization hierarchy is two tier architecture. Top level is Area and Bottom level is department. Each area has various departments. The P&A (Personnel & Administration) department carries out different activities for managing various Human Resource functions. 
The different functions carried out by P&A department are Manpower Planning, Succession plans, Redeployment/ Job rotation, Career Planning, Compensation Revision, Employee Profile, Manpower Statistics, Age/ Skill/ Qualification matrix, Employee Turnover, Utilization of perks (LTC, Company Leased Housing etc.), Facilities (Residential phone, Housing loan etc.), Employee Performance/ Appraisal analysis, Training program details, Stagnation Analysis etc. Without a computerize systems, it is very difficult to drive the HR functions, adjustment of personnel systems to meet current and future requirements, and the management of change. The project comprises of database design, application design and development of software for storage and retrieval for the maintenance of HR data through user friendly interfaces. The developed software also has mechanisms to avoid tampering of data. The software has been developed with 3-tier approach. The software tools used are Oracle Designer, Oracle Database and JSP. The software has been deployed with Tomcat Apache Server on Windows Operating System.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"46 1","pages":"333-338"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73920414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The improved multi-stage clustering (IMSC) based blind equalisation algorithm recently proposed in [1] gave a significant performance improvement over its state-of-the-art counterparts. In that work, performance was evaluated over a frequency-selective single-input single-output (SISO) additive white Gaussian noise (AWGN) channel. Relaying is used in cooperative communications to give the receiver a variety of independent signals to choose from, the choice depending on link quality; in other words, it provides a diversity gain at the receiver. In this paper, we propose a novel blind equalisation scheme which accepts inputs from relays and blindly fuses the incoming data so as to reach a lower mean square deviation (MSD) from the Wiener solution. The simulations presented in this paper validate our algorithm. We also derive an expression for the MSD from the Wiener solution of this algorithm as a function of step-size, as in [2], and find that it closely matches the experimentally obtained curves.
{"title":"Improved Multi-stage Clustering Based Blind Equalisation in Distributed Environments","authors":"R. Mitra, V. Bhatia","doi":"10.1109/ICIT.2014.32","DOIUrl":"https://doi.org/10.1109/ICIT.2014.32","url":null,"abstract":"The recently proposed improved multi-stage clustering (IMSC) based blind equalisation algorithm in [1] gave significant performance improvement as compared to its state of the art counterparts. In that work, the performance was considered over a frequency-selective single input single output (SISO) additive white Gaussian noise (AWGN) channel. The practice of relaying is used in cooperative communications so as to give a variety of the independent signals to the receiver to choose from, the choice being dependent on the quality of the link. In other words, this results in a diversity gain at the receiver. In this paper, we propose a novel blind equalisation scheme which accepts inputs from relays, and finds a smart way of blindly fusing the incoming data, so as to reach a lower mean square deviation (MSD) from the Weiner solution. The simulations presented in this paper validate our algorithm. We also derive an expression for MSD from the Weiner solution of this algorithm as a function of step-size as in [2]. We find that it closely matches the experimentally obtained curves.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"51 1","pages":"1-5"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78301293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
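The abstract above does not specify the IMSC update equations, but the family of blind equalisers it belongs to can be illustrated with the classic constant-modulus algorithm (CMA): a stochastic-gradient filter that needs no training sequence, only the known modulus of the transmitted constellation. The tap count, step size, and dispersion constant below are illustrative defaults, not values from the paper.

```python
import numpy as np

def cma_equalise(x, num_taps=11, mu=1e-3, R2=1.0):
    """Constant-modulus (CMA) blind equaliser sketch.

    x        : received complex baseband samples
    num_taps : equaliser filter length (illustrative)
    mu       : step size (illustrative)
    R2       : dispersion constant of the constellation
    """
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0  # centre-spike initialisation
    y = np.zeros(len(x) - num_taps + 1, dtype=complex)
    for n in range(len(y)):
        xn = x[n:n + num_taps][::-1]           # regressor, most recent first
        y[n] = np.dot(w.conj(), xn)            # equaliser output
        e = y[n] * (R2 - np.abs(y[n]) ** 2)    # CMA error term
        w += mu * e.conj() * xn                # stochastic-gradient update
    return w, y
```

The steady-state MSD of such stochastic-gradient equalisers grows with the step size mu, which is the trade-off the paper's step-size analysis quantifies.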
As a large number of requests are submitted to the data center, load balancing is one of the main challenges in a cloud data center. Existing load balancing techniques mainly focus on improving quality of service, providing the expected output on time, etc. There is therefore a need for a load balancing technique that can improve the performance of cloud computing along with optimal resource utilization. The proposed load balancing technique is based on Ant Colony Optimization: it detects overloaded and underloaded servers and performs load balancing operations between the identified servers of the data center. The proposed technique ensures availability, achieves efficient resource utilization, maximizes the number of requests handled by the cloud, and minimizes the time required to serve multiple requests. The complexity of the proposed algorithm depends on the data center network architecture.
{"title":"A Technique Based on Ant Colony Optimization for Load Balancing in Cloud Data Center","authors":"Ekta Gupta, Vidya Deshpande","doi":"10.1109/ICIT.2014.54","DOIUrl":"https://doi.org/10.1109/ICIT.2014.54","url":null,"abstract":"As a large number of requests are submitted to the data center, load balancing is one of the main challenges in Cloud Data Center. Existing load Balancing techniques mainly focus on improving the quality of services, providing the expected output on time etc. Therefore, there is a need to develop load balancing technique that can improve the performance of cloud computing along with optimal resource utilization. The proposed technique of load balancing is based on Ant Colony Optimization which detects overloaded and under loaded servers and thereby performs load balancing operations between identified servers of Data Center. The proposed technique ensures availability, achieves efficient resource utilization, maximizes number of requests handled by cloud and minimizes time required to serve multiple requests. The complexity of proposed algorithm depends on datacenter network architecture.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"107 1","pages":"12-17"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85912687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
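The core move of the technique above, detecting overloaded and underloaded servers and shifting load between them, can be sketched as follows. The pheromone bookkeeping of the ant colony is omitted; the utilisation thresholds and step size are illustrative, not from the paper.

```python
def balance(servers, low=0.2, high=0.8, step=0.1, max_moves=100):
    """Rebalancing sketch: repeatedly shift a unit of load from the most
    overloaded server to the least loaded one, as an ant traversing the
    data center might. `servers` maps server id -> utilisation in [0, 1].
    Thresholds `low`/`high` and `step` are hypothetical parameters."""
    for _ in range(max_moves):
        under = [s for s, u in servers.items() if u < low]
        over = [s for s, u in servers.items() if u > high]
        if not (under and over):
            break  # no pair left to balance between
        src = max(over, key=servers.get)   # most overloaded
        dst = min(under, key=servers.get)  # most underloaded
        servers[src] -= step
        servers[dst] += step
    return servers
```

In the full algorithm, ants would also deposit pheromone on profitable source/destination pairs so later ants converge on good migration paths.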
A dataspace system facilitates a new way of sharing and integrating information among various distributed, autonomous, and heterogeneous data sources. To provide the best-effort answer to a user query, a dataspace system needs to resolve semantic heterogeneity at its core, and many solutions have been proposed to address this problem. We are exploring the problem of semantic heterogeneity in dataspace systems as part of our PhD work. In this paper, we address semantic heterogeneity in the context of a dataspace system and present an abstract framework to model it. The proposed model is based on machine learning and ontology approaches: the machine learning technique analyzes semantically equivalent data items (or entities) in the dataspace, and the ontology conceptualizes its structural entities. This model resolves the semantic heterogeneity of a dataspace system and creates a conceptual model using a "from-data-to-schema" approach. The proposed approach implicitly creates the domain ontology by finding the most similar concepts coming from different data sources, and improves the performance of the system by finding the semantic relationships among them.
{"title":"Modeling Semantic Heterogeneity in Dataspace: A Machine Learning Approach","authors":"Mrityunjay Singh, S. Jain, V. Panchal","doi":"10.1109/ICIT.2014.24","DOIUrl":"https://doi.org/10.1109/ICIT.2014.24","url":null,"abstract":"A data space system facilitates a new way for sharing and integrating the information among the various distributed, autonomous and heterogeneous data sources. To provide the best effort answer of a user query, a data space system needs to resolve the semantic heterogeneity in its core. There are many solutions being proposed to address this problem widely. We are exploring the problem of semantic heterogeneity in a data space system as a part of our PhD work. In this paper, we have addressed the semantic heterogeneity in the context of a data space system, and presented an abstract framework to model the semantic heterogeneity in data space. The proposed model is based on machine learning and ontology approaches. The machine learning technique analyzes the semantically equivalent data items (or entities) in data space, and the ontology conceptualizes the structural entities in a data space. This model resolves the semantic heterogeneity of a data space system, and creates a conceptual model using \"from-data-to-schema\" approach. 
The proposed approach implicitly creates the domain ontology by finding the most similar concepts comming from different data sources and enriches the performance of the system by finding the semantic relationships among them.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"48 1","pages":"275-280"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79088836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
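"Finding the most similar concepts coming from different data sources" can be illustrated with a simple bag-of-words cosine similarity between concept labels. This is a generic similarity sketch, not the paper's learning technique, and the threshold is hypothetical.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two labels as bag-of-words vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_concepts(src_a, src_b, threshold=0.5):
    """Pair each concept label from source A with its most similar label
    in source B when similarity clears a (hypothetical) threshold."""
    pairs = []
    for a in src_a:
        best = max(src_b, key=lambda b: cosine(a, b))
        if cosine(a, best) >= threshold:
            pairs.append((a, best))
    return pairs
```

Matched pairs of this kind are the raw material from which the domain ontology's equivalence relationships would be built.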
Mining human interaction in meetings is useful for identifying how a person reacts in different situations. Behavior represents the nature of the person, and mining helps to analyze how people express their opinions in meetings; for this, a study of semantic knowledge is important. Human interactions in meetings are categorized as propose, comment, acknowledgement, ask opinion, positive opinion, and negative opinion. The sequence of human interactions is represented as a tree, whose structure captures the interaction flow in the meeting. The interaction flow helps to estimate the probability of another type of interaction. Tree pattern mining and subtree pattern mining algorithms are automated to analyze the structure of the tree and extract interaction flow patterns, which are then interpreted from the human interactions. The frequent patterns are used as an indexing tool to access particular semantics, and these patterns are clustered to determine the behavior of the person.
{"title":"Smart Meeting System: An Approach to Recognize Patterns Using Tree Based Mining","authors":"Puja R. Kose, P. K. Bharne","doi":"10.1109/ICIT.2014.45","DOIUrl":"https://doi.org/10.1109/ICIT.2014.45","url":null,"abstract":"Mining Human Interaction in Meetings is useful to identify how a person reacts in different situations. Behavior represents the nature of the person and mining helps to analyze, how the person express their opinion in meeting. For this, study of semantic knowledge is important. Human interactions in meeting are categorized as propose, comment, acknowledgement, ask opinion, positive opinion and negative opinion. The sequence of human interactions is represented as a Tree. Tree structure is used to represent the Human Interaction flow in meeting. Interaction flow helps to assure the probability of another type of interaction. Tree pattern mining and sub tree pattern mining algorithms are automated to analyze the structure of the tree and to extract interaction flow patterns. The extracted patterns are interpreted from human interactions. The frequent patterns are used as an indexing tool to access a particular semantics, and that patterns are clustered to determine the behavior of the person.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"10 1","pages":"206-208"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82790806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
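The simplest interaction-flow patterns in such a tree are parent-to-child transitions between interaction types (e.g. a propose followed by a comment). A minimal sketch, assuming trees are given as nested `(interaction_type, children)` tuples, which is an illustrative encoding rather than the paper's representation:

```python
from collections import Counter

def flow_patterns(tree):
    """Yield (parent_type, child_type) edges from an interaction tree
    encoded as nested tuples: (interaction_type, [child_trees])."""
    kind, children = tree
    for child in children:
        yield (kind, child[0])
        yield from flow_patterns(child)

def frequent_flows(trees, min_support=2):
    """Count flow edges across all meeting trees and keep those that
    occur at least `min_support` times (support value is illustrative)."""
    counts = Counter(p for t in trees for p in flow_patterns(t))
    return {p: c for p, c in counts.items() if c >= min_support}
```

Full subtree pattern mining generalises this from single edges to larger embedded subtrees, but the counting-with-support structure is the same.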
The performance of particle swarm optimization (PSO) depends greatly upon the effective selection of a vital tuning metric known as the acceleration coefficients (especially when applied to the design space exploration (DSE) problem), which provides the ability to balance exploration and exploitation during search. The major contributions of the paper are as follows: a) a novel analysis of two variants of the acceleration coefficient (hierarchical time-varying vs. constant) in PSO and their impact on convergence time and exploration time in the context of multi-objective (MO) DSE in HLS. The analysis assists the designer in pre-tuning the acceleration coefficient to an optimal value, for better convergence and exploration time, before DSE is initiated; b) a novel performance comparison of PSO-driven DSE (PSO-DSE) with previous works based on quality metrics for MO evolutionary algorithms such as generational distance, maximum Pareto-optimal front error, spacing, spreading, and weighted metric. When the two variants of acceleration coefficients (constant and time-varying) were compared, the results revealed that PSO-DSE has on average 9.5% better exploration speed with the constant acceleration coefficient than with the hierarchical time-varying acceleration coefficient. Further, with the constant acceleration coefficient, PSO-DSE produces results with better generational distance, maximum Pareto-optimal front error, spacing, spreading, and weighted metric than previous approaches.
{"title":"Time Varying vs. Fixed Acceleration Coefficient PSO Driven Exploration during High Level Synthesis: Performance and Quality Assessment","authors":"A. Sengupta, V. Mishra","doi":"10.1109/ICIT.2014.16","DOIUrl":"https://doi.org/10.1109/ICIT.2014.16","url":null,"abstract":"The performance of particle swarm optimization (PSO) greatly depends upon the effective selection of vital tuning metric known as acceleration coefficients (especially when applied to design space exploration (DSE) problem) which incorporates ability to clinically balance between exploration and exploitation during searching. The major contributions of the paper are as follows: a) A novel analysis of two variants of acceleration coefficient (hierarchical time varying acceleration coefficient vs. Constant acceleration coefficient) in PSO and their impact on convergence time and exploration time in context of multi objective (MO) DSE in HLS. The analysis assists the designer in pre-tuning the acceleration coefficient to an optimal value for achieving better convergence and exploration time before DSE initiation, b) A novel performance comparison of PSO driven DSE (PSO-DSE) with previous works based on quality metrics for MO evolutionary algorithms such as generational distance, maximum pareto-optimal front error, spacing, spreading and weighted metric. When two variants of acceleration coefficients (constant and time varying) were compared, it was revealed from the results that the PSO-DSE has on average 9.5% better exploration speed with constant acceleration coefficient as compared to hierarchical time varying acceleration coefficient. 
Further, with setting of constant acceleration coefficient, the PSO-DSE produces results with efficient generational distance, maximum pareto-optimal front error, spacing, spreading and weighted metric as compared to previous approaches.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"1 1","pages":"281-286"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79416477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
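The time-varying acceleration coefficient idea compared above can be sketched with the common TVAC schedule, in which the cognitive coefficient c1 shrinks and the social coefficient c2 grows over the run, shifting the swarm from exploration toward exploitation. The schedule, inertia weight, and bounds below follow the generic TVAC formulation; the paper's hierarchical variant and its DSE objective may differ.

```python
import random

def pso_tvac(f, dim=2, n=20, iters=100,
             c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5, seed=42):
    """Minimise f with PSO using time-varying acceleration coefficients.
    Setting c1i == c1f and c2i == c2f recovers the constant-coefficient
    variant the paper compares against."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        frac = t / iters
        c1 = c1i + (c1f - c1i) * frac  # cognitive: decays over time
        c2 = c2i + (c2f - c2i) * frac  # social: grows over time
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest
```

On a smooth test function this converges quickly; the paper's point is that on the MO DSE landscape the constant schedule explored faster on average.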
Reversible logic has emerged as a promising logic design style for low-power VLSI design, quantum computing, and dissipation-less computing. Based on this technology, a 6×6 multifunctional Dwivedi-Rao (DR) gate has been designed for implementing logical and arithmetical functions. The DR gate has been applied to the design of a 1-bit ALU that performs NOT, two NOR, COPIER, ADD, and COUT logical and arithmetical operations in one clock cycle. Comparison of various results shows that the proposed DR gate and its application in ALU design are better than their counterpart [16] in terms of quantum cost, logical operations, worst-case delay, garbage outputs, and total Boolean functions performed. The proposed gate has been simulated in VHDL and shows an improvement of 72, 40, and 24 logical operations with 4, 3, and 2 control input signals respectively, without any garbage output.
{"title":"Design of Multifunctional DR Gate and Its Application in ALU Design","authors":"A. G. Rao, A. K. D. Dwivedi","doi":"10.1109/ICIT.2014.49","DOIUrl":"https://doi.org/10.1109/ICIT.2014.49","url":null,"abstract":"Reversible Logic Technology has emerged as potential logic design style for implementation in Low Power VLSI Design, Quantum Computing and Dissipation less Computing. Based on this technology, a 6×6 Multifunctional Dwivedi - Rao (DR) Gate has been designed for implementation of logical and arithmetical functions. DR gate has been applied for designing of 1-bit ALU to perform NOT, 2 no's NOR, COPIER, ADD, COUT logical and arithmetical operations in one clock cycle. Comparison of various results shows that proposed DR gate and its application in ALU design is better than its counterpart [16] in terms of Quantum Cost, Logical Operations, Worst Case Delay, Garbage output, Total Boolean function performed etc. Proposed gate has been simulated on VHDL and improvement of 72, 40, 24 logical operations on 4, 3, 2 control input signals respectively, without any garbage output.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"14 1","pages":"339-344"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88039843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
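The truth table of the 6×6 DR gate is not reproduced in the abstract, but the defining property of any reversible gate, a bijective mapping from inputs to outputs, can be illustrated with the standard 3×3 Toffoli (CCNOT) gate. This is a textbook example, not the DR gate itself.

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: passes a and b through unchanged and flips c
    when both controls are 1. Shown to illustrate reversibility; the
    paper's 6x6 DR gate maps {0,1}^6 -> {0,1}^6 in an analogous bijection."""
    return a, b, c ^ (a & b)
```

Because the mapping is a bijection, applying the gate twice recovers the original inputs, and no information (hence, ideally, no energy) is lost; "garbage outputs" are the extra outputs a circuit must carry only to preserve this bijectivity.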
Service-oriented design and software development have gained much importance in the area of e-Governance applications. The main focus is to design and implement web services for efficient realisation of service interoperability and reuse. In this paper, we propose a layered service-oriented design approach with government department-wise abstraction levels for a uniform service specification and composition mechanism. For our case study we consider seven interdependent web services of different departments. The service identification, service composition, and service interaction patterns of all these services follow the principles of service-oriented design. A generalised framework is also proposed to develop any complex web service that can use and reuse services of multiple departments seamlessly. It provides a flexible and dependable solution for developing agile e-Governance applications as the complexity of web services increases.
{"title":"Service Oriented Layered Approach for E-Governance System Implementation","authors":"R. Das, Sujata Patnaik, A. K. Padhy, C. Mohini","doi":"10.1109/ICIT.2014.38","DOIUrl":"https://doi.org/10.1109/ICIT.2014.38","url":null,"abstract":"Service Oriented design and software development has gained much importance in the area of e-Governance applications. The main focus is to design and implement web services for efficient realisation of service interoperability and reuse. In this paper, we have proposed a layered service oriented design approach with government department wise abstraction levels for uniform service specification and composition mechanism. For our case study we have considered seven web services of different departments, which are interdependent. The service identification, service composition and service interaction pattern of all these services are done as per the principles of service oriented design. A generalised framework is also proposed to develop any complex web service, which can use and reuse services of multiple departments seamlessly. It provides flexible and dependable solution for developing agile e-Governance applications as the complexity of web services increases.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"74 1","pages":"293-298"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85404358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Map matching is a well-established problem which deals with mapping raw, time-stamped location traces to edges of a road network graph. Location traces may come from devices such as GPS receivers or mobile signals, and map matching has applications in mining travel patterns, route prediction, vehicle turn prediction, resource prediction in grid computing, etc. Existing map matching algorithms are designed to run on vertically scalable frameworks (enhancing CPU, disk storage, network resources, etc.), but vertical scaling has known limitations and implementation difficulties. In this paper we present a framework for horizontal scaling of map matching which overcomes the limitations of vertical scaling. The framework uses HBase for data storage and the MapReduce computation framework, both from the big data technology stack. The proposed framework is evaluated by running an ST-matching based map matching algorithm.
{"title":"Framework for Horizontal Scaling of Map Matching: Using Map-Reduce","authors":"V. Tiwari, Arti Arya, Sudha Chaturvedi","doi":"10.1109/ICIT.2014.70","DOIUrl":"https://doi.org/10.1109/ICIT.2014.70","url":null,"abstract":"Map Matching is a well-established problem which deals with mapping raw time stamped location traces to edges of road network graph. Location data traces may be from devices like GPS, Mobile Signals etc. It has applicability in mining travel patterns, route prediction, vehicle turn prediction and resource prediction in grid computing etc. Existing map matching algorithms are designed to run on vertical scalable frameworks (enhancing CPU, Disk storage, Network Resources etc.). Vertical scaling has known limitations and implementation difficulties. In this paper we present a framework for horizontal scaling of map-matching algorithm, which overcomes limitations of vertical scaling. This framework uses Hbase for data storage and map-reduce computation framework. Both of these technologies belong to big data technology stack. Proposed framework is evaluated by running ST-matching based map matching algorithm.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"1 1","pages":"30-34"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72679944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
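The map/reduce split for this kind of workload can be sketched in miniature: the map phase snaps each GPS fix to its nearest road edge independently (and so parallelises across nodes), and the reduce phase groups matches by trace. The toy road network and geometric matching below are illustrative; ST-matching additionally scores candidate edges by spatial and temporal consistency.

```python
from collections import defaultdict
from math import hypot

# Toy road network: edge id -> segment endpoints. Illustrative only.
EDGES = {"e1": ((0, 0), (10, 0)), "e2": ((0, 5), (10, 5))}

def point_to_edge_dist(p, edge):
    """Distance from point p to a segment (projection clamped to it)."""
    (x1, y1), (x2, y2) = edge
    px, py = p
    dx, dy = x2 - x1, y2 - y1
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def map_phase(records):
    """Map: emit (trace_id, (timestamp, nearest_edge)) per GPS fix."""
    for trace_id, ts, p in records:
        edge = min(EDGES, key=lambda e: point_to_edge_dist(p, EDGES[e]))
        yield trace_id, (ts, edge)

def reduce_phase(mapped):
    """Reduce: collect each trace's matched edges in timestamp order."""
    grouped = defaultdict(list)
    for trace_id, val in mapped:
        grouped[trace_id].append(val)
    return {t: [e for _, e in sorted(v)] for t, v in grouped.items()}
```

In the proposed framework the trace and network data would live in HBase and the two phases would run as Hadoop MapReduce jobs rather than in-process generators.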
B. K. Nayak, Monalisa Mishra, S. C. Rai, S. Pradhan
In the development of wireless sensor network (WSN) applications, organizing sensor nodes into a communication network and routing the sensed data from sensor nodes to a remote sink is a challenging task. Energy-efficient and reliable routing of data from source to destination with minimal power consumption remains a core research problem, so in WSNs we need an efficient protocol to route transmitted data while extending the network lifetime. In this paper, we propose a novel clustering algorithm, Front-Leading Energy Efficient Cluster Heads (FLEECH), in which the whole network is partitioned into regions of diminishing size, and multiple clusters are formed in each region. The selection of the cluster head (CH) is based on the residual energy and the distance of each node to the sink. Simulation results show that our proposed FLEECH model outperforms Low Energy Adaptive Clustering Hierarchy (LEACH) with respect to energy consumption and extension of network lifetime.
{"title":"A Novel Cluster Head Selection Method for Energy Efficient Wireless Sensor Network","authors":"B. K. Nayak, Monalisa Mishra, S. C. Rai, S. Pradhan","doi":"10.1109/ICIT.2014.74","DOIUrl":"https://doi.org/10.1109/ICIT.2014.74","url":null,"abstract":"In the development of wireless sensor networks (WSNs) applications, organizing sensor nodes into a communication network and route the sensed data from sensor nodes to a remote sink is a challenging task. Energy efficient and reliable routing of data from the source to destination with minimal power consumption remains as a core research problem. So, in WSN we need an efficient protocol to route any transmitted data with extended lifetime of network. In this paper, we propose a novel clustering algorithm, Front-Leading Energy Efficient Cluster Heads (FLEECH), in which the whole network is partitioned into regions with diminishing sizes. In each region, we form multiple clusters. The selection of the Cluster Head (CH) is based on residual energy and distance of each node to the sink as its parameter. Simulation results show that our proposed model FLEECH outperforms Low Energy Adaptive Clustering Hierarchy (LEACH) with respect to energy consumption and extension of network life time.","PeriodicalId":6486,"journal":{"name":"2014 17th International Conference on Computer and Information Technology (ICCIT)","volume":"12 1","pages":"53-57"},"PeriodicalIF":0.0,"publicationDate":"2014-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75144335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
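The CH selection rule described above combines residual energy and distance to the sink. A minimal sketch, using an illustrative energy-to-distance ratio as the score since the paper's exact weighting is not given in the abstract:

```python
from math import hypot

def select_cluster_head(nodes, sink=(0.0, 0.0)):
    """Pick the cluster head as the node with the best energy-to-distance
    score. `nodes` is a list of (node_id, (x, y), residual_energy).
    The score residual_energy / distance_to_sink is a hypothetical
    combination of the two FLEECH parameters, not the paper's formula."""
    def score(node):
        nid, (x, y), energy = node
        dist = max(hypot(x - sink[0], y - sink[1]), 1e-9)  # avoid /0 at sink
        return energy / dist
    return max(nodes, key=score)[0]
```

Favouring high residual energy and short sink distance is what lets a cluster head aggregate and forward its cluster's traffic for longer before depleting, which is the lifetime gain FLEECH reports over LEACH's randomized rotation.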