Efficient host based intrusion detection system using Partial Decision Tree and Correlation feature selection algorithm
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996115
F. Lydia Catherine, Ravi Pathak, V. Vaidehi
System security has become a significant issue in many organizations. Attacks such as DoS, U2R, R2L and Probing pose a serious threat to the proper operation of Internet services as well as of host systems. In recent years, intrusion detection systems have been designed to stop intruders on hosts as well as in networks. Existing host based intrusion detection systems detect intrusions using the complete feature set and are not fast enough to detect attacks. To overcome this problem, this paper proposes an efficient HIDS: the Correlation based Partial Decision Tree algorithm (CPDT). CPDT combines Correlation-based feature selection for selecting features with a Partial Decision Tree (PART) classifier for separating normal and abnormal packets. The algorithm is implemented and validated on the KDD'99 dataset and is found to give better results than existing algorithms, with an accuracy of 99.9458%.
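As a rough illustration of the CPDT pipeline described above, the sketch below chains a correlation-based feature ranking with a decision-tree classifier; scikit-learn's SelectKBest and DecisionTreeClassifier are stand-ins for the paper's CFS and PART components, and the synthetic data is a placeholder for KDD'99.

```python
# Sketch of a CPDT-like pipeline (assumptions: SelectKBest with an absolute-correlation
# score stands in for CFS, DecisionTreeClassifier stands in for PART, and the data is a
# synthetic placeholder for KDD'99 feature vectors and normal/attack labels).
import numpy as np
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

def abs_correlation(X, y):
    # Score each feature by the absolute Pearson correlation with the class label.
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.nan_to_num(scores)

pipeline = Pipeline([
    ("select", SelectKBest(score_func=abs_correlation, k=10)),  # keep the 10 most correlated features
    ("tree", DecisionTreeClassifier(random_state=0)),           # stand-in for the PART rule learner
])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 41))            # placeholder for the 41 KDD'99 features
y = (X[:, 0] + X[:, 5] > 0).astype(int)   # placeholder normal/attack labels
pipeline.fit(X, y)
print("training accuracy:", pipeline.score(X, y))
```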
Nature-Inspired enhanced data deduplication for efficient cloud storage
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996211
G. Madhubala, R. Priyadharshini, P. Ranjitham, S. Baskaran
Cloud computing is the delivery of computing as a service, concerned in particular with data storage, enabling ubiquitous, convenient access to shared resources provided to computers and other devices as a utility over a network. Storage, the key attribute, is hindered by the presence of redundant copies of data. Data deduplication is a specialized technique for data compression and duplicate detection that eliminates duplicate copies of data to make storage utilization efficient. Cloud service providers currently employ hashing to avoid redundant copies. Hashing, however, has a few major pitfalls that can be overcome with a nature-inspired, Genetic Programming approach to deduplication. Genetic Programming is a systematic, domain-independent programming model that applies the principles of biological evolution to complex problems. A sequence matching algorithm and Levenshtein's algorithm are used for text comparison, and Genetic Programming concepts are then used to detect the closest match. The performance of these three algorithms and of the hashing technique is compared. Since bio-inspired concepts, systems and algorithms are found to be more efficient, a nature-inspired approach to data deduplication in cloud storage is implemented.
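The text-comparison step can be pictured with the following sketch, which pairs Python's difflib.SequenceMatcher with a hand-rolled Levenshtein distance; the thresholds and the way the two scores are combined are illustrative assumptions, not the paper's evolved GP solution.

```python
# Minimal sketch of the two text-comparison measures the paper combines (assumptions:
# a simple dynamic-programming edit distance stands in for "Levenshtein's Algorithm",
# and difflib.SequenceMatcher plays the role of the sequence matching step).
from difflib import SequenceMatcher

def levenshtein(a: str, b: str) -> int:
    # Classic edit-distance dynamic programme over two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_near_duplicate(a: str, b: str, ratio_threshold=0.9, edit_threshold=3) -> bool:
    # Flag a pair when either measure says the texts are almost identical; a GP search
    # could instead evolve how the two scores are weighted and thresholded.
    seq_ratio = SequenceMatcher(None, a, b).ratio()
    return seq_ratio >= ratio_threshold or levenshtein(a, b) <= edit_threshold

print(is_near_duplicate("cloud storage report 2014", "cloud storage report 2O14"))  # True
```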
Efficient fingerprint lookup using Prefix Indexing Tablet
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996158
D. Priyadharshini, J. Angelina, K. Sundarakantham, S. Shalinie
Backups protect file systems from disk and other hardware failures, from software errors that may corrupt the file system, and from natural disasters. However, a single file may be present as multiple copies in the file system, so the time spent searching for redundant data and eliminating it is high. In addition, redundant data consumes more space in storage systems. Data de-duplication techniques are used to address these issues, and fingerprint lookup is a key ingredient of efficient de-duplication. This paper proposes an efficient fingerprint lookup technique called Prefix Indexing Tablets, in which the lookup is performed only on the necessary tablets. To further reduce the lookup delay, only the prefix of the fingerprint is considered. Experiments on standard datasets show that the lookup latency of the proposed de-duplication method is reduced by 62% and the running time is improved.
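A minimal sketch of the idea of routing each lookup to a single prefix-indexed tablet is shown below; the SHA-1 fingerprints, the 4-character prefix and the in-memory tablets are assumptions made for illustration, not the paper's on-disk layout.

```python
# Minimal sketch of prefix-indexed fingerprint lookup (assumptions: SHA-1 chunk
# fingerprints and a 4-hex-character prefix routing each lookup to one "tablet").
import hashlib
from collections import defaultdict

PREFIX_LEN = 4
tablets = defaultdict(set)   # prefix -> set of full fingerprints stored in that tablet

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha1(chunk).hexdigest()

def is_duplicate(chunk: bytes) -> bool:
    fp = fingerprint(chunk)
    tablet = tablets[fp[:PREFIX_LEN]]   # only one tablet is consulted per lookup
    if fp in tablet:
        return True                     # duplicate chunk: do not store it again
    tablet.add(fp)
    return False

print(is_duplicate(b"block A"))  # False (first time the chunk is seen)
print(is_duplicate(b"block A"))  # True  (duplicate detected)
```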
Grouping in collaborative e-learning environment based on interaction among students
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996170
D. Jagadish
Collaborative learning in an online classroom can take the form of conversation within the whole class or within smaller groups. Moodle (Modular Object-Oriented Dynamic Learning Environment) is a free and open source e-learning software platform, also known as a Learning Management System or Virtual Learning Environment (VLE). As a web-based tool, Moodle offers a way to deliver courses that include an enormous variety of information sources, such as links to multimedia, websites and images, which are hard to deliver in a traditional teaching setting. The chat activity module in Moodle allows participants to hold a real-time synchronous discussion in a Moodle course, and a teacher can organize users into groups within the course or within particular activities. This paper aims at efficient group formation of learners in a collaborative learning environment so that every individual in the group benefits. As a test platform, a tenth standard Tamil textbook is incorporated into Moodle, and a K-NN clustering algorithm is used to improve group performance. The algorithm achieves good performance in terms of balancing the knowledge level among all the students.
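One plausible reading of the grouping step is sketched below: students are clustered on interaction features and balanced groups are then drawn across clusters. The use of k-means, the feature set and the group-formation rule are all assumptions, since the abstract refers only to a K-NN clustering algorithm.

```python
# Sketch of one possible grouping scheme (assumptions: k-means clustering over
# per-student interaction features, then mixed-ability groups formed by taking one
# student from each cluster; the paper's exact procedure may differ).
import numpy as np
from sklearn.cluster import KMeans

# Rows: students; columns: hypothetical interaction features (chat posts, quiz score, logins).
features = np.array([
    [42, 0.9, 30], [ 5, 0.4, 12], [20, 0.7, 25],
    [38, 0.8, 28], [ 7, 0.3, 10], [18, 0.6, 22],
])

k = 3  # number of knowledge-level clusters
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

# Form mixed-ability groups: each group draws one student from every cluster.
clusters = {c: list(np.where(labels == c)[0]) for c in range(k)}
groups = list(zip(*clusters.values()))
print(groups)   # each tuple is one balanced group of student indices
```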
Application of Natural Language Processing in Object Oriented Software Development
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996121
Abinash Tripathy, S. Rath
The Software Development Life Cycle (SDLC) starts with eliciting the user's requirements as a document called the Software Requirement Specification (SRS). The SRS document is mostly written in whatever natural language (NL) is convenient for the client. In order to develop the right software based on the user's requirements, the objects, methods and attributes need to be identified from the SRS document. In this paper, an attempt is made to develop a methodology that uses Natural Language Processing (NLP) for Object Oriented (OO) system analysis by finding the class names and their details directly from the SRS.
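The core extraction idea can be illustrated with a small NLTK sketch in which nouns become candidate class names and verbs become candidate methods; the example sentence and the extraction rules are illustrative assumptions and far simpler than the methodology the paper develops.

```python
# Sketch of noun/verb extraction from an SRS sentence (assumptions: NLTK's POS tagger,
# nouns treated as candidate classes, verbs as candidate methods; real rules are richer).
import nltk
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")  # one-time setup

sentence = "The librarian issues a book to the registered member."
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

candidate_classes = [w.capitalize() for w, tag in tagged if tag.startswith("NN")]
candidate_methods = [w.lower() for w, tag in tagged if tag.startswith("VB")]

print("candidate classes:", candidate_classes)   # nouns, e.g. Librarian, Book, Member
print("candidate methods:", candidate_methods)   # verbs (tagger-dependent)
```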
Throughput analysis of different traffic distribution in Cognitive Radio Network
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996210
P. Bharathi, K. K. Raj, Hiran Kumar Singh, Dhananjay Kumar
The traffic distribution in a wireless network plays a major role in resource allocation. In this paper, we analyze throughput in a Cognitive Radio Network (CRN) under two traffic distributions, Pareto on-off and Poisson. We consider a CRN whose cell is divided into concentric circles and sectors. Each segment is analyzed and the channel is allocated accordingly, taking into account the blocking and dropping probabilities and the false alarm and missed detection probabilities. The system is simulated on the Java platform, and the results show higher throughput for the Poisson distribution.
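The two traffic models being compared can be sketched as follows; the slot structure, the Pareto shape parameter and the fixed on-period peak rate are assumptions, since the abstract does not give the simulator settings (the paper's own simulator is in Java).

```python
# Sketch contrasting Poisson traffic with a Pareto on-off source (assumptions: packets
# per slot are Poisson with a given mean, and the on-off source sends at a fixed peak
# rate during Pareto-distributed on periods; parameters are illustrative only).
import numpy as np

rng = np.random.default_rng(0)
slots, mean_load = 10_000, 5.0           # number of time slots and mean packets per slot

# Poisson traffic: independent packet counts per slot.
poisson_traffic = rng.poisson(lam=mean_load, size=slots)

# Pareto on-off traffic: heavy-tailed on/off durations, fixed peak rate while "on".
def pareto_on_off(slots, shape=1.5, scale=1.0, peak_rate=10):
    traffic, on = [], True
    while len(traffic) < slots:
        duration = max(1, int(scale * (1 + rng.pareto(shape))))
        traffic += [peak_rate if on else 0] * duration
        on = not on
    return np.array(traffic[:slots])

pareto_traffic = pareto_on_off(slots)
print(poisson_traffic.mean(), pareto_traffic.mean())   # compare offered loads
print(poisson_traffic.var(),  pareto_traffic.var())    # the on-off source is far burstier
```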
Game theoretical approach for improving throughput capacity in wireless ad hoc networks
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996152
S. Suman, S. Porselvi, L. Bhagyalakshmi, Dhananjay Kumar
In wireless ad hoc networks, Quality of Service (QoS) can be obtained efficiently through power control, which can be achieved by incorporating cooperation among the available links. In this paper, we propose an adaptive pricing scheme that enables the nodes in the network to determine the maximum allowable transmission power so as to avoid inducing interference on the other links in the network. Each node calculates the power which, when used for data transmission with the other nodes, attains a Nash Equilibrium (NE) of the utility function. This in turn helps maximize frequency reuse and thereby improves throughput capacity. Numerical results show that the overall throughput of the network is improved under this scheme.
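A hedged sketch of pricing-based power control is given below; the logarithmic utility with a linear price, its closed-form best response, and the channel-gain values are assumptions chosen to illustrate how iterative updates settle at a Nash equilibrium, not the paper's exact formulation.

```python
# Sketch of pricing-based best-response power updates (assumptions: utility
# u_i = log(1 + SINR_i) - price * p_i, whose best response is p_i = 1/price - I_i/g_ii;
# the gain matrix, noise power and price are illustrative values).
import numpy as np

G = np.array([[1.0, 0.1, 0.2],   # G[i, j]: gain from transmitter j to receiver i
              [0.2, 0.8, 0.1],
              [0.1, 0.2, 0.9]])
noise, price, p_max = 0.01, 2.0, 1.0
p = np.full(3, 0.5)              # initial transmit powers

for _ in range(100):             # iterate best responses until the powers settle
    for i in range(3):
        interference = noise + sum(G[i, j] * p[j] for j in range(3) if j != i)
        p[i] = np.clip(1.0 / price - interference / G[i, i], 0.0, p_max)

sinr = [G[i, i] * p[i] / (noise + sum(G[i, j] * p[j] for j in range(3) if j != i))
        for i in range(3)]
print(np.round(p, 3), np.round(sinr, 2))  # equilibrium powers and per-link SINR
```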
An Enhanced Adaptive Scoring Job Scheduling algorithm for minimizing job failure in heterogeneous grid network
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996161
S. K. Aparnaa, K. Kousalya
Grid computing involves sharing data storage and coordinating network resources. The complexity of scheduling increases with the heterogeneous nature of the grid, making it highly difficult to schedule effectively. The goal of grid job scheduling is to achieve high system performance and to match each job to the appropriate available resource. Due to the dynamic nature of the grid, the traditional job scheduling algorithms First Come First Serve (FCFS) and First Come Last Serve (FCLS) do not adapt to the grid environment. Many algorithms have been implemented to exploit the power of the grid fully and to schedule jobs efficiently. However, the existing algorithms do not consider the memory requirement of each cluster, which is one of the main resources for scheduling data-intensive jobs, and as a result the job failure rate is very high. To address this problem, an Enhanced Adaptive Scoring Job Scheduling algorithm is introduced. Each job is identified as data intensive or computation intensive and is scheduled accordingly. Jobs are allocated by computing a Job Score (JS) together with the memory requirement of each cluster. Because the status of the resources changes over time in the dynamic grid environment, the Job Score (JS) is recomputed each time and jobs are allocated to the most appropriate resources. The proposed algorithm minimizes the job failure rate and also reduces the makespan.
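The scoring-plus-memory-check idea can be sketched as follows; the weighted-sum Job Score and the specific weights are assumptions, since the abstract does not give the formula, but the memory-feasibility filter reflects the paper's stated motivation for reducing failures of data-intensive jobs.

```python
# Sketch of adaptive scoring with a memory check (assumptions: Job Score is a weighted
# sum of CPU speed, free memory and inverse load, with weights shifted toward memory
# for data-intensive jobs; the paper's exact scoring formula is not reproduced here).
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    cpu_speed: float   # relative CPU speed
    free_mem: float    # free memory in GB
    load: float        # current utilization, 0..1

def job_score(cluster: Cluster, data_intensive: bool) -> float:
    w_cpu, w_mem = (0.2, 0.6) if data_intensive else (0.6, 0.2)
    w_load = 1.0 - w_cpu - w_mem
    return (w_cpu * cluster.cpu_speed +
            w_mem * cluster.free_mem +
            w_load * (1.0 - cluster.load))

def schedule(job_mem_gb: float, data_intensive: bool, clusters):
    # Clusters without enough free memory are filtered out before scoring.
    feasible = [c for c in clusters if c.free_mem >= job_mem_gb]
    return max(feasible, key=lambda c: job_score(c, data_intensive), default=None)

clusters = [Cluster("A", 3.0, 4.0, 0.7), Cluster("B", 2.0, 16.0, 0.3)]
print(schedule(job_mem_gb=8.0, data_intensive=True, clusters=clusters).name)  # "B"
```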
Hand based multibiometric authentication using local feature extraction
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996136
B. Bhaskar, S. Veluchamy
Biometrics has wide applications in the fields of security and privacy. Since unimodal biometrics is subject to various problems regarding recognition and security, multimodal biometrics is now used extensively for personal authentication. In this paper we propose an efficient personal identification system using two biometric identifiers, the palm print and the inner knuckle print. In recent years, palm prints and knuckle prints have overtaken other biometric identifiers because of their unique, stable and novel features. The proposed feature extraction method for the palm print is Monogenic Binary Coding (MBC), an efficient approach to extracting palm print features. For inner knuckle print recognition we evaluate two algorithms, the Ridgelet Transform and the Scale Invariant Feature Transform (SIFT), and compare their recognition rates. We then adopt a Support Vector Machine (SVM) for classifying the extracted feature vectors. Combining the knuckle print and the palm print for personal identification gives better security and accuracy.
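The fusion-and-classification stage can be pictured with the sketch below; random placeholder vectors stand in for the MBC palm-print and SIFT knuckle-print descriptors, so the printed accuracy is meaningless, and only the concatenation and SVM steps mirror the paper.

```python
# Sketch of feature-level fusion followed by SVM classification (assumptions: the MBC
# and SIFT descriptors are replaced by random placeholders, since only the fusion and
# classification steps are being illustrated; real descriptors come from hand images).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, samples_per_subject = 5, 6
palm_dim, knuckle_dim = 128, 64

# Placeholder feature matrices standing in for MBC (palm) and SIFT (knuckle) descriptors.
palm_feats = rng.normal(size=(n_subjects * samples_per_subject, palm_dim))
knuckle_feats = rng.normal(size=(n_subjects * samples_per_subject, knuckle_dim))
labels = np.repeat(np.arange(n_subjects), samples_per_subject)

# Feature-level fusion: concatenate the two descriptors for each sample.
fused = np.hstack([palm_feats, knuckle_feats])

clf = SVC(kernel="rbf").fit(fused[::2], labels[::2])        # train on half the samples
print("accuracy:", clf.score(fused[1::2], labels[1::2]))    # near chance with random placeholders
```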
Multimodal biometric recognition using sclera and fingerprint based on ANFIS
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996159
M. Pallikonda Rajasekaran, M. Suresh, U. Dhanasekaran
Biometrics is the identification of humans using intrinsic physical, biological or behavioural features, traits or habits. Biometrics has the potential to determine a person's identity clearly and discreetly with additional accuracy and security. Biometric systems based on a single source of evidence are referred to as unimodal systems. Even though some unimodal systems (e.g. palm, fingerprint, face, iris) have achieved significant improvements in consistency and precision, they still suffer from adoption issues owing to the non-universality of biometric traits, vulnerability to biometric spoofing, and insufficient accuracy caused by noisy data. In the future, a single biometric system may not be able to achieve the performance required by real-world applications. To overcome these issues, multimodal biometric authentication systems, which combine data from several modalities to make a decision, must be used. Multimodal biometric authentication systems use more than one human modality, such as the face, iris, retina, sclera and fingerprint, to improve the security of the method. In this approach, the biometric traits of the sclera and the fingerprint are combined to address authentication issues, a combination that has not been discussed or implemented earlier. Fusing the modalities in a multimodal biometric system helps to reduce the system error rates. The ANFIS model, which consolidates the adaptive capability of neural networks with the qualitative reasoning of fuzzy logic, has a lower false rejection rate than a neural network or a fuzzy logic framework alone. The multimodal biometric security scheme based on ANFIS shows higher accuracy compared with a Neural Network and a Fuzzy Inference System.
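A highly simplified sketch of the fusion step is given below: a fixed zero-order Sugeno rule base combines two normalized match scores, which is the inference skeleton an ANFIS would tune. The membership functions and rule outputs are assumptions, and the neural, parameter-learning half of ANFIS is omitted.

```python
# Sketch of score-level fusion with a tiny zero-order Sugeno fuzzy rule base
# (assumptions: two normalized match scores in [0, 1], hand-set membership functions
# and constant rule consequents; an ANFIS would learn these parameters from data).

def fuse_scores(sclera_score: float, finger_score: float) -> float:
    high = lambda x: x            # membership degree of "high match"
    low = lambda x: 1.0 - x       # membership degree of "low match"

    # (rule firing strength, rule output): product T-norm, constant consequents.
    rules = [
        (high(sclera_score) * high(finger_score), 1.0),   # both strong  -> accept
        (high(sclera_score) * low(finger_score),  0.5),   # only sclera  -> uncertain
        (low(sclera_score)  * high(finger_score), 0.5),   # only finger  -> uncertain
        (low(sclera_score)  * low(finger_score),  0.0),   # both weak    -> reject
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den               # weighted-average defuzzification

print(fuse_scores(0.9, 0.8))       # ~0.85 -> likely genuine
print(fuse_scores(0.3, 0.2))       # ~0.25 -> likely impostor
```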