
2014 International Conference on Recent Trends in Information Technology: Latest Publications

Efficient host based intrusion detection system using Partial Decision Tree and Correlation feature selection algorithm
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996115
F. Lydia Catherine, Ravi Pathak, V. Vaidehi
System security has become a significant issue in many organizations. Attacks such as DoS, U2R, R2L and Probing pose a serious threat to the proper operation of Internet services as well as of host systems. In recent years, intrusion detection systems have been designed to stop intruders in host as well as network systems. Existing host based intrusion detection systems detect intrusions using the complete feature set and are not fast enough to detect attacks. To overcome this problem, this paper proposes an efficient HIDS, the Correlation based Partial Decision Tree Algorithm (CPDT). The proposed CPDT combines Correlation feature selection for selecting features with a Partial Decision Tree (PART) for classifying normal and abnormal packets. The algorithm has been implemented and validated on the KDD'99 dataset and found to give better results than the existing algorithms. The proposed CPDT model provides an accuracy of 99.9458%.
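As a rough illustration of the CPDT pipeline described above, the sketch below pairs a simple correlation-based feature ranking with a decision-tree classifier. It is only a sketch under stated assumptions: scikit-learn has no PART implementation, so `DecisionTreeClassifier` stands in for the rule-learning stage, and synthetic data stands in for KDD'99.

```python
# Hedged sketch of a CPDT-style pipeline: correlation-based feature
# selection followed by a tree classifier. DecisionTreeClassifier is a
# stand-in for PART (rule induction), and random data stands in for KDD'99.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 41))                    # 41 features, like KDD'99
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)      # synthetic normal/attack label

# Correlation feature selection: rank features by |Pearson correlation| with the label.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
selected = np.argsort(corr)[::-1][:10]             # keep the 10 most correlated features

X_train, X_test, y_train, y_test = train_test_split(
    X[:, selected], y, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=8, random_state=0)  # PART stand-in
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```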
Citations: 11
Nature - Inspired enhanced data deduplication for efficient cloud storage
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996211
G. Madhubala, R. Priyadharshini, P. Ranjitham, S. Baskaran
Cloud computing is the delivery of computing as a service; it is specifically concerned with the storage of data, enabling ubiquitous, convenient access to shared resources that are provided to computers and other devices as a utility over a network. Storage, which is considered the key attribute, is hindered by the presence of redundant copies of data. Data deduplication is a specialized technique for data compression and duplicate detection that eliminates duplicate copies of data to make storage utilization efficient. Cloud service providers currently employ hashing techniques to avoid the presence of redundant copies. However, these have a few major pitfalls that can be overcome by employing a nature-inspired Genetic Programming approach to deduplication. Genetic Programming is a systematic, domain-independent programming model that makes use of the ideas of biological evolution to handle a complicated problem. A sequence matching algorithm and Levenshtein's algorithm are used for text comparison, and Genetic Programming concepts are then utilized to detect the closest match. The performance of these three algorithms and of the hashing technique is compared. Since bio-inspired concepts, systems and algorithms are found to be more efficient, a nature-inspired approach for data deduplication in cloud storage is implemented.
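For the text-comparison stage mentioned above, a minimal sketch is shown below: `difflib.SequenceMatcher` plays the role of the sequence-matching algorithm and a small dynamic-programming routine computes the Levenshtein distance. The genetic-programming stage that evolves a combined similarity function is not reproduced here, and the duplicate-flagging threshold is an assumption.

```python
# Minimal sketch of the two text-comparison measures used before the
# genetic-programming stage: sequence matching and Levenshtein distance.
from difflib import SequenceMatcher

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def near_duplicate(a: str, b: str, ratio_threshold: float = 0.9) -> bool:
    """Flag two records as duplicates if either measure says they are close."""
    ratio = SequenceMatcher(None, a, b).ratio()
    edit = levenshtein(a, b) / max(len(a), len(b), 1)
    return ratio >= ratio_threshold or edit <= (1.0 - ratio_threshold)

print(near_duplicate("cloud storage record", "cloud storage recodr"))  # True
```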
Citations: 5
Efficient fingerprint lookup using Prefix Indexing Tablet
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996158
D. Priyadharshini, J. Angelina, K. Sundarakantham, S. Shalinie
Backups protect file systems from disk and other hardware failures, from software errors that may corrupt the file system, and from natural disasters. However, a single file may be present as multiple copies in the file system, so the time needed to find and eliminate redundant data is high. In addition, redundant data consumes more space in storage systems. Data de-duplication techniques are used to address these issues, and fingerprint lookup is a key ingredient of efficient de-duplication. This paper proposes an efficient fingerprint lookup technique called Prefix Indexing Tablets, in which the fingerprint lookup is performed only on the necessary tablets. To further reduce the fingerprint lookup delay, only the prefix of the fingerprint is considered. Experiments on standard datasets show that the lookup latency of the proposed de-duplication method is reduced by 62% and the running time is improved.
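The paper's exact tablet layout is not given here, so the sketch below only illustrates the general idea under stated assumptions: SHA-1 fingerprints are grouped into in-memory "tablets" keyed by a short prefix, and a lookup touches only the tablet matching the query's prefix. The real on-disk format and tablet sizing are not modelled.

```python
# Illustrative sketch of prefix-indexed fingerprint lookup (assumptions:
# SHA-1 fingerprints, a 2-hex-character prefix per tablet).
import hashlib
from collections import defaultdict

PREFIX_LEN = 2  # hex chars; 256 tablets

def fingerprint(chunk: bytes) -> str:
    return hashlib.sha1(chunk).hexdigest()

class PrefixIndex:
    def __init__(self):
        self.tablets = defaultdict(set)   # prefix -> set of fingerprints

    def insert(self, fp: str) -> None:
        self.tablets[fp[:PREFIX_LEN]].add(fp)

    def contains(self, fp: str) -> bool:
        # Only the tablet selected by the prefix is consulted.
        return fp in self.tablets.get(fp[:PREFIX_LEN], ())

index = PrefixIndex()
for chunk in (b"block-1", b"block-2", b"block-3"):
    index.insert(fingerprint(chunk))

print(index.contains(fingerprint(b"block-2")))   # True  -> duplicate, skip storing
print(index.contains(fingerprint(b"block-9")))   # False -> new chunk, store it
```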
Citations: 0
Grouping in collaborative e-learning environment based on interaction among students
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996170
D. Jagadish
Collaborative learning in an online classroom can take the form of conversation across the whole class or within smaller groups. Moodle (Modular Object-Oriented Dynamic Learning Environment) is a free and open source e-learning software platform, also known as a Learning Management System or Virtual Learning Environment (VLE). As a web-based tool, Moodle offers a way to deliver courses that include an enormous variety of information sources - links to multimedia, websites and images - which are hard to deliver in a traditional teaching setting. The chat activity module in Moodle allows participants to hold a real-time synchronous discussion in a Moodle course, and a teacher can organize users into groups within the course or within particular activities. This paper aims at efficient group formation of learners in a collaborative learning environment so that every individual in the group benefits. As a testing platform, a tenth-standard Tamil textbook is incorporated into Moodle. A K-NN clustering algorithm is used to improve group performance, and it achieves good results in terms of balancing the knowledge level among all the students.
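As a rough sketch of the grouping step, assuming per-student interaction and quiz scores from the Moodle logs are already available as numeric features, the snippet below clusters students by level and then deals one student from each cluster into each group so that every group mixes levels. KMeans is used here as a stand-in; the paper's exact K-NN-based procedure is not reproduced.

```python
# Hedged sketch: cluster students by interaction/knowledge scores, then
# form balanced groups by drawing from each cluster in turn. KMeans stands
# in for the paper's K-NN-based clustering; the scores are toy data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
scores = rng.uniform(0, 100, size=(30, 2))   # e.g. [quiz score, chat activity]

k = 3                                        # knowledge-level clusters
labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(scores)

# Deal students round-robin across clusters so each group mixes levels.
clusters = [list(np.where(labels == c)[0]) for c in range(k)]
groups = [[] for _ in range(len(scores) // k)]
for c in clusters:
    for slot, student in enumerate(c):
        groups[slot % len(groups)].append(student)

for i, g in enumerate(groups):
    print(f"group {i}: students {g}")
```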
Citations: 18
Application of Natural Language Processing in Object Oriented Software Development
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996121
Abinash Tripathy, S. Rath
The Software Development Life Cycle (SDLC) starts with eliciting the user's requirements in a document called the Software Requirement Specification (SRS). The SRS document is mostly written in whatever natural language (NL) is convenient for the client. In order to develop the right software based on the user's requirements, the objects, methods and attributes need to be identified from the SRS document. In this paper, an attempt is made to develop a methodology that applies Natural Language Processing (NLP) concepts to Object Oriented (OO) system analysis by finding class names and their details directly from the SRS.
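A minimal sketch of the idea, not the authors' methodology: nouns in an SRS sentence become candidate class names and verbs become candidate methods, using NLTK's part-of-speech tagger. The example sentence and the simple noun/verb heuristic are assumptions for illustration; the tokenizer and tagger models must be downloaded on first use.

```python
# Hedged sketch: extract candidate classes (nouns) and methods (verbs)
# from an SRS sentence with NLTK POS tagging. Only an illustration of the
# idea, not the paper's full methodology.
import nltk

# One-time downloads (uncomment on first run):
# nltk.download("punkt")
# nltk.download("averaged_perceptron_tagger")

srs_sentence = "The librarian issues a book to the registered student."

tokens = nltk.word_tokenize(srs_sentence)
tagged = nltk.pos_tag(tokens)

candidate_classes = [w.capitalize() for w, tag in tagged if tag.startswith("NN")]
candidate_methods = [w.lower() for w, tag in tagged if tag.startswith("VB")]

print("classes:", candidate_classes)   # e.g. ['Librarian', 'Book', 'Student']
print("methods:", candidate_methods)   # e.g. ['issues']
```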
Citations: 6
Throughput analysis of different traffic distribution in Cognitive Radio Network
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996210
P. Bharathi, K. K. Raj, Hiran Kumar Singh, Dhananjay Kumar
Traffic distribution in a wireless network plays a major role in resource allocation. In this paper, we analyze throughput in a Cognitive Radio Network (CRN) under two traffic distributions, Pareto on-off and Poisson. We consider a CRN in which the cell is divided into concentric circles and sectors. Each segment is analyzed and the channel is allocated accordingly, taking into account the blocking/dropping probability and the false-alarm/missed-detection probability. The system is simulated on a Java platform and the results show higher throughput for the Poisson distribution.
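The sketch below only illustrates how the two arrival models differ; the CRN cell/sector model, sensing errors and blocking analysis from the paper are not reproduced. It generates Poisson packet arrivals and Pareto on-off traffic over the same number of slots and compares a naive served-packet fraction against a fixed channel capacity, with all parameters chosen arbitrarily for illustration.

```python
# Hedged toy comparison of Poisson arrivals vs Pareto on-off traffic.
# Only the traffic generation is sketched; the paper's CRN sectoring,
# blocking/dropping and false-alarm analysis are not modelled here.
import numpy as np

rng = np.random.default_rng(2)
slots, capacity, rate = 10_000, 5, 4.0      # packets the channel can serve per slot

# Poisson traffic: independent arrivals each slot with mean `rate`.
poisson_arrivals = rng.poisson(rate, size=slots)

# Pareto on-off traffic: alternate ON bursts (rate packets/slot) and OFF
# silences, with heavy-tailed Pareto-distributed durations (shape 1.5).
pareto_arrivals = np.zeros(slots, dtype=int)
t, on = 0, True
while t < slots:
    duration = int(np.ceil(rng.pareto(1.5) + 1))
    if on:
        pareto_arrivals[t:t + duration] = rng.poisson(rate, size=min(duration, slots - t))
    t, on = t + duration, not on

def served_fraction(arrivals):
    return np.minimum(arrivals, capacity).sum() / max(arrivals.sum(), 1)

print("Poisson served fraction:", round(served_fraction(poisson_arrivals), 3))
print("Pareto on-off served fraction:", round(served_fraction(pareto_arrivals), 3))
```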
Citations: 2
Game theoretical approach for improving throughput capacity in wireless ad hoc networks
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996152
S. Suman, S. Porselvi, L. Bhagyalakshmi, Dhananjay Kumar
In wireless ad hoc networks, Quality of Service (QoS) can be obtained efficiently using a power control scheme, and power control can be achieved by incorporating cooperation among the available links. In this paper, we propose an adaptive pricing scheme that enables the nodes in the network to determine the maximum allowable power that can be used for data transmission within the network so as to avoid inducing interference on the other links in the network. Each node calculates the total power which, when used for data transmission with the other nodes, attains a Nash Equilibrium (NE) of the utility function. This in turn helps to maximize frequency reuse and thereby improves throughput capacity. Numerical results show that the overall throughput of the network is improved under this scheme.
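The paper's exact utility and pricing functions are not given here, so the snippet below is only a generic best-response sketch: each link repeatedly updates its transmit power to maximize a logarithmic-utility-minus-price objective given the interference from the others, the standard fixed-point iteration used to reach a Nash equilibrium in pricing-based power control. The channel gains, price and power cap are assumed toy values.

```python
# Hedged sketch of pricing-based power control via best-response iteration.
# Utility per link i: ln(1 + SINR_i) - price * p_i, whose best response is
# p_i = 1/price - (noise + interference_i)/g_ii, clipped to [0, p_max].
# The paper's exact utility/pricing is not reproduced; gains are toy values.
import numpy as np

rng = np.random.default_rng(3)
n, price, noise, p_max = 4, 2.0, 0.1, 1.0
G = rng.uniform(0.05, 0.2, size=(n, n))              # cross-link gains
np.fill_diagonal(G, rng.uniform(0.8, 1.0, size=n))   # direct-link gains

p = np.full(n, 0.5)
for _ in range(200):                                  # iterate best responses
    new_p = p.copy()
    for i in range(n):
        interference = noise + G[i] @ p - G[i, i] * p[i]
        new_p[i] = np.clip(1.0 / price - interference / G[i, i], 0.0, p_max)
    if np.allclose(new_p, p, atol=1e-9):              # converged to (approximate) NE
        break
    p = new_p

sinr = np.array([G[i, i] * p[i] / (noise + G[i] @ p - G[i, i] * p[i]) for i in range(n)])
print("equilibrium powers:", np.round(p, 3))
print("per-link throughput log2(1+SINR):", np.round(np.log2(1 + sinr), 3))
```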
Citations: 3
An Enhanced Adaptive Scoring Job Scheduling algorithm for minimizing job failure in heterogeneous grid network
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996161
S. K. Aparnaa, K. Kousalya
Grid computing involves sharing data storage and coordinating network resources. The complexity of scheduling increases with the heterogeneous nature of the grid, making it highly difficult to schedule effectively. The goal of grid job scheduling is to achieve high system performance and to match each job to an appropriate available resource. Due to the dynamic nature of the grid, the traditional job scheduling algorithms First Come First Serve (FCFS) and First Come Last Serve (FCLS) do not adapt to the grid environment. Many existing algorithms have been implemented to utilize the power of the grid fully and to schedule jobs efficiently; however, they do not consider the memory requirement of each cluster, which is one of the main resources for scheduling data-intensive jobs, and as a result the job failure rate is very high. To solve this problem, an Enhanced Adaptive Scoring Job Scheduling algorithm is introduced. Jobs are identified as data intensive or computation intensive, and they are scheduled on that basis. Jobs are allocated by computing a Job Score (JS) together with the memory requirement of each cluster. Because of the dynamic nature of the grid environment, the Job Score (JS) is recomputed each time the status of the resources changes and the jobs are allocated to the most appropriate resources. The proposed algorithm minimizes the job failure rate and also reduces the makespan.
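The paper's exact Job Score formula is not reproduced here; the sketch below only illustrates the shape of the idea under assumed weights and fields. Each cluster gets a score from its CPU speed, current load and free memory, clusters without enough memory are filtered out first (the failure-avoidance step), and the job goes to the highest-scoring remaining cluster.

```python
# Hedged sketch of adaptive-scoring job placement. The scoring weights and
# fields below are assumptions for illustration, not the paper's JS formula.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    cpu_ghz: float      # aggregate CPU speed
    load: float         # 0.0 (idle) .. 1.0 (saturated)
    free_mem_gb: float

@dataclass
class Job:
    name: str
    mem_gb: float
    data_intensive: bool

def job_score(c: Cluster, job: Job) -> float:
    score = c.cpu_ghz * (1.0 - c.load)
    if job.data_intensive:
        score += 0.5 * c.free_mem_gb      # weight memory more for data-intensive jobs
    return score

def schedule(job: Job, clusters: list[Cluster]) -> Cluster:
    eligible = [c for c in clusters if c.free_mem_gb >= job.mem_gb]  # avoid memory failures
    if not eligible:
        raise RuntimeError(f"no cluster can hold {job.name}")
    return max(eligible, key=lambda c: job_score(c, job))

clusters = [Cluster("A", 32.0, 0.7, 64.0), Cluster("B", 24.0, 0.2, 16.0), Cluster("C", 16.0, 0.1, 128.0)]
job = Job("etl-batch", mem_gb=32.0, data_intensive=True)
print("placed on:", schedule(job, clusters).name)
```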
Citations: 6
Hand based multibiometric authentication using local feature extraction
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996136
B. Bhaskar, S. Veluchamy
Biometrics has wide applications in the fields of security and privacy. Since unimodal biometrics is subject to various problems regarding recognition and security, multimodal biometrics is now used extensively for personal authentication. In this paper we propose an efficient personal identification system using two biometric identifiers, the palm print and the inner knuckle print. In recent years, palm prints and knuckle prints have overtaken other biometric identifiers because of their unique, stable and novel features. The proposed feature extraction method for the palm print is Monogenic Binary Coding (MBC), an efficient approach for extracting palm print features. For inner knuckle print recognition we try two algorithms, the Ridgelet Transform and the Scale Invariant Feature Transform (SIFT), and compare their results in terms of recognition rate. We then adopt a Support Vector Machine (SVM) for classifying the extracted feature vectors. Combining the knuckle print and the palm print for personal identification gives better security and accuracy.
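A hedged sketch of the final classification stage only: the MBC palm-print features and the SIFT or ridgelet knuckle features are assumed to have already been extracted into fixed-length vectors, which are concatenated (feature-level fusion) and classified with an SVM via scikit-learn's SVC. Random vectors stand in for the real descriptors.

```python
# Hedged sketch of the fusion + SVM stage. Feature extraction (MBC, SIFT,
# ridgelets) is assumed done already; random vectors stand in for the real
# palm-print and knuckle-print descriptors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n_subjects, samples_per_subject = 10, 20
palm_dim, knuckle_dim = 128, 64

y = np.repeat(np.arange(n_subjects), samples_per_subject)
# Per-subject templates plus noise stand in for extracted MBC / SIFT features.
palm = rng.normal(size=(n_subjects, palm_dim))[y] + 0.3 * rng.normal(size=(len(y), palm_dim))
knuckle = rng.normal(size=(n_subjects, knuckle_dim))[y] + 0.3 * rng.normal(size=(len(y), knuckle_dim))

X = np.hstack([palm, knuckle])           # feature-level fusion by concatenation

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=4, stratify=y)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("identification accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```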
Citations: 18
Multimodal biometric recognition using sclera and fingerprint based on ANFIS
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996159
M. Pallikonda Rajasekaran, M. Suresh, U. Dhanasekaran
Biometrics is the identification of humans using intrinsic physical, biological or behavioural features, traits, or habits. Biometrics has the potential to determine a person's identity clearly and discretely with additional accuracy and security. Biometric systems based on a single source of evidence are referred to as unimodal systems. Even though some unimodal systems (e.g. palm, fingerprint, face, iris) have achieved significant improvements in consistency and precision, they suffer from limitations attributable to the non-universality of biometric attributes, vulnerability to biometric spoofing and the insufficient accuracy produced by noisy data. In the future, a single biometric system may not be able to achieve the desired performance requirements in real-world applications. To overcome these issues, multimodal biometric authentication systems, which blend data from several modalities to make a decision, have to be used. A multimodal biometric authentication system uses more than one human modality, such as the face, iris, retina, sclera and fingerprint, to improve the security of the method. In this approach, the biometric traits of the sclera and the fingerprint are combined to address authentication issues, a combination that has not been discussed or implemented earlier. The fusion of modalities in a multimodal biometric system helps to reduce the system error rates. The ANFIS model consolidates the adaptive capability of neural networks with the qualitative strategy of fuzzy logic, and it has a lower false rejection rate than a neural network or a fuzzy logic framework alone. The combination of multimodal biometric security schemes in ANFIS shows higher accuracy compared with a Neural Network or a Fuzzy Inference System.
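ANFIS has no standard Python implementation, so the sketch below is only a stand-in for the decision stage: the sclera and fingerprint matchers are assumed to each produce a match score, and a small logistic-regression model is trained to fuse the two scores into an accept/reject decision (score-level fusion in place of the paper's ANFIS). All scores and thresholds are synthetic assumptions.

```python
# Hedged sketch of score-level fusion for sclera + fingerprint. Logistic
# regression is a stand-in for the paper's ANFIS decision stage, and the
# matcher scores are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
n = 1000
genuine = rng.uniform(size=n) < 0.5                     # True = genuine user, False = impostor

# Assumed matcher behaviour: genuine comparisons score higher on average.
sclera_score = np.where(genuine, rng.normal(0.75, 0.15, n), rng.normal(0.45, 0.15, n))
finger_score = np.where(genuine, rng.normal(0.80, 0.10, n), rng.normal(0.50, 0.10, n))

X = np.column_stack([sclera_score, finger_score])
y = genuine.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5, stratify=y)
fusion = LogisticRegression().fit(X_tr, y_tr)            # ANFIS stand-in
print("fused decision accuracy:", accuracy_score(y_te, fusion.predict(X_te)))
print("single-modality baselines:",
      accuracy_score(y_te, (X_te[:, 0] > 0.6).astype(int)),
      accuracy_score(y_te, (X_te[:, 1] > 0.65).astype(int)))
```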
Citations: 2