
2014 International Conference on Recent Trends in Information Technology: latest publications

Efficient host based intrusion detection system using Partial Decision Tree and Correlation feature selection algorithm
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996115
F. Lydia Catherine, Ravi Pathak, V. Vaidehi
System security has become significant issue in many organizations. The attacks like DoS, U2R, R2L and Probing etc., creating a serious threat to the appropriate operation of Internet services as well as in host system. In recent years, intrusion detection system is designed to prevent the intruder in the host as well as in network systems. Existing host based intrusion detection systems detects the intrusion using complete feature set and it is not fast enough to detect the attacks. To overcome this problem, this paper proposes an efficient HIDS - Correlation based Partial Decision Tree Algorithm (CPDT). The proposed CPDT combines Correlation feature selection for selecting features and Partial Decision Tree (PART) for classifying the normal and the abnormal packets. The algorithm is implemented and has been validated within KDD'99 dataset and found to give better results than the existing algorithms. The proposed CPDT model provides the accuracy of 99.9458%.
Citations: 11
Efficient fingerprint lookup using Prefix Indexing Tablet
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996158
D. Priyadharshini, J. Angelina, K. Sundarakantham, S. Shalinie
Backups protect file systems from disk or other hardware failures, software errors that may corrupt the file system, and natural disasters. However, a single file may be present as multiple copies in the file system, so the time spent searching for redundant data and eliminating it is high. In addition, redundant data consumes more space in storage systems. Data de-duplication techniques are used to address these issues, and fingerprint lookup is a key ingredient of efficient de-duplication. This paper proposes an efficient fingerprint lookup technique called Prefix Indexing Tablets, in which fingerprint lookup is performed only on the necessary tablets. To further reduce fingerprint lookup delay, only the prefix of the fingerprint is considered. Experimentation on standard datasets shows that the lookup latency of the proposed de-duplication method is reduced by 62% and the running time is improved.
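The tablet idea can be sketched as follows: fingerprints are bucketed by a short prefix, so a lookup touches only one bucket instead of the full index. The class and field names are illustrative, not the paper's, and real tablet sizing and management are omitted.

```python
# Minimal sketch of prefix-indexed fingerprint lookup for deduplication.
import hashlib

PREFIX_LEN = 2  # hex characters of the fingerprint used as the tablet key

def fingerprint(block: bytes) -> str:
    return hashlib.sha1(block).hexdigest()

class PrefixIndex:
    def __init__(self):
        self.tablets = {}  # prefix -> set of full fingerprints

    def is_duplicate(self, block: bytes) -> bool:
        fp = fingerprint(block)
        tablet = self.tablets.setdefault(fp[:PREFIX_LEN], set())
        if fp in tablet:        # only this one tablet is searched
            return True
        tablet.add(fp)          # first occurrence: index it
        return False

idx = PrefixIndex()
print(idx.is_duplicate(b"hello"))  # False (first copy, stored)
print(idx.is_duplicate(b"hello"))  # True  (duplicate detected)
```

Because each lookup inspects a single small tablet, the cost is bounded by the tablet size rather than the total number of stored fingerprints.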
Citations: 0
Nature - Inspired enhanced data deduplication for efficient cloud storage
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996211
G. Madhubala, R. Priyadharshini, P. Ranjitham, S. Baskaran
Cloud computing is the delivery of computing as a service; it is specifically concerned with the storage of data, enabling ubiquitous, convenient access to shared resources that are provided to computers and other devices as a utility over a network. Storage, considered the key attribute, is hindered by the presence of redundant copies of data. Data deduplication is a specialized technique for data compression and duplicate detection that eliminates duplicate copies of data to make storage utilization efficient. Cloud service providers currently employ hashing techniques to avoid redundant copies, but a few major pitfalls remain that can be overcome through a nature-inspired Genetic Programming approach to deduplication. Genetic Programming is a systematic, domain-independent programming model that uses the ideas of biological evolution to handle complicated problems. A sequence matching algorithm and Levenshtein's algorithm are used for text comparison, and Genetic Programming concepts are then utilized to detect the closest match. The performance of these three algorithms and the hashing technique is compared. Since bio-inspired concepts, systems and algorithms are found to be more efficient, a nature-inspired approach to data deduplication in cloud storage is implemented.
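The Levenshtein step mentioned above is standard dynamic programming; a minimal version follows, with a duplicate threshold of our own choosing for illustration (the GP-based matching is not shown).

```python
# Levenshtein edit distance: the classic two-row dynamic-programming version.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def near_duplicate(a: str, b: str, max_ratio: float = 0.2) -> bool:
    """Treat two strings as duplicates if the edit distance is a small
    fraction of the longer string (threshold is an illustrative choice)."""
    return levenshtein(a, b) <= max_ratio * max(len(a), len(b))

print(levenshtein("kitten", "sitting"))                  # 3
print(near_duplicate("report_v1.txt", "report_v2.txt"))  # True
```

Unlike exact hashing, this distance-based comparison can flag near-identical content that differs by a few edits.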
Citations: 5
An Enhanced Adaptive Scoring Job Scheduling algorithm for minimizing job failure in heterogeneous grid network
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996161
S. K. Aparnaa, K. Kousalya
Grid computing involves sharing data storage and coordinating network resources. The complexity of scheduling increases with the heterogeneous nature of the grid, making effective scheduling highly difficult. The goal of grid job scheduling is to achieve high system performance and to match each job to an appropriate available resource. Due to the dynamic nature of the grid, the traditional job scheduling algorithms First Come First Serve (FCFS) and First Come Last Serve (FCLS) do not adapt to the grid environment. Many algorithms have been implemented to utilize the power of the grid fully and to schedule jobs efficiently. However, the existing algorithms do not consider the memory requirement of each cluster, which is one of the main resources for scheduling data-intensive jobs, and consequently the job failure rate is very high. To solve this problem, an Enhanced Adaptive Scoring Job Scheduling algorithm is introduced. Each job is identified as data intensive or computation intensive, and scheduling is based on that classification. Jobs are allocated by computing a Job Score (JS) together with the memory requirement of each cluster. Because of the dynamic nature of the grid environment, the Job Score (JS) is recomputed each time the status of the resources changes, and jobs are allocated to the most appropriate resources. The proposed algorithm minimizes the job failure rate and also reduces makespan.
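Since the paper's exact Job Score formula is not given here, the sketch below uses an illustrative score of our own that rejects clusters with insufficient free memory (the failure mode described above) and otherwise prefers faster clusters with more memory headroom; all field names are assumptions.

```python
# Hedged sketch of score-based job placement with a memory-feasibility check.
def job_score(cluster, job):
    """Illustrative score: negative if the cluster cannot hold the job,
    otherwise CPU speed scaled by how comfortably the memory fits."""
    if cluster["free_mem"] < job["mem"]:
        return -1.0
    headroom = (cluster["free_mem"] - job["mem"]) / cluster["free_mem"]
    return cluster["cpu_speed"] * headroom

def place(clusters, job):
    """Send the job to the highest-scoring cluster, or nowhere if none fit."""
    best = max(clusters, key=lambda c: job_score(c, job))
    return best["name"] if job_score(best, job) >= 0 else None

clusters = [
    {"name": "A", "cpu_speed": 2.0, "free_mem": 4},
    {"name": "B", "cpu_speed": 3.0, "free_mem": 16},
]
job = {"mem": 8}
print(place(clusters, job))  # B: cluster A lacks the required memory
```

Recomputing the scores whenever resource status changes gives the adaptive behaviour the abstract describes.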
Citations: 6
Towards secure audit services for outsourced data in cloud
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996214
Sumalatha M R, Hemalathaa S, Monika R, Ahila C
The rapid growth of Cloud Computing introduces a myriad of security hazards to information and data. Data outsourcing relieves the user of local data storage and maintenance, but introduces security implications: a third-party service provider stores and maintains the data, applications or infrastructure of the cloud user. Auditing methods and infrastructures in the cloud play an important role in cloud security strategies. As the data and applications deployed in the cloud become more sensitive, the requirement for auditing systems that provide rapid analysis and quick responses becomes inevitable. In this work we provide a privacy-preserving data integrity protection mechanism by allowing public auditing of cloud storage with the assistance of the data owner's identity. This guarantees that auditing can be done by a third party without fetching the entire data from the cloud. A data protection scheme is also outlined, providing a method for data to be encrypted in the cloud without loss of accessibility or functionality for authorized users.
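As a much-simplified sketch of the auditing idea (real public-auditing schemes use homomorphic authenticators so the verifier never needs the full data; the plain-HMAC version below is only a toy), an owner-keyed tag lets a verifier check cloud-held data against what the owner originally stored.

```python
# Toy integrity audit: the owner keeps a key-bound tag; the auditor
# recomputes it over the cloud's copy and compares. Simplification, not
# the paper's scheme.
import hashlib
import hmac

def make_tag(owner_key: bytes, data: bytes) -> bytes:
    """Tag bound to the owner's key (stands in for the owner's identity)."""
    return hmac.new(owner_key, data, hashlib.sha256).digest()

def audit(owner_key: bytes, cloud_data: bytes, stored_tag: bytes) -> bool:
    """Constant-time check that the cloud copy still matches the tag."""
    return hmac.compare_digest(make_tag(owner_key, cloud_data), stored_tag)

key = b"owner-identity-key"
tag = make_tag(key, b"outsourced block")
print(audit(key, b"outsourced block", tag))  # True
print(audit(key, b"tampered block", tag))    # False
```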
Citations: 2
Automatic localization and segmentation of Optic Disc in retinal fundus images through image processing techniques
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996090
R. GeethaRamani, C. Dhanapackiam
Optic Disc location detection and extraction play a main role in the automatic analysis of retinal images. Ophthalmologists analyze the Optic Disc to find the presence or absence of retinal diseases such as Glaucoma, Diabetic Retinopathy, Occlusion, Orbital lymphangioma, Papilloedema, Pituitary Cancer, Open-angle glaucoma, etc. In this paper, we localize and segment the Optic Disc region of retinal fundus images by a template matching method and morphological procedures. The optic nerve originates in the brightest region of the retinal image and acts as the main region for detecting retinal diseases, using the cup-to-disc ratio (CDR) and the ratio between the optic rim and the center of the Optic Disc. The proposed work localizes and segments the Optic Disc, and then determines the corresponding center points and diameter in the retinal fundus images. We used the Gold Standard Database (available in a public repository), which comprises 30 retinal fundus images, for our experiments. The location of the Optic Disc is detected and segmented for all images, and the center and diameter of the segmented Optic Disc are evaluated against the ground-truth Optic Disc center points and diameter specified by ophthalmologist experts. The Optic Disc centers and diameters identified by our method are close to the ground truth provided by the ophthalmologist experts. The proposed system achieves 98.7% accuracy in locating the Optic Disc when compared with other Optic Disc detection methodologies such as the Active Contour Model, Fuzzy C-Means and Artificial Neural Networks.
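The cup-to-disc ratio (CDR) referenced above is simply the segmented cup diameter divided by the disc diameter; a minimal sketch follows, with an illustrative screening threshold of 0.5 that is a common rule of thumb, not a value from the paper.

```python
# CDR computation from segmented cup and disc diameters (same units).
def cup_to_disc_ratio(cup_diameter: float, disc_diameter: float) -> float:
    return cup_diameter / disc_diameter

def glaucoma_suspect(cdr: float, threshold: float = 0.5) -> bool:
    """Illustrative screening rule: flag eyes whose CDR exceeds the threshold."""
    return cdr > threshold

cdr = cup_to_disc_ratio(42.0, 120.0)  # diameters in pixels, say
print(round(cdr, 2))          # 0.35
print(glaucoma_suspect(cdr))  # False
```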
Citations: 18
An effective enactment of broadcasting XML in wireless mobile environment
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996208
J. Briskilal, D. Satish
Wireless communications are now popular in all aspects; accordingly, to provide an effective enactment of broadcasting, energy efficiency and latency efficiency are addressed by means of Lineage Encoding and twig pattern queries. Lineage Encoding is a scheme that converts XML from byte format into bit format, thereby using bandwidth effectively, and the converted form can still handle twig pattern queries. A twig pattern query provides a very fast reply to users by performing multi-way searching of tree traversals. We also introduce a novel structure named the G node, a group node consisting of a collection of multiple elements, which provides accurate information to users. We propose an XML automation tool that creates customized XML files, so there is no need to rely on a third party for XML files, and no need to store the XML in a repository to extract data for further processing. Dynamic addition of G nodes is possible, so dynamic events can be added without interrupting an existing broadcast channel. There is no depth restriction when creating an XML file in the automation tool.
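A rough sketch of the bit-encoding idea (the paper's actual Lineage Encoding is more elaborate): assign each distinct tag name a fixed-width bit code, so a root-to-node path becomes a short bit string instead of repeated byte tag names.

```python
# Tag names -> fixed-width bit codes; paths -> concatenated bit strings.
def build_codes(tag_names):
    """Assign each distinct tag a fixed-width binary code."""
    width = max(1, (len(tag_names) - 1).bit_length())
    return {t: format(i, f"0{width}b") for i, t in enumerate(tag_names)}

def encode_path(path, codes):
    """Encode a root-to-node path like ['lib','book','title'] as bits."""
    return "".join(codes[t] for t in path)

codes = build_codes(["lib", "book", "title", "author"])
print(codes["book"])                                 # '01'
print(encode_path(["lib", "book", "title"], codes))  # '000110'
```

With four tags, each path step costs 2 bits instead of the several bytes of the tag name, which is the bandwidth saving the abstract refers to.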
Citations: 2
Game theoretical approach for improving throughput capacity in wireless ad hoc networks
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996152
S. Suman, S. Porselvi, L. Bhagyalakshmi, Dhananjay Kumar
In wireless ad hoc networks, Quality of Service (QoS) can be obtained efficiently using a power control scheme, and power control can be achieved by incorporating cooperation among the available links. In this paper, we propose an adaptive pricing scheme that enables the nodes in the network to determine the maximum allowable power for data transmission, so as to avoid inducing interference on the other links in the network. Each node calculates the power which, when used for data transmission with the other nodes, attains a Nash Equilibrium (NE) of the utility function. This in turn helps maximize frequency reuse and thereby improves throughput capacity. Numerical results show that the overall throughput of the network is improved under this scheme.
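As an illustrative sketch of the pricing idea (the gains, noise level and price below are made-up numbers, not the paper's model), each link repeatedly best-responds with the transmit power maximizing a log(1 + SINR) utility minus a linear price on power; the iteration settles at a symmetric operating point strictly inside the power range.

```python
# Toy best-response dynamics for a priced power-control game on two links.
import math

gain = [[1.0, 0.2], [0.2, 1.0]]        # gain[i][j]: link j's power seen at link i
noise, price = 0.1, 2.0                # made-up channel noise and power price
levels = [p / 20 for p in range(21)]   # candidate transmit powers in [0, 1]

def utility(i, p_i, powers):
    """log(1 + SINR) reward minus the price paid for transmit power."""
    interference = sum(gain[i][j] * powers[j]
                       for j in range(len(powers)) if j != i)
    sinr = gain[i][i] * p_i / (noise + interference)
    return math.log(1 + sinr) - price * p_i

powers = [1.0, 1.0]
for _ in range(50):                    # repeated best responses
    for i in range(2):
        powers[i] = max(levels, key=lambda p: utility(i, p, powers))

print(powers)  # both links settle on the same interior power level
```

The price term is what keeps each link from simply transmitting at full power: raising the price lowers the equilibrium powers and hence the interference inflicted on neighbours.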
Citations: 3
An improved dynamic data replica selection and placement in cloud
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996180
A. Rajalakshmi, D. Vijayakumar, Dr.K.G. Srinivasagan
Cloud computing platforms are getting more and more attention as a new trend in data management. Data replication has been widely used to speed up data access in the cloud, and replica selection and placement are the major issues in replication. In this paper we propose an approach for dynamic data replication in the cloud. A replica management system allows users to create and manage replicas, and to update the replicas if the original data are modified. The proposed work concentrates on designing an algorithm for suitable optimal replica selection and placement to increase the availability of data in the cloud. The method consists of two main phases: file application and the replication operation. The first phase locates and creates replicas using a catalog and index; the second phase checks whether there is enough space at the destination to store the requested file. Replication aims to increase resource availability and to minimize access cost, shared bandwidth consumption and delay time. The proposed system was developed under the Eucalyptus cloud environment. The proposed replica selection algorithm achieves better accessibility compared with other methods.
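The two phases above can be sketched minimally as follows: serve a read from the replica site with the lowest access cost, and place a new replica only where enough free space exists. The field names and cost values are illustrative, not from the paper.

```python
# Minimal replica selection (phase 1) and placement space check (phase 2).
def select_replica(sites):
    """Choose the site serving a read at minimum access cost."""
    return min(sites, key=lambda s: s["access_cost"])["name"]

def can_place(site, file_size):
    """Verify the destination has room for the requested file."""
    return site["free_space"] >= file_size

sites = [
    {"name": "dc1", "access_cost": 12, "free_space": 100},
    {"name": "dc2", "access_cost": 5,  "free_space": 20},
]
print(select_replica(sites))    # dc2: cheapest to reach
print(can_place(sites[1], 50))  # False: not enough space on dc2
```

A real system would fold bandwidth and delay into the cost and re-evaluate it as load shifts, per the dynamic behaviour described above.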
{"title":"An improved dynamic data replica selection and placement in cloud","authors":"A. Rajalakshmi, D. Vijayakumar, Dr.K.G. Srinivasagan","doi":"10.1109/ICRTIT.2014.6996180","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996180","url":null,"abstract":"Cloud computing platform is getting more and more attentions as a new trend of data management. Data replication has been widely used to speed up data access in cloud. Replica selection and placement are the major issues in replication. In this paper we propose an approach for dynamic data replication in cloud. A replica management system allows users to create, and manage replicas and update the replicas if the original datas are modified. The proposed work concentrates on designing an algorithm for suitable optimal replica selection and placement to increase availability of data in the cloud. The method consists of two main phases file application and replication operation. The first phase contains the replica location and creation by using catalog and index. In second phase is used to find whether there is enough space in the destination to store the requested file or not. Replication aims to increase availability of resources, minimum access cost, shared bandwidth consumption and delay time by replicating data. The proposed systems developed under the Eucalyptus cloud environment. 
The results of proposed replica selection algorithm achieve better accessibility compared with other methods.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124076483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
Hand based multibiometric authentication using local feature extraction
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996136
B. Bhaskar, S. Veluchamy
Biometrics has wide applications in the fields of security and privacy. Since unimodal biometrics is subject to various problems regarding recognition and security, multimodal biometrics is now used extensively for personal authentication. In this paper we propose an efficient personal identification system using two biometric identifiers: the palm print and the inner knuckle print. In recent years, palm prints and knuckle prints have overtaken other biometric identifiers because of their unique, stable and novel features. The proposed feature extraction method for the palm print is Monogenic Binary Coding (MBC), an efficient approach for extracting palm print features. For inner knuckle print recognition we evaluate two algorithms, the Ridgelet Transform and the Scale Invariant Feature Transform (SIFT), and compare their recognition rates. We then adopt a Support Vector Machine (SVM) to classify the extracted feature vectors. Combining the knuckle print and the palm print for personal identification gives better security and accuracy.
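The binary-coding-and-matching idea behind MBC-style palm-print features can be illustrated with a toy sketch: filter responses are binarized by sign and two codes are compared with a normalized Hamming distance. The thresholds, the hand-written "responses", and the nearest-code matcher below are illustrative assumptions, not the paper's actual pipeline (which uses MBC features and an SVM classifier).

```python
# Toy sketch of binary-code matching in the spirit of Monogenic Binary
# Coding: each sample's filter responses are binarized by sign, and two
# codes are compared with a normalized Hamming distance (0 = identical).

def binarize(responses):
    """Encode real-valued filter responses as a bit code by sign."""
    return [1 if r >= 0 else 0 for r in responses]

def hamming(code_a, code_b):
    """Normalized Hamming distance between two equal-length bit codes."""
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

def match(probe, gallery, threshold=0.25):
    """Return the gallery identity closest to `probe`, if close enough."""
    probe_code = binarize(probe)
    best_id, best_d = None, 1.0
    for identity, responses in gallery.items():
        d = hamming(probe_code, binarize(responses))
        if d < best_d:
            best_id, best_d = identity, d
    return best_id if best_d <= threshold else None

gallery = {
    "alice": [0.9, -0.2, 0.4, -0.7, 0.1, 0.3],
    "bob":   [-0.5, 0.8, -0.1, 0.6, -0.9, -0.2],
}
print(match([0.7, -0.1, 0.5, -0.6, 0.2, 0.1], gallery))  # -> alice
```

The probe binarizes to the same code as alice's template (distance 0), while differing from bob's in every bit; a probe whose distance to all templates exceeds the threshold is rejected rather than force-matched.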
{"title":"Hand based multibiometric authentication using local feature extraction","authors":"B. Bhaskar, S. Veluchamy","doi":"10.1109/ICRTIT.2014.6996136","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996136","url":null,"abstract":"Biometrics has wide applications in the fields of security and privacy. Since unimodal biometrics are subjected to various problems regarding recognition and security, multimodal biometrics have been used extensively nowadays for personal authentication. In this paper we have proposed an efficient personal identification system using two biometric identifiers, palm print and Inner knuckle print. In the recent years, palm prints and knuckle prints have overruled other biometric identifiers because of their unique, stable and novelty feature. The proposed feature extraction method for palm print is Monogenic Binary Coding (MBC), which is an efficient approach for extracting palm print features. Then for inner knuckle print recognition we have tried two algorithms named Ridgelet Transform and Scale Invariant Feature Transform (SIFT). Also we have compared their results in terms of recognition rate. We then adopt Support Vector Machine (SVM) for classifying the extracted feature vectors. Combining both knuckle print and palm print for personal identification will give better security and accuracy.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127039129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Journal
2014 International Conference on Recent Trends in Information Technology