
2014 International Conference on Recent Trends in Information Technology: Latest Publications

Towards secure audit services for outsourced data in cloud
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996214
Sumalatha M R, Hemalathaa S, Monika R, Ahila C
The rapid growth of Cloud Computing introduces a myriad of security hazards to information and data. Data outsourcing relieves users of the responsibility for local data storage and maintenance, but introduces security implications. A third-party service provider stores and maintains the cloud user's data, applications or infrastructure. Auditing methods and infrastructures in the cloud play an important role in cloud security strategies. As data and applications deployed in the cloud become more sensitive, the requirement for auditing systems to provide rapid analysis and quick responses becomes inevitable. In this work we provide a privacy-preserving data integrity protection mechanism by allowing public auditing of cloud storage with the assistance of the data owner's identity. This guarantees that auditing can be done by a third party without fetching the entire data from the cloud. A data protection scheme is also outlined, providing a method that allows data to be encrypted in the cloud without loss of accessibility or functionality for authorized users.
Citations: 2
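The abstract above does not spell out the auditing protocol, so the sketch below only illustrates the general idea of sampling-based integrity auditing: the owner binds each block to an identity-derived tag, and an auditor spot-checks random blocks instead of downloading the whole file. The HMAC-based tags, the block size and all names are assumptions for illustration, not the authors' scheme; in particular, the auditor here holds a delegated key rather than performing truly public verification.

```python
import hashlib
import hmac
import os
import random

BLOCK_SIZE = 4096  # bytes per block (illustrative choice)

def make_tags(owner_id: bytes, key: bytes, blocks: list[bytes]) -> list[bytes]:
    """Owner-side: bind each block to the owner's identity with an HMAC tag."""
    return [
        hmac.new(key, owner_id + i.to_bytes(8, "big") + blk, hashlib.sha256).digest()
        for i, blk in enumerate(blocks)
    ]

def challenge(num_blocks: int, sample_size: int) -> list[int]:
    """Auditor-side: pick a random sample of block indices to spot-check."""
    return random.sample(range(num_blocks), sample_size)

def prove(blocks: list[bytes], indices: list[int]) -> dict[int, bytes]:
    """Server-side: return only the challenged blocks, not the whole file."""
    return {i: blocks[i] for i in indices}

def verify(owner_id: bytes, key: bytes, tags: list[bytes], proof: dict[int, bytes]) -> bool:
    """Auditor-side: recompute tags for the sampled blocks and compare."""
    for i, blk in proof.items():
        expected = hmac.new(key, owner_id + i.to_bytes(8, "big") + blk, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False
    return True

# Toy run: 64 blocks, audit 8 of them without touching the other 56.
data = os.urandom(64 * BLOCK_SIZE)
blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
owner_id, key = b"owner-42", os.urandom(32)
tags = make_tags(owner_id, key, blocks)
idx = challenge(len(blocks), 8)
print(verify(owner_id, key, tags, prove(blocks, idx)))  # True while the blocks are intact
```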
An improved dynamic data replica selection and placement in cloud
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996180
A. Rajalakshmi, D. Vijayakumar, Dr.K.G. Srinivasagan
Cloud computing platforms are getting more and more attention as a new trend in data management. Data replication has been widely used to speed up data access in the cloud. Replica selection and placement are the major issues in replication. In this paper we propose an approach for dynamic data replication in the cloud. A replica management system allows users to create and manage replicas and to update them if the original data is modified. The proposed work concentrates on designing an algorithm for optimal replica selection and placement to increase the availability of data in the cloud. The method consists of two main phases: file application and replication operation. The first phase handles replica location and creation using a catalog and index; the second phase checks whether the destination has enough space to store the requested file. Replication aims to increase the availability of resources while minimizing access cost, shared bandwidth consumption and delay. The proposed system was developed under the Eucalyptus cloud environment. The results show that the proposed replica selection algorithm achieves better accessibility compared with other methods.
Citations: 26
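As a rough illustration of the two-phase idea (locate a replica destination, then check that it has space), the sketch below filters candidate storage nodes by free space and ranks the rest by a weighted latency/load score. The node attributes, weights and scoring rule are assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class StorageNode:
    name: str
    free_space: int        # bytes available
    access_latency: float  # observed access latency in ms
    load: float            # current utilisation, 0.0 - 1.0

def place_replica(nodes, file_size, w_latency=0.7, w_load=0.3):
    """Keep only nodes with enough free space for the requested file,
    then rank the remainder by a weighted latency/load score (lower is better)."""
    candidates = [n for n in nodes if n.free_space >= file_size]
    if not candidates:
        return None  # no destination can hold the requested file
    return min(candidates, key=lambda n: w_latency * n.access_latency + w_load * n.load)

nodes = [
    StorageNode("node-a", free_space=10 * 2**30, access_latency=12.0, load=0.60),
    StorageNode("node-b", free_space=2 * 2**30,  access_latency=5.0,  load=0.20),
    StorageNode("node-c", free_space=50 * 2**30, access_latency=8.0,  load=0.35),
]
print(place_replica(nodes, file_size=5 * 2**30).name)  # node-c in this toy setup
```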
Automatic localization and segmentation of Optic Disc in retinal fundus images through image processing techniques
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996090
R. GeethaRamani, C. Dhanapackiam
Optic Disc location detection and extraction play a main role in the automatic analysis of retinal images. Ophthalmologists analyze the Optic Disc to find the presence or absence of retinal diseases such as Glaucoma, Diabetic Retinopathy, Occlusion, Orbital lymphangioma, Papilloedema, Pituitary Cancer, Open-angle glaucoma, etc. In this paper, we attempt to localize and segment the Optic Disc region of retinal fundus images using a template matching method and a morphological procedure. The optic nerve head appears in the brightest region of the retinal image, and this region is the main one used to detect retinal diseases through the cup-to-disc ratio (CDR) and the ratio between the optic rim and the center of the Optic Disc. The proposed work localizes and segments the Optic Disc, and the corresponding center points and diameters of the retinal fundus images are then determined. We considered the Gold Standard Database (available in a public repository), which comprises 30 retinal fundus images, for our experiments. The Optic Disc is detected and segmented in all images, and the center and diameter of the segmented Optic Disc are evaluated against the ground-truth center points and diameters specified by ophthalmologist experts. The Optic Disc centers and diameters identified by our method are close to this ground truth. The proposed system achieves 98.7% accuracy in locating the Optic Disc when compared with other Optic Disc detection methodologies such as the Active Contour Model, Fuzzy C-Means and Artificial Neural Networks.
Citations: 18
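The paper's pipeline uses template matching and morphological operations; the simplified sketch below only captures the "brightest compact region" intuition with a box-filter search over an integral image, plus the cup-to-disc ratio computation mentioned in the abstract. The window size and the toy image are illustrative assumptions.

```python
import numpy as np

def locate_optic_disc(gray: np.ndarray, window: int = 40):
    """Coarse localisation: the optic disc is usually the brightest compact region,
    so slide a window over the image and return the centre of the window with the
    highest summed intensity (box filter computed via an integral image)."""
    ii = gray.cumsum(axis=0).cumsum(axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))  # integral image with a zero border
    h, w = gray.shape
    best, best_rc = -1.0, (0, 0)
    for r in range(h - window):
        for c in range(w - window):
            s = ii[r + window, c + window] - ii[r, c + window] - ii[r + window, c] + ii[r, c]
            if s > best:
                best, best_rc = s, (r + window // 2, c + window // 2)
    return best_rc

def cup_to_disc_ratio(cup_diameter: float, disc_diameter: float) -> float:
    """CDR used to screen for glaucoma; larger values are more suspicious."""
    return cup_diameter / disc_diameter

# Toy image: dark background with one bright square standing in for the disc.
img = np.zeros((200, 200))
img[60:100, 120:160] = 1.0
print(locate_optic_disc(img))       # approximately (80, 140)
print(cup_to_disc_ratio(1.2, 3.0))  # 0.4
```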
An effective enactment of broadcasting XML in wireless mobile environment
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996208
J. Briskilal, D. Satish
Wireless communications are now popular in all aspects of life; accordingly, to provide an effective enactment of XML broadcasting, energy efficiency and latency efficiency are addressed by means of Lineage Encoding and twig pattern queries. Lineage Encoding is a scheme that converts XML from byte format to bit format, thereby using bandwidth effectively. This encoding scheme can also handle twig pattern queries. A twig pattern query provides a very fast reply to users by performing multi-way searching over tree traversals. A novel structure named the G node, a group node consisting of a collection of elements, provides accurate information to users. We propose an XML automation tool that creates customized XML files, so there is no need to rely on a third party for XML files, nor to store the XML in a repository in order to extract data for further processing. G nodes can be added dynamically so that new events are introduced without interrupting an existing broadcast channel, and there is no depth restriction when creating an XML file in the automation tool.
Citations: 2
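The exact Lineage Encoding used in the paper is not given here, so the sketch below shows a generic prefix-bit labelling of an XML tree in the same spirit: each element's code extends its parent's code, so the ancestor/descendant tests at the heart of twig matching become cheap prefix checks on bits. The bit width per level and the helper names are assumptions.

```python
import xml.etree.ElementTree as ET

BITS_PER_LEVEL = 4  # supports up to 16 children per node in this toy encoding

def lineage_codes(root):
    """Assign each element a compact bit string built from child positions,
    so structural relationships reduce to prefix checks on the codes."""
    codes = {}

    def walk(elem, code):
        codes[elem] = code
        for i, child in enumerate(elem):
            walk(child, code + format(i, f"0{BITS_PER_LEVEL}b"))

    walk(root, "")
    return codes

def is_ancestor(code_a: str, code_d: str) -> bool:
    """a is an ancestor of d iff a's code is a proper prefix of d's code."""
    return len(code_a) < len(code_d) and code_d.startswith(code_a)

doc = ET.fromstring("<lib><book><title/><author/></book><book><title/></book></lib>")
codes = lineage_codes(doc)
book0, title0 = doc[0], doc[0][0]
print(codes[book0], codes[title0])               # 0000 00000000
print(is_ancestor(codes[book0], codes[title0]))  # True
```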
Harnessing the semantic analysis of tag using Semantic Based Lesk Algorithm
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996200
M. Shankar, R. Senthilkumar
In the field of data retrieval, accessing web resources is a frequent task. This domain is shifting radically from amplified data growth to the way in which data is structured and retrieved across the web. The explosive growth of data is the result of billions of people using the Internet and mobile devices for commerce, entertainment and social interactions, as well as the Internet of Things constantly sharing machine-generated data. Even with a lot of research, the task of analyzing this data to extract its business value with precision remains a non-trivial issue. To address this issue, the paper presents a novel Semantic Based Lesk Algorithm (SBLA), which traces the meaning of user-defined tags and categorizes web data by means of a Support Vector Machine (SVM) classifier. Compared with existing methods, the proposed method performs well in extracting admissible data, with better accuracy and precision, as discussed in the result analysis.
Citations: 1
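SBLA builds on the classic Lesk idea of gloss overlap. The sketch below shows plain simplified Lesk over a toy sense inventory, which is only the starting point the paper refines; the inventory, glosses and tag are made up for illustration. The resulting sense label could then feed an SVM-based categorizer as the abstract describes.

```python
def simplified_lesk(tag: str, context: str, sense_inventory: dict) -> str:
    """Pick the sense whose gloss shares the most words with the tag's context
    (classic gloss-overlap Lesk; SBLA refines this basic idea)."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in sense_inventory.get(tag, {}).items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Hypothetical sense inventory for the user-defined tag "apple".
senses = {
    "apple": {
        "fruit":   "edible fruit of the apple tree eaten raw or cooked",
        "company": "technology company that designs phones computers and software",
    }
}
print(simplified_lesk("apple", "new phones and computers from the company", senses))  # company
print(simplified_lesk("apple", "a sweet fruit eaten raw", senses))                    # fruit
```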
An efficient dynamic indexing and metadata based storage in cloud environment
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996151
S. Anjanadevi, D. Vijayakumar, K.G. Srinivasagan
Cloud computing is an emerging computing model wherein tasks are allocated to a combination of software, connections and services accessed over a network. These connections and the network of servers are collectively known as the cloud. In place of operating their own data centers, users can rent computing power and storage capacity from a service provider and pay only for what they use. Cloud storage delivers data storage as a service. If data is stored in the cloud, the system must provide data access and heterogeneity. Advances in cloud computing allow large numbers of images and other data to be stored throughout the world. This paper proposes an indexing and metadata management scheme that helps access distributed data with reduced latency. The metadata management can be enhanced for large-scale file system applications. When designing the metadata, the storage location of the metadata and attributes is important for efficient retrieval of the data. Indexes are used to quickly locate data without having to search every location in storage. Based on these two models, the data can be fetched easily and the search time needed to retrieve the appropriate data is reduced.
Citations: 7
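As a minimal sketch of the metadata-plus-index idea, the class below keeps one metadata record per file and secondary indexes from attribute values to file ids, so lookups intersect small posting sets instead of scanning every storage location. The attribute set and API are assumptions, not the paper's design.

```python
from collections import defaultdict

class MetadataIndex:
    """Minimal in-memory metadata catalogue: each file's attributes are stored once,
    and secondary indexes map (attribute, value) pairs to file ids for fast lookup."""

    def __init__(self):
        self.files = {}                  # file_id -> metadata dict
        self.by_attr = defaultdict(set)  # (attr, value) -> {file_id, ...}

    def add(self, file_id, location, owner, content_type):
        meta = {"location": location, "owner": owner, "type": content_type}
        self.files[file_id] = meta
        for attr, value in meta.items():
            self.by_attr[(attr, value)].add(file_id)

    def lookup(self, **attrs):
        """Intersect the posting sets of all requested attribute/value pairs."""
        sets = [self.by_attr[(a, v)] for a, v in attrs.items()]
        return set.intersection(*sets) if sets else set()

idx = MetadataIndex()
idx.add("img-001", location="node-3/vol-a", owner="alice", content_type="image")
idx.add("doc-007", location="node-1/vol-c", owner="alice", content_type="text")
print(idx.lookup(owner="alice", type="image"))  # {'img-001'}
```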
Detecting cloning attack in Social Networks using classification and clustering techniques
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996166
S. Kiruthiga, P. Kola Sujatha, A. Kannan
Social Networks (SN) are popular among people for interacting with their friends through the internet. Users spend their time on popular social networking sites like Facebook, Myspace and Twitter to share personal information. The cloning attack is one of the most insidious attacks on Facebook. Attackers usually steal a person's images and personal information and create fake profile pages. Once the profile is cloned, they start sending friend requests using the cloned profile. If the real user's account gets blocked, the real user sends new friend requests to their friends; at the same time the cloned profile also sends requests to the same people, making it hard for users to identify the real one. In the proposed system, the clone attack is detected based on the user's action time period and click pattern, which are used to find the similarity between the cloned profile and the real one on Facebook. Using cosine similarity and the Jaccard index, the performance of the similarity measurement between users is improved.
Citations: 20
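The two similarity measures named in the abstract are standard. The sketch below computes the Jaccard index over profile attribute sets and cosine similarity over click-pattern counts, then flags a possible clone when both exceed thresholds; the profile fields and threshold values are illustrative assumptions.

```python
import math
from collections import Counter

def jaccard(a: set, b: set) -> float:
    """Overlap of profile attribute sets (friends, photos, interests, ...)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(p: Counter, q: Counter) -> float:
    """Similarity of click-pattern vectors (counts of actions per feature)."""
    dot = sum(p[k] * q[k] for k in p.keys() & q.keys())
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

real    = {"friends": {"ann", "bob", "cara", "dave"}, "clicks": Counter(like=30, post=12, share=5)}
suspect = {"friends": {"ann", "bob", "cara"},         "clicks": Counter(like=28, post=11, share=4)}

j = jaccard(real["friends"], suspect["friends"])
c = cosine(real["clicks"], suspect["clicks"])
print(j, c)
# Flag as a likely clone when both scores exceed tuned thresholds (values here are illustrative).
print("possible clone" if j > 0.6 and c > 0.9 else "distinct profiles")
```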
CLBC - Cost effective load balanced resource allocation for partitioned cloud system
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996174
M. Sumalatha, C. Selvakumar, T. Priya, R. T. Azariah, P. Manohar
In cloud computing, remote massive data storage and dynamic computation services are provided to users. The cloud enables users to complete their tasks under a pay-as-you-go cost model that typically bills by incurred virtual machine hours, so reducing execution time minimizes computational cost. Therefore the scheduler should deliver maximum throughput in order to achieve effective resource allocation in the cloud. Hence, in this work, DBPS (Deadline Based Pre-emptive Scheduling) and TLBC (Throttled Load Balancing for Cloud), a load balancing model based on cloud partitioning using virtual machines, are proposed. Workload prediction is done using statistics and a training set, so that error tolerance can be achieved in TLBC. The preliminary results obtained when measuring performance by the computational cost of the task set and the number of tasks executed in a particular time show that the proposed TLBC outperforms existing systems. OpenNebula has been used as the cloud management tool for real-time analysis and performance improvement.
Citations: 9
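A throttled load-balancing policy in miniature: each VM advertises a threshold of concurrent tasks, new tasks go to the first VM below its threshold, and the rest are queued until a VM frees up. This is a generic sketch of the throttling idea, not the paper's DBPS/TLBC implementation; all names and thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    threshold: int  # max concurrent tasks this VM may take
    active: int = 0  # tasks currently running

class ThrottledBalancer:
    """Hand a task to the first VM whose active count is below its threshold;
    if every VM is saturated, queue the task until one completes."""

    def __init__(self, vms):
        self.vms = vms
        self.queue = []

    def assign(self, task_id):
        for vm in self.vms:
            if vm.active < vm.threshold:
                vm.active += 1
                return vm.name
        self.queue.append(task_id)
        return None  # throttled: wait for a VM to free up

    def complete(self, vm_name):
        vm = next(v for v in self.vms if v.name == vm_name)
        vm.active -= 1
        if self.queue:
            self.assign(self.queue.pop(0))  # drain the queue as capacity returns

lb = ThrottledBalancer([VirtualMachine("vm-1", threshold=2), VirtualMachine("vm-2", threshold=2)])
print([lb.assign(t) for t in range(5)])  # ['vm-1', 'vm-1', 'vm-2', 'vm-2', None]
```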
A short message classification algorithm for tweet classification
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996189
P. Selvaperumal, A. Suruliandi
Twitter users tweet their views in the form of short text messages. Twitter topic classification assigns tweets to a set of predefined classes. In this work, a new tweet classification method is proposed that makes use of tweet features such as URLs in the tweet, retweeted tweets and tweets from influential users. Experiments were carried out with an extensive tweet data set. The performance of the proposed algorithm in classifying tweets was compared with text classification algorithms such as SVM, Naïve Bayes and KNN. It is observed that the proposed method outperforms the conventional text classification algorithms in classifying tweets.
Citations: 14
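A hedged sketch of the feature side of such an approach: turning one tweet into URL, retweet and author-influence signals plus bag-of-words counts, which could then be vectorised and passed to SVM, Naïve Bayes or KNN for comparison. The influential-user whitelist, the retweet threshold and the field names are assumptions, not the paper's feature set.

```python
import re

INFLUENTIAL_USERS = {"bbcworld", "nasa", "who"}  # hypothetical whitelist of influential accounts

def tweet_features(tweet: str, author: str, retweet_count: int) -> dict:
    """Turn one tweet into a feature dictionary: URL presence, retweet signal,
    author influence, plus simple bag-of-words term counts."""
    features = {
        "has_url": int(bool(re.search(r"https?://\S+", tweet))),
        "is_retweet": int(tweet.startswith("RT @")),
        "highly_retweeted": int(retweet_count > 100),
        "influential_author": int(author.lower() in INFLUENTIAL_USERS),
    }
    for word in re.findall(r"[a-z']+", tweet.lower()):
        features[f"word={word}"] = features.get(f"word={word}", 0) + 1
    return features

sample = tweet_features("RT @NASA: new images from Mars https://t.co/xyz", "nasa", 540)
print({k: v for k, v in sample.items() if not k.startswith("word=")})
# {'has_url': 1, 'is_retweet': 1, 'highly_retweeted': 1, 'influential_author': 1}
# These dictionaries can then be vectorised and fed to SVM, Naïve Bayes or KNN classifiers.
```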
A novel proposal to effectively combine multipath data forwarding for data center networks with congestion control and load balancing using Software-Defined Networking Approach
Pub Date : 2014-04-10 DOI: 10.1109/ICRTIT.2014.6996178
Arijit Mallik, S. Hegde
Modern data center networks (DCNs) often use multi-rooted topologies, which offer multipath capability, for increased bandwidth and fault tolerance. However, traditional routing algorithms for the Internet have no or limited support for multipath routing and cannot fully utilize the available bandwidth in such DCNs. As a result, they route all the traffic through a single path and thus create congestion. Multipath (MP) routing might be a good alternative, but it is not sufficient on its own to handle the congestion that comes from contention among end stations. Dynamic load balancing, on the other hand, protects the network from sudden congestion caused by load spikes or link failures. However, little work has been done to incorporate all these features in a single, comprehensive solution for Data Center Ethernet (DCE). In this paper, we propose a novel method that integrates dynamic load balancing and a multipath scheme with congestion control (CC), using a pure Software-Defined Networking (SDN) approach. SDN decouples the control plane from the data forwarding plane, which reduces the overhead on network switches. The major objectives our solution attempts to achieve are efficient utilization of network resources, high throughput and minimal frame loss.
Citations: 13
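A minimal sketch of controller-side multipath selection with a congestion guard: among the candidate paths the controller knows for a host pair, drop those above a utilisation threshold and place the new flow on the least-loaded remainder. The topology view, the threshold and per-flow (rather than per-packet) placement are assumptions for illustration, not the paper's mechanism.

```python
import random

# Hypothetical view an SDN controller might keep: candidate paths between two
# hosts together with their currently measured link utilisation.
paths = {
    ("h1", "h2"): [
        {"hops": ["s1", "s3", "s5"], "utilisation": 0.72},
        {"hops": ["s1", "s4", "s5"], "utilisation": 0.35},
        {"hops": ["s2", "s4", "s6"], "utilisation": 0.40},
    ]
}

def pick_path(src, dst, congestion_threshold=0.8):
    """Drop paths above the congestion threshold, then send the new flow over the
    least-utilised remaining path. Per-flow placement keeps a flow's packets in order."""
    candidates = [p for p in paths[(src, dst)] if p["utilisation"] < congestion_threshold]
    if not candidates:  # everything congested: fall back to a random choice
        return random.choice(paths[(src, dst)])
    return min(candidates, key=lambda p: p["utilisation"])

flow_path = pick_path("h1", "h2")
print(flow_path["hops"])  # ['s1', 's4', 's5'], the least-loaded path in this toy view
```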