Towards secure audit services for outsourced data in cloud
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996214
Sumalatha M R, Hemalathaa S, Monika R, Ahila C
The rapid growth of Cloud Computing introduces a myriad of security hazards to information and data. Data outsourcing relieves users of the burden of local data storage and maintenance, but it carries security implications: a third-party service provider stores and maintains the cloud user's data, applications, or infrastructure. Auditing methods and infrastructures therefore play an important role in cloud security strategies. As the data and applications deployed in the cloud become more sensitive, auditing systems that provide rapid analysis and quick responses become indispensable. In this work we provide a privacy-preserving data integrity protection mechanism that allows public auditing of cloud storage with the assistance of the data owner's identity. This ensures that auditing can be performed by a third party without fetching the entire data set from the cloud. A data protection scheme is also outlined, providing a method for data to be encrypted in the cloud without loss of accessibility or functionality for authorized users.
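The central idea, verifying integrity without downloading the full file, can be illustrated with a sampling-based challenge-response check. The sketch below is a private-verification simplification using HMAC block tags, not the paper's identity-assisted public-auditing scheme; the block layout, key handling, and `fetch_block` callback are assumptions for illustration.

```python
import hashlib
import hmac
import os
import random

def tag_blocks(key: bytes, blocks: list) -> list:
    """Owner-side setup: compute a keyed tag per block before outsourcing."""
    return [hmac.new(key, i.to_bytes(8, "big") + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def audit(key: bytes, tags: list, fetch_block, sample_size: int = 3) -> bool:
    """Auditor-side check: challenge a random sample of block indices
    instead of fetching the entire file from the cloud."""
    challenged = random.sample(range(len(tags)), min(sample_size, len(tags)))
    for i in challenged:
        block = fetch_block(i)  # server returns only the challenged block
        expected = hmac.new(key, i.to_bytes(8, "big") + block,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False  # tampering or data loss detected
    return True

key = os.urandom(32)
blocks = [b"chunk-%d" % i for i in range(10)]
tags = tag_blocks(key, blocks)
print(audit(key, tags, lambda i: blocks[i]))                  # True
blocks[4] = b"tampered"
print(audit(key, tags, lambda i: blocks[i], sample_size=10))  # False
```

Sampling keeps the audit cheap: checking a random subset of blocks detects large-scale corruption with high probability while transferring only a few blocks per challenge.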
{"title":"Towards secure audit services for outsourced data in cloud","authors":"Sumalatha M R, Hemalathaa S, Monika R, Ahila C","doi":"10.1109/ICRTIT.2014.6996214","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996214","url":null,"abstract":"The rapid growth in the field of Cloud Computing introduces a myriad of security hazards to the information and data. Data outsourcing relieves the responsibility of local data storage and maintenance, but introduces security implications. A third party service provider, stores and maintains data, application or infrastructure of cloud user. Auditing methods and infrastructures in cloud play an important character in cloud security strategies. As data and applications deployed in the cloud are more delicate, the requirement for auditing systems to provide rapid analysis and quick responses becomes inevitable. In this work we provide a privacy-preserving data integrity protection mechanism by allowing public auditing for cloud storage with the assistance of the data owner's identity. This guarantees the auditing can be done by the third party without fetching the entire data from the cloud. A data protection scheme is also outlined, by providing a method to allow for data to be encrypted in the cloud without loss of accessibility or functionality for the authorized users.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129404636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improved dynamic data replica selection and placement in cloud
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996180
A. Rajalakshmi, D. Vijayakumar, Dr.K.G. Srinivasagan
Cloud computing platforms are attracting increasing attention as a new model of data management. Data replication is widely used to speed up data access in the cloud, and replica selection and placement are its major issues. In this paper we propose an approach for dynamic data replication in the cloud. A replica management system allows users to create and manage replicas, and it updates the replicas if the original data are modified. The proposed work concentrates on designing an algorithm for optimal replica selection and placement that increases the availability of data in the cloud. The method consists of two main phases: file application and replication operation. The first phase locates and creates replicas using a catalog and index. The second phase determines whether the destination has enough space to store the requested file. Replication aims to increase resource availability while minimizing access cost, shared bandwidth consumption, and delay time. The proposed system was developed in the Eucalyptus cloud environment, and the results show that the proposed replica selection algorithm achieves better accessibility than other methods.
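As a concrete illustration of the second phase's space check combined with replica selection, the toy sketch below keeps only the sites with enough free space and then prefers the lowest-latency candidate. The site names, capacities, and latency-first policy are invented assumptions, not the paper's algorithm.

```python
# Hypothetical storage sites with free space (GB) and access latency (ms).
sites = [
    {"name": "site-A", "free_gb": 120, "latency_ms": 42},
    {"name": "site-B", "free_gb": 15,  "latency_ms": 11},
    {"name": "site-C", "free_gb": 300, "latency_ms": 27},
]

def place_replica(sites, file_size_gb):
    """Keep only destinations with enough space (the second-phase check),
    then place the replica at the lowest-latency candidate."""
    candidates = [s for s in sites if s["free_gb"] >= file_size_gb]
    if not candidates:
        return None  # no destination can hold the file
    best = min(candidates, key=lambda s: s["latency_ms"])
    best["free_gb"] -= file_size_gb
    return best["name"]

print(place_replica(sites, 50))  # site-C: site-B is faster but too small
```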
{"title":"An improved dynamic data replica selection and placement in cloud","authors":"A. Rajalakshmi, D. Vijayakumar, Dr.K.G. Srinivasagan","doi":"10.1109/ICRTIT.2014.6996180","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996180","url":null,"abstract":"Cloud computing platform is getting more and more attentions as a new trend of data management. Data replication has been widely used to speed up data access in cloud. Replica selection and placement are the major issues in replication. In this paper we propose an approach for dynamic data replication in cloud. A replica management system allows users to create, and manage replicas and update the replicas if the original datas are modified. The proposed work concentrates on designing an algorithm for suitable optimal replica selection and placement to increase availability of data in the cloud. The method consists of two main phases file application and replication operation. The first phase contains the replica location and creation by using catalog and index. In second phase is used to find whether there is enough space in the destination to store the requested file or not. Replication aims to increase availability of resources, minimum access cost, shared bandwidth consumption and delay time by replicating data. The proposed systems developed under the Eucalyptus cloud environment. The results of proposed replica selection algorithm achieve better accessibility compared with other methods.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124076483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic localization and segmentation of Optic Disc in retinal fundus images through image processing techniques
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996090
R. GeethaRamani, C. Dhanapackiam
Optic Disc localization and extraction play a central role in the automated analysis of retinal images. Ophthalmologists examine the Optic Disc to determine the presence or absence of retinal diseases such as glaucoma, diabetic retinopathy, occlusion, orbital lymphangioma, papilloedema, and pituitary cancer. In this paper, we localize and segment the Optic Disc region of retinal fundus images using template matching and morphological processing. The optic nerve head lies in the brightest region of the retinal image and serves as the key region for detecting retinal diseases through the cup-to-disc ratio (CDR) and the ratio between the optic rim and the center of the Optic Disc. The proposed work localizes and segments the Optic Disc, then determines the corresponding center point and diameter for each retinal fundus image. We used the Gold Standard Database (available in a public repository), comprising 30 retinal fundus images, for our experiments. The Optic Disc is located and segmented in all images, and the centers and diameters of the segmented Optic Discs are evaluated against the ground-truth centers and diameters specified by expert ophthalmologists; the values identified by our method are close to this ground truth. The proposed system achieves 98.7% accuracy in locating the Optic Disc, compared with other detection methodologies such as the Active Contour Model, Fuzzy C-Means, and Artificial Neural Networks.
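A minimal sketch of brightest-region localization and the CDR arithmetic, run on a synthetic image rather than a real fundus photograph; the thresholding percentile and the cup measurement are assumptions, and the paper itself uses template matching and morphology rather than this shortcut.

```python
import numpy as np

# Synthetic grayscale fundus stand-in: the optic disc is the brightest blob.
yy, xx = np.mgrid[0:64, 0:64]
img = 200 * np.exp(-((yy - 20) ** 2 + (xx - 40) ** 2) / 60.0)

# Localize: centroid of the brightest pixels (top 1% of intensities).
thresh = np.percentile(img, 99)
ys, xs = np.nonzero(img >= thresh)
center = (ys.mean(), xs.mean())

# Diameter estimate: treat the thresholded blob as a circle of equal area.
disc_diameter = 2 * np.sqrt(len(ys) / np.pi)

# Cup-to-disc ratio (CDR) as used in glaucoma screening: cup diameter over
# disc diameter; the 0.4 factor stands in for a measured cup diameter.
cup_diameter = 0.4 * disc_diameter
print(f"center={center}, disc diameter={disc_diameter:.1f} px, "
      f"CDR={cup_diameter / disc_diameter:.2f}")
```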
{"title":"Automatic localization and segmentation of Optic Disc in retinal fundus images through image processing techniques","authors":"R. GeethaRamani, C. Dhanapackiam","doi":"10.1109/ICRTIT.2014.6996090","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996090","url":null,"abstract":"The Optic Disc location detection and extraction are main role of automatically analyzing of retinal image. Ophthalmologists analyze the Optic Disc for finding the presence or absence of retinal diseases viz. Glaucoma, Diabetic Retinopathy, Occlusion, Orbital lymphangioma, Papilloedema, Pituitary Cancer, Open-angle glaucoma etc. In this paper, we attempted to localize and segment the Optic Disc region of retinal fundus images by template matching method and morphological procedure. The optic nerve is originate in the brightest region of retinal image and it act as a main region to detect the retinal diseases using the ratio of cup and disc(CDR) and the ratio between Optic rim & center of the Optic Disc. The proposed work localizes and segments the Optic Disc then the corresponding center points & diameter of retinal fundus images are determined. We have considered the Gold Standard Database (available at public repository) that comprises of 30 retinal fundus images to our experiments. The location of Optic Disc is detected, segmented for all images and the center & diameter of segmented Optic Disc are evaluated against the Optic Disc center points & diameter (ground truth specified by ophthalmologist experts). The Optic Disc centers & diameter identified through our method are near close to ground truth provided by the ophthalmologist experts. The proposed system achieves 98.7% accuracy in locating the Optic Disc while compare with other Optic Disc detection methodologies such as Active Contour Model, Fuzzy C-Means, Artificial Neural Network.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130691900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An effective enactment of broadcasting XML in wireless mobile environment
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996208
J. Briskilal, D. Satish
Wireless communication is now popular across all domains, so we address energy efficiency and latency efficiency for effective XML broadcasting by means of Lineage Encoding and twig pattern queries. Lineage Encoding converts XML from byte format into bit format, making effective use of bandwidth, and the encoded form can still answer twig pattern queries. A twig pattern query responds to users very quickly by performing a multi-way search over tree traversals. We also introduce a novel structure called the G node, a group node holding a collection of elements, which delivers accurate information to users. We propose an XML automation tool that creates customized XML files, so there is no need to rely on a third party for XML files, nor to store the XML in a repository in order to extract data for further processing. G nodes can be added dynamically, so new events can be accommodated without interrupting an existing broadcast channel, and the automation tool imposes no depth restriction when creating an XML file.
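For readers unfamiliar with twig pattern queries, the sketch below runs one over a toy XML document using only the standard library; a real broadcast channel would carry the bit-level lineage-encoded form, and the event schema here is invented.

```python
import xml.etree.ElementTree as ET

# Toy broadcast document; real channels would carry lineage-encoded bits.
doc = ET.fromstring("""
<events>
  <event><type>traffic</type><area>north</area><sev>high</sev></event>
  <event><type>weather</type><area>north</area><sev>low</sev></event>
  <event><type>traffic</type><area>south</area><sev>low</sev></event>
</events>
""")

def twig_match(root, branches):
    """Match a twig pattern: an element whose children satisfy all
    (tag, text) branch predicates simultaneously (a multi-way join)."""
    for node in root.iter("event"):
        if all(node.findtext(tag) == text for tag, text in branches):
            yield node

# Twig query: //event[type='traffic'][area='north']
for hit in twig_match(doc, [("type", "traffic"), ("area", "north")]):
    print(ET.tostring(hit, encoding="unicode").strip())
```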
{"title":"An effective enactment of broadcasting XML in wireless mobile environment","authors":"J. Briskilal, D. Satish","doi":"10.1109/ICRTIT.2014.6996208","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996208","url":null,"abstract":"In this new scenario, Wireless communications are very much popular in all aspects, accordingly to provide an effective enactment of broadcasting energy efficiency and latency efficiency are considered by means of Lineage Encoding and Twig pattern queries. Lineage encoding is the scheme to convert the XML byte formats into bit formats, thereby providing effective achieving of bandwidth. Also this converting scheme can handle twig pattern queries. A twig pattern query provides a very fast reply to the users by performing multi-way searching of tree traversals. And a novel methodology named G node which is a group node consisting collection of multi elements. This provides the accurate information to the users. We propose an XML automation tool that creates customized xml files .so that there is no need of relying on third party for xml files. And also there is no need of storing the xml in the repository to extract the data for further process. Dynamic addition of G nodes is possible in order to add dynamic events without interrupting an existing broadcasting channel. And there is no depth restriction for creating XML file in an automation tool.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128972232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Harnessing the semantic analysis of tag using Semantic Based Lesk Algorithm
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996200
M. Shankar, R. Senthilkumar
In the field of data retrieval, accessing web resources is a frequent task. The domain is shifting radically, from coping with amplified data growth to the way data is structured and retrieved across the web. This explosive growth results from billions of people using the Internet and mobile devices for commerce, entertainment, and social interaction, as well as from the Internet of Things constantly sharing machine-generated data. Despite extensive research, analyzing this data to extract its business value with precision remains a non-trivial problem. To address this issue, the paper presents a novel Semantic Based Lesk Algorithm (SBLA), which traces the meaning of user-defined tags and categorizes web data by means of a Support Vector Machine (SVM) classifier. Compared with existing methods, the proposed method performs well in extracting admissible data with better accuracy and precision, as discussed in the result analysis.
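To make the Lesk step concrete, here is classic gloss-overlap Lesk in miniature: it picks the sense whose dictionary gloss shares the most words with the tag's context. The two-sense gloss table is a hand-written toy standing in for a real sense inventory such as WordNet; the paper's SBLA refinements and SVM stage are not reproduced.

```python
# Toy sense inventory: word -> {sense label: gloss text}.
GLOSSES = {
    "bank": {
        "finance": "an institution that accepts deposits and lends money",
        "river": "the sloping land beside a body of water",
    },
}

def lesk(word, context_words):
    """Return the sense whose gloss overlaps most with the context."""
    context = {w.lower() for w in context_words}
    best_sense, best_overlap = None, -1
    for sense, gloss in GLOSSES[word].items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(lesk("bank", "loan money deposits interest".split()))   # finance
print(lesk("bank", "fishing by the water and land".split()))  # river
```

In a pipeline like the one described, the disambiguated sense would then be fed to the SVM as a feature for categorizing the tagged web data.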
{"title":"Harnessing the semantic analysis of tag using Semantic Based Lesk Algorithm","authors":"M. Shankar, R. Senthilkumar","doi":"10.1109/ICRTIT.2014.6996200","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996200","url":null,"abstract":"In the field of Data retrieval, accessing web resources is frequent task. This domain is shifting radically from the amplified data growth to the way in which it is structured and retrieved across web. This explosive growth of data is the result of billions of people using the Internet and mobile devices for commerce, entertainment, social interactions and as well as the Internet of things that constantly share machine-generated data. Even with lot of research, the task of analyzing this data to extract its business values with precision still remains as a trivial issue. To address this issue, the paper presents a novel Semantic Based Lesk Algorithm (SBLA), which traces the meaning of user defined tags and categorizes the web data by means of Support Vector Machine (SVM) classifier. On comparing with existing methods, the proposed method performs well in extraction of admissible data with the better accuracy and precision as discussed in result analysis.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"30 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132869878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient dynamic indexing and metadata based storage in cloud environment
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996151
S. Anjanadevi, D. Vijayakumar, K. G. Srinivasagan
Cloud computing is an emerging computing model in which tasks are allocated to software, combinations of connections, and services accessed over a network. This network of connected servers is collectively known as the cloud. Instead of operating their own data centers, users can rent computing power and storage capacity from a service provider and pay only for what they use. Cloud storage delivers data storage as a service; data stored in the cloud must support both access and heterogeneity. Advances in cloud computing allow large numbers of images and other data to be stored throughout the world. This paper proposes indexing and metadata management that provide access to distributed data with reduced latency, and the metadata management can be scaled to large file-system applications. When designing the metadata, the storage location of the metadata and attributes is important for efficient retrieval. Indexes are used to quickly locate data without searching every location in storage. With these two models, data can be fetched easily and the search time to retrieve the appropriate data is reduced.
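A minimal sketch of the two structures the abstract pairs together: a metadata table that records each file's storage location and an inverted index over attribute values, so lookups avoid scanning every location. The field names and API are assumptions for illustration.

```python
from collections import defaultdict

metadata = {}             # file id -> attributes, including location
index = defaultdict(set)  # attribute value -> set of file ids

def store(file_id, location, **attrs):
    """Record a file's location and attributes, updating the index."""
    metadata[file_id] = {"location": location, **attrs}
    for value in attrs.values():
        index[value].add(file_id)

def lookup(value):
    """Locate files by attribute without scanning every storage node."""
    return [(fid, metadata[fid]["location"]) for fid in index[value]]

store("img-001", "node-3/vol-a", owner="alice", kind="image")
store("doc-042", "node-1/vol-c", owner="alice", kind="text")
print(lookup("alice"))  # both files, each with its storage location
```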
{"title":"An efficient dynamic indexing and metadata based storage in cloud environment","authors":"S. Anjanadevi, D. Vijayakumar, K. .. Srinivasagan","doi":"10.1109/ICRTIT.2014.6996151","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996151","url":null,"abstract":"Cloud computing is an emerging, computing model wherein the tasks are allocated to software, combination of connections, and services accessed over a network. This connections and network of servers is collectively known as the cloud. In place of operating their own data centers, users might rent computing power and storage capacity from a service provider and pays only for what they use. Cloud storage is delivering the data storage as service. If the data is stored in cloud, it must provide the data access and heterogeneity. With the advances in cloud computing it allows storing of large number of images and data throughout the world. This paper proposes the indexing and metadata management which helps to access the distributed data with reduced latency. The metadata management can be enhanced for large scale file system applications. When designing the metadata, the storage location of the metadata and attributes is important for the efficient retrieval of the data. Indexes are used to quickly locate data without having to search over every location in storage. Based on these two models, the data can be easily fetched and the search time was reduced to retrieve the appropriate data.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133641606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting cloning attack in Social Networks using classification and clustering techniques
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996166
S. Kiruthiga, P. Kola Sujatha, A. Kannan
Social Networks (SN) are popular among people for interacting with their friends over the Internet. Users spend time on popular social networking sites such as Facebook, Myspace, and Twitter, sharing personal information. The cloning attack is one of the most insidious attacks on Facebook: attackers steal a person's images and personal information and create a fake profile page. Once a profile is cloned, the attacker starts sending friend requests from the cloned profile. If the real user's account gets blocked, the user sends new friend requests to their friends, while the clone sends requests to the same people, making it hard for those users to identify the genuine account. In the proposed system, the cloning attack is detected from users' action time periods and click patterns, which are used to measure the similarity between the cloned and real profiles on Facebook. Using cosine similarity and the Jaccard index improves this similarity measurement.
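The two similarity measures the abstract names are easy to make concrete. Below, cosine similarity compares hypothetical click-pattern vectors and the Jaccard index compares sets of clicked pages; the equal weighting of the two scores is an assumption, not the paper's formula.

```python
import math

def cosine(u, v):
    """Cosine similarity between two numeric activity vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def jaccard(a, b):
    """Jaccard index between two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical hourly click counts and clicked-page sets for two profiles.
real_clicks    = [5, 0, 2, 9, 1]
suspect_clicks = [4, 0, 3, 8, 1]
real_pages    = {"wall", "photos", "groups", "events"}
suspect_pages = {"wall", "photos", "messages"}

score = 0.5 * cosine(real_clicks, suspect_clicks) \
      + 0.5 * jaccard(real_pages, suspect_pages)
print(f"similarity={score:.2f}")  # a high score flags a possible clone
```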
{"title":"Detecting cloning attack in Social Networks using classification and clustering techniques","authors":"S. Kiruthiga, P. Kola Sujatha, A. Kannan","doi":"10.1109/ICRTIT.2014.6996166","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996166","url":null,"abstract":"Social Networks (SN) are popular among the people to interact with their friends through the internet. Users spending their time in popular social networking sites like facebook, Myspace and twitter to share the personal information. Cloning attack is one of the insidious attacks in facebook. Usually attackers stole the images and personal information about a person and create the fake profile pages. Once the profile gets cloned they started to send a friend request using the cloned profile. Incase if the real users account gets blocked, they used to send a new friend request to their friends. At the same time cloned one also sending the request to the person. At that time it was hard to identify the real one for users. In the proposed system the clone attack is detected based on user action time period and users click pattern to find the similarity between the cloned profile and real one in facebook. Using Cosine similarity and Jaccard index the performance of the similarity between the users is improved.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130959535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CLBC - Cost effective load balanced resource allocation for partitioned cloud system
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996174
M. Sumalatha, C. Selvakumar, T. Priya, R. T. Azariah, P. Manohar
In cloud computing, massive remote data storage and dynamic computation services are provided to users. The cloud lets users complete their tasks under a pay-as-you-go cost model, typically billed by incurred virtual machine hours, so reducing execution time minimizes computational cost. The scheduler should therefore deliver maximum throughput in order to achieve effective resource allocation in the cloud. Hence, this work proposes DBPS (Deadline Based Pre-emptive Scheduling) and TLBC (Throttled Load Balancing for Cloud), a load-balancing model based on cloud partitioning using virtual machines. Workload prediction is done using statistics and a training set, so that error tolerance can be achieved in TLBC. Preliminary results, measuring performance by the computational cost of the task set and the number of tasks executed in a given time, show that the proposed TLBC outperforms existing systems. OpenNebula is used as the cloud management tool for real-time analysis and performance improvement.
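A toy sketch of the two scheduling ideas: tasks are ordered by deadline (DBPS-style) and then assigned to the least-loaded VM that is still below its threshold (throttled load balancing). VM names, capacities, and deadlines are invented for illustration, not taken from the paper.

```python
# Throttled pool: each VM accepts tasks only while below its threshold.
vms = {"vm-1": {"load": 2, "cap": 3},
       "vm-2": {"load": 0, "cap": 3},
       "vm-3": {"load": 3, "cap": 3}}

def assign(task):
    """Send the task to the least-loaded VM with remaining headroom."""
    eligible = {n: s for n, s in vms.items() if s["load"] < s["cap"]}
    if not eligible:
        return None  # all VMs throttled: queue the task instead
    name = min(eligible, key=lambda n: eligible[n]["load"])
    vms[name]["load"] += 1
    return name

tasks = [("t1", 9), ("t2", 3), ("t3", 6)]        # (name, deadline)
for name, _ in sorted(tasks, key=lambda t: t[1]):  # earliest deadline first
    print(name, "->", assign(name))
```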
{"title":"CLBC - Cost effective load balanced resource allocation for partitioned cloud system","authors":"M. Sumalatha, C. Selvakumar, T. Priya, R. T. Azariah, P. Manohar","doi":"10.1109/ICRTIT.2014.6996174","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996174","url":null,"abstract":"In cloud computing, remote based massive data storage and dynamic computation services are provided to the users. The cloud enables the user to complete their tasks using pay-as-you-go cost model which typically works on the incurred virtual machine hours, so reducing the execution time will minimize the computational cost. Therefore the scheduler should bring maximum throughput in order to achieve effective resource allocation in cloud. Hence, in this work, DBPS (Deadline Based Pre-emptive Scheduling) and a TLBC (Throttled Load Balancing for Cloud) load balancing model based on cloud partitioning using virtual machine has been proposed. Workload prediction is done using statistics and training set, so that error tolerance can be achieved in TLBC. The preliminary results obtained when measuring performance based on the computational cost of the task set and the number of tasks executed in a particular time shows the proposed TLBC outperforms compared with existing systems. OpenNebula has been used as the cloud management tool for doing real time analysis and improving performance.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130663596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A short message classification algorithm for tweet classification
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996189
P. Selvaperumal, A. Suruliandi
Twitter users tweet their views in the form of short text messages. Twitter topic classification assigns tweets to a set of predefined classes. In this work, a new tweet classification method is proposed that exploits tweet features such as URLs in the tweet, retweeted tweets, and tweets from influential users. Experiments were carried out on an extensive tweet data set. The performance of the proposed algorithm in classifying tweets was compared with that of text classification algorithms such as SVM, Naïve Bayes, and KNN. The proposed method is observed to outperform these conventional text classification algorithms in classifying tweets.
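The three tweet features the abstract highlights can be sketched as a simple extractor; the influencer list, follower threshold, and `RT @` heuristic are assumptions for illustration, not the paper's definitions.

```python
import re

# Hypothetical list of influential accounts.
INFLUENTIAL = {"@bbc", "@nasa"}

def features(tweet: str, author: str, follower_count: int) -> dict:
    """Extract the three signals: URL presence, retweet status,
    and whether the author counts as influential."""
    return {
        "has_url": bool(re.search(r"https?://\S+", tweet)),
        "is_retweet": tweet.startswith("RT @"),
        "influential_author": author in INFLUENTIAL
                              or follower_count > 100_000,
    }

print(features("RT @nasa: new images https://t.co/xyz", "@fan42", 310))
```

A downstream classifier would consume these boolean features alongside the tweet text itself.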
{"title":"A short message classification algorithm for tweet classification","authors":"P. Selvaperumal, A. Suruliandi","doi":"10.1109/ICRTIT.2014.6996189","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996189","url":null,"abstract":"Twitter users tweet their views in the form of short text messages. Twitter topic classification is classifying the tweets in to a set of predefined classes. In this work, a new tweet classification Method that makes use of tweet features like URL's in the tweet, retweeted tweets and influential users tweet is proposed. Experiments were carried out with extensive tweet data set. The performance of the proposed algorithm in classifying the tweets was compared with the text classification algorithms like SVM, Naïve Bayes, KNN etc. It is observed that the proposed method outclasses the conventional text classification algorithms in classifying the tweets.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114713257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel proposal to effectively combine multipath data forwarding for data center networks with congestion control and load balancing using Software-Defined Networking Approach
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996178
Arijit Mallik, S. Hegde
Modern data center networks (DCNs) often use multi-rooted topologies, which offer multipath capability, for increased bandwidth and fault tolerance. However, traditional Internet routing algorithms have little or no support for multipath routing and cannot fully utilize the available bandwidth in such DCNs; they route all traffic through a single path and thus create congestion. Multipath (MP) routing is a good alternative, but alone it is not sufficient to handle congestion arising from contention among end stations. Dynamic load balancing, on the other hand, protects the network from sudden congestion caused by load spikes or link failures. Little work has been done to incorporate all these features into a single, comprehensive solution for Data Center Ethernet (DCE). In this paper, we propose a novel method that integrates dynamic load balancing and a multipath scheme with congestion control (CC) using a pure Software-Defined Networking (SDN) approach. SDN decouples the control plane from the data forwarding plane, reducing the overhead on network switches. The major objectives of our solution are efficient utilization of network resources, high throughput, and minimal frame loss.
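A controller-side sketch of congestion-aware multipath selection: among precomputed candidate paths, the next flow takes the path whose bottleneck link has the lowest utilization. The topology, utilization figures, and min-max policy are illustrative assumptions, not the paper's exact mechanism.

```python
# Two precomputed paths between the same edge switches, as (src, dst) links.
paths = {
    "p1": [("s1", "s2"), ("s2", "s4")],
    "p2": [("s1", "s3"), ("s3", "s4")],
}
# Current link utilization as reported to the SDN controller (fractions).
link_util = {("s1", "s2"): 0.80, ("s2", "s4"): 0.35,
             ("s1", "s3"): 0.40, ("s3", "s4"): 0.45}

def pick_path(paths, link_util):
    """Bottleneck utilization = max link load on a path; pick the path
    minimizing it (min-max), spreading flows away from hot links."""
    return min(paths, key=lambda p: max(link_util[l] for l in paths[p]))

chosen = pick_path(paths, link_util)
print(chosen)               # p2: bottleneck 0.45 versus p1's 0.80
for link in paths[chosen]:  # a real controller would now install flow
    link_util[link] += 0.10  # rules and refresh link-state estimates
```

Because the controller holds a global view of link state, this decision is made once per flow at the control plane, leaving the switches' data planes simple.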
{"title":"A novel proposal to effectively combine multipath data forwarding for data center networks with congestion control and load balancing using Software-Defined Networking Approach","authors":"Arijit Mallik, S. Hegde","doi":"10.1109/ICRTIT.2014.6996178","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996178","url":null,"abstract":"Modern data center networks (DCNs) often use multi-rooted topologies, which offer multipath capability, for increased bandwidth and fault tolerance. However, traditional routing algorithms for the Internet have no or limited support for multipath routing, and cannot fully utilize available bandwidth in such DCNs. As a result, they route all the traffic through a single path, and thus form congestion. Multipath (MP) routing might be a good alternative, but is not sufficient alone to handle congestion that comes from the contention of end stations. Dynamic load balancing, on the other hand, protects the network from sudden congestions which could be caused by load spikes or link failures. However, little work has been done to incorporate all these features in a single and comprehensive solution for Data Center Ethernet (DCE). In this paper, we propose a novel method that attempts to integrate dynamic load balancing, multi-path scheme with congestion control (CC), with the use of pure Software-Defined-Networking (SDN) approach. SDN decouples control plane from the data forwarding plane, which reduces the overheads of the network switches. The major objectives that our solution attempts to achieve are, efficient utilization of network resources, high throughput and minimal frame loss.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115912833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}