Text summarization is the process of condensing a source text into a shorter version while preserving its information content and original meaning. Summarizing a very large number of documents by hand is difficult, if not impossible, for human beings. Text summarization methods are divided into two categories: extractive and abstractive summarization. Extractive summarization selects significant sentences, paragraphs, etc. from the original documents and joins them into a shorter form; the importance of a sentence is decided by its statistical and linguistic features. An abstractive method of summarization, on the other hand, consists of understanding the original text and re-telling it in fewer words. It uses linguistic approaches to examine and interpret the text and to find new concepts and expressions that best describe it, generating a new, shorter text that conveys the most meaningful facts from the original document. This paper presents a detailed study of text summarization systems.
{"title":"Summarization Techniques of Cloud Computing","authors":"Ankita Gupta, Deepak Motwani","doi":"10.1145/2979779.2979845","DOIUrl":"https://doi.org/10.1145/2979779.2979845","url":null,"abstract":"Text summarization is a process in which it abbreviating the source script into a short version maintaining its information content with its original meaning. It is an impossible or difficult task for human beings to summarize very large number of documents by hand. The word text summarization methods divided into two parts extractive or abstractive summarization. The extractive summarization technique extracts by selecting significant sentences, paragraphs etc from its original documents and connect them into a short form. The status of sentence is decided by sentences arithmetical and dialectal features. In other hands an abstractive method of summarization entails of understanding the unique text and re-telling it in a few words. It uses linguistic approaches to inspect and decipher the text and find the new observations and expressions to best define it by engendering a new shorter text that delivers the most meaningful facts from the original text document. A deep study of Text Summarization systems has been presented in this paper.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126459998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recommender systems play the major role of filtering the needed information from an enormous amount of overloaded information. From e-commerce to movie websites, recommender systems are used to market products to customers. A recommender system also gains user trust by suggesting products of interest based on the customer's profile and other related information. So, when the recommender system goes wrong or suggests an irrelevant product, the customer will stop trusting and using it. This kind of scenario affects the customer as well as the e-commerce and other websites that depend on recommender systems to boost sales. There is a significant need to correct the recommender system when it goes wrong, since wrong recommendations weaken user trust and diminish the efficiency of the system. In this paper, we define a scrutable algorithm, based on a fuzzy decision tree, for enhancing the efficiency of a recommender system. The scrutable algorithm corrects the system and works on enhancing its efficiency. By adopting the scrutable algorithm, users will be in a position to understand the transparency in recommending items, which, in turn, will gain user trust.
{"title":"A Scrutable Algorithm for Enhancing the Efficiency of Recommender Systems using Fuzzy Decision Tree","authors":"S. Moses, L. D. D. Babu","doi":"10.1145/2979779.2979806","DOIUrl":"https://doi.org/10.1145/2979779.2979806","url":null,"abstract":"Recommender system plays the major role of filtering the needed information from enormous amount of overloaded information. From e-commerce to movie websites, recommender systems are being used for market their product to the customer. Also, recommender system gains user trust by suggesting the customer's products of interest based on the profile of the customer and other related information. So, when the recommender system goes wrong or suggests an irrelevant product, the customer will stop trusting and using the recommender system. This kind of scenario will affect the customer as well as the e-commerce and other websites that depends on recommender systems for boosting the sales. There is a significant need to correct the recommender system when it goes wrong, since, wrong recommendations will weaken the user trust and diminish the efficiency of the system. In this paper, we are defining a scrutable algorithm for enhancing the efficiency of recommender system based on fuzzy decision tree. Scrutable algorithm will correct the system and will work on enhancing the efficiency of the recommender system. By adapting the scrutable algorithm, users will be in a position to understand the transparency in recommending items which, in turn, will gain user trust.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129769151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical data mining is an emerging field employed to discover hidden knowledge within large datasets for early diagnosis of disease. Large databases usually comprise numerous features which may have missing values, noise and outliers, and such features can mislead future medical diagnosis. Moreover, to deal with irrelevant and redundant features in large databases, proper data pre-processing techniques need to be applied. In past studies, data mining techniques such as feature selection have been applied efficiently to deal with irrelevant, noisy and redundant features. This paper explains the application of data mining techniques using feature selection to records collected from pancreatic cancer patients in order to conduct machine learning studies. We evaluate different feature selection techniques, such as the Correlation-based Feature Selection (CFS) filter method and Wrapper Subset Evaluation, with Naive Bayes and J48 (an implementation of C4.5) classifiers on medical databases, to analyze which data mining algorithms can effectively classify medical data for future diagnosis. Further, experiments have been used to measure the effectiveness and efficiency of the feature selection algorithms. The experimental analysis has proven beneficial in determining machine learning methods for effective analysis of pancreatic cancer diagnosis.
{"title":"A Feature Based Approach for Medical Databases","authors":"Ritu Chauhan, Harleen Kaur, Sukrati Sharma","doi":"10.1145/2979779.2979873","DOIUrl":"https://doi.org/10.1145/2979779.2979873","url":null,"abstract":"Medical data mining is an emerging field employed to discover hidden knowledge within the large datasets for early medical diagnosis of disease. Usually, large databases comprise of numerous features which may have missing values, noise and outliers. However, such features can mislead to future medical diagnosis. Moreover to deal with irrelevant and redundant features among large databases, proper pre processing data techniques needs be applied. In, past studies data mining technique such as feature selection is efficiently applied to deal with irrelevant, noisy and redundant features. This paper explains application of data mining techniques using feature selection for pancreatic cancer patients to conduct machine learning studies on collected patient records. We have evaluated different feature selection techniques such as Correlation-Based Filter Method (CFS) and Wrapper Subset Evaluation using Naive Bayes and J48 (an implementation of C4.5) classifier on medical databases to analyze varied data mining algorithms which can effectively classify medical data for future medical diagnosis. Further, experimental techniques have been used to measure the effectiveness and efficiency of feature selection algorithms. The experimental analysis conducted has proven beneficiary to determine machine learning methods for effective analysis of pancreatic cancer diagnosis.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130534088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WLANs have substituted wired networks and have created a revolution in the area of communication. The Point Coordination Function (PCF) of the IEEE 802.11 protocol offers support for real-time traffic in Wireless Local Area Networks (WLANs). However, since PCF is a centralized polling protocol, some bandwidth is wasted on null packets and polling overhead. To provide improved channel utilization and to decrease bandwidth consumption, an enhanced bidirectional transmission scheme of fixed extent, known as the Bi-Directional Point Coordination Function (BD-PCF), is incorporated into PCF. Under this policy, wireless Access Points (APs) can estimate a suitable length for the contention-free period by acknowledging any received packet with a packet of equal size. But as only one packet is transported per exchange, the quality-of-service constraints need further enhancement. So a novel procedure is proposed in which two packets are conveyed in the same interval with the AP, which can set its wake-up timer and trigger sleep mode for the remainder of the Contention Free Period (CFP). Extensive computer-based simulations of the proposed method establish the improvement in terms of throughput and delay for voice traffic.
{"title":"A Novel Technique of implementing Bidirectional Point Coordination Function for Voice Traffic in WLAN","authors":"Himanshu Yadav, D. Dembla","doi":"10.1145/2979779.2979864","DOIUrl":"https://doi.org/10.1145/2979779.2979864","url":null,"abstract":"WLANs have substituted wired networks and have formed an uprising in the area of communication. The Point Coordination Function (PCF) of the IEEE 802.11 protocol offersto sustain to real time traffic for Wireless Local Area Network (WLANs). However, PCF being a central polling protocol, a little bandwidth is exhausted on null packets and polling operating cost. To supplyimproved channel operation and to decreaseconsumption of bandwidth, an enhanced bidirectional broadcast of setextent known as Bi-Directional Point coordination Function (BD-PCF) is included into PCF. Based on this policy, wireless Access Points (APs) can approximatesuitablelength of the contention free period by conceding any received packet equal to the received packet in size. But as only one packet is transported during communication, the constraints of quality of service needed an enhancement. So a novel procedure, is projected in which two packets are conveyed in same interval with the AP, can resolve its wake-up timer and trigger sleep mode for the remaining of the Contention Free Period (CFP). Extensive computer based models of the new proposed method is established for the upgrading in terms of throughput and delay in voice influx.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133773705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building an effective classification model when high-dimensional data suffers from the class imbalance problem is a major challenge. The problem is severe when negative samples form a much larger percentage than positive samples. To surmount the class imbalance and high dimensionality issues in a dataset, we propose an SFS framework that comprises SMOTE filters, which are used for balancing the datasets, as well as a feature ranker for pre-processing the data. The framework is developed using the R language and various R packages. The performance of the SFS framework is then evaluated, and the proposed framework is found to outperform other state-of-the-art methods.
{"title":"Combining Synthetic Minority Oversampling Technique and Subset Feature Selection Technique For Class Imbalance Problem","authors":"Pawan Lachheta, S. Bawa","doi":"10.1145/2979779.2979804","DOIUrl":"https://doi.org/10.1145/2979779.2979804","url":null,"abstract":"Building an effective classification model when the high dimensional data is suffering from class imbalance problem is a major challenge. The problem is severe when negative samples have large percentages than positive samples. To surmount the class imbalance and high dimensionality issues in the dataset, we propose a SFS framework that comprises of SMOTE filters, which are used for balancing the datasets, as well as feature ranker for pre-processing of data. The framework is developed using R language and various R packages. Then the performance of SFS framework is evaluated and found that proposed framework outperforms than other state-of-the-art methods.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134043060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shared-memory multi-core processors are becoming dominant in today's computer architectures. Caching of shared data may produce the problem of replication in multiple caches. Replication reduces contention for shared data items along with access latency and memory bandwidth. Caching of shared data that is read by multiple processors simultaneously introduces the problem of cache coherence. There are two different techniques to track the sharing status, viz. directory-based and snooping. This work emphasizes the study and analysis of the impact of various system parameters on the performance of these basic techniques. The performance analysis is based on the number of processors, available bandwidth and cache size. The prime aim of this work is to identify the appropriate cache coherence protocol for various configurations. Simulation results show that snooping-based systems are appropriate for high-bandwidth systems and are the ideal choice for CPU- and communication-intensive workloads, while directory-based cache coherence protocols are suitable for lower-bandwidth systems and are more appropriate for memory-intensive workloads.
{"title":"Performance Analysis of Cache Coherence Protocols for Multi-core Architectures: A System Attribute Perspective","authors":"Amit D. Joshi, Satyanarayana Vollala, B. S. Begum, N. Ramasubramanian","doi":"10.1145/2979779.2979801","DOIUrl":"https://doi.org/10.1145/2979779.2979801","url":null,"abstract":"Shared memory multi-core processors are becoming dominant in todays computer architectures. Caching of shared data may produce a problem of replication in multiple caches. Replication provides reduction in contention for shared data items along with reduction in access latency and memory bandwidth. Caching of shared data that are being read by multiple processors simultaneously, introduces the problem of cache coherence. There are two different techniques to track the sharing status viz. Directory and Snooping. This work gives an emphasis on the study and analysis of impact of various system parameters on the performance of the basic techniques. The performance analysis of this work is based on the number of processors, available bandwidth and cache size. The prime aim of this work is to identify appropriate cache coherence protocol for various configurations. Simulation results have shown that snooping based systems are appropriate for high bandwidth systems and is the ideal choice for CPU and communication intensive workloads while directory based cache coherence protocols are suitable for lower bandwidth systems and will be more appropriate for memory intensive workloads.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130938513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big Data is a term which describes a vast amount of structured and unstructured data that is challenging to process with traditional algorithms because of its large size and the lack of high-speed processing techniques. Nowadays, vast amounts of digital data are gathered from many important areas, including social networking websites like Facebook and Twitter. It is important to mine this big data for analysis purposes. One important analysis in this domain is to find key nodes in a social graph which can be the major information spreaders. Node centrality measures can be used in many graph applications such as searching and ranking of nodes. Traditional centrality algorithms, such as degree centrality, betweenness centrality and closeness centrality, were not designed for such large data and fail on huge graphs, so it is difficult to use them directly. In this paper, we calculate centrality measures for big graphs having huge numbers of edges and nodes by parallelizing traditional centrality algorithms so that they can be used efficiently as the size of the graph grows. We use MapReduce and Hadoop to implement these algorithms for parallel and distributed data processing. We present results and anomalies of these algorithms and also compare the processing time taken on normal systems and on Hadoop systems.
{"title":"Identification and ranking of key persons in a Social Networking Website using Hadoop & Big Data Analytics","authors":"Prerna Agarwal, Rafeeq Ahmed, Tanvir Ahmad","doi":"10.1145/2979779.2979844","DOIUrl":"https://doi.org/10.1145/2979779.2979844","url":null,"abstract":"Big Data is a term which defines a vast amount of structured and unstructured data which is challenging to process because of its large size, using traditional algorithms and lack of high speed processing techniques. Now a days, vast amount of digital data is being gathered from many important areas, including social networking websites like Facebook and Twitter. It is important for us to mine this big data for analysis purpose. One important analysis in this domain is to find key nodes in a social graph which can be the major information spreader. Node centrality measures can be used in many graph applications such as searching and ranking of nodes. Traditional centrality algorithms fail on such huge graphs therefore it is difficult to use these algorithms on big graphs. Traditional centrality algorithms such as degree centrality, betweenness centrality and closeness centrality were not designed for such large data. In this paper, we calculate centrality measures for big graphs having huge number of edges and nodes by parallelizing traditional centrality algorithms so that they can be used in an efficient way when the size of graph grows. We use MapReduce and Hadoop to implement these algorithms for parallel and distributed data processing. We present results and anomalies of these algorithms and also show the comparative processing time taken on normal systems and on Hadoop systems.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132280278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud computing manifests an exceptional capacity to facilitate easy-to-manage, cost-effective, flexible and powerful resources across the internet. Due to maximum and shared utilization of resources, cloud computing enhances the capabilities of resources. There is a dire need for data security in the wake of the increasing capabilities of attackers and the high volume of sensitive data. Cryptography is employed to ensure secrecy and authentication of data. Conventional information assurance methods face increasing technological challenges such as radical developments in mathematics, the capacity to perform very large computations and the prospect of wide-ranging quantum computation. Quantum cryptography is a promising solution towards absolute security in cryptosystems. This paper proposes the integration of the Advanced Encryption Standard (AES) algorithm with quantum cryptography. The proposed scheme is robust and meets essential security requirements. The simulation results show that the Quantum AES produces complex keys which are harder for adversaries to predict than the keys generated by AES itself.
{"title":"A Novel Scheme for Data Security in Cloud Computing using Quantum Cryptography","authors":"Geeta Sharma, S. Kalra","doi":"10.1145/2979779.2979816","DOIUrl":"https://doi.org/10.1145/2979779.2979816","url":null,"abstract":"Cloud computing manifests exceptional capacity to facilitate easy to manage, cost effective, flexible and powerful resources across the internet. Due to maximum and shared utilization of utilization of resources, cloud computing enhances the capabilities of resources. There is a dire need for data security in the wake of the increasing capabilities of attackers and high magnitude of sensitive data. Cryptography is employed to ensure secrecy and authentication of data. Conventional information assurance methods are facing increasing technological advances such as radical developments in mathematics, potential to perform big computations and the prospects of wide-ranging quantum computations. Quantum cryptography is a promising solution towards absolute security in cryptosystems. This paper proposes integration of Advanced Encryption Standard (AES) algorithm with quantum cryptography. The proposed scheme is robust and meets essential security requirements. The simulation results show that the Quantum AES produces complex keys which are hard to predict by adversaries than the keys generated by the AES itself.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134181068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the continuous increase in frequent e-commerce users, online businesses must have more customer-friendly websites to better satisfy the personalized requirements of online customers and hence improve their market share over the competition. Different customers have different purchase requirements at different intervals of time, and hence online retailers often need to deploy new strategies to identify a customer's current purchase requirements. In this research work, we propose the design of a tool called the Intelligent Meta Search System for E-commerce (IMSS-E), which blends the benefits of an Apriori-based MapReduce framework, supported by intelligent technologies like back-propagation neural networks and the semantic web, with B2C e-commerce to help the online user easily search and rank various e-commerce websites that can better satisfy his/her personalized online purchase requirements. An extensive experimental evaluation shows that IMSS-E can satisfy the personalized search requirements of e-commerce users better than conventional meta search engines.
{"title":"IMSS-E: An Intelligent Approach to Design of Adaptive Meta Search System for E Commerce Website Ranking","authors":"Dheeraj Malhotra, O. Rishi","doi":"10.1145/2979779.2979782","DOIUrl":"https://doi.org/10.1145/2979779.2979782","url":null,"abstract":"With the continuous increase in frequent E Commerce users, online businesses must have more customer friendly websites to better satisfy the personalized requirements of online customer and hence improve their market share over competition; Different customers have different purchase requirements at different intervals of time and hence new strategies are often required to be deployed by online retailers in order to identify the current purchase requirements of customer. In this research work, we propose design of a tool called Intelligent Meta Search System for E-commerce (IMSS-E), which can be used to blend benefits of Apriori based Map Reduce framework supported by Intelligent technologies like back propagation neural network and semantic web with B2C E-commerce to assist the online user to easily search and rank various E Commerce websites which can better satisfy his/her personalized online purchase requirement. An extensive experimental evaluation shows that IMSS-E can better satisfy the personalized search requirements of E Commerce users than conventional meta search engines.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"35 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133557115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a new technique for classification of plant leaf disease (potato late blight) using gradient and texture features and artificial neural networks. The technique uses artificial neural networks to segment an image that is initially segmented using the unsupervised Fuzzy C-means clustering algorithm. In the proposed approach, decorrelation stretching is used to enhance the color contrasts in the input images. Fuzzy C-means clustering is then applied to segment the disease-affected region, which may also include background with the same color attributes. Finally, we propose to use the neural-network-based approach to separate the disease-affected regions from a background of similar color and texture. The results of our work are promising.
{"title":"Classification of Plant Leaf Diseases Using Gradient and Texture Feature","authors":"R. Kaur, Sanjay Singla","doi":"10.1145/2979779.2979875","DOIUrl":"https://doi.org/10.1145/2979779.2979875","url":null,"abstract":"This paper presents a new technique of classification of Plant Leaf Disease (Potato Late Blight) using gradient and texture features and Artificial Neural Networks. This technique uses Artificial Neural Networks to Segment an Image which is initially segmented using unsupervised Fuzzy-C-means Clustering Algorithm. In this proposed approach decorrelation extending is utilized to enhance the shading contrasts as a part of the information pictures. At that point Fuzzy C-mean bunching is connected to portion the sickness influenced region which additionally incorporate foundation with same shading attributes. At last we propose to utilize the neural system based way to deal with group the malady influenced locales from the comparable shading textured foundation. The results of our work are promising.","PeriodicalId":298730,"journal":{"name":"Proceedings of the International Conference on Advances in Information Communication Technology & Computing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121330007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}