Title: Water Irrigation and Flood Prevention using IOT
Authors: Sarthak Gupta, Virain Malhotra, Vasudha Vashisht
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057842
Published in: 2020 10th International Conference on Cloud Computing, Data Science & Engineering (Confluence)
Abstract: India is one of the largest producers of agricultural products. Agriculture is the main source of India's GDP, accounting for about 16% of the total, and about 58 percent of India's workforce is engaged in it. Yet, owing to the country's variable climate, farmers are unprepared for harsh and inevitable conditions and have no effective way to deal with natural disasters such as drought and flooding, which damage crops and cause steep losses. This paper proposes a system that reduces these problems through an automated smart irrigation system for drought conditions and a smart suction pump that removes excess water during flooding. A database records, as a time series, the amount of water irrigated into the fields, the measured rainfall, the amount of water pumped out during flooding, and the soil humidity level. This database is then used to predict such climatic conditions and to inform farmers so that they can take appropriate measures to reduce or nullify their losses.
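The drought/flood decision logic the abstract describes can be sketched in a few lines. This is a hedged illustration only: the sensor scales, thresholds, and pump names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the dual-pump control rule described in the abstract.
# Thresholds, units, and names are illustrative assumptions, not the paper's.

def pump_command(moisture_pct, water_level_cm,
                 dry_threshold=30.0, flood_threshold=10.0):
    """Return which pump (if any) to switch on for one sensor reading.

    moisture_pct   -- soil humidity reading, 0..100 (hypothetical scale)
    water_level_cm -- standing-water depth in the field (hypothetical scale)
    """
    if water_level_cm > flood_threshold:
        return "suction_pump_on"      # flooding: pump the excess water out
    if moisture_pct < dry_threshold:
        return "irrigation_pump_on"   # drought: irrigate
    return "pumps_off"                # conditions normal

# Replay a small (timestamp, moisture, water level) log of the kind the
# paper's database could hold through the controller:
log = [(0, 25.0, 0.0), (1, 45.0, 0.0), (2, 40.0, 15.0)]
decisions = [pump_command(m, w) for _, m, w in log]
```

The same log rows, timestamped, are what the proposed database would accumulate for later prediction.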
Title: Classification Of Plant Leaf Diseases Using Machine Learning And Image Preprocessing Techniques
Authors: Pushkar Sharma, P. Hans, Subhash Chand Gupta
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057889
Abstract: Agriculture is one of the main factors that decide the growth of any country. In India alone, around 65% of the population depends on agriculture. Owing to various seasonal conditions, crops become infected by diseases of many kinds. These diseases first affect the leaves of the plant and later infect the whole plant, which in turn degrades the quality and quantity of the crop. Because a farm contains a large number of plants, it is very difficult for the human eye to detect and classify the disease of each plant in the field, yet diagnosing each plant is important because these diseases may spread. Hence, this paper introduces artificial-intelligence-based automatic detection and classification of plant leaf diseases, for quick and easy detection of a disease, classification of it, and application of the remedies required to cure it. The approach aims to increase crop productivity in agriculture and follows several steps: image collection, image preprocessing, segmentation, and classification.
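The segmentation and classification steps named in the abstract can be sketched on a synthetic "leaf image". This is a hedged stand-in, not the paper's method: the low-green-channel lesion rule, the thresholds, and the tiny array standing in for an image are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: segment lesion pixels by a low green channel, then
# classify the leaf from the lesion area. Thresholds are made-up values.

def diseased_fraction(rgb, green_threshold=100):
    """Segmentation step: mark low-green pixels as lesion, return their share."""
    lesion_mask = rgb[:, :, 1] < green_threshold
    return float(lesion_mask.mean())

def classify(rgb, max_lesion_fraction=0.1):
    """Classification step: label the leaf from the segmented lesion area."""
    return "diseased" if diseased_fraction(rgb) > max_lesion_fraction else "healthy"

# Synthetic 10x10 "leaf": mostly green, with a 4x4 brown lesion patch.
leaf = np.zeros((10, 10, 3), dtype=np.uint8)
leaf[:, :, 1] = 180              # healthy green everywhere
leaf[2:6, 2:6] = (120, 60, 20)   # lesion patch: low green channel
label = classify(leaf)           # 16 of 100 pixels are lesion
```

A real pipeline of the kind the paper describes would put colour-space conversion and a trained classifier in place of these fixed rules.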
Title: An Approach To Extract Optimal Test Cases Using AI
Authors: Amandeep Kaur
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9058244
Abstract: Regression testing is the backbone of functional software testing. Unlike other forms of testing, regression validation involves the whole code suite, covering the existing code as well as new code or change requests. Validating every possible scenario is not cost-effective, which motivates researchers to seek more efficient regression testing by selecting a subset of the test suite that still spots the defects. Ample research has grown up around this NP-hard problem, much of it applying metaheuristic techniques, predominantly nature-inspired ones. In this paper, to extract the optimal test cases, we utilize Harris Hawks Optimization (HHO), a nature-inspired technique modelled on the cooperative chasing style of Harris's hawks known as the "surprise pounce": several hawks converge on the prey from different directions to surprise it. This paper focuses on the Harris Hawks Optimization algorithm and its applications in the domain of software testing.
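To make the underlying optimization problem concrete: full HHO is too long for a short example, so the sketch below solves the same test-suite minimization task with a classical greedy coverage heuristic, a common baseline against which metaheuristic selectors are compared. The test and requirement names are invented.

```python
# Hedged sketch: NOT the paper's HHO algorithm, but the problem it targets
# (pick a small subset of tests that still covers every requirement),
# solved with a plain greedy set-cover heuristic for illustration.

def greedy_select(coverage):
    """coverage: dict test_name -> set of requirements that test exercises.
    Returns a small subset of tests covering the union of all requirements."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # pick the test covering the most still-uncovered requirements
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3"},
    "t3": {"r1", "r2", "r3"},
    "t4": {"r4"},
}
subset = greedy_select(suite)   # 2 of 4 tests cover all of r1..r4
```

A metaheuristic such as HHO searches the same space of subsets but scores candidates with a fitness function instead of growing the subset greedily.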
Title: Comparative Study of K-Means Clustering Using Iris Data Set for Various Distances
Authors: Adrija Chakraborty, Neetu Faujdar, Akash Punhani, Shipra Saraswat
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9058328
Abstract: K-means clustering is an algorithm used to cluster the given data into k mutually exclusive sets. The K-means algorithm is designed to work with the Euclidean distance, but many other measures of the dissimilarity of a dataset exist. The aim of this paper is to evaluate the performance of the K-means clustering algorithm with the city-block, cosine, and correlation distances, reporting the performance in terms of accuracy. For classification, the authors chose the IRIS data set. K-means achieved 98% accuracy with the city-block and correlation distances.
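The comparison the paper runs rests on making the distance measure a pluggable parameter of k-means. A minimal sketch of that idea, on a toy two-blob dataset rather than IRIS, assuming a plain mean centroid update for both metrics (an exact city-block k-means would use the per-coordinate median):

```python
import numpy as np

def kmeans(X, k, distance, iters=20, seed=0):
    """Minimal k-means with a pluggable point-to-centre distance function."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = distance(X[:, None, :], centers[None, :, :])   # (n, k) distances
        labels = d.argmin(axis=1)
        # mean update; keep the old centre if a cluster goes empty
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

euclidean = lambda a, b: np.sqrt(((a - b) ** 2).sum(axis=-1))
cityblock = lambda a, b: np.abs(a - b).sum(axis=-1)   # city-block (L1) distance

# Two well-separated blobs; both metrics should recover them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(5.0, 0.3, size=(20, 2))])
labels, centers = kmeans(X, 2, cityblock)
```

Swapping `cityblock` for `euclidean` (or a cosine/correlation distance) is the only change needed to reproduce the kind of metric comparison the paper reports.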
Title: Implementation of PingER on Android Mobile Devices Using Firebase
Authors: Ananthnarayan Rajappa, A. Upadhyay, A. Sabitha, Abhay Bansal, B. White, L. Cottrell
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9058306
Abstract: PingER (Ping End-to-End Reporting) is a tool developed by the SLAC National Accelerator Laboratory for Internet End-to-end Performance Monitoring (IEPM). The aim of this work is to develop a mobile application for Android devices that uses Firebase for storing the data obtained from pinging the beacons and for authenticating users. The Measuring Agent (MA) pings the beacon list, and the data obtained is formatted with the help of a regular-expression library before being pushed to Firebase. In addition, the location of the MA (latitude and longitude) is tracked with the help of Google's Geolocation API and stored in the same database.
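The abstract says ping output is formatted with a regular-expression library before upload. One plausible pattern for the summary line of Linux `ping` output is sketched below; the exact regex and field names the app uses are assumptions, not given in the abstract.

```python
import re

# Hypothetical regex for the "rtt min/avg/max/mdev = ..." summary line that
# Linux ping prints; the PingER app's actual pattern is not published here.
RTT_RE = re.compile(
    r"rtt min/avg/max/mdev = "
    r"(?P<min>[\d.]+)/(?P<avg>[\d.]+)/(?P<max>[\d.]+)/(?P<mdev>[\d.]+) ms")

def parse_rtt(ping_output):
    """Extract min/avg/max/mdev RTTs (ms) from a ping summary, or None."""
    m = RTT_RE.search(ping_output)
    if m is None:
        return None
    return {k: float(v) for k, v in m.groupdict().items()}

sample = ("--- example.com ping statistics ---\n"
          "4 packets transmitted, 4 received, 0% packet loss, time 3004ms\n"
          "rtt min/avg/max/mdev = 11.2/12.5/14.0/1.1 ms\n")
stats = parse_rtt(sample)
```

A dict like `stats` is the natural shape to push to Firebase alongside the latitude/longitude of the Measuring Agent.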
Title: A Literature Review and Taxonomy on Workload Prediction in Cloud Data Center
Authors: Avneesh Vashistha, Pushpneel Verma
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057938
Abstract: Resource management is one of the most challenging tasks in a cloud data center. The challenges arise from the dynamic nature of, and high uncertainty in, the cloud environment. Moreover, allocating resources over time may lead to a sub-optimal execution environment because of significant rises and drops in workload that follow time-dependent patterns. Optimising resource utilization in a cloud data center therefore requires time-sensitive techniques. In this paper, we discuss workload prediction techniques that forecast the workload in the cloud environment, where the predicted workload guides resource optimisation. Furthermore, we present a workload taxonomy divided into (i) workload predictors and (ii) model fitting, and we provide an extensive discussion of workload predictors, further classified into temporal and non-temporal.
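The simplest member of the temporal-predictor family such a taxonomy covers is a sliding-window moving average over recent workload samples. A hedged sketch, with an invented sample series and window size:

```python
# Illustrative temporal workload predictor: forecast the next sample as the
# mean of the last few observations. Window and data are made-up examples.

def predict_next(history, window=3):
    """Forecast the next workload value from the trailing `window` samples."""
    if not history:
        raise ValueError("need at least one observation")
    recent = history[-window:]
    return sum(recent) / len(recent)

# CPU-demand samples (arbitrary units) with an upward trend:
samples = [10, 12, 11, 13, 15, 17]
forecast = predict_next(samples)
```

The surveyed literature replaces this averaging rule with richer models (ARIMA-style fits, learning-based predictors), but the interface, history in and forecast out, is the same.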
Title: Exploratory Data Analysis and Machine Learning on Titanic Disaster Dataset
Authors: Karman Singh, Renuka Nagpal, Rajni Sehgal
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057955
Abstract: RMS Titanic was a British passenger liner, said to be the largest ship built up to that time. It collided with an iceberg during its maiden voyage across the Atlantic Ocean from Southampton to New York City. Of the more than 2200 people on board, well over half died in the disaster. The infamous incident continues to compel researchers to dig into the dataset. This research aims at an exploratory data analysis and at understanding the effect of the parameters key to a person's survival had they been on the ship. Survival prediction is performed with several algorithms: logistic regression, k-nearest neighbours, support vector machines, and decision trees. Towards the end, the accuracies of the algorithms, for the features fed to them, are compared in tabular form.
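The accuracy-comparison mechanics the paper applies can be shown in miniature. The sketch below uses a 1-nearest-neighbour rule on a tiny synthetic (fare, age) sample with invented labels, not the real Titanic data or the paper's trained models.

```python
import numpy as np

# Hedged sketch: hold-out accuracy of a 1-NN classifier on synthetic
# (fare, age) rows; labels (1 = survived) are illustrative, not real data.

def knn_predict(train_X, train_y, X):
    """Label each row of X with the label of its nearest training row (L1)."""
    d = np.abs(train_X[:, None, :] - X[None, :, :]).sum(axis=-1)  # (n_train, n_test)
    return train_y[d.argmin(axis=0)]

def accuracy(y_true, y_pred):
    return float((y_true == y_pred).mean())

train_X = np.array([[70.0, 30.0], [80.0, 25.0], [8.0, 40.0], [7.0, 35.0]])
train_y = np.array([1, 1, 0, 0])
test_X = np.array([[75.0, 28.0], [9.0, 38.0]])
test_y = np.array([1, 0])
acc = accuracy(test_y, knn_predict(train_X, train_y, test_X))
```

Repeating this accuracy computation for each classifier and feature set produces exactly the kind of comparison table the paper reports.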
Title: Comparative Study of Data Mining Techniques for Predicting Explosions in Coal Mines
Authors: S. Namazi, L. Brankovic, B. Moghtaderi, J. Zanganeh
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057921
Abstract: Global warming is a long-term environmental hazard manifested as a gradual increase in the temperature of the Earth. It is caused by the accumulation of greenhouse gases in the atmosphere, including carbon dioxide and methane. Although methane is secondary to carbon dioxide in terms of volume, it is about 21 times more damaging when compared over a 100-year period. Fugitive methane emissions from underground coal mines contribute significantly to global warming. Among the known methods of reducing fugitive methane, thermal oxidation (simply, burning) is deemed the most effective and practical: the process produces water vapour and carbon dioxide, whose adverse impact on the atmosphere is significantly lower than that of methane. Thermal oxidisers, however, operate at high temperatures, which may introduce a risk of fire and explosion to the mine. Mitigating that risk requires a thorough understanding of methane explosion characteristics, but methane fire and explosion experiments under conditions pertinent to underground coal mines are expensive, risky, and labour-intensive, and thus demand extensive preparation and safety procedures. It is cheaper and safer to analyse existing data to discover patterns and predict explosions than to conduct new large-scale experiments. In this paper, we present a comparative study of data mining and machine learning techniques used for these purposes.
Title: Comparative Analysis for KeyTerms Extraction Methods for Personalized Search Engines
Authors: Shaurya Uppal, Arti Jain, Anuja Arora
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057810
Abstract: Text mining refers to the extraction of nontrivial, hidden, and interesting knowledge from unstructured textual data. In this paper, efforts are directed at interpreting text mining queries in the healthcare domain. The dataset is taken from 1mg, a company that emerged in 2015 to provide transparent, authentic, and accessible healthcare information to millions of people while guiding customers to quality care at affordable prices. Different text mining algorithms are compared for extracting keyterms, linking personalized search concepts within the healthcare domain, and improving search recommendations. The algorithms are: basic TF-IDF, SGRank with IDF, TextRank, and a modified TF-IDF. The best results are obtained with the modified TF-IDF combined with the Shingle analyzer, where the post-release overall is reduced.
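The "basic TF-IDF" baseline in the comparison above can be sketched in pure Python. The toy healthcare-flavoured queries below are invented, and the paper's modified TF-IDF and Shingle analyzer are not reproduced here.

```python
import math
from collections import Counter

# Hedged sketch of plain TF-IDF keyterm scoring on tokenized toy queries;
# raw term frequency and idf = log(N / df). Example documents are made up.

def tfidf_scores(docs):
    """Return one {term: tf-idf score} dict per tokenized document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))   # document frequency
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

docs = [
    ["paracetamol", "dosage", "fever"],
    ["fever", "symptoms", "fever"],
    ["dosage", "vitamin", "d"],
]
scores = tfidf_scores(docs)
top_term = max(scores[1], key=scores[1].get)   # most distinctive term of doc 1
```

Ranking each query's terms by these scores and keeping the top few is the keyterm-extraction step that the fancier methods (SGRank, TextRank, modified TF-IDF) refine.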
Title: CSA based PID Controller Design Technique for optimizing Various Integral Errors
Authors: A. Kaur, R. Kaur, Swati Sondhi
Pub Date: 2020-01-01 | DOI: 10.1109/Confluence47617.2020.9057816
Abstract: Control design plays a significant role in almost all types of industry. Proportional-integral-derivative (PID) controllers are an integral part of process control loops, popular for their simplicity of implementation and broad applicability. In recent years, various metaheuristic algorithms and modified hybrid algorithms have been applied to controller design. The aim of this paper is to design a controller with high versatility, accuracy, and good control quality. First, a novel tuning method based on the Crow Search Algorithm (CSA) is proposed to optimize the parameters of the PID controller, $K_p$, $K_i$ and $K_d$; each crow represents a feasible solution for the PID parameters. Second, four objective functions are explored, and the effectiveness and convergence rate of the CSA-PID controller is evaluated on two different control problems. Last, a comparison is carried out for the CSA-optimized PID controller. The main advantages of CSA are its simplicity, faster convergence rate, ease of implementation, and ease of understanding; as per findings based on statistical analysis, CSA is the more reliable choice. Simulation results on the two control problems and four evaluation functions are tested for set-point tracking, load rejection capability, noise suppression, and modelling errors.
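One of the integral-error objectives a tuner like CSA minimizes is the ISE (integral of squared error). The sketch below evaluates that cost for a discrete PID loop on a first-order plant; the gains, plant model, and simulation horizon are illustrative assumptions, not the paper's benchmark problems.

```python
# Hedged sketch: simulate y' = -y + u under PID control with forward Euler
# and return the ISE objective a metaheuristic tuner would minimize.
# Plant, gains, and step size are illustrative, not the paper's setup.

def ise_of_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=1000):
    """Return the ISE cost of a PID loop tracking a unit step."""
    y, integ, ise = 0.0, 0.0, 0.0
    prev_err = setpoint - y          # avoids a derivative kick at t = 0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)           # forward-Euler plant update
        ise += err * err * dt
        prev_err = err
    return ise

# A reasonably tuned loop should score a lower ISE than a weak P-only loop:
good = ise_of_pid(kp=5.0, ki=2.0, kd=0.1)
weak = ise_of_pid(kp=0.5, ki=0.0, kd=0.0)
```

CSA (or any of the metaheuristics the paper mentions) would treat `ise_of_pid` as the fitness function and search over $(K_p, K_i, K_d)$ for its minimum; swapping in IAE, ITAE, or ITSE changes only the accumulation line.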