Various machine learning architectures, including neural networks, have been designed, developed and used to classify data. Such networks have been applied to computer vision, speech recognition and natural language processing, to mention but a few, and often achieve high accuracy. One of the major challenges in mathematical equation recognition has been background information and noise. This paper presents a system that uses image processing and an artificial neural network to recognize, contextualize and compute mathematical equations from noisy images. The system attempts to overcome the challenges faced at the segmentation and recognition stages.
{"title":"Detecting, Contextualizing and Computing Basic Mathematical Equations from Noisy Images using Machine Learning","authors":"Daniel Ogwok, E. M. Ehlers","doi":"10.1145/3440840.3440855","DOIUrl":"https://doi.org/10.1145/3440840.3440855","url":null,"abstract":"Various machine learning architectures including neural networks have been designed, developed and used to classify data. These networks have been used for Computer Vision, Speech Recognition and Natural Language Processing, to mention but a few and provide near accurate results. One of the major challenges faced in the area of mathematical equational recognition has been background information and noise. This paper presents a system that makes use of image processing and an artificial neural network to recognize, contextualize and compute mathematical equations from noisy images. The system attempts to overcome the challenges faced at segmentation and recognition stages.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"2019 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114535865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this article, we analyze the optimality of oil production manufacturing via intelligent-control digital twins. By examining the process industry, we present the key factors of oil production lines, productivity, and quality. To highlight the intelligent process control system, as well as the adaptive intelligent optimization of the production process, we used several methods, namely Multi-Criteria Decision Analysis, the Pareto optimization method and approximate neural-network integration of all production line process information, in addition to tracking analysis, productivity and quality control. Although this article discusses the optimality of oil manufacturing, the conclusions drawn here can be extended to the processing industry worldwide.
{"title":"The intelligent control system of optimal oil manufacturing production","authors":"H. M. Yassine, V. Shkodyrev","doi":"10.1145/3440840.3440848","DOIUrl":"https://doi.org/10.1145/3440840.3440848","url":null,"abstract":"In the article, we analyze the optimality of an oil production manufacturing via intelligent control digital twines. By examining the process industry, we present the primary keys of oil production lines, productivity, and quality. To spotlight on the intelligent process control system, as well as on the adaptive intelligent optimization of the production process, we used several methods, namely: Multi-Criteria Decision Analysis, Pareto optimization method and approximate neural network integration of all production line process information; in addition to tracking analysis, productivity and quality control. Even though this article discusses the optimality of oil manufacturing, the conclusions determine in this article can be extended to the processing industry worldwide.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115929141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To monitor all telemetry data, thresholds are traditionally adopted to judge the status of a satellite. This method performs poorly when an anomaly develops but the data remain below the pre-set threshold: by the time the data finally exceed the threshold, a serious fault has already occurred. Such a fault can cause a huge economic loss, especially for a communication satellite. Two classes of satellite telemetry are relevant to this scenario: continuously changing digital telemetry and temperature telemetry. A method is proposed to solve these problems. An autoencoder model is applied to monitor the telemetry data of each device or equipment board. Each device or equipment board has its own model, and its telemetry data are fed into the model, which compresses them into a single one-dimensional feature. Operators need only monitor this one-dimensional feature, which is simple and fast. If an anomaly appears, the feature of the affected device or equipment board changes and warns the operators, who then check the actual telemetry of that device or board; the anomaly is thus identified immediately, and earlier than with the traditional method. Two models were built and data were prepared to detect two kinds of typical anomalies that the traditional method cannot detect. The results show that the autoencoder model can detect these anomalies accurately and is useful for operators. A software tool was built, and several models were trained for a satellite.
{"title":"A Novel Method for Satellite Monitoring With One-Dimension Feature Based on Autoencoder Model","authors":"Di Hu","doi":"10.1145/3440840.3440845","DOIUrl":"https://doi.org/10.1145/3440840.3440845","url":null,"abstract":"In order to monitor all telemetry data, thresholds are adopted to judge the status of satellite. This method is terrible when some abnormal happened, if the data was not more than pre-set threshold. when the data exceeding the threshold after a period of time, there were a big fault for satellite. This fault would make a huge economic loss especially for the communicate satellite. These are two classes telemetry of satellite about this scenario, one class is continuously changing digital telemetry, the other class is temperature. A method was proposed for solving these problems. An autoencoder model was applied to monitor the telemetry data according to the devices or equipment board. Each device or equipment board has own model, and telemetry data is inputted to the model for compressing a single parameter as one-dimension feature. The operators just only monitor the one-dimension feature, that is simple and fast. If an abnormal appear, the parameter of device or equipment board would be changed to warn the operators, who would check the actual telemetry data of device or equipment board, and the abnormal would be checked out immediately and earlier than the traditional method. For detecting the two kinds of typical abnormal which could not detect by traditional method, two models were built and data was prepared. The results show that auto-decoder model can detect the abnormal accurately and be useful for the operator. A software was built, and some models were trained for a satellite.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132619030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Location estimation of autonomous mobile robots is an essential and challenging task, especially for indoor applications. Despite the many solutions and algorithms suggested in the literature to provide a precise localisation technique for mobile robots, it remains an open research problem worth further study. In this paper, a predefined map with artificial colour code signature (CCs) beacons is used to build an effective algorithm for indoor localisation and position prediction of an omnidirectional mobile robot. The algorithm is primarily based on calculating the distance between the robot and each beacon using Pixy cameras as vision sensors, and then estimating the position of the robot using a trilateration method. By comparing the results obtained in this paper with the mathematically derived results, it is clearly shown that the robot effectively follows the localisation algorithm to estimate its pose (position and orientation), improving its localisation abilities in addition to obtaining its initial position. The limitations associated with using Pixy cameras are also discussed.
{"title":"Omnidirectional Robot Indoor Localisation using Two Pixy Cameras and Artificial Colour Code Signature Beacons","authors":"Mohanad N. Noaman, Z. Al-Shibaany, Saba Al-Wais","doi":"10.1145/3440840.3440849","DOIUrl":"https://doi.org/10.1145/3440840.3440849","url":null,"abstract":"Location estimation of Autonomous mobile robots is an essential and challenging task, especially for indoor applications. Despite the many solutions and algorithms that have been suggested in the literature to provide a precise localisation technique for mobile robots, it continues to be an open research problem and worth further study. In this paper, a predefined map with artificial colour code signature (CCs) beacons are used to build an effective algorithm to achieve an indoor localisation and position prediction of an omnidirectional mobile robot. This algorithm is primarily based on calculating the distance between the robot and the beacon using Pixy cameras, as vision sensors; then, estimating the position of the robot using a trilateration method. By comparing the results obtained in this paper with the mathematically obtained results, it is clearly shown that the robot effectively follows the localisation algorithm to estimate its pose (position and orientation), improving its localisation abilities in addition to obtaining its initial position. Furthermore, the limitations associated with using Pixy cameras are discussed in this paper as well.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116891657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
At present, the research and training of primary school teachers faces several problems, such as high cost, long cycles, a limited number of research and training sessions, and slow updating of research content. Therefore, a virtual learning community (VLC) for primary school teachers' research and training is constructed. In implementing the community's core functions, a hybrid recommendation algorithm based on content label extraction and collaborative filtering is proposed for the personalized recommendation system, which solves the cold-start problem for new users. Based on NLP and deep-learning algorithms, interest and behaviour models are combined so that the interest model is updated according to learners' behaviour in the intelligent teaching system. Using user evaluation data, the intelligent teaching evaluation system realizes intelligent evaluation of teachers' teaching activities. Shortcomings in question classification are improved with deep-learning algorithms in the intelligent question answering system. The solution proposed in this paper has been applied to the research and training of primary school teachers in Liaoning Province, China, and will play an important role in improving the level of primary education teachers.
{"title":"The Research and Implementation of Intelligent VLC","authors":"Bo Song, Xiaomei Li","doi":"10.1145/3440840.3440841","DOIUrl":"https://doi.org/10.1145/3440840.3440841","url":null,"abstract":"At present, there are some problems in theway of research and training of primary school teachers, such as high cost, long cycle, limited number of research and training, slow updating of research contents and so on. Therefore, the virtual learning community (VLC) for primary school teachers’ research and training is constructed. In the process of implementation community core function, a hybrid recommendation algorithm based on content information label extraction and collaborative filtering is proposed for personalized recommendation system, which solves the problem of cold start of new users. Based on the NLP and the deep-learning algorithms, the two models of interest and behaviour are combined to update the interest model based on the behaviour of the learners in the intelligent teaching system. According to the user evaluation data, the intelligent teaching evaluation system has realized the intelligent evaluation of teachers’ teaching activities. The insufficient in problem classification have been improved based on deep-learning algorithms for intelligent question answering system. The solution proposed in this paper has been applied to the research and training of primary school teachers in Liaoning province of China, which will play an important role in improving the level of teachers in primary education.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123838701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crime is one of the most complex social problems around the world, posing a major threat to human life and property. Predicting crime incidents in advance can greatly help the fight against crime and has drawn continuous attention from both academic and industrial communities. Although a plethora of methods have been proposed over the past decade, most algorithms either perform prediction with linear or otherwise oversimplified models or fail to fully explore the dynamic patterns in the crime data. In this paper, we propose a novel deep-learning-based crime prediction framework called CrimeSTC to jointly learn the intricate spatial-temporal-categorical correlations hidden inside crime and big urban data. Specifically, our framework consists of four parts: a dynamic module (handling the data that change every day via a local CNN and GRU), a static module (handling the data that remain the same over time via fully connected layers), a categorical module (capturing categorical dependencies via a graph convolutional network) and a joint training module (concatenating the dynamic and static representations to forecast crime numbers). Extensive experiments on real-world datasets validate the effectiveness of our framework.
{"title":"CrimeSTC: A Deep Spatial-Temporal-Categorical Network for Citywide Crime Prediction","authors":"Yue Wei, Weichao Liang, Youquan Wang, Jie Cao","doi":"10.1145/3440840.3440850","DOIUrl":"https://doi.org/10.1145/3440840.3440850","url":null,"abstract":"Crime is one of the most complex social problems around the world, posing a major threat to human life and property. Predicting crime incidents in advance can be a great help in fighting against crime and has drawn continuous attention from both academic and industrial communities. Although a plethora of methods have been proposed over the past decade, most of the algorithms either perform prediction by leveraging linear or other oversimplified models or fail to fully explore the dynamic patterns in the crime data. In this paper, we propose a novel deep learning based crime prediction framework called CrimeSTC to jointly learn the intricate spatial-temporal-categorical correlations hidden inside the crime and big urban data. Specifically, our framework consists of four parts: dynamic module (handling the data that change every day via local CNN and GRU), static module (handling the data that remain the same over time via fully connected layers), categorical module (capturing the categorical dependency via graph convolutional network) and joint training module (concatenating dynamic and static representations to forecast crime numbers). Extensive experiments on real world datasets validate the effectiveness of our framework.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114016661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Functional electrical stimulation (FES) is an effective treatment for the rehabilitation of stroke patients with hemiplegia. At present, it is challenging to accurately control functional electrical stimulation during rehabilitation, as the various stimulation parameters are difficult to determine and the stimulation response is easily affected by interference. To improve the control accuracy for trajectory tracking during repetitive training and to compensate for external interference, in this paper we take the knee joint as an example and design a functional electrical stimulation system based on an adaptive network-based fuzzy inference system (ANFIS) and iterative learning control (ILC). First, an ANFIS was used to establish the joint muscle model, and a PID-type iterative learning controller was used to adjust the functional electrical stimulation parameters. The maximum error of the ANFIS-based muscle model was 1.64 Nm and the root mean square error was 0.4327 Nm. After 10 iterations, the maximum error between the actual knee angle and the expected angle was 22.76°, and the root mean square error was 6.7413°. The system therefore controls the pulse width of the functional electrical stimulation during rehabilitation training so that patients can train along the expected trajectory, which facilitates the rehabilitation training of stroke patients with hemiplegia.
{"title":"Iterative Learning Control of Functional Electrical Stimulation Based on Joint Muscle Model","authors":"Jiaming Zhang, Lin Zhang, Shaocong Guo, W. Meng, Qingsong Ai, Quan Liu","doi":"10.1145/3440840.3440853","DOIUrl":"https://doi.org/10.1145/3440840.3440853","url":null,"abstract":"Functional electrical stimulation (FES) is an effective treatment for the rehabilitation of stroke patients with hemiplegia. At present, it is challenging to accurately control the functional electrical stimulation during rehabilitation as various parameters of electrical stimulation are difficult to determine, and the stimulation response is easily affected by interferences. To improve the control accuracy for trajectory tracking during repetitive training and to compensate external interference, in this paper we take the knee joint as an example designed a functional electrical stimulation system based on adaptive network-based fuzzy inference system (ANFIS) and iterative learning control (ILC). Firstly, an adaptive fuzzy neural inference system was used to establish the joint muscle model, and a PID-type iterative learning controller was used to achieve the adjustment of functional electrical stimulation parameters. The maximum error of the ANFIS-based muscle model was 1.64Nm and the root means square error was 0.4327Nm. The maximum angle error of the actual knee motion compared with the expected angle was 22.76°, and the root means square error was 6.7413° after 10 iterations. Therefore, the system realizes the control of the pulse width of functional electrical stimulation in rehabilitation training, so that patients can carry out rehabilitation training according to the expected trajectory, which provides convenience for the rehabilitation training of stroke hemiplegia patients.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122637399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image steganalysis is a very important technology for forensics. Recent studies show that steganalysis based on Convolutional Neural Networks (CNNs) is feasible. In this paper, we propose a novel digital image steganalysis model based on a CNN. Compared with existing CNN-based methods, the CNN model proposed in this paper has two characteristics. First, at the front of the network, high-pass filters from the SRM are used to initialize the convolution kernels, which helps the network learn the steganographic noise in the image. Second, in the middle of the network, a residual learning mechanism is used to enhance the convergence speed and stability of the network. Experiments on a standard data set show that the proposed CNN model can detect the S-UNIWARD steganography algorithm with high accuracy.
{"title":"A Convolution Neural Network Based on Residual Learning for Image Steganalysis","authors":"Yuanbin Wu, Qingyan Li, Lin Li","doi":"10.1145/3440840.3440843","DOIUrl":"https://doi.org/10.1145/3440840.3440843","url":null,"abstract":"Image steganalysis is a very important technology for forensics. Recent studies show that the idea of steganalysis based on Convolutional Neural Network (CNN) is feasible. In this paper, we propose a novel digital image steganalysis model based on CNN. Compared with the existing CNN-based methods, the CNN model proposed to this paper has two characteristics. First, in the front of the network, high-pass filter in SRM is used to initialize the convolution kernels, which is beneficial to learning steganography noise in the image. Second, in the middle of the network, the residual learning mechanism is used to enhance the convergence speed and stability of the network. Experiments on the standard data set show that the proposed CNN model can detect S-UNIWARD steganography algorithm with high accuracy.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122637786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fake news is progressively becoming a threat to individuals, society, news systems, governments and democracy. The need to fight it is rising, accompanied by various research efforts that have shown promising results. Deep learning methods and word embeddings have contributed greatly to devising detection mechanisms. However, the lack of sufficient datasets and the question of which word embedding best captures content features have made it challenging to build adequately accurate detection methods. In this work, we prepared a dataset from a scrape of 13 years of continuous data that we believe will narrow this gap. We also propose a deep learning model for early detection of fake news using convolutional neural networks and long short-term memory networks. We evaluated three pre-trained word embeddings in the context of the fake news problem using different measures. A series of experiments was conducted on three real-world datasets, including ours, using the proposed model. The results showed that the choice of pre-trained embeddings can be arbitrary. However, embeddings trained purely on the fake news dataset, and pre-trained embeddings that were allowed to update, performed relatively better than static embeddings. High-dimensional embeddings showed better results than low-dimensional embeddings, and this held for all the datasets used.
{"title":"Fighting Fake News Using Deep Learning: Pre-trained Word Embeddings and the Embedding Layer Investigated","authors":"Fantahun Gereme, William Zhu","doi":"10.1145/3440840.3440847","DOIUrl":"https://doi.org/10.1145/3440840.3440847","url":null,"abstract":"Fake news is progressively becoming a threat to individuals, society, news systems, governments and democracy. The need to fight it is rising accompanied by various researches that showed promising results. Deep learning methods and word embeddings contributed a lot in devising detection mechanisms. However, lack of sufficient datasets and the question “which word embedding best captures content features” have posed challenges to make detection methods adequately accurate. In this work, we prepared a dataset from a scrape of 13 years of continuous data that we believe will narrow the gap. We also proposed a deep learning model for early detection of fake news using convolutional neural networks and long short-term memory networks. We evaluated three pre-trained word embeddings in the context of the fake news problem using different measures. Series of experiments were made on three real world datasets, including ours, using the proposed model. Results showed that the choice of pre-trained embeddings can be arbitrary. However, embeddings purely trained from the fake news dataset and pre-trained embeddings allowed to update showed relatively better performance over static embeddings. High dimensional embeddings showed better results than low dimensional embeddings and this persisted for all the datasets used.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128214635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soil moisture is a main factor in agricultural production and hydrological cycles, and its prediction is essential for the rational use and management of water resources. However, soil moisture involves complicated structural characteristics and meteorological factors, and it is difficult to establish an ideal mathematical model for soil moisture prediction. Predicting soil moisture in advance will be useful to farmers. In this paper, we have used machine learning techniques such as linear regression, support vector machine regression, PCA, and Naïve Bayes to predict soil moisture 12 to 13 weeks ahead. These techniques have been applied to four different datasets, collected from 13 districts of West Bengal for four different crops (potato, mustard, paddy, cauliflower) over roughly the period 1 January 2020 to 30 March 2020. The performance of the predictors is evaluated on the basis of the F1-score.
{"title":"Soil Moisture Prediction Using Machine Learning Techniques","authors":"S. Paul, Satwinder Singh","doi":"10.1145/3440840.3440854","DOIUrl":"https://doi.org/10.1145/3440840.3440854","url":null,"abstract":"Although - Soil moisture is the main factor in agricultural production and hydrological cycles, and its prediction is essential for rational use and management of water resources. However, soil moisture involves complicated structural characters and meteorological factors, and is difficult to establish an ideal mathematical model for soil moisture prediction. Prediction of soil moisture in advance will be useful to the farmers in the field of agriculture. In this paper, we have used machine learning techniques such as linear regression, support vector machine regression, PCA, and Naïve Bayes for prediction of soil moisture for a span of 12 to 13 weeks ahead. These techniques have been applied on four different datasets collected from 13 different districts of West Bengal, and four different crops (Potato, Mustard, Paddy, Cauliflower) collected over the span of about 1st January 2020 – 30th March 2020. The performance of the predictor is to be evaluated on the basis of F1-Score.","PeriodicalId":273859,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Computational Intelligence and Intelligent Systems","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127686463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}