Pub Date: 2024-02-16 | DOI: 10.35671/telematika.v17i1.2774
Identification of Social Media Posts Containing Self-reported COVID-19 Symptoms using Triple Word Embeddings and Long Short-Term Memory
Raisa Amalia, M. Faisal, Fatma Indriani, Irwan Budiman, Muhammad Itqan Mazdadi, Friska Abadi, Muhammad Meftah Mafazy
The COVID-19 pandemic spread worldwide and affected nearly every nation and region. Common symptoms of the disease include fever, cough, fatigue, and loss of the sense of smell. Its impact on public health and the economy has made it a significant global concern; in Indonesia it caused economic contraction, particularly in sectors that depend on face-to-face interaction and mobility, such as transportation, warehousing, construction, and food and beverages. Since the pandemic began, Twitter users have shared their symptoms in tweets, but they could not confirm their concerns because of testing limitations, reporting delays, and pre-registration requirements in healthcare. Text classification of Twitter data on COVID-19 topics has predominantly focused on sentiment analysis of the pandemic or of vaccination, and research on identifying COVID-19 symptoms from social media messages remains limited. The main objective of this study is to identify symptoms using word embedding techniques and the LSTM algorithm. Word2Vec, GloVe, FastText, and a composite of the three are used as embeddings, and an LSTM network, which improves on the standard RNN, performs the classification. Evaluation criteria include accuracy, precision, and recall. The model with an input dimension of 147x100 achieves the highest accuracy at 89%. The study evaluates LSTM models with different word embedding techniques and input dimensions to find the best model for detecting COVID-19 symptoms in tweets, providing insight into an effective text-based approach to COVID-19 detection through social media.
{"title":"Identification of Social Media Posts Containing Self-reported COVID-19 Symptoms using Triple Word Embeddings and Long Short-Term Memory","authors":"Raisa Amalia, M. Faisal, Fatma Indriani, Irwan Budiman, Muhammad Itqan Mazdadi, Friska Abadi, Muhammad Meftah Mafazy","doi":"10.35671/telematika.v17i1.2774","DOIUrl":"https://doi.org/10.35671/telematika.v17i1.2774","url":null,"abstract":"The COVID-19 pandemic has permeated the global sphere and influenced nearly all nations and regions. Common symptoms of this pandemic include fever, cough, fatigue, and loss of sense of smell. The impact of COVID-19 on public health and the economy has made it a significant global concern. It has caused economic contraction in Indonesia, particularly in face-to-face interaction and mobility sectors, such as transportation, warehousing, construction, and food and beverages. Since the pandemic began, Twitter users have shared symptoms in their tweets. However, they couldn't confirm their concerns due to testing limitations, reporting delays, and pre-registration requirements in healthcare. The classification of text from Twitter data about COVID-19 topics has predominantly focused on sentiment analysis regarding the pandemic or vaccination. Research on identifying COVID-19 symptoms through social media messages is limited in the literature. The main objective of this study is to identify symptoms using word embedding techniques and the LSTM algorithm. Various techniques such as Word2Vec, GloVe, FastText, and a composite approach are used. LSTM is used for classification, improving upon the RNN technique. Evaluation criteria include accuracy, precision, and recall. The model with an input dimension of 147x100 achieves the highest accuracy at 89%. This study aims to find the best LSTM model for detecting COVID-19 symptoms in social media tweets. It evaluates LSTM models with different word embedding techniques and input dimensions, providing insights into the optimal text-based method for COVID-19 detection through social media texts.","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140454099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-16 | DOI: 10.35671/telematika.v17i1.2831
Deep Learning for Histopathological Image Analysis: A Convolutional Neural Network Approach to Colon Cancer Classification
Sarifah Agustiani, Yan Rianto
Colon cancer is a type of cancer that attacks the last part of the human digestive tract. Risk factors such as an unhealthy diet, low fiber intake, and high consumption of animal protein and fat increase the likelihood of developing the disease. Diagnosis requires sophisticated procedures such as CT scans, MRI, PET scans, ultrasound, or biopsy, which are often time-consuming and require particular expertise. This study classifies colon cancer from histopathological images using a dataset of 10,000 images, divided into 7,950 for training, 2,000 for testing, and 50 for validation, with the aim of achieving effective generalization. A Convolutional Neural Network (CNN) with a relatively shallow architecture of four convolution layers, two fully connected layers, and one output layer was applied. The results were evaluated by accuracy (99.55%), precision (99.49%), and recall (99.59%), by prediction experiments on several images, and by loss and accuracy curves to detect signs of overfitting. The research is limited in its treatment of hyperparameters and layer depth, which was only tested from one to five convolution layers, so there is still room for further development, such as applying a dedicated feature-extraction step before classification.
{"title":"Deep Learning for Histopathological Image Analysis: A Convolutional Neural Network Approach to Colon Cancer Classification","authors":"Sarifah Agustiani, Yan Rianto","doi":"10.35671/telematika.v17i1.2831","DOIUrl":"https://doi.org/10.35671/telematika.v17i1.2831","url":null,"abstract":"Colon cancer is a type of cancer that attacks the last part of the human digestive tract. Factors such as an unhealthy diet, low fiber consumption, and high animal protein and fat intake can increase the risk of developing this disease. Diagnosis of colon cancer requires sophisticated diagnostic procedures such as CT scan, MRI, PET scan, ultrasound, or biopsy, which are often time-consuming and require particular expertise. This study aims to classify colon cancer based on histopathological images using a dataset of 10,000 images. This data is divided into 7,950 images for training, 2,000 for testing, and 50 for validation, aiming to achieve effective generalization. The Convolutional Neural Network (CNN) method was applied in this research with a relatively shallow architecture consisting of 4 convolution layers, 2 fully connected layers, and 1 output layer. Research results were evaluated by looking at the accuracy value of 99.55%, precision value of 99.49%, recall of 99.59%, prediction experiments on several images, and loss and accuracy graphs to detect signs of overfitting. However, this research has limitations in determining hyperparameters and layer depth, which was only tested from 1 to 5 convolution layers. Therefore, there are still opportunities for further development, such as applying unique feature extraction before the classification process","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140454441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-16 | DOI: 10.35671/telematika.v17i1.2824
Comparative Analysis of Classification Methods in Sentiment Analysis: The Impact of Feature Selection and Ensemble Techniques Optimization
Sarjon Defit, A. Windarto, Putrama Alkhairi
Optimizing classification with feature-selection methods (forward selection, backward elimination, and optimized selection) and ensemble techniques (AdaBoost and Bagging) is essential for accurate sentiment analysis, particularly in political contexts on social media. This research compares the enhanced classification models with standard ones (Decision Tree, Random Tree, Naive Bayes, Random Forest, K-NN, Neural Network, and Generalized Linear Model), analyzing 1,200 tweets from December 10-11, 2023, that mention "Indonesia" and "capres." The dataset contains 490 positive, 355 negative, and 353 neutral sentiments, reflecting diverse opinions on presidential candidates and political issues. The enhanced model achieves 96.37% accuracy, and the backward-elimination model reaches 100% accuracy on negative sentiments. With forward feature selection and an ensemble method, Naive Bayes stands out for classifying negative sentiments while maintaining high overall accuracy (96.37%). The study suggests further exploration of hybrid feature selection and improved classifiers for high-stakes sentiment analysis.
{"title":"Comparative Analysis of Classification Methods in Sentiment Analysis: The Impact of Feature Selection and Ensemble Techniques Optimization","authors":"Sarjon Defit, A. Windarto, Putrama Alkhairi","doi":"10.35671/telematika.v17i1.2824","DOIUrl":"https://doi.org/10.35671/telematika.v17i1.2824","url":null,"abstract":"Optimizing classification methods (forward selection, backward elimination, and optimized selection) and ensemble techniques (AdaBoost and Bagging) are essential for accurate sentiment analysis, particularly in political contexts on social media. This research compares advanced classification models with standard ones (Decision Tree, Random Tree, Naive Bayes, Random Forest, K- NN, Neural Network, and Generalized Linear Model), analyzing 1,200 tweets from December 10-11, 2023, focusing on \"Indonesia\" and \"capres.\" It encompasses 490 positive, 355 negative, and 353 neutral sentiments, reflecting diverse opinions on presidential candidates and political issues. The enhanced model achieves 96.37% accuracy, with the backward selection model reaching 100% accuracy for negative sentiments. The study suggests further exploration of hybrid feature selection and improved classifiers for high-stakes sentiment analysis. With forward feature selection and ensemble method, Naive Bayes stands out for classifying negative sentiments while maintaining high overall accuracy (96.37%).","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140454476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-16 | DOI: 10.35671/telematika.v17i1.2724
Optimizing Clustering of Indonesian Text Data Using Particle Swarm Optimization Algorithm: A Case Study of the Quran Translation
M. D. R. Wahyudi, Agung Fatwanto
The Quran, considered the holy book of Muslims, contains scientific and historical facts affirming Islam's truth, beauty, and influence on human life. Consequently, the Quran text and its translations are valuable sources for text-mining research, particularly for studying the interrelationship of its verses. Clustering is one approach to grouping objects, with K-Means being a prominent example; however, clustering results are often suboptimal because the initial centroids are chosen at random. To address this, the study proposes the Particle Swarm Optimization (PSO) algorithm and selects centroids based on the PSO results. The hybrid PSO algorithm initiates a single iteration of the K-Means algorithm and terminates either when the maximum iteration limit is reached or when the average shift of the centroid vectors falls below 0.0001. Evaluation of the clustering results from the three models indicates that the K-Means algorithm produced the lowest Sum of Squared Error (SSE) of 1032.19, while the hybrid PSO algorithm produced the highest Silhouette value of 0.258 and the lowest quantization error value of 0.00947. Further evaluation using a confusion matrix showed accuracies of 81.7% for K-Means, 82.5% for K-Means with PSO, and 91.1% for K-Means with hybrid PSO, the highest of the three grouping models.
{"title":"Optimizing Clustering of Indonesian Text Data Using Particle Swarm Optimization Algorithm: A Case Study of the Quran Translation","authors":"M. D. R. Wahyudi, Agung Fatwanto","doi":"10.35671/telematika.v17i1.2724","DOIUrl":"https://doi.org/10.35671/telematika.v17i1.2724","url":null,"abstract":"The Quran considered the holy book for Muslims, contains scientific and historical facts affirming Islam's truth, beauty, and influence on human life. Consequently, the Quran text and its translations are valuable sources for text mining research, particularly for studying the interrelationship of its verses. One approach to grouping objects using certain algorithms is clustering, with K-Means Clustering being a prominent example. However, clustering results are often suboptimal due to the random selection of centroids. To address this, the study proposes using the Particle Swarm Optimization (PSO) algorithm, which selects centroids based on PSO results. The hybrid PSO algorithm initiates a single iteration of the K-means algorithm. It concludes either upon reaching the maximum iteration limit or when the average shift in the center of the mass vector falls below 0.0001. Evaluation of the clustering results from the three models indicates that the K-Means algorithm produced the lowest Sum of Squared Error (SSE) value of 1032.19. Additionally, the hybrid PSO algorithm generated the highest Silhouette value of 0.258 and the lowest quantization value of 0.00947. Further evaluation using a confusion matrix showed that K-Means clustering had an accuracy rate of 81.7%, K-Means with PSO had 82.5%, and the combination of K-Means with hybrid PSO yielded the highest accuracy rate of 91.1% among the three grouping model.","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140454722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-01 | DOI: 10.31315/telematika.v20i1.8933
Design Automatic Parking Application of Amikom Purwokerto University
Atmaja Jalu Narendra Kisma, Hendra Marcos
Purpose: This study aims to address parking problems in the Amikom Purwokerto University area. It is also designed to apply the theoretical and practical knowledge gained in lectures. Design/methodology/approach: The design of the parking application for the Amikom Purwokerto University area uses library and literature study methods; the collected material adds insight and makes it easier to process the research data. Findings/result: The application will help members of the Amikom Purwokerto University community, especially in the Faculty of Computer Science (FIK), find parking areas such as the basement, front, and field parking lots. Security staff will also benefit: if the application is implemented, vehicles parked in reserved areas will be tidier and safer, and staff will no longer need to find empty parking spots for users. Originality/value/state of the art: Like previous studies, this research focuses on parking system design, but it concentrates specifically on designing a parking application for Amikom Purwokerto University.
{"title":"Design Automatic Parking Application of Amikom Purwokerto University","authors":"Atmaja Jalu Narendra Kisma, Hendra Marcos","doi":"10.31315/telematika.v20i1.8933","DOIUrl":"https://doi.org/10.31315/telematika.v20i1.8933","url":null,"abstract":" Purpose: This study aims to deal with parking problems in the area of Amikom University, Purwokerto. In addition, this research is designed to implement theoretical and practical knowledge that has been obtained in lectures.Design/methodology/approach: In research on parking design applications in the Amikom University area, Purwokerto, library study methods and literature study methods are used. The amount of data can add insight and can make it easier to process data in research.Findings/result: This application will be able to help more Amikom Purwokerto University residents, especially in the Faculty of Computer Science. The use of this application will help find parking areas in FIK areas such as Basement Parking, Front Parking and Field Parking. In addition, security will be helped by this application because if it is implemented, vehicles parked in the reserved area will be tidier and safer. In addition, security does not need to find an empty parking area for users.Originality/value/state of the art: This research focuses on parking system design like previous studies. However, this research focuses more on designing parking applications at Amikom Purwokerto University. ","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75356160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-01 | DOI: 10.31315/telematika.v20i1.9674
Input Variable Selection for Oil Palm Plantation Productivity Prediction Model
A. P. Suryotomo, A. Harjoko
Purpose: This study aims to implement and improve a wrapper-type Input Variable Selection (IVS) method for an oil palm production prediction model, using oil palm expert knowledge criteria and distance-based data sensitivity criteria, in order to measure cost savings in laboratory leaf and soil sample testing. Methodology: The proposed approach consists of the IVS process, a search for the best prediction model based on the selected variables, and an analysis of the cost savings in laboratory leaf and soil sample testing. Findings/result: The proposed method effectively selected 7 of 19 variables and achieved an 81.47% saving on the total laboratory sample testing cost. Value: This result has the potential to help smallholder oil palm planters reduce the cost of laboratory testing without losing important information about their plantations.
{"title":"Input Variable Selection for Oil Palm Plantation Productivity Prediction Model","authors":"A. P. Suryotomo, A. Harjoko","doi":"10.31315/telematika.v20i1.9674","DOIUrl":"https://doi.org/10.31315/telematika.v20i1.9674","url":null,"abstract":"Purpose: This study aims to implement and improve a wrapper-type Input Variable Selection (IVS) to the prediction model of oil palm production utilizing oil palm expert knowledge criteria and distance-based data sensitivity criteria in order to measure cost-saving in laboratory leaf and soil sample testing.Methodology: The proposed approach consists of IVS process, searching the best prediction model based on the selected variables, and analyzing the cost-saving in laboratory leaf and soil sample testing.Findings/result: The proposed method managed to effectively choose 7 from 19 variables and achieve 81.47% saving from total laboratory sample testing cost.Value: This result has the potential to help small stakeholder oil palm planter to reduce the cost of laboratory testing without losing important information from their plantation.","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76293111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-01 | DOI: 10.31315/telematika.v20i1.8925
Digital Image Processing to Detect Cracks in Buildings Using Naïve Bayes Algorithm (Case Study: Faculty of Engineering, Halu Oleo University)
Waode Siti Nurul Hassanah, Yunda Lestari, Rizal Adi Saputra
Purpose: To detect cracks in building walls using digital image processing and the Naïve Bayes algorithm. Design/methodology/approach: The YCbCr color model is used for the segmentation process and the HSV color model for feature extraction. The Naïve Bayes algorithm then calculates the probability of feature similarity between testing data and training data. Findings/result: Detecting cracks is an important task in checking the condition of a structure. Manual testing is the recognized method of crack detection: crack sketches are prepared by hand and deviations are recorded. Because the manual approach relies heavily on the knowledge and experience of experts, it lacks objectivity in quantitative analysis and takes considerable time. Instead, this research proposes digital crack detection based on image processing: an intelligent model segments images in the YCbCr color space, extracts features in the HSV color space, and classifies them with the Naïve Bayes algorithm to analyze cracks in building walls. Accuracy reached 88.89% on the test data and 93.33% on the training data. Originality/value/state of the art: This study shares the focus of previous research, namely detecting cracks in building walls, but it uses different methods and is applied in a specific case study.
{"title":"Digital Image Processing to Detect Cracks in Buildings Using Naïve Bayes Algorithm (Case Study: Faculty of Engineering, Halu Oleo University)","authors":"Waode Siti Nurul Hassanah, Yunda Lestari, Rizal Adi Saputra","doi":"10.31315/telematika.v20i1.8925","DOIUrl":"https://doi.org/10.31315/telematika.v20i1.8925","url":null,"abstract":"Purpose: To detect cracks in the walls of buildings using digital image processing and the Naïve Bayes Algorithm.Design/methodology/approach: Using the YCbCr color model for the segmentation process and the HSV color model for the feature extraction process. This study also uses the Naïve Bayes Algorithm to calculate the probability of feature similarity between testing data and training data.Findings/result: Detecting cracks is an important task to check the condition of the structure. Manual testing is a recognized method of crack detection. In manual testing, crack sketches are prepared by hand and deviation states are recorded. Because the manual approach relies heavily on the knowledge and experience of experts, it lacks objectivity in quantitative analysis. In addition, the manual method takes quite a lot of time. Instead of the manual method, this research proposes digital-based crack detection by utilizing image processing. This study uses an intelligent model based on image processing techniques that have been processed in the HSV color space. In addition, this study also uses the YcbCr color space for feature extraction and classification using the Naïve Bayes Algorithm for crack detection analysis on building walls. The accuracy of the research test data reached 88.888888888888890%, while the training data achieved an accuracy of 93.333333333333330%.Originality/value/state of the art: This study has the same focus as previous research, namely detecting cracks in building walls, but has different methods and is implemented in case studies.","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79114576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-01 | DOI: 10.31315/telematika.v20i1.7643
Monitoring Development Board based on InfluxDB and Grafana
N. Noprianto, V. N. Wijayaningrum, R. Wakhidah
Purpose: To design a sensor data monitoring system using a time-series database and a monitoring platform on a development board device. Design/methodology/approach: The work begins with a requirements analysis, including preparation of the required software and hardware, followed by the design of the system architecture to be adopted, development according to that design, and testing to ensure the dashboard page displays data according to the predetermined scenarios. Findings/result: The research produces a design in which sensor data is sent over the MQTT protocol via Node-RED, stored in a time-series database (InfluxDB), and displayed on a Grafana dashboard. Originality/value/state of the art: A sensor data monitoring dashboard for development board devices.
{"title":"Monitoring Development Board based on InfluxDB and Grafana","authors":"N. Noprianto, V. N. Wijayaningrum, R. Wakhidah","doi":"10.31315/telematika.v20i1.7643","DOIUrl":"https://doi.org/10.31315/telematika.v20i1.7643","url":null,"abstract":"Purpose: Designing a sensor data monitoring system using a time series database and monitoring platform on a Development Board device.Design/methodology/approach: It begins with a requirement analysis, such as the preparation of the required software and hardware, followed by the creation of the system architecture that will be adopted. Then the development process from a predetermined design to the testing process to ensure the dashboard page can display data according to a predetermined scenario.Findings/result: From the research that has been done, produces a design of sensor data that is sent using the MQTT protocol via Node-RED, then stored in a time series database (InfluxDB) and displayed on the Grafana dashboard display.Originality/value/state of the art: Sensor data monitoring dashboard on Development Board devices","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73123335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-01 | DOI: 10.31315/telematika.v20i1.9358
Application of Expert System Identification of Horticultural Plant Diseases with Certainty Factor and Forward Chaining for Smart Village Concept Development
Damar Wicaksono, Imam Adi Nata
Purpose: This research was conducted to help identify plant diseases early and to provide general recommendations for these plants that benefit farmers. Design/methodology/approach: The research proceeds through several stages: planning, analysis, design, and implementation. Findings/result: A CLIPS-based expert system for identifying horticultural plant diseases. Originality/value/state of the art: Diagnosing plant diseases requires the accuracy and thoroughness of an expert, because the symptoms that indicate one disease often resemble those of another; misreading the symptoms leads to a diagnosis that differs from the disease the plant actually has. With the development of technology, a system was designed to report early identification of diseases and to provide general recommendations for these plants that benefit farmers.
{"title":"Application of Expert System Identification of Horticultural Plant Diseases with Certainty Factor and Forward Chaining for Smart Village Concept Development","authors":"Damar Wicaksono, Imam Adi Nata","doi":"10.31315/telematika.v20i1.9358","DOIUrl":"https://doi.org/10.31315/telematika.v20i1.9358","url":null,"abstract":"Purpose: This research was conducted to help identify diseases early and provide suggestions for recommendation systems for these plants in general that are beneficial for farmers.Design/methodology/approach: This research goes through several stages, namely planning , analysis, design, and implementation.Findings/result: CLIPS-based Horticultural Plant Disease Identification Expert SystemOriginality/value/state of the art: In the process of diagnosing plant diseases, it requires the accuracy and thoroughness of an expert or experts on symptoms that indicate a disease because of the similarity of these symptoms. Misdiagnosis of existing symptoms causes differences in the results of the diagnosis with the actual disease suffered by the plant. Along with the development of technology, a system was devised that would help report early identification of diseases and provide suggestions for recommendation systems for these plants in general that are beneficial to farmers.","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88199111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-03-01 | DOI: 10.31315/telematika.v20i1.8868
Application Random Forest Method for Sentiment Analysis in Jamsostek Mobile Review
Tasya Auliya Ulul Azmi, Luthfi Hakim, D. C. R. Novitasari, W. D. Utami
Purpose: This study aims to monitor the service quality of the JMO (Jamsostek Mobile) application over time by classifying JMO user reviews into positive, neutral, and negative sentiment classes. Design/methodology/approach: The study uses the random forest classification method, with TF-IDF feature extraction and lexicon-based labeling for data processing. Findings/result: The positive class was the most frequent, with 17,571 reviews, compared with 8,701 neutral and 3,876 negative reviews; the evaluation yielded 93% accuracy, 88% precision, 93% recall, and a 90% f1-score. Originality/value/state of the art: The study uses 150,737 reviews that were pre-processed and classified with the random forest method, TF-IDF feature extraction, and lexicon-based labeling.
{"title":"Application Random Forest Method for Sentiment Analysis in Jamsostek Mobile Review","authors":"Tasya Auliya Ulul Azmi, Luthfi Hakim, D. C. R. Novitasari, W. D. Utami","doi":"10.31315/telematika.v20i1.8868","DOIUrl":"https://doi.org/10.31315/telematika.v20i1.8868","url":null,"abstract":"Purpose: This study aims to monitor the service quality of JMO applications from time to time by classifying JMO user reviews into the class of positive, neutral, and negative sentiments.Design/methodology/approach : The method used in this study is the random forest classification method. Data processing in this study uses feature extraction, TF-IDF and labeling with the lexicon-based method.Findings/result: Based on the research results, it was found that the highest frequency of classification was the positive class with 17571 reviews compared to the neutral class with 8701 reviews and the negative class with 3876 reviews with an accuracy evaluation value of 93%, precision 88%, recall 93%, and f1-score 90%.Originality/value/state of the art:This study uses 150737 reviews that have been pre-processed using the random forest method and TF-IDF and lexicon-based feature extraction.","PeriodicalId":31716,"journal":{"name":"Telematika","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87345363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}