Aditya Permana, Timothy K. Shih, Aina Musdholifah, Anny Kartika Sari
The erhu is a stringed instrument originating from China. Playing it correctly requires rules for positioning the player's body and holding the instrument, so a system is needed that can detect every movement of the erhu player. This study discusses action recognition on video using the 3D-CNN and LSTM methods. The 3D Convolutional Neural Network is a method built on a CNN base. To improve the ability to capture the information stored in every movement, an LSTM layer is combined into the 3D-CNN model; LSTM can handle the vanishing-gradient problem faced by RNNs. This research uses RGB video as the dataset, and preprocessing and feature extraction cover three main parts: the body, the erhu pole, and the bow. The body segment is preprocessed and its features extracted using body landmarks, while the erhu and bow segments use the Hough Lines algorithm. For the classification process, we propose two algorithms, a traditional algorithm and a deep learning algorithm. These two classification algorithms produce an error-message output for every movement of the erhu player.
{"title":"Error Action Recognition on Playing The Erhu Musical Instrument Using Hybrid Classification Method with 3D-CNN and LSTM","authors":"Aditya Permana, Timothy K. Shih, Aina Musdholifah, Anny Kartika Sari","doi":"10.22146/ijccs.76555","DOIUrl":"https://doi.org/10.22146/ijccs.76555","url":null,"abstract":"Erhu is a stringed instrument originating from China. In playing this instrument, there are rules on how to position the player's body and hold the instrument correctly. Therefore, a system is needed that can detect every movement of the Erhu player. This study will discuss action recognition on video using the 3DCNN and LSTM methods. The 3D Convolutional Neural Network method is a method that has a CNN base. To improve the ability to capture every information stored in every movement, combining an LSTM layer in the 3D-CNN model is necessary. LSTM is capable of handling the vanishing gradient problem faced by RNN. This research uses RGB video as a dataset, and there are three main parts in preprocessing and feature extraction. The three main parts are the body, erhu pole, and bow. To perform preprocessing and feature extraction, this study uses a body landmark to perform preprocessing and feature extraction on the body segment. In contrast, the erhu and bow segments use the Hough Lines algorithm. Furthermore, for the classification process, we propose two algorithms, namely, traditional algorithm and deep learning algorithm. 
These two-classification algorithms will produce an error message output from every movement of the erhu player.","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
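The 3D-CNN described above consumes fixed-length stacks of RGB frames rather than single images. A minimal sketch of that preprocessing step — slicing a video into overlapping clips whose 5-D shape a 3D convolution expects — is below; the clip length and stride are assumptions, not values from the paper.

```python
import numpy as np

def make_clips(frames, clip_len=16, stride=8):
    """Slide a fixed-length temporal window over video frames.

    frames: array of shape (T, H, W, C); returns (N, clip_len, H, W, C),
    the 5-D input a 3D-CNN expects. A downstream LSTM would then treat
    the per-clip features as a sequence.
    """
    clips = []
    for start in range(0, len(frames) - clip_len + 1, stride):
        clips.append(frames[start:start + clip_len])
    return np.stack(clips)

# Toy "video": 40 frames of 32x32 RGB noise.
video = np.random.rand(40, 32, 32, 3)
clips = make_clips(video)
```

With 40 frames, a 16-frame window, and a stride of 8, the windows start at frames 0, 8, 16, and 24, giving four clips.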
I Nyoman Prayana Trisna, Afiahayati Afiahayati, Muhammad Auzan
The flower pollination algorithm is a bio-inspired optimization algorithm that adapts a process similar to the genetic algorithm. In this research, we examine the use of the flower pollination algorithm to fit a linear regression for currency-exchange cases. Each solution is represented as a set of regression coefficients. The population size of candidate solutions and the switch probability between global pollination and local pollination were varied experimentally. Our results show that the final solution is better when a larger population and a higher switch probability are employed. Furthermore, a larger population leads to considerably longer running time, while a higher probability of global pollination increases the running time only slightly.
{"title":"Flower Pollination Inspired Algorithm on Exchange Rates Prediction Case","authors":"I Nyoman Prayana Trisna, Afiahayati Afiahayati, Muhammad Auzan","doi":"10.22146/ijccs.84223","DOIUrl":"https://doi.org/10.22146/ijccs.84223","url":null,"abstract":"Flower pollination algorithm is a bio-inspired system that adapts a similar process to genetic algorithm, that aims for optimization problems. In this research, we examine the utilization of the flower pollination algorithm in linear regression for currency exchange cases. The solutions are represented as a set that contains regression coefficients. Population size for the candidate solutions and the switch probability between global pollination and local pollination have been experimented with in this research. Our result shows that the final solution is better when a higher size population and higher switch probability are employed. Furthermore, our result shows the higher size of the population leads to considerable running time, where the leaning probability of global pollination slightly increases the running time.","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
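The two pollination modes and the switch probability described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the Lévy flight usually used for global pollination is replaced by a Gaussian step, and the toy data, bounds, and step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy exchange-rate-style data: y = 2*x + 1 plus noise.
x = np.linspace(0, 1, 50)
y = 2 * x + 1 + rng.normal(0, 0.05, 50)

def mse(coef):
    a, b = coef
    return np.mean((y - (a * x + b)) ** 2)

def fpa(n_pop=20, p_switch=0.8, iters=200):
    """Flower pollination search over regression coefficients (a, b)."""
    pop = rng.uniform(-5, 5, (n_pop, 2))
    fit = np.array([mse(s) for s in pop])
    best = pop[fit.argmin()].copy()
    for _ in range(iters):
        for i in range(n_pop):
            if rng.random() < p_switch:
                # Global pollination: step toward the current best flower
                # (Gaussian step used here in place of a Levy flight).
                cand = pop[i] + rng.normal(0, 0.3, 2) * (best - pop[i])
            else:
                # Local pollination: mix two random flowers.
                j, k = rng.choice(n_pop, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            f = mse(cand)
            if f < fit[i]:        # greedy acceptance
                pop[i], fit[i] = cand, f
        best = pop[fit.argmin()].copy()
    return best, fit.min()

coef, err = fpa()
```

Raising `n_pop` improves the final error at the cost of more `mse` evaluations per iteration, which is the population-size/running-time trade-off the abstract reports.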
Ariel Yonatan Alin, Kusrini Kusrini, Kumara Ari Yuana
Drone object detection is one of the main applications of image processing and pattern recognition using deep learning. However, the limited drone image data available for training detection algorithms is a challenge in developing drone object detection technology, so many studies have tried to increase the amount of drone image data using data augmentation techniques. This study evaluates the effect of data augmentation on deep learning accuracy in drone object detection using the YOLOv5 algorithm. The methods include collecting drone image data; augmenting it with rotate, crop, and cutout; training YOLOv5 with and without augmentation; and testing and analyzing the training results. The results show that data augmentation did not improve the accuracy of YOLOv5 in drone object detection, as evidenced by decreasing precision and mAP@0.5 and relatively constant recall and F1 score. This is because too much augmentation can cause loss of important information in the data, and improper augmentation can introduce noise or distortion.
{"title":"The Effect of Data Augmentation in Deep Learning with Drone Object Detection","authors":"Ariel Yonatan Alin, Kusrini Kusrini, Kumara Ari Yuana","doi":"10.22146/ijccs.84785","DOIUrl":"https://doi.org/10.22146/ijccs.84785","url":null,"abstract":"Drone object detection is one of the main applications of image processing technology and pattern recognition using deep learning. However, the limited drone image data that can be accessed for training detection algorithms is a challenge in the development of drone object detection technology. Therefore, many studies have been conducted to increase the amount of drone image data using data augmentation techniques. This study aims to evaluate the effect of data augmentation on deep learning accuracy in drone object detection using the YOLOv5 algorithm. The methods used in this research include collecting drone image data, augmenting data with rotate, crop and cutout, training the YOLOv5 algorithm with and without data augmentation, as well as testing and analyzing training results.The results of the study show that data augmentation can't improve the accuracy of the YOLOv5 algorithm in drone object detection. Evidenced by the decreasing value of precision and mAP@0.5 and the relatively constant value of recall and F-1 score. 
This is caused by too much augmentation can cause loss of important information in the data and improper augmentation can cause noise or distortion in the data.","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
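Cutout, one of the three augmentations tested above, simply zeroes a random patch of the image — which also illustrates how augmentation can erase important information (e.g. the drone itself) if the patch is too large. A minimal sketch, with patch size and image size as assumptions:

```python
import numpy as np

def cutout(img, size, rng):
    """Zero out a random size x size square patch (cutout augmentation)."""
    h, w = img.shape[:2]
    out = img.copy()
    top = rng.integers(0, h - size + 1)    # high bound is exclusive
    left = rng.integers(0, w - size + 1)
    out[top:top + size, left:left + size] = 0
    return out

rng = np.random.default_rng(1)
img = np.ones((64, 64, 3))     # stand-in for a drone image
aug = cutout(img, 16, rng)
```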
Obfuscation is a technique for transforming program code into a different form that is more difficult to understand. Several methods are used to obfuscate source code, including dead-code insertion, code transposition, and string encryption. This research develops an obfuscator for C source code using the code transposition method: the arrangement of lines of code is randomized with a hash function, and the DES encryption algorithm then hides the parameters of the hash function so that the original form is increasingly difficult to recover. The obfuscator is intended to protect C source code from plagiarism and piracy. To evaluate it, nine respondents who understand the C programming language were asked to deobfuscate the obfuscated source code manually, and the percentage of correct answers and the average time needed for manual deobfuscation were observed. The evaluation results show that the obfuscator effectively maintains security and complicates source-code analysis.
{"title":"C Source code Obfuscation using Hash Function and Encryption Algorithm","authors":"Sarah Rosdiana Tambunan, Nur Rokhman","doi":"10.22146/ijccs.86118","DOIUrl":"https://doi.org/10.22146/ijccs.86118","url":null,"abstract":"Obfuscation is a technique for transforming program code into a different form that is more difficult to understand. Several obfuscation methods are used to obfuscate source code, including dead code insertion, code transposition, and string encryption. In this research, the development of an obfuscator that can work on C language source code uses the code transposition method, namely randomizing the arrangement of lines of code with a hash function and then using the DES encryption algorithm to hide the parameters of the hash function so that it is increasingly difficult to find the original format. This obfuscator is specifically used to maintain the security of source code in C language from plagiarism and piracy. In order to evaluate this obfuscator, nine respondents who understand the C programming language were asked to deobfuscate the obfuscated source code manually. Then the percentage of correctness and the average time needed to perform the manual deobfuscation are observed. The evaluation results show that the obfuscator effectively maintains security and complicates the source code analysis.","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
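The core idea — a key-derived permutation of source lines that is reversible only with the key material — can be sketched briefly. This is an illustration under assumptions, not the paper's method: it uses a SHA-256-seeded shuffle, and the DES step that hides the hash parameters is omitted.

```python
import hashlib
import random

def permute_lines(src, key):
    """Shuffle source lines with a permutation derived from a key hash."""
    lines = src.splitlines()
    order = list(range(len(lines)))
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    random.Random(seed).shuffle(order)
    # order[pos] = original index of the line now at position pos
    return "\n".join(lines[i] for i in order), order

def restore_lines(obf, order):
    """Invert the permutation to recover the original line arrangement."""
    lines = obf.splitlines()
    restored = [None] * len(lines)
    for pos, i in enumerate(order):
        restored[i] = lines[pos]
    return "\n".join(restored)

code = "int main() {\n  int a = 1;\n  int b = 2;\n  return a + b;\n}"
obf, order = permute_lines(code, "secret-key")
```

In the paper's scheme the permutation parameters themselves would additionally be DES-encrypted, so an attacker cannot simply read `order` out of the obfuscated artifact.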
Autism Spectrum Disorder (ASD) is a developmental disorder that impairs the development of behavior, communication, and learning abilities. Early detection of ASD helps patients get better training to communicate and interact with others. In this study, we identified ASD and non-ASD individuals using machine learning (ML) approaches: Gaussian naive Bayes (NB), k-nearest neighbors (KNN), random forest (RF), logistic regression (LR), support vector machine (SVM) with a linear basis function, and decision tree (DT). We preprocessed the data using imputation methods, namely linear regression, MiceForest, and MissForest. We selected the important features using the simultaneous perturbation feature selection and ranking (SpFSR) technique from all 21 ASD features of three combined datasets (N = 1,100 individuals) from the University of California Irvine (UCI) repository. We evaluated the discrimination, calibration, and clinical utility of each method using stratified 10-fold cross-validation. The highest accuracy was achieved by SVM with the 10 most important features selected. The combination of linear-regression imputation, SpFSR, and SVM was the most effective model, reaching an accuracy of 100% and outperforming previous studies in ASD prediction.
{"title":"Autism Spectrum Disorder (ASD) Identification Using Feature-Based Machine Learning Classification Model","authors":"Anton Novianto, Mila Desi Anasanti","doi":"10.22146/ijccs.83585","DOIUrl":"https://doi.org/10.22146/ijccs.83585","url":null,"abstract":"Autism Spectrum Disorder (ASD) is a developmental disorder that impairs the development of behaviors, communication, and learning abilities. Early detection of ASD helps patients to get beter training to communicate and interact with others. In this study, we identified ASD and non-ASD individuals using machine learning (ML) approaches. We used Gaussian naive Bayes (NB), k-nearest neighbors (KNN), random forest (RF), logistic regression (LR), Gaussian naive Bayes (NB), support vector machine (SVM) with linear basis function and decision tree (DT). We preprocessed the data using the imputation methods, namely linear regression, Mice forest, and Missforest. We selected the important features using the Simultaneous perturbation feature selection and ranking (SpFSR) technique from all 21 ASD features of three datasets combined (N=1,100 individuals) from University California Irvine (UCI) repository. We evaluated the performance of the method's discrimination, calibration, and clinical utility using a stratified 10-fold cross-validation method. We achieved the highest accuracy possible by using SVM with selected the most important 10 features. 
We observed the integration of imputation using linear regression, SpFSR and SVM as the most effective models, with an accuracy rate of 100% outperformed the previous studies in ASD prediciton","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
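Stratified 10-fold cross-validation, the evaluation protocol above, keeps the ASD/non-ASD class ratio the same in every fold. A minimal pure-Python sketch (the round-robin assignment and toy 70/30 label split are assumptions for illustration):

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Yield k (train_idx, test_idx) splits preserving class proportions."""
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    folds = [[] for _ in range(k)]
    # Deal each class's indices round-robin so every fold gets its share.
    for lab, idxs in by_class.items():
        for j, idx in enumerate(idxs):
            folds[j % k].append(idx)
    for i in range(k):
        test = sorted(folds[i])
        train = sorted(x for j in range(k) if j != i for x in folds[j])
        yield train, test

labels = [0] * 70 + [1] * 30          # toy non-ASD / ASD labels
splits = list(stratified_kfold(labels, 10))
```

Each of the 10 test folds here contains exactly 7 negatives and 3 positives, mirroring the 70/30 overall ratio.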
Muhammad Zha'farudin Pudya Wardana, Moh. Edi Wibowo
TV commercial detection is a hard challenge due to the variety of programs and TV channels. Deep learning methods have shown good results on this problem, but they take a long time and many training epochs to reach high accuracy. This research uses transfer learning techniques to reduce training time and limits the number of training epochs to 20. From the video data, the audio feature is extracted as a Mel-spectrogram representation, and the visual features are taken from video frames. The datasets were gathered by recording programs from various TV channels in Indonesia. Pre-trained CNN models such as MobileNetV2, InceptionV3, and DenseNet169 are re-trained and used to detect commercials at the shot level, and post-processing clusters the shots into commercial and non-commercial segments. The best result is achieved by the Audio-Visual CNN with transfer learning: 93.26% accuracy with only 20 training epochs, faster and better than the CNN without transfer learning (88.17% accuracy, 77 epochs).
{"title":"Audio-Visual CNN using Transfer Learning for TV Commercial Break Detection","authors":"Muhammad Zha'farudin Pudya Wardana, Moh. Edi Wibowo","doi":"10.22146/ijccs.76058","DOIUrl":"https://doi.org/10.22146/ijccs.76058","url":null,"abstract":"The TV commercial detection problem is a hard challenge due to the variety of programs and TV channels. The usage of deep learning methods to solve this problem has shown good results. However, it takes a long time with many training epochs to get high accuracy. This research uses transfer learning techniques to reduce training time and limits the number of training epochs to 20. From video data, the audio feature is extracted with Mel-spectrogram representation, and the visual features are picked from a video frame. The datasets were gathered by recording programs from various TV channels in Indonesia. Pre-trained CNN models such as MobileNetV2, InceptionV3, and DenseNet169 are re-trained and are used to detect commercials at the shot level. We do post-processing to cluster the shots into segments of commercials and non-commercials. The best result is shown by Audio-Visual CNN using transfer learning with an accuracy of 93.26% with only 20 training epochs. It is faster and better than the CNN model without using transfer learning with an accuracy of 88.17% and 77 training epochs. 
The result by adding post-processing increases the accuracy of Audio-Visual CNN using transfer learning to 96.42%.","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
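The post-processing step above smooths noisy per-shot predictions into contiguous commercial/non-commercial segments. A minimal sketch of one plausible realization — a sliding majority vote followed by run-length grouping; the window size and voting rule are assumptions, not the paper's exact procedure:

```python
def smooth_shots(preds, window=3):
    """Majority-vote smoothing of per-shot 1=commercial / 0=program
    labels, flipping isolated misclassifications to match their context."""
    half = window // 2
    out = []
    for i in range(len(preds)):
        lo, hi = max(0, i - half), min(len(preds), i + half + 1)
        votes = preds[lo:hi]
        out.append(1 if sum(votes) * 2 > len(votes) else 0)
    return out

def to_segments(preds):
    """Collapse a label sequence into (label, start_shot, end_shot) runs."""
    segs = []
    for i, p in enumerate(preds):
        if segs and segs[-1][0] == p:
            segs[-1] = (p, segs[-1][1], i)
        else:
            segs.append((p, i, i))
    return segs

raw = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0]   # noisy shot-level predictions
clean = smooth_shots(raw)
```

The two isolated flips in `raw` are corrected, leaving one commercial segment followed by one program segment.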
Raden Bagus Muhammad AdryanPutra Adhy Wijaya, Delfia Nur Anrianti Putri, Dzikri Rahadian Fudholi
In the food industry, vegetables are sorted by visually trained professionals. However, sorting large numbers of different vegetable types takes plenty of time, human errors can arise at any moment, and using human resources is not always effective, so automation is needed to minimize processing time and errors. Computer vision reduces the need for human resources by automating the classification. Vegetables come in various colors and shapes, which makes vegetable classification a challenging multiclass problem due to intraspecies variety and interspecies similarity of these main distinguishing characteristics. Consequently, much research has been conducted on effective methods to group each type of vegetable automatically. To answer this challenge, we propose a deep learning solution using a Convolutional Neural Network (CNN) to classify several types of vegetables. We experimented with varying the batch size and optimizer type. In the training process, the learning rate starts at 0.01 and adapts on reaching a local minimum to optimize the result.
{"title":"Smart GreenGrocer: Automatic Vegetable Type Classification Using the CNN Algorithm","authors":"Raden Bagus Muhammad AdryanPutra Adhy Wijaya, Delfia Nur Anrianti Putri, Dzikri Rahadian Fudholi","doi":"10.22146/ijccs.82377","DOIUrl":"https://doi.org/10.22146/ijccs.82377","url":null,"abstract":"In the food industry, separating vegetables is done by visually trained professionals. However, because it takes plenty of time to sort a large number of different types of vegetables, human errors might arise at any time, and using human resources is not always effective. Thus, automation is needed to minimize process time and errors. Computer vision helps reduce the need for human resources by automatizing the classification. Vegetables come in various colors and shapes; thus, vegetable classification becomes a challenging multiclass classification due to intraspecies variety and interspecies similarity of these main distinguishing characteristics. Consequently, much research is made to automatically discover effective methods to group each type of vegetable using computers. To answer this challenge, we proposed a solution utilizing deep learning with a Convolutional Neural Network (CNN) to perform multi-label classification on some types of vegetables. We experimented with the modification of batch size and optimizer type. In the training process, the learning rate is 0.01, and it adapts on arrival in the local minimum for result optimization. 
This classification is performed on 15 types of vegetables and produces 98.1% accuracy on testing data with 25 minutes and 45 seconds of training time.","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"238 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
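The adaptive learning rate described above — start at 0.01, reduce when progress stalls near a minimum — is commonly realized as a reduce-on-plateau schedule. A minimal sketch; the halving factor and patience are assumptions, not values from the paper:

```python
class ReduceOnPlateau:
    """Halve the learning rate when validation loss stops improving."""

    def __init__(self, lr=0.01, factor=0.5, patience=2):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:   # plateau detected
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr

sched = ReduceOnPlateau()
history = [0.9, 0.5, 0.5, 0.5, 0.4, 0.4, 0.4]   # toy validation losses
lrs = [sched.step(v) for v in history]
```

After each run of `patience` non-improving epochs the rate halves, so the toy history triggers two reductions.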
The number of speakers of regional languages who are able to read and write traditional scripts in Indonesia is decreasing. If left unaddressed, this will lead to the extinction of Nusantara scripts, and their reading methods may be forgotten in the future. To anticipate this, this study aims to preserve the knowledge of reading ancient scripts by developing a deep learning model that can read document images written in one of the 10 Nusantara scripts we have collected: Bali, Batak, Bugis, Javanese, Kawi, Kerinci, Lampung, Pallava, Rejang, and Sundanese. While previous studies have tried to read traditional Nusantara scripts using various machine learning and Convolutional Neural Network algorithms, they focused on specific scripts and lacked an integrated approach from script-type recognition to character recognition. This study is the first to comprehensively address the entire range of Nusantara scripts, encompassing script-type detection and character recognition. Convolutional Neural Network, ConvMixer, and Visual Transformer models were utilized and their respective performances compared. The results demonstrate that our models achieved 96% accuracy in classifying Nusantara script types, with character recognition accuracy ranging from 93% to approximately 100% across the ten scripts.
{"title":"Deep Learning Approaches for Nusantara Scripts Optical Character Recognition","authors":"Agi Prasetiadi, Julian Saputra, Iqsyahiro Kresna, Imada Ramadhanti","doi":"10.22146/ijccs.86302","DOIUrl":"https://doi.org/10.22146/ijccs.86302","url":null,"abstract":"The number of speakers of regional languages who are able to read and to write traditional scripts in Indonesia is decreasing. If left unaddressed, this will lead to the extinction of Nusantara scripts and it is not impossible that their reading methods will be forgotten in the future. To anticipate this, this study aims to preserve the knowledge of reading ancient scripts by developing a Deep Learning model that can read document images written using one of the 10 Nusantara scripts we have collected: Bali, Batak, Bugis, Javanese, Kawi, Kerinci, Lampung, Pallava, Rejang, and Sundanese. While previous studies have made efforts to read traditional Nusantara scripts using various Machine Learning and Convolutional Neural Network algorithms, they have primarily focused on specific scripts and lacked an integrated approach from script type recognition to character recognition. This study is the first to comprehensively address the entire range of Nusantara scripts, encompassing script type detection and character recognition. Convolutional Neural Network, ConvMixer, and Visual Transformer models were utilized and their respective performances were compared. 
The results demonstrate that our models achieved 96% accuracy in classifying Nusantara script types, with character recognition accuracy ranging from 93% to approximately 100% across the ten scripts.","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
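The integrated approach above is a two-stage pipeline: detect the script type first, then dispatch to the character recognizer trained for that script. A minimal structural sketch with stub models standing in for the trained networks; every name here is hypothetical:

```python
def recognize_document(image, script_classifier, char_recognizers):
    """Stage 1: classify the script type. Stage 2: run the matching
    per-script character recognizer on the same image."""
    script = script_classifier(image)
    return script, char_recognizers[script](image)

# Stubs standing in for the trained CNN / ConvMixer / Visual Transformer.
classify = lambda img: "Javanese"
recognizers = {"Javanese": lambda img: "ha-na-ca-ra-ka"}
script, text = recognize_document(None, classify, recognizers)
```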
The spread of pests in rice plants causes significant production losses and damage to rice plants for farmers, as seen from data on the area of rice-borer attacks in Tabanan regency, Bali province. By predicting the distribution of rice pests, we can learn the pattern of pest attacks and anticipate them, since prediction provides accuracy and error values through the test results. One prediction model is the backpropagation neural network (BPNN). BPNN is well suited to complex problems involving large amounts of data and many input/output variables, and it can model nonlinear relationships between inputs and outputs that other types of predictive models may find difficult to capture. Backpropagation is a form of supervised learning, meaning it learns from labeled examples and can make accurate predictions on new, unlabeled data.
{"title":"Predictive Analysis of Rice Pest Distribution in Bali Province Using Backpropagation Neural Network","authors":"I Kadek Agus Dwipayana, Putu Sugiartawan","doi":"10.22146/ijccs.85584","DOIUrl":"https://doi.org/10.22146/ijccs.85584","url":null,"abstract":"The distribution of pests in rice plants results in significant losses in production and damage to rice plants for farmers, seen from data on the area of rice borer attacks in the province of Bali in Tabanan district. Therefore, by predicting the distribution of rice pests, we can know the pattern of pest attacks so that we can anticipate them because predicting can provide accuracy and error values through the test results. One of the prediction models is BPNN, where BPNN's advantages for solving complex problems are very suitable for use where large amounts of data are involved and many input/output variables, BPNN is also capable of modeling nonlinear relationships between input and output variables, which may be difficult to capture by this type of predictive model. other. Backpropagation includes supervised learning, which means it can learn from labeled examples and can make accurate predictions on new, unlabeled data. 
Split data using K-fold cross-validation serves to assess the process performance of an algorithmic method by dividing random data samples and grouping the data as many as K k-fold values.","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
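The BPNN training loop above — forward pass, error, gradients propagated backward through the chain rule, weight updates — can be sketched end to end in NumPy. This is a generic one-hidden-layer illustration on toy nonlinear data, not the paper's network; layer width, learning rate, and epoch count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear regression standing in for pest-distribution data.
X = rng.uniform(0, 1, (64, 1))
y = np.sin(2 * np.pi * X) * 0.4 + 0.5    # target kept inside (0, 1)

# One hidden layer of 8 units; sigmoid activations throughout.
W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

mse0 = float(np.mean((sig(sig(X @ W1 + b1) @ W2 + b2) - y) ** 2))

lr = 0.5
for _ in range(3000):
    # Forward pass.
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    err = out - y
    # Backward pass: chain rule through both sigmoid layers.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates on mean gradients.
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

mse = float(np.mean((sig(sig(X @ W1 + b1) @ W2 + b2) - y) ** 2))
```

Training drives the mean squared error well below its value at initialization, showing the network learning the nonlinear input/output relationship the abstract highlights.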
Coronary heart disease is a blockage of the heart's blood supply. Heart disease is the leading cause of death worldwide. Various risk factors contribute to heart disease, including smoking, an unhealthy lifestyle, high cholesterol, and hypertension. Disease prediction can therefore be used to identify at-risk individuals in order to prevent rising deaths from heart disease. Data mining, particularly the Extreme Learning Machine (ELM) method, is commonly used for this purpose. ELM is a neural network method with fast training that does not require backpropagation, but determining the optimal number of hidden nodes and achieving accurate results remains a challenge. In this study, ELM with Particle Swarm Optimization (PSO) is proposed to optimize heart-disease classification, aiming for optimal results with fast learning. The research follows a systematic process including data collection, preprocessing, modeling, and evaluation using confusion-matrix analysis. The results and discussion present the effectiveness of the proposed method by evaluating classification accuracy under various parameters, such as population size, number of hidden nodes, and iterations. The findings show that ELM with PSO optimization can provide accurate classification results for heart-disease diagnosis, with a promising accuracy rate.
{"title":"Application of Extreme Learning Machine Method With Particle Swarm Optimization to Classify of Heart Disease","authors":"Adela Putri Ariyanti, Muhammad Itqan Mazdadi, Andi - Farmadi, Muliadi Muliadi, Rudy Herteno","doi":"10.22146/ijccs.86291","DOIUrl":"https://doi.org/10.22146/ijccs.86291","url":null,"abstract":"Penyakit jantung koroner adalah tersumbatnya suplai darah jantung. Penyakit jantung adalah penyebab utama kematian di seluruh dunia. Berbagai faktor risiko berkontribusi terhadap penyakit jantung, termasuk merokok, gaya hidup tidak sehat, kolesterol tinggi, dan hipertensi. Dengan demikian, prediksi penyakit dapat dilakukan untuk mengidentifikasi individu yang berisiko guna mencegah peningkatan kematian akibat penyakit jantung. Penambangan data, khususnya metode Extreme Machine Learning (ELM), biasanya digunakan untuk tujuan ini. ELM adalah metode jaringan saraf dalam kecepatan pelatihan dan tidak memerlukan propagasi balik, dan menentukan jumlah node tersembunyi yang optimal dan mencapai hasil yang akurat tetap menjadi tantangan. Pada penelitian ini, ELM dengan Particle Swarm Optimization (PSO) diusulkan untuk mengoptimalkan klasifikasi penyakit jantung, yang bertujuan untuk mencapai hasil optimal dengan pembelajaran cepat. Penelitian ini mengikuti proses yang sistematis, termasuk pengumpulan data, preprocessing, pemodelan, dan evaluasi menggunakan analisis matriks konfusi. Hasil dan pembahasan menyajikan efektivitas metode yang diusulkan dengan mengevaluasi akurasi klasifikasi berdasarkan berbagai parameter, seperti ukuran populasi, jumlah node tersembunyi, dan iterasi. 
Temuan menunjukkan bahwa ELM dengan optimasi PSO dapat memberikan hasil klasifikasi yang akurat untuk diagnosis penyakit jantung, dengan tingkat akurasi yang menjanjikan.","PeriodicalId":31625,"journal":{"name":"IJCCS Indonesian Journal of Computing and Cybernetics Systems","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135313745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
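The ELM described above trains without backpropagation: the hidden layer is random and fixed, and only the output weights are solved in a single least-squares step. A minimal sketch on toy two-class data (the PSO tuning of the hidden-node count is omitted; data, hidden width, and threshold are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, y, n_hidden):
    """Extreme Learning Machine: random fixed hidden layer, output
    weights found in one closed-form least-squares solve."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # no backpropagation
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary "disease / no disease" data: two separated Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

W, b, beta = elm_train(X, y, n_hidden=20)
acc = float(((elm_predict(X, W, b, beta) > 0.5) == y).mean())
```

In the paper's setup, PSO would search over choices such as `n_hidden` to maximize this accuracy instead of fixing it by hand.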