Pub Date: 2024-04-14 | DOI: 10.18517/ijaseit.14.2.19010
Retno W. Damayanti, Haryono Setiadi, Dicka Korintus Kurnianto, N. A. E. Entifar
In 2022, Indonesia began a pilot project in Surakarta to convert households from Liquefied Petroleum Gas (LPG) stoves to induction stoves. Before the program is scaled up, an in-depth analysis of technology readiness, technology acceptance, and user satisfaction is vital for assessing program continuity. This research aims to identify which configurations of technology readiness, technology acceptance, and satisfaction produce continuance intention, and to determine the necessary conditions for continued usage intention. The study involved 412 conversion program participants in five districts of Surakarta, Indonesia. Using fuzzy-set qualitative comparative analysis (fsQCA), four solution configurations leading to high continuance intention and four leading to low continuance intention were obtained. In general, nearly all conditions must be maintained at a positive level to produce high continuance intention, especially innovativeness and satisfaction. The research has theoretical and practical implications: satisfaction has the greatest impact on the configurations, so the quality of the conversion program, including the induction stove itself and its service program, must become the main focus to ensure satisfaction. Clear policies and wider socialization are needed to enhance public awareness and trust, and synergistic cooperation between stakeholders, together with a better environment for induction stove implementation, must be established to boost sustainability and continuity. Future research should adopt a longitudinal approach to strengthen the analysis of the long-term induction stove conversion program.
Title: Configuration Analysis of Technology Readiness, Technology Acceptance, and Public Satisfaction Regarding Continued Induction Stove Use in Indonesia
Journal: International Journal on Advanced Science, Engineering and Information Technology
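The fsQCA analysis described above rests on calibrating raw survey responses into fuzzy-set memberships and then scoring how consistently a condition produces the outcome. A minimal sketch in Python, with hypothetical calibration anchors and Likert responses (not the study's data):

```python
# Illustrative sketch (not the authors' code): direct calibration of Likert
# responses into fuzzy-set memberships, and the consistency score fsQCA uses
# to judge whether a condition is sufficient for an outcome.

def calibrate(value, full_non, crossover, full_in):
    """Map a raw score onto [0, 1] fuzzy membership by linear interpolation
    between the three qualitative anchors (full non-membership, crossover,
    full membership)."""
    if value <= full_non:
        return 0.0
    if value >= full_in:
        return 1.0
    if value < crossover:
        return 0.5 * (value - full_non) / (crossover - full_non)
    return 0.5 + 0.5 * (value - crossover) / (full_in - crossover)

def consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome':
    sum(min(x, y)) / sum(x) over all cases."""
    num = sum(min(x, y) for x, y in zip(condition, outcome))
    den = sum(condition)
    return num / den if den else 0.0

# Hypothetical 5-point Likert responses for satisfaction and continuance intention
satisfaction = [calibrate(v, 1, 3, 5) for v in [4, 5, 2, 4, 3]]
intention    = [calibrate(v, 1, 3, 5) for v in [4, 4, 2, 5, 3]]
print(round(consistency(satisfaction, intention), 3))  # → 0.923
```

In fsQCA practice, configurations whose consistency exceeds a threshold (commonly around 0.8) are retained as candidate solutions.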
Pub Date: 2024-04-14 | DOI: 10.18517/ijaseit.14.2.18860
Jeong-Soo Lee, Jungwon Cho
Artificial intelligence (AI) has emerged as a pivotal technology for enhancing national and industrial competitiveness in the digital transformation era. Consequently, the cultivation of specialized talent in AI has garnered significant attention. This study analyzed AI-related department curricula at major universities worldwide, identifying critical courses for each academic semester. The data we collected included course titles, syllabi, and learning objectives, which were refined and analyzed afterward. Furthermore, we comparatively examined university AI education programs based on the content of Computer Science Curricula 2023, a widely recognized framework for computer science education. The insights gleaned from our analysis revealed that AI curricula are built upon a foundation of computer science, emphasizing the importance of a deep understanding of various related domains within the field of computer science. Based on these findings, we proposed a curriculum for AI departments, considering the need for a comprehensive understanding of computer science alongside specialized AI courses. This study aims to provide foundational data for advancing AI education and guide educational program improvements. Ultimately, it aspires to contribute to developing specialized professionals in the AI field, thereby bolstering national and industrial competitiveness in the rapidly evolving digital landscape.
Title: Artificial Intelligence Curriculum Development for Intelligent System Experts in University
Pub Date: 2024-04-14 | DOI: 10.18517/ijaseit.14.2.19953
Des Suryani, Ambiyar, Asrul Huda, Fitri Ayu, Erdisna, Muhardi
Digital technology is currently developing in all fields, and this development has a significant impact on education. A suitable learning model is needed, especially in the Algorithm and Programming course, to face the global challenges of Industrial Revolution 4.0. Students are expected to develop skills that include critical and creative thinking in problem solving, communication, and collaboration, supported by technology. The STEAM-Problem Based Learning model can be used in the Algorithm and Programming learning process with seven stages: preparation and knowledge identification; problem identification; solution planning; product creation and testing; communication; evaluation and feedback; and rewarding. All activities carried out by lecturers and students during the learning process can be stored in a database. This research seeks to determine the validity of the STEAM-Problem Based Learning database design, which will be implemented in the Algorithm and Programming course. The data analysis technique used is validity analysis, based on data collected through a Likert-scale questionnaire. The data were processed using Aiken's V validity coefficient to test expert judgment. Assessment indicators include correctness, consistency, relevance, completeness, and minimality. The results show that the STEAM-Problem Based Learning database design passes the validity test and is therefore feasible to implement in algorithm and programming learning.
Title: Implementation of Relational Database in the STEAM-Problem Based Learning Model in Algorithm and Programming
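Aiken's V, the validity coefficient the abstract describes, can be computed directly from rater scores. A minimal sketch with hypothetical expert ratings (not the study's data):

```python
# Illustrative sketch of Aiken's V content-validity coefficient:
# V = sum(r_i - lo) / (n * (hi - lo)) for n raters scoring an item
# on a Likert scale from lo to hi. V ranges from 0 to 1; higher is
# stronger agreement that the item is valid.

def aikens_v(ratings, lo=1, hi=5):
    s = sum(r - lo for r in ratings)
    return s / (len(ratings) * (hi - lo))

# Five hypothetical experts rate one database-design item on a 1-5 scale
print(aikens_v([5, 4, 5, 4, 4]))  # (4+3+4+3+3)/(5*4) = 0.85
```

Items whose V exceeds the critical value from Aiken's table (which depends on the number of raters and scale points) are judged valid.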
Pub Date: 2024-04-14 | DOI: 10.18517/ijaseit.14.2.18749
F. Kamalov, S. Moussa, G. B. Satrya
The ubiquitous adoption of network-based technologies has left organizations vulnerable to malicious attacks. It has become vital to have effective intrusion detection systems (IDS) that protect the network from attacks. In this paper, we study the intrusion detection problem through the lens of probability theory. We consider a situation where a network receives random malicious signals at discrete time instances, and an IDS attempts to capture these signals via a random check process. We aim to develop a probabilistic framework for intrusion detection under the given scenario. Concretely, we calculate the detection rate of a network attack by an IDS and determine the expected number of detections. We perform extensive theoretical and experimental analyses of the problem. The results provide helpful tools for designing and analyzing intrusion detection systems. The proposed probabilistic framework should be useful to IDS practitioners: for a network-based IDS that monitors traffic in real time, analyzing the entire flow can be computationally expensive, whereas by probabilistically sampling only a fraction of the traffic, the IDS can still perform its task effectively at reduced computational cost. However, checking only a fraction of the traffic increases the possibility of missing an attack. This research can help IDS designers achieve appropriate detection rates while maintaining a low false alarm rate. The groundwork laid out in this paper can support future research on the probabilities involved in intrusion detection.
Title: Probabilistic Analysis of Random Check Intrusion Detection System
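The core quantities of the scenario sketched above — the per-instance detection probability and the expected number of detections — can be illustrated with a short Monte Carlo check against the analytical value. The parameters below are hypothetical, not taken from the paper:

```python
# Illustrative sketch: attacks arrive at discrete time instances and the IDS
# inspects each instance independently with probability q. The expected
# number of detections over n attack instances is then n * q.
import random

def simulate(n_attacks, q, trials=20000, seed=0):
    """Monte Carlo estimate of the expected number of detected attacks."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(n_attacks) if rng.random() < q)
    return total / trials

n, q = 50, 0.3          # hypothetical: 50 attack instances, 30% check rate
print(n * q)            # analytical expectation of detections
print(simulate(n, q))   # Monte Carlo estimate, close to the analytical value
```

The trade-off the abstract describes is visible here: lowering q cuts inspection cost linearly but lowers the expected detections by the same factor.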
Pub Date: 2024-04-14 | DOI: 10.18517/ijaseit.14.2.19396
Ayu Wirdiani, Steven Ndung'u Machetho, Ketut Gede Darma Putra, Rukmi Sari Hartati, Made Sudarma, Henrico Aldy Ferdian
Various biometric security systems have been developed, based on face recognition, fingerprints, voice, hand geometry, and iris patterns. Apart from being a communication medium, the human voice is also a biometric that can be used for identification, since it has unique characteristics that distinguish one person from another. A speaker recognition system must be able to pick up the features that characterize a person's voice. This study develops a speaker recognition system using a Convolutional Neural Network (CNN) and proposes improvements to the fine-tuning layers of the CNN architecture to improve accuracy. The system combines the CNN with Mel Frequency Cepstral Coefficients (MFCC) for feature extraction from raw audio and K-Nearest Neighbor (KNN) to classify the embedding output. The system first extracts voice features using MFCC, then continues feature extraction with a CNN trained with triplet loss to obtain 128-dimensional embeddings, which are classified with KNN. The research was conducted on 50 speakers from the TIMIT dataset, with eight utterances per speaker, and 60 speakers recorded live using a smartphone. The speaker recognition system achieves high accuracy. Further research can combine different biometric modalities, commonly known as multimodal biometrics, to improve recognition accuracy further.
Title: Improvement Model for Speaker Recognition using MFCC-CNN and Online Triplet Mining
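The final classification stage described above — KNN over CNN embeddings — can be sketched as follows. The embeddings here are random stand-ins for the 128-dimensional triplet-loss outputs, not the authors' model:

```python
# Pipeline sketch (assumptions, not the authors' code): after the CNN maps
# MFCC features to 128-dimensional embeddings, a speaker is identified by
# majority vote among the k nearest enrolled embeddings.
import numpy as np

def knn_predict(query, enrolled, labels, k=3):
    """Label the query embedding by majority vote among its k nearest
    enrolled embeddings (Euclidean distance)."""
    d = np.linalg.norm(enrolled - query, axis=1)
    nearest = labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(0)
# Two hypothetical speakers, four 128-d enrollment embeddings each, clustered
# around distinct centroids as a triplet-loss-trained model would produce.
c0, c1 = rng.normal(0, 1, 128), rng.normal(3, 1, 128)
enrolled = np.vstack([c0 + rng.normal(0, 0.1, (4, 128)),
                      c1 + rng.normal(0, 0.1, (4, 128))])
labels = np.array([0] * 4 + [1] * 4)
print(knn_predict(c1 + rng.normal(0, 0.1, 128), enrolled, labels))  # → 1
```

Triplet loss pulls same-speaker embeddings together and pushes different speakers apart, which is what makes a simple distance-based vote like this effective.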
Pub Date: 2024-02-28 | DOI: 10.18517/ijaseit.14.1.19047
Woo-Hyeon Kim, Geon-Woo Kim, Joo-Chang Kim
General video search and recommendation systems primarily rely on metadata and personal information. Metadata includes file names, keywords, tags, and genres, among others, and is used to describe the video's content. The video platform assesses the relevance of user search queries to the video metadata and presents search results in order of highest relevance. Recommendations are based on videos whose metadata is judged similar to that of the video the user is currently watching. Most platforms offer search and recommendation services by employing separate algorithms for metadata and personal information; metadata therefore plays a vital role in video search. Video service platforms develop various algorithms to provide users with more accurate search results and recommendations, and quantifying video similarity is essential to enhancing that accuracy. Since content producers primarily provide only basic metadata, it can be abused. Additionally, the resemblance between similar video segments may diminish depending on segment duration. This paper proposes a metadata expansion model that utilizes object recognition and Speech-to-Text (STT) technology. The model selects key objects by analyzing the frequency of their appearance in the video, extracts the audio separately, transcribes it into text, and extracts the script. Scripts are quantified by tokenizing them into words using text-mining techniques.
Title: Multi-Modal Deep Learning based Metadata Extensions for Video Clipping
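The metadata-expansion step described above — selecting key objects by appearance frequency and tokenizing the STT transcript — might look like the following sketch. The data shapes and field names are assumptions, not the paper's schema:

```python
# Illustrative sketch: keep the most frequently detected objects as key
# objects, tokenize the STT transcript into word tokens, and append both
# to the existing metadata record.
from collections import Counter
import re

def expand_metadata(metadata, detected_objects, transcript, top_n=3):
    key_objects = [obj for obj, _ in Counter(detected_objects).most_common(top_n)]
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return {**metadata, "key_objects": key_objects, "script_tokens": tokens}

# Hypothetical per-frame detections and STT output for one clip
meta = expand_metadata(
    {"title": "demo clip", "tags": ["cooking"]},
    ["pan", "pan", "egg", "pan", "egg", "spoon"],
    "Crack the egg into the hot pan.",
)
print(meta["key_objects"])  # → ['pan', 'egg', 'spoon']
```

A search engine can then match queries against `key_objects` and `script_tokens` in addition to the producer-supplied tags.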
Pub Date: 2024-02-28 | DOI: 10.18517/ijaseit.14.1.18832
A. Laksito, Nuruddin Wiranda, Shofiyati Nur Karimah, Mardhiya Hayaty
Due to its extensive use in both public and commercial contexts, sentiment analysis on Twitter has recently received much attention, particularly concerning tweets about COVID-19. Information about COVID-19 has spread widely over social media, producing various views, opinions, and emotions about the pandemic that significantly impact people's health, and it is exceedingly challenging for the authorities to find rumors on these public platforms manually. This paper proposes a framework for text classification using the RNN model and its variants LSTM, BiLSTM, and GRU. This study aims to determine the best recurrent network model for Twitter data classification. We utilized Twitter data relevant to COVID-19 and the lockdown with four classification classes (sad, joy, fear, and anger). In addition, this study assesses whether GloVe pre-trained word embeddings can increase the accuracy of model predictions. The training and testing datasets were split 80% and 20%, respectively. An early-stopping technique was used with a patience of 15 epochs and a minimum delta of 0.01, meaning that training stops if accuracy does not improve by at least 0.01 within 15 epochs. We used the average f1-score to measure the accuracy of the classification results. The test results show that the BiLSTM model with GloVe word embeddings yields the best f1-score of all the models. Moreover, across all model tests, the 'fear' class displays the highest f1-score of the four classes.
Title: The COVID-19 Tweets Classification Based on Recurrent Neural Network
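The early-stopping rule described above (patience of 15 epochs, minimum delta of 0.01) can be expressed independently of any framework. A minimal sketch with a made-up accuracy history:

```python
# Illustrative sketch (not the authors' code) of the early-stopping rule:
# stop when validation accuracy has not improved by at least min_delta for
# `patience` consecutive epochs.

def early_stop_epoch(accuracies, patience=15, min_delta=0.01):
    """Return the epoch (0-based) at which training stops, or the last
    epoch if the criterion never triggers."""
    best, wait = float("-inf"), 0
    for epoch, acc in enumerate(accuracies):
        if acc > best + min_delta:      # meaningful improvement: reset counter
            best, wait = acc, 0
        else:                           # plateau: count toward patience
            wait += 1
            if wait >= patience:
                return epoch
    return len(accuracies) - 1

# Five improving epochs, then a plateau triggers the stop 15 epochs later
history = [0.5, 0.6, 0.7, 0.75, 0.8] + [0.8] * 20
print(early_stop_epoch(history))  # → 19
```

Frameworks such as Keras implement the same idea in their `EarlyStopping` callback via the `patience` and `min_delta` arguments.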
Pub Date: 2024-02-28 | DOI: 10.18517/ijaseit.14.1.18912
Fajar Rahardika Bahari Putra, Abdul Fadlil, Rusydi Umar
Indonesia is a country rich in natural resources, especially livestock. Papua and West Papua are large provinces with abundant natural resources and tremendous livestock potential, and the availability of live cattle provides a great opportunity to develop animal husbandry in West Papua province. This research was conducted to create a new expert system with a knowledge base to address the problems that occur and benefit the community, especially cattle breeders. The current problems are the shortage and slow response of medical personnel in diagnosing cattle diseases, the long distances that must be traveled over difficult terrain, and farmers' limited understanding of early handling when animals show signs of disease. The Certainty Factor method and Bayes' Theorem, each with forward-chaining search, are therefore used to address these problems. Manual calculations show that Certainty Factor with forward-chaining search yields a certainty value of 99.84% for three-day fever, whereas Bayes' Theorem with forward-chaining search yields 50% for worms, 50% for three-day fever, and 50% for nail rot; Certainty Factor with forward-chaining search is therefore the most appropriate. The knowledge base must likewise be updated from time to time, so that in the future the system can be compared with other methods and made Android-based to better serve breeders.
Title: Application of Forward Chaining Method, Certainty Factor, and Bayes Theorem for Cattle Disease
Pub Date : 2024-02-28  DOI: 10.18517/ijaseit.14.1.19660
Surachai Chantee, T. Mayakul
Flooding is a recurring global issue that leads to substantial loss of life and property damage. A crucial tool in managing and mitigating its impact is the flood hazard map, which helps identify high-risk areas and enables effective planning and management. This study develops a predictive model to identify flood-prone areas in the Mae Chan Basin of Thailand using machine learning, specifically the random subspace ensemble method combined with a deep neural network (RS-DNN) trained with the Nadam optimizer. The model was trained on 11 geographic information system (GIS) layers (rainfall, elevation, slope, distance from the river, soil group, NDVI, road density, curvature, land use, flow accumulation, and geology) together with flood inventory data. Feature selection was carried out using the Gain Ratio method, and the model was validated using accuracy, precision, ROC, and AUC metrics. Its effectiveness was compared with other machine learning algorithms, including random tree and support vector machine (SVM), using the Wilcoxon signed-rank test. The results showed that the RS-DNN model achieved a classification accuracy of 97% on both the training and testing datasets, compared with random tree (93%) and SVM (82%). Its performance was also confirmed by a high AUC of 0.99, versus 0.93 for random tree and 0.82 for SVM, at a significance level of 0.05. In conclusion, the RS-DNN model is a highly accurate tool for identifying flood-prone areas, aiding effective flood management and planning.
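The random subspace idea above can be sketched in pure Python: each base learner is trained on a random subset of the input features, and their predictions are combined by majority vote. The paper pairs the subspaces with a deep neural network; here a 1-nearest-neighbour base learner stands in purely for illustration, and the toy data are invented.

```python
import random
from collections import Counter

def nn_predict(train_X, train_y, x):
    # 1-NN base learner on the already-projected feature subset
    best = min(range(len(train_X)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    return train_y[best]

def random_subspace_fit_predict(X, y, X_test, n_models=5, k=2, seed=0):
    """Train n_models base learners, each on k randomly chosen features,
    then combine their predictions by majority vote."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.sample(range(len(X[0])), k)          # random feature subspace
        Xs = [[row[j] for j in idx] for row in X]      # project training data
        Ts = [[row[j] for j in idx] for row in X_test] # project test data
        preds.append([nn_predict(Xs, y, t) for t in Ts])
    # majority vote across base learners, per test sample
    return [Counter(col).most_common(1)[0][0] for col in zip(*preds)]

# Toy example: three conditioning-factor columns, labels 0 = no flood, 1 = flood
X = [[0, 0, 0], [0, 1, 0], [5, 5, 5], [5, 4, 5]]
y = [0, 0, 1, 1]
print(random_subspace_fit_predict(X, y, [[0, 0, 1], [5, 5, 4]]))  # [0, 1]
```

In the RS-DNN setting, each subspace would instead feed a DNN trained with Nadam, but the projection-and-vote structure is the same.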
{"title":"Modeling Flood Susceptible Areas Using Deep Learning Techniques with Random Subspace: A Case Study of the Mae Chan Basin in Thailand","authors":"Surachai Chantee, T. Mayakul","doi":"10.18517/ijaseit.14.1.19660","DOIUrl":"https://doi.org/10.18517/ijaseit.14.1.19660","url":null,"abstract":"Flooding is a recurring global issue that leads to substantial loss of life and property damage. A crucial tool in managing and mitigating the impact of flooding is using flood hazard maps, which help identify high-risk areas and enable effective planning and management. This study presents a study on developing a predictive model to identify flood-prone areas in the Mae Chan Basin of Thailand using machine learning techniques, precisely the random sub-space ensemble method combined with a deep neural network (RS-DNN) and Nadam optimizer. The model was trained using 11 geographic information system (GIS) layers, including rainfall, elevation, slope, distance from the river, soil group, NDVI, road density, curvature, land use, flow accumulation, geology, and flood inventory data. Feature selection was carried out using the Gain Ratio method. The model was validated using accuracy, precision, ROC, and AUC metrics. Using the Wilcoxon signed-rank test, the effectiveness was compared to other machine learning algorithms, including random tree and support vector machines. The results showed that the RS-DNN model achieved a higher classification accuracy of 97% in both the training and testing datasets, compared to random tree (93%) and SVM (82%). The model's performance was also validated by its high AUC value of (0.99), compared to a random tree (0.93) and SVM (0.82) at a significance level of 0.05. 
In conclusion, the RS-DNN model is a highly accurate tool for identifying flood-prone areas, aiding in effective flood management and planning.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140420193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-27  DOI: 10.18517/ijaseit.14.1.19433
M. Adityawan, B. Yakti, C. Sandi, Andhika Wicaksono Sasongko, Mohammad Farid, D. Harlan, A. A. Kuntoro, Widyaningtias, E. Riawan
A dam break can cause heavy losses in the affected area when no mitigation system is in place; modeling a possible dam break therefore requires a risk analysis. The Saguling, Cirata, and Jatiluhur Dams form a cascade dam system that is one of the country's most valuable assets. This study simulates a flood induced by the failure of this cascade. The dam break is simulated in HEC-HMS 4.6 under several scenarios of overtopping and piping. The scenario with the highest peak discharge is then used to simulate the overland flow in HEC-RAS 5.0.7, representing the most extreme condition should the dam break occur. The resulting flood would hit seven regencies, with a total affected area of 1,596.59 km2. An economic analysis, conducted using the ECLAC method, shows that Karawang Regency would suffer the greatest economic losses and Subang Regency the least, and that losses due to flooding are influenced by the extent of inundation, the distribution of flood depth, and the land cover of the affected area. This study is expected to assist in developing mitigation plans for possible future dam breaks and to provide recommendations for decision-makers when developing land-use areas.
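A direct-damage estimate in the spirit of the ECLAC approach described above combines inundated area, flood depth, and land-cover asset values. The damage curve, land-use classes, areas, and unit values below are invented for demonstration only and are not the study's figures.

```python
# Illustrative depth-damage loss estimate. All numbers are hypothetical
# placeholders, not results from the Saguling-Cirata-Jatiluhur study.

def depth_damage_factor(depth_m):
    """Simple piecewise depth-damage curve: fraction of asset value lost."""
    if depth_m <= 0:
        return 0.0
    if depth_m < 0.5:
        return 0.2
    if depth_m < 1.5:
        return 0.5
    return 0.8

# (land use, inundated area in km^2, mean depth in m, asset value per km^2)
cells = [
    ("paddy field", 10.0, 1.0, 2.0e9),
    ("settlement",   2.0, 0.4, 9.0e9),
]

total_loss = sum(area * depth_damage_factor(depth) * value
                 for _, area, depth, value in cells)
print(f"Estimated direct loss: {total_loss:,.0f}")  # Estimated direct loss: 13,600,000,000
```

In practice, the inundation extent and depth distribution per cell would come from the HEC-RAS flood maps, and unit values per land-cover class from regional economic data.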
{"title":"Economic Loss due to the Failure of a Cascade Dam: A Study Case in the Saguling-Cirata-Jatiluhur Dam","authors":"M. Adityawan, B. Yakti, C. Sandi, Andhika Wicaksono Sasongko, Mohammad Farid, D. Harlan, A. A. Kuntoro, Widyaningtias, E. Riawan","doi":"10.18517/ijaseit.14.1.19433","DOIUrl":"https://doi.org/10.18517/ijaseit.14.1.19433","url":null,"abstract":"A dam break event can cause heavy loss in the affected area due to the lack of a mitigation system. Therefore, modeling the possibility of a dam break occurrence requires a risk analysis. The Saguling Dam, Cirata Dam, and Jatiluhur Dam make a cascade dam and is one of the country's most valuable assets. This study simulates a flood induced by the failure of this cascade dam. The dam break is simulated using HEC-HMS 4.6 with several dam-break scenarios due to overtopping and piping. The scenario with the highest peak discharge is then used to simulate the overland flow using HEC-RAS 5.0.7, representing the most extreme condition when the dam break occurs. The generated flood induced by the dam break hit seven regencies with a total affected area of 1,596.59 km2. Moreover, an economic analysis is conducted. The result states that the most affected regency by economic losses is Karawang Regency, and the least affected is Subang Regency. The financial analysis, conducted using the ECLAC method, shows that the extent of inundation influences economic losses due to flooding, the distribution of depth, and the land cover of the affected area. 
This study hopes to assist in developing a mitigation plan for future possible dam breaks and provide a recommendation for decision-makers for developing land use areas.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140425124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}