Pub Date: 2022-10-29 | DOI: 10.20473/jisebi.8.2.182-195
L. Wanti, O. Somantri
Background: Early detection of dental disease is essential for accurate diagnosis and timely treatment.
Objective: This research compares the Mamdani fuzzy logic and Naïve Bayes methods for detecting dental diseases.
Methods: First, data on dental disease symptoms and dental support tissues were processed, based on toothache complaints consulted with experts at a community health centre (puskesmas). Second, the Mamdani fuzzy logic and Naïve Bayes methods were applied to the proposed expert system. Third, the system recommended decisions about dental diseases based on the symptom data entered. Patient data were collected at the North Cilacap puskesmas between July and December 2021.
Results: The Mamdani fuzzy logic converts uncertain values into definite values, and the Naïve Bayes method classifies the type of dental disease by weighting patients’ answers. The methods were tested on 67 patients with dental disease complaints. The accuracy of the Mamdani fuzzy logic was 85.1%; that of the Naïve Bayes method was 82.1%.
Conclusion: Judged against expert diagnoses, the Mamdani fuzzy logic method predicted dental diseases more accurately than the Naïve Bayes method.
Keywords: Dental Disease, Expert System, Mamdani Fuzzy Logic, Naïve Bayes, Prediction
{"title":"Comparing Fuzzy Logic Mamdani and Naïve Bayes for Dental Disease Detection","authors":"L. Wanti, O. Somantri","doi":"10.20473/jisebi.8.2.182-195","DOIUrl":"https://doi.org/10.20473/jisebi.8.2.182-195","url":null,"abstract":"Background: Dental disease detection is essential for the diagnosis of dental diseases.\u0000Objective: This research compares the Mamdani fuzzy logic and Naïve Bayes in detecting dental diseases.\u0000Methods: The first is to process data on dental disease symptoms and dental support tissues based on complaints of toothache consulted with experts at a community health centre (puskesmas). The second is to apply the Mamdani fuzzy logic and the Naïve Bayes to the proposed expert system. The third is to provide recommended decisions about dental diseases based on the symptom data inputted into the expert system. Patient data were collected at the North Cilacap puskesmas between July and December 2021.\u0000Results: The Mamdani fuzzy logic converts uncertain values into definite values, and the Naïve Bayes method classifies the type of dental disease by calculating the weight of patients’ answers. The methods were tested on 67 patients with dental disease complaints. 
The accuracy rate of the Mamdani fuzzy logic was 85.1%, and the Naïve Bayes method was 82.1%.\u0000Conclusion: The prediction accuracy was compared to the expert diagnoses to determine whether the Mamdani fuzzy logic method is better than the Naïve Bayes method.\u0000 \u0000Keywords: Dental Disease, Expert System, Mamdani Fuzzy Logic, Naïve Bayes, Prediction","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75320443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
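The Mamdani pipeline the paper relies on (fuzzification, min/max rule evaluation, centroid defuzzification) can be sketched in a few lines. The symptom variables, membership functions, and rules below are illustrative assumptions, not the paper's actual rule base:

```python
def tri(x, a, b, c):
    # Triangular membership function rising from a, peaking at b, falling to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_severity(pain, swelling):
    # Fuzzify crisp inputs on a hypothetical 0-10 scale.
    pain_low, pain_high = tri(pain, 0, 0, 6), tri(pain, 4, 10, 10)
    sw_low, sw_high = tri(swelling, 0, 0, 6), tri(swelling, 4, 10, 10)

    # Rule evaluation: min as AND, max as OR (hypothetical rules).
    # R1: low pain AND low swelling  -> mild severity
    # R2: high pain OR high swelling -> serious severity
    r_mild = min(pain_low, sw_low)
    r_serious = max(pain_high, sw_high)

    # Aggregate the clipped output sets and defuzzify by centroid
    # over a discretised universe [0, 10].
    num = den = 0.0
    for i in range(101):
        z = i / 10.0
        mu = max(min(r_mild, tri(z, 0, 2, 5)),
                 min(r_serious, tri(z, 5, 8, 10)))
        num += z * mu
        den += mu
    return num / den if den else 0.0

severe = mamdani_severity(9, 8)   # high pain, high swelling
mild = mamdani_severity(1, 1)     # low pain, low swelling
```

The centroid step is what "converts uncertain values into definite values": the fuzzy output set collapses to a single crisp severity score.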
Pub Date: 2022-10-29 | DOI: 10.20473/jisebi.8.2.142-148
Sheila Shevira, I. M. A. D. Suarjaya, Putu Wira Buana
Background: Twitter is a popular social media platform on which users express emotions, thoughts, and opinions that cannot be channelled in the real world, in short, concise, and clear tweets. Because users express themselves so freely, Twitter data can be used to detect mental health trends.
Objective: This study aims to detect suicidal messages in tweets written by users with mental health issues.
Methods: Tweets were analysed and classified using lexicon-based and Naive Bayes algorithms to determine whether they contain suicidal messages.
Results: The ‘normal’ classification predominates at 52.3% of the 3,034,826 tweets collected, with an increase from September to December 2021.
Conclusion: Most tweets are categorised as ‘normal’, so the overall mental health situation appears stable. However, this finding needs re-examination in the future, especially in DKI Jakarta Province, which has the most cases of mental disorders. The Naive Bayes algorithm proved more accurate (85.5%) than the lexicon-based algorithm; future studies could improve performance further at the pre-processing stage.
Keywords: Lexicon Based, Mental Disorder, Mental Health, Naïve Bayes, Twitter
{"title":"Lexicon and Naive Bayes Algorithms to Detect Mental Health Situations from Twitter Data","authors":"Sheila Shevira, I. M. A. D. Suarjaya, Putu Wira Buana","doi":"10.20473/jisebi.8.2.142-148","DOIUrl":"https://doi.org/10.20473/jisebi.8.2.142-148","url":null,"abstract":"Background: Twitter is a popular social media where users express emotions, thoughts, and opinions that cannot be channelled in the real world. They do this by tweeting short, concise, and clear messages. Since users often express themselves, Twitter data can detect mental health trends.\u0000Objective: This study aims to detect suicidal messages through tweets written by users with mental health issues.\u0000Methods: These tweets are analysed and classified using the lexicon-based and Naive Bayes algorithms to determine whether it contains suicidal messages.\u0000Results: The classification results show that the ‘normal’ classification is predominant at 52.3% of the total 3,034,826 tweets, which indicates an increase from September to December 2021.\u0000Conclusion: Most tweets are categorised as ‘normal’, therefore the mental health status appears secure. However, this finding needs to be re-examined in the future, especially in DKI Jakarta Province, which has the most cases of mental disorders. This study found that the Naive Bayes algorithm is more accurate (85.5%) than the lexicon-based algorithm. 
This can be improved in future studies by increasing performance at the pre-processing stage.\u0000 \u0000Keywords: Lexicon Based, Mental Disorder, Mental Health, Naïve Bayes, Twitter","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80457244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
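The lexicon-based side of the comparison scores a tweet by summing word weights from a polarity word list. A toy sketch with a hypothetical English mini-lexicon (the study used an Indonesian lexicon and a far larger vocabulary):

```python
# Hypothetical mini-lexicon with per-word weights; the study's
# actual Indonesian lexicon is not reproduced here.
RISK_WORDS = {"hopeless": 2, "suicide": 3, "worthless": 2, "die": 2}
SAFE_WORDS = {"happy": 1, "grateful": 1, "excited": 1}

def classify_tweet(text):
    # Tokenise naively and sum signed weights from the lexicon.
    tokens = text.lower().split()
    score = (sum(RISK_WORDS.get(t, 0) for t in tokens)
             - sum(SAFE_WORDS.get(t, 0) for t in tokens))
    # A positive risk score flags the tweet; otherwise it is 'normal'.
    return "flagged" if score > 0 else "normal"

labels = [classify_tweet(t) for t in [
    "feeling hopeless and worthless today",
    "so happy and grateful for my friends",
]]
```

Real pipelines add the pre-processing the conclusion calls for (stemming, slang normalisation, stop-word removal) before the lookup, which is where the lexicon approach gains or loses most of its accuracy.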
Pub Date: 2022-10-29 | DOI: 10.20473/jisebi.8.2.207-217
Rias Kumalasari Devi, D. I. Sensuse, Kautsarina, Ryan Randy Suryono
Background: Information security is essential for organisations, hence the need for risk assessment. Information security risk assessment (ISRA) identifies, assesses, and prioritises risks according to organisational goals. Previous studies have analysed and discussed ISRA, so a more systematic understanding of the existing models is needed.
Objective: This study aims to determine the types of ISRA and to fill a gap in literature review research by categorising existing frameworks, models, and methods.
Methods: The systematic literature review (SLR) approach developed by Kitchenham was applied. A total of 25 studies were selected, classified, and analysed according to defined criteria.
Results: Most selected studies focus on implementing and developing new models for risk assessment, and most concern information systems in general.
Conclusion: There is no single best framework or model; the best choice must be tailored to organisational goals. Previous researchers have developed several new ISRA models, but empirical evaluation research is needed, and future work should develop more robust risk assessment models for cloud computing systems.
Keywords: Information Security Risk Assessment, ISRA, Security Risk
{"title":"Information Security Risk Assessment (ISRA): A Systematic Literature Review","authors":"Rias Kumalasari Devi, D. I. Sensuse, Kautsarina, Ryan Randy Suryono","doi":"10.20473/jisebi.8.2.207-217","DOIUrl":"https://doi.org/10.20473/jisebi.8.2.207-217","url":null,"abstract":"Background: Information security is essential for organisations, hence the risk assessment. Information security risk assessment (ISRA) identifies, assesses, and prioritizes risks according to organisational goals. Previous studies have analysed and discussed information security risk assessment. Therefore, it is necessary to understand the models more systematically.\u0000Objective: This study aims to determine types of ISRA and fill a gap in literature review research by categorizing existing frameworks, models, and methods.\u0000Methods: The systematic literature review (SLR) approach developed by Kitchenham is applied in this research. A total of 25 studies were selected, classified, and analysed according to defined criteria.\u0000Results: Most selected studies focus on implementing and developing new models for risk assessment. In addition, most are related to information systems in general.\u0000Conclusion: The findings show that there is no single best framework or model because the best framework needs to be tailored according to organisational goals. Previous researchers have developed several new ISRA models, but empirical evaluation research is needed. 
Future research needs to develop more robust models for risk assessments for cloud computing systems.\u0000 \u0000Keywords: Information Security Risk Assessment, ISRA, Security Risk","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84043668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
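Most ISRA methods the review categorises share a common core: score each risk by likelihood and impact, then rank. A generic qualitative-scoring sketch (the asset register and 1-5 scales are hypothetical, not taken from any specific framework in the review):

```python
# Hypothetical risk register: (asset, likelihood 1-5, impact 1-5).
register = [
    ("customer database", 4, 5),
    ("public website", 3, 3),
    ("internal wiki", 2, 1),
]

def prioritise(risks):
    # Classic qualitative scoring: risk = likelihood x impact,
    # sorted so the highest-exposure assets come first.
    scored = [(asset, likelihood * impact) for asset, likelihood, impact in risks]
    return sorted(scored, key=lambda r: r[1], reverse=True)

ranking = prioritise(register)
```

Frameworks differ mainly in how likelihood and impact are elicited and in whether scores are tied to organisational goals, which is why the review finds no single best model.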
Pub Date: 2022-04-26 | DOI: 10.20473/jisebi.8.1.61-70
Kuncahyo Setyo Nugroho, F. A. Bachtiar, W. Mahmudy
Background: Term-weighting plays a key role in detecting emotion in texts. Studies of term-weighting schemes aim to improve short-text classification by distinguishing terms accurately.
Objective: This study aims to identify the best term-weighting schemes and to examine how n-gram combinations interact with different classification algorithms in detecting emotion in Twitter texts.
Methods: The data came from the Indonesian Twitter Emotion Dataset, with features generated from different n-gram combinations and weighted by two approaches. Tests were carried out using ten-fold cross-validation on three classification algorithms, with performance measured by accuracy and F1 score.
Results: The best-performing term-weighting schemes are Term Frequency-Inverse Category Frequency (TF-ICF) and Term Frequency-Relevance Frequency (TF-RF). Supervised schemes generally outperformed unsupervised ones, but the advantage was not consistent: Term Frequency-Inverse Document Frequency (TF-IDF) also performed exceptionally well in some experiments, so the traditional TF-IDF remains worth considering.
Conclusion: This study provides recommendations for emotion detection in texts. Future studies could improve performance by addressing class imbalance in the dataset.
Keywords: Emotion Detection, Feature Engineering, Term-Weighting, Text Mining
{"title":"Detecting Emotion in Indonesian Tweets: A Term-Weighting Scheme Study","authors":"Kuncahyo Setyo Nugroho, F. A. Bachtiar, W. Mahmudy","doi":"10.20473/jisebi.8.1.61-70","DOIUrl":"https://doi.org/10.20473/jisebi.8.1.61-70","url":null,"abstract":"Background: Term-weighting plays a key role in detecting emotion in texts. Studies in term-weighting schemes aim to improve short text classification by distinguishing terms accurately.\u0000Objective: This study aims to formulate the best term-weighting schemes and discover the relationship between n-gram combinations and different classification algorithms in detecting emotion in Twitter texts.\u0000Methods: The data used was the Indonesian Twitter Emotion Dataset, with features generated through different n-gram combinations. Two approaches assign weights to the features. Tests were carried out using ten-fold cross-validation on three classification algorithms. The performance of the model was measured using accuracy and F1 score.\u0000Results: The term-weighting schemes with the highest performance are Term Frequency-Inverse Category Frequency (TF-ICF) and Term Frequency-Relevance Frequency (TF-RF). The scheme with a supervised approach performed better than the unsupervised one. However, we did not find a consistent advantage as some of the experiments found that Term Frequency-Inverse Document Frequency (TF-IDF) also performed exceptionally well. The traditional TF-IDF method remains worth considering as a term-weighting scheme.\u0000Conclusion: This study provides recommendations for emotion detection in texts. 
Future studies can benefit from dealing with imbalances in the dataset to provide better performance.\u0000Keywords: Emotion Detection, Feature Engineering, Term-Weighting, Text Mining","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74738818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
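TF-ICF, the top scheme above, swaps IDF's document count for a category count: a term occurring in few emotion categories is more discriminative. A minimal sketch under the common definition tf(t, d) · log(|C| / cf(t)), where cf(t) is the number of categories containing the term (the paper may normalise differently); the toy corpus is hypothetical:

```python
import math

# Toy labelled corpus: (tokens, emotion category); hypothetical data.
docs = [
    (["sangat", "senang", "sekali"], "joy"),
    (["senang", "dan", "bangga"], "joy"),
    (["sangat", "marah", "sekali"], "anger"),
]

def tf_icf(term, doc_tokens, corpus):
    # Term frequency within the document being weighted.
    tf = doc_tokens.count(term) / len(doc_tokens)
    categories = {cat for _, cat in corpus}
    # cf(t): number of categories whose documents contain the term.
    cf = sum(1 for c in categories
             if any(term in toks for toks, cat in corpus if cat == c))
    return tf * math.log(len(categories) / cf)

w_senang = tf_icf("senang", docs[0][0], docs)  # appears only in 'joy'
w_sangat = tf_icf("sangat", docs[0][0], docs)  # appears in both categories
```

A term spread across every category (like "sangat") gets weight zero, which is exactly the supervised signal TF-IDF lacks.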
Background: A calorie estimation system based on food images uses computer vision to recognize food and count its calories. The system requires two key processes, detection and segmentation, and many algorithms can perform each, with differing levels of accuracy.
Objective: This study aims to improve the accuracy of the calorie calculation and segmentation processes by combining the Mask R-CNN and GrabCut algorithms.
Methods: The segmentation masks generated by Mask R-CNN and GrabCut were combined into a new mask, which was then used to calculate the calories. Taking the image augmentation technique into account, the accuracy of the calorie calculation and segmentation processes was observed to evaluate the method’s performance.
Results: The proposed method achieved a satisfying result, with an average calculation error below 10% and an F1 score above 90% in all scenarios.
Conclusion: Compared to earlier studies, the combination of Mask R-CNN and GrabCut calculates the calories of foods with different shapes more satisfactorily.
Keywords: Augmentation, Calorie Calculation, Detection
{"title":"Mask R-CNN and GrabCut Algorithm for an Image-based Calorie Estimation System","authors":"Tiara Lestari Subaran, Transmissia Semiawan, Nurjannah Syakrani","doi":"10.20473/jisebi.8.1.1-10","DOIUrl":"https://doi.org/10.20473/jisebi.8.1.1-10","url":null,"abstract":"Background: A calorie estimation system based on food images uses computer vision technology to recognize and count calories. There are two key processes required in the system: detection and segmentation. Many algorithms can undertake both processes, each algorithm with different levels of accuracy.\u0000Objective: This study aims to improve the accuracy of calorie calculation and segmentation processes using a combination of Mask R-CNN and GrabCut algorithms.\u0000Methods: The segmentation mask generated from Mask R-CNN and GrabCut were combined to create a new mask, then used to calculate the calorie. By considering the image augmentation technique, the accuracy of the calorie calculation and segmentation processes were observed to evaluate the method’s performance.\u0000Results: The proposed method could achieve a satisfying result, with an average calculation error value of less than 10% and an F1 score above 90% in all scenarios.\u0000Conclusion: Compared to earlier studies, the combination of Mask R-CNN and GrabCut could obtain a more satisfying result in calculating food calories with different shapes.\u0000Keywords: Augmentation, Calorie Calculation, Detection","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"83 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-26","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79331984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: YOLOv4 is one of the fastest algorithms for object detection. Its training techniques, the bag of freebies and bag of specials, help prevent overfitting, and they can be combined with early stopping, which also prevents overfitting.
Objective: This study aims to assess the effectiveness of early stopping in preventing overfitting during the YOLOv4 training process.
Methods: Four datasets, grouped by training data size and object class, were tested in the experiment, which used three patience hyperparameter values: 2, 3, and 5. Each experiment was repeated eight times to assess consistency.
Results: The experimental results show that early stopping is triggered more frequently when training on fewer than 2,000 images. Of the three patience values, 2 and 3 halved the training duration without sacrificing accuracy, whereas patience 5 rarely triggered early stopping. No pattern of correlation was found between the number of object classes and early stopping.
Conclusion: Early stopping is useful only when training on fewer than 2,000 images; a patience value of 2 or 3 is recommended.
Keywords: Early Stopping, Overfitting, Training data, YOLOv4
{"title":"Early Stopping Effectiveness for YOLOv4","authors":"Afif Rana Muhammad, Hamzah Prasetio Utomo, Priyanto Hidayatullah, Nurjannah Syakrani","doi":"10.20473/jisebi.8.1.11-20","DOIUrl":"https://doi.org/10.20473/jisebi.8.1.11-20","url":null,"abstract":"Background: YOLOv4 is one of the fastest algorithms for object detection. Its methods, i.e., bag of freebies and bag of specials, can prevent overfitting, but this can be combined with early stopping as it could also prevent overfitting.\u0000Objective: This study aims to identify the effectiveness of early stopping in preventing overfitting in the YOLOv4 training process.\u0000Methods: Four datasets were grouped based on the training data size and object class, These datasets were tested in the experiment, which was carried out using three patience hyperparameters: 2, 3, and 5. To assess the consistency, it was repeated eight times.\u0000Results: The experimental results show that early stopping is triggered more frequently in training with data below 2,000 images. Of the three patience hyperparameters used, patience 2 and 3 were able to halve the training duration without sacrificing accuracy. Patience 5 rarely triggers early stopping. There is no pattern of correlation between the number of object classes and early stopping.\u0000Conclusion: Early stopping is useful only in training with data below 2,000 images. 
Patience with a value of 2 or 3 are recommended.\u0000Keywords: Early Stopping, Overfitting, Training data, YOLOv4","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77037166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
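The patience mechanism the experiment varies (2, 3, 5) is simple to state in code: stop once the validation loss has failed to improve for `patience` consecutive evaluations. The loss history below is hypothetical, chosen only to show why a small patience triggers where a large one does not:

```python
def train_with_early_stopping(losses, patience=3):
    # Stop when the loss fails to improve for `patience` consecutive
    # evaluations; return the epoch at which training stopped.
    best, bad_epochs = float("inf"), 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, bad_epochs = loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch          # early stop triggered
    return len(losses) - 1            # ran to completion

# Hypothetical validation-loss history with a plateau after epoch 2.
history = [1.0, 0.8, 0.7, 0.72, 0.71, 0.73, 0.70, 0.69]
stop2 = train_with_early_stopping(history, patience=2)  # stops early
stop5 = train_with_early_stopping(history, patience=5)  # runs to the end
```

With patience 2 the plateau at epochs 3-4 ends training; with patience 5 the late improvement at epoch 7 is reached first, mirroring the finding that patience 5 rarely triggers.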
Pub Date: 2022-04-26 | DOI: 10.20473/jisebi.8.1.80-90
Ahmad Arif Santosa, F. Alamsjah
Background: With massive volumes of e-commerce transactions and document transfers, reliable system protection is needed. A digital signature is a tool built on encryption and decryption algorithms and a secret key to prevent data theft and online fraud.
Objective: This research proposes an integrated technology-organization-environment (TOE) and unified theory of acceptance and use of technology (UTAUT) model to determine the factors affecting consumers’ intention to adopt a digital signature system, from the perspective of finance and information system departments across various industries.
Methods: The analytical method is Structural Equation Modeling (SEM), using the SmartPLS version 3.0 statistical software to examine the hypothesised connections between latent variables.
Results: Top management support, enterprise size, and social influence have significant positive effects on digital signature adoption, whereas user involvement and perceived simplicity have negative effects on adoption in finance and information system departments.
Conclusion: The current research suggests that executive levels in finance and information system departments encourage the adoption of digital signature tools in daily tasks to increase efficiency.
Keywords: Digital signature, consumer intention, finance and information system, structural equation modeling, TOE and UTAUT
{"title":"The Drivers of a Digital Signature System Adoption: Evidence from Finance and Information System Departments","authors":"Ahmad Arif Santosa, F. Alamsjah","doi":"10.20473/jisebi.8.1.80-90","DOIUrl":"https://doi.org/10.20473/jisebi.8.1.80-90","url":null,"abstract":"Background: With the massive e-commerce transactions and document transfers, reliable system protection is needed. A digital signature is a tool that consists of encryption and decryption algorithms in a secret key to prevent data theft and online fraud.\u0000Objective: This research proposes an integrated technology-organization-environment (TOE) and the unified theory of acceptance and use of technology (UTAUT) to determine the factors affecting consumer intention to adopt the digital signature system. This research uses finance and information system departments’ perspectives in various industries.\u0000Methods: The analytical method is the Structural Equation Modeling (SEM) approach using the Smart Partial Least Square statistical version 3.0 software to examine the hypothesized connections between latent variables.\u0000Results: The results show that support from top management, size of the enterprise, and social influence have significant and positive effects on digital signature adoption. 
Meanwhile, user involvement and perceived simplicity have a negative effect on the adoption of a digital signature system in finance and information system departments.\u0000Conclusion: The current research suggests that executive levels in the finance and information system departments encourage the adoption of digital signature tools in doing daily tasks to increase efficiency.\u0000Keywords: Digital signature, consumer intention, finance and information system, structural equation modeling, TOE and UTAUT","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"80 3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79649396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
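The sign-and-verify workflow whose adoption the paper studies can be illustrated with Python's standard library. Note this sketch uses a symmetric HMAC as a stand-in: production digital signatures use asymmetric key pairs (e.g. RSA or ECDSA), where only the holder of the private key can sign; the document text and key here are invented:

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical key material

def sign(document: bytes) -> str:
    # Attach a keyed digest so any tampering is detectable.
    return hmac.new(SECRET_KEY, document, hashlib.sha256).hexdigest()

def verify(document: bytes, signature: str) -> bool:
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(sign(document), signature)

doc = b"invoice #1234: pay IDR 5,000,000"
sig = sign(doc)
ok = verify(doc, sig)                                    # authentic
tampered = verify(b"invoice #1234: pay IDR 9,000,000", sig)  # altered
```

Either way, the user-facing step is the same one-click sign/verify that the perceived-simplicity construct in the model tries to capture.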
Pub Date: 2022-04-26 | DOI: 10.20473/jisebi.8.1.71-79
E. Suryani, R. A. Hendrawan, Benyamin Limanto, Fatharani Wafda, Inayah Auliyah
Background: Some studies have shown that Return on Total Assets is a strategy to increase market share, and others that social media such as WeChat can increase it. However, no studies have considered Instagram engagement as a driver of market share.
Objective: This study aims to identify the variable linkages that increase market share in small and medium-sized enterprises (SMEs) through a system dynamics approach.
Methods: Using a System Dynamics approach, this study presents a model simulation of increasing market share that incorporates Instagram features. A Causal Loop Diagram is created and converted into a simulated Stock Flow Diagram, and the simulation output is validated with the mean comparison and error variance (%) formulas.
Results: Instagram engagement increases market share from 0.009 to 0.018. Engagement can be raised by posting regularly and undertaking more activities, such as increasing post frequency, holding contests, and maximizing all features.
Conclusion: This study has successfully modeled an information technology intervention, i.e., a promotion module on social media. However, it has not yet demonstrated how the individual features gain more market share, so future research is needed.
Keywords: Causal Loop Diagram, Engagement, Market Share, Stock Flow Diagram, System Dynamics
{"title":"The Impact of Social Media Engagement on Market Share: A System Dynamics Model","authors":"E. Suryani, R. A. Hendrawan, Benyamin Limanto, Fatharani Wafda, Inayah Auliyah","doi":"10.20473/jisebi.8.1.71-79","DOIUrl":"https://doi.org/10.20473/jisebi.8.1.71-79","url":null,"abstract":"Background: Some studies have shown that Return on Total Assets is a strategy to increase market share. Other studies have also shown that social media like WeChat can increase market share. However, no studies have considered Instagram engagement in increasing market share.\u0000Objective: This study aims to identify variable linkage that increases market share through a dynamic system approach in small and medium-sized enterprises (SMEs).\u0000Methods: Using a System Dynamics approach, this study presents a model simulation with a proposed increase in market share by considering Instagram features. This approach creates a Causal Loop Diagram converted into a simulated Stock Flow Diagram. The value generated from the simulation is validated with the mean comparison and % error variance formulas.\u0000Results: Instagram engagement increases market share from 0.009 to 0.018. Such engagement can be increased by posting regularly and doing more activities, such as increasing post frequency, holding contests, and maximizing all features.\u0000Conclusion: This study has successfully modeled information technology, i.e., a promotion module on social media. 
However, this work has not yet demonstrated how the features can gain more market share, so future research is needed.\u0000 Keywords: Causal Loop Diagram, Engagement, Market Share, Stock Flow Diagram, System Dynamics","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"97 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85322981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
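A Stock Flow Diagram like the one described simulates a stock (market share) accumulating its net flow over discrete time steps. A minimal Euler-integration sketch in which an engagement rate drives growth; every coefficient below is a hypothetical placeholder, not the paper's calibrated model:

```python
def simulate_market_share(initial=0.009, engagement=0.02,
                          growth_per_engagement=0.3, months=12):
    # Stock: market share. Flow: engagement-driven growth that
    # saturates as the share approaches 1 (logistic-style limit).
    share = initial
    history = [share]
    for _ in range(months):
        inflow = growth_per_engagement * engagement * share * (1 - share)
        share += inflow            # Euler step, dt = 1 month
        history.append(share)
    return history

trajectory = simulate_market_share()
```

Validation in the paper's sense would compare the simulated trajectory's mean and error variance against observed market-share data before trusting any policy experiment (more posts, contests) run on the model.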
Pub Date: 2022-04-26 | DOI: 10.20473/jisebi.8.1.91-99
Cut Fiarni, Arif Gunawan, Fredrick Victor
Background: The goal of academic supervision is to help students plan their academic journey and graduate on time. An intelligent support system is needed to spot potentially struggling students and identify their issues as early as possible.
Objective: This study aims to develop an academic advising recommender system that improves decision-making through system utility, ease of use, and clearly visualized information, and to find the best advising relationship model to implement in the proposed system.
Methods: The system was modeled following a hybrid approach to obtain information and suggest recommended actions. The recommendations were modeled with backward chaining to prevent students from dropping out.
Results: The system’s recommendations were validated using conformity level, with a result of 94.45%. Utility was evaluated with black-box testing, yielding satisfactory responses. User acceptance, evaluated with the technology acceptance model (TAM), reached 85% for ease of use and 91.2% for perceived usefulness across the four main features: study planning, graduate timeline simulation, progress report, and visualization of academic KPIs.
Conclusion: We propose an academic recommender system with KPI visualization and academic planning information.
Keywords: Academic advising model, recommender system, backward chaining, goal-driven, technology acceptance model, certainty factor
{"title":"Academic Recommender System Using Engagement Advising and Backward Chaining Model","authors":"Cut Fiarni, Arif Gunawan, Fredrick Victor","doi":"10.20473/jisebi.8.1.91-99","DOIUrl":"https://doi.org/10.20473/jisebi.8.1.91-99","url":null,"abstract":"Background: The goal of academic supervision is to help students plan their academic journey and graduate on time. An intelligent support system is needed to spot potentially struggling students and identify the issues as early as possible.\u0000Objective: This study aims to develop an academic advising recommender system that improves decision-making through system utility, ease of use, and clearly visualized information. The study also aims to find the best advising relationship model to be implemented in the proposed system.\u0000Methods: The system was modeled by following the hybrid approach to obtain information and suggest recommended actions. The recommendation was modeled by backward chaining to prevent students from dropping out.\u0000Results: To validate the recommendations given by the proposed system, we used conformity level, and the result was 94.45%. To evaluate the utility of the system, we used the backbox method, resulting in satisfactory responses. 
Lastly, to evaluate user acceptance, we used the technology acceptance model (TAM), resulting in 85% ease of use and 91.2% perceived usefulness for the four main features: study planning, graduate timeline simulation, progress report, and visualization of academic KPIs.\u0000Conclusion: We propose an academic recommender system with KPI visualization and academic planning information.\u0000Keywords: Academic advising model, recommender system, backward chaining, goal-driven, technology acceptance model, certainty factor","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"82 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79034884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
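The backward-chaining (goal-driven) model named in the abstract above can be sketched as a small recursive prover: start from a conclusion and work backwards through the rules until only known facts remain. The rule base and fact names below are hypothetical illustrations, not the paper's actual advising rules.

```python
# Minimal backward-chaining sketch: a goal is proven if it is a known fact,
# or if every antecedent of some rule concluding that goal can be proven.
# The rule base below is a hypothetical illustration.
RULES = {
    "risk_of_dropout": [["low_gpa", "low_attendance"]],
    "low_gpa": [["gpa_below_2.0"]],
}

def prove(goal, facts, rules=RULES):
    if goal in facts:
        return True
    for antecedents in rules.get(goal, []):
        if all(prove(a, facts, rules) for a in antecedents):
            return True
    return False

# Goal-driven query: reason from the conclusion back to the observed facts.
print(prove("risk_of_dropout", {"gpa_below_2.0", "low_attendance"}))  # True
```

Because the search starts at the goal, only rules relevant to the queried recommendation are ever evaluated, which is what makes the goal-driven formulation a natural fit for per-student advising queries.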
Pub Date : 2022-04-26DOI: 10.20473/jisebi.8.1.51-60
Firman Maulana Adhari, T. Abidin, R. Ferdhiana
Background: In the last decade, the number of registered vehicles has grown exponentially. With more vehicles on the road, traffic jams, accidents, and violations also increase. A license plate plays a key role in solving such problems because it stores a vehicle’s historical information. Therefore, automated license-plate character recognition is needed. Objective: This study proposes a recognition system that uses convolutional neural network (CNN) architectures to recognize characters from license plate images; we call the proposed variant a modified LeNet-5 architecture. Methods: We used four different CNN architectures to recognize license plate characters: AlexNet, LeNet-5, modified LeNet-5, and ResNet-50. We evaluated their performance based on accuracy and computation time. We compared the deep learning methods with Freeman chain code (FCC) extraction combined with a support vector machine (SVM). We also evaluated the Otsu and static-threshold binarization methods when applied in the FCC extraction step. Results: The ResNet-50 and the modified LeNet-5 produce the best accuracy during training at 0.97. The precision and recall scores of ResNet-50 are both 0.97, while the modified LeNet-5’s values are 0.98 and 0.96, respectively: a slightly higher precision but a lower recall. The modified LeNet-5 also shows slightly lower accuracy during testing than ResNet-50. Meanwhile, FCC extraction with Otsu binarization performs better than with static-threshold binarization. Overall, the FCC extraction technique performs less effectively than CNN. The modified LeNet-5 computes the fastest at 7 mins and 57 secs, while ResNet-50 needs 42 mins and 11 secs. Conclusion: We discovered that CNN is better than the FCC extraction method with SVM. Both ResNet-50 and the modified LeNet-5 perform best during training, with an F-measure of 0.97. 
However, ResNet-50 outperforms the modified LeNet-5 during testing, with F-measures of 1.00 and 0.97, respectively. In addition, FCC extraction using Otsu binarization is better than using static-threshold binarization: Otsu binarization reached 0.91, higher than binarization with a static threshold of 127, because Otsu produces a dynamic threshold value that adapts to each image’s light intensity. Keywords: Convolutional Neural Network, Freeman Chain Code, License Plate Character Recognition, Support Vector Machine
{"title":"License Plate Character Recognition using Convolutional Neural Network","authors":"Firman Maulana Adhari, T. Abidin, R. Ferdhiana","doi":"10.20473/jisebi.8.1.51-60","DOIUrl":"https://doi.org/10.20473/jisebi.8.1.51-60","url":null,"abstract":"Background: In the last decade, the number of registered vehicles has grown exponentially. With more vehicles on the road, traffic jams, accidents, and violations also increase. A license plate plays a key role in solving such problems because it stores a vehicle’s historical information. Therefore, automated license-plate character recognition is needed.\u0000Objective: This study proposes a recognition system that uses convolutional neural network (CNN) architectures to recognize characters from a license plate’s images. We called it a modified LeNet-5 architecture.\u0000Methods: We used four different CNN architectures to recognize license plate characters: AlexNet, LeNet-5, modified LeNet-5, and ResNet-50 architectures. We evaluated the performance based on their accuracy and computation time. We compared the deep learning methods with the Freeman chain code (FCC) extraction with support vector machine (SVM). We also evaluated the Otsu and the threshold binarization performances when applied in the FCC extraction method.\u0000Results: The ResNet-50 and modified LeNet-5 produce the best accuracy during the training at 0.97. The precision and recall scores of the ResNet-50 are both 0.97, while the modified LeNet-5’s values are 0.98 and 0.96, respectively. The modified LeNet-5 shows a slightly higher precision score but a lower recall score. The modified LeNet-5 shows a slightly lower accuracy during the testing than ResNet-50. Meanwhile, the Otsu binarization’s FCC extraction is better than the threshold binarization. Overall, the FCC extraction technique performs less effectively than CNN. 
The modified LeNet-5 computes the fastest at 7 mins and 57 secs, while ResNet-50 needs 42 mins and 11 secs.\u0000Conclusion: We discovered that CNN is better than the FCC extraction method with SVM. Both ResNet-50 and the modified LeNet-5 perform best during the training, with F-measure scoring 0.97. However, ResNet-50 outperforms the modified LeNet-5 during the testing, with F-measure at 1.00 and 0.97, respectively. In addition, the FCC extraction using the Otsu binarization is better than the threshold binarization. Otsu binarization reached 0.91, higher than the static threshold binarization at 127. In addition, Otsu binarization produces a dynamic threshold value depending on the images’ light intensity.\u0000Keywords: Convolutional Neural Network, Freeman Chain Code, License Plate Character Recognition, Support Vector Machine","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"12 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91427398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
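The Freeman chain code feature used in the study above encodes a character's contour as a sequence of 8-connected direction codes. A minimal sketch, assuming an already-traced, ordered boundary (the study's actual contour-tracing and SVM stages are not reproduced here, and `freeman_chain_code` is an illustrative name):

```python
# Freeman chain code: map each step between consecutive boundary pixels to
# one of 8 direction codes, 0 = east, numbered counter-clockwise to
# 7 = south-east. Image coordinates: x grows right, y grows down.
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def freeman_chain_code(boundary):
    """boundary: ordered list of 8-connected (x, y) pixel coordinates."""
    return [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(boundary, boundary[1:])]

# Three steps along a boundary: east, south, west.
print(freeman_chain_code([(0, 0), (1, 0), (1, 1), (0, 1)]))  # [0, 6, 4]
```

The resulting code sequence is translation-invariant, which is one reason chain codes were a popular hand-crafted shape feature for character classifiers such as SVMs before CNNs became dominant.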
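The abstract's comparison between Otsu binarization and a static threshold of 127 can be illustrated with a small pure-Python version of Otsu's method, which picks the threshold that maximizes the between-class variance of the grayscale histogram. This is a sketch of the standard algorithm, not the study's implementation, and the function names are illustrative:

```python
# Otsu's method: scan all 256 candidate thresholds and keep the one that
# maximizes the between-class variance of background vs. foreground,
# instead of using a fixed cut-off such as 127.
def otsu_threshold(pixels):
    hist = [0] * 256
    for p in pixels:                       # grayscale values in 0..255
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_b = 0.0                            # intensity sum of background class
    w_b = 0                                # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b                  # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                  # background mean
        m_f = (sum_all - sum_b) / w_f      # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, threshold):
    return [1 if p > threshold else 0 for p in pixels]
```

Because the threshold is recomputed from each image's histogram, a dark or brightly lit plate still separates cleanly into character and background classes, which matches the abstract's observation that the dynamic Otsu threshold beats the static value of 127.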