Background: The technology acceptance model (TAM) has been used extensively to analyse user acceptance of technologies adopted by enterprises at different levels. Technology adoption has drawn attention from practitioners and academic communities alike, leading to the development of approaches to understand the concept. However, previous studies are inconsistent in the types of TAM used to explain user acceptance of technologies among small-medium enterprises (SMEs). Objective: This critical literature review aims to synthesise scholarly studies of technology adoption that use TAM. It is expected to aid the identification of the most relevant factors influencing SMEs in adopting technology. Additionally, analysing the variations of TAM developed in previous studies could suggest variables specific to each type of technology industry. Methods: An integrated approach was used, involving a review of articles on the adoption of technologies in SMEs from 2011 to 2021, retrieved from popular databases using a mixture of keywords such as technology acceptance model (TAM), technology adoption, and technology adoption in SMEs. Results: The TAM studies on user acceptance of technology in this review cover a wide range of research areas, from financial technology to human resource management-related technology. Perceived usefulness and perceived ease of use were found to be the most common factors in TAM across the 21 articles reviewed. Other variables were also observed, such as context, type of technology and level of user experience. Conclusion: The review highlights key trends in previous studies on IT adoption in SMEs, which assist researchers and developers in understanding the most relevant factors and suitable TAM models for determining user acceptance in a particular field. Keywords: Technology Acceptance Model, Technology Adoption, Small-medium Enterprises, Critical Review
Title: Technology Adoption in Small-Medium Enterprises based on Technology Acceptance Model: A Critical Review
Authors: Adisthy Shabrina Nurqamarani, Eddy Sogiarto, Nurlaeli Nurlaeli
Journal of Information Systems Engineering and Business Intelligence | Pub Date: 2021-10-28 | DOI: 10.20473/jisebi.7.2.162-172
Pub Date: 2021-04-27 | DOI: 10.20473/JISEBI.7.1.11-21
B. Miftahurrohmah, Catur Wulandari, Y. S. Dharmawan
Background: Stock investment has been gaining momentum in recent years due to the development of technology, and people invested even more during the pandemic lockdown. On the one hand, stock investment has high potential profitability; on the other, it is equally risky, so a value at risk (VaR) analysis is needed. One approach to calculating VaR is the Bayesian mixture model, which has been proven able to handle heavy-tailed cases. The VaR's accuracy then needs to be tested, for example by backtesting with the Kupiec test. Objective: This study aims to determine the VaR model of PT NFC Indonesia Tbk (NFCX) return data using Bayesian mixture modelling and backtesting. On a practical level, this study can provide information about the potential risks of investing that is grounded in empirical evidence. Methods: The data used were NFCX data retrieved from Yahoo Finance, modelled with mixture models based on the normal and Laplace distributions. The VaR was then calculated and its accuracy tested by backtesting. Results: The test results showed that the VaR with the mixture Laplace autoregressive (MLAR) approach (2;[2],[4]) was accurate at both the 5% and 1% quantiles, while the mixture normal autoregressive MNAR (2;[2],[2,4]) was accurate only at the 5% quantile. Conclusion: Based on backtesting with the Kupiec test, the better-performing NFCX VaR model in this study is MLAR(2;[2],[4]).
Title: Investment Modelling Using Value at Risk Bayesian Mixture Modelling Approach and Backtesting to Assess Stock Risk (pp. 11-21)
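The Kupiec backtest used above compares the observed number of VaR violations with the number expected at the chosen quantile. A minimal sketch of the proportion-of-failures (POF) statistic, using illustrative counts rather than the paper's NFCX data:

```python
import math

def kupiec_pof(num_obs: int, num_violations: int, var_level: float) -> float:
    """Kupiec proportion-of-failures (POF) likelihood-ratio statistic.

    num_obs        -- number of return observations backtested
    num_violations -- days on which the loss exceeded the VaR estimate
    var_level      -- VaR tail probability, e.g. 0.05 for the 5% quantile
    """
    t, x, p = num_obs, num_violations, var_level
    phat = x / t  # observed violation rate
    # Log-likelihood under H0 (true violation prob = p) vs. the observed rate
    ll_h0 = (t - x) * math.log(1 - p) + x * math.log(p)
    ll_h1 = (t - x) * math.log(1 - phat) + x * math.log(phat)
    return -2.0 * (ll_h0 - ll_h1)

# Illustrative counts: 250 trading days, 10 violations of a 5% VaR model
lr = kupiec_pof(250, 10, 0.05)
# Compare against the chi-square(1) critical value 3.841 at 5% significance
print(lr < 3.841)  # the VaR model is not rejected for these counts
```

A model is "accurate at the 5% quantile" in the paper's sense when this statistic stays below the chi-square critical value.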
Pub Date: 2021-04-27 | DOI: 10.20473/JISEBI.7.1.74-83
Yohani Setiya Rafika Nur, R. Rahmadi, C. Effendy
Background: Cancer patients can experience both physical and non-physical problems, such as psychosocial, spiritual, and emotional problems, which impact their quality of life. Previous studies on quality of life have mostly employed multivariate analyses. To our knowledge, no study has yet focused on the underlying causal relationships between factors representing the quality of life of cancer patients, which are very important when attempting to improve it. Objective: The study aims to model the causal relationships between the factors that represent cancer and quality of life. Methods: This study uses the S3C-Latent method to estimate the causal relationships between the factors. S3C-Latent combines a structural equation model (SEM), a multi-objective optimization method, and the stability selection approach to estimate a stable and parsimonious causal model. Results: Nine causal relations were found: from physical to global health with a reliability score of 0.73 and to performance status with a reliability score of 1; from emotional to global health with a reliability score of 0.71 and to performance status with a reliability score of 0.82; and from nausea, loss of appetite, dyspnea, insomnia, loss of appetite and constipation to performance status with reliability scores of 0.76, 1, 0.61, 0.76, 0.72 and 0.70, respectively. Moreover, this study found 15 associations (strong relations whose causal direction cannot be determined from the data alone) between factors, with reliability scores ranging from 0.65 to 1. Conclusion: The estimated model is consistent with the results of previous studies. The model is expected to provide evidence-based recommendations for health care providers in designing strategies to increase cancer patients' quality of life. For future research, we suggest including more variables in the model to capture a broader view of the problem.
Title: Causal Modeling Between Factors on Quality of Life in Cancer Patients Using S3C-Latent Algorithm (pp. 74-83)
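The reliability scores above come from stability selection: a relation counts as stable if it reappears across many subsamples of the data. A much simplified numpy sketch of that idea, with synthetic data and a plain correlation threshold standing in for the S3C-Latent SEM machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two questionnaire factors (hypothetical data):
# "physical" partly drives "global_health", mirroring one reported relation.
n = 300
physical = rng.normal(size=n)
global_health = 0.6 * physical + rng.normal(scale=0.8, size=n)
data = np.column_stack([physical, global_health])

def edge_reliability(data, i, j, n_resamples=200, threshold=0.3):
    """Fraction of half-sample resamples in which |corr(x_i, x_j)| exceeds a
    threshold -- a simplified stand-in for the reliability score that
    S3C-Latent derives from stability selection over SEM fits."""
    hits = 0
    half = data.shape[0] // 2
    for _ in range(n_resamples):
        idx = rng.choice(data.shape[0], size=half, replace=False)
        r = np.corrcoef(data[idx, i], data[idx, j])[0, 1]
        hits += abs(r) > threshold
    return hits / n_resamples

print(edge_reliability(data, 0, 1))  # close to 1.0 for this strong relation
```

A reliability score of 1, as reported for the physical-to-performance-status relation, means the edge survived every resample.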
Pub Date: 2021-04-27 | DOI: 10.20473/JISEBI.7.1.84-90
Ady Hermawan, L. Manik
Background: The Agile method, which is claimed to shorten the software development cycle, has been widely used; it addresses communication gaps between customers and developers. Today, DevOps has been extended as part of the Agile process to address communication gaps between members of the developer team. Despite its rising popularity, the effect of DevOps implementation on teamwork quality in software development is still unknown. Objective: The objective of this research is to study the impact of DevOps on teamwork quality. Two software houses, PT X and PT Y, were chosen as the case studies. Methods: This research uses quantitative methods, analysing the data with simple linear regression. A questionnaire was used to collect respondent data through 62 questions: 20 DevOps questions covering 4 indicators and 42 teamwork-quality questions covering 6 indicators. Results: Quality tests indicate that all instruments are valid and reliable, while hypothesis tests showed that the DevOps implementation variable influences the teamwork quality variable, explaining 75.6% of its variance. Conclusion: It can be concluded that the implementation of DevOps in software development has a positive correlation with teamwork quality.
Title: The Effect of DevOps Implementation on Teamwork Quality in Software Development (pp. 84-90)
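The 75.6% figure is the kind of number a simple linear regression's coefficient of determination produces. A sketch on synthetic questionnaire scores (hypothetical data; the paper's respondents and scores are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical questionnaire scores: x = aggregated DevOps-implementation
# score, y = aggregated teamwork-quality score per respondent.
x = rng.uniform(1, 5, size=40)
y = 0.9 * x + rng.normal(scale=0.5, size=40)

# Simple linear regression y = a + b*x via least squares
b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

# R^2: share of teamwork-quality variance explained by the DevOps score
# (the paper reports 75.6% for its own survey data)
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(float(r_squared), 2))
```

A positive slope `b` together with a high R^2 is what supports the paper's "positive correlation" conclusion.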
Pub Date: 2021-04-27 | DOI: 10.20473/JISEBI.7.1.67-73
H. Yuliansyah, Rahmasari Adi Putri Imaniati, Anggit Wirasto, Merlinda Wibowo
Background: Facilitating an effective learning process is the goal of higher education institutions. Despite improvements in curricula and resources, many students cannot graduate on time. The number of students who graduate on time is mostly lower than the number of new students enrolling in universities. This could dilute students' chances of learning effectively, as the ratio between faculty members and students becomes non-ideal. Objective: This study aims to present a prediction model for students' on-time graduation using the C4.5 algorithm, considering four features: department, GPA, English score, and age. Methods: This research was completed in three stages: data pre-processing, data processing and performance measurement. The scheme makes its predictions based on the department of study, age, GPA and English proficiency. Results: This study successfully predicted students' graduation based on the data of students who graduated in 2008-2014. The prediction achieved 90% accuracy on 300 testing records. Conclusion: The findings are expected to be useful for universities in administering their teaching and learning process.
Title: Predicting Students Graduate on Time Using C4.5 Algorithm
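C4.5 grows its decision tree by choosing, at each node, the attribute with the best information-based split criterion. A minimal sketch of the underlying entropy and information-gain computation (toy records, not the 2008-2014 student data; C4.5 proper uses the gain ratio, i.e. gain divided by split information):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, feature_index):
    """Entropy reduction from splitting the rows on one feature."""
    base = entropy(labels)
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[feature_index], []).append(label)
    remainder = sum(len(subset) / len(labels) * entropy(subset)
                    for subset in by_value.values())
    return base - remainder

# Hypothetical student records: (department, GPA band) -> graduated on time?
rows = [("IT", "high"), ("IT", "low"), ("Biz", "high"), ("Biz", "low"),
        ("IT", "high"), ("Biz", "high")]
labels = ["yes", "no", "yes", "no", "yes", "yes"]

# GPA band separates the classes perfectly here, so it would be split first
print(information_gain(rows, labels, 1) > information_gain(rows, labels, 0))
```

The algorithm recursively repeats this choice on each subset until the leaves are (nearly) pure.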
Pub Date: 2021-04-27 | DOI: 10.20473/JISEBI.7.1.22-30
F. Adhinata, Diovianto Putra Rakhmadani, Danur Wijayanto
Background: The COVID-19 pandemic has made people spend more time in online meetings than ever. Prolonged time looking at the monitor may cause fatigue, which can subsequently affect mental and physical health. A fatigue detection system is needed to monitor Internet users' well-being. Previous research on fatigue detection used a fuzzy system, but the accuracy was below 85%. In this research, machine learning is used to improve accuracy. Objective: This research examines the combination of the FaceNet algorithm with either k-nearest neighbor (K-NN) or multiclass support vector machine (SVM) to improve accuracy. Methods: In this study, we used the UTA-RLDD dataset. The features used for fatigue detection come from the face, so the dataset is segmented using the Haar Cascades method and then resized. Feature extraction uses the pre-trained FaceNet algorithm, and the extracted features are classified into three classes (focused, unfocused, and fatigue) using K-NN or multiclass SVM. Results: The combination of the FaceNet algorithm and K-NN resulted in better accuracy than FaceNet with a polynomial-kernel multiclass SVM (94.68% and 89.87%, respectively). The processing speed of both method combinations allows for real-time data processing. Conclusion: This research provides an overview of methods for early fatigue detection while working at the computer, so that users can limit the time spent staring at the screen and take breaks to maintain eye health.
Title: Fatigue Detection on Face Image Using FaceNet Algorithm and K-Nearest Neighbor Classifier
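The classification step pairs FaceNet's fixed-length face embeddings with a distance-based vote. A sketch of the K-NN stage on synthetic 128-dimensional vectors (random stand-ins for real FaceNet embeddings, which a snippet cannot compute):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 128-d vectors standing in for FaceNet face embeddings,
# one cluster per class in the paper (hypothetical data).
centers = {"focused": 0.0, "unfocused": 2.0, "fatigue": 4.0}
X, y = [], []
for label, c in centers.items():
    X.append(rng.normal(loc=c, scale=0.5, size=(50, 128)))
    y += [label] * 50
X = np.vstack(X)

def knn_predict(X_train, y_train, query, k=3):
    """Classify one embedding by majority vote among its k nearest
    neighbours in Euclidean distance."""
    dists = np.linalg.norm(X_train - query, axis=1)
    nearest = [y_train[i] for i in np.argsort(dists)[:k]]
    return max(set(nearest), key=nearest.count)

query = rng.normal(loc=4.0, scale=0.5, size=128)  # a "fatigue"-like embedding
print(knn_predict(X, y, query))  # prints "fatigue"
```

K-NN needs no training phase beyond storing the embeddings, which is one reason the combination runs fast enough for real-time use.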
Pub Date: 2021-04-27 | DOI: 10.20473/JISEBI.7.1.31-41
Syafira Fitri Auliya, Nurcahyani Wulandari
Background: The novel coronavirus disease 2019 (COVID-19) has spread rapidly across the world, infecting millions of people, many of whom have died. As part of their response plans, many countries, including Indonesia, have attempted to restrict people's mobility by launching social distancing protocols. It is therefore necessary to identify the campaign's impact and analyze the influence of mobility patterns on the pandemic's transmission rate. Objective: Using mobility data from Google and Apple, this research finds that COVID-19 daily new cases in Indonesia are most strongly related to mobility trends eight days earlier. Methods: We generate ten-day predictions of COVID-19 daily new cases and Indonesians' mobility using the Long Short-Term Memory (LSTM) algorithm to provide insight for future implementation of social distancing policies. Results: We found that all eight mobility categories show their highest correlation with COVID-19 daily new cases at a lag of eight days. We forecast daily new cases in Indonesia, DKI Jakarta and worldwide (with MAPE of 6.2%-9.4%) as well as the mobility trends in Indonesia and DKI Jakarta (with MAPE of 6.4%-287.3%). Conclusion: We find that the drivers behind the rapid transmission in Indonesia are the numbers of visits to retail and recreation venues, groceries and pharmacies, and parks. In contrast, mobility to workplaces negatively correlates with the pandemic's spread rate.
Title: The Impact of Mobility Patterns on the Spread of the COVID-19 in Indonesia (pp. 31-41)
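The eight-day lead reported above can be found by scanning lagged correlations between a mobility series and the case series. A synthetic sketch (simulated series with a built-in 8-day delay, not the Google/Apple data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated series with a known 8-day delay between mobility and cases
# (illustrative stand-in for the Google/Apple mobility and case data).
days = 120
mobility = rng.normal(size=days)
cases = np.empty(days)
cases[8:] = mobility[:-8] + rng.normal(scale=0.3, size=days - 8)
cases[:8] = cases[8]

def lag_correlation(x, y, lag):
    """Pearson correlation between x and y shifted `lag` days apart."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# Scan candidate lags; the strongest correlation appears at the true delay
best_lag = max(range(1, 15), key=lambda l: lag_correlation(mobility, cases, l))
print(best_lag)  # prints 8
```

The same scan, run per mobility category on the real data, is what identifies the eight-day lag before the LSTM forecasting step.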
Pub Date: 2021-04-27 | DOI: 10.20473/JISEBI.7.1.56-66
M. F. Naufal, Selvia Ferdiana Kusuma, Zefanya Ardya Prayuska, Ang Alexander Yoshua, Yohanes Albert Lauwoto, Nicky Setyawan Dinata, David Sugiarto
Background: The COVID-19 pandemic remains a problem in 2021. Health protocols, including wearing a face mask, are needed to prevent the spread. Enforcing face mask use manually is tiring, and AI can be used to classify images for face mask detection. There are many image classification algorithms for face mask detection, but no studies have yet compared their performance. Objective: This study aims to compare two classical machine learning classification algorithms, k-nearest neighbors (KNN) and support vector machine (SVM), with a widely used deep learning algorithm for image classification, the convolutional neural network (CNN), for face mask detection. Methods: This study uses 5-fold and 3-fold cross-validation to assess the performance of KNN, SVM, and CNN in face mask detection. Results: CNN has the best average performance, with an accuracy of 0.9683 and an average execution time of 2,507.802 seconds for classifying 3,725 images of faces with masks and 3,828 images of faces without masks. Conclusion: For a large amount of image data, KNN and SVM can be used as interim algorithms in face mask detection due to their faster execution times, while a CNN is trained to form a classification model. In that case, it is advisable to use the CNN for classification because it performs better than KNN and SVM. In the future, the classification model can be implemented in an automatic alert system to detect and warn people who are not wearing face masks.
{"title":"Comparative Analysis of Image Classification Algorithms for Face Mask Detection","authors":"M. F. Naufal, Selvia Ferdiana Kusuma, Zefanya Ardya Prayuska, Ang Alexander Yoshua, Yohanes Albert Lauwoto, Nicky Setyawan Dinata, David Sugiarto","doi":"10.20473/JISEBI.7.1.56-66","DOIUrl":"https://doi.org/10.20473/JISEBI.7.1.56-66","url":null,"abstract":"Background: The COVID-19 pandemic remains a problem in 2021. Health protocols are needed to prevent the spread, including wearing a face mask. Enforcing people to wear face masks is tiring. AI can be used to classify images for face mask detection. There are a lot of image classification algorithm for face mask detection, but there are still no studies that compare their performance. Objective: This study aims to compare the classification algorithms of classical machine learning. They are k-nearest neighbors (KNN), support vector machine (SVM), and a widely used deep learning algorithm for image classification which is convolutional neural network (CNN) for face masks detection. Methods: This study uses 5 and 3 cross-validation for assessing the performance of KNN, SVM, and CNN in face mask detection. Results: CNN has the best average performance with the accuracy of 0.9683 and average execution time of 2,507.802 seconds for classifying 3,725 faces with mask and 3,828 faces without mask images. Conclusion: For a large amount of image data, KNN and SVM can be used as temporary algorithms in face mask detection due to their faster execution times. At the same time, CNN can be trained to form a classification model. In this case, it is advisable to use CNN for classification because it has better performance than KNN and SVM. 
In the future, the classification model can be implemented for automatic alert system to detect and warn people who are not wearing face masks.","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"115 ","pages":"56-66"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72551859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
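The k-fold cross-validation protocol used in the comparison above can be sketched in a few lines. The following is an illustrative NumPy-only sketch with a toy KNN classifier on synthetic two-class data; the function names and the data are hypothetical and do not reproduce the study's code or dataset:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Classify each test point by majority vote of its k nearest training points."""
    preds = []
    for x in test_X:
        dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to all training points
        nearest = train_y[np.argsort(dists)[:k]]       # labels of the k closest points
        preds.append(np.bincount(nearest).argmax())    # majority vote
    return np.array(preds)

def k_fold_accuracy(X, y, folds=5, k=3):
    """Average accuracy of the KNN classifier over `folds` cross-validation splits."""
    idx = np.arange(len(X))
    rng = np.random.default_rng(0)
    rng.shuffle(idx)
    splits = np.array_split(idx, folds)
    accs = []
    for i in range(folds):
        test_idx = splits[i]
        train_idx = np.concatenate([splits[j] for j in range(folds) if j != i])
        preds = knn_predict(X[train_idx], y[train_idx], X[test_idx], k)
        accs.append(np.mean(preds == y[test_idx]))
    return float(np.mean(accs))

# Two well-separated synthetic clusters stand in for "mask" / "no mask" feature vectors.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
print(k_fold_accuracy(X, y, folds=5, k=3))  # near-perfect on well-separated data
```

The same loop applies to SVM or CNN by swapping the classifier inside the fold, which is how a like-for-like accuracy and execution-time comparison such as the one above can be set up.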
Pub Date: 2021-04-27  DOI: 10.20473/JISEBI.7.1.42-55
A. Biswas, Md. Saiful Islam
Background: Handwriting recognition has become an appreciable research area because of its important practical applications, but the variety of writing patterns makes automatic classification a challenging task. Classifying handwritten digits with higher accuracy is needed to address the limitations of past research, which mostly used deep learning approaches.Objective: The two most noteworthy limitations are low accuracy and slow computational speed. The current study aims to model a Convolutional Neural Network (CNN) that is simple yet more accurate in classifying English handwritten digits across different datasets. The novelty of this paper lies in exploring an efficient CNN architecture that can classify digits of different datasets accurately.Methods: The authors proposed five different CNN architectures for training and validation tasks with two datasets. Dataset-1 consists of 12,000 MNIST images and Dataset-2 consists of 29,400 digit images from Kaggle. The proposed CNN models extract the features first and then perform the classification. For performance optimization, the models utilized the stochastic gradient descent with momentum optimizer.Results: Among the five models, one was found to be the best performer, with validation accuracies of 99.53% and 98.93% for Dataset-1 and Dataset-2 respectively. Compared to the Adam and RMSProp optimizers, stochastic gradient descent with momentum yielded the highest accuracy.Conclusion: The proposed best CNN model has the simplest architecture. It provides higher accuracy for different datasets and takes less computational time. Its validation accuracy is also higher than those reported in past works.
{"title":"An Efficient CNN Model for Automated Digital Handwritten Digit Classification","authors":"A. Biswas, Md. Saiful Islam","doi":"10.20473/JISEBI.7.1.42-55","DOIUrl":"https://doi.org/10.20473/JISEBI.7.1.42-55","url":null,"abstract":"Background: Handwriting recognition becomes an appreciable research area because of its important practical applications, but varieties of writing patterns make automatic classification a challenging task. Classifying handwritten digits with a higher accuracy is needed to improve the limitations from past research, which mostly used deep learning approaches.Objective: Two most noteworthy limitations are low accuracy and slow computational speed. The current study is to model a Convolutional Neural Network (CNN), which is simple yet more accurate in classifying English handwritten digits for different datasets. Novelty of this paper is to explore an efficient CNN architecture that can classify digits of different datasets accurately.Methods: The author proposed five different CNN architectures for training and validation tasks with two datasets. Dataset-1 consists of 12,000 MNIST data and Dataset-2 consists of 29,400-digit data of Kaggle. The proposed CNN models extract the features first and then performs the classification tasks. For the performance optimization, the models utilized stochastic gradient descent with momentum optimizer.Results: Among the five models, one was found to be the best performer, with 99.53% and 98.93% of validation accuracy for Dataset-1 and Dataset-2 respectively. Compared to Adam and RMSProp optimizers, stochastic gradient descent with momentum yielded the highest accuracy.Conclusion: The proposed best CNN model has the simplest architecture. It provides a higher accuracy for different datasets and takes less computational time. The validation accuracy of the proposed model is also higher than those of in past works. 
","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"60 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77256871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-27  DOI: 10.20473/JISEBI.7.1.1-10
Raden Gunawan Santosa, Yuan Lukito, Antonius Rachmat Chrismanto
Background: Student admission at universities aims to select the best candidates who will excel and finish their studies on time. There are many factors to be considered in student admission. To assist the process, an intelligent model is needed to spot potentially high-achieving students, as well as to identify potentially struggling students, as early as possible. Objective: This research uses K-means clustering to predict students’ grade point average (GPA) based on their profiles, such as high school status and location, university entrance test score and English language competence. Methods: Data on students from the classes of 2008 to 2017 are used to create two clusters with the K-means clustering algorithm. The two cluster centroids are then used to classify all the data into two groups: high GPA and low GPA. Data from the class of 2018 are used as test data. Prediction performance is measured using accuracy, precision and recall. Results: Based on the analysis, the K-means clustering method is 78.59% accurate for merit-based-admission students and 94.627% accurate for regular-admission students. Conclusion: The prediction for merit-based-admission students has lower accuracy than that for regular-admission students because the clustering model that fits the merit-based-admission data is K = 3, whereas the prediction assumes K = 2.
{"title":"Classification and Prediction of Students’ GPA Using K-Means Clustering Algorithm to Assist Student Admission Process","authors":"Raden Gunawan Santosa, Yuan Lukito, Antonius Rachmat Chrismanto","doi":"10.20473/JISEBI.7.1.1-10","DOIUrl":"https://doi.org/10.20473/JISEBI.7.1.1-10","url":null,"abstract":"Background: Student admission at universities aims to select the best candidates who will excel and finish their studies on time. There are many factors to be considered in student admission. To assist the process, an intelligent model is needed to spot the potentially high achieving students, as well as to identify potentially struggling students as early as possible. Objective: This research uses K-means clustering to predict students’ grade point average (GPA) based on students’ profile, such as high school status and location, university entrance test score and English language competence. Methods: Students’ data from class of 2008 to 2017 are used to create two clusters using K-means clustering algorithm. Two centroids from the clusters are used to classify all the data into two groups: high GPA and low GPA. We use the data from class of 2018 as test data. The performance of the prediction is measured using accuracy, precision and recall. Results: Based on the analysis, the K-means clustering method is 78.59% accurate among the merit-based-admission students and 94.627% among the regular-admission students. 
Conclusion: The prediction involving merit-based-admission students has lower predictive accuracy values than that of involving regular-admission students because the clustering model for the merit-based-admission data is K = 3, but for the prediction, the assumption is K = 2.","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"39 1","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77532115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
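The two-step method above (cluster historical data with K-means, then classify a new student by the nearest centroid) can be sketched with plain NumPy. The data here are synthetic two-dimensional profiles and all names are hypothetical; this is not the study's implementation or dataset:

```python
import numpy as np

def k_means(X, k=2, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # init from data points
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centroids, centroids):  # converged
            break
        centroids = new_centroids
    return centroids, labels

def assign(x, centroids):
    """Classify a new profile by its nearest centroid (e.g. high- vs low-GPA group)."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Hypothetical scaled 2-D profiles (e.g. entrance-test score, English score).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.2, 0.05, (30, 2)), rng.normal(0.8, 0.05, (30, 2))])
centroids, _ = k_means(X, k=2)
group = assign(np.array([0.75, 0.85]), centroids)  # falls in the high-score cluster
```

The conclusion's point maps directly onto the `k` parameter: if the merit-based-admission data is naturally described by `k=3` clusters, forcing `k=2` at prediction time merges dissimilar students into one group and lowers accuracy.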