Pub Date: 2024-02-28 | DOI: 10.20473/jisebi.10.1.139-151
Erma Suryani, Haris Rafi, Amalia Utamima
Background: As the population increases, the need for corn products also increases. Corn is needed for various purposes, such as food consumption, industry, and animal feed. Therefore, increasing corn production is crucial to support food availability and the food industry.

Objective: The objective of this project is to create a model to increase corn farming productivity using scenarios based on drip irrigation systems and farmer field school programs.

Methods: A system dynamics approach is utilized to model the complexity and nonlinear behaviour of the corn farming system. In addition, several scenarios are formulated to achieve the objective of increasing corn productivity.

Results: Simulation results showed that adopting a drip irrigation system and operating a farmer field school program would increase corn productivity.

Conclusion: The corn farming system model was successfully developed in this research. The scenarios of implementing a drip irrigation system and the farmer field school program allowed farmers to increase corn productivity. Through the drip irrigation scenario, farmers can save water, thereby reducing the impact of drought. Meanwhile, the farmer field school scenario enables farmers to manage agriculture effectively. This study suggests that further research could consider the byproducts of corn production to increase the profits of corn farmers.

Keywords: Corn Farming, Decision Support System, Modeling, Simulation, System Dynamics
Title: Model-based Decision Support System Using a System Dynamics Approach to Increase Corn Productivity
Journal: Journal of Information Systems Engineering and Business Intelligence
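The compounding feedback the abstract describes — scenario interventions raising productivity year over year — can be sketched as a minimal stock-and-flow simulation. The growth rates and scenario effects below are hypothetical placeholders for illustration, not the paper's calibrated values:

```python
# Illustrative stock-and-flow sketch of a system dynamics corn-productivity model.
# All parameter values and scenario effects are hypothetical, not from the paper.

def simulate(years=10, base_productivity=5.0,
             drip_irrigation=False, field_school=False):
    """Simulate corn productivity (tons/ha) year by year."""
    productivity = base_productivity        # the stock
    trajectory = []
    for _ in range(years):
        growth = 0.01                       # baseline improvement rate (assumed)
        if drip_irrigation:
            growth += 0.02                  # assumed water-saving effect
        if field_school:
            growth += 0.015                 # assumed farm-management effect
        productivity *= (1 + growth)        # feedback: growth compounds on the stock
        trajectory.append(round(productivity, 3))
    return trajectory

baseline = simulate()
scenario = simulate(drip_irrigation=True, field_school=True)
print(baseline[-1], scenario[-1])
```

Running both scenarios side by side mirrors the paper's comparison: the combined-intervention trajectory ends above the baseline one.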
Pub Date: 2024-02-28 | DOI: 10.20473/jisebi.10.1.81-93
Purno Tri Aji, Luthfi Ramadani
Background: Over the years, the role of startups in healthcare delivery has increased significantly, particularly in telemedicine. However, there are still some inherent challenges, including cultural factors, lack of digital literacy, and uneven internet network infrastructure, that must be considered during implementation. Previous reports also showed a knowledge gap regarding the factors influencing acceptance of telemedicine.

Objective: This study aimed to introduce and investigate an adjusted model based on the Technology Acceptance Model (TAM) to assess the influence of user dimensions, technological aspects, and socio-cultural elements on the intention to adopt telemedicine services.

Methods: The hypothesized relationships between latent variables were examined through Structural Equation Modeling (SEM). Data analysis was carried out using Partial Least Squares-Structural Equation Modeling (PLS-SEM).

Results: Self-efficacy (β=-0.272, P=0.013), perceived usefulness (β=0.355, P=0.000), facilitating conditions (β=0.425, P=0.000), and cultural factors (β=0.421, P=0.001) were found to exert a significant influence on the intention to adopt telemedicine services. Meanwhile, trust, perceived ease of use, and social influence had no significant influence.

Conclusion: This study emphasized the significance of comprehending the factors influencing the adoption of telemedicine services. In addition, the results showed that the extended TAM was applicable in assessing acceptance of telemedicine services.

Keywords: acceptance, telemedicine, TAM, SEM, intention to use
Title: Patients’ Acceptance of Telemedicine Technology: The Influence of User Behavior and Socio-Cultural Dimensions
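To illustrate how a structural path coefficient (β) and its significance are assessed, the sketch below estimates a single standardized path and bootstraps it, in the spirit of PLS-SEM's resampling tests. The data and the one-path model are invented; a real PLS-SEM analysis estimates measurement and structural models jointly with dedicated software.

```python
# Minimal single-path sketch of PLS-SEM-style significance testing.
# Hypothetical standardized scores: facilitating conditions (x) -> intention (y).
import random
random.seed(42)

x = [random.gauss(0, 1) for _ in range(200)]
y = [0.4 * xi + random.gauss(0, 1) for xi in x]   # true path around 0.4

def path_coef(xs, ys):
    """Standardized coefficient for a single path (here: Pearson correlation)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

beta = path_coef(x, y)

# Bootstrap resampling to approximate the coefficient's sampling distribution
boots = []
for _ in range(500):
    idx = [random.randrange(len(x)) for _ in range(len(x))]
    boots.append(path_coef([x[i] for i in idx], [y[i] for i in idx]))
p_like = sum(1 for b in boots if b <= 0) / len(boots)  # one-sided P(beta <= 0)
print(round(beta, 3), p_like)
```

A path is reported as significant when the bootstrap distribution lies clearly away from zero, which is how results such as β=0.425, P=0.000 arise.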
Pub Date: 2024-02-28 | DOI: 10.20473/jisebi.10.1.126-138
Winanti Winanti, Erick Fernando
Background: The widespread use of financial technology (FinTech) is popular across various fields, particularly in online transportation. However, this usage has led to an increase in illegal FinTech, causing significant problems for the public. Issues related to account security, such as hacks leading to the loss of user balances and misuse of data, contribute to the erosion of brand image and public trust. Despite the growing prominence of FinTech, explorations of its application in the context of online transportation remain limited. Previous studies have not discussed the impact of brand image on perceived usefulness and ease of use. Therefore, this study explores the importance of combining brand image and trust factors to increase user intention. This is achieved by investigating brand image and trust as crucial factors influencing perceived ease and benefits during the integration of FinTech in online transportation services.

Objective: This study aimed to measure the impact of brand image and trust factors on the adoption of FinTech in online transportation.

Methods: The investigation was carried out with a quantitative approach using Partial Least Squares-Structural Equation Modeling (PLS-SEM). It focused on understanding FinTech services in online transportation, incorporating factors such as trust, brand image, perceived ease of use, perceived usefulness, and user intention. Data were collected using purposive sampling through online questionnaire distribution. PLS-SEM was applied to analyze variable relationships, hypotheses, and models.

Results: The results showed that trust, perceived ease of use, and perceived usefulness significantly influenced the willingness to use FinTech in online transportation services. However, brand image did not impact user intentions.

Conclusion: This study highlighted a critical aspect of understanding the value of FinTech services by explaining the importance of establishing trust and building a good brand image as precursors. These factors indirectly contributed to increased perceived benefits and ease of use. Therefore, the insights offer valuable input for companies aiming to develop trusted FinTech platforms with a positive product image.

Keywords: Brand Image, Trust, FinTech, Online Transportation
Title: The Role of Brand Image and Trust in the Adoption of FinTech Digital Payment for Online Transportation
Pub Date: 2024-02-28 | DOI: 10.20473/jisebi.10.1.94-109
Moh. Zulkifli Katili, Yeni Herdiyeni, M. Hardhienata
Background: Insect vectors spread 80% of plant viruses, causing major agricultural production losses. Direct identification of insect vectors is difficult due to their wide range of hosts, limited detection methods, and the high cost and expertise that PCR requires. Currently, a biodiversity database named Global Biotic Interactions (GloBI) provides an opportunity to identify virus vectors using its data.

Objective: This study aims to build an insect vector search engine that can construct a virus-insect-plant interaction knowledge graph, identify insect vectors using network analysis, and extend knowledge about the identified insect vectors.

Methods: We leverage GloBI data to construct a graph that shows the complex relationships between insects, viruses, and plants. We identify insect vectors using interaction analysis and taxonomy analysis, then combine them into a final score. In the interaction analysis, we propose Targeted Node Centric-Degree Centrality (TNC-DC), which finds insects with many direct and indirect connections to the virus. Finally, we integrate Wikidata, DBpedia, and NCBIOntology to provide comprehensive information about insect vectors in the knowledge extension stage.

Results: The interaction graph for each test virus was created. At the test stage, interaction and taxonomic analysis achieved 0.80 precision. TNC-DC succeeded in overcoming the failure of the original degree centrality, which always placed bees in the prediction results. During the knowledge extension stage, we succeeded in finding the natural enemy of Bemisia tabaci (an insect vector of Pepper Yellow Leaf Curl Virus). Furthermore, an insect vector search engine was developed. The search engine provides network analysis insights, insect vector common names, photos, descriptions, natural enemies, other species, and relevant publications about the predicted insect vector.

Conclusion: The insect vector search engine correctly identified virus vectors using GloBI data, TNC-DC, and entity embedding. Average precision was 0.80 in precision tests, although for some viruses the correct insect vector appears within the top five ranked predictions rather than first.

Keywords: Knowledge Graph, Network Analysis, Degree Centrality, Entity Embedding, Insect Vector
Title: Leveraging Biotic Interaction Knowledge Graph and Network Analysis to Uncover Insect Vectors of Plant Virus
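The intuition behind TNC-DC — favouring insects with both direct virus links and indirect links through shared hosts, so that highly connected bees no longer dominate — can be sketched on a toy graph. The scoring formula and the interaction data below are illustrative assumptions, not the paper's exact definition or GloBI data:

```python
# Toy virus-insect-plant interaction graph (undirected, as adjacency sets).
# Names and edges are hypothetical, for illustration only.
interactions = {
    "PYLCV": {"Bemisia tabaci", "pepper"},
    "Bemisia tabaci": {"PYLCV", "pepper", "tomato"},
    "Apis mellifera": {"pepper", "tomato"},   # a bee: many plant links, no virus link
    "pepper": {"PYLCV", "Bemisia tabaci", "Apis mellifera"},
    "tomato": {"Bemisia tabaci", "Apis mellifera"},
}

def tnc_dc(graph, virus, insect):
    """Degree-centrality-like score restricted to the target virus's neighbourhood."""
    direct = 1.0 if virus in graph[insect] else 0.0
    shared_hosts = graph[insect] & graph[virus]   # plants linking insect and virus
    return direct + 0.5 * len(shared_hosts)       # weights are assumed, not the paper's

scores = {insect: tnc_dc(interactions, "PYLCV", insect)
          for insect in ("Bemisia tabaci", "Apis mellifera")}
print(scores)
```

Plain degree centrality would reward the bee for its many plant edges; anchoring the score to the target virus node ranks the true vector first.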
Pub Date: 2024-02-28 | DOI: 10.20473/jisebi.10.1.38-50
Jumanto Jumanto, Rofik Rofik, E. Sugiharti, A. Alamsyah, R. Arifudin, Budi Prasetiyo, M. A. Muslim
Background: Parkinson's disease (PD) is a critical neurodegenerative disorder affecting the central nervous system and often causing impaired movement and cognitive function in patients. In addition, its diagnosis in the early stages requires a complex and time-consuming process because existing tests such as electroencephalography or blood examinations lack effectiveness and accuracy. Several studies explored PD prediction using voice, with a specific focus on the development of classification models to enhance accuracy. Most of these studies neglected crucial aspects, including feature extraction and proper parameter tuning, leading to low accuracy.

Objective: This study aims to optimize the performance of voice-based PD prediction through feature extraction, with the goal of reducing data dimensions and improving model computational efficiency. Additionally, appropriate parameters are selected to enhance the model's ability to identify both PD cases and healthy individuals.

Methods: The proposed model used an OpenML dataset comprising voice recordings from 31 individuals, namely 23 PD patients and 8 healthy participants. The experimental process included the initial use of the SVM algorithm, followed by implementing PCA for feature extraction to enhance machine learning accuracy. Subsequently, data balancing with SMOTE was conducted, and GridSearchCV was used to identify the best parameter combination based on the predicted model characteristics.

Results: Evaluation of the proposed model showed an impressive accuracy of 97.44%, sensitivity of 100%, and specificity of 85.71%. This result was achieved with a limited dataset and 10-fold cross-validation tuning, rendering the model sensitive to the training data.

Conclusion: This study successfully enhanced prediction model accuracy through the SVM+PCA+GridSearchCV+CV method. However, future investigations should consider an appropriate number of folds for a small dataset, explore alternative cross-validation methods, and expand the dataset to enhance model generalizability.

Keywords: GridSearchCV, Parkinson's Disease, SVM, PCA, SMOTE, Voice/Speech
Title: Optimizing Support Vector Machine Performance for Parkinson's Disease Diagnosis Using GridSearchCV and PCA-Based Feature Extraction
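The pipeline described above can be approximated with standard scikit-learn components. The sketch below runs on synthetic data shaped roughly like the voice dataset (small and imbalanced); the SMOTE balancing step from the paper is omitted here to keep the sketch dependency-light (it would normally come from the separate imbalanced-learn package), and the parameter grid is an assumption:

```python
# Sketch of an SVM + PCA + GridSearchCV pipeline on synthetic data.
# Dataset shape and hyperparameter grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Imbalanced synthetic stand-in for the voice recordings (healthy vs PD)
X, y = make_classification(n_samples=195, n_features=22,
                           weights=[0.25, 0.75], random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),          # feature extraction / dimension reduction
    ("svm", SVC()),
])

# 10-fold cross-validated grid search over SVM hyperparameters
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10],
                           "svm__kernel": ["rbf", "linear"]}, cv=10)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

As the conclusion notes, with so few samples the fold count itself becomes a tuning decision, since each fold leaves very little held-out data.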
Pub Date: 2024-02-28 | DOI: 10.20473/jisebi.10.1.51-69
Tito Febrian Nugraha, Wahyu Setiawan Wibowo, Venera Genia, Ahmad Fadhil, Y. Ruldeviyani
Background: The exponential data growth emphasises the importance of efficient information flow in organisations, especially in the financial sector. Data quality significantly influences decision-making, necessitating reliable Data Quality Management (DQM) frameworks. Previous studies propose DQM to maintain data quality through regulation, technology, measurement, evaluation, and improvement. Researchers highlight the benefits of high-quality data in private organisations but note the lack of improvement in data utilisation in public organisations. In Indonesia, data accuracy and quality are crucial for financial policies, with frequent reports of data inaccuracies in the Directorate General of Customs and Excise (DJBC), demanding standardised DQM practices. However, prior studies have yet to provide comprehensive and practical solutions to improve DQM practices. This study therefore aims to measure DQM maturity, provide recommendations based on best practices, and formulate a practical strategy for improvement along with indicators tailored to the organisation, a topic that previous research has not explored.

Methods: This study follows a mixed-method approach (a quantitative study followed by a qualitative study) and employs a three-stage methodology. The authors conduct a maturity assessment using the Loshin model through an assisted enumeration involving five key stakeholders, followed by recommendations based on the Data Management Body of Knowledge (DMBOK) and strategy formulation from internal documents and interviews.

Results: The data analysis yielded a DQM maturity score of 3.10, indicating a "defined to managed" level of maturity. Among eight components, only one reaches the Managed level, two are at the Defined level, and the rest remain at the Repeatable level. This study also proposes three strategies to bolster DQM by targeting 49 weak points, to be implemented progressively and sequentially over a three-year period using twelve possible solutions.

Conclusion: The study highlights the importance of efficient data flow, particularly in the financial sector, and suggests DQM for maintaining data quality. DJBC's import DQM level is assessed using Loshin's measurements, revealing areas for improvement through key DMBOK activities. Recommendations include data governance, strategic planning, and sequential DQM implementation. The study concludes by formulating a practical approach to be applied over a three-year span, with ten indicators to measure success.

Keywords: Data Quality Management, Data Quality Maturity Model, Data Quality Strategy, Loshin, DMBOK
Title: A Practical Approach to Enhance Data Quality Management in Government: Case Study of Indonesian Customs and Excise Office
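A maturity score like 3.10 typically comes from aggregating per-component levels on a five-level scale. The sketch below shows that aggregation in the style of Loshin's model; the component names and assigned levels are hypothetical, not the assessment's actual values:

```python
# Toy aggregation of component maturity levels into one DQM maturity score.
# Component names and levels are hypothetical placeholders.
LEVELS = {"Initial": 1, "Repeatable": 2, "Defined": 3, "Managed": 4, "Optimized": 5}

components = {
    "Data quality expectations": "Defined",
    "Data quality protocols": "Repeatable",
    "Governance": "Managed",
    # ...the remaining assessed components would be listed here...
}

# Overall maturity: mean of component levels on the 1-5 scale
overall = sum(LEVELS[level] for level in components.values()) / len(components)
print(round(overall, 2))
```

A fractional result between two level values is what gives rise to a reading such as "defined to managed".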
Pub Date : 2024-02-28DOI: 10.20473/jisebi.10.1.167-178
A. Romadhony, Said Al Faraby, Rita Rismala, U. N. Wisesty, Anditya Arifianto
Background: Publicly available large datasets play an important role in the development of the natural language processing/computational linguistics research field. However, up to now, only a few large Indonesian-language datasets are accessible for research purposes, including sentiment analysis datasets, where sentiment analysis is considered the most popular task. Objective: The objective of this work is to present sentiment analysis on a large Indonesian product review dataset, employing various features and methods. Two tasks have been implemented: classifying reviews into three classes (positive, negative, neutral), and predicting ratings. Methods: Sentiment analysis was conducted on the FDReview dataset, comprising over 700,000 reviews. The analysis treated sentiment as a classification problem, employing the following methods: Multinomial Naïve Bayes (MNB), Support Vector Machine (SVM), LSTM, and BiLSTM. Results: The experimental results indicate that, among the conventional methods, MNB outperformed SVM in rating prediction, whereas SVM performed better in the review classification task. Additionally, the results demonstrate that the BiLSTM method outperformed all other methods in both tasks. Furthermore, this study includes experiments conducted on balanced and unbalanced small-sized sample datasets. Conclusion: Analysis of the experimental results revealed that the deep learning-based method performed better only in the large dataset setting. Results from the small balanced dataset indicate that conventional machine learning methods exhibit competitive performance compared to deep learning approaches. Keywords: Indonesian review dataset, Large dataset, Rating prediction, Sentiment analysis
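The three-class review classification described above can be sketched with a from-scratch multinomial Naive Bayes over bag-of-words counts. This is a minimal illustration of the MNB baseline, not the authors' pipeline; the toy reviews are invented stand-ins, not samples from FDReview:

```python
import math
from collections import Counter

class TinyMultinomialNB:
    """Minimal multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.priors = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.counts[c].update(doc.split())
        self.vocab = {w for counter in self.counts.values() for w in counter}
        self.totals = {c: sum(self.counts[c].values()) for c in self.classes}
        return self

    def predict(self, doc):
        def log_posterior(c):
            lp = self.priors[c]
            for w in doc.split():
                # add-one smoothing over the shared vocabulary
                lp += math.log((self.counts[c][w] + 1) / (self.totals[c] + len(self.vocab)))
            return lp
        return max(self.classes, key=log_posterior)

# Invented toy reviews (Indonesian-flavoured stand-ins, not FDReview data)
docs = [
    "produk bagus sekali", "pengiriman cepat bagus",   # positive
    "barang rusak kecewa", "kualitas jelek kecewa",    # negative
    "barang biasa saja", "produk standar biasa",       # neutral
]
labels = ["positive", "positive", "negative", "negative", "neutral", "neutral"]

clf = TinyMultinomialNB().fit(docs, labels)
print(clf.predict("produk bagus"), clf.predict("barang rusak"))  # positive negative
```

In practice, a library implementation such as scikit-learn's MultinomialNB over TF-IDF features would play the same baseline role against the SVM and LSTM/BiLSTM models.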
{"title":"Sentiment Analysis on a Large Indonesian Product Review Dataset","authors":"A. Romadhony, Said Al Faraby, Rita Rismala, U. N. Wisesty, Anditya Arifianto","doi":"10.20473/jisebi.10.1.167-178","DOIUrl":"https://doi.org/10.20473/jisebi.10.1.167-178","url":null,"abstract":"Background: The publicly available large dataset plays an important role in the development of the natural language processing/computational linguistic research field. However, up to now, there are only a few large Indonesian language datasets accessible for research purposes, including sentiment analysis datasets, where sentiment analysis is considered the most popular task.\u0000Objective: The objective of this work is to present sentiment analysis on a large Indonesian product review dataset, employing various features and methods. Two tasks have been implemented: classifying reviews into three classes (positive, negative, neutral), and predicting ratings.\u0000Methods: Sentiment analysis was conducted on the FDReview dataset, comprising over 700,000 reviews. The analysis treated sentiment as a classification problem, employing the following methods: Multinomial Naïve Bayes (MNB), Support Vector Machine (SVM), LSTM, and BiLSTM.\u0000Result: The experimental results indicate that in the comparison of performance using conventional methods, MNB outperformed SVM in rating prediction, whereas SVM exhibited better performance in the review classification task. Additionally, the results demonstrate that the BiLSTM method outperformed all other methods in both tasks. Furthermore, this study includes experiments conducted on balanced and unbalanced small-sized sample datasets.\u0000Conclusion: Analysis of the experimental results revealed that the deep learning-based method performed better only in the large dataset setting. 
Results from the small balanced dataset indicate that conventional machine learning methods exhibit competitive performance compared to deep learning approaches.\u0000 \u0000Keywords: Indonesian review dataset, Large dataset, Rating prediction, Sentiment analysis","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"242 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140417748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-28DOI: 10.20473/jisebi.10.1.110-125
Ersalina Trisnawati, I. K. Raharjana, Taufik Taufik, A. Basori, A. B. F. Mansur, Nouf Alghanmi
Background: In Agile software development, an essential initial stage is eliciting software requirements. This process engages stakeholders to achieve comprehensive results. However, a common issue is the variance in domain and technical knowledge among stakeholders, potentially impacting the quality of software requirements elicitation. Objective: Understanding the characteristics of user stories produced by stakeholders becomes crucial, particularly considering the differences in domain and technical knowledge. This study aims to compare the characteristics of user stories generated by stakeholders with varying backgrounds in domain and technical expertise. Methods: The initial step involves categorizing respondents into distinct stakeholder groups. Three stakeholders are involved in this study, constituting a combination of those with high and low technical and domain knowledge. Subsequently, data collection of user stories is conducted across various case studies. Finally, the acquired user stories are analyzed for further insights. Results: The analysis reveals variations in user stories generated by the three stakeholder categories across the three case studies. Stakeholders with domain knowledge tend to focus on 'what' aspects with task elements and 'why' aspects with hard-goal elements. Meanwhile, stakeholders with technical knowledge craft user stories with capability elements in the 'what' aspect. Utilizing the QUS framework, it is evident that stakeholders with technical knowledge consistently produce a higher number of high-quality user stories across all quality categories. Conclusion: The contribution offered by this study lies in determining the distinct characteristics of user stories produced by different types of stakeholders, focusing on disparities in domain and technical knowledge. 
The study highlights the comparison of various characteristics of user story elements, such as hard-goals, soft-goals, tasks, or capabilities, and assesses the quality of user stories based on the user story framework. Additionally, it endorses the importance of process innovation in shaping the requirements gathering process and subsequently influencing the quality of user stories. Keywords: User story, Agile Software Development, Requirements Elicitation, Stakeholder, Domain Knowledge, Process Innovation
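As an illustration of the kind of well-formedness check a user-story quality framework performs, the sketch below parses the role, 'what', and optional 'why' parts of a Connextra-style story. The pattern and field rules are assumptions inspired by, not an implementation of, the QUS framework:

```python
import re

# Connextra template: "As a <role>, I want <what>, so that <why>."
# The 'why' (hard-goal) part is treated as optional here (an assumption).
STORY_RE = re.compile(
    r"^As an? (?P<role>[^,]+), I want (?P<what>[^,]+?)"
    r"(?:, so that (?P<why>.+?))?\.?$",
    re.IGNORECASE,
)

def check_story(story):
    """Return the role/'what'/'why' parts of a Connextra-style user story."""
    m = STORY_RE.match(story.strip())
    if not m:
        return {"well_formed": False}
    return {
        "well_formed": True,
        "role": m.group("role"),
        "what": m.group("what"),
        "has_why": m.group("why") is not None,
    }

print(check_story("As a customs officer, I want to export monthly reports, "
                  "so that audits run faster."))
```

A check like this distinguishes stories that carry a 'why' (hard-goal) element from those that state only a capability or task in the 'what' slot.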
{"title":"Analyzing Variances in User Story Characteristics: A Comparative Study of Stakeholders with Diverse Domain and Technical Knowledge in Software Requirements Elicitation","authors":"Ersalina Trisnawati, I. K. Raharjana, Taufik Taufik, A. Basori, A. B. F. Mansur, Nouf Alghanmi","doi":"10.20473/jisebi.10.1.110-125","DOIUrl":"https://doi.org/10.20473/jisebi.10.1.110-125","url":null,"abstract":"Background: In Agile software development, an essential initial stage is eliciting software requirements. This process engages stakeholders to achieve comprehensive results. However, a common issue is the variance in domain and technical knowledge among stakeholders, potentially impacting the quality of software requirements elicitation.\u0000Objective: Understanding the characteristics of user stories produced by stakeholders becomes crucial, particularly considering the differences in domain and technical knowledge. This study aims to compare the characteristics of user stories generated by stakeholders with varying backgrounds in domain and technical expertise.\u0000Methods: The initial step involves categorizing respondents into distinct stakeholder groups. Three stakeholders are involved in this study, constituting a combination of those with high and low technical and domain knowledge. Subsequently, data collection of user stories is conducted across various case studies. Finally, the acquired user stories are analyzed for further insights.\u0000Results: The analysis reveals variations in user stories generated by the three stakeholder categories across the three case studies. Stakeholders with domain knowledge tend to focus on 'what' aspects with task elements and 'why' aspects with hard-goal elements. Meanwhile, technical knowledge crafts user stories with capability elements in the 'what' aspect. 
Utilizing the QUS framework, it is evident that stakeholders with technical knowledge consistently produce a higher number of high-quality user stories across all quality categories.\u0000Conclusion: The contribution offered by this study lies in determining the distinct characteristics of user stories produced by different types of stakeholders, focusing on disparities in domain and technical knowledge. The study highlights the comparison of various characteristics of user story elements, such as hard-goals, soft-goals, tasks, or capabilities, and assesses the quality of user stories based on the user story framework. Additionally, it endorses the importance of process innovation in shaping the requirements gathering process and subsequently influencing the quality of user stories.\u0000 \u0000Keywords: User story, Agile Software Development, Requirements Elicitation, Stakeholder, Domain Knowledge, Process Innovation","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140422708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-11-01DOI: 10.20473/jisebi.9.2.264-275
Reza Fauzan, Daniel Siahaan, Mirotus Solekhah, Vriza Wahyu Saputra, Aditya Eka Bagaskara, Muhammad Ihsan Karimi
Background: Software engineering courses comprise various project types, including simple assignments completed in supervised settings and more complex tasks undertaken independently by students, without the oversight of a constant teacher or lab assistant. The imperative need arises for a comprehensive assessment framework to validate the fulfillment of learning objectives and facilitate the measurement of student outcomes, particularly in computer science and software engineering. This leads to the delineation of an appropriate assessment structure and pattern. Objective: This study aimed to acquire the expertise required for assessing student performance in computer science and software engineering courses. Methods: A comprehensive literature review spanning from 2012 to October 2021 was conducted, resulting in the identification of 20 papers addressing the assessment framework in software engineering and computer science courses. Specific inclusion and exclusion criteria were meticulously applied in two rounds of assessment to identify the most pertinent studies for this investigation. Results: The results showed multiple methods for assessing software engineering and computer science courses, including the Assessment Matrix, Automatic Assessment, CDIO, Cooperative Thinking, formative and summative assessment, Game, Generative Learning Robot, NIMSAD, SECAT, Self-assessment and Peer-assessment, SonarQube Tools, WRENCH, and SEP-CyLE. Conclusion: The evaluation framework for software engineering and computer science courses required further refinement, ultimately leading to the selection of the most suitable technique, known as the learning framework. Keywords: Computer science course, Software engineering course, Student assessment, Systematic literature review
{"title":"A Systematic Literature Review of Student Assessment Framework in Software Engineering Courses","authors":"Reza Fauzan, Daniel Siahaan, Mirotus Solekhah, Vriza Wahyu Saputra, Aditya Eka Bagaskara, Muhammad Ihsan Karimi","doi":"10.20473/jisebi.9.2.264-275","DOIUrl":"https://doi.org/10.20473/jisebi.9.2.264-275","url":null,"abstract":"Background: Software engineering are courses comprising various project types, including simple assignments completed in supervised settings and more complex tasks undertaken independently by students, without the oversight of a constant teacher or lab assistant. The imperative need arises for a comprehensive assessment framework to validate the fulfillment of learning objectives and facilitate the measurement of student outcomes, particularly in computer science and software engineering. This leads to the delineation of an appropriate assessment structure and pattern. Objective: This study aimed to acquire the expertise required for assessing student performance in computer science and software engineering courses. Methods: A comprehensive literature review spanning from 2012 to October 2021 was conducted, resulting in the identification of 20 papers addressing the assessment framework in software engineering and computer science courses. Specific inclusion and exclusion criteria were meticulously applied in two rounds of assessment to identify the most pertinent studies for this investigation. Results: The results showed multiple methods for assessing software engineering and computer science courses, including the Assessment Matrix, Automatic Assessment, CDIO, Cooperative Thinking, formative and summative assessment, Game, Generative Learning Robot, NIMSAD, SECAT, Self-assessment and Peer-assessment, SonarQube Tools, WRENCH, and SEP-CyLE. 
Conclusion: The evaluation framework for software engineering and computer science courses required further refinement, ultimately leading to the selection of the most suitable technique, known as the learning framework. Keywords: Computer science course, Software engineering course, Student assessment, Systematic literature review","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"96 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135510237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-11-01DOI: 10.20473/jisebi.9.2.161-180
Aschalew Arega, Durga Prasad Sharma
Background: The healthcare sector has been facing multilateral challenges regarding the quality of services and access to healthcare innovations. As the population grows, the sector requires faster and more reliable services, but the opposite is true in developing countries. As a robust technology, cloud computing has numerous features and benefits that are still to be explored. The intervention of the latest technologies in healthcare is crucial to shifting toward next-generation healthcare systems. In developing countries like Ethiopia, cloud features are still far from being systematically explored to design smart and green healthcare services. Objective: To identify contextualized research gaps in existing studies on the smart and green features of cloud computing in healthcare information services. Methods: We conducted a systematic review of research publications indexed in Scopus, Web of Science, IEEE Xplore, PubMed, and ProQuest. A total of 52 research articles were screened based on significant selection criteria and systematically reviewed. Extensive efforts have been made to rigorously review recent, contemporary, and relevant research articles. Results: This study presented a summary of parameters, proposed solutions from the reviewed articles, and identified research gaps. These identified research gaps are related to security and privacy concerns, data repository standardization, data shareability, self-health data access control, service collaboration, energy efficiency/greenness, consolidation of health data repositories, carbon footprint, and performance evaluation. Conclusion: The paper consolidated research gaps from multiple research investigations into a single paper, allowing researchers to develop innovative solutions for improving healthcare services. Based on a rigorous analysis of the literature, the existing systems overlooked green computing features and were highly vulnerable to security violations. 
Several studies reveal that security and privacy threats have been seriously hampering the exponential growth of cloud computing. 54 percent of the reviewed articles focused on security and privacy concerns. Keywords: Cloud computing, Consolidation, Green computing, Green features, Healthcare services, Systematic literature review.
{"title":"Towards Smart and Green Features of Cloud Computing in Healthcare Services: A Systematic Literature Review","authors":"Aschalew Arega, Durga Prasad Sharma","doi":"10.20473/jisebi.9.2.161-180","DOIUrl":"https://doi.org/10.20473/jisebi.9.2.161-180","url":null,"abstract":"Background: The healthcare sector has been facing multilateral challenges regarding the quality of services and access to healthcare innovations. As the population grows, the sector requires faster and more reliable services, but the opposite is true in developing countries. As a robust technology, cloud computing has numerous features and benefits that are still to be explored. The intervention of the latest technologies in healthcare is crucial to shifting toward next-generation healthcare systems. In developing countries like Ethiopia, cloud features are still far from being systematically explored to design smart and green healthcare services. Objective: To excavate contextualized research gaps in the existing studies towards smart and green features of cloud computing in healthcare information services. Methods: We conducted a systematic review of research publications indexed in Scopus, Web of Science, IEEE Xplore, PubMed, and ProQuest. 52 research articles were screened based on significant selection criteria and systematically reviewed. Extensive efforts have been made to rigorously review recent, contemporary, and relevant research articles. Results: This study presented a summary of parameters, proposed solutions from the reviewed articles, and identified research gaps. These identified research gaps are related to security and privacy concerns, data repository standardization, data shareability, self-health data access control, service collaboration, energy efficiency/greenness, consolidation of health data repositories, carbon footprint, and performance evaluation. 
Conclusion: The paper consolidated research gaps from multiple research investigations into a single paper, allowing researchers to develop innovative solutions for improving healthcare services. Based on a rigorous analysis of the literature, the existing systems overlooked green computing features and were highly vulnerable to security violations. Several studies reveal that security and privacy threats have been seriously hampering the exponential growth of cloud computing. 54 percent of the reviewed articles focused on security and privacy concerns. Keywords: Cloud computing, Consolidation, Green computing, Green features, Healthcare services, Systematic literature review.","PeriodicalId":16185,"journal":{"name":"Journal of Information Systems Engineering and Business Intelligence","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135510240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}