Interaction perspective in mobile banking adoption: The role of usability and compatibility
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285878
H. M. Sitorus, R. Govindaraju, I. Wiratmadja, I. Sudirman
Mobile banking is one of the latest electronic banking channels, providing financial services through information and communication technologies. Although it offers numerous benefits, many Indonesian banks face the problem of low mobile banking adoption. A study of what makes customers fully accept mobile banking can help banks develop effective strategies to address this problem. This study examines mobile banking adoption from an interaction perspective. Its purpose is to investigate the interaction between the individual and the technology, specifically the role of usability and compatibility in mobile banking adoption. Based on a review of the literature on technology adoption, mobile banking adoption, usability, and compatibility, a research model is proposed. Five constructs are examined: satisfaction, perceived usefulness, perceived ease of use, perceived learnability, and compatibility; the relationships among these constructs and their effects on the intention to continue using mobile banking are tested. The results indicate that the intention to continue using mobile banking is significantly determined by compatibility and satisfaction. They also show that perceived ease of use and perceived learnability are distinct constructs that play different roles in explaining satisfaction.
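As a rough illustration of how such structural relations could be tested, the sketch below fits the two regressions implied by the model with ordinary least squares. The file and column names are hypothetical, and plain OLS on averaged construct scores stands in for whatever estimation method (e.g., SEM or PLS) the paper actually used.

```python
# Minimal sketch, assuming survey items have already been averaged into
# construct scores stored in a CSV with the hypothetical columns below.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_constructs.csv")  # hypothetical file of construct scores

# Satisfaction explained by perceived usefulness, ease of use, and learnability.
sat_model = smf.ols(
    "satisfaction ~ usefulness + ease_of_use + learnability", data=df
).fit()

# Continuance intention explained by satisfaction and compatibility.
int_model = smf.ols("intention ~ satisfaction + compatibility", data=df).fit()

print(sat_model.summary())
print(int_model.summary())
```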
{"title":"Interaction perspective in mobile banking adoption: The role of usability and compatibility","authors":"H. M. Sitorus, R. Govindaraju, I. Wiratmadja, I. Sudirman","doi":"10.1109/ICODSE.2017.8285878","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285878","url":null,"abstract":"Mobile banking is one of the latest electronic banking channels that provide financial services through information and communication technologies. Although it offers numerous benefits, many Indonesian banks face problem of low mobile banking adoption. A study on what makes customer fully accept mobile banking can help banks develop effective strategies to answer this problem. This study examines mobile banking adoption from an interaction perspective. The purpose of this study is to investigate the interaction between individual and technology, specifically the role of usability and compatibility on mobile banking adoption. Based on literature study on technology adoption, mobile banking adoption, usability and compatibility literatures, a research model is proposed. There are 5 constructs examined, i.e. satisfaction, perceived usefulness, perceived ease of use, perceived learnability and compatibility; the relationship of the constructs and their effects on intention to continue using mobile banking are examined. The results indicate that intention to continue using mobile banking is significantly determined by compatibility and satisfaction. The results also show perceived ease of use and perceived learnability are different constructs and have different roles on explaining satisfaction.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126149106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FVEC-SVM for opinion mining on Indonesian comments of YouTube video
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285860
Ekki Rinaldi, Aina Musdholifah
The Support Vector Machine (SVM) has long been used for opinion mining on social media websites, including YouTube, the world's most popular video-sharing platform. However, both the preprocessing approach and the choice of kernel function in SVM require care, since an appropriate kernel must be selected to obtain high accuracy. This research therefore proposes the FVEC approach for preprocessing and searches for the most accurate kernel function for opinion mining on Indonesian-language comments on YouTube videos. Four kernel functions are investigated: linear, polynomial of degree 2, polynomial of degree 3, and RBF. The experiment uses 13,638 Indonesian comments on YouTube videos reviewing smartphone products of various brands. The comments may contain sentiments that refer to how the video is delivered or to the product itself, or may be irrelevant to both, so this study classifies comments into seven classes. The experimental results show that FVEC-SVM with a linear kernel outperforms the other kernels in accuracy, achieving 62.76%.
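The kernel comparison itself is straightforward to reproduce with scikit-learn, as sketched below. TF-IDF stands in for the paper's FVEC preprocessing, which the abstract does not specify, and the file and column names are hypothetical.

```python
# Sketch comparing the four kernels on labeled comments with scikit-learn.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

data = pd.read_csv("youtube_comments.csv")       # hypothetical: columns text, label
X = TfidfVectorizer().fit_transform(data["text"])
y = data["label"]                                # seven sentiment/relevance classes

kernels = {
    "linear": SVC(kernel="linear"),
    "poly (degree 2)": SVC(kernel="poly", degree=2),
    "poly (degree 3)": SVC(kernel="poly", degree=3),
    "rbf": SVC(kernel="rbf"),
}
for name, clf in kernels.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy {acc:.4f}")
```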
{"title":"FVEC-SVM for opinion mining on Indonesian comments of youtube video","authors":"Ekki Rinaldi, Aina Musdholifah","doi":"10.1109/ICODSE.2017.8285860","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285860","url":null,"abstract":"Support Vector Machine (SVM) has long been used in opinion mining social media website including YouTube, the most popular video sharing based media social in the world. However, the preprocessing approach and use of kernel functions in SVM requires precision in the selection of appropriate kernel functions in order to get high accuracy. Thus, this research focuses on proposing FVEC approach for preprocessing and finding the best kernel function in term of accuracy, for opinion mining on Indonesian comments of YouTube video. Four types of kernel functions have been investigated, namely linear, poly degree 2, poly degree 3, and RBF. The experiment uses 13,638 Indonesian comments of YouTube videos that review about smartphone products of various brands. The comments can contain sentiments that refer to how the video is delivered or the product itself, or even irrelevant to both, so this study classifies comments into seven classes. From the experimental result show that FVEC-SVM using linear kernel function is outperformed than others on accuracy term, i.e. 62.76%.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131502850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rapid data stream application development framework
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285865
Wilhelmus Andrian Tanujaya, Muhammad Z. C. Candra, Saiful Akbar
Developers of data stream processing applications have to write a lot of code even for simple functionality and, to make things worse, tend to rewrite that code when developing different applications. They also need to recompile the code even for simple changes. In this paper, we present a configurable data stream application framework that helps developers build data stream applications by reducing the amount of code they must write. In this framework, we introduce a Domain Specific Language (DSL) for defining and configuring the data stream application. The framework provides many basic stream processing functionalities, such as passing data from a data source to processing classes, filtering, windowing the data, and sending the data to data collectors. Most configurations related to these functionalities can be changed easily through the DSL without recompiling the code. The framework was tested in two case studies; for each, we developed data stream applications with and without the framework. The case studies show increased productivity in terms of the number of lines of code and the number of files written for each application.
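To make the configuration-driven idea concrete, the sketch below runs a tiny pipeline whose filter, window size, and sink are read from a configuration object instead of being hard-coded. The paper's actual DSL syntax is not shown in the abstract, so this dict-based configuration is purely hypothetical.

```python
# Minimal sketch of a config-driven stream pipeline: filter -> window -> sink.
from typing import Iterable

config = {
    "filter": lambda record: record["value"] >= 0,  # drop negative readings
    "window_size": 3,                               # tumbling window of 3 records
    "sink": print,                                  # stand-in for a data collector
}

def run_pipeline(source: Iterable[dict], cfg: dict) -> None:
    window = []
    for record in filter(cfg["filter"], source):
        window.append(record)
        if len(window) == cfg["window_size"]:
            cfg["sink"](window)   # emit the completed window to the collector
            window = []

run_pipeline([{"value": v} for v in [1, -2, 3, 4, 5, 6]], config)
```

Changing the filter predicate, window size, or sink only touches the configuration, which mirrors the framework's goal of reconfiguration without recompilation.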
{"title":"Rapid data stream application development framework","authors":"Wilhelmus Andrian Tanujaya, Muhammad Z. C. Candra, Saiful Akbar","doi":"10.1109/ICODSE.2017.8285865","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285865","url":null,"abstract":"Developers of data stream processing application have to write a lot of codes even for a simple functionality, and, to make it worse, tend to rewrite their codes when developing different applications. These developers also need to recompile the code even for simple changes. In this paper, we present a configurable data stream application framework, which will help developers in developing data stream applications by reducing the amount of codes written to develop a data stream processing application. In this framework, we introduce a Domain Specific Language (DSL) for defining and configuring the data stream application. Our framework provides many basic stream processing functionalities, such as passing the data from data source to processing classes, filtering, windowing the data, as well as sending the data to data collectors. Most configurations related to these functionalities can be easily changed using the DSL without the need to recompile the code. The framework was tested using two case studies, where for each of them we developed data stream applications with and without the framework. The case studies show the increased productivity in terms of the number of lines of code and the number of files written for each respective application.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115423842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of optimal path finding techniques for minimal diagnosis in mapping repair
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285853
Inne Gartina Husein, Saiful Akbar, B. Sitohang, F. N. Azizah
Ontology matching produces a set of semantic correspondences called an alignment. The issue of incoherent alignments has concerned many researchers since 2010, because almost all matching systems produce incoherent alignments of ontologies. Mapping repair is a way to quantify the quality of an alignment based on the definition of mapping incoherence: the internal properties of a mapping are measured against the semantics of the ontologies being matched. The repair process should restore coherence by removing as few unwanted mappings as possible. This is called minimal diagnosis, minimal both in the number of removed mappings and in the confidence values of the removed mappings. This paper compares optimal path finding techniques that support minimal diagnosis. Experiments were conducted using the conference track ontologies. The results show that A* Search produced the highest precision, recall, and f-measure values, followed by Greedy Search. Both techniques compute the lowest-cost path using a heuristic. This outcome is also due to a logic algorithm that effectively supports minimal diagnosis.
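The search formulation can be sketched as follows: states are sets of removed mappings, path cost is the summed confidence of the removals, and the goal test is alignment coherence. The coherence check below is a stub lambda; in the paper it would be performed by a description-logic reasoner, and the heuristic is set to zero here, so the sketch degenerates to uniform-cost search rather than full A*.

```python
# Minimal sketch of best-first search for a minimal diagnosis.
import heapq
import itertools

def minimal_diagnosis(mappings, is_coherent):
    """mappings: list of (mapping_id, confidence); is_coherent: frozenset -> bool."""
    tie = itertools.count()                       # tie-breaker for equal costs
    start = frozenset()
    frontier = [(0.0, next(tie), start)]          # (cost so far, tie, removed set)
    seen = {start}
    while frontier:
        cost, _, removed = heapq.heappop(frontier)
        if is_coherent(removed):                  # goal: alignment is coherent again
            return removed, cost
        for mapping_id, confidence in mappings:
            if mapping_id not in removed:
                child = removed | {mapping_id}
                if child not in seen:
                    seen.add(child)
                    # Heuristic is 0 here; a domain-specific lower bound on the
                    # remaining removal cost would make this true A*.
                    heapq.heappush(frontier, (cost + confidence, next(tie), child))
    return None, float("inf")

# Toy run: the alignment becomes coherent once mapping "m2" is removed.
print(minimal_diagnosis(
    [("m1", 0.9), ("m2", 0.4), ("m3", 0.7)],
    is_coherent=lambda removed: "m2" in removed,
))  # -> (frozenset({'m2'}), 0.4)
```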
{"title":"Comparison of optimal path finding techniques for minimal diagnosis in mapping repair","authors":"Inne Gartina Husein, Saiful Akbar, B. Sitohang, F. N. Azizah","doi":"10.1109/ICODSE.2017.8285853","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285853","url":null,"abstract":"Ontology matching produce a set of semantic correspondences called alignment. The issue of incoherent alignment has been the concern of many researcher since 2010, since almost all matching systems produce incoherent alignments of ontologies. Mapping repair process is a way to quantify the quality of alignment based on the definition of mapping incoherence. Internal properties of mapping will be measured by semantic of the ontologies being matched. Mapping repair process should restore coherence condition by removing as less as possible unwanted mappings. This is call minimal diagnosis. Minimal on the amount of removed mapping and small confidence value of removed mapping. This paper compares optimal path finding techniques that support minimal diagnosis. Some experiments conducted using conference track ontology. Experiment result showed that A∗ Search produced the greatest precision, recall and f-measure values, followed by Greedy Search. Both techniques computed the lowest cost path by using heuristic. This condition was also due to logic algorithm that effective to support minimal diagnosis.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"20 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113956447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing clustering quality of fuzzy geographically weighted clustering using Ant Colony optimization
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285858
A. Wijayanto, Siti Mariyah, A. Purwarianti
Fuzzy Geographically Weighted Clustering (FGWC) is recognized as one of the most efficient methods for geo-demographic analysis. FGWC uses a neighborhood effect to remedy the limitations of classical fuzzy clustering methods with respect to geographic factors. However, FGWC has drawbacks that need to be overcome, such as sensitivity to the cluster initialization phase. In this paper, a new hybrid approach combining FGWC with Ant Colony Optimization (ACO), named FGWC-ACO, is proposed, in which initialization is performed more appropriately. In experimental simulations, the proposed method clearly outperforms standard FGWC and offers better geo-demographic clustering quality.
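For context, the step that distinguishes FGWC from plain fuzzy c-means is the neighborhood effect: each area's membership vector is blended with a spatially weighted average of its neighbors' memberships. The sketch below shows that update as commonly formulated; the blending weights alpha/beta and the spatial weight matrix W are illustrative assumptions, and the ACO initialization proposed in the paper is not reproduced here.

```python
# Sketch of the FGWC neighborhood-effect membership update (numpy).
import numpy as np

def neighborhood_update(U, W, alpha=0.7, beta=0.3):
    """U: (n_areas, n_clusters) memberships; W: (n_areas, n_areas) spatial weights."""
    neighbor_avg = W @ U / W.sum(axis=1, keepdims=True)   # weighted neighbor memberships
    U_new = alpha * U + beta * neighbor_avg               # blend own and neighborhood terms
    return U_new / U_new.sum(axis=1, keepdims=True)       # renormalize rows to sum to 1

U = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])        # toy memberships for 3 areas
W = np.array([[0.0, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.0]])
print(neighborhood_update(U, W))
```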
{"title":"Enhancing clustering quality of fuzzy geographically weighted clustering using Ant Colony optimization","authors":"A. Wijayanto, Siti Mariyah, A. Purwarianti","doi":"10.1109/ICODSE.2017.8285858","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285858","url":null,"abstract":"Fuzzy Geographically Weighted Clustering (FGWC) is recognized as one of the most efficient methods for geo-demographic analysis problem. FGWC uses neighborhood effect to remedy the limitation of classical fuzzy clustering methods in terms of geographic factors. However, there are some drawbacks of FGWC such as sensitivity to cluster initialization phase that is required to overcome. In this paper a new hybrid approach of FGWC based on Ant Colony Optimization (ACO), namely FGWC-ACO is proposed in which the initialization is performed better and in an appropriate manner. Based on the experimental simulation, the proposed method clearly outperforms the standard FGWC and offers a better geo-demographic clustering quality.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127493845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cells identification of acute myeloid leukemia AML M0 and AML M1 using K-nearest neighbour based on morphological images
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285851
Esti Suryani Wiharto, Sarngadi Palgunadi, Yudha Rizki Putra
Acute Myeloid Leukemia (AML) is a type of leukemia characterised by myeloid-series cell differentiation that stops at the blast stage, causing an accumulation of blast cells in the bone marrow. This study aims to identify leukemia, specifically AML M0 and AML M1, based on the morphology of white blood cell (WBC) images using image processing. The steps performed are median filtering, YCbCr colour conversion, thresholding, and morphological opening, followed by a k-Nearest Neighbors classifier that assigns cell types from the extracted features. A mean-difference test on each feature across cell types indicated a significant difference in WBC diameter, while the nucleus ratio showed no significant difference. Combining WBC diameter and nucleus roundness yielded the highest accuracy, 67.28%, at k = 5 and k = 7; these two features are therefore the most influential for classification. With k = 6, k-Nearest Neighbors classified 59.87% of the 162 data points correctly using all three features: WBC diameter, nucleus roundness, and nucleus ratio.
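The classification step is a standard k-NN run over the extracted morphological features, as sketched below with scikit-learn. The file and column names are hypothetical stand-ins for the paper's feature-extraction output.

```python
# Sketch of k-NN classification on the two most influential features.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

cells = pd.read_csv("aml_cells.csv")  # hypothetical: wbc_diameter, nucleus_roundness, cell_type
X = cells[["wbc_diameter", "nucleus_roundness"]]
y = cells["cell_type"]

for k in (5, 6, 7):                   # the k values examined in the paper
    knn = KNeighborsClassifier(n_neighbors=k)
    acc = cross_val_score(knn, X, y, cv=5).mean()
    print(f"k={k}: mean accuracy {acc:.4f}")
```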
{"title":"Cells identification of acute myeloid leukemia AML M0 and AML M1 using K-nearest neighbour based on morphological images","authors":"Esti Suryani Wiharto, Sarngadi Palgunadi, Yudha Rizki Putra","doi":"10.1109/ICODSE.2017.8285851","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285851","url":null,"abstract":"Acute Myeloid Leukemia (AML) is a type of leukemia characterised by the occurrence of myeloid series cell differentiation that stops in the blast cells causing the accumulation of blast cells in the bone marrow. This study aims to determine leukemia typically in AML M0 and AML M1 based on the morphology of white blood cell image using image processing method. The steps performed are median filtering, YCbCr colour conversion, thresholding, and opening, and k-Nearest Neighbors classifier to classify cell types from feature extraction results. The result of characteristic extraction was done by mean difference test for each characteristic between cell type indicated that there was a significant difference in WBC diameter characteristic between cell type, while on a characteristic of nucleus ratio showed that there was no significant difference. Based on characteristic testing of each cell, a combination of a characteristic of WBC diameter and nucleus roundabout obtained the highest accuracy when k = 5 and k = 7 is 67,28%. Thus the characteristic of WBC diameter and the nuclear roundabout is the most influential data classification feature. Based on the test results of each cell, if the algorithm k = 6 k-Nearest Neighbors can classify the cell correctly 59.87% of the 162 data used based on the three characteristics each cell is the WBC diameter, the nucleus roundabout and the nucleus ratio.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114147133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semi-automated data publishing tool for advancing the Indonesian open government data maturity level case study: Badan Pusat Statistik Indonesia
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285887
Chairuni Aulia Nusapati, W. Sunindyo
Open Government Data (OGD) refers to data produced or commissioned by the government, or by government-controlled entities, that can be freely used, reused, and redistributed by anyone. In Indonesia, the creation and use of OGD have been supported by the government since 2011. In spite of this, the maturity level of the OGD published on publishing sites is quite low, scoring only one to three stars out of a maximum of five according to the global Five Star Open Data standard. This paper describes a solution to this problem in the form of a semi-automated publishing tool that can be used to advance the current OGD maturity level in Indonesia. Government agencies, represented by their administrators, can use the tool to process their existing data into more mature forms. The tool takes government data as Excel and CSV files, i.e., 2-star and 3-star data respectively, and processes them into 5-star data in various formats. The main framework of the tool extends an existing framework with more detailed steps and is matched to a case study at Badan Pusat Statistik Indonesia (the Indonesian Central Bureau of Statistics). Based on the evaluation, the tool can raise existing data from 2-star and 3-star to the maximum 5-star level under the Five Star Open Data standard. This promising result encourages the authors to develop the tool further in future research, and some possible extensions based on this work are also provided.
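The core of the 3-star-to-5-star step is turning tabular rows into RDF so the data gains URIs and can be linked, which rdflib handles in a few lines. The namespace, file, and column names below are hypothetical; the paper's full framework also covers the surrounding publishing workflow, which is not shown.

```python
# Sketch: convert a 3-star CSV into 5-star-ready RDF triples with rdflib.
import csv
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://data.example.go.id/bps/")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

with open("population.csv", newline="") as f:      # 3-star input: plain CSV
    for row in csv.DictReader(f):                  # columns: region, year, population
        subject = URIRef(EX[f"{row['region']}-{row['year']}"])
        g.add((subject, EX.region, Literal(row["region"])))
        g.add((subject, EX.year, Literal(int(row["year"]))))
        g.add((subject, EX.population, Literal(int(row["population"]))))

g.serialize("population.ttl", format="turtle")     # RDF output with dereferenceable URIs
```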
Two-steps graph-based collaborative filtering using user and item similarities: Case study of E-commerce recommender systems
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285891
Aghny Arisya Putra, Rahmad Mahendra, I. Budi, Q. Munajat
Collaborative filtering has been used extensively in commercial recommender systems because of its effectiveness and ease of implementation. Collaborative filtering predicts a user's preference based on the preferences of similar users, or from items similar to those the user has purchased. Using either user-based or item-based similarity alone is not sufficient. To address this, user-based and item-based approaches are hybridized in one collaborative filtering recommender system to sort relevant items out of a set of candidates. The method applies similarity measures through link prediction, combining user similarity with item similarity to predict the target item. The experimental results show that combining user and item similarities in a two-step collaborative filtering setting improves accuracy compared to algorithms applying only user or only item similarity.
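A minimal version of the two-step idea can be sketched with cosine similarities: first aggregate the preferences of users similar to the target user, then propagate that signal through item-item similarity before ranking unseen items. The toy rating matrix and the exact combination rule below are illustrative assumptions, not the paper's precise link-prediction formulation.

```python
# Sketch of two-step user-then-item collaborative filtering (numpy/scikit-learn).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

R = np.array([[1, 0, 1, 0],            # user-item purchase matrix (users x items)
              [1, 1, 0, 0],
              [0, 1, 1, 1]])

user_sim = cosine_similarity(R)        # step 1: user-user similarity
item_sim = cosine_similarity(R.T)      # step 2: item-item similarity

def score(target_user: int) -> np.ndarray:
    neighbor_pref = user_sim[target_user] @ R   # preferences of similar users
    raw = neighbor_pref @ item_sim              # propagated to similar items
    return np.where(R[target_user] == 0, raw, -np.inf)  # rank unseen items only

print(np.argsort(score(0))[::-1])      # candidate items for user 0, best first
```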
{"title":"Two-steps graph-based collaborative filtering using user and item similarities: Case study of E-commerce recommender systems","authors":"Aghny Arisya Putra, Rahmad Mahendra, I. Budi, Q. Munajat","doi":"10.1109/ICODSE.2017.8285891","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285891","url":null,"abstract":"Collaborative filtering has been used extensively in the commercial recommender system because of its effectiveness and ease of implementation. Collaborative filtering predicts a user's preference based on preferences of similar users or from similar items to items that are purchased by this user. The use of either user-based or item-based similarity is not sufficient. For that particular issues, hybridization of user-based and item-based in one collaborative filtering recommender system can be used to sort relevant item out of a set of candidates. This method applies similarity measures using link prediction to predict target item by combining user similarity with item similarity. The experiment results show that the combination of user and item similarities in two-steps collaborative filtering setting improves accuracy compared to the algorithm applying only user or item similarity.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124015527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scheme mapping for relational database transformation to ontology: A survey
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285866
Paramita Mayadewi, B. Sitohang, F. N. Azizah
Ontology plays an important role in creating semantic data for the Semantic Web. Developing a new ontology model for a knowledge domain is not an easy process, so several existing studies try to obtain knowledge from existing assets, namely relational data models. The fundamental problems in transforming a relational database to an ontology are how to construct the ontology model and how to extract the hidden semantics of the relational model, since the relational model is recognized as less expressive and incapable of supporting some conceptualizations. In this paper, we provide an overview of mapping scheme approaches for transforming relational databases to ontologies, based on a review of several existing studies. The study concludes that the mapping scheme for relational-database-to-ontology transformation should consider all possible combinations of primary and foreign keys in the relational model to produce a rich ontology.
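A common mapping rule in the surveyed approaches is: each table becomes a class and each foreign key becomes an object property between the classes of the referencing and referenced tables. The sketch below shows that rule with rdflib; the schema dict is a hypothetical stand-in for reading a real database catalog.

```python
# Sketch: table -> owl:Class, foreign key -> owl:ObjectProperty (rdflib).
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/onto/")
schema = {
    "tables": ["customer", "order"],
    "foreign_keys": [("order", "customer_id", "customer")],  # (table, column, ref table)
}

g = Graph()
g.bind("owl", OWL)
for table in schema["tables"]:
    g.add((EX[table.capitalize()], RDF.type, OWL.Class))     # table -> owl:Class

for table, column, ref in schema["foreign_keys"]:
    prop = EX[f"has_{ref}"]                                  # FK -> object property
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, EX[table.capitalize()]))
    g.add((prop, RDFS.range, EX[ref.capitalize()]))

print(g.serialize(format="turtle"))
```

Richer mappings (e.g., composite keys, many-to-many join tables collapsing into a single property) follow the same pattern but need the key-combination analysis the survey emphasizes.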
{"title":"Scheme mapping for relational database transformation to ontology: A survey","authors":"Paramita Mayadewi, B. Sitohang, F. N. Azizah","doi":"10.1109/ICODSE.2017.8285866","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285866","url":null,"abstract":"Ontology has an important role in creating semantic data on web semantics. Developing a new ontology model for a knowledge domain is not an easy process. Several existing studies, trying to obtain knowledge from existing assets, that is relational data model. The fundamental problem in the process of transforming relational database to ontology is how to construct an ontology model and extracting hidden semantics from relational model. Relational model are recognized as less expressive and incapable of supporting some conceptualizations. In this paper, we will provide an overview of the mapping scheme approach for transforming relational databases to ontologies based on literature studies from several studies that have been conducted. The results of the study concluded that the mapping scheme process for relational database transformation to ontology should consider all possible combinations of primary and foreign keys in relational model to produce a rich ontology.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134620680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modelling online assessment in management subjects through educational data mining
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285881
M. Ayub, Hapnes Toba, M. Wijanto, Steven Yong
Educational data mining (EDM) has been widely used to investigate data that come from a learning process, including blended learning. This study explores educational data from a Learning Course Management System (LMS) and academic data in two courses of the Management Study Program, Faculty of Economics, Maranatha Christian University: Change Management (CM) in the undergraduate program and Creative Leadership (CL) in the master's program. The main aim of this research is to provide feedback on the learning process through the LMS in order to improve students' achievement. The EDM methods used are association rule mining and J48 classification. Association rule mining yields two sets of interesting rules for the CM course and three sets for the CL course, and J48 classification produces two pruned trees for each course. Based on these results, some suggestions are proposed to enhance the LMS and to encourage students' involvement in blended learning.
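Both methods are available in standard Python libraries, as sketched below: association rules via mlxtend's apriori, and a decision tree via scikit-learn's CART as a stand-in for Weka's J48 (a C4.5 implementation). The file and column names are hypothetical.

```python
# Sketch of the two EDM methods on one-hot LMS activity data.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from sklearn.tree import DecisionTreeClassifier

lms = pd.read_csv("lms_activity.csv")   # hypothetical: activity flags + final_grade

activities = lms.drop(columns=["final_grade"]).astype(bool)
frequent = apriori(activities, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])

tree = DecisionTreeClassifier(max_depth=4)   # depth limit as a pruned-tree analogue of J48
tree.fit(activities, lms["final_grade"])
```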
{"title":"Modelling online assessment in management subjects through educational data mining","authors":"M. Ayub, Hapnes Toba, M. Wijanto, Steven Yong","doi":"10.1109/ICODSE.2017.8285881","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285881","url":null,"abstract":"Educational data mining(EDM) has been used widely to investigate data that come from a learning process, including blended learning. This study explores educational data from a Learning Course Management System (LMS) and academic data in two courses of Management Study Program, Faculty of Economics at Maranatha Christian University, which are Change Management (CM) in undergraduate program and Creative Leadership (CL) in master degree program as case studies. The main aim of this research is to provide feedback for the learning process through the LMS in order to improve students' achievement. EDM methods used are association rule mining and J48 classification. The results of association rule mining are two sets of interesting rules for the CM course and three sets of rules for CL course. Using J48 classification, two J48 pruned trees are obtained for each course. Based on those results, some suggestions are proposed to enhance the LMS and to encourage students' involvement in blended learning.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"218 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122806743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}