Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872897
Yulistiyan Wardhana, W. Jatmiko, M. F. Rachmadi
The cardiovascular system is the most important part of the human body, serving as the distribution system for oxygen and the body's wastes. More than 60,000 miles of blood vessels take part in this job, and a problem arises if any of them becomes clogged. Unfortunately, clogged blood vessels, and the diseases caused by cardiovascular malfunction, cannot be detected in plain view. We therefore propose the design of a wearable device that can detect these conditions. The device is equipped with a new neural network algorithm, GLVQ-PSO, which can give a recommendation on heart status based on learned data. In our experiments, the algorithm produces better accuracy than LVQ, GLVQ, and FNGLVQ in the high-level-language implementation; however, GLVQ-PSO still performs relatively worse in its FPGA implementation.
Title: "Generalized learning vector quantization particle swarm optimization (GLVQ-PSO) FPGA implementation for real-time electrocardiogram". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
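The abstract does not spell out the GLVQ update rule, but the standard one that GLVQ-PSO builds on (Sato and Yamada's generalized LVQ) can be sketched as follows. The prototype record layout and learning rate are illustrative assumptions, and the PSO stage is omitted:

```python
import math

def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def glvq_update(x, label, prototypes, lr=0.05):
    """One GLVQ learning step: pull the nearest same-class prototype
    toward x and push the nearest different-class prototype away,
    weighted by the derivative of a sigmoid of the relative distance."""
    same = min((p for p in prototypes if p["label"] == label),
               key=lambda p: sq_dist(x, p["w"]))
    diff = min((p for p in prototypes if p["label"] != label),
               key=lambda p: sq_dist(x, p["w"]))
    d1, d2 = sq_dist(x, same["w"]), sq_dist(x, diff["w"])
    mu = (d1 - d2) / (d1 + d2)          # in [-1, 1]; negative = correct
    f = 1.0 / (1.0 + math.exp(-mu))     # sigmoid cost
    g = f * (1.0 - f)                   # its derivative
    denom = (d1 + d2) ** 2
    for i in range(len(x)):
        same["w"][i] += lr * g * (4 * d2 / denom) * (x[i] - same["w"][i])
        diff["w"][i] -= lr * g * (4 * d1 / denom) * (x[i] - diff["w"][i])
    return mu
```

A correctly classified sample yields a negative mu, and each step shrinks the distance to the matching prototype.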
Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872882
Nico Neumann
Leveraging customer data at scale, and often in real time, has led to a new field called programmatic commerce: the use of data, automation, and analytics to improve customer experiences and company performance. In advertising and marketing in particular, programmatic applications have become very popular because they allow personalization and micro-targeting, as well as easier media planning due to the rise of automated buying processes. In this review, we discuss the development of this new field around advertising and marketing technology and summarize current research efforts. In addition, we share industry case studies to illustrate the power of the latest big-data and machine-learning applications for driving business outcomes.
Title: "The power of big data and algorithms for advertising and customer communication". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872881
Yennun Huang, Szu-Chuang Li
Founded in February 2007, the Research Center for Information Technology Innovation (CITI) at Academia Sinica aims to integrate research and development efforts in information technology across organizations within Academia Sinica, and to facilitate and leverage IT-related multidisciplinary research. As an integral part of CITI, the Taiwan Information Security Center (TWISC) conducts security research with funding support from the Ministry of Science and Technology. TWISC serves as a platform for security experts from universities, research institutes, and the private sector to share information and explore opportunities for collaboration. Its aim is to boost research and development activities and to promote public awareness of information security. Its research topics cover data, software, hardware, and network security, as well as security management. TWISC has become the hub of security research in Taiwan and has made a significant impact through its publications and toolkits. Recently, privacy has also become one of TWISC's main focuses: the research team at CITI, Academia Sinica has been working on a viable way to assess the disclosure risk of synthetic datasets. Preliminary research results are presented in this paper.
Title: "Overview of research center for information technology innovation in Taiwan Academia Sinica". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872888
Y. Ruldeviyani, Bofandra Mohammad
Merchant acquiring is the business of acquiring debit card, credit card, and prepaid card transactions at merchants through EDC (electronic payment) terminals. It is one of the top business priority areas at PT. XYZ, in the area of retail payments and deposits, as it increases fee-based income, cheap funds, and high-yield loans. To improve its business performance, PT. XYZ needs the best strategy, and a good strategic decision requires adequate, useful information. Currently, the information provided by reporting staff involves many manual tasks, so the data cannot be delivered quickly and is limited in the complexity it can handle. To solve this problem, PT. XYZ needs a data warehouse for its merchant acquirer business. This research focuses on the design and implementation of the data warehouse solution using the methodology developed by Ralph Kimball. Finally, a data warehouse suitable for PT. XYZ's merchant acquirer needs is developed.
Title: "Design and implementation of merchant acquirer data warehouse at PT. XYZ". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
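To illustrate the Kimball-style dimensional model such a warehouse would use, here is a minimal, hypothetical star schema for EDC card transactions: one fact table keyed to two dimensions. Every table and column name is an assumption for illustration, not PT. XYZ's actual design:

```python
import sqlite3

# Hypothetical star schema in the spirit of Kimball's dimensional modelling:
# a fact table of card transactions joined to merchant and date dimensions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_merchant (
    merchant_key INTEGER PRIMARY KEY,
    merchant_name TEXT,
    city TEXT
);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20161001
    full_date TEXT,
    month INTEGER,
    year INTEGER
);
CREATE TABLE fact_card_transaction (
    merchant_key INTEGER REFERENCES dim_merchant(merchant_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    card_type TEXT,                 -- debit / credit / prepaid
    amount REAL,
    fee REAL
);
""")
conn.execute("INSERT INTO dim_merchant VALUES (1, 'Toko A', 'Jakarta')")
conn.execute("INSERT INTO dim_date VALUES (20161001, '2016-10-01', 10, 2016)")
conn.executemany(
    "INSERT INTO fact_card_transaction VALUES (1, 20161001, ?, ?, ?)",
    [("debit", 100.0, 1.5), ("credit", 250.0, 5.0)])
# Typical analytic query: fee income per city, answered by one star join.
total_fees = conn.execute(
    "SELECT SUM(fee) FROM fact_card_transaction f "
    "JOIN dim_merchant m ON m.merchant_key = f.merchant_key "
    "WHERE m.city = 'Jakarta'").fetchone()[0]
```

The design choice is the usual Kimball trade-off: denormalized dimensions make such reporting queries one join away instead of the manual aggregation the reporting staff currently perform.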
Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872903
G. Jati, Budi Hartadi, A. Putra, Fahri Nurul, M. Iqbal, S. Yazid
Distributed Denial of Service (DDoS) is a kind of attack that uses multiple computers. The attacker acts as a fake service requester that drains the target computer's resources, so the target cannot serve real service requests; we therefore need to develop a DDoS detector system. The proposed system consists of a traffic capturer, a packet analyzer, and a packet displayer, and utilizes Ntopng as the main traffic analyzer. A detector system has to meet good standards of accuracy, sensitivity, and reliability. We evaluate the system using one of the most dangerous DDoS tools, Slowloris. The system can detect attacks and provide alerts to its user, and it can process all incoming packets with a small margin of error (0.76%).
Title: "Design DDoS attack detector using NTOPNG". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
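A Slowloris attack holds many connections open while sending HTTP headers very slowly, so a simple detection heuristic is to flag source IPs with many long-lived connections that never finished their headers. The sketch below illustrates that heuristic only; the connection-record layout and thresholds are assumptions, not Ntopng's actual API:

```python
from collections import defaultdict

def flag_slowloris(connections, now, max_open=50, min_age=30.0):
    """Flag source IPs holding many long-lived connections that never
    completed their HTTP headers, the signature Slowloris leaves in a
    traffic capture. `connections` is an iterable of dicts with 'src',
    'opened_at', and 'headers_done' fields (an assumed record layout)."""
    suspects = defaultdict(int)
    for c in connections:
        if not c["headers_done"] and now - c["opened_at"] >= min_age:
            suspects[c["src"]] += 1
    return {ip for ip, n in suspects.items() if n > max_open}
```

Tuning `max_open` and `min_age` trades sensitivity against false alarms on legitimately slow clients.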
Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872900
M. A. Ma'sum, W. Jatmiko, H. Suhartanto
Indonesia has high mortality caused by cardiovascular diseases. To reduce this mortality, we built a tele-ECG system for early detection and monitoring of heart diseases. In this research, the tele-ECG system was enhanced using the Hadoop framework in order to deal with big data processing. The system was built on a computer cluster with 4 nodes, and the server is able to handle 60 requests at the same time. The system can classify ECG data using a decision tree and a random forest, with accuracies of 97.14% and 98.92%, respectively. The training process is faster in the random forest, while the testing process is faster in the decision tree.
Title: "Enhanced tele ECG system using Hadoop framework to deal with big data processing". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
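The abstract does not detail the classifiers, but the decision-tree versus random-forest contrast can be illustrated with a toy pure-Python version: one-level trees (stumps) plus bootstrap aggregation with majority vote. This is a sketch of the general technique under those simplifications, not the paper's implementation:

```python
import random

def stump_fit(X, y, feats=None):
    """Fit a one-level decision tree: the (feature, threshold) split with
    the lowest classification error on the training set."""
    feats = range(len(X[0])) if feats is None else feats
    best = None
    for f in feats:
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] > t else 0 for row in X]
            raw = sum(p != yi for p, yi in zip(preds, y))
            err = min(raw, len(y) - raw)        # allow flipped labels
            if best is None or err < best[0]:
                best = (err, f, t, raw > len(y) / 2)
    return best[1:]                             # (feature, threshold, flip)

def stump_predict(model, row):
    f, t, flip = model
    p = 1 if row[f] > t else 0
    return 1 - p if flip else p

def forest_fit(X, y, n_trees=15, seed=0):
    """Bagged stumps on bootstrap samples with a random feature each:
    a toy stand-in for a random forest."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]   # bootstrap
        f = rng.randrange(len(X[0]))                           # one feature
        models.append(stump_fit([X[i] for i in idx],
                                [y[i] for i in idx], [f]))
    return models

def forest_predict(models, row):
    votes = sum(stump_predict(m, row) for m in models)
    return 1 if votes * 2 > len(models) else 0
```

The ensemble's extra robustness, at the cost of evaluating many trees per sample, mirrors the accuracy/testing-time trade-off the paper reports.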
Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872896
Rindra Wiska, Novian Habibie, A. Wibisono, W. S. Nugroho, P. Mursanto
A Wireless Sensor Network (WSN) is a system capable of data acquisition and monitoring over a wide sampling area for a long time. However, because of this large-scale monitoring, the amount of data accumulated from a WSN is very large, and a conventional database system may not be able to handle it. To overcome this, a big data approach is used as an alternative data storage system and data analysis process. This research developed a WSN system for CO2 monitoring that uses Kafka and Impala to distribute the large amount of data: sensor nodes gather data and accumulate it in temporary storage, then stream it via the Kafka platform to be stored in an Impala database. The system was tested with data gathered from our own sensor nodes and gave good performance.
Title: "Big sensor-generated data streaming using Kafka and Impala for data storage in Wireless Sensor Network for CO2 monitoring". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
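The capture, temporary storage, and streaming steps described above can be sketched with standard-library pieces. Here a thread-safe queue stands in for the Kafka topic and Impala sink, so everything except the batching idea is an assumption:

```python
import queue
import threading

class SensorBuffer:
    """Toy stand-in for the paper's pipeline: readings accumulate in
    temporary storage and are flushed downstream in batches, the way the
    nodes batch CO2 samples before streaming them through Kafka. The real
    system publishes to a Kafka topic and lands in Impala; here the
    downstream is just a thread-safe queue."""
    def __init__(self, batch_size, downstream):
        self.batch_size = batch_size
        self.downstream = downstream
        self.buffer = []
        self.lock = threading.Lock()

    def add(self, reading):
        with self.lock:
            self.buffer.append(reading)
            if len(self.buffer) >= self.batch_size:
                self.downstream.put(list(self.buffer))  # flush one batch
                self.buffer.clear()

downstream = queue.Queue()
buf = SensorBuffer(batch_size=5, downstream=downstream)
for ppm in [412, 415, 410, 420, 418, 425, 430]:
    buf.add({"sensor": "node-1", "co2_ppm": ppm})
```

Batching is the point of the temporary storage: it amortizes per-message overhead when many nodes stream at once.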
Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872892
D. M. S. Arsa, G. Jati, Aprinaldi Jasa Mantau, Ito Wasito
The high dimensionality of big data requires heavy computation when analysis is needed. This research proposes dimensionality reduction using a deep belief network (DBN), with hyperspectral images, which are high-dimensional, as the case study. Earlier work has reduced hyperspectral image dimensionality with methods such as LDA and PCA in spectral-spatial hyperspectral image classification. This paper proposes DBN-based dimensionality reduction for hyperspectral image classification. The proposed framework uses two DBNs: the first reduces the dimension of the spectral bands, and the second extracts spectral-spatial features and acts as the classifier. We used the Indian Pines data set, which consists of 16 classes, and compared DBN and PCA performance. The results indicate that using a DBN for dimensionality reduction performs better than PCA in hyperspectral image classification.
Title: "Dimensionality reduction using deep belief network in big data case study: Hyperspectral image classification". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
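The reduction step of a DBN layer is the RBM hidden-unit projection h_j = sigmoid(b_j + sum_i v_i W_ij): with fewer hidden units than spectral bands, the hidden activations are the reduced representation, and stacking such layers gives the DBN encoder. The sketch below uses random, untrained weights purely to show the shape of the computation; a real DBN trains W with contrastive divergence, which is omitted here:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rbm_reduce(v, W, b):
    """Project a visible vector v (e.g. one pixel's spectral bands) onto
    the hidden units of an RBM layer: h_j = sigmoid(b_j + sum_i v_i W_ij)."""
    return [sigmoid(b[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(b))]

rng = random.Random(0)
n_bands, n_hidden = 200, 20          # e.g. 200 spectral bands -> 20 features
W = [[rng.gauss(0, 0.1) for _ in range(n_hidden)] for _ in range(n_bands)]
b = [0.0] * n_hidden
pixel = [rng.random() for _ in range(n_bands)]   # synthetic reflectances
features = rbm_reduce(pixel, W, b)               # the reduced representation
```

The 200-band and 20-unit sizes are illustrative; the Indian Pines cube has 200 usable bands, but the paper's layer sizes are not given in the abstract.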
Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872894
A. A. Gunawan, A. N. Falah, Alfensi Faruk, D. S. Lutero, B. N. Ruchjana, A. S. Abdullah
Due to pollution over many years, large amounts of heavy metal pollutants can accumulate in rivers. In this research, we predict the dangerous regions around a river. As a case study, we use the Meuse river floodplains, which are contaminated with zinc (Zn). Large zinc concentrations can cause many health problems, for example vomiting, skin irritation, stomach cramps, and anaemia. However, only a few samples of the zinc concentration of the Meuse river are available, so the missing data in unobserved regions need to be generated. The aim of this research is to study and apply spatial data mining to predict unobserved zinc pollution using ordinary point Kriging. By means of the semivariogram, the variability pattern of zinc can be captured; the fitted model is then used to interpolate predictions for the unknown regions with the Kriging method. In our experiments, we apply ordinary point Kriging and employ several semivariogram models: Gaussian, exponential, and spherical. The experimental results show that (i) by the minimum error sum of squares, the best-fitting theoretical semivariogram is the exponential model, and (ii) the accuracy of the predictions can be confirmed visually by projecting the results onto the map.
Title: "Spatial data mining for predicting of unobserved zinc pollutant using ordinary point Kriging". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
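The semivariogram the authors fit starts from the standard empirical estimator gamma(h) = (1 / 2N(h)) * sum over pairs at lag h of (z_i - z_j)^2, which a Gaussian, exponential, or spherical model is then fitted to before Kriging. A minimal sketch, with the point layout and binning as illustrative assumptions:

```python
from itertools import combinations

def empirical_semivariogram(points, lag_width, n_lags):
    """Empirical semivariogram: for each distance bin k,
    gamma_k = (1 / 2N_k) * sum over pairs in bin k of (z_i - z_j)^2.
    `points` is a list of (x, y, z) tuples; bins with no pairs give None."""
    sums = [0.0] * n_lags
    counts = [0] * n_lags
    for (x1, y1, z1), (x2, y2, z2) in combinations(points, 2):
        d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
        k = int(d // lag_width)
        if k < n_lags:
            sums[k] += (z1 - z2) ** 2
            counts[k] += 1
    return [s / (2 * c) if c else None for s, c in zip(sums, counts)]
```

On spatially correlated data the estimate rises with lag distance until it levels off at the sill, and that shape is what the model fitting captures.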
Pub Date: 2016-10-01. DOI: 10.1109/IWBIS.2016.7872889
Argianto Rahartomo, R. F. Aji, Y. Ruldeviyani
Big Data refers to a condition in which the data in a database is so large that it is difficult to manage. An e-learning application like SCeLE Fasilkom UI (scele.cs.ui.ac.id) also has very large data: SCeLE has hundreds of forums, each forum has at least 4,000 discussion threads, and one thread can have dozens or hundreds of posts. It may therefore face data growth problems that are difficult to handle with an RDBMS such as the currently used MySQL. To solve this problem, research was conducted to apply Big Data technology to SCeLE Fasilkom UI, with the aim of increasing SCeLE's data management performance. The implementation used MongoDB as the system's DBMS. The results showed that MongoDB obtains better results than MySQL, in terms of speed, on the SCeLE Fasilkom UI forum data.
Title: "The application of big data using MongoDB: Case study with SCeLE Fasilkom UI forum data". Published in: 2016 International Workshop on Big Data and Information Security (IWBIS).
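A speed comparison like the one reported rests on a simple timing harness around each query. The sketch below shows that methodology with sqlite3 as a runnable stand-in (the study itself measured MongoDB against MySQL; the table layout and workload numbers here are illustrative):

```python
import sqlite3
import time

def timed(fn, *args):
    """Wall-clock one query, the way per-DBMS response times are compared."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0

# Stand-in workload: forum posts looked up by thread id, echoing the
# SCeLE scale of thousands of threads with many posts each.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post (thread_id INTEGER, body TEXT)")
conn.executemany("INSERT INTO post VALUES (?, ?)",
                 [(i % 4000, "text") for i in range(100_000)])
conn.execute("CREATE INDEX idx_thread ON post(thread_id)")

def posts_in_thread(tid):
    return conn.execute(
        "SELECT COUNT(*) FROM post WHERE thread_id = ?", (tid,)).fetchone()[0]

count, elapsed = timed(posts_in_thread, 1234)
```

Running the same harness against both backends on identical data, repeated enough times to average out noise, is the fair-comparison discipline such a study needs.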