Koi fish classification based on HSV color space
DOI: 10.1109/ICTS.2016.7910280
Dhian Satria Yudha Kartika, D. Herumurti
Digital image processing remains an area of strong research demand; such research can address components of color, texture, and pattern. This study focuses on segmenting the body patterns of koi fish. Koi, a species originating from Japan, are highly popular in Indonesia for their diverse shades of color and unique patterns. Nine koi varieties are grouped into classes; from them, 281 images form the dataset, which is later split into training and testing data. Segmentation is an important step for obtaining high accuracy before classification. The proposed segmentation method uses K-Means as pre-processing to separate object from background, labeling pixels with one of two values, 0 or 1. The pre-processing output keeps the pixels labeled 1, i.e. the fish object, whose Red, Green, Blue (RGB) values are then converted to Hue, Saturation, Value (HSV) for feature extraction. HSV-based feature extraction is proposed to obtain classification results with high accuracy. Testing uses the Weka 3.8.0 tool, comparing Naive Bayes with a Support Vector Machine (SVM) under K-Fold Cross Validation. The results show that Naive Bayes without K-Fold Cross Validation and SVM with K-Fold Cross Validation both reach a high accuracy of 97%. It can be concluded that segmentation using K-Means combined with HSV features delivers 97% accuracy in the testing process.
{"title":"Koi fish classification based on HSV color space","authors":"Dhian Satria Yudha Kartika, D. Herumurti","doi":"10.1109/ICTS.2016.7910280","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910280","url":null,"abstract":"Digital image processing is still a great demand for research. Research related to digital image processing can be components of color, texture and pattern. This study focuses on the segmentation process of the body pattern of koi. Koi fish is a fish species originating from the country of Japan are much in demand by the people of Indonesia as diverse shades of color and a unique pattern. This study focuses on 9 koi fish that will be grouped into classes. From 9 of koi fish are 281 datasets were later processed into training data and data testing. The segmentation process becomes important to obtain high accuracy before the classification process. The proposed segmentation method using the K-Means as pre-processing. K-Means method used for the separation of the object and the background with two color features are worth 0 and 1. Results of pre-processing will be displayed on color feature is worth 1; object fish that has a value of Red, Green, Blue (RGB). The value in the subsequent feature extraction RGB colors into Hue Saturation Value (HSV). The process of using the HSV color feature extraction is proposed to obtain classification results with high accuracy values. The testing process using tools Weka 3.8.0 Classification with Naive Bayes method compared with Support Vector Machine (SVM) which both use the K-Fold Cross Validation. The test results showed the Naive Bayes without K-Fold Cross Validation and SVM using K-Fold Cross Validation together have a value of high accuracy of 97%. It can be concluded that the segmentation method using the K-Means and HSV capable of providing high accuracy impact on the testing process by 97%.","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122341796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed vapour identification using partition column-QCMs and Artificial Neural Network
DOI: 10.1109/ICTS.2016.7910294
M. Rivai, A. Arifin, Eva Inaiyah Agustin
This paper presents the identification of mixed vapours using an electronic nose system composed of a Quartz Crystal Microbalance (QCM) sensor array and a gas-chromatography partition column. The polymer-coated QCMs produce specific frequency shifts. The data set was processed by an Artificial Neural Network trained with the Backpropagation algorithm for pattern recognition. The results show that the equipment is able to identify five types of vapours, namely benzene, acetone, isopropyl alcohol, a non-polar and polar mixture (benzene and acetone), and a polar and polar mixture (isopropyl alcohol and acetone), with an identification rate of 96%.
{"title":"Mixed vapour identification using partition column-QCMs and Artificial Neural Network","authors":"M. Rivai, A. Arifin, Eva Inaiyah Agustin","doi":"10.1109/ICTS.2016.7910294","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910294","url":null,"abstract":"This Paper presents the identification of mixed vapour using electronic nose system composed of Quartz Crystal Microbalance (QCM) sensor array and a partition column of gas chromatography. The polymer coated QCMs produced a specific frequency shift. The data set was processed by an Artificial Neural Network using Backpropagation algorithm as a pattern recognition. The result showed that this equipment was able to identify five types of vapours namely benzene, acetone, isopropyl alcohol, non-polar and polar mixture (i.e. benzene and acetone), and also polar and polar mixture (i.e. isopropyl alcohol and acetone) with the identification rate of 96%.","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130336083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Business Process model similarity analysis using hybrid PLSA and WDAG methods
DOI: 10.1109/ICTS.2016.7910304
Indra Gita Anugrah, R. Sarno
Business process modeling describes the set of activities within a company or organization. The variety of modeling approaches and their levels of complexity make models hard to apply in practice, so a way to measure the degree of correspondence between business process models is needed. By measuring model similarity, a company can analyze its business processes more easily: it can understand the processes that are running and use the measurement as a tool to cope with change and development, making it easier to formulate needed policies quickly. This paper proposes merging structural analysis with semantic analysis, where the semantic analysis is performed using Probabilistic Latent Semantic Analysis (PLSA). The results of both the structural and the semantic analysis are represented as a Weighted Directed Acyclic Graph (WDAG) and combined in the calculation, with the aim of producing a similarity measure between business process models that is better than structural analysis alone.
{"title":"Business Process model similarity analysis using hybrid PLSA and WDAG methods","authors":"Indra Gita Anugrah, R. Sarno","doi":"10.1109/ICTS.2016.7910304","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910304","url":null,"abstract":"Business process modeling means to describe the set of activities either within the companies or organizations. A variety of approaches used and the level of complexity is a problem that often occurs within applicability therefore needed a way to measure the degree of correspondence between the model of Business Process. By measuring the level of compatibility model in Business Process expected company can be easier to analyze. The need in the analysis of business process models are expected of a company or organization can understand the business processes that are running and can be used as a tool to help companies in the face of change and development so as to facilitate in making policies that are needed quickly. In this paper would propose merging structural analysis with semantic analysis where semantic analysis performed using Probabilistic Latent Semantic Analysis (PLSA), and then every method both structural and semantic analysis will be represented into Weighted Directed Acyclic Graph (WDAG) and to calculate, a combined with the aim to generating methods of measuring the degree of correspondence between business process models are better than just using structural analysis.","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128841670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual GUI testing in continuous integration environment
DOI: 10.1109/ICTS.2016.7910301
Fachrul Pralienka Bani Muhamad, R. Sarno, A. Ahmadiyah, S. Rochimah
Graphical User Interface (GUI) testing done manually requires great effort, because it demands high precision and a large amount of time to run all scenarios repeatedly. In addition, it is prone to errors, and often not all testing scenarios are completed. To solve these problems, automated GUI testing has been proposed. The latest generation of automated GUI testing (the third) takes a visual approach and is called Visual GUI Testing (VGT). Automating VGT requires testing tools; with them, GUI testing can be performed automatically and can mimic human behavior. However, in the software development process VGT feedback is still not automated, so effort is still required to run VGT manually and repeatedly. Continuous integration (CI) is a practice that automates the build whenever any program code, or any version of it, changes; each build consists of compiling, inspecting the code, testing, and deploying. To automate VGT feedback, this paper proposes combining CI practice with VGT practice. The focus of the research is combining and assessing VGT tools and CI tools, as there has been no research on this yet. The results show that the combination of Jenkins and JAutomate receives the highest assessment.
{"title":"Visual GUI testing in continuous integration environment","authors":"Fachrul Pralienka Bani Muhamad, R. Sarno, A. Ahmadiyah, S. Rochimah","doi":"10.1109/ICTS.2016.7910301","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910301","url":null,"abstract":"Graphical User Interface (GUI) testing which is done manually requires great effort, because it needs high precision and bunch of time to do the all scenarios repeatedly. In addition, it can be prone to errors and most of testing scenarios are not all done. To solve that problems, it is proposed automated GUI testing. The latest techniques of automated GUI testing (the 3rd generation) is through a visual approach or called by Visual GUI testing (VGT). To automate the VGT, it is necessary to use testing tools. With VGT tools, GUI testing can be performed automatically and can mimic the human behavior. However, in the software development process, VGT feedback is still not automated, so that the effort is still required to run the VGT manually and repeatedly. Continuous integration (CI) is a practice that can automate the build when any program code or any version of the program code is changed. Each build consists of compile, inspection program code, test, and deploy. To automate the VGT feedback, it proposed combination of CI practice and VGT practice. In this paper, the focus of research is combining and assessing the VGT tools and CI tools, because there is no research about it yet. The result of this research show that combination of Jenkins and JAutomate are the highest assessment.","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125575079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A study of students' satisfaction toward blended learning implementation in higher education institution in Indonesia
DOI: 10.1109/ICTS.2016.7910302
S. Paturusi, T. Usagawa, Arie S. M. Lumenta
In today's digital age, electronic learning (e-learning), which refers to learning via the Internet, has become a significant trend, and higher education institutions increasingly provide it. This paper describes the first development and implementation of blended learning courses at Sam Ratulangi University (UNSRAT), Manado, North Sulawesi, Indonesia. The current work evaluates the implementation of those blended learning courses from the students' perspective. The research treats students' reflection on the courses as a crucial factor for the evaluation and revision process, and therefore investigates which factors influence student satisfaction. The paper describes the development of the courses based on instructional design; students' achievements were evaluated from questionnaires and examination results. The outcome provides a description of student satisfaction and can serve as guidance for developing and designing blended learning courses in the UNSRAT environment.
{"title":"A study of students' satisfaction toward blended learning implementation in higher education institution in Indonesia","authors":"S. Paturusi, T. Usagawa, Arie S. M. Lumenta","doi":"10.1109/ICTS.2016.7910302","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910302","url":null,"abstract":"In today's digital age, electronic learning (e-learning), refers to learning via the Internet, has become a significant trend. Accordingly, the higher education institutions progressively contribute to providing e-learning. This paper describes the first development and implementation of blended learning courses at Sam Ratulangi University (UNSRAT), Manado, North Sulawesi, Indonesia. The current work aims to evaluate the implementation of those blended learning courses from students' perspective. This research suggests that for evaluation and revision process, students' contemplation on courses is a crucial factor. Therefore, it would like to investigate what factors are influencing the students' satisfaction. In this paper, the development of the courses based on the instructional design was described. Students' achievements were evaluated from the questionnaire and examination results. The result of the research provides a description of students' satisfaction and further this outcome can be seen as guidance to develop and design blended learning courses in UNSRAT environment.","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129200543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BencanaVis: visualization and clustering of disaster readiness using K-Means with R Shiny. A case study of disaster, medical personnel and health facilities data at province level in Indonesia
DOI: 10.1109/ICTS.2016.7910295
R. Kusumawardani, I. Hafidz, Septa Firmansyah Putra
The open data movement has led to immensely useful applications and innovations for decision making, for individual citizens as well as government. This study creates a web application called BencanaVis, which provides innovative visualization of government open data on disasters using Shiny, a web framework for the R programming language. The datasets come from the Indonesian National Disaster Management Authority (BNPB), the official Indonesian Open Data government portal, and the Indonesian National Statistical Bureau (BPS) website. We create three scenarios (experiments) for the dataset. We first normalize the data using min-max normalization, then employ PCA (principal component analysis) to reduce feature dimensionality. We then apply K-Means clustering and calculate cluster validity using the Sum of Squared Errors (SSE), the Davies-Bouldin Index (DBI), the Dunn Index, the Connectivity Index, and the Silhouette Index. The cluster members for the optimal number of clusters k are then analyzed to create a disaster-readiness score, produced by weighting the attribute values with weights obtained from the AHP method. The application provides two main visualizations, a 3D scatter plot and a cluster distribution map built with the leaflet library for R, plus two further visualizations built with the heatmap and streamgraph libraries. The heatmap shows the distribution pattern of all attributes, and the streamgraph (a stacked area chart) shows the counts of the 21 disaster types recorded in the BNPB data over the 16 years from 2000 to 2016.
{"title":"BencanaVis visualization and clustering of disaster readiness using K Means with R Shiny A case study for Disaster, Medical Personnel and Health Facilities data at Province level in Indonesia","authors":"R. Kusumawardani, I. Hafidz, Septa Firmansyah Putra","doi":"10.1109/ICTS.2016.7910295","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910295","url":null,"abstract":"The open data movement has led us into immensely useful applications and innovations for decision making, both for individual citizen as well as government. This study aims to create a web application called BencanaVis which provide innovative visualization of disaster government open data using Shiny, a web framework from R programming language. The datasets being used are available from Indonesian National Disaster Management Authority agency (or BNPB), the official Indonesian Open Data government portal and the Indonesian National Statistical Bureau (or BPS) website. We create three types of scenarios or experiments for the dataset. After that, we normalize the data using min-max use normalization. Then, we employ PCA (principal component analysis) to reduce feature dimensionality. Furthermore, we apply K-Means clustering techniques and calculate the cluster validity using Sum of Square Error (SSE), Davis-Bouldin Index (DBI), Dunn Index, Connectivity Index and Silhouettes Index. The cluster member from optimal number of k are then being analyzed to create a score for disaster readiness. We shall analyze this disaster readiness using the scoring produced by weighting the attributes values with weights from the AHP methods. Furthermore, we provide two visualizations; they are 3D scatter plot and cluster distribution using leaflet library from R. There are two other visualizations provided in the web application use heatmap and streamgraph library. The heatmap visualization shows the pattern distribution of all attributes and streamgraph visualization which refers to stacked area chart shows the number of 21 types disaster which recorded from BNPB data in 16 years during the year 2000 – 2016.","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"179 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130072489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heart murmurs extraction using the complete Ensemble Empirical Mode Decomposition and the Pearson distance metric
DOI: 10.1109/ICTS.2016.7910288
J. Jusak, Ira Puspasari, Pauladie Susanto
Signal processing of pathological heart sound signals can be considered a fundamental part of tele-auscultation systems. In this paper, we employ the CEEMD and EEMD algorithms to decompose various pathological heart sound signals in the form of phonocardiograph (PCG) signals. Following the decomposition, we extract murmurs from the targeted heart sound signals using our proposed technique based on the Pearson distance metric. The performance of the decomposition algorithms and of the extraction method is evaluated in terms of delta SNR, which compares the variance of the targeted signal before and after murmur extraction. We conclude that, in general, pathological heart sound signals decomposed with the CEEMD algorithm followed by Pearson-distance murmur extraction yield finer murmur extraction than with the EEMD. Additionally, the EEMD algorithm uses fewer modes to form the extracted murmur signal than the CEEMD algorithm; however, the CEEMD algorithm requires a larger number of sifting procedures, causing higher computational complexity than the EEMD algorithm.
{"title":"Heart murmurs extraction using the complete Ensemble Empirical Mode Decomposition and the Pearson distance metric","authors":"J. Jusak, Ira Puspasari, Pauladie Susanto","doi":"10.1109/ICTS.2016.7910288","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910288","url":null,"abstract":"Signal processing for pathological heart sound signals can be considered as a fundamental part of the whole process in tele-auscultation systems. In this paper, we employ the CEEMD and the EEMD algorithm to decompose various pathological heart sound signals in the form of phonocardiograph (PCG) signals. Following the decomposition process, we subsequently extract murmurs from the targeted heart sound signals using our proposed technique that based on the Pearson distance metric. Performance analysis of the decomposition algorithms as well as the extraction method is evaluated in terms of delta SNR that signifies variance comparison of targeted signal before and after murmurs extraction. It can be concluded that in general pathological heart sound signals that have been decomposed by the CEEMD algorithm followed by the Pearson distance metric for murmurs extraction, provide the finest murmurs extraction than the EEMD. Additionally, the EEMD algorithm involves smaller number of modes to form the extracted murmurs signal as compared to the CEEMD algorithm. However, employing the CEEMD algorithm produces higher number of shifting procedures causing higher computational complexity than the EEMD algorithm.","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"519 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116262984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development of quality gates instrument for e-learning implementation
DOI: 10.1109/ICTS.2016.7910308
F. A. Muqtadiroh, H. M. Astuti, Rian Triadi
Poor project management is one of the causes of failure of e-learning implementation processes. A number of activities must be completed well in an e-learning project, yet in execution some activities typically fail. Failure to maintain the quality of an ongoing software project will badly affect the final software product (the e-learning system).
{"title":"The development of quality gates instrument for e-learning implementation","authors":"F. A. Muqtadiroh, H. M. Astuti, Rian Triadi","doi":"10.1109/ICTS.2016.7910308","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910308","url":null,"abstract":"A poor project management is one of the causes of failure of e-learning implementation processes. There are number of activities to be well completed in an e-learning project, yet in its execution there are normally some activities fail. The failure to maintain an on-going software project quality will badly affect the final software product (e-learning).","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128223260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indonesia scholarship selection framework using fuzzy inference system approach: Case study: “Bidik Misi” scholarship selection
DOI: 10.1109/ICTS.2016.7910282
L. Latumakulita, Fajar Purnama, T. Usagawa, S. Paturusi, Delta Ardy Prima
The “Bidik Misi” scholarship program has been run by the Indonesian Government since 2010. Qualification has two main aspects: the family's economic situation and academic performance. The qualification parameters are accordingly divided into two groups: the first is used to measure a candidate's poverty level and the second to measure the candidate's academic performance. Each group has its subgroups and behaves like a Fuzzy Logic Controller (FLC). The final output is derived from the combination of the poverty and academic-performance levels. This research proposes a new framework for “Bidik Misi” scholarship selection based on a Fuzzy Inference System (FIS) using Mamdani's method. The selection decision is based on the final total score over all parameters and the maximum quota; it becomes difficult to make a clear decision when two or more candidates obtain the same final total score, so the FIS approach is proposed to help decision makers choose the most qualified applicants and avoid unnecessary problems in the selection process.
{"title":"Indonesia scholarship selection framework using fuzzy inferences system approach: Case study: “Bidik Misi” scholarship selection","authors":"L. Latumakulita, Fajar Purnama, T. Usagawa, S. Paturusi, Delta Ardy Prima","doi":"10.1109/ICTS.2016.7910282","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910282","url":null,"abstract":"“Bidik Misi” scholarship program had been released by Indonesia Government since 2010. The main qualification has two aspects; family's economics and academic performance. The qualification parameters of “Bidik Misi” scholarship are divided into two groups. The first group is used to measure the poverty level of candidates and the second group is used to measure the level of academic performance of candidates. Each group has its subgroups and behaves like a Fuzzy Logic Controller (FLC). The final output is derived from the combination of poverty and academic performance levels. The aim of this research is to propose a new framework based on Fuzzy Inference System (FIS) with Mamdani's methods for “Bidik Misi” scholarship selection. Decision mechanism of “Bidik Misi” scholarship is based on the final total score from each parameter and maximum quota. It becomes difficult to make a clear decision if two or more candidates get the same final total score thus FIS approach is proposed to help decision makers to choose the most qualified applicants and avoid unnecessary problem in the selection process.","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114238522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Substation placement optimization method using Delaunay Triangulation Algorithm and Voronoi Diagram in East Java case study
DOI: 10.1109/ICTS.2016.7910300
Pradipta Ghusti, R. Sarno, R. V. Ginardi
In anticipation of the growth of electricity demand in a region, new substations must be added and previously operating substations developed. The State Electricity Company prepares a plan that determines the number, capacity, and location of substations in order to reconcile current electricity needs with future ones. This plan requires identifying the electrical load in each region as well as the development capability of existing substations. Loads are allocated to substations based on the distance between them, seeking the minimum transportation cost; cost is optimized across all substations by allocating each load to the appropriate substation. This work aims to optimize and determine the service area of each APJ by applying the Voronoi Diagram and Delaunay Triangulation.
{"title":"Substation placement optimization method using Delaunay Triangulation Algorithm and Voronoi Diagram in East Java case study","authors":"Pradipta Ghusti, R. Sarno, R. V. Ginardi","doi":"10.1109/ICTS.2016.7910300","DOIUrl":"https://doi.org/10.1109/ICTS.2016.7910300","url":null,"abstract":"In anticipation of the growth of electricity demand in the region, required the addition of new substations and the development of substation-substation that has operated previously. State Electricity Company developed a plan to determine the number, capacity and location of the substation in order to reconcile the needs of electric current with future electricity needs. This plan requires the identification of the electrical load in each region as well as the capability development of substations existing substations. Expenses allocated to a substation based on the distance between them by seeking the minimum transportation cost. Cost is optimized for all of the substation through the allocation of the burden to the appropriate substation. This final project is aimed at optimizing and find the service area (service area) by APJ by applying Voronoi diagram and Delaunay Triangulation.","PeriodicalId":177275,"journal":{"name":"2016 International Conference on Information & Communication Technology and Systems (ICTS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133229018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}