UTAUT2 model for analyzing factors influencing user in using Online Travel Agent
Pub Date: 2020-09-19 | DOI: 10.1109/iSemantic50169.2020.9234258
Desanty Ridzky, R. Sarno
Technology development in Indonesia has progressed rapidly and created business opportunities for companies to meet customers' needs. The widespread presence of e-commerce in Indonesia is one example of this technological progress. Indonesia already has e-commerce online travel agents that prioritize users' needs, making online reservations easier, more efficient, and more effective. Traveloka and Tiket.com are e-commerce online travel agents with many downloads in Indonesia. In choosing an online travel agent, users are influenced by several factors, which this study identifies using the UTAUT2 model. The results indicate that the use of Traveloka is influenced by perceived security, price value, and habit, while the use of Tiket.com is influenced by facilitating conditions, performance expectancy, and habit. Companies could focus on these factors to increase users' intention to use online travel agents.
{"title":"UTAUT2 model for analyzing factors influencing user in using Online Travel Agent","authors":"Desanty Ridzky, R. Sarno","doi":"10.1109/iSemantic50169.2020.9234258","DOIUrl":"https://doi.org/10.1109/iSemantic50169.2020.9234258","url":null,"abstract":"Technology development in Indonesia has increasingly progressed and provided business opportunities for businesses to meet customer's needs. The presence of e-commerce that have been widely spread in Indonesia is one of the examples of the technological progress. Indonesia already has an e-commerce online travel agent that prioritized user's needs to make it easier for the user to make an online reservation more efficient and effective. Traveloka and Tiket.com are an e-commerce online travel agents with many downloader in Indonesia, in choosing an online travel agent, users are certainly influenced by several factors identify by using UTAUT2 model. The results of this study indicate the use of Traveloka for users is influenced by perceived security, price value, and habit factors, while Tiket.com is influenced by facilitating conditions, performance expectancy, and habit. Companies could focus on these factors in terms of increasing the desire of users to use online travel agents.","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"43 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133238460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cerebellum and Frontal Lobe Segmentation Based on K-Means Clustering and Morphological Transformation
Pub Date: 2020-09-19 | DOI: 10.1109/iSemantic50169.2020.9234262
Rakha Asyrofi, Yoni Azhar Winata, R. Sarno, Aziz Fajar
K-means clustering can be used as a segmentation algorithm that splits an area of interest in an image into several regions by assigning each pixel to a cluster based on color. Nevertheless, the color division produced by clustering alone does not yield clean segmentation, because pixels from different structures remain connected and produce noise or unwanted pixels. In this work, we propose a technique that selects four dominant colors from the k-means clustering result and displays them as the digital image output. With this approach, the proposed method can separate the cerebellum and frontal lobe from the background of the brain after several morphological transformation operations. To evaluate the method, brain images from three different people were tested. From the experimental results, the Dice Similarity Index (DSI) is 0.72 out of 1 for the frontal lobe and 0.86 out of 1 for the cerebellum, which means the proposed method can segment the desired parts of the brain properly.
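A minimal sketch of the general idea, not the authors' exact pipeline: cluster pixel colors with k-means (k = 4 dominant colors), take one cluster as a binary mask, and clean it with morphological opening/closing before scoring with the Dice Similarity Index. The file name, cluster index, and kernel size below are assumptions.

```python
# Sketch: k-means color clustering followed by morphological cleanup (assumptions noted).
import cv2
import numpy as np

img = cv2.imread("brain_slice.png")            # assumed input MRI slice (BGR)
pixels = img.reshape(-1, 3).astype(np.float32)

# Cluster pixels into 4 dominant colors with k-means.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 4, None, criteria, 10, cv2.KMEANS_PP_CENTERS)

# Rebuild the image using only the 4 cluster-center colors (the "dominant color" output).
quantized = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)

# Pick one cluster as a binary mask (index chosen by inspection in practice).
target_cluster = 2                              # assumption: cluster containing the structure
mask = (labels.reshape(img.shape[:2]) == target_cluster).astype(np.uint8) * 255

# Morphological opening/closing to remove pixel noise and stray connected regions.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def dice(pred, gt):
    """Dice Similarity Index between two binary masks."""
    pred, gt = pred > 0, gt > 0
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())
```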
{"title":"Cerebellum and Frontal Lobe Segmentation Based on K-Means Clustering and Morphological Transformation","authors":"Rakha Asyrofi, Yoni Azhar Winata, R. Sarno, Aziz Fajar","doi":"10.1109/iSemantic50169.2020.9234262","DOIUrl":"https://doi.org/10.1109/iSemantic50169.2020.9234262","url":null,"abstract":"K-means clustering can be used as an algorithm segmentation that can split an area of interest from the image into several different regions containing each pixel based on color. Nevertheless, the result of the color division of the clustering has not been able to display clean segmentation because there are still pixels that connect each other and produce pixel noise or unwanted pixels. In this work, we propose a technique where it can select four dominant colors from the k-means clustering results then display it as digital image output. In our approach, the proposed method can separate the cerebellum and frontal lobe from the background of the brain after several operations of morphological transformation. In implementing this method, three samples of the brain from different people were tested. From the experimental results, the DSI produces a value of 0.72 from 1 for the frontal lobe and 0.86 from 1 for the cerebellum. It means that the proposed method can segment the desired part of the brain properly.","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125976461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Analysis of LSB Color Image Steganography based on Embedding Pattern of the RGB Channels
Pub Date: 2020-09-19 | DOI: 10.1109/iSemantic50169.2020.9234247
B. Sugiarto, C. A. Sari, De Rosal Ignatius Moses Setiadi, E. H. Rachmawanto
This research analyzes three patterns of message embedding based on the Least Significant Bit (LSB) method on RGB color images. Previous research has suggested an LSB x-y-z embedding pattern, where x-y-z is 2-3-3, 3-2-3, or 3-3-2, so that the x, y, and z values sum to 8 bits. The x value is the number of message bits embedded in the red channel, y in the green channel, and z in the blue channel. Each previous study claims that its pattern has particular advantages, especially for increasing payload and imperceptibility. For that reason, this research compares the three patterns on the same dataset, using the same host images and messages. To increase message security, the message is encrypted with the RSA method before being embedded. Stego image quality is measured against the host image with four measures: mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM), and histogram analysis. The results show that all patterns produce good and nearly identical quality. LSB 3-2-3 is slightly superior when measured by MSE and PSNR, but the average difference does not reach 0.1 dB, while LSB 3-3-2 gives the best SSIM, with a difference of no more than 0.001. At the message extraction stage, the bit error ratio (BER) between the original and extracted messages is 0, which indicates that all patterns can extract the message perfectly.
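A minimal sketch of the x-y-z channel pattern (shown for 3-2-3), assuming the message has already been RSA-encrypted and converted to a '0'/'1' bit string; capacity checks, padding, and file handling are simplified and the channel order R, G, B is an assumption.

```python
# Sketch of LSB x-y-z embedding: x bits in R, y bits in G, z bits in B per pixel.
import numpy as np

def embed_lsb_xyz(cover: np.ndarray, bits: str, pattern=(3, 2, 3)) -> np.ndarray:
    """cover: HxWx3 uint8 image; bits: encrypted message as a string of '0'/'1'."""
    stego = cover.copy()
    flat = stego.reshape(-1, 3)                  # view: writes go back into stego
    idx = 0
    for px in flat:
        for ch, n in enumerate(pattern):         # assumed channel order R, G, B
            if idx >= len(bits):
                return stego
            chunk = bits[idx:idx + n].ljust(n, "0")   # pad the final partial chunk
            idx += n
            keep = 0xFF ^ ((1 << n) - 1)         # clear the n least significant bits
            px[ch] = (int(px[ch]) & keep) | int(chunk, 2)
    return stego

def extract_lsb_xyz(stego: np.ndarray, n_bits: int, pattern=(3, 2, 3)) -> str:
    """Read back the first n_bits embedded with the same pattern."""
    out = []
    for px in stego.reshape(-1, 3):
        for ch, n in enumerate(pattern):
            out.append(format(int(px[ch]) & ((1 << n) - 1), f"0{n}b"))
            if sum(len(b) for b in out) >= n_bits:
                return "".join(out)[:n_bits]
    return "".join(out)[:n_bits]
```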
{"title":"Performance Analysis of LSB Color Image Steganography based on Embedding Pattern of the RGB Channels","authors":"B. Sugiarto, C. A. Sari, De Rosal Ignatius Moses Setiadi, E. H. Rachmawanto","doi":"10.1109/iSemantic50169.2020.9234247","DOIUrl":"https://doi.org/10.1109/iSemantic50169.2020.9234247","url":null,"abstract":"This research aims to analyze three patterns of embedding messages based on the Least Significant Bit (LSB) method on RGB color images. Previous research has suggested a pattern of LSB x-y-z embedding, where the value of x-y-z is 2-3-3 or 3-2-3 or 3-3-2, where the sum of the x-y-z value is 8-bits. The x value represents the number of message bits embedded on the red channel, y on the green channel, and z on the blue channel. Each research claims that the pattern has its advantages, especially to increase payload and imperceptibility. Because of this, a third method is used to compile the three methods using the same dataset, both the host image and the message. To increase the security of the message, encryption is performed using the RSA method before embedded. Stego image quality is measured by comparing it with the host image with four kinds of measuring tools, namely mean square error (MSE), peak signal to noise ratio (PSNR), structural similarity index measurement (SSIM), and histogram analysis. The results showed that all methods were of good quality and identical. It's just that LSB 3-2-3 is slightly superior when measured based on MSE and PSNR values, but the average difference in value does not reach 0.1dB. Whereas based on measuring SSIM LSB 3-3-2 get the best results. Where the difference is not more than 0.001. While at the message extraction stage the value of the bit error ratio (BER) between the original message and the extracted message yields a value of 0, which indicates that all methods can extract the message perfectly.","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126192997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment of Humorous Speech by Automatic Heuristic-based Feature Selection
Pub Date: 2020-09-19 | DOI: 10.1109/iSemantic50169.2020.9234228
Derry Pramono Adi, Agustinus Bimo Gumelar, Ralin Pramasuri Arta Meisa, Siska Susilowati
As the amount of data and the file sizes grow, the dimensionality of the extracted features also grows, and the resulting multiplications place a heavy computational load on computers. As technology has progressed, we record clearer sound files, producing more High Definition (HD) data with a direct impact on file size. Since many recordings are critically needed for further analysis, reducing the number of files or sacrificing audio clarity is not feasible. To select the features that best represent humorous speech, we apply Feature Selection (FS) techniques, which help when working with more than ten features or attributes. The purpose of this research is to find the FS technique that gives the highest Random Forest classification accuracy, specifically for humorous speech. Unlike the usual FS techniques, we employ heuristic-based FS techniques, namely Particle Swarm Optimization, Ant Colony Optimization, Cuckoo Search, and the Firefly Algorithm. We apply the FS techniques in WEKA because of its ease of use, and we use the GUI-based jAudio tool for feature extraction for the same reason. We use speech data from the UR-FUNNY dataset, which comprises 10,000 sound clips of both humorous and non-humorous speech by TED Talks speakers.
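The paper runs its heuristic feature selection inside WEKA; as an illustration only, the rough Python sketch below shows the wrapper idea behind one of the named techniques (a binary Particle Swarm Optimization search scored by Random Forest cross-validation accuracy). Swarm size, iteration counts, and coefficients are assumptions, not the authors' settings.

```python
# Sketch: binary PSO wrapper feature selection evaluated with a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Cross-validated Random Forest accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

def binary_pso_fs(X, y, n_particles=10, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.integers(0, 2, size=(n_particles, n_feat))        # binary feature masks
    vel = rng.normal(0, 1, size=(n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sigmoid transfer function turns velocities back into binary positions.
        pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool), pbest_fit.max()
```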
{"title":"Assessment of Humorous Speech by Automatic Heuristic-based Feature Selection","authors":"Derry Pramono Adi, Agustinus Bimo Gumelar, Ralin Pramasuri Arta Meisa, Siska Susilowati","doi":"10.1109/iSemantic50169.2020.9234228","DOIUrl":"https://doi.org/10.1109/iSemantic50169.2020.9234228","url":null,"abstract":"Following the amount of data and file size, the dimensions of the features can also change, causing heavy usage load on computers by simple multiplication. As technology progressed, we generate clearer sound files, resulting in more High Definition (HD) data with a direct impact on its size. Since many records are critically needed for further analysis, reducing files count and sacrificing clearer sound files is not feasible. In selecting features that best represent humorous speech, we need to implement the Feature Selection (FS) techniques. The FS acts as helpers in computing features with more than ten features/attributes. The purpose of this research is to find the FS technique with the highest accuracy of Random Forest classification, specifically for humorous speech. Unlike the usual FS techniques, we chose to employ the heuristic-based FS techniques, namely, Particle Swarm Optimization, Ant Colony Optimization, Cuckoo Search, and Firefly Algorithm. We applied the FS techniques in WEKA, over their simplification of usage; also jAudio of GUI-based feature extraction for the same reason. Moreover, we used the speech data from the UR-FUNNY dataset, which comprised 10.000 sound clips of both humorous and non-humorous speech by TED Talks speakers.","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130070848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Corpus Callosum Segmentation from Brain MRI Images Based on Level Set Method
Pub Date: 2020-09-19 | DOI: 10.1109/iSemantic50169.2020.9234268
Putri Damayanti, Dini Yuniasri, R. Sarno, Aziz Fajar, Dewi Rahmawati
The corpus callosum integrates the left and right hemispheres of the human brain. There are several methods for segmenting the corpus callosum, but existing algorithms need several steps to segment the images. Therefore, we propose a simple method that uses the level set method to segment the corpus callosum. We use the level set method because it can handle the structure of the brain easily. It provides a numerical solution for handling changes in topological contours by representing a curve or surface as the zero level set of a higher-dimensional surface. The experiment shows that segmenting the corpus callosum with the level set method produces a Dice Similarity Coefficient (DSC) of 85.14%.
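As an illustration of the level-set idea (not the authors' exact formulation), the sketch below evolves a morphological Chan-Vese level set with scikit-image and scores the result with the Dice Similarity Coefficient. The file names, iteration count, and smoothing value are assumptions.

```python
# Sketch: level-set style segmentation (morphological Chan-Vese) plus DSC evaluation.
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import morphological_chan_vese

slice_img = img_as_float(io.imread("mri_midsagittal.png", as_gray=True))  # assumed input

# Evolve the zero level set for a fixed number of iterations from a checkerboard init.
mask = morphological_chan_vese(slice_img, 200, init_level_set="checkerboard", smoothing=2)

def dice_coefficient(pred, gt):
    """DSC = 2 * |A intersect B| / (|A| + |B|) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

# Example usage against a manually labeled mask (assumed file name):
# ground_truth = io.imread("cc_groundtruth.png", as_gray=True) > 0
# print(f"DSC = {dice_coefficient(mask, ground_truth):.4f}")
```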
{"title":"Corpus Callosum Segmentation from Brain MRI Images Based on Level Set Method","authors":"Putri Damayanti, Dini Yuniasri, R. Sarno, Aziz Fajar, Dewi Rahmawati","doi":"10.1109/iSemantic50169.2020.9234268","DOIUrl":"https://doi.org/10.1109/iSemantic50169.2020.9234268","url":null,"abstract":"Corpus callosum integrates left and right hemispheres of human brain. There are several methods for segmenting corpus callosum, but the existing algorithms need several steps to segment images. Therefore, we propose a simple method using level set method to segment corpus callosum. We use level set method as it can handle the structure of the brain easily. This method provides a numerical solution for processing changes in topological contours by representing a curve or surface as a zero level to a higher hyper-dimensional surface. This experiment shows that by implementing level set method to segment the corpus callosum produces Dice Similarity Coefficient (DSC) value of 85.14%.","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117325515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Study Analysis of Human Face Recognition using Principal Component Analysis
Pub Date: 2020-09-19 | DOI: 10.1109/iSemantic50169.2020.9234250
V. Maheswari, C. A. Sari, D. Setiadi, E. H. Rachmawanto
Principal Component Analysis (PCA) is a very popular face recognition method. This research analyzes the PCA method by testing various scenarios to identify the factors that affect its recognition results. Three datasets are used in the testing phase: a private dataset, JAFFE, and Yale. The accuracy on the private dataset is 79%, 82%, 86%, and 85.33% under different scenarios, while on the JAFFE dataset the maximum recognition accuracy is 100% and on the Yale dataset it is 85.33%. From the experiments, the factors that affect accuracy are the number of people, the amount of training data, the attributes used, lighting, and background. Facial expressions and gender do not prove to have a major influence on the recognition process; even with a variety of facial expressions, the PCA method can still recognize faces well.
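A minimal eigenfaces-style sketch of PCA-based face recognition: project flattened face images onto principal components and match with a nearest-neighbor classifier. The dataset loading, split ratio, and number of components are assumptions and not the paper's exact scenarios.

```python
# Sketch: PCA projection (eigenfaces) followed by nearest-neighbor recognition.
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def pca_face_recognition(X, y, n_components=50):
    """X: (n_samples, h*w) flattened grayscale faces, y: person labels (assumed loaded)."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    pca = PCA(n_components=n_components, whiten=True).fit(X_train)
    train_proj = pca.transform(X_train)          # project faces onto the eigenfaces
    test_proj = pca.transform(X_test)

    clf = KNeighborsClassifier(n_neighbors=1).fit(train_proj, y_train)
    return accuracy_score(y_test, clf.predict(test_proj))
```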
{"title":"Study Analysis of Human Face Recognition using Principal Component Analysis","authors":"V. Maheswari, C. A. Sari, D. Setiadi, E. H. Rachmawanto","doi":"10.1109/iSemantic50169.2020.9234250","DOIUrl":"https://doi.org/10.1109/iSemantic50169.2020.9234250","url":null,"abstract":"Principal Component Analysis (PCA) is a very popular facial recognition method. This research aims to analyze the PCA method, where various scenarios are tested to look for things that affect the results of recognition using this method. There are three datasets used in the testing phase, namely the private dataset, JAFFE, and Yale. The accuracy produced in the private dataset is 79%, 82%, 86%, and 85.33% with a variety of different scenarios, while in the JAFFE dataset the maximum recognition accuracy is 100% and in the last experiment on the Yale dataset, the accuracy is 85.33%. From various experiments that have been done, it is found that the things that affect accuracy are the number of people, training data, attributes used, lighting, and background. While facial expressions and gender do not prove to have a major influence on the recognition process, with a variety of facial expressions, the PCA method can still recognize faces well.","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126946363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of SVM, KNN, and NB Classifier for Genre Music Classification based on Metadata
Pub Date: 2020-09-19 | DOI: 10.1109/iSemantic50169.2020.9234199
De Rosal Ignatius Moses Setiadi, Dewangga Satriya Rahardwika, E. H. Rachmawanto, Christy Atika Sari, Candra Irawan, Desi Purwanti Kusumaningrum, Nuri, Swapaka Listya Trusthi
Music recommendation is an important feature of services such as music streaming platforms, and classification of music genres is an important initial stage in genre-based recommendation. Many music classification methods are based on extracting audio features, which requires a non-trivial amount of computation. This research analyzes and tests the performance of music genre classification based on metadata using three different classifiers, namely Support Vector Machine (SVM) with a radial basis function (RBF) kernel, K-Nearest Neighbors (K-NN), and Naïve Bayes (NB). The Spotify music dataset was chosen because it has complete metadata for each track. Based on the test results, the SVM classifier has the best classification performance with 80% accuracy, followed by KNN with 77.18% and NB with 76.08%. These accuracies are comparable to music classification based on audio feature extraction, so classification using metadata features can continue to be developed as long as the metadata in the dataset is well managed.
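A minimal sketch of the comparison setup: train the three classifiers on tabular metadata features and compare test accuracy. The CSV file name, metadata columns, and split ratio are assumptions; the paper's exact Spotify fields may differ.

```python
# Sketch: compare SVM (RBF), K-NN, and Naive Bayes on music metadata features.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

df = pd.read_csv("spotify_metadata.csv")                        # assumed file
features = ["danceability", "energy", "tempo", "loudness",
            "acousticness", "valence"]                           # assumed metadata columns
X, y = df[features].values, df["genre"].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf", C=1.0, gamma="scale"),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: {accuracy_score(y_test, clf.predict(X_test)):.4f}")
```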
{"title":"Comparison of SVM, KNN, and NB Classifier for Genre Music Classification based on Metadata","authors":"De Rosal Ignatius Moses Setiadi, Dewangga Satriya Rahardwika, E. H. Rachmawanto, Christy Atika Sari, Candra Irawan, Desi Purwanti Kusumaningrum, Nuri, Swapaka Listya Trusthi","doi":"10.1109/iSemantic50169.2020.9234199","DOIUrl":"https://doi.org/10.1109/iSemantic50169.2020.9234199","url":null,"abstract":"Music recommendations are one of the important things, such as music streaming platforms. Classification of music genres is one of the important initial stages in the process of music recommendation based on genre. Many music classifications are proposed by extracting audio features that require a not light computing process. This research aims to analyze and test the performance of music genre classification based on metadata using three different classifiers, namely Support Vector Machine (SVM) with radial kernel base function (RBF), K Nearest Neighbors (K-NN), and Naïve Bayes (NB). The Spotify music dataset was chosen because it has complete metadata on each of its music. Based on the results of tests conducted by the SVM classifier has the best classification performance with 80% accuracy, then followed by KNN with 77.18% and NB with 76.08%. The accuracy results are relatively the same as music classification based on audio feature extraction, so the classification with the extraction of metadata features can continue to be developed if the metadata in the dataset is well managed.","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125954534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerometer Calibration Method Based on Polynomial Curve Fitting
Pub Date: 2020-09-19 | DOI: 10.1109/iSemantic50169.2020.9234292
Arif Nugroho, Agustinus Bimo Gumelar, Eko Mulyanto Yuniarno, M. Purnomo
Measurement is a process for determining an object's quantity in a certain unit. An accelerometer sensor is an inertial measurement unit that can be used to measure the motion state of an object, whether static or dynamic. As a measurement tool, the accelerometer must be reliable and valid in the values it reports, so it must be calibrated before being used to measure an object's motion state. In this paper, we propose a polynomial curve fitting method for calibrating the accelerometer sensor. The sensor works on the Analog to Digital Converter (ADC) principle, converting the tilt of the sensor to a corresponding voltage. The accelerometer is a triple-axis sensor in which all axes have the same input-output relationship. Hence, by collecting data containing a set of sensor tilts and the corresponding voltages, it is possible to generate a mathematical model that maps the tilts of the accelerometer sensor to the corresponding voltages. From the experiment, we generate a fifth-order polynomial model that can be used to predict new values that approximate the ground-truth values. This is verified by measuring the Mean Absolute Error (MAE) between the ground-truth values and the values predicted by the polynomial curve fit; the MAE for each axis is 0.57, indicating that the proposed polynomial curve fitting method is successful for calibrating the accelerometer sensor.
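A minimal sketch of the calibration step for one axis: fit a fifth-order polynomial mapping tilt to voltage and report the MAE between fitted and measured values. The tilt range and the voltage samples below are synthetic and only illustrative, not the paper's data.

```python
# Sketch: fifth-order polynomial curve fit for one accelerometer axis, with MAE.
import numpy as np

def calibrate_axis(tilt_deg, voltage, order=5):
    """Fit voltage = p(tilt) with a polynomial of the given order; return (model, MAE)."""
    coeffs = np.polyfit(tilt_deg, voltage, deg=order)
    model = np.poly1d(coeffs)
    mae = np.mean(np.abs(model(tilt_deg) - voltage))
    return model, mae

# Illustrative (synthetic) data: tilts from -90 to 90 degrees and noisy voltage readings.
tilt = np.linspace(-90, 90, 37)
voltage = (1.65 + 0.8 * np.sin(np.radians(tilt))
           + np.random.default_rng(0).normal(0, 0.01, tilt.size))

model, mae = calibrate_axis(tilt, voltage)
print(f"MAE = {mae:.3f} V; predicted voltage at 30 deg = {model(30.0):.3f} V")
```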
{"title":"Accelerometer Calibration Method Based on Polynomial Curve Fitting","authors":"Arif Nugroho, Agustinus Bimo Gumelar, Eko Mulyanto Yuniarno, M. Purnomo","doi":"10.1109/iSemantic50169.2020.9234292","DOIUrl":"https://doi.org/10.1109/iSemantic50169.2020.9234292","url":null,"abstract":"Measurement is a process to find out the object quantity in a certain unit. Accelerometer sensor is an inertial measurement unit that can be used to measure the motion states of certain objects either static or dynamic. The accelerometer as a measurement tool must be reliable and valid in expressing the value. So, the accelerometer must be calibrated first before being used to measure the motion state of the object. In this paper, we propose the polynomial curve fitting method for calibrating the accelerometer sensor. Basically, this accelerometer sensor works based on the Analog to Digital Converter (ADC) principle where it converts the tilt of the sensor to the corresponding voltage. It should be noted that this accelerometer consists of a triple-axis where all of the axes have the same input-output value. Hence, by collecting the data that contains a number of the tilts of the sensor and the corresponding voltages, it is possible to generate the mathematical model that maps the tilts of the accelerometer sensor to the corresponding voltages. From the experiment, we can generate the five-order polynomials model that can be used to predict the new value that approximates the ground-truth value. It can be proved by measuring the Mean Absolute Error (MAE) score of the polynomial curve fitting between the ground-truth value and the prediction value. As a result, the Mean Absolute Error (MAE) score for each of the axes is 0.57. It indicates that our proposed method based on the polynomial curve fitting has been successfully applied for calibrating the accelerometer sensor.","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132765636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[Front matter]
Pub Date: 2020-09-19 | DOI: 10.1109/isemantic50169.2020.9234212
{"title":"[Front matter]","authors":"","doi":"10.1109/isemantic50169.2020.9234212","DOIUrl":"https://doi.org/10.1109/isemantic50169.2020.9234212","url":null,"abstract":"","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127066034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation Of Feature Selection for Improvement Backpropagation Neural Network in Divorce Predictions
Pub Date: 2020-09-19 | DOI: 10.1109/iSemantic50169.2020.9234297
Manaris Simanjuntak, Muljono Muljono, G. F. Shidik, A. Zainul Fanani
Every domestic life has its own problems, and every household has its own conflicts. The problems and conflicts that arise in household life can be part of the process that matures both partners, but sometimes they trigger divorce. Using a dataset from the UCI Repository, an evaluation and improvement process was carried out on the Backpropagation Neural Network (BPNN) algorithm: a set of parameters was tuned and then validated, so that the model can predict whether a married couple will divorce with sufficiently high accuracy. Several feature selection methods were also applied, and the values or rankings of the most significant features were fed to the Backpropagation Neural Network algorithm, comparing the results of the Gain Ratio, Information Gain, Relief, and Correlation feature selection methods. The model underwent several validation processes and achieved a fairly high accuracy of 99.41% using Relief feature selection.
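A minimal sketch of the ranking-then-classify idea: rank features, keep the top-k, and train a backpropagation network (MLP). Relief is not in scikit-learn, so mutual information stands in for the Information Gain-style ranking here; the file path, k, and network size are assumptions.

```python
# Sketch: feature ranking (mutual information) + top-k selection + backpropagation MLP.
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("divorce.csv", sep=";")           # assumed UCI divorce predictors file
X, y = df.drop(columns=["Class"]).values, df["Class"].values

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),         # keep the 10 highest-ranked features
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
scores = cross_val_score(pipe, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.4f}")
```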
{"title":"Evaluation Of Feature Selection for Improvement Backpropagation Neural Network in Divorce Predictions","authors":"Manaris Simanjuntak, Muljono Muljono, G. F. Shidik, A. Zainul Fanani","doi":"10.1109/iSemantic50169.2020.9234297","DOIUrl":"https://doi.org/10.1109/iSemantic50169.2020.9234297","url":null,"abstract":"In every domestic life always has its own problems, and every household must have their own conflict. Problems and conflicts that come in the life of the household can actually be part of the process to mature each other's partners, but sometimes the problems and conflicts that trigger divorce. Dataset from the UCI Repository, an evaluation and improvement process was carried out on the Backpropagation Neural Netwok(BPNN) algorithm and also performed a set of parameters are set and then validated, able to predict whether a married couple will divorce or not with sufficient results. high. And also do a process for some feature selection, then the value or rating of the most significant features will be processed on the Backpropagation Neural Network Algorithm and compare the results of the Gain Ratio, Information Gain, Relief and Correlation feature selection. This model underwent several validation processes so as to achieve a fairly high accuracy by using the Relief feature selection that is 99.41%.","PeriodicalId":345558,"journal":{"name":"2020 International Seminar on Application for Technology of Information and Communication (iSemantic)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133527902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}