Graph analysis on ATCS data in road network for congestion detection
Pub Date: 2017-11-01. DOI: 10.1109/ICODSE.2017.8285861
Apip Ramdlani, G. Saptawati, Y. Asnar
This research develops a framework for detecting congestion on an urban road network. ATCS (Area Traffic Control System) data from the city of Bandung, containing traffic volumes, are used in the congestion detection process. Traffic flow data are collected by vehicle detectors located at intersections in 15-minute intervals. To compute spatial correlation, graph modelling is used to build an adjacency matrix: taking detector locations as vertices and vehicle flow directions as edges, the graph models the detector locations and flow directions at nine locations in the road network. The adjacency matrix consists of 3 matrices for each time period, describing the order of spatial distances traveled by vehicles between the intersection locations. To calculate spatial correlation, the autocorrelation function and the cross-correlation function, both derived from Pearson's simple correlation, are used to examine the influence of each location in the road network. The spatial correlation results show a seasonal pattern in the autocorrelation, although its magnitude decreases as the time lag increases. This supports the cross-correlation calculation, and it can be concluded that the vehicle volume at each connected location in the road network can be estimated by observing the time series of previous seasonal periods. Overall, graph modeling is needed to simplify the spatial correlation calculation by representing the graph as a matrix. Applying Simpson's rule to the cross-correlation results, congestion at intersection locations can be detected, identifying the most critical locations causing congestion in the road network in each time period.
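The abstract does not give the correlation formulas, so the following is only a minimal sketch, in Python with synthetic data, of how a lagged Pearson cross-correlation between two connected detector locations could be computed; the series names, the 96-period day, and the 2-period delay are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cross_correlation(x, y, lag):
    """Pearson cross-correlation between series x and y at a given time lag.

    A positive lag correlates x[t] with y[t + lag], i.e. it measures how the
    upstream volume x influences the downstream volume y `lag` periods later.
    """
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

# Hypothetical example: two connected detector locations, 15-minute volumes
# over one day (96 periods) with a shared daily (seasonal) pattern.
rng = np.random.default_rng(0)
t = np.arange(96)
upstream = 200 + 80 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 10, 96)
downstream = np.roll(upstream, 2) + rng.normal(0, 10, 96)  # ~30-minute delay

for lag in range(0, 5):
    print(f"lag {lag:2d} -> r = {cross_correlation(upstream, downstream, lag):.3f}")
```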
{"title":"Graph analysis on ATCS data in road network for congestion detection","authors":"Apip Ramdlani, G. Saptawati, Y. Asnar","doi":"10.1109/ICODSE.2017.8285861","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285861","url":null,"abstract":"This research is development a framework for detecting congestion on the urban road network. ATCS (Area Traffic Control System) data in Bandung city with traffic volume are used in congestion detection process. Traffic flow data is collected by vehicles detector located at crossroads within 15 minutes. To compute spatial correlation, graph modelling are used in the adjacency matrix. Assuming the location of the detector as the vertices and the direction of the vehicle as the edge, the graph modeled with vehicle's detector location and the flow direction at nine locations on road nework. The adjacency matrix used consists of 3 matrices in each period of time, which describes the order of spatial distances traveled by vehicle at the intersection location. To calculate spatial correlation, the autocorrelation function and the cross-correlation function which are derived from Pearson's simple correlation is used to looking influence at each location on road network. The result of calculation of spatial correlation, shows the existence of seasonal pattern on the autocorrelation results even though the value scale is getting smaller as it increases time lags. This provides that the process of calculating cross-correlation functions and it can be concluded that the volume of vehicles at each location that are connected in the road network can be known by making observations in the time series of previous seasonal periods. The conclusion that can be formulated that graph modeling is needed to simplify the spatial correlation calculation process by performing the graph representation into a matrix. The Simpson rules on cross-correlation results, can be detected congestion at intersection locations on the road network to find the most critically locations causing congestion on the road network at time periods.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122846115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Content based image retrieval for multi-objects fruits recognition using k-means and k-nearest neighbor
Pub Date: 2017-11-01. DOI: 10.1109/ICODSE.2017.8285855
Erwin, M. Fachrurrozi, Ahmad Fiqih, Bahardiansyah Rua Saputra, Rachmad Algani, Anggina Primanita
The uniqueness of fruits can be observed from their colors and shapes. The fruit recognition process consists of 3 stages, namely feature extraction, clustering, and recognition, and each stage uses a different method. Color features are extracted using the Fuzzy Color Histogram (FCH) method and shape features using the Moment Invariants (MI) method. The clustering process uses the K-Means clustering algorithm, and the recognition process uses the k-NN method. The Content-Based Image Retrieval (CBIR) process uses image features (visual contents) to search images in the database. Experimental results and analysis of the fruit recognition system show an accuracy of 92.5% for single-object images and 90% for multi-object images.
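As a rough illustration of the three-stage pipeline (feature extraction, K-Means clustering, k-NN recognition), the sketch below uses scikit-learn on placeholder feature vectors; the FCH and MI extraction steps are omitted, and the array shapes, class count, and parameter values are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Placeholder feature matrix: each row stands in for a fruit image descriptor,
# e.g. a fuzzy color histogram concatenated with moment-invariant shape features.
rng = np.random.default_rng(42)
features = rng.random((300, 16))          # 300 images, 16-dim descriptors
labels = rng.integers(0, 5, 300)          # 5 hypothetical fruit classes

# Stage 2: cluster the database so a query only searches its nearest cluster.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

# Stage 3: recognize a query image with k-NN restricted to its cluster.
query = rng.random((1, 16))
cluster = kmeans.predict(query)[0]
in_cluster = kmeans.labels_ == cluster
knn = KNeighborsClassifier(n_neighbors=3).fit(features[in_cluster], labels[in_cluster])
print("predicted fruit class:", knn.predict(query)[0])
```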
{"title":"Content based image retrieval for multi-objects fruits recognition using k-means and k-nearest neighbor","authors":"Erwin, M. Fachrurrozi, Ahmad Fiqih, Bahardiansyah Rua Saputra, Rachmad Algani, Anggina Primanita","doi":"10.1109/ICODSE.2017.8285855","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285855","url":null,"abstract":"The uniqueness of fruits can be observed using the colors and shapes. The fruit recognition process consists of 3 stages, namely feature extraction, clustering, and recognition. Each of stage uses different methods. The color extraction process using Fuzzy Color Histogram (FCH) method and shaping extraction using Moment Invariants (MI) method. The clustering process uses the K-Means Clustering Algorithm. The recognition process uses the k-NN method. The Content-Based Image Retrieval (CBIR) process uses image features (visual contents) to perform image searches from the database. Experimental results and analysis of fruit recognition system obtained an accuracy of 92.5% for single-object images and 90% for the multi-object image.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117237940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of landmarc method with adaptive K-NN algorithm on distance determination program in UHF RFID system
Pub Date: 2017-11-01. DOI: 10.1109/ICODSE.2017.8285863
Ahmad Fali Oklilas, Fithri Halim Ahmad, R. F. Malik
This research was conducted to predict the distance between a reader and a tag using a distance determination program, called the "distance program", which applies the LANDMARC method with an adaptive k-NN algorithm. The method works by assigning weighted values in the k-NN algorithm between all reference tags and the tested tag, with k determined by the key reference tags. This research differs from previous work using the same method [5], which used 2 antennas and output tag positions as coordinates; this study uses 1 antenna and outputs the estimated distance between the reader's antenna and the tag. Using 1 antenna is expected to reduce the number of antennas needed in one environment when searching for tags by distance, while keeping accuracy good enough that the performance of the LANDMARC method for determining the distance between the reader's antenna and the tag is not degraded. The test was performed on 4 tracking tags, at distances of 1.4 meters, 1.9 meters, 2.8 meters, and 3.35 meters respectively. Data retrieval was done 5 times for each tracking tag. Two experiments were applied. The first experiment applies 2 test scenarios: in the first there are no objects around the tag, and in the second there are objects around the tag. The second experiment calculates the difference in percentage error between the results of the two scenarios. The first experiment showed that scenario 1 produces average percentage errors of 1.280%, 1.452%, 2.107%, and 2.470% for the respective tracking tags, while scenario 2 produces larger errors, with averages of 3.687%, 4.225%, 4.466%, and 7.430%. The second experiment showed that the scenario 2 results have larger percentage errors than the scenario 1 results because of the objects surrounding the tracking tags. The average difference in percentage error between the two scenarios is 3.125%.
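The abstract does not spell out the weighting, so the sketch below follows the standard LANDMARC weighting (weights proportional to 1/E^2 over the k nearest reference tags) adapted to a single antenna and a distance output; the fixed k, the RSSI values, and the reference distances are hypothetical, and the adaptive selection of k described in the paper is not reproduced.

```python
import numpy as np

def landmarc_distance(rssi_tracking, rssi_refs, ref_distances, k=3):
    """Estimate the reader-to-tag distance from RSSI readings (LANDMARC style).

    rssi_tracking : RSSI of the tracking tag at the single antenna.
    rssi_refs     : RSSI of each reference tag at the same antenna.
    ref_distances : known distance of each reference tag from the antenna.
    The k reference tags with the most similar RSSI get weights proportional
    to 1/E^2, and the estimate is the weighted sum of their known distances.
    """
    errors = np.abs(rssi_refs - rssi_tracking)   # signal error E_i per reference tag
    nearest = np.argsort(errors)[:k]
    weights = 1.0 / (errors[nearest] ** 2 + 1e-9)
    weights /= weights.sum()
    return float(np.dot(weights, ref_distances[nearest]))

# Hypothetical readings (dBm) for reference tags placed at known distances.
rssi_refs = np.array([-48.0, -53.0, -57.0, -61.0, -66.0])
ref_distances = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
print(landmarc_distance(rssi_tracking=-55.0, rssi_refs=rssi_refs,
                        ref_distances=ref_distances))
```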
{"title":"Implementation of landmarc method with adaptive K-NN algorithm on distance determination program in UHF RFID system","authors":"Ahmad Fali Oklilas, Fithri Halim Ahmad, R. F. Malik","doi":"10.1109/ICODSE.2017.8285863","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285863","url":null,"abstract":"This research was conducted to find the distance prediction between reader and tag using distance determinant program that called “distance program” which applied LANDMARC method with adaptive k-NN algorithm. This method works by assigning a weighted value to k-NN algorithm between all reference tags and tested tag with k determined by key reference tag. This research is different from the research using the same method before [5] which used 2 antennas and has the position of tag in form of coordinates as the output, this study uses 1 antenna and has the distance estimation between reader's antenna and tag as the output. The use of 1 antenna is expected to increase the efficiency of the number of antennas used in one environment to search tags by distance, but still has a good accuracy, in order to not to reduce the performance of the LANDMARC method to get distance determination between reader's antenna and tag. The test was performed on 4 tracking tags, with a distance of 1.4 meters, 1.9 meters, 2.8 meters, and 3.35 meters respectively. Data retrieval is done 5 times on each tracking tag. There are 2 experiment that are applied. The first experiment is to apply 2 test scenarios, first scenario is when there is no object around the tag and second is when there are object around the tag. The second experiment is to calculate the difference of percentage error from test result from both scenarios. The first experimental result showed that the scenario 1 can produce result with the average percentage error of each tracking tag is 1.280%, 1.452%, 2.107%, and 2.470%. While scenario 2 can produce larger percentage error, with the average percentage error for each tag is 3.687%, 4.225%, 4.466%, and 7.430%. The second experimental result showed that the scenario 2 results can have larger percentage error than the scenario 1 results because of the surrounding objects near the tracking tags. The average difference of percentage error between two scenarios is 3.125%.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"288 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115213924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid attribute and personality based recommender system for book recommendation
Pub Date: 2017-11-01. DOI: 10.1109/ICODSE.2017.8285874
'Adli Ihsan Hariadi, Dade Nurjanah
In recent years, with the rapid increase in the number of books, finding relevant books has become a problem. People may need their peers' opinions to complete this task, but relevant books can only be found if other users or peers share the same interests; otherwise, they will never be found. Recommender systems can be a solution to this problem: they find relevant items based on other users' experience. Although research on recommender systems is increasing, little of it considers user personality, even though personal preferences are very important these days. This paper discusses our research on a hybrid method that combines attribute-based and user personality-based methods for a book recommender system. The attribute-based method has been implemented previously. In our research, we have implemented the MSV-MSL (Most Similar Visited Material to the Most Similar Learner) method, since it is the best among hybrid attribute-based methods. The personality factor is used to find the similarity between users when creating neighborhood relationships. The method is tested on the Book-Crossing and Amazon Review (book category) datasets. Our experiments show that the combined method that considers user personality gives better results than the method without user personality on the Book-Crossing dataset. In contrast, it gives lower performance on the Amazon Review dataset. It can be concluded that considering user personality can give better results under certain conditions, depending on the dataset itself and the usage proportion.
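How the attribute-based and personality-based similarities are combined is not detailed in the abstract; the minimal sketch below assumes a weighted linear blend, where `alpha` stands in for the usage proportion mentioned at the end of the abstract and the similarity matrices are random placeholders.

```python
import numpy as np

def combined_similarity(rating_sim, personality_sim, alpha=0.5):
    """Blend attribute/rating-based and personality-based user similarity.

    alpha is the assumed proportion of the personality component; the abstract
    notes that results depend on this proportion and on the dataset.
    """
    return (1 - alpha) * rating_sim + alpha * personality_sim

def top_neighbors(user, rating_sims, personality_sims, alpha=0.5, k=5):
    """Return the k most similar users to `user` under the blended measure."""
    sims = combined_similarity(rating_sims[user], personality_sims[user], alpha)
    sims[user] = -np.inf                      # exclude the user themself
    return np.argsort(sims)[::-1][:k]

# Hypothetical precomputed similarity matrices for 6 users.
rng = np.random.default_rng(1)
rating_sims = rng.random((6, 6))
personality_sims = rng.random((6, 6))
print(top_neighbors(0, rating_sims, personality_sims, alpha=0.3, k=3))
```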
{"title":"Hybrid attribute and personality based recommender system for book recommendation","authors":"'Adli Ihsan Hariadi, Dade Nurjanah","doi":"10.1109/ICODSE.2017.8285874","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285874","url":null,"abstract":"In recent years, with the rapid increases of books, finding relevant books has been a problem. For that, people might need their peers' opinion to complete this task. The problem is that relevant books can be gained only if there are other users or peers have same interests with them. Otherwise, they will never get relevant books. Recommender systems can be a solution for that problem. They work on finding relevant items based on other users' experience. Although research on recommender system increases, there is still not much research that considers user personality in recommender systems, even though personal preferences are really important these days. This paper discusses our research on a hybrid-based method that combines attribute-based and user personality-based methods for book recommender system. The attribute-based method has been implemented previously. In our research, we have implemented the MSV-MSL (Most Similar Visited Material to the Most Similar Learner) method, since it is the best method among hybrid attribute-based methods. The personality factor is used to find the similarity between users when creating neighborhood relationships. The method is tested using Book-crossing and Amazon Review on book category datasets. Our experiment shows that the combined method that considers user personality gives a better result than those without user personality on Book-crossing dataset. In contrary, it resulted in a lower performance on Amazon Review dataset. It can be concluded that user personality consideration can give a better result in a certain condition depending on the dataset itself and the usage proportion.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130201827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The grouping of facial images using agglomerative hierarchical clustering to improve the CBIR based face recognition system
Pub Date: 2017-11-01. DOI: 10.1109/ICODSE.2017.8285868
M. Fachrurrozi, Clara Fin Badillah, Saparudin, Junia Erlina, Erwin, Mardiana, Auzan Lazuardi
Face images can be grouped automatically using the Agglomerative Hierarchical Clustering (AHC) algorithm. The pre-processing performed is feature extraction to obtain the face image feature vectors. The AHC algorithm performs grouping using the average, single, and complete linkage methods. Grouping face images can help improve the search speed of a CBIR-based face recognition system. Cluster validation uses the Cophenetic Correlation Coefficient (CCC). The test results show that the complete linkage method has a higher CCC value than the other methods, equal to 0.904938, which is 0.127558 higher than the single linkage method and 0.02291 higher than the average linkage method. The face recognition system that uses clustering as pre-processing performs face recognition faster than the system without it.
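A minimal sketch of the comparison described here, using SciPy's hierarchical clustering and cophenetic correlation on placeholder face feature vectors (the actual feature extraction is not reproduced):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

# Placeholder face feature vectors (one row per face image); in the paper these
# would come from the feature-extraction pre-processing step.
rng = np.random.default_rng(7)
faces = rng.random((50, 32))

dists = pdist(faces)                       # pairwise distances between images
for method in ("single", "average", "complete"):
    Z = linkage(faces, method=method)      # agglomerative hierarchical clustering
    ccc, _ = cophenet(Z, dists)            # cophenetic correlation coefficient
    print(f"{method:8s} CCC = {ccc:.4f}")
```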
{"title":"The grouping of facial images using agglomerative hierarchical clustering to improve the CBIR based face recognition system","authors":"M. Fachrurrozi, Clara Fin Badillah, Saparudin, Junia Erlina, Erwin, Mardiana, Auzan Lazuardi","doi":"10.1109/ICODSE.2017.8285868","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285868","url":null,"abstract":"The grouping of face images can be done automatically using the Agglomerative Hierarchical Clustering (AHC) algorithm. The pre-processing performed is feature extraction in getting the face image vector feature. The AHC algorithm performs grouping using linkage average, single, and complete method. Grouping face images can help improve the search speed of the CBIR based face recognition system. The cluster validation test uses the value of Cophenetic Correlation Coefficien (CCC). From the test results, it is known that the complete method has a higher CCC value than other methods, that is equal to 0.904938 with the difference value of 0.127558 on single method and the difference of 0.02291 on the average method. The face recognition system using pre-processing clustering can perform faster face recognition better than without pre-processing clustering.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127538547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A classification of sequential patterns for numerical and time series multiple source data — A preliminary application on extreme weather prediction
Pub Date: 2017-11-01. DOI: 10.1109/ICODSE.2017.8285845
Regina Yulia Yasmin, A. E. Sakya, Untung Merdijanto
Classification based on sequential patterns has become a very important method in data mining. It is useful for making predictions in alert warning systems and for strategic decisions. Moreover, the need to improve the speed of sequential pattern mining is also increasing. However, previous research in this area uses categorical data as input, and there is a need to process numerical data and to classify the sequential patterns found in the data. Numerical data are difficult to mine with high accuracy, and the numerical data to be mined consist of many observational parameters. This study proposes a framework to overcome these problems. The framework categorizes the data during preprocessing and prepares it as input for sequential pattern mining and the subsequent classification process. The framework is intended to improve classification speed and scalability while maintaining classification accuracy.
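As a sketch of the preprocessing idea, numerical observations can be binned into categorical symbols before sequential pattern mining, for example with pandas; the weather parameters and bin boundaries below are hypothetical and not taken from the paper.

```python
import pandas as pd

# Hypothetical numerical observations from one weather station (hourly).
obs = pd.DataFrame({
    "rainfall_mm": [0.0, 0.2, 5.1, 22.4, 60.3],
    "wind_ms":     [1.2, 3.5, 7.9, 12.1, 20.4],
})

# Preprocessing step of the framework: turn numerical values into categorical
# symbols so a sequential pattern miner (and classifier) can consume them.
obs["rain_cat"] = pd.cut(obs["rainfall_mm"], bins=[-0.1, 1, 20, 50, 1000],
                         labels=["none", "light", "heavy", "extreme"])
obs["wind_cat"] = pd.cut(obs["wind_ms"], bins=[0, 5, 10, 15, 100],
                         labels=["calm", "breeze", "strong", "storm"])

# Each row now yields an itemset; consecutive rows form the input sequence.
print(list(zip(obs["rain_cat"], obs["wind_cat"])))
```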
{"title":"A classification of sequential patterns for numerical and time series multiple source data — A preliminary application on extreme weather prediction","authors":"Regina Yulia Yasmin, A. E. Sakya, Untung Merdijanto","doi":"10.1109/ICODSE.2017.8285845","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285845","url":null,"abstract":"Classification based on sequential patterns has become very important method in data mining. It is useful to make predictions for alert warning system and strategic decision. Moreover the necessity to improve the speed performance of sequential pattern mining also increases. However, previous researches on this area uses categorical data as input. There is necessity to process numerical data and classify sequential patterns found from the data. High accuracy numerical data are difficult to mine. Moreover, numerical data to be mined consist of many observational parameter data. This study proposes framework to overcome the problem. The framework proposes to categorize the data in preprocessing and prepare it to be ready as input for sequential pattern mining and the subsequent classification process. The framework will improve classification speed, scalability and also maintain the classification accuracy.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133289600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of similarity measures in HSV quantization for CBIR
Pub Date: 2017-11-01. DOI: 10.1109/ICODSE.2017.8285854
Jasman Pardede, B. Sitohang, Saiful Akbar, M. L. Khodra
Researchers have implemented various similarity measures for CBIR using HSV quantization. The similarity measures implemented in this study are Euclidean Distance, Cramer-von Mises Divergence, Manhattan Distance, Cosine Similarity, Chi-Square Dissimilarity, Jeffrey Divergence, Pearson Correlation Coefficient, and Mahalanobis Distance. The purpose of the study is to measure the image retrieval performance of a CBIR system using HSV quantization for each of the similarity measures. The similarity measures are evaluated based on precision, recall, and F-measure values obtained from tests performed on the Wang dataset. The similarity measures were evaluated on each of the categories (Africa, Beaches, Building, Bus, Dinosaur, Elephant, Flower, Horses, Mountain, and Food), each containing 100 images. The test results show that the highest precision value is 100%, obtained with Jeffrey Divergence on the Dinosaur category. The best average precision over all categories is obtained with Jeffrey Divergence, i.e. 87.298%. In general, the best average precision is on the Dinosaur category (for Euclidean Distance, Manhattan Distance, Cosine Similarity, Chi-Square Dissimilarity, Jeffrey Divergence, and Pearson Correlation Coefficient); for Cramer-von Mises Divergence it is on the Flower category, and for Mahalanobis Distance it is on the Bus category. The highest average recall value is 92%, on the Horses category with Cosine Similarity. The best average recall over all categories is obtained with Manhattan Distance, i.e. 38.700%. In general, the best average recall is on the Horses category (for Cramer-von Mises Divergence, Manhattan Distance, Cosine Similarity, Chi-Square Dissimilarity, Jeffrey Divergence, Pearson Correlation Coefficient, and Mahalanobis Distance); for Euclidean Distance it is on the Africa category. The highest F-measure value is 87.255%, on the Horses category with Cosine Similarity, and the highest F-measure is always on the Horses category. In general the highest F-measure values are obtained with Manhattan Distance (Africa, Beaches, Building, Bus, Dinosaur, Elephant, Flower, Mountain, and Food), while on the Horses category the highest F-measure is obtained with Cosine Similarity.
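For reference, here is a minimal sketch of a few of the listed measures applied to L1-normalized, quantized HSV histograms; the bin count and the exact Jeffrey Divergence form are assumptions, since the paper's formulas are not given in the abstract.

```python
import numpy as np

def euclidean(h1, h2):
    return np.sqrt(np.sum((h1 - h2) ** 2))

def manhattan(h1, h2):
    return np.sum(np.abs(h1 - h2))

def chi_square(h1, h2, eps=1e-12):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def jeffrey_divergence(h1, h2, eps=1e-12):
    m = 0.5 * (h1 + h2) + eps
    return np.sum(h1 * np.log((h1 + eps) / m) + h2 * np.log((h2 + eps) / m))

def cosine_similarity(h1, h2):
    return np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))

# Hypothetical quantized HSV histograms (e.g. 8x3x3 = 72 bins), L1-normalized.
rng = np.random.default_rng(3)
query_hist = rng.random(72); query_hist /= query_hist.sum()
db_hist = rng.random(72); db_hist /= db_hist.sum()

for name, fn in [("Euclidean", euclidean), ("Manhattan", manhattan),
                 ("Chi-square", chi_square), ("Jeffrey", jeffrey_divergence),
                 ("Cosine", cosine_similarity)]:
    print(f"{name:10s} {fn(query_hist, db_hist):.4f}")
```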
{"title":"Comparison of similarity measures in HSV quantization for CBIR","authors":"Jasman Pardede, B. Sitohang, Saiful Akbar, M. L. Khodra","doi":"10.1109/ICODSE.2017.8285854","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285854","url":null,"abstract":"Researchers implemented various similarity measure for CBIR using HSV Quantization. Implemented similarity measures on this study is Euclidean Distance, Cramer-von Mises Divergence, Manhattan Distance, Cosine Similarity, Chi-Square Dissimilarity, Jeffrey Divergence, Pearson Correlation Coefficient, and Mahalanobis Distance. The purpose of study is to measure the performance of image retrieval of the CBIR system using HSV Quantization for each of the similarity measures. The performance of similarity measures are evaluated based on precision, recall, and F-measure value that obtained from test results performed on the Wang dataset. Similarity measures were performed on each of the categories (Africa, Beaches, Building, Bus, Dinosaur, Elephant, Flower, Horses, Mountain, and Food) that has 100 images of each its category. The test results showed that the highest precision valued are 100% provided with Jeffrey Divergence on Dinosaur category. The best average precision value of all categories is provided with Jeffrey Divergence, i.e. 87.298%. In generally, the best average precision value is Dinosaur category (Euclidean Distance, Manhattan Distance, Cosine Similarity, Chi-Square Dissimilarity, Jeffrey Divergence, and Pearson Correlation Coefficient). The next of average precision value is on Flower category for Cramer-von Mises Divergence, and the last category is on Bus category that provided with Mahalanobis Distance. The highest average recall valued is 92% on Horses category that established to Cosine Similarity. The best average recall valued for all categories is on Manhattan Distance, i.e. 38.700%. In generally, the best average recall valued is on Horses category that provided with Cramer-von Mises Divergence, Manhattan Distance, Cosine Similarity, Chi-Square Dissimilarity, Jeffrey Divergence, Pearson Correlation Coefficient, and Mahalanobis Distance. The best average recall value of the Euclidean Distance is Africa category. The highest F-measure value is 87.255% on Horses category provided with Cosine Similarity. The experiment result showed that the highest F-measure valued is always on Horses category. The highest F-measure value in general provided with Manhattan Distance (Africa, Beaches, Building, Bus, Dinosaur, Elephant, Flower, Mountain, and Food), while the highest F-measure valued of Horses category provided with Cosine Similarity.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133757029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of regular expression (regex) on knowledge management system
Pub Date: 2017-11-01. DOI: 10.1109/ICODSE.2017.8285877
Ken Dhita Tania, Bayu Adhi Tama
Previous research on string matching techniques for sharing explicit knowledge has shown great success. However, their implementation in knowledge management systems is still underexplored. The aim of this paper is to propose an implementation of regular expression (regex) techniques that supports all processes in a knowledge management system and produces better accuracy when searching knowledge within an organization. A web-based application prototype of the regex approach is built and several experiments are performed to verify the correctness of our implementation. The results show that regex performs better than traditional SQL for knowledge searching and querying.
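A minimal sketch of the idea, with hypothetical knowledge-base entries: a regex query can express variations (case, spelling, word order) that a plain SQL LIKE pattern cannot.

```python
import re

# Hypothetical knowledge-base entries stored as plain text.
documents = [
    "SOP-12: Backup the payroll database every Friday at 17:00.",
    "FAQ: How do I reset my e-mail password?",
    "SOP-07: Restore procedure for the payroll DB (see backup policy).",
]

# A regex query can match "backup" or "back up", case-insensitively, whenever it
# appears in the same entry as "payroll" - something LIKE '%backup%' cannot do.
pattern = re.compile(r"payroll.*\bback\s?up\b|\bback\s?up\b.*payroll", re.IGNORECASE)

hits = [doc for doc in documents if pattern.search(doc)]
print(hits)   # matches the two SOP entries, not the unrelated FAQ
```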
{"title":"Implementation of regular expression (regex) on knowledge management system","authors":"Ken Dhita Tania, Bayu Adhi Tama","doi":"10.1109/ICODSE.2017.8285877","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285877","url":null,"abstract":"Hitherto, previous research of string matching techniques in knowledge sharing of explicit knowledge have shown a great success. However, their implementation in a knowledge management system is still underexplored. The aim of this paper is to propose an implementation of regular expression (regex) techniques for supporting all processes in knowledge management systems and producing a better accuracy of searching knowledge within an organization. A web-based application prototype of regex is built and several experiments are performed in order to prove the correctness of our implementation. It is obvious that regex performs better than traditional SQL concerning with knowledge searching/query.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121321900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strategic intelligence model in supporting brand equity assessment
Pub Date: 2017-11-01. DOI: 10.1109/ICODSE.2017.8285867
Agung Aldhiyat, M. L. Khodra
This paper investigates how information can be a valuable resource for an enterprise facing the uncertainty of its changing environment, especially when determining branding strategies. The strategic intelligence model is expected to support management in establishing an effective brand positioning strategy, achieving strategic objectives based on the existing condition of the brand equity. This paper analyzes Facebook comments to build a strategic intelligence model for telecommunication provider brands: comments are analyzed through a number of text-processing steps to find out whether the topic of a comment matches the brand image criteria, i.e. price, ability to serve, characteristic, and feature. The paper employs Naïve Bayes classifiers and DBSCAN clustering to classify the Facebook comments according to the brand equity criteria, and achieves an F-measure of 0.7684%.
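A minimal sketch of the classification and clustering steps, using scikit-learn on invented English stand-in comments; the feature representation (TF-IDF), the class labels, and the DBSCAN parameters are assumptions, not the paper's actual configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.cluster import DBSCAN

# Hypothetical labeled Facebook comments and the brand-image criteria used as
# classes: price, service (ability to serve), characteristic, feature.
comments = [
    "the data package is too expensive",
    "great promo price this month",
    "customer service never answers my calls",
    "the 4G coverage in my area is excellent",
    "call center staff were very helpful",
    "their new roaming feature is useful",
]
labels = ["price", "price", "service", "characteristic", "service", "feature"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(comments)

# Naive Bayes classifier over the brand-image criteria.
clf = MultinomialNB().fit(X, labels)
print(clf.predict(vectorizer.transform(["why is the tariff so high"])))

# DBSCAN groups comments whose TF-IDF vectors are close in cosine distance;
# comments that do not join any cluster are labeled -1.
clusters = DBSCAN(eps=0.9, min_samples=2, metric="cosine").fit_predict(X)
print(clusters)
```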
{"title":"Strategic intelligence model in supporting brand equity assessment","authors":"Agung Aldhiyat, M. L. Khodra","doi":"10.1109/ICODSE.2017.8285867","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285867","url":null,"abstract":"This paper investigates how information can be valuable resource for enterprise to face uncertainty of it's environmental change, especially to determine the branding strategies. Strategic intelligence model is a model that is expected to support management in establishing a brand positioning strategy effectively. It is effective in achieving strategic objectives based on existing condition of the brand equity. This paper analyzes facebook comments to build strategic intelligence model of telecommunication provider brands, comments are analyzed by performing a number of word processing, find out the topic of the comment whether it matches the brand image criteria, i.e. price, ability to serve, characteristic and feature. This paper employs Naïve Bayes classifiers and DBSCAN clustering to help classify the facebook comments based brand equity criteria, and achieved F-Measure of 0.7684%.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126723446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}