Pub Date: 2017-12-06 | DOI: 10.1109/ICODSE.2017.8285875
Arif Wijonarko, Dade Nurjanah, D. S. Kusumo
Social Tagging Systems (STS) are highly popular web applications: millions of people join them and actively share content. This enormous number of users floods STS with content and tags in an unrestrained way, threatening the systems' capability for relevant content retrieval and information sharing. Recommender Systems (RS) are a proven method for overcoming the information overload problem by filtering relevant content from non-relevant content. Besides managing folksonomy information, STS also handle the social network information of their users. Both kinds of information can be used by an RS to generate good recommendations for its users. This work proposes an enhancement of an existing hybrid recommender system that incorporates social network information into the input of the hybrid recommender. The recommendation generation process combines Random Walk with Restart (RWR) with Content-Based Filtering (CBF) and Collaborative Filtering (CF). Parameters were introduced to control the weight contribution of each method. A comprehensive experiment on real-world open datasets in two areas, social bookmarking (Delicious.com) and music sharing (Last.fm), tests the proposed hybrid recommender system. The outcomes show an improvement in accuracy over existing methods: the proposed hybrid achieves 24.4% more than RWR on the Delicious dataset and 53.85% more than CBF on the Last.fm dataset.
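To illustrate the graph-ranking component named in the abstract, here is a minimal sketch of Random Walk with Restart on a tiny user-item graph. The graph, restart probability, and convergence threshold are invented for demonstration; they are not values or code from the paper.

```python
# Random Walk with Restart (RWR): iterate p = (1-c) * W p + c * q until
# convergence, where q concentrates all restart mass on the seed node.
def rwr(adj, seed, restart=0.15, tol=1e-8, max_iter=1000):
    """Return steady-state visit probabilities for a walk restarting at `seed`."""
    n = len(adj)
    out_deg = [sum(row) for row in adj]          # out-degree of each node
    p = [1.0 / n] * n                            # initial distribution
    q = [1.0 if i == seed else 0.0 for i in range(n)]  # restart vector
    for _ in range(max_iter):
        new_p = [0.0] * n
        for j in range(n):
            if out_deg[j] == 0:
                continue
            for i in range(n):
                # walk step: spread p[j] uniformly over j's out-edges
                new_p[i] += (1 - restart) * adj[j][i] / out_deg[j] * p[j]
        new_p = [new_p[i] + restart * q[i] for i in range(n)]
        if sum(abs(new_p[i] - p[i]) for i in range(n)) < tol:
            return new_p
        p = new_p
    return p

# Hypothetical graph: nodes 0-1 are users, nodes 2-4 are items; an edge
# marks a tagging or social link. Scores rank items for user 0.
graph = [
    [0, 1, 1, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 0, 0, 0],
]
scores = rwr(graph, seed=0)
```

Nodes close to the seed accumulate more probability mass, which is what lets RWR exploit social links as well as tagging links in a single ranking.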
Title: "Hybrid recommender system using random walk with restart for social tagging system" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285846
Asanilta Fahda, A. Purwarianti
Spelling and grammar checkers are widely used tools that aim to help detect and correct various writing errors. However, there are currently no proofreading systems capable of checking both spelling and grammar errors in Indonesian text. This paper proposes an Indonesian spelling and grammar checker prototype that uses a combination of rules and statistical methods. The rule matcher module currently uses 38 rules that detect, correct, and explain common errors in punctuation, word choice, and spelling. The spelling checker module examines every word using a dictionary trie to find misspellings, with Damerau-Levenshtein distance neighbors as correction candidates; morphological analysis is also added for certain word forms. A bigram/co-occurrence Hidden Markov Model is used for ranking and selecting the candidates. The grammar checker uses a trigram language model over tokens, POS tags, or phrase chunks to identify sentences with incorrect structure. Experimentally, the co-occurrence HMM with an emission probability weight coefficient of 0.95 was selected as the most suitable model for the spelling checker; for the grammar checker, the phrase chunk model, normalized by chunk length with a threshold score of −0.4, gave the best results. Document-level evaluation of the system showed an overall accuracy of 83.18%. The prototype is implemented as a web application.
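As a reference for the candidate-generation step, below is a sketch of the Damerau-Levenshtein distance (the standard restricted, optimal-string-alignment variant). This is textbook code for the named metric, not the paper's implementation; the sample words are made up.

```python
# Damerau-Levenshtein distance: minimum number of insertions, deletions,
# substitutions, and adjacent transpositions turning string a into b.
def damerau_levenshtein(a, b):
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i                       # delete all of a's prefix
    for j in range(len(b) + 1):
        d[0][j] = j                       # insert all of b's prefix
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

# A transposition typo is one edit away, so it survives a distance-1
# candidate filter around dictionary words.
dist_swap = damerau_levenshtein("makan", "makna")
dist_far = damerau_levenshtein("kitten", "sitting")
```

Counting a transposition as a single edit is what makes this metric a better candidate filter for typing errors than plain Levenshtein distance.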
Title: "A statistical and rule-based spelling and grammar checker for Indonesian text" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285892
Robinson Sitepu, F. Puspita, Shintya Apriliyani
The development of the internet in this era of globalization has been rapid, and the need for internet access has become virtually unlimited. Utility functions, as one measurement of internet usage, are usually associated with the level of satisfaction a user gets from an information service, specifically in relation to maximizing profit. Three internet pricing schemes are considered: flat fee, usage-based, and two-part tariff, each applied with a Cobb-Douglas utility function that accounts for monitoring cost and marginal cost. The pricing schemes are formulated as non-linear optimization problems and solved with LINGO 13.0 to obtain optimal solutions. When the marginal and monitoring costs of the Cobb-Douglas utility function are considered, the optimal solution for each service offered is obtained with either the usage-based or the two-part tariff pricing scheme, compared with the flat-fee scheme. The two-part tariff scheme is the best option for providers: the results show that by applying it, providers can maximize revenue for both homogeneous and heterogeneous consumers.
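For readers unfamiliar with the utility function named above, a hypothetical Cobb-Douglas evaluation is sketched below. The quantities and exponents are made-up example values, not the paper's model parameters.

```python
# Cobb-Douglas utility: U(x1, x2) = x1**alpha * x2**beta. With exponents
# below 1, each service exhibits diminishing marginal utility, which is
# why the optimal pricing problem is non-linear.
def cobb_douglas(x1, x2, alpha, beta):
    """Utility from consuming quantities x1 and x2 of two services."""
    return (x1 ** alpha) * (x2 ** beta)

u1 = cobb_douglas(4, 9, 0.5, 0.5)   # 2 * 3 = 6.0
u2 = cobb_douglas(8, 9, 0.5, 0.5)   # doubling x1 raises utility by < 2x
```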
Title: "Utility function based-mixed integer nonlinear programming (MINLP) problem model of information service pricing schemes" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285870
Taufiqurrahman, Saiful Akbar
In spite of their simplicity, Cellular Automata (CA) have great potential for modeling various natural phenomena. CA receive widespread interest among researchers from diverse fields who learn and use them in their application domains. Researchers usually develop both the CA model and the platform for simulating it for each specific problem domain. This is inefficient, and some researchers do not know how to code the simulation program. Researchers should be able to focus on developing the model without having to worry about developing the platform that simulates it. In this research, we develop a tool for modeling and simulating CA that supports the development of a variety of CA-based models. As a starting point, the tool is applied to modeling and simulating fire propagation. A literature study of related fire propagation models was conducted to extract the generic aspects of those models, which were then implemented in a software artifact as a generic tool. Finally, the tool was tested by implementing rules from the related research. The test results show that, with the same construction, the tool can implement various rules and fire propagation models.
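A toy fire-propagation automaton illustrates the kind of model such a tool must express. The grid, cell states, and deterministic spread-to-4-neighbors rule below are invented for demonstration; they are not the rules evaluated in the paper.

```python
# Minimal CA fire model: a TREE ignites if any 4-neighbor is on FIRE;
# a FIRE cell burns out to EMPTY after one step.
EMPTY, TREE, FIRE = 0, 1, 2

def step(grid):
    """Apply one synchronous CA update and return the new grid."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == TREE:
                neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if any(0 <= i < rows and 0 <= j < cols and grid[i][j] == FIRE
                       for i, j in neighbors):
                    new[r][c] = FIRE
            elif grid[r][c] == FIRE:
                new[r][c] = EMPTY  # burned out
    return new

forest = [[TREE] * 5 for _ in range(5)]
forest[2][2] = FIRE          # ignition point in the center
after_one = step(forest)     # fire spreads to the four adjacent trees
```

A generic tool would let the update rule (here hard-coded in `step`) be swapped out while the grid handling and simulation loop stay fixed.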
Title: "A generic tool for modeling and simulation of fire propagation using cellular automata" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285869
Dwina Satrinia, G. Saptawati
Traffic congestion prediction is one solution for mitigating congestion problems. In this paper, we propose a system that predicts traffic speed with the help of GPS data from taxi trip histories in Bandung. The GPS data from taxi trips in Bandung contain no speed values, and the locations detected by the GPS devices are sometimes inaccurate, so additional steps are required in the data preprocessing phase. We propose map matching with topological information in the preprocessing phase; map matching produces new trajectories that correspond to the road network. From these new trajectories, we calculate the speed of each road segment. To predict future traffic speed, we use Support Vector Regression (SVR). The results of this study indicate that map matching helps to obtain more accurate traffic speeds and that SVR performs well in predicting them.
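Since the raw fixes carry position and time but no speed, the preprocessing must derive speed from consecutive points. The sketch below does this with the standard haversine formula; the fix format `(lat, lon, unix_time)` and the sample coordinates near Bandung are assumptions for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_kmh(fix_a, fix_b):
    """Average speed between two (lat, lon, unix_time) fixes, in km/h."""
    (lat1, lon1, t1), (lat2, lon2, t2) = fix_a, fix_b
    return haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1) * 3.6

a = (-6.9147, 107.6098, 0)    # near Bandung city center
b = (-6.9147, 107.6190, 60)   # roughly 1 km east, one minute later
v = speed_kmh(a, b)           # about 60 km/h
```

In the full pipeline these per-pair speeds would be computed only after map matching, so the distance follows the road segment rather than the straight line between noisy fixes.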
Title: "Traffic speed prediction from GPS data of taxi trip using support vector regression" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285856
S. A. Barnard, S. M. Chung, Vincent A. Schmidt
Although Twitter has been around for more than ten years, crisis management agencies and first response personnel are not able to fully use the information this type of data provides during a crisis or a natural disaster. This paper presents a tool that automatically clusters geotagged text data based on their content, rather than by time and location only, and displays the clusters and their locations on a map. It allows at-a-glance information to be displayed throughout the evolution of a crisis. For accurate clustering, we used the silhouette coefficient to determine the number of clusters automatically. To visualize the topics (i.e., frequent words) within each cluster, we used word clouds. Our experiments demonstrated that the tool is very scalable. It could easily be used by first response and official management personnel to quickly determine when a crisis is occurring, where it is concentrated, and what resources to best deploy to stabilize the situation.
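The silhouette coefficient mentioned above can be sketched as follows. This is the standard definition applied to one-dimensional toy points with made-up clusterings, not the tool's own code (which would operate on text feature vectors).

```python
# Silhouette: for each point, a = mean distance to its own cluster,
# b = mean distance to the nearest other cluster; score = (b - a) / max(a, b).
# Averaging over all points gives a quality score in [-1, 1]; the number of
# clusters maximizing it is chosen automatically.
def silhouette(points, labels):
    clusters = {}
    for idx, lab in enumerate(labels):
        clusters.setdefault(lab, []).append(idx)
    scores = []
    for i, lab in enumerate(labels):
        same = [j for j in clusters[lab] if j != i]
        if not same:
            scores.append(0.0)  # singleton cluster convention
            continue
        a = sum(abs(points[i] - points[j]) for j in same) / len(same)
        b = min(sum(abs(points[i] - points[j]) for j in members) / len(members)
                for other, members in clusters.items() if other != lab)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [1.0, 1.2, 1.1, 8.0, 8.3, 8.1]
good = silhouette(points, [0, 0, 0, 1, 1, 1])  # matches the two real groups
bad = silhouette(points, [0, 1, 0, 1, 0, 1])   # groups scrambled
```

A well-matched clustering scores near 1, while a scrambled one scores near or below 0, which is the signal used to pick the cluster count.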
Title: "Content-based clustering and visualization of social media text messages" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285847
Rifkie Primartha, Bayu Adhi Tama
Intruders have become more and more sophisticated; thus a deterrence mechanism such as an intrusion detection system (IDS) is pivotal in information security management. An IDS aims to capture and repel malignant activities in the network before they can cause harmful destruction. An IDS relies on a well-trained classification model that can identify the presence of attacks effectively. This paper evaluates the performance of an IDS built on a random forest classifier with respect to two performance measures: accuracy and false alarm rate. Three public intrusion datasets, i.e. NSL-KDD, UNSW-NB15, and GPRS, are employed in the experiment. Furthermore, different ensemble tree sizes are considered, while the other best learning parameters are obtained using a grid search. Under k-fold cross-validation, our experimental results demonstrate the superiority of the random forest model for IDS: it significantly outperforms a similar ensemble (random tree + naive Bayes tree) as well as single classifiers (naive Bayes and a neural network).
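The two evaluation measures used in the comparison can be computed from a confusion matrix as below. The counts are invented example numbers, not results from the paper.

```python
# Accuracy and false alarm rate from binary confusion-matrix counts,
# where "positive" means attack traffic.
def accuracy(tp, fp, tn, fn):
    """Fraction of all traffic classified correctly."""
    return (tp + tn) / (tp + fp + tn + fn)

def false_alarm_rate(fp, tn):
    """Fraction of benign traffic wrongly flagged as an attack (FPR)."""
    return fp / (fp + tn)

tp, fp, tn, fn = 900, 50, 950, 100   # hypothetical counts
acc = accuracy(tp, fp, tn, fn)       # (900 + 950) / 2000 = 0.925
far = false_alarm_rate(fp, tn)       # 50 / 1000 = 0.05
```

Reporting both matters for IDS: a detector can reach high accuracy on imbalanced traffic while still flooding operators with false alarms.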
Title: "Anomaly detection using random forest: A performance revisited" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285857
Irrevaldy, G. Saptawati
The increasing development of a city creates potential density that can lead to traffic congestion. In recent years, smartphones and other gadgets with GPS (Global Positioning System) features have become very commonly used in everyday activities. Previous work built an architecture that can infer transportation mode from GPS data. In this paper, we extend that work to detect potential traffic congestion based on transportation mode, with help from city spatial data. The data mining architecture is divided into three phases. In the first phase, we build a classification model used to extract transportation mode information from GPS data. In the second phase, we extract spatial data, divide the area into grids, and divide time into several interval groups. In the last phase, we use the first phase's result as a dataset for the DBSCAN (density-based spatial clustering of applications with noise) algorithm, run for each time interval group, to determine which grid areas have potential traffic congestion. From this architecture, we introduce a new term, cluster overlay, which identifies the level of potential traffic congestion in certain areas.
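The clustering phase can be illustrated with a compact DBSCAN sketch over 2-D points (for example, GPS fixes mapped to grid coordinates). The points, `eps`, and `min_pts` below are invented example values; this is the textbook algorithm, not the paper's implementation.

```python
# DBSCAN: points with at least min_pts neighbors within eps are "core"
# and grow a cluster; unreachable points are labeled noise (-1).
def dbscan(points, eps, min_pts):
    """Return one label per point: a cluster id (0, 1, ...) or -1 for noise."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if (q[0] - points[i][0]) ** 2 + (q[1] - points[i][1]) ** 2 <= eps ** 2]
    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # provisional noise; may later join as a border point
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # border point absorbed into cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:       # j is core: keep expanding
                queue.extend(k for k in nb if labels[k] is None)
        cluster += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
labels = dbscan(pts, eps=2.0, min_pts=2)  # two dense groups plus one noise point
```

Running this per time-interval group, as the architecture describes, yields the per-interval clusters whose overlap ("cluster overlay") indicates persistent congestion in a grid area.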
Title: "Spatio-temporal mining to identify potential traffic congestion based on transportation mode" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285876
Ferdi Rahmadi, G. Saptawati
While web services are now commonly used as a solution for integrating business processes in an organization, analyzing those business processes becomes more complicated because of the distributed nature of services. Processes on different services interact with each other, and these processes have relationships. Process mining techniques can help analyze the business process in a service. However, the process models discovered by process mining techniques still use a traditional process model, i.e., the workflow net, which has limitations in describing the interacting processes that occur among web services. Given these limitations, this paper proposes a method for identifying process interactions and relationships. The experiment shows that the port component of a proclet is capable of modeling process interactions between web services and can be implemented in process mining.
Title: "Identification process relationship of process model discovery based on workflow-net" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285873
R. F. Malik, Muhammad Sulkhan Nurfatih, H. Ubaya, Rido Zulfahmi, E. Sodikin
Vehicular Ad Hoc Network (VANET) is a Mobile Ad Hoc Network (MANET) concept in which vehicles act as nodes on the network. VANET uses mobile vehicles in an ad-hoc-based wireless network; it is thus a development of wireless networks with protocol-specific routing implementations. In this paper, we analyze the effect of queue length and queue time in position-based routing with Greedy Perimeter Stateless Routing (GPSR). The simulation uses Network Simulator 3 (NS3) and Simulation of Urban Mobility (SUMO); the scenarios vary node count and velocity in the urban environment of Palembang. We propose queue parameters (queue length and queue time) for the GPSR protocol in order to produce better performance. As a result, increasing the existing GPSR attributes (a queue length of 96 bytes and a queue time of 45 seconds) improved GPSR performance in terms of PDR, throughput, and packet loss, while for end-to-end delay these parameter values gave the poorest results.
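For reference, the routing metrics reported in the evaluation can be computed from simulation counters as sketched below. The packet counts, payload size, and duration are invented example numbers, not values from the simulation.

```python
# PDR, packet loss, and throughput from end-to-end simulation counters.
def pdr(received, sent):
    """Packet delivery ratio: fraction of sent packets that arrived."""
    return received / sent

def packet_loss(received, sent):
    """Complement of PDR: fraction of packets lost in transit."""
    return 1.0 - pdr(received, sent)

def throughput_kbps(received_bytes, duration_s):
    """Received payload rate in kilobits per second."""
    return received_bytes * 8 / 1000 / duration_s

sent, received = 1000, 875
ratio = pdr(received, sent)                   # 0.875
loss = packet_loss(received, sent)            # 0.125
tput = throughput_kbps(received * 512, 60.0)  # 875 packets of 512 B over 60 s
```

The trade-off reported in the paper shows up naturally in these metrics: a longer queue lets more packets survive routing gaps (higher PDR, lower loss) at the cost of higher end-to-end delay.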
Title: "Evaluation of greedy perimeter stateless routing protocol on vehicular ad hoc network in Palembang city" | Venue: 2017 International Conference on Data and Software Engineering (ICoDSE)