Aspect-sentiment classification in opinion mining using the combination of rule-based and machine learning
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285850
Zulva Fachrina, D. H. Widyantoro
Most online marketplaces in Indonesia provide a review or feedback feature to enhance customer satisfaction. However, these features produce a large number of unstructured opinions, and each opinion can discuss one or more aspects. In this paper, we propose a combination of rule-based and machine learning approaches to classify the aspects and sentiments of online marketplace opinions. We use a Support Vector Machine and a Naïve Bayes Classifier to classify opinions. The evaluation uses 2960 reviews from various categories collected from an Indonesian online marketplace site. The best method for the quality, accuracy, service, communication, and delivery aspects is an SVM with the rule-based output as one of its features, while the best method for the packaging and price aspects is the rule-based approach alone. The average F-measures across all aspects range from 78.9% to 92%.
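A minimal sketch of the hybrid idea the abstract describes — feeding a rule-based aspect signal to an SVM as an extra feature. The keyword rules, toy reviews, and labels below are illustrative assumptions, not the paper's actual rules or data:

```python
# Hedged sketch: rule-based output appended as one SVM feature (scikit-learn).
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

PRICE_RULE_WORDS = {"murah", "mahal", "harga"}  # hypothetical price-aspect cues

def rule_flag(review: str) -> int:
    """1 if any rule keyword fires, else 0 (stand-in for the paper's rules)."""
    return int(any(w in review.lower().split() for w in PRICE_RULE_WORDS))

reviews = ["harga murah dan pengiriman cepat", "barang rusak", "harga mahal sekali"]
labels  = [1, 0, 1]  # 1 = review discusses the price aspect (toy labels)

vec = TfidfVectorizer()
X_text = vec.fit_transform(reviews)
X_rule = csr_matrix(np.array([[rule_flag(r)] for r in reviews]))
X = hstack([X_text, X_rule])          # rule output becomes one extra feature

clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```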
{"title":"Aspect-sentiment classification in opinion mining using the combination of rule-based and machine learning","authors":"Zulva Fachrina, D. H. Widyantoro","doi":"10.1109/ICODSE.2017.8285850","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285850","url":null,"abstract":"Most online marketplaces in Indonesia provide review or feedback feature in order to enhance customer's satisfaction. However, there is a large number of unstructured opinions and every opinion can discuss one or more aspects. In this paper, we propose a combination of rule-based and machine learning approach to classify aspect and its sentiment of online marketplace opinions. We use Support Vector Machine and Naïve Bayes Classifier for classifying opinions. The evaluation uses 2960 reviews from various categories collected from Indonesian online marketplace site. The best method for quality, accuracy, service, communication, and delivery aspect is machine learning SVM with rule-based as one of the features while the best method for packaging and price aspect is using rule-based only. The average f-measures for all aspects ranging from 78.9% to 92%.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128866962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting defect resolution time using cosine similarity
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285884
Pranjal Ambardekar, Anagha Jamthe, Mandar M. Chincholkar
On-time defect resolution is an overriding project goal that cannot be neglected. Projects often miss deadlines because of open critical defects, which jeopardizes successful delivery of a product and results in lost revenue and customer dissatisfaction. Predicting defect resolution time, though a daunting task, can mitigate the risk of missing targeted milestones. In this paper, the authors propose three supervised learning approaches leveraging a cosine similarity measure, progressively improving the prediction of days to resolve (DTR) a defect. The prediction model uses historical defect data to estimate the DTR of new, similar defects. The first approach leverages a Naïve Bayes Classifier (NBC) to assess project risk by answering: is quicker defect resolution feasible? The outcome of this analysis gives preliminary information on the resolution duration. To gain deeper insight into DTR, the second approach uses the similarity score between two defect summaries to predict DTR. To improve accuracy further, a third approach is shown, in which predictions are based on statistical analysis of the DTR of defects with the same similarity scores. This approach yields lower error rates in predicting DTR for P2-High and P3-Medium defects than the second approach. Both approaches, however, outperform a simple baseline that does not involve supervised learning. These approaches can be applied to both open and closed source projects to reduce defect DTR.
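A minimal sketch of the similarity-based prediction step: vectorize historical defect summaries, find the most cosine-similar past defects, and aggregate their DTR. The toy history and the top-k-median aggregation are assumptions, not the paper's exact statistical analysis:

```python
# Hedged sketch: predict days-to-resolve (DTR) from the most similar defects.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

history = [
    ("crash on login with empty password", 3),   # (summary, DTR in days)
    ("slow page load on dashboard",        7),
    ("crash when uploading large file",    4),
]
summaries, dtr = zip(*history)

vec = TfidfVectorizer()
H = vec.fit_transform(summaries)

def predict_dtr(new_summary: str, k: int = 2) -> float:
    sims = cosine_similarity(vec.transform([new_summary]), H).ravel()
    top = np.argsort(sims)[::-1][:k]          # k most similar past defects
    return float(np.median(np.array(dtr)[top]))

print(predict_dtr("application crash on login"))
```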
{"title":"Predicting defect resolution time using cosine similarity","authors":"Pranjal Ambardekar, Anagha Jamthe, Mandar M. Chincholkar","doi":"10.1109/ICODSE.2017.8285884","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285884","url":null,"abstract":"Defect resolution on time is one of the overriding project goals which cannot be neglected. Often projects suffer from missed deadlines due to open critical defects. This negatively impacts successful delivery of a product, resulting in loss of revenue and customer dissatisfaction. Predicting defect resolution time, though a daunting task, can alleviate this risk of missing targeted milestones. In this paper, the authors propose three supervised learning approaches leveraging cosine similarity measure, progressively improving the prediction for days to resolve (DTR) a defect. The prediction model uses historical defect data to estimate DTR for new similar defects. The first prediction approach leverages Naïve Bayes Classifier (NBC) to assess project risks by answering: Is quicker defect resolution feasible? The outcome of this analysis gives preliminary information on the resolution duration. To gain deeper insights on DTR, second approach utilizes similarity score between two defect summaries to predict DTR. To improve the prediction accuracy further, a third approach is shown, where predictions are based on statistical analysis on DTR of defects having same similarity scores. This approach yields lower error rates in predicting DTR for P2-High and P3-Medium defects, as compared to the second approach. Both the approaches however outperforms the simple approach, not involving supervised learning. These approaches can be applied over both open and closed source projects to reduce defect DTR.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133210451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph clustering using Dirichlet process mixture model
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285862
I. Atastina, B. Sitohang, G. A. S. Putri, V. Moertini
One of the challenges in graph clustering is determining the number of clusters that best fits the data being processed. This study proposes a method to solve this problem using the Dirichlet Process Mixture Model (DPMM). DPMM is a statistical method already used for data clustering that does not require the number of clusters to be defined in advance. However, it has never been used for graph clustering. This study therefore proposes an adaptation that makes DPMM applicable to graph clustering. Experimental results show that DPMM can be used for graph clustering by applying spectral theory.
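A minimal sketch of one way this adaptation could look (not necessarily the authors' pipeline): embed the graph via its Laplacian eigenvectors, then fit scikit-learn's truncated Dirichlet-process mixture, which infers how many components carry weight. The toy graph and the number of eigenvectors kept are assumptions:

```python
# Hedged sketch: spectral embedding + Dirichlet-process mixture clustering.
import networkx as nx
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh
from sklearn.mixture import BayesianGaussianMixture

G = nx.karate_club_graph()                  # toy graph, not the paper's data
A = nx.to_numpy_array(G)
L = laplacian(A, normed=True)

# Spectral embedding: eigenvectors of the smallest nonzero eigenvalues.
vals, vecs = eigh(L)
X = vecs[:, 1:4]

dpmm = BayesianGaussianMixture(
    n_components=10,                        # truncation level, not cluster count
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
labels = dpmm.predict(X)
print(np.unique(labels))                    # effective clusters found by DPMM
```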
{"title":"Graph clustering using dirichlet process mixture model","authors":"I. Atastina, B. Sitohang, G. A. S. Putri, V. Moertini","doi":"10.1109/ICODSE.2017.8285862","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285862","url":null,"abstract":"One of the problems or challenges in performing graph clustering is to determine the number of clusters that best fit to the data being processed. This study is proposing a method to solve the problem using Dirichlet Process Mixture Model (DPMM). DPMM is one of the statistical methods that is already used for data clustering, without the need to define the number of clusters. However, this method has never been used before for graph clustering. Therefore, this study proposes the adaptation so that DPMM can be used for graph clustering. Experiment result shows DPMM method can be used for graph clustering, by applying spectral theory.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114552884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing dashboard visualization for heterogeneous stakeholders (case study: ITB central library)
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285872
Tjan Marco Orlando, W. Sunindyo
A dashboard is one medium for visualizing data and the results of its analysis. Many methodologies exist that can serve as references for dashboard development. However, existing methodologies do not specify the steps needed to ensure that a dashboard accommodates heterogeneous stakeholders, each with different needs and activities. The ITB central library has an adequate data storage method in the form of a database, but it does not yet have a dashboard that can support the use of this data by heterogeneous stakeholders. This research aims to develop a dashboard for the ITB central library that serves heterogeneous stakeholders and supports data utilization for both analytical and administrative purposes. This research also studies dashboard development methodologies and modifies them to detail the development steps needed to accommodate heterogeneous stakeholders. The implemented dashboard is evaluated empirically with a sample of ITB central library stakeholders. The evaluation uses two standardized usability questionnaires, the System Usability Scale (SUS) and the Usability Metric for User Experience (UMUX). Comments from all evaluation participants are also compiled to determine how well the dashboard meets the needs of each stakeholder involved.
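For reference, the SUS questionnaire the evaluation uses has a standard scoring rule: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to a 0–100 score. A sketch with made-up responses:

```python
# Standard SUS scoring; the ten responses below are invented examples.
def sus_score(responses):
    """responses: 10 Likert answers, each 1..5, in questionnaire order."""
    assert len(responses) == 10
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)   # index even -> odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```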
{"title":"Designing dashboard visualization for heterogeneous stakeholders (case study: ITB central library)","authors":"Tjan Marco Orlando, W. Sunindyo","doi":"10.1109/ICODSE.2017.8285872","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285872","url":null,"abstract":"One of the media that can be used to visualize data and the results of its analysis is dashboard. Currently in the dashboard development there have been many methodologies that can be used as a reference. However, the existing methodology does not specify the steps necessary to ensure that the dashboard development is able to accommodate heterogeneous stakeholders, in which each stakeholder has different needs and activities. In ITB central library, there has been adequate data storage method, in the form of database. However, ITB central library does not yet have a dashboard as a medium that can support the use of data by heterogeneous stakeholders. This research aims to develop dashboard of ITB central library for heterogeneous stakeholders. The dashboard is expected to support data utilization by stakeholders, both for analytical and administrative purposes. In this research also conducted a study related to dashboard development methodology, for further modification to show in detail dashboard development steps to accommodate heterogeneous stakeholders. Evaluation of dashboard implementation result is conducted empirically, involving sample from stakeholder of ITB central library. The evaluation uses two existing standardized usability questionnaire, System Usability Scale (SUS) and The Usability Metric for User Experience (UMUX). In the evaluation, it is also compiled comments from all evaluation participants to find out how far the dashboard can meet the needs of each stakeholder involved.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129978490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arduviz, a visual programming IDE for Arduino
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285871
Adin Baskoro Pratomo, Riza Satria Perdana
Arduino is an open source computing platform in the form of a single-board microcontroller, and the microcontroller is reprogrammable. The officially supported way to program an Arduino is with the Arduino language and the Arduino IDE. Another way to program an Arduino board is with a visual programming approach, using what is called a visual programming language. Commonly used tools that let a programmer write Arduino programs visually are Ardublock and miniBloq. Each tool has its own strengths, but because they are separate tools, a programmer cannot combine those strengths in a single program. We have implemented Arduviz, a visual programming integrated development environment that combines most of the advantages of both Ardublock and miniBloq, such as instant code generation and a standalone development environment.
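A minimal sketch of what "instant code generation" from visual blocks can mean: a toy block tree rendered to Arduino C on every edit. The block representation and names below are assumptions for illustration, not Arduviz's actual internals:

```python
# Hedged sketch: a toy visual-block tree rendered to Arduino C source.
BLINK_PROGRAM = {
    "setup": [("pinMode", 13, "OUTPUT")],
    "loop":  [("digitalWrite", 13, "HIGH"), ("delay", 1000),
              ("digitalWrite", 13, "LOW"),  ("delay", 1000)],
}

def emit(call):
    name, *args = call
    rendered = ", ".join(str(a) for a in args)
    return f"  {name}({rendered});"

def generate(program):
    lines = []
    for section in ("setup", "loop"):
        lines.append(f"void {section}() {{")
        lines += [emit(c) for c in program[section]]
        lines.append("}")
    return "\n".join(lines)

print(generate(BLINK_PROGRAM))  # regenerate on every block edit
```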
{"title":"Arduviz, a visual programming IDE for arduino","authors":"Adin Baskoro Pratomo, Riza Satria Perdana","doi":"10.1109/ICODSE.2017.8285871","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285871","url":null,"abstract":"Arduino is an open source computing platform in a form of single-board microcontroller. The microcontroller in Arduino is reprogrammable. Officially supported way to program Arduino is by using Arduino language and Arduino IDE. Another way to program an Arduino board is by using visual programming approach. Language used in visual programming approach is called Visual Programming Language. Commonly used existing tools that enable a programmer to write Arduino program visually are Ardublock and miniBloq. Both of those tools have their own strength. But, because those are separate tools, a programmer can't use all of those strengths to create a program. We have implemented Arduviz, a visual programming integrated development environment. Arduviz has most of advantages from both Arduviz and miniBloq such as instant code generation and stand alone development environment.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117346366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extensible analysis tool for trajectory pattern mining
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285859
Vanya Deasy Safrina, Saiful Akbar
The capability to collect moving object data has been increasing in step with the pace of technological development. The mobility of various moving objects can easily be captured via technologies such as satellites and GPS. With such facilities, studies of moving object data have multiplied over the past few decades, for instance in trajectory pattern mining: a field of moving object data mining that focuses on finding patterns in the spatial trajectory data generated by moving objects. The proposed system is an analysis tool that can run various trajectory pattern mining algorithms on moving object trajectories. In addition, a user interface is provided to facilitate interactive exploration and analysis of mining results. The main purpose of this tool is extensibility, so that new trajectory pattern mining algorithms can be added; this ability is important because research on the topic is still growing rapidly. Extensibility is obtained by analyzing the general process shared by various trajectory pattern mining algorithms. The results of this analysis are then transformed into designs using the template method pattern to ensure extensibility. From this study, an analysis tool implementing various trajectory pattern mining algorithms is successfully built. The tool is extensible: new algorithms from three mining categories, i.e. trajectory preprocessing, moving-together pattern mining, and trajectory clustering, can be added by following several rules and steps while minimizing the impact on existing system functions.
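A minimal sketch of the extensibility mechanism the abstract names, the template method pattern: the base class fixes the mining pipeline, and each new algorithm only fills in its own steps. Class and method names are illustrative, not the tool's actual API:

```python
# Hedged sketch: template method pattern for pluggable mining algorithms.
from abc import ABC, abstractmethod

class TrajectoryMiner(ABC):
    def run(self, trajectories):
        """Template method: the invariant pipeline every algorithm follows."""
        cleaned = self.preprocess(trajectories)
        patterns = self.mine(cleaned)
        return self.postprocess(patterns)

    def preprocess(self, trajectories):          # overridable default hook
        return [t for t in trajectories if len(t) > 1]

    @abstractmethod
    def mine(self, trajectories):                # algorithm-specific step
        ...

    def postprocess(self, patterns):             # overridable default hook
        return patterns

class ConvoyMiner(TrajectoryMiner):
    def mine(self, trajectories):
        return [("convoy", len(trajectories))]   # placeholder logic

print(ConvoyMiner().run([[(0, 0), (1, 1)], [(0, 0)]]))
```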
{"title":"Extensible analysis tool for trajectory pattern mining","authors":"Vanya Deasy Safrina, Saiful Akbar","doi":"10.1109/ICODSE.2017.8285859","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285859","url":null,"abstract":"The capabilities of moving object data collection have been increasing parallel with the development pace of technologies. The mobility of various moving objects can be easily generated via technologies, such as satellite and GPS. With such facilities, studies about moving object data have been increasing these past few decades, for instance, studies about trajectory pattern mining. Trajectory pattern mining is a field in moving object data mining that focuses on finding patterns from the spatial trajectory data generated from moving object data. The purposed system is an analysis tool that can run various algorithms related to trajectory pattern mining to mine trajectory of moving objects. In addition, the user interface is provided to facilitate interactive exploration and analysis of mining results. The main purpose of this tool development is to produce an extensible tool so that a new algorithm related to trajectory pattern mining can be added to the tool. This ability is considered important because the study on related topics is still growing rapidly. Extensibility of the tool is obtained by analyzing the general process from various trajectory pattern mining algorithms. The results of the analysis are then transformed into designs by utilizing the template method pattern to ensure the extensibility aspect itself. From this study, an analysis tool that implements various algorithms trajectory pattern mining is successfully built. The tool is extensible so that new algorithms from three mining categories, i.e. trajectory preprocessing, moving together pattern mining, and trajectory clustering, can be implemented into the tool by following several rules and steps while minimizing impact on existing system functions.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131411697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effectiveness of using software development methods analysis by the project timeline in an Indonesian media company
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285890
Putri Sanggabuana Setiawan, M. I. Jambak, M. I. Jambak
Technological growth in Indonesia has stimulated an increase in technology demand, and many Indonesian media companies have transformed their business processes from offline to online. The new business setting requires not only a set of revamped business processes, through business process reengineering, but also strong support from the information technology (IT) department. The modernization and computerization of the new business processes require the company to run many software projects under time and budget constraints. The company in this research has experienced many unwanted schedule overruns, in both its in-house and outsourced software projects. This paper studies 20 randomly selected in-house software projects that adopt software development methods such as the software development life cycle (SDLC), Scrum, extreme programming (XP), and waterfall, as well as the outsourced projects, to see how effective these methods are at keeping software delivery on time.
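A minimal sketch of the kind of timeline analysis such a study implies: the on-time delivery rate per development method. The project rows below are invented examples, not the company's data:

```python
# Hedged sketch: on-time delivery rate per development method (toy data).
import pandas as pd

projects = pd.DataFrame({
    "method":       ["Scrum", "Scrum", "XP", "Waterfall", "SDLC", "Waterfall"],
    "planned_days": [30, 45, 20, 60, 40, 90],
    "actual_days":  [32, 44, 21, 75, 40, 100],
})
projects["on_time"] = projects["actual_days"] <= projects["planned_days"]

print(projects.groupby("method")["on_time"].mean())  # on-time rate per method
```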
{"title":"The effectiveness of using software development methods analysis by the project timeline in an Indonesian media company","authors":"Putri Sanggabuana Setiawan, M. I. Jambak, M. I. Jambak","doi":"10.1109/ICODSE.2017.8285890","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285890","url":null,"abstract":"The technological growth in Indonesia has stimulated the increase of technology demand. A lot of Indonesian media companies have transformed their business processes, from offline to online. The new business setting not only requires a set of revamped business processes through a business process reengineering but also a strong support from the information technology (IT) departments. The modernization and computerization of the new business processes require the company to have a lot of software projects that have such time and budget constraints. The company in this research has been experiencing a lot of unwanted overdue, both in their in-house and outsourced software projects. This paper studied the randomly picked 20 in-house software projects that adopting a certain software development methods such as software development life cycle (SDLC), Scrum, extreme programming (XP), and waterfall as well as the outsourced ones to see how effective they are to keep the software delivery on time.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128153414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Imputation of missing value using dynamic Bayesian network for multivariate time series data
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285864
Steffi Pauli Susanti, F. N. Azizah
Time series and multivariate data are required to support increasingly complex decision making. Such data are processed with data mining techniques to extract valuable trends that can support decision-making processes. Unfortunately, preparing data for mining often runs into problems, one of which is missing values. Missing values may cause inaccurate processing results, and imputation is used to handle them. In this work, missing values are handled using a Dynamic Bayesian Network (DBN), a useful technique for maintaining the relationships between data attributes. The Support Vector Regression (SVR) algorithm, chosen for its good performance compared to similar algorithms, is used to predict the missing values, and the predictions fill in the gaps in the data. The technique is validated using the Symmetric Mean Absolute Percentage Error (SMAPE), which measures the error rate of the prediction model. The use of DBN-based feature selection for SVR, however, does not decrease the error rate of the model.
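A minimal sketch of the two concrete pieces named here: SVR predicting a missing value from other attributes, and SMAPE scoring the prediction. The tiny dataset is illustrative only, and this omits the DBN-based feature selection:

```python
# Hedged sketch: SVR-based imputation of one missing value, scored by SMAPE.
import numpy as np
from sklearn.svm import SVR

def smape(actual, forecast):
    """Symmetric Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(
        np.abs(forecast - actual) / ((np.abs(actual) + np.abs(forecast)) / 2)
    )

# Train on complete rows; predict the attribute that is missing elsewhere.
X_train = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0], [4.0, 5.0]])
y_train = np.array([10.0, 20.0, 30.0, 40.0])
model = SVR(kernel="rbf").fit(X_train, y_train)

filled = model.predict(np.array([[2.5, 3.5]]))   # the row with a missing value
print(filled, smape([25.0], filled))             # imputed value and its error
```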
{"title":"Imputation of missing value using dynamic Bayesian network for multivariate time series data","authors":"Steffi Pauli Susanti, F. N. Azizah","doi":"10.1109/ICODSE.2017.8285864","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285864","url":null,"abstract":"Time series and multivariate data are required to accommodate more complex decision making. Data are processed using data mining techniques in order to obtain valuable trends in the data that can be used to support in decision making processes. Unfortunately, we often encounter a lot of problems in preparing the data for data mining process. One of the problem is missing values. Missing values in data may causes inaccurate results of data processing. Imputation are used to handle missing values. In this thesis missing value are handled using Dynamic Bayesian Network (DBN). DBN is a useful technique to maintain the relationships between attributes of data. The results of the prediction are used to fill in the missing values in the data. Support Vector Regression (SVR) algorithm is used for predicting the missing values. It is chosen for its good performance in comparison to other similar algorithms. Validation of the technique is carried out by using Symmetric Mean Absolute Percentage Error (SMAPE). SMAPE used to count an error rate for prediction model. The use of the DBN of feature selection for SVR can't decrease the error rate of the model.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121443319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minutia cylinder code-based fingerprint matching optimization using GPU
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285880
Muhamad Visat Sutarno, A. I. Kistijantoro
The advancement of technology has contributed to the rapid growth of digital data. In this digital era, much physical data has been transformed into digital form; one example is the digital biometric fingerprint data on the Indonesian Electronic Identity Card (KTP-el). Fingerprint matching can take a long time to process if the dataset is large enough, so parallel fingerprint matching is needed. Based on this rationale, this paper aims to improve fingerprint matching performance over the current state-of-the-art linear solution by running the Minutia Cylinder-Code (MCC) algorithm in parallel on a GPU. Experiments show that the proposed solution has a significantly better run time than the state-of-the-art linear solution while maintaining its accuracy.
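A minimal sketch of the parallelism being exploited: MCC's bit-based cylinder similarity, roughly 1 − ‖a ⊕ b‖ / (‖a‖ + ‖b‖) over binary cylinder codes, computed for all template pairs in one batched operation — the shape of work that maps well to a GPU. Random bits stand in for real cylinder codes, and NumPy broadcasting stands in for the CUDA kernels:

```python
# Hedged sketch: batched bit-based MCC-style cylinder similarity in NumPy.
import numpy as np

rng = np.random.default_rng(0)
probe   = rng.integers(0, 2, size=(8, 1024), dtype=np.uint8)   # 8 cylinders
gallery = rng.integers(0, 2, size=(12, 1024), dtype=np.uint8)  # 12 cylinders

xor = probe[:, None, :] ^ gallery[None, :, :]        # all pairs at once
num = np.sqrt(xor.sum(axis=2))                       # ||a XOR b|| for bits
den = (np.sqrt(probe.sum(axis=1))[:, None]
       + np.sqrt(gallery.sum(axis=1))[None, :])      # ||a|| + ||b||
sim = 1.0 - num / np.maximum(den, 1e-9)              # 8 x 12 similarity matrix

print(sim.max(axis=1))   # best gallery match per probe cylinder
```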
{"title":"Minutia cylinder code-based fingerprint matching optimization using GPU","authors":"Muhamad Visat Sutarno, A. I. Kistijantoro","doi":"10.1109/ICODSE.2017.8285880","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285880","url":null,"abstract":"The advancement of technology has been giving contributions to the rapid growth of the use of digital data. In this digital era, lots of physical data have been transformed into the digital ones. One example of the use of digital data is the digital biometric fingerprint data on the Electronic Identity Card (KTP-el). Fingerprint matching can take a long time to process if the data is large enough. Thus, there is a need for a parallel fingerprint matching. Based on this rationale, this paper aims to improve the fingerprint matching performance, in the current state of the art linear solution, by using the Minutia Cylinder-Code (MCC) algorithm in parallel on GPU. Based on the experiment and testing, the proposed solution has a significantly better run time compared to the state of the art linear solution while maintaining the accuracy.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123458758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Web application fuzz testing
Pub Date: 2017-11-01 | DOI: 10.1109/ICODSE.2017.8285893
Ivan Andrianto, M. Liem, Y. Asnar
Security is a very important aspect of a web application, so security testing is needed to find vulnerabilities in web applications. One security testing technique is fuzz testing. Fuzz testing, or fuzzing, is a software testing technique in which a set of invalid inputs is given to the application under test, usually by a tool. In fuzz testing for web applications, a set of HTTP requests is sent to the application under test to see how it behaves on various inputs. It would be better if fuzz testing for web applications could run automatically under certain conditions. In this research, we develop a platform and tools for web application fuzz testing automation that can be integrated with Jenkins. The tool has been tested on web applications with known vulnerabilities; in 13 of the 15 test cases, it successfully found the vulnerabilities. Based on the results, most vulnerabilities can be detected from the HTTP response content.
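A minimal sketch of the fuzzing loop the abstract describes: send malformed inputs to a target parameter and flag suspicious response content. The URL, payloads, and error signatures are illustrative assumptions, not the paper's tool:

```python
# Hedged sketch: a tiny HTTP fuzz loop with response-content detection.
import requests

TARGET = "http://localhost:8080/search"        # hypothetical app under test
PAYLOADS = ["'", "<script>alert(1)</script>", "A" * 5000, "../../etc/passwd"]
ERROR_SIGNS = ["SQL syntax", "Traceback", "<script>alert(1)</script>"]

for payload in PAYLOADS:
    resp = requests.get(TARGET, params={"q": payload}, timeout=5)
    hits = [s for s in ERROR_SIGNS if s in resp.text]
    if resp.status_code >= 500 or hits:        # crash or reflected signature
        print(f"possible vulnerability with payload {payload!r}: "
              f"status={resp.status_code}, signatures={hits}")
```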
{"title":"Web application fuzz testing","authors":"Ivan Andrianto, M. Liem, Y. Asnar","doi":"10.1109/ICODSE.2017.8285893","DOIUrl":"https://doi.org/10.1109/ICODSE.2017.8285893","url":null,"abstract":"Security is very important aspect of a web application. Therefore security testing is needed to find vulnerabilities on web applications. One of security testing technique is fuzz testing. Fuzz testing or fuzzing is a software testing technique done by giving a set of invalid inputs to the application under test. Fuzz testing is usually done by a tool. In fuzz testing for web application, a set of HTTP requests will be sent to the application under test in order to see how the application behaves when getting various inputs. It would be better if fuzz testing for web application can run automatically on certain conditions. In this research, we develop a platform and tools for web application fuzz testing automation that can be integrated to Jenkins. The tool has been tested on web applications with known vulnerabilities. In 13 of the 15 test cases, the tool can successfully found the presence of vulnerabilities. Based on the results, most vulnerabilities can be detected based on HTTP response content.","PeriodicalId":366005,"journal":{"name":"2017 International Conference on Data and Software Engineering (ICoDSE)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121314084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}