Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.0-208
Title: Productivity Evaluation Indicators Based on LEAN and their Application to Compare Agile and Waterfall Projects
Authors: K. Jinzenji, Akio Jin, Tatsuya Muramoto
Despite the increasing importance of responding quickly to markets and customers, many organizations that are contracted to develop software are still unable to move from waterfall to agile development. Major reasons include not only the incompatibility of current labor laws with agile development but also the inability to determine the productivity (cost, duration) of agile development. This paper proposes indicators to evaluate and compare the productivity of projects by focusing on "value building" in LEAN. To promote agile development, NTT Laboratories have been running agile development trials using delegated agreements (similar to time and material contracts) since 2018. Using the proposed indicators, we compared the statistics of 20 agile trial projects and more than 200 waterfall development projects. Results revealed that agile development became superior to waterfall development in terms of delivery time and cost when its feature-used rate was 30% higher than that of waterfall development.
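The abstract does not reproduce the paper's indicators, but the LEAN intuition behind them (effort spent on features nobody uses is waste) can be illustrated with a hypothetical toy indicator. The function and formula below are illustrative assumptions, not the authors' definitions.

```python
# Hypothetical illustration only: attribute a project's total cost to the
# features that end up being used, treating unused features as LEAN waste.
def effective_cost_per_used_feature(total_cost, num_features, feature_used_rate):
    """feature_used_rate: fraction (0..1) of delivered features actually used."""
    used_features = num_features * feature_used_rate
    if used_features == 0:
        raise ValueError("no used features; indicator undefined")
    return total_cost / used_features

# Toy comparison (illustrative numbers, not the paper's data): at equal cost
# and feature count, a 30-point higher feature-used rate makes the agile
# project cheaper per used feature.
print(effective_cost_per_used_feature(100, 50, 0.5))  # waterfall-like: 4.0
print(effective_cost_per_used_feature(100, 50, 0.8))  # agile-like: 2.5
```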
{"title":"Productivity Evaluation Indicators Based on LEAN and their Application to Compare Agile and Waterfall Projects","authors":"K. Jinzenji, Akio Jin, Tatsuya Muramoto","doi":"10.1109/COMPSAC48688.2020.0-208","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.0-208","url":null,"abstract":"Despite the increasing importance of responding quickly to markets and customers, many organizations that are contracted to develop software are still unable to move from waterfall to agile development. Major reasons include not only the incompatibility of current labor laws with agile development but also the inability to determine the productivity (cost, duration) of agile development. This paper proposes indicators to evaluate and compare the productivity of projects by focusing on \"value building\" in LEAN. To promote agile development, NTT Laboratories have been running agile development trials using delegated agreements (similar to time and material contracts) since 2018. Using the proposed indicators, we compared the statistics of 20 agile trial projects and more than 200 waterfall development projects. Results revealed that agile development became superior to waterfall development in terms of delivery time and cost when its feature-used rate was 30% higher than that of waterfall development.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116673594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.0-167
Title: Detecting Malicious Web Requests Using an Enhanced TextCNN
Authors: Lian Yu, Lihao Chen, Jingtao Dong, Mengyuan Li, Lijun Liu, B. Zhao, Chen Zhang
This paper proposes an approach that combines a deep learning-based method and a traditional machine learning-based method to efficiently detect malicious requests received by Web servers. The first few layers of a Convolutional Neural Network for Text Classification (TextCNN) are used to automatically extract powerful semantic features; meanwhile, transferable statistical features are defined to boost detection ability, specifically against Web request parameter tampering. The semantic features from TextCNN and the artificially designed transferable statistical features are grouped together and fed into a Support Vector Machine (SVM), which replaces the last layer of TextCNN for classification. To facilitate the understanding of the abstract features extracted by TextCNN as numerical vectors, this paper designs trace-back functions that map max-pooling outputs back to words in Web requests. After investigating the currently available datasets for Web attack detection, the HTTP Dataset CSIC 2010 is selected to test and verify the proposed approach. The experimental results demonstrate that the proposed approach is competitive with state-of-the-art deep learning models.
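A minimal sketch of the described pipeline, not the authors' code: a TextCNN body extracts semantic features, hand-crafted statistical features are appended, and an SVM replaces the final layer. The vocabulary size, dimensions, and the specific statistical features below are assumptions.

```python
# Sketch under assumptions: a TextCNN body (embedding + parallel Conv1d
# filters + max-over-time pooling) produces semantic features, hand-crafted
# statistical features are appended, and an SVM replaces the softmax layer.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class TextCNNFeatures(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, num_filters=32,
                 kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
        # Max-over-time pooling; the paper's trace-back functions map these
        # maxima back to the words in the request that produced them.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)

def statistical_features(request: str) -> torch.Tensor:
    # Assumed examples of transferable features aimed at parameter tampering.
    n = max(len(request), 1)
    digits = sum(c.isdigit() for c in request)
    specials = sum(not c.isalnum() for c in request)
    return torch.tensor([float(len(request)), digits / n, specials / n])

# Training outline (tokenizer `encode` and labeled requests are assumed):
#   feats = [torch.cat([cnn(encode(r)).squeeze(0), statistical_features(r)])
#            for r in requests]
#   SVC(kernel="rbf").fit(torch.stack(feats).numpy(), labels)
```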
{"title":"Detecting Malicious Web Requests Using an Enhanced TextCNN","authors":"Lian Yu, Lihao Chen, Jingtao Dong, Mengyuan Li, Lijun Liu, B. Zhao, Chen Zhang","doi":"10.1109/COMPSAC48688.2020.0-167","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.0-167","url":null,"abstract":"This paper proposes an approach that combines a deep learning-based method and a traditional machine learning-based method to efficiently detect malicious requests Web servers received. The first few layers of Convolutional Neural Network for Text Classification (TextCNN) are used to automatically extract powerful semantic features and in the meantime transferable statistical features are defined to boost the detection ability, specifically Web request parameter tampering. The semantic features from TextCNN and transferable statistical features from artificially-designing are grouped together to be fed into Support Vector Machine (SVM), replacing the last layer of TextCNN for classification. To facilitate the understanding of abstract features in form of numerical data in vectors extracted by TextCNN, this paper designs trace-back functions that map max-pooling outputs back to words in Web requests. After investigating the current available datasets for Web attack detection, HTTP Dataset CSIC 2010 is selected to test and verify the proposed approach. Compared with other deep learning models, the experimental results demonstrate that the approach proposed in this paper is competitive with the state-of-the-art.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127059600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.0-140
Title: Boot Log Anomaly Detection with K-Seen-Before
Authors: Johan Garcia, Tobias Vehkajarvi
Software development for embedded systems, in particular code that interacts with boot-up procedures, can pose considerable challenges. In this work we propose the K-Seen-Before (KSB) approach to detect and highlight anomalous boot log messages, relieving developers from repeatedly having to examine boot log files of 1000+ lines by hand. We describe the KSB instance-based anomaly detection system and its relation to k-nearest neighbors (KNN). An industrial data set from the development of high-speed networking equipment is used to examine the effects of the KSB parameters on the number of detected anomalies. The results highlight the utility of KSB and indicate suitable KSB parameter settings for an appropriate trade-off in the developer's cognitive workload during log file analysis.
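A minimal sketch of the K-Seen-Before idea as the abstract describes it; the template normalization and the default K below are assumptions, not the paper's exact choices.

```python
# Sketch under assumptions: a boot log line is flagged as anomalous if its
# normalized form has been seen fewer than K times in previously processed
# logs, an instance-based rule in the spirit of KNN.
import re
from collections import Counter

class KSeenBefore:
    def __init__(self, k=2):
        self.k = k
        self.seen = Counter()

    @staticmethod
    def normalize(line: str) -> str:
        # Mask numeric/hex fields so variable values don't make every
        # occurrence of the same message template look unique.
        return re.sub(r"0x[0-9a-fA-F]+|\d+", "<*>", line)

    def observe(self, line: str) -> bool:
        """Return True if the line is flagged as anomalous (seen < k times)."""
        key = self.normalize(line)
        anomalous = self.seen[key] < self.k
        self.seen[key] += 1
        return anomalous

detector = KSeenBefore(k=2)
for line in ["mmc0: card inserted", "mmc0: card inserted",
             "CPU3: failed to come online"]:
    if detector.observe(line):
        print("highlight:", line)
```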
{"title":"Boot Log Anomaly Detection with K-Seen-Before","authors":"Johan Garcia, Tobias Vehkajarvi","doi":"10.1109/COMPSAC48688.2020.0-140","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.0-140","url":null,"abstract":"Software development for embedded systems, in particular code which interacts with boot-up procedures, can pose considerable challenges. In this work we propose the K-Seen-Before (KSB) approach to detect and highlight anomalous boot log messages, thus relieving developers from repeatedly having to manually examine boot log files of 1000+ lines. We describe the KSB instance based anomaly detection system and its relation to KNN. An industrial data set related to development of high-speed networking equipment is utilized to examine the effects of the KSB parameters on the amount of detected anomalies. The obtained results highlight the utility of KSB and provide indications of suitable KSB parameter settings for obtaining an appropriate trade-off for the cognitive workload of the developer with regards to log file analysis.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127423999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00038
Title: A Novel Dynamic Data-Driven Algorithmic Trading Strategy Using Joint Forecasts of Volatility and Stock Price
Authors: You Liang, A. Thavaneswaran, Alex Paseka, Zimo Zhu, R. Thulasiram
Volatility forecasts and stock price forecasts play major roles in algorithmic trading. In this paper, joint forecasts of volatility and stock price are first obtained and then applied to algorithmic trading. Interval forecasts of stock prices are constructed using generalized double exponential smoothing (GDES) for stock price forecasts and a data-driven exponentially weighted moving average (DD-EWMA) for volatility forecasts. Multi-step-ahead interval forecasts for nonstationary stock price series are obtained. As an application, one-step-ahead interval forecasts are used to propose a novel dynamic data-driven algorithmic trading strategy. The commonly used simple moving average (SMA) crossover and Bollinger bands trading strategies depend on unknown parameters (moving average window sizes), which are usually chosen in an ad hoc fashion. The proposed trading strategy, in contrast, does not depend on a window size and is data-driven in the sense that the optimal smoothing constants of GDES and DD-EWMA are chosen from the data. In the proposed strategy, a training sample is used to tune the parameters: the smoothing constant for GDES price forecasts, the smoothing constant for DD-EWMA volatility forecasts, and the tuning parameter that maximizes the Sharpe ratio (SR). A test sample is then used to compute cumulative profits and measure out-of-sample trading performance with the optimal tuning parameters. An empirical application on a set of widely traded stock indices shows that the proposed GDES interval forecast trading strategy significantly outperforms the SMA and buy-and-hold strategies for the majority of stock indices.
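An illustrative sketch under assumptions: standard double exponential smoothing (Holt's linear trend) with an EWMA of squared forecast errors stands in for the paper's GDES and DD-EWMA, whose smoothing constants are chosen from the data rather than fixed as they are here.

```python
# Sketch only: Holt's double exponential smoothing for the point forecast,
# EWMA of squared one-step errors for the volatility, combined into a
# one-step-ahead interval forecast. Smoothing constants are fixed assumptions.
import numpy as np

def des_interval_forecast(prices, alpha=0.3, beta=0.1, lam=0.94, z=1.96):
    level, trend = prices[0], prices[1] - prices[0]
    var = 0.0
    for p in prices[1:]:
        err = p - (level + trend)                 # one-step forecast error
        var = lam * var + (1 - lam) * err ** 2    # EWMA volatility estimate
        new_level = alpha * p + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    point = level + trend                         # one-step-ahead forecast
    half = z * np.sqrt(var)
    return point - half, point, point + half

lo, mid, hi = des_interval_forecast(np.array([100., 101., 103., 102., 105.]))
print(f"next price in [{lo:.2f}, {hi:.2f}], point {mid:.2f}")
# A trading rule in the paper's spirit: go long when the price falls below
# the lower band and exit (or short) when it rises above the upper band.
```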
{"title":"A Novel Dynamic Data-Driven Algorithmic Trading Strategy Using Joint Forecasts of Volatility and Stock Price","authors":"You Liang, A. Thavaneswaran, Alex Paseka, Zimo Zhu, R. Thulasiram","doi":"10.1109/COMPSAC48688.2020.00038","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.00038","url":null,"abstract":"Volatility forecasts and stock price forecasts play major roles in algorithmic trading. In this paper, joint forecasts of volatility and stock price are first obtained and then applied to algorithmic trading. Interval forecasts of stock prices are constructed using generalized double exponential smoothing (GDES) for stock price forecasts and data-driven exponentially weighted moving average (DD-EWMA) for volatility forecasts. Multi-stepahead interval forecasts for nonstationary stock price series are obtained. As an application, one-step-ahead interval forecasts are used to propose a novel dynamic data-driven algorithmic trading strategy. Commonly used simple moving average (SMA) crossover trading strategy and Bollinger bands trading strategy depend on unknown parameters (moving average window sizes) and the window sizes are usually chosen in an ad hoc fashion. However the proposed trading strategy does not depend on the window size, and is data-driven in the sense that the optimal smoothing constants of GDES and DD-EWMA are chosen from the data. In the proposed trading strategy, a training sample is used to tune the parameters: smoothing constant for GDES price forecasts, smoothing constant for DD-EWMA volatility forecasts, and the tuning parameter which maximizes Sharpe ratio (SR). A test sample is then used to compute cumulative profits to measure the out-of-sample trading performance using optimal tuning parameters. An empirical application on a set of widely traded stock indices shows that the proposed GDES interval forecast trading strategy is able to significantly outperform SMA and the buy and hold strategies for the majority of stock indices.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127489012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.0-230
Title: Training Confidence-Calibrated Classifier via Distributionally Robust Learning
Authors: Hang Wu, May D. Wang
Supervised learning via empirical risk minimization, despite its solid theoretical foundations, faces a major challenge in generalization capability, which limits its application to real-world data science problems. In particular, current models fail to distinguish between in-distribution and out-of-distribution inputs and give overconfident predictions for out-of-distribution samples. In this paper, we propose a distributionally robust learning method that trains classifiers by solving an unconstrained minimax game between an adversarial test distribution and a hypothesis. We show theoretical generalization performance guarantees, and empirically, our learned classifier, when coupled with thresholded detectors, can efficiently detect out-of-distribution samples.
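A minimal sketch of the minimax idea under assumptions: for a KL-constrained adversary, the inner maximization over sample weights has the closed form w_i proportional to exp(loss_i / tau), so robust training reduces to exponentially reweighting per-sample losses. This is one standard distributionally robust formulation, not necessarily the paper's exact game.

```python
# Sketch: the adversary reweights the empirical batch toward high-loss
# samples (exponential tilting); the learner minimizes the reweighted loss.
import torch

def dro_loss(per_sample_losses: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # Worst-case weights over the batch; detached so the adversary's move
    # is not differentiated through when the learner backpropagates.
    w = torch.softmax(per_sample_losses.detach() / tau, dim=0)
    return (w * per_sample_losses).sum()

# Usage inside a training loop (model, x, y, optimizer assumed to exist):
#   losses = torch.nn.functional.cross_entropy(model(x), y, reduction="none")
#   dro_loss(losses, tau=0.5).backward(); optimizer.step()
# At test time, thresholding the softmax confidence then acts as a simple
# out-of-distribution detector, as the abstract suggests.
```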
{"title":"Training Confidence-Calibrated Classifier via Distributionally Robust Learning","authors":"Hang Wu, May D. Wang","doi":"10.1109/COMPSAC48688.2020.0-230","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.0-230","url":null,"abstract":"Supervised learning via empirical risk minimization, despite its solid theoretical foundations, faces a major challenge in generalization capability, which limits its application in real-world data science problems. In particular, current models fail to distinguish in-distribution and out-of-distribution and give over confident predictions for out-of-distribution samples. In this paper, we propose an distributionally robust learning method to train classifiers via solving an unconstrained minimax game between an adversary test distribution and a hypothesis. We showed the theoretical generalization performance guarantees, and empirically, our learned classifier when coupled with thresholded detectors, can efficiently detect out-of-distribution samples.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124817974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00-84
Title: Deep Learning for Visual Segmentation: A Review
Authors: Jiaxing Sun, Yujie Li, Huimin Lu, Tohru Kamiya, S. Serikawa
Big data-driven deep learning methods have been widely used in image and video segmentation. The main challenge is that training deep learning models requires a large amount of labeled data, which is a significant issue in real-world applications. To the best of our knowledge, few surveys of deep learning-based visual segmentation exist. To this end, this paper summarizes the algorithms and current state of deep learning-based image and video segmentation technologies and points out future trends. The characteristics of segmentation based on semi-supervised or unsupervised learning, along with recent novel methods, are summarized in this paper. The principle, advantages, and disadvantages of each algorithm are also compared and analyzed.
{"title":"Deep Learning for Visual Segmentation: A Review","authors":"Jiaxing Sun, Yujie Li, Huimin Lu, Tohru Kamiya, S. Serikawa","doi":"10.1109/COMPSAC48688.2020.00-84","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.00-84","url":null,"abstract":"Big data-driven deep learning methods have been widely used in image or video segmentation. The main challenge is that a large amount of labeled data is required in training deep learning models, which is important in real-world applications. To the best of our knowledge, there exist few researches in the deep learning-based visual segmentation. To this end, this paper summarizes the algorithms and current situation of image or video segmentation technologies based on deep learning and point out the future trends. The characteristics of segmentation that based on semi-supervised or unsupervised learning, all of the recent novel methods are summarized in this paper. The principle, advantages and disadvantages of each algorithms are also compared and analyzed.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125935953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.00032
Title: Using Fine-Grained Test Cases for Improving Novice Program Fault Localization
Authors: Zheng Li, Deli Yu, Yonghao Wu, Yong Liu
Online Judge (OJ) systems, which automatically evaluate the results (right or wrong) of programs by executing them on standard test cases, are widely used in programming education. An OJ system with personalized feedback can not only give execution results but also provide information that helps students locate their problems quickly. Automated fault localization techniques are designed to find the exact faults in programs; experimental results have shown their effectiveness in locating artificial faults, but their effectiveness on novice programs needs to be investigated. In this paper, we first evaluate the effectiveness of several widely studied fault localization techniques on novice programs, and then we use fine-grained test cases to improve fault localization accuracy. Empirical studies conducted on 77 real student programs show that, compared with the original test cases in the OJ system, fault localization accuracy improves markedly when fine-grained test cases are used. More specifically, in terms of the TOP-1, TOP-3, and TOP-5 metrics, the maximum results improve from 5, 22, and 37 to 9, 24, and 48, respectively, meaning more faults can be located when checking the top 1, 3, or 5 statements. Furthermore, a Test Case Granularity (TCG) concept is introduced to describe fine-grained test cases, and empirical studies demonstrate a strong correlation between TCG and fault localization accuracy.
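One family of widely studied techniques the paper evaluates is spectrum-based fault localization; a sketch with the standard Ochiai formula follows (the paper's exact technique set is not listed in the abstract). Finer-grained test cases yield more, smaller pass/fail outcomes, which sharpens these per-statement suspiciousness scores.

```python
# Sketch of spectrum-based fault localization with the Ochiai metric:
# statements covered mostly by failing tests get high suspiciousness.
import math

def ochiai(coverage, results):
    """coverage[t][s] = 1 if test t executes statement s;
    results[t] = True if test t failed."""
    num_stmts = len(coverage[0])
    total_failed = sum(results)
    scores = []
    for s in range(num_stmts):
        failed_cov = sum(1 for t, r in enumerate(results) if r and coverage[t][s])
        passed_cov = sum(1 for t, r in enumerate(results) if not r and coverage[t][s])
        denom = math.sqrt(total_failed * (failed_cov + passed_cov))
        scores.append(failed_cov / denom if denom else 0.0)
    return scores

# Toy program with 4 statements and 3 tests; only the last test fails.
cov = [[1, 1, 0, 1], [1, 0, 1, 1], [1, 0, 1, 0]]
res = [False, False, True]
print(ochiai(cov, res))  # statement at index 2 gets the highest score
```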
{"title":"Using Fine-Grained Test Cases for Improving Novice Program Fault Localization","authors":"Zheng Li, Deli Yu, Yonghao Wu, Yong Liu","doi":"10.1109/COMPSAC48688.2020.00032","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.00032","url":null,"abstract":"Online Judge (OJ) system, which can automatically evaluate the results (right or wrong) of programs by executing them on standard test cases, is widely used in programming education. While an OJ system with personalized feedback can not only give execution results, but also provide information to assist students in locating their problems quickly. Automatically fault localization techniques are designed to find the exact faults in programs automatically, experimental results showed their effect on locating artificial faults, but their effectiveness on novice programs needs to be investigated. In this paper, we first evaluate the effectiveness of several widely-studied fault localization techniques on novice programs, and then we use fine-grained test cases to improve the fault localization accuracy. Empirical studies are conducted on 77 real student programs and the results show that, compared with original test cases in OJ system, the fault localization accuracy can be improved obviously when using fine-grained test cases. More specifically, in terms of TOP-1, TOP-3 and TOP-5 metrics, the maximum results can be improved from 5, 22, 37 to 9, 24, 48, respectively. The results indicate that more faults can be located when checking the top 1, 3 or 5 statements, so the fault localization accuracy is enhanced. Furthermore, a Test Case Granularity (TCG) concept is introduced to describe fine-grained test cases, and empirically studies demonstrate that there is a strong correlation between TCG and fault localization accuracy.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126025773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.0-112
Title: Towards Software Value Co-Creation with AI
Authors: H. Washizaki
We present a vision called "value co-creation of software by artificial intelligence (AI) and developers." In this vision, AI and developers work in collaboration as equal partners to co-create business and societal values through software system development and operations. Towards this vision, we discuss AI automation for development, focusing on machine learning, by introducing examples, including our own. Finally, we envision the future of value co-creation by AI and developers.
{"title":"Towards Software Value Co-Creation with AI","authors":"H. Washizaki","doi":"10.1109/COMPSAC48688.2020.0-112","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.0-112","url":null,"abstract":"We present a vision called \"value co-creation of software by artificial intelligence (AI) and developers.\" In this vision, AI and developers work in collaboration as equal partners to co-create business and societal values through software system development and operations. Towards this vision, we discuss AI automation for development focusing on machine learning by introducing examples, including our own. Finally, we envision the future of value co-creation by AI and developers.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123561384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.0-141
Title: A Dynamic Resource Allocation Framework for Apache Spark Applications
Authors: Kewen Wang, Mohammad Maifi Hasan Khan, Nhan Nguyen
In this paper we design and implement a middleware service for dynamically allocating computing resources to Apache Spark applications on cloud platforms, and we consider two different approaches to allocating resources. In the first approach, based on limited execution data of an application, we estimate the amount of resource adjustment (i.e., Delta) for each application separately a priori, and this value remains static during the execution of that particular application (Approach I). In the second approach, we adjust the value of Delta dynamically at runtime based on the execution pattern observed in real time (Approach II). Our evaluation using six different Apache Spark applications on both physical and virtual clusters demonstrates that our approaches can improve application performance while, in most cases, significantly reducing resource requirements compared to static resource allocation strategies.
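A hypothetical middleware loop illustrating the two approaches; app, profile_delta, progress_rate, and request_executors are placeholder hooks into the cluster manager, not real Spark APIs, and the rescaling rule is an assumption.

```python
# Sketch only: Approach I fixes Delta per application from limited prior
# runs; Approach II re-estimates it from the live execution pattern.
import time

def run_with_dynamic_allocation(app, profile_delta, progress_rate,
                                request_executors, approach="II",
                                interval_s=10):
    # All four callables/objects are hypothetical placeholders.
    delta = profile_delta(app)     # a-priori estimate from sample executions
    while not app.finished():
        if approach == "II":
            # Rescale Delta by the gap between the observed and the target
            # stage progress rates (one plausible real-time signal).
            rate = progress_rate(app)
            delta = int(delta * app.target_rate / max(rate, 1e-6))
        request_executors(app, delta)  # grow or shrink the executor pool
        time.sleep(interval_s)
```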
{"title":"A Dynamic Resource Allocation Framework for Apache Spark Applications","authors":"Kewen Wang, Mohammad Maifi Hasan Khan, Nhan Nguyen","doi":"10.1109/COMPSAC48688.2020.0-141","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.0-141","url":null,"abstract":"In this paper we design and implement a middleware service for dynamically allocating computing resources for Apache Spark applications on cloud platforms, and consider two different approaches to allocate resources. In the first approach, based on limited execution data of an application, we estimate the amount of resource adjustment (i.e., Delta) for each application separately a priori which is static during the execution of that particular application (i.e., Approach - I). In the second approach, we adjust the value of Delta dynamically during runtime based on execution pattern in real-time (i.e., Approach - II). Our evaluation using six different Apache Spark applications on both physical and virtual clusters demonstrates that our approaches can improve application performance while reducing resource requirements significantly in most cases compared to static resource allocation strategies.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115019879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-07-01 | DOI: 10.1109/COMPSAC48688.2020.0-189
Title: A Quantitative Evaluation of a Wide-Area Distributed System with SDN-FIT
Authors: Hiroki Kashiwazaki, H. Takakura, S. Shimojo
A wide-area distributed application is affected by network failures caused by natural disasters because the servers on which the application operates are geographically distributed over a wide area. Failure Injection Testing (FIT) is a method for verifying the fault tolerance of widely distributed applications. In this paper, by limiting network failures to connection links, complete FIT scenarios are generated and an exhaustive evaluation of fault tolerance is performed. The authors propose a method to omit evaluations based on topological constraint conditions, and they evaluate both the visualization of the performance data obtained from this evaluation and the reduction in fault tolerance evaluation cost achieved by the proposed method.
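A sketch of one plausible topological pruning rule, assuming single-link failure scenarios: a link whose removal cannot partition the topology (a non-bridge) need not be evaluated, since an alternative path exists. The paper's actual constraint conditions may differ.

```python
# Sketch under assumptions: enumerate single-link FIT scenarios over the
# topology, omitting non-bridge links whose failure cannot cut off servers.
import networkx as nx

def fit_scenarios(topology: nx.Graph, server_nodes: set):
    bridges = set(nx.bridges(topology))
    for link in topology.edges:
        if link not in bridges and tuple(reversed(link)) not in bridges:
            continue  # a redundant path exists; evaluation can be omitted
        g = topology.copy()
        g.remove_edge(*link)
        # Scenario worth injecting: report which servers become unreachable.
        reachable = nx.node_connected_component(g, next(iter(server_nodes)))
        yield link, server_nodes - reachable

topo = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
for link, cut_off in fit_scenarios(topo, {"a", "d"}):
    print("inject failure on", link, "servers cut off:", cut_off)
```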
{"title":"A Quantitative Evaluation of a Wide-Area Distributed System with SDN-FIT","authors":"Hiroki Kashiwazaki, H. Takakura, S. Shimojo","doi":"10.1109/COMPSAC48688.2020.0-189","DOIUrl":"https://doi.org/10.1109/COMPSAC48688.2020.0-189","url":null,"abstract":"A wide-area distributed application is affected by network failure due to natural disasters because the servers on which the application operates are distributed geographically in a wide area. Failure Injection Testing (FIT) is a method for verifying fault tolerance of widely distributed applications. In this paper, by limiting network failures only to the connection line, whole FIT scenarios are generated, and exhaustive evaluation of fault tolerance is performed. The authors propose a method to omit the evaluations from the aspect of topological constraint conditions. And they evaluate the visualization method of performance data obtained from this evaluation and the reduction of the fault tolerance evaluation cost by the proposed method.","PeriodicalId":430098,"journal":{"name":"2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115508318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}