
Latest publications from the 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC)

Productivity Evaluation Indicators Based on LEAN and their Application to Compare Agile and Waterfall Projects
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.0-208
K. Jinzenji, Akio Jin, Tatsuya Muramoto
Despite the increasing importance of responding quickly to markets and customers, many organizations that are contracted to develop software are still unable to move from waterfall to agile development. Major reasons include not only the incompatibility of current labor laws with agile development but also the inability to determine the productivity (cost, duration) of agile development. This paper proposes indicators to evaluate and compare the productivity of projects by focusing on "value building" in LEAN. To promote agile development, NTT Laboratories have been running agile development trials using delegated agreements (similar to time and material contracts) since 2018. Using the proposed indicators, we compared the statistics of 20 agile trial projects and more than 200 waterfall development projects. Results revealed that agile development became superior to waterfall development in terms of delivery time and cost when its feature-used rate was 30% higher than that of waterfall development.
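The paper's exact indicator definitions are not reproduced in this abstract; as a rough illustration of the "value building" idea, the sketch below computes a feature-used rate and a value-based productivity figure for hypothetical agile and waterfall projects. All field names, formulas, and numbers are assumptions for illustration only, not the authors' definitions.

```python
from dataclasses import dataclass

@dataclass
class Project:
    """Hypothetical per-project figures; field names are illustrative assumptions."""
    delivered_features: int    # features delivered to the customer
    used_features: int         # features actually used after release
    cost_person_months: float  # total development cost
    duration_months: float     # elapsed project duration

def feature_used_rate(p: Project) -> float:
    """Share of delivered features that end up being used (a 'value building' proxy)."""
    return p.used_features / p.delivered_features

def value_productivity(p: Project) -> float:
    """Used (valuable) features produced per person-month of cost."""
    return p.used_features / p.cost_person_months

# Toy comparison of one agile and one waterfall project (numbers are made up).
agile = Project(delivered_features=40, used_features=32, cost_person_months=20, duration_months=6)
waterfall = Project(delivered_features=60, used_features=30, cost_person_months=35, duration_months=12)

for name, p in [("agile", agile), ("waterfall", waterfall)]:
    print(f"{name:9s} feature-used rate={feature_used_rate(p):.2f} "
          f"value productivity={value_productivity(p):.2f} features/person-month")
```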
Citations: 0
Detecting Malicious Web Requests Using an Enhanced TextCNN
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.0-167
Lian Yu, Lihao Chen, Jingtao Dong, Mengyuan Li, Lijun Liu, B. Zhao, Chen Zhang
This paper proposes an approach that combines a deep learning-based method and a traditional machine learning-based method to efficiently detect malicious requests received by Web servers. The first few layers of a Convolutional Neural Network for Text Classification (TextCNN) are used to automatically extract powerful semantic features, while transferable statistical features are defined to boost the detection ability, specifically against Web request parameter tampering. The semantic features from TextCNN and the artificially designed transferable statistical features are combined and fed into a Support Vector Machine (SVM), which replaces the last layer of TextCNN for classification. To make the abstract numerical feature vectors extracted by TextCNN easier to interpret, this paper designs trace-back functions that map max-pooling outputs back to words in Web requests. After investigating the currently available datasets for Web attack detection, HTTP Dataset CSIC 2010 is selected to test and verify the proposed approach. Compared with other deep learning models, the experimental results demonstrate that the approach proposed in this paper is competitive with the state of the art.
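A minimal sketch of the hybrid pipeline described above, assuming a small PyTorch TextCNN whose max-pooled outputs stand in for the semantic features and a scikit-learn SVC in place of TextCNN's final layer. The layer sizes, vocabulary size, and the particular statistical features are illustrative assumptions, not the paper's configuration, and the extractor is left untrained here purely for brevity.

```python
import torch
import torch.nn as nn
import numpy as np
from sklearn.svm import SVC

class TextCNNFeatures(nn.Module):
    """First layers of a TextCNN: embedding + parallel convolutions + max-pooling."""
    def __init__(self, vocab_size=5000, embed_dim=64, kernel_sizes=(3, 4, 5), channels=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(nn.Conv1d(embed_dim, channels, k) for k in kernel_sizes)

    def forward(self, token_ids):                        # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)        # (batch, embed_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.cat(pooled, dim=1)                  # (batch, channels * len(kernel_sizes))

def statistical_features(requests):
    """Hand-crafted, transferable statistics per request (illustrative choices)."""
    return np.array([[len(r), r.count('='), r.count('%'), r.count("'")] for r in requests],
                    dtype=np.float32)

# Toy data: tokenised requests (padded ids) plus raw request strings with labels.
token_ids = torch.randint(0, 5000, (8, 50))
raw = ["GET /index?id=1"] * 4 + ["GET /index?id=1' OR '1'='1"] * 4
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])

extractor = TextCNNFeatures()                            # untrained here, for brevity
with torch.no_grad():
    semantic = extractor(token_ids).numpy()
features = np.hstack([semantic, statistical_features(raw)])

clf = SVC(kernel="rbf").fit(features, labels)            # SVM replaces TextCNN's last layer
print(clf.predict(features))
```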
Citations: 10
Boot Log Anomaly Detection with K-Seen-Before
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.0-140
Johan Garcia, Tobias Vehkajarvi
Software development for embedded systems, in particular code that interacts with boot-up procedures, can pose considerable challenges. In this work we propose the K-Seen-Before (KSB) approach to detect and highlight anomalous boot log messages, thus relieving developers from repeatedly having to manually examine boot log files of 1000+ lines. We describe the KSB instance-based anomaly detection system and its relation to KNN. An industrial data set related to the development of high-speed networking equipment is used to examine the effects of the KSB parameters on the number of detected anomalies. The obtained results highlight the utility of KSB and indicate suitable KSB parameter settings for an appropriate trade-off in the developer's cognitive workload during log file analysis.
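The abstract does not spell out KSB's internals, but read as an instance-based relative of KNN it can be approximated as "flag any message whose normalized form has been seen fewer than k times before". The sketch below follows that reading; the normalization rules, the parameter k, and the sample log lines are assumptions for illustration.

```python
import re
from collections import Counter

def normalize(line: str) -> str:
    """Collapse volatile tokens (hex ids, numbers) so repeated messages compare equal."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    return re.sub(r"\d+", "<NUM>", line).strip()

def k_seen_before(boot_log, k=1, history=None):
    """Yield (line_no, line) for messages seen fewer than k times in prior logs/lines."""
    seen = Counter() if history is None else history
    for i, line in enumerate(boot_log, start=1):
        key = normalize(line)
        if seen[key] < k:
            yield i, line                    # anomalous: not seen (often enough) before
        seen[key] += 1

# Toy baseline from earlier boots and a current boot log to screen.
baseline = ["kernel: booting cpu 0", "eth0: link up at 1000 Mbps"]
current = ["kernel: booting cpu 1", "eth0: link up at 1000 Mbps",
           "fpga0: firmware checksum mismatch"]

history = Counter(normalize(l) for l in baseline)
for line_no, line in k_seen_before(current, k=1, history=history):
    print(f"line {line_no}: {line}")
```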
Citations: 0
A Novel Dynamic Data-Driven Algorithmic Trading Strategy Using Joint Forecasts of Volatility and Stock Price
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.00038
You Liang, A. Thavaneswaran, Alex Paseka, Zimo Zhu, R. Thulasiram
Volatility forecasts and stock price forecasts play major roles in algorithmic trading. In this paper, joint forecasts of volatility and stock price are first obtained and then applied to algorithmic trading. Interval forecasts of stock prices are constructed using generalized double exponential smoothing (GDES) for stock price forecasts and a data-driven exponentially weighted moving average (DD-EWMA) for volatility forecasts. Multi-step-ahead interval forecasts for nonstationary stock price series are obtained. As an application, one-step-ahead interval forecasts are used to propose a novel dynamic data-driven algorithmic trading strategy. The commonly used simple moving average (SMA) crossover and Bollinger bands trading strategies depend on unknown parameters (moving-average window sizes), which are usually chosen in an ad hoc fashion. The proposed trading strategy, however, does not depend on a window size and is data-driven in the sense that the optimal smoothing constants of GDES and DD-EWMA are chosen from the data. In the proposed trading strategy, a training sample is used to tune the parameters: the smoothing constant for GDES price forecasts, the smoothing constant for DD-EWMA volatility forecasts, and the tuning parameter that maximizes the Sharpe ratio (SR). A test sample is then used to compute cumulative profits to measure the out-of-sample trading performance using the optimal tuning parameters. An empirical application on a set of widely traded stock indices shows that the proposed GDES interval forecast trading strategy significantly outperforms the SMA and buy-and-hold strategies for the majority of stock indices.
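A compact sketch of the interval-forecast idea: a Holt-style double exponential smoother provides a one-step-ahead price forecast, an EWMA of squared returns provides a data-driven volatility estimate, and a toy rule goes long whenever the next observed price falls below the lower band of the forecast interval. The smoothing constants, band width, and simulated prices are illustrative assumptions; the paper instead tunes the optimal constants on a training sample to maximize the Sharpe ratio.

```python
import numpy as np

def holt_forecast(prices, alpha=0.3, beta=0.1):
    """One-step-ahead forecasts from double exponential smoothing (level + trend)."""
    level, trend = prices[0], prices[1] - prices[0]
    forecasts = [level + trend]                      # forecasts[t] predicts prices[t + 1]
    for p in prices[1:]:
        prev_level = level
        level = alpha * p + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        forecasts.append(level + trend)
    return np.array(forecasts)

def ewma_volatility(prices, lam=0.94):
    """Exponentially weighted moving estimate of return volatility."""
    returns = np.diff(np.log(prices))
    var = returns[0] ** 2
    vols = [np.sqrt(var)]                            # vols[t] uses returns up to index t
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
        vols.append(np.sqrt(var))
    return np.array(vols)

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 250)))  # simulated price path

fc = holt_forecast(prices)
vol = ewma_volatility(prices)

# At time t we know fc[t] (forecast of prices[t+1]) and vol[t-1]; the lower band is the
# forecast minus z * volatility * current price, and we go long if prices[t+1] dips below it.
z = 1.0
lower_band = fc[1:-1] - z * vol[:-1] * prices[1:-1]
long_signal = prices[2:] < lower_band
print(f"long signals on {long_signal.sum()} of {long_signal.size} days")
```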
Citations: 9
Training Confidence-Calibrated Classifier via Distributionally Robust Learning
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.0-230
Hang Wu, May D. Wang
Supervised learning via empirical risk minimization, despite its solid theoretical foundations, faces a major challenge in generalization capability, which limits its application to real-world data science problems. In particular, current models fail to distinguish in-distribution from out-of-distribution inputs and give overconfident predictions for out-of-distribution samples. In this paper, we propose a distributionally robust learning method that trains classifiers by solving an unconstrained minimax game between an adversarial test distribution and a hypothesis. We show theoretical generalization performance guarantees, and empirically, our learned classifier, when coupled with thresholded detectors, can efficiently detect out-of-distribution samples.
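The abstract does not give the exact minimax objective, so the sketch below shows one standard instantiation of distributionally robust training, not necessarily the paper's: the adversary reweights the training samples (the closed form of a KL-regularized inner maximization puts weight proportional to exp(loss / tau) on each sample) and the hypothesis minimizes the reweighted loss. The toy data, network, and temperature tau are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 2-class data: two Gaussian blobs in 2-D.
x = torch.cat([torch.randn(100, 2) + 2, torch.randn(100, 2) - 2])
y = torch.cat([torch.zeros(100, dtype=torch.long), torch.ones(100, dtype=torch.long)])

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss(reduction="none")
tau = 0.5   # adversary "temperature": smaller values give a more adversarial reweighting

for step in range(200):
    per_sample = loss_fn(model(x), y)                 # hypothesis side: per-sample losses
    # Adversary side (closed form of a KL-regularized inner max):
    # worst-case weights put more mass on hard samples.
    weights = torch.softmax(per_sample.detach() / tau, dim=0)
    robust_loss = (weights * per_sample).sum()
    opt.zero_grad()
    robust_loss.backward()
    opt.step()

with torch.no_grad():
    acc = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {acc:.2f}")
```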
Citations: 1
Deep Learning for Visual Segmentation: A Review
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.00-84
Jiaxing Sun, Yujie Li, Huimin Lu, Tohru Kamiya, S. Serikawa
Big data-driven deep learning methods have been widely used in image and video segmentation. The main challenge is that training deep learning models requires a large amount of labeled data, which matters in real-world applications. To the best of our knowledge, few works have systematically reviewed deep learning-based visual segmentation. To this end, this paper summarizes the algorithms and the current state of deep learning-based image and video segmentation technologies and points out future trends. The characteristics of segmentation based on semi-supervised or unsupervised learning, along with recent novel methods, are summarized in this paper. The principles, advantages, and disadvantages of each algorithm are also compared and analyzed.
Citations: 2
Using Fine-Grained Test Cases for Improving Novice Program Fault Localization
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.00032
Zheng Li, Deli Yu, Yonghao Wu, Yong Liu
Online Judge (OJ) systems, which automatically evaluate whether programs are right or wrong by executing them on standard test cases, are widely used in programming education. An OJ system with personalized feedback can not only give execution results but also provide information that helps students locate their faults quickly. Automated fault localization techniques are designed to find the exact faults in programs; experimental results have shown their effectiveness in locating artificial faults, but their effectiveness on novice programs still needs to be investigated. In this paper, we first evaluate the effectiveness of several widely studied fault localization techniques on novice programs, and then we use fine-grained test cases to improve fault localization accuracy. Empirical studies are conducted on 77 real student programs, and the results show that, compared with the original test cases in the OJ system, fault localization accuracy improves markedly when fine-grained test cases are used. More specifically, in terms of the TOP-1, TOP-3, and TOP-5 metrics, the maximum results improve from 5, 22, and 37 to 9, 24, and 48, respectively. The results indicate that more faults can be located when checking the top 1, 3, or 5 statements, so fault localization accuracy is enhanced. Furthermore, a Test Case Granularity (TCG) concept is introduced to describe fine-grained test cases, and empirical studies demonstrate a strong correlation between TCG and fault localization accuracy.
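The abstract refers to several widely studied fault localization techniques; as one concrete example of how such spectrum-based techniques rank statements (and hence how TOP-N is measured), the sketch below computes Ochiai suspiciousness from a per-statement coverage matrix. The choice of Ochiai, the toy coverage matrix, and the test verdicts are assumptions for illustration, not the paper's data.

```python
import math

def ochiai(coverage, failed):
    """Ochiai suspiciousness per statement.

    coverage[t][s] == 1 if test t executes statement s; failed[t] is the test verdict.
    """
    n_stmts = len(coverage[0])
    total_failed = sum(failed)
    scores = []
    for s in range(n_stmts):
        ef = sum(1 for t, row in enumerate(coverage) if row[s] and failed[t])      # executed & failed
        ep = sum(1 for t, row in enumerate(coverage) if row[s] and not failed[t])  # executed & passed
        denom = math.sqrt(total_failed * (ef + ep))
        scores.append(ef / denom if denom else 0.0)
    return scores

# Toy program with 5 statements; statement 2 (0-based) is the faulty one.
coverage = [
    [1, 1, 1, 0, 1],   # test 0
    [1, 0, 1, 1, 1],   # test 1
    [1, 1, 0, 1, 1],   # test 2
]
failed = [True, True, False]

scores = ochiai(coverage, failed)
ranking = sorted(range(len(scores)), key=lambda s: scores[s], reverse=True)
print("suspiciousness:", [round(x, 2) for x in scores])
print("ranked statements (most suspicious first):", ranking)
```

Finer-grained test cases narrow the coverage differences between passing and failing runs, which is what pushes the faulty statement toward the top of such a ranking.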
Citations: 1
Towards Software Value Co-Creation with AI
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.0-112
H. Washizaki
We present a vision called "value co-creation of software by artificial intelligence (AI) and developers." In this vision, AI and developers work in collaboration as equal partners to co-create business and societal values through software system development and operations. Towards this vision, we discuss AI automation for development focusing on machine learning by introducing examples, including our own. Finally, we envision the future of value co-creation by AI and developers.
Citations: 2
A Dynamic Resource Allocation Framework for Apache Spark Applications
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.0-141
Kewen Wang, Mohammad Maifi Hasan Khan, Nhan Nguyen
In this paper we design and implement a middleware service for dynamically allocating computing resources to Apache Spark applications on cloud platforms, and we consider two different approaches to allocating resources. In the first approach, based on limited execution data of an application, we estimate the amount of resource adjustment (i.e., Delta) for each application separately a priori, and this value remains static during the execution of that particular application (i.e., Approach - I). In the second approach, we adjust the value of Delta dynamically at runtime based on the execution pattern observed in real time (i.e., Approach - II). Our evaluation using six different Apache Spark applications on both physical and virtual clusters demonstrates that our approaches can improve application performance while significantly reducing resource requirements in most cases compared to static resource allocation strategies.
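A rough sketch of the Approach - II style runtime adjustment: a monitoring loop re-estimates Delta from live stage metrics and asks the cluster for a new executor total. The `StageMetrics` fields, the Delta heuristic, and the `request_total_executors` callback are hypothetical placeholders for illustration, not the paper's middleware or Spark's public API.

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class StageMetrics:
    """Hypothetical snapshot of a running Spark stage (field names are assumptions)."""
    pending_tasks: int
    running_tasks: int
    avg_task_seconds: float
    target_stage_seconds: float   # deadline-style target for the remaining work

def compute_delta(m: StageMetrics, executors: int, cores_per_executor: int = 4) -> int:
    """Executor adjustment needed to finish the pending work within the stage target."""
    slots_needed = (m.pending_tasks * m.avg_task_seconds) / max(m.target_stage_seconds, 1e-9)
    executors_needed = int(round(slots_needed / cores_per_executor))
    return executors_needed - executors

def allocation_loop(get_metrics: Callable[[], StageMetrics],
                    request_total_executors: Callable[[int], None],
                    initial_executors: int = 4, interval_s: float = 5.0, rounds: int = 3):
    """Approach - II style loop: re-estimate Delta from live metrics every interval."""
    executors = initial_executors
    for _ in range(rounds):
        delta = compute_delta(get_metrics(), executors)
        if delta != 0:
            executors = max(1, executors + delta)
            request_total_executors(executors)    # placeholder for the cluster-manager call
        time.sleep(interval_s)

# Toy driver: fake metrics and a print-only "cluster manager".
fake = iter([StageMetrics(800, 16, 2.0, 120.0),
             StageMetrics(300, 16, 2.0, 60.0),
             StageMetrics(40, 16, 2.0, 60.0)])
allocation_loop(lambda: next(fake), lambda n: print(f"requesting {n} executors"),
                interval_s=0.0)
```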
Citations: 3
A Quantitative Evaluation of a Wide-Area Distributed System with SDN-FIT
Pub Date : 2020-07-01 DOI: 10.1109/COMPSAC48688.2020.0-189
Hiroki Kashiwazaki, H. Takakura, S. Shimojo
A wide-area distributed application is affected by network failures caused by natural disasters because the servers on which the application runs are geographically distributed over a wide area. Failure Injection Testing (FIT) is a method for verifying the fault tolerance of widely distributed applications. In this paper, by limiting network failures to connection lines only, whole FIT scenarios are generated and an exhaustive evaluation of fault tolerance is performed. The authors propose a method to omit evaluations on the basis of topological constraint conditions. They also evaluate a method for visualizing the performance data obtained from this evaluation and the reduction in fault-tolerance evaluation cost achieved by the proposed method.
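To make the "generate all connection-line failure scenarios, then omit some via topological constraints" idea concrete, the sketch below enumerates single- and double-link failures over a toy topology with networkx and drops scenarios that violate an example constraint (the surviving graph must stay connected). The topology, failure limit, and constraint are illustrative assumptions, not the paper's testbed or criteria.

```python
from itertools import combinations
import networkx as nx

# Toy wide-area topology: nodes are sites, edges are the connection lines under test.
g = nx.Graph()
g.add_edges_from([("tokyo", "osaka"), ("osaka", "fukuoka"),
                  ("tokyo", "sapporo"), ("osaka", "sapporo"), ("fukuoka", "tokyo")])

def fit_scenarios(graph, max_failures=2):
    """All link-failure scenarios with up to max_failures simultaneous line failures."""
    for k in range(1, max_failures + 1):
        yield from combinations(graph.edges, k)

def satisfies_constraints(graph, failed_links):
    """Example topological constraint used to omit scenarios: survivors stay connected."""
    h = graph.copy()
    h.remove_edges_from(failed_links)
    return nx.is_connected(h)

all_scenarios = list(fit_scenarios(g))
kept = [s for s in all_scenarios if satisfies_constraints(g, s)]
print(f"generated {len(all_scenarios)} scenarios, kept {len(kept)} after constraints")
for scenario in kept[:5]:
    print("inject failure on:", scenario)
```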
Citations: 1