Ziqi Chen, Huiyan Wang, Chang Xu, Xiaoxing Ma, Chun Cao
{"title":"VISION:通过镜像合成评估DNN模型的场景适用性","authors":"Ziqi Chen, Huiyan Wang, Chang Xu, Xiaoxing Ma, Chun Cao","doi":"10.1109/APSEC48747.2019.00020","DOIUrl":null,"url":null,"abstract":"Software systems assisted with deep neural networks (DNNs) are gaining increasing popularities. However, one outstanding problem is to judge whether a given application scenario suits a DNN model, whose answer highly affects its concerned system's performance. Existing work indirectly addressed this problem by seeking for higher test coverage or generating adversarial inputs. One pioneering work is SynEva, which exactly addressed this problem by synthesizing mirror programs for scenario suitableness evaluation of general machine learning programs, but fell short in supporting DNN models. In this paper, we propose VISION to eValuatIng Scenario suItableness fOr DNN models, specially catered for DNN characteristics. We conducted experiments on a real-world self-driving dataset Udacity, and the results show that VISION was effective in evaluating scenario suitableness for DNN models with an accuracy of 75.6–89.0% as compared to that of SynEva, 50.0–81.8%. We also explored different meta-models in VISION, and found out that the decision tree logic learner meta-model could be the best one for balancing VISION's effectiveness and efficiency.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"288 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"VISION: Evaluating Scenario Suitableness for DNN Models by Mirror Synthesis\",\"authors\":\"Ziqi Chen, Huiyan Wang, Chang Xu, Xiaoxing Ma, Chun Cao\",\"doi\":\"10.1109/APSEC48747.2019.00020\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Software systems assisted with deep neural networks (DNNs) are gaining increasing popularities. 
However, one outstanding problem is to judge whether a given application scenario suits a DNN model, whose answer highly affects its concerned system's performance. Existing work indirectly addressed this problem by seeking for higher test coverage or generating adversarial inputs. One pioneering work is SynEva, which exactly addressed this problem by synthesizing mirror programs for scenario suitableness evaluation of general machine learning programs, but fell short in supporting DNN models. In this paper, we propose VISION to eValuatIng Scenario suItableness fOr DNN models, specially catered for DNN characteristics. We conducted experiments on a real-world self-driving dataset Udacity, and the results show that VISION was effective in evaluating scenario suitableness for DNN models with an accuracy of 75.6–89.0% as compared to that of SynEva, 50.0–81.8%. We also explored different meta-models in VISION, and found out that the decision tree logic learner meta-model could be the best one for balancing VISION's effectiveness and efficiency.\",\"PeriodicalId\":325642,\"journal\":{\"name\":\"2019 26th Asia-Pacific Software Engineering Conference (APSEC)\",\"volume\":\"288 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 26th Asia-Pacific Software Engineering Conference (APSEC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/APSEC48747.2019.00020\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 26th Asia-Pacific Software Engineering Conference 
(APSEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/APSEC48747.2019.00020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
VISION: Evaluating Scenario Suitableness for DNN Models by Mirror Synthesis
Software systems assisted by deep neural networks (DNNs) are gaining increasing popularity. However, one outstanding problem is to judge whether a given application scenario suits a DNN model; the answer strongly affects the performance of the system concerned. Existing work addressed this problem only indirectly, by seeking higher test coverage or generating adversarial inputs. One pioneering work is SynEva, which addressed this problem directly by synthesizing mirror programs for scenario suitableness evaluation of general machine learning programs, but fell short in supporting DNN models. In this paper, we propose VISION (eValuatIng Scenario suItableness fOr DNN models), specially catered to DNN characteristics. We conducted experiments on Udacity, a real-world self-driving dataset, and the results show that VISION was effective in evaluating scenario suitableness for DNN models, with an accuracy of 75.6–89.0%, compared with SynEva's 50.0–81.8%. We also explored different meta-models in VISION and found that the decision tree logic learner meta-model could be the best one for balancing VISION's effectiveness and efficiency.
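The mirror-synthesis idea behind SynEva and VISION can be illustrated with a minimal sketch: train a decision-tree "mirror" to imitate a trained model's own predictions, then measure how strongly the two agree on data from a new scenario as a rough proxy for scenario suitableness. This is not the paper's actual algorithm; the model, the `suitableness` helper, and all data are illustrative assumptions.

```python
# Illustrative sketch (not the paper's algorithm): a decision-tree mirror
# imitates a trained classifier, and model/mirror agreement on scenario
# data serves as a rough suitableness score.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Original scenario: two well-separated Gaussian blobs, binary labels.
X_train = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)

# Stand-in for the DNN under evaluation.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Mirror meta-model: a decision tree trained on the model's OWN outputs,
# so it approximates the model's learned decision logic.
mirror = DecisionTreeClassifier(max_depth=4, random_state=0)
mirror.fit(X_train, model.predict(X_train))

def suitableness(X_scenario):
    """Agreement rate between model and mirror on scenario data (0..1)."""
    return float(np.mean(model.predict(X_scenario) == mirror.predict(X_scenario)))

# A scenario drawn like the training data should score high agreement.
X_similar = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
print(suitableness(X_similar))
```

The design choice mirrors the paper's finding at a small scale: a shallow decision tree is cheap to train and inspect, which is why such a meta-model can balance effectiveness against efficiency.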