Optimizing Cleanset Growth by Using Multi-Class Neural Networks
Adrian Ioan Pîrîu, M. Leonte, Nicolae Postolachi, Dragos Gavrilut
2018 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), September 2018
DOI: 10.1109/SYNASC.2018.00071
Abstract
Starting in 2005-2006, the number of malware samples grew exponentially, to the point where by the beginning of 2018 more than 800 million samples were known. Security vendors had to adjust to this growth, one solution being the use of machine learning algorithms for prediction. However, as the number of malware samples grows, so should the benign sample set (if one wants reliable training and a proactive model). This paper presents key aspects of the procedures and optimizations needed to create a large cleanset (a collection of benign files) that can be used for machine learning training.
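The paper itself describes the cleanset-building procedures; as a rough illustration of the multi-class classification setup named in the title, the sketch below trains a small multi-layer perceptron that assigns file feature vectors to one of several classes. The feature extraction, class labels, and network size are assumptions for illustration only, not the authors' actual pipeline.

```python
# Hypothetical illustration: a small multi-class neural network that labels
# file feature vectors as clean or as one of several malware classes.
# Features, class names, and network size are assumptions, not the pipeline
# described in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

CLASSES = ["clean", "trojan", "adware", "ransomware"]  # assumed labels

# Stand-in for real static features extracted from executable files.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))
y = rng.integers(0, len(CLASSES), size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Multi-class MLP with a softmax output over the class set.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

# Samples predicted "clean" with high confidence could be candidates for
# growing the cleanset; everything else is held back for review.
pred = clf.predict(X_test)
accuracy = (pred == y_test).mean()
print(f"held-out accuracy on synthetic data: {accuracy:.2f}")
```

In such a setup, the per-class probabilities from the classifier could be thresholded so that only confidently benign files are added to the cleanset; this gating step is an assumption here, not a claim about the paper's method.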