A novel method for Convolutional Neural Architecture Generation with memory limitation

Gábor Kertész, S. Szénási, Z. Vámossy
2019 IEEE 17th World Symposium on Applied Machine Intelligence and Informatics (SAMI), 2019-01-01
DOI: 10.1109/SAMI.2019.8782734
A novel algorithm-based method for neural architecture generation is introduced in this paper. Unlike NASNet or AutoML, which output a single optimal trained model based on a Recurrent Neural Network, the presented method outputs multiple candidate architectures without training them. The method is not intended for automatic model generation from inputs; rather, it is designed for analyzing input-preprocessing methods. Classical convolutional design patterns are analyzed, and the properties and validation steps for evaluating the generated architectures are defined, including an estimate of model size based on parameter counts and training batch sizes.
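The abstract refers to estimating model size from parameter counts and training batch sizes. As a rough illustration only (not the paper's actual formula), such an estimate typically accounts for weights, gradients, and batch-scaled activations; the function below is a hypothetical sketch assuming 32-bit floats throughout.

```python
# Hedged sketch of a training-memory estimate for a CNN.
# This is NOT the paper's method; names, inputs, and the 4-byte
# float assumption are illustrative choices by this example.

def estimate_model_memory_bytes(param_count: int,
                                activation_count: int,
                                batch_size: int,
                                bytes_per_value: int = 4) -> int:
    """Rough peak-training-memory estimate: weights and gradients
    are stored once; activations scale with the batch size."""
    weights = param_count * bytes_per_value
    gradients = param_count * bytes_per_value          # one gradient per weight
    activations = activation_count * batch_size * bytes_per_value
    return weights + gradients + activations

# Example: ~1M parameters, ~500k activation values per sample, batch of 32
total = estimate_model_memory_bytes(1_000_000, 500_000, 32)
print(total)  # total estimated bytes: 72_000_000
```

A generator that rejects architectures whose estimate exceeds a device's memory budget would capture the "memory limitation" constraint named in the title.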