{"title":"比较Adam和SGD优化器来训练AlexNet分类古建筑的GPR c扫描","authors":"M. Manataki, A. Vafidis, Apostolos Sarris","doi":"10.1109/iwagpr50767.2021.9843162","DOIUrl":null,"url":null,"abstract":"In this study, AlexNet architecture is implemented and trained to classify C-scans featuring ancient structural patterns. The performance of two popular optimizers is examined and compared, namely the Stochastic Gradient Descent (SGD) with momentum and Adaptive Moments Estimate (Adam). The two optimizers were employed to train models using a GPR dataset from several archaeological sites. The results showed that even though SGD was more challenging to achieve learning, it eventually performed better than Adam when Batch Normalization, Dropout, and tuning the batch size and learning rate were performed. Furthermore, the generalization was tested using entirely independent data. SGD performed better, scoring 95% over 90% classification accuracy. The obtained results highlight how important the optimizer’s choice can be in the learning process and is worth investigating when training CNNs models with GPR data.","PeriodicalId":170169,"journal":{"name":"2021 11th International Workshop on Advanced Ground Penetrating Radar (IWAGPR)","volume":"4 5","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Comparing Adam and SGD optimizers to train AlexNet for classifying GPR C-scans featuring ancient structures\",\"authors\":\"M. Manataki, A. Vafidis, Apostolos Sarris\",\"doi\":\"10.1109/iwagpr50767.2021.9843162\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this study, AlexNet architecture is implemented and trained to classify C-scans featuring ancient structural patterns. The performance of two popular optimizers is examined and compared, namely the Stochastic Gradient Descent (SGD) with momentum and Adaptive Moments Estimate (Adam). The two optimizers were employed to train models using a GPR dataset from several archaeological sites. The results showed that even though SGD was more challenging to achieve learning, it eventually performed better than Adam when Batch Normalization, Dropout, and tuning the batch size and learning rate were performed. Furthermore, the generalization was tested using entirely independent data. SGD performed better, scoring 95% over 90% classification accuracy. 
The obtained results highlight how important the optimizer’s choice can be in the learning process and is worth investigating when training CNNs models with GPR data.\",\"PeriodicalId\":170169,\"journal\":{\"name\":\"2021 11th International Workshop on Advanced Ground Penetrating Radar (IWAGPR)\",\"volume\":\"4 5\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 11th International Workshop on Advanced Ground Penetrating Radar (IWAGPR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/iwagpr50767.2021.9843162\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 11th International Workshop on Advanced Ground Penetrating Radar (IWAGPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iwagpr50767.2021.9843162","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Comparing Adam and SGD optimizers to train AlexNet for classifying GPR C-scans featuring ancient structures
In this study, the AlexNet architecture is implemented and trained to classify GPR C-scans featuring ancient structural patterns. The performance of two popular optimizers, Stochastic Gradient Descent (SGD) with momentum and Adaptive Moment Estimation (Adam), is examined and compared. The two optimizers were employed to train models on a GPR dataset gathered from several archaeological sites. The results showed that although learning was harder to achieve with SGD, it eventually performed better than Adam once Batch Normalization and Dropout were applied and the batch size and learning rate were tuned. Furthermore, generalization was tested on entirely independent data, where SGD again performed better, achieving 95% classification accuracy versus Adam's 90%. The obtained results highlight how important the choice of optimizer can be in the learning process, and show that it is worth investigating when training CNN models on GPR data.
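For illustration, the sketch below shows one way the comparison described in the abstract could be set up in PyTorch: the same AlexNet-style classifier is trained twice, once with SGD + momentum and once with Adam, so that any accuracy difference can be attributed to the optimizer. This is a minimal sketch, not the authors' code; the class count, learning rates, momentum value, and the `train_loader` placeholder are assumptions, not values reported in the paper.

```python
# Minimal sketch (assumed setup, not the paper's implementation) of comparing
# SGD with momentum against Adam when training an AlexNet-style classifier.
import torch
import torch.nn as nn
import torchvision.models as models


def build_model(num_classes: int = 2) -> nn.Module:
    # torchvision's AlexNet already includes Dropout in its classifier head;
    # the paper additionally uses Batch Normalization, which stock AlexNet lacks.
    return models.alexnet(num_classes=num_classes)


def make_optimizer(model: nn.Module, name: str) -> torch.optim.Optimizer:
    if name == "sgd":
        # lr and momentum are illustrative placeholders, not reported values.
        return torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    if name == "adam":
        return torch.optim.Adam(model.parameters(), lr=1e-4)
    raise ValueError(f"unknown optimizer: {name}")


def train_one_epoch(model, loader, optimizer, device="cpu"):
    criterion = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:  # x: C-scan images, y: class labels
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()


# Run the identical training procedure once per optimizer.
for opt_name in ("sgd", "adam"):
    model = build_model()
    optimizer = make_optimizer(model, opt_name)
    # train_one_epoch(model, train_loader, optimizer)  # train_loader: a
    # hypothetical DataLoader over the GPR C-scan dataset (not provided here).
```

Keeping the model, loss, and training loop fixed while swapping only the optimizer is the design choice that makes the comparison meaningful; tuning the batch size and learning rate per optimizer, as the paper does, would wrap this loop in a small hyperparameter search.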