Hamid Hassannejad, G. Matrella, P. Ciampolini, I. D. Munari, M. Mordonini, S. Cagnoni
{"title":"使用非常深卷积网络的食物图像识别","authors":"Hamid Hassannejad, G. Matrella, P. Ciampolini, I. D. Munari, M. Mordonini, S. Cagnoni","doi":"10.1145/2986035.2986042","DOIUrl":null,"url":null,"abstract":"We evaluated the effectiveness in classifying food images of a deep-learning approach based on the specifications of Google's image recognition architecture Inception. The architecture is a deep convolutional neural network (DCNN) having a depth of 54 layers. In this study, we fine-tuned this architecture for classifying food images from three well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256. On these datasets we achieved, respectively, 88.28%, 81.45%, and 76.17% as top-1 accuracy and 96.88%, 97.27%, and 92.58% as top-5 accuracy. To the best of our knowledge, these results significantly improve the best published results obtained on the same datasets, while requiring less computation power, since the number of parameters and the computational complexity are much smaller than the competitors?. Because of this, even if it is still rather large, the deep network based on this architecture appears to be at least closer to the requirements for mobile systems.","PeriodicalId":91925,"journal":{"name":"MADiMa'16 : proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management : October 16, 2016, Amsterdam, The Netherlands. International Workshop on Multimedia Assisted Dietary Management (2nd : 2016 : Amsterdam...","volume":"10 113 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2016-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"161","resultStr":"{\"title\":\"Food Image Recognition Using Very Deep Convolutional Networks\",\"authors\":\"Hamid Hassannejad, G. Matrella, P. Ciampolini, I. D. Munari, M. Mordonini, S. 
Cagnoni\",\"doi\":\"10.1145/2986035.2986042\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We evaluated the effectiveness in classifying food images of a deep-learning approach based on the specifications of Google's image recognition architecture Inception. The architecture is a deep convolutional neural network (DCNN) having a depth of 54 layers. In this study, we fine-tuned this architecture for classifying food images from three well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256. On these datasets we achieved, respectively, 88.28%, 81.45%, and 76.17% as top-1 accuracy and 96.88%, 97.27%, and 92.58% as top-5 accuracy. To the best of our knowledge, these results significantly improve the best published results obtained on the same datasets, while requiring less computation power, since the number of parameters and the computational complexity are much smaller than the competitors?. Because of this, even if it is still rather large, the deep network based on this architecture appears to be at least closer to the requirements for mobile systems.\",\"PeriodicalId\":91925,\"journal\":{\"name\":\"MADiMa'16 : proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management : October 16, 2016, Amsterdam, The Netherlands. International Workshop on Multimedia Assisted Dietary Management (2nd : 2016 : Amsterdam...\",\"volume\":\"10 113 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-10-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"161\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"MADiMa'16 : proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management : October 16, 2016, Amsterdam, The Netherlands. 
International Workshop on Multimedia Assisted Dietary Management (2nd : 2016 : Amsterdam...\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2986035.2986042\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"MADiMa'16 : proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management : October 16, 2016, Amsterdam, The Netherlands. International Workshop on Multimedia Assisted Dietary Management (2nd : 2016 : Amsterdam...","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2986035.2986042","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Food Image Recognition Using Very Deep Convolutional Networks
We evaluated the effectiveness of a deep-learning approach to food image classification based on the specifications of Google's image recognition architecture, Inception. The architecture is a deep convolutional neural network (DCNN) with a depth of 54 layers. In this study, we fine-tuned this architecture to classify food images from three well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256. On these datasets we achieved, respectively, 88.28%, 81.45%, and 76.17% top-1 accuracy and 96.88%, 97.27%, and 92.58% top-5 accuracy. To the best of our knowledge, these results significantly improve on the best published results obtained on the same datasets, while requiring less computation power, since the number of parameters and the computational complexity are much smaller than the competitors'. Because of this, even though it is still rather large, the deep network based on this architecture appears to be at least closer to the requirements of mobile systems.
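The abstract reports results as top-1 and top-5 accuracy. As an illustrative sketch (not code from the paper), these metrics count a prediction as correct when the true class appears among the k highest-scored classes; the function and toy data below are hypothetical:

```python
def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scored classes."""
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the k largest scores for this sample.
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

# Toy example: 3 samples, 4 classes (class scores, e.g. softmax outputs).
scores = [
    [0.1, 0.6, 0.2, 0.1],  # highest score: class 1
    [0.5, 0.1, 0.3, 0.1],  # highest score: class 0
    [0.2, 0.2, 0.5, 0.1],  # highest score: class 2
]
labels = [1, 2, 2]

print(top_k_accuracy(scores, labels, k=1))  # sample 2's label is only 2nd-best: 2/3
print(top_k_accuracy(scores, labels, k=2))  # within top 2 for all samples: 1.0
```

With k=1 this reduces to ordinary classification accuracy; the paper's top-5 figures use k=5 over the dataset's full class set (101, 100, or 256 classes).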