MADiMa'16: Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management (October 16, 2016, Amsterdam, The Netherlands)

Latest publications:
{"title":"Session details: Keynote Address","authors":"Thomas Mensink","doi":"10.1145/3257997","DOIUrl":"https://doi.org/10.1145/3257997","url":null,"abstract":"","PeriodicalId":91925,"journal":{"name":"MADiMa'16 : proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management : October 16, 2016, Amsterdam, The Netherlands. International Workshop on Multimedia Assisted Dietary Management (2nd : 2016 : Amsterdam...","volume":"85 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77006797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Food/Non-food Image Classification and Food Categorization using Pre-Trained GoogLeNet Model
Ashutosh Singla, Lin Yuan, T. Ebrahimi. DOI: 10.1145/2986035.2986039. Published 2016-10-16.
The recent past has seen many developments in the field of image-based dietary assessment, for which food image classification and recognition are crucial steps. In the last couple of years, advances in deep learning and convolutional neural networks have proved a boon for image classification and recognition tasks, and specifically for food recognition, because of the wide variety of food items. In this paper, we report experiments on food/non-food classification and food recognition using a GoogLeNet model based on a deep convolutional neural network. The experiments were conducted on two image datasets of our own creation, with images collected from existing image datasets, social media, and imaging devices such as smartphones and wearable cameras. Experimental results show a high accuracy of 99.2% on food/non-food classification and 83.6% on food category recognition.
{"title":"Session details: Oral Paper Session 3","authors":"Keiji Yanai","doi":"10.1145/3257998","DOIUrl":"https://doi.org/10.1145/3257998","url":null,"abstract":"","PeriodicalId":91925,"journal":{"name":"MADiMa'16 : proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management : October 16, 2016, Amsterdam, The Netherlands. International Workshop on Multimedia Assisted Dietary Management (2nd : 2016 : Amsterdam...","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91502882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Oral Paper Session 2","authors":"S. Mougiakakou","doi":"10.1145/3257996","DOIUrl":"https://doi.org/10.1145/3257996","url":null,"abstract":"","PeriodicalId":91925,"journal":{"name":"MADiMa'16 : proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management : October 16, 2016, Amsterdam, The Netherlands. International Workshop on Multimedia Assisted Dietary Management (2nd : 2016 : Amsterdam...","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91048171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Food Image Recognition Using Very Deep Convolutional Networks
Hamid Hassannejad, G. Matrella, P. Ciampolini, I. De Munari, M. Mordonini, S. Cagnoni. DOI: 10.1145/2986035.2986042. Published 2016-10-16.
We evaluated the effectiveness of a deep-learning approach to food image classification based on the specifications of Google's image recognition architecture Inception. The architecture is a deep convolutional neural network (DCNN) with a depth of 54 layers. In this study, we fine-tuned this architecture for classifying food images from three well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256. On these datasets we achieved, respectively, 88.28%, 81.45%, and 76.17% top-1 accuracy and 96.88%, 97.27%, and 92.58% top-5 accuracy. To the best of our knowledge, these results significantly improve on the best published results for the same datasets, while requiring less computational power, since the number of parameters and the computational complexity are much smaller than those of competing models. For this reason, even though it is still rather large, a deep network based on this architecture appears at least closer to meeting the requirements of mobile systems.
Foodness Proposal for Multiple Food Detection by Training of Single Food Images
Wataru Shimoda, Keiji Yanai. DOI: 10.1145/2986035.2986043. Published 2016-10-16.
We propose a CNN-based "food-ness" proposal method that requires neither pixel-wise annotation nor bounding-box annotation. Several proposal methods have been proposed to detect regions with high "object-ness". However, many of them generate a large number of candidates in order to raise the recall rate, and given the recent advent of deeper CNNs, methods that generate so many proposals incur processing times too long for practical use. Meanwhile, the fully convolutional network (FCN) was proposed, which localizes target objects directly. An FCN saves computational cost, although it is essentially equivalent to a sliding-window search; this approach has made large progress and achieved significant success in various tasks. In this paper, we therefore propose an intermediate approach between the traditional proposal approach and the fully convolutional approach: a novel proposal method that generates high "food-ness" regions using fully convolutional networks and a back-propagation-based approach, trained on food images gathered from the Web.
Snap, Eat, RepEat: A Food Recognition Engine for Dietary Logging
Michele Merler, Hui Wu, Rosario A. Uceda-Sosa, Q. Nguyen, John R. Smith. DOI: 10.1145/2986035.2986036. Published 2016-10-16.
We present a system that assists users with dietary logging by recognizing food from pictures snapped on their phones, in two different scenarios. In the first scenario, called "food in context", we exploit a user's GPS information to determine which restaurant they are having a meal at, thereby restricting the categories to recognize to the set of items on that menu. Such context also allows us to report precise calorie information for the meal, since restaurant chains tend to standardize portions and publish the dietary information of each dish. In the second scenario, called "foods in the wild", we try to recognize a cooked meal from a picture that could be snapped anywhere. We perform extensive food recognition experiments in both scenarios, demonstrating the feasibility of our approach at scale on a newly introduced dataset of 105K images covering 500 food categories.
Performance Evaluation Methods of Computer Vision Systems for Meal Assessment
M. Anthimopoulos, Joachim Dehais, S. Mougiakakou. DOI: 10.1145/2986035.2986045. Published 2016-10-16.
Several systems have been proposed for automatic food intake assessment and dietary support by analyzing meal images captured with smartphones. A typical system consists of computational stages that detect/segment the foods present, recognize each of them, compute their volume, and finally estimate the corresponding nutritional information. Although this newborn field has made remarkable progress over the last few years, the lack of standardized datasets and established evaluation frameworks has made comparison between methods difficult and has ultimately prevented a formal definition of the problem. In this paper, we present an overview of the datasets and protocols used for evaluating the computer vision stages of the proposed automatic meal assessment systems.
Food Search Based on User Feedback to Assist Image-based Food Recording Systems
Sosuke Amano, Shota Horiguchi, K. Aizawa, Kazuki Maeda, Masanori Kubota, Makoto Ogawa. DOI: 10.1145/2986035.2986037. Published 2016-10-16.
Food diaries or diet journals are thought to be effective for improving users' dietary lives. One important challenge in this field is assisting users in recording their daily food intake. In recent years, food image recognition has attracted considerable research interest as a new technology for recording users' food intake. However, there are so many types of food that it is unrealistic to expect a system to recognize all of them. In this paper, we propose an optimal combination of image recognition and interactive search for recording users' food intake. The image recognition generates a list of candidate names for a given food picture; the user chooses the name closest to the meal, which triggers an associative food search based on food contents such as ingredients. We show that the proposed system efficiently assists users in maintaining food journals.
An Automatic Calorie Estimation System of Food Images on a Smartphone
Koichi Okamoto, Keiji Yanai. DOI: 10.1145/2986035.2986040. Published 2016-10-16.
In recent years, owing to growing health consciousness about eating, many people pay attention to their eating habits, and some record their daily diet regularly. To assist them, many mobile applications for recording everyday meals have been released, some of which employ food image recognition that can estimate not only food names but also calories. However, most such applications have problems, especially with usability. In this paper, we therefore propose a novel single-image-based food calorie estimation system that runs on a smartphone as a standalone application, without external recognition servers. The proposed system carries out food region segmentation, food region categorization, and calorie estimation automatically. Experiments and a user study confirmed the effectiveness of the proposed system.