
MADiMa'16: Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management (October 16, 2016, Amsterdam, The Netherlands)

Session details: Keynote Address
Thomas Mensink
DOI: 10.1145/3257997
Citations: 0
Food/Non-food Image Classification and Food Categorization using Pre-Trained GoogLeNet Model
Ashutosh Singla, Lin Yuan, T. Ebrahimi
The recent past has seen many developments in the field of image-based dietary assessment. Food image classification and recognition are crucial steps for dietary assessment. In the last couple of years, advances in deep learning and convolutional neural networks have proved to be a boon for image classification and recognition tasks, and specifically for food recognition, given the wide variety of food items. In this paper, we report experiments on food/non-food classification and food recognition using a GoogLeNet model based on a deep convolutional neural network. The experiments were conducted on two image datasets that we created ourselves, with images collected from existing image datasets, social media, and imaging devices such as smartphones and wearable cameras. Experimental results show a high accuracy of 99.2% on food/non-food classification and 83.6% on food category recognition.
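The paper's fine-tuning recipe (a pre-trained convolutional trunk kept frozen, with only a small classifier head retrained for the food/non-food task) can be illustrated with a stdlib-only sketch. This is not the authors' code: the "embeddings" stand in for GoogLeNet features, and the head is plain logistic regression.

```python
import math

def train_binary_head(feats, labels, lr=0.5, epochs=200):
    """Train a logistic-regression 'head' on frozen image embeddings.

    Stands in for replacing the final layer of a pre-trained network:
    the convolutional trunk stays fixed and only this linear
    food/non-food classifier is learned.
    """
    dim = len(feats[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """1 = food, 0 = non-food."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

In a real pipeline the `feats` would come from the frozen GoogLeNet trunk; here any linearly separable toy embeddings demonstrate the head-training step.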
DOI: 10.1145/2986035.2986039
Citations: 157
Session details: Oral Paper Session 3
Keiji Yanai
DOI: 10.1145/3257998
Citations: 0
Session details: Oral Paper Session 2
S. Mougiakakou
DOI: 10.1145/3257996
Citations: 0
Food Image Recognition Using Very Deep Convolutional Networks
Hamid Hassannejad, G. Matrella, P. Ciampolini, I. D. Munari, M. Mordonini, S. Cagnoni
We evaluated the effectiveness of a deep-learning approach to food image classification based on the specifications of Google's image recognition architecture, Inception. The architecture is a deep convolutional neural network (DCNN) with a depth of 54 layers. In this study, we fine-tuned this architecture to classify food images from three well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256. On these datasets we achieved, respectively, top-1 accuracies of 88.28%, 81.45%, and 76.17%, and top-5 accuracies of 96.88%, 97.27%, and 92.58%. To the best of our knowledge, these results significantly improve on the best published results obtained on the same datasets, while requiring less computational power, since the number of parameters and the computational complexity are much smaller than those of competing approaches. For this reason, even though it is still rather large, the deep network based on this architecture appears to be at least closer to the requirements of mobile systems.
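The top-1 and top-5 accuracies reported above follow the standard definition: a prediction counts as a top-k hit if the true label is among the k highest-scored classes. A minimal stdlib implementation of that metric (not the authors' evaluation code):

```python
def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k
    highest-scored classes.

    scores: list of per-class score lists, one per sample
    labels: list of true class indices
    """
    hits = 0
    for s, y in zip(scores, labels):
        # indices of the k largest scores
        topk = sorted(range(len(s)), key=lambda c: s[c], reverse=True)[:k]
        hits += y in topk
    return hits / len(labels)
```

With k=1 this is ordinary classification accuracy; k=5 is the more forgiving metric commonly quoted for large food-category sets.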
DOI: 10.1145/2986035.2986042
Citations: 161
Foodness Proposal for Multiple Food Detection by Training of Single Food Images
Wataru Shimoda, Keiji Yanai
We propose a CNN-based "food-ness" proposal method that requires neither pixel-wise annotation nor bounding-box annotation. Several proposal methods have been introduced so far to detect regions with high "object-ness". However, many of them generate a large number of candidates in order to raise the recall rate. Given the recent advent of deeper CNNs, such methods that generate many proposals incur processing times that are impractical. Meanwhile, the fully convolutional network (FCN) was proposed, which localizes target objects directly. An FCN saves computational cost, although it is essentially equivalent to a sliding-window search. This approach has made great progress and achieved significant success in various tasks. In this paper, we therefore propose an intermediate approach between the traditional proposal approach and the fully convolutional approach. In particular, we propose a novel method that generates high "food-ness" regions using fully convolutional networks and a back-propagation-based approach, trained on food images gathered from the Web.
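The final step of such a pipeline, turning a dense "food-ness" map into a handful of region proposals, can be sketched without any deep-learning machinery: threshold the map and group connected above-threshold cells into bounding boxes. This is an illustrative stand-in, not the paper's method; the heat map here is just a 2-D list of scores.

```python
def food_proposals(heat, thresh=0.5):
    """Convert a 2-D 'food-ness' heat map into bounding-box proposals:
    threshold the map, then group 4-connected above-threshold cells.

    Returns boxes as (row_min, col_min, row_max, col_max).
    """
    h, w = len(heat), len(heat[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for i in range(h):
        for j in range(w):
            if heat[i][j] >= thresh and not seen[i][j]:
                # flood-fill one connected component
                stack, cells = [(i, j)], []
                seen[i][j] = True
                while stack:
                    r, c = stack.pop()
                    cells.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and heat[nr][nc] >= thresh
                                and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                rs = [r for r, _ in cells]
                cs = [c for _, c in cells]
                boxes.append((min(rs), min(cs), max(rs), max(cs)))
    return boxes
```

Two separate high-scoring blobs yield two proposals, which is exactly the property needed for multiple-food detection from a single map.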
DOI: 10.1145/2986035.2986043
Citations: 15
Snap, Eat, RepEat: A Food Recognition Engine for Dietary Logging
Michele Merler, Hui Wu, Rosario A. Uceda-Sosa, Q. Nguyen, John R. Smith
We present a system to assist users in their dietary logging habits, performing food recognition from pictures snapped on their phone in two different scenarios. In the first scenario, called "Food in context", we exploit a user's GPS information to determine which restaurant they are having a meal at, thereby restricting the categories to recognize to the set of items on the menu. Such context also allows us to report precise calorie information to the user about their meal, since restaurant chains tend to standardize portions and publish the dietary information of each meal. In the second scenario, called "Foods in the wild", we try to recognize a cooked meal from a picture that could be snapped anywhere. We perform extensive food recognition experiments on both scenarios, demonstrating the feasibility of our approach at scale on a newly introduced dataset with 105K images across 500 food categories.
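The "food in context" idea reduces to constraining the classifier's label space: the same scores are used, but the argmax is taken only over items on the GPS-inferred menu. A minimal sketch (the scores and menu items are hypothetical):

```python
def recognize(scores, menu=None):
    """Pick the highest-scoring label.

    scores: dict mapping food label -> classifier score
    menu:   optional set of labels for the restaurant inferred from
            GPS ('food in context'); None means 'foods in the wild',
            i.e. the full label space.
    """
    if menu is not None:
        scores = {k: v for k, v in scores.items() if k in menu}
    return max(scores, key=scores.get)
```

Restricting the candidate set both raises accuracy (fewer confusable classes) and makes per-item calorie lookup possible, since chain menus publish standardized nutrition data.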
DOI: 10.1145/2986035.2986036
Citations: 36
Performance Evaluation Methods of Computer Vision Systems for Meal Assessment
M. Anthimopoulos, Joachim Dehais, S. Mougiakakou
Several systems have been proposed for automatic food intake assessment and dietary support by analyzing meal images captured by smartphones. A typical system consists of computational stages that detect/segment the foods present, recognize each of them, compute their volume, and finally estimate the corresponding nutritional information. Although this nascent field has made remarkable progress over the last few years, the lack of standardized datasets and established evaluation frameworks has made comparison between methods difficult and has ultimately prevented a formal definition of the problem. In this paper, we present an overview of the datasets and protocols used for evaluating the computer vision stages of the proposed automatic meal assessment systems.
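Evaluation protocols for the segmentation stage of such systems commonly score predicted food regions against ground truth with intersection-over-union; a stdlib sketch of that metric (offered as a typical ingredient of such frameworks, not as this paper's specific protocol):

```python
def iou(pred, truth):
    """Intersection-over-union between a predicted and a ground-truth
    region, each given as an iterable of pixel coordinates."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return len(pred & truth) / len(pred | truth)
```

An IoU threshold (often 0.5) then decides whether a predicted region counts as a correct detection, which is one of the protocol choices a standardized framework would have to fix.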
DOI: 10.1145/2986035.2986045
Citations: 1
Food Search Based on User Feedback to Assist Image-based Food Recording Systems
Sosuke Amano, Shota Horiguchi, K. Aizawa, Kazuki Maeda, Masanori Kubota, Makoto Ogawa
Food diaries, or diet journals, are thought to be effective for improving users' dietary lives. One important challenge in this field is assisting users in recording their daily food intake. In recent years, food image recognition has attracted considerable research interest as a new technology for helping record users' food intake. However, there are so many types of food that it is unrealistic to expect a system to recognize all of them. In this paper, we propose an optimal combination of image recognition and interactive search for recording users' food intake. The image recognition generates a list of candidate names for a given food picture. The user chooses the name closest to the meal, which triggers an associative food search based on food contents, such as ingredients. We show that the proposed system efficiently assists users in maintaining food journals.
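The associative search step, expanding from the user's confirmed dish to related dishes via shared ingredients, can be sketched as ranking by ingredient overlap. The database and dish names below are hypothetical; the paper does not specify this exact scoring.

```python
def related_foods(chosen, ingredients_db):
    """Rank other foods by how many ingredients they share with the
    dish the user just confirmed; drop foods with no overlap.

    ingredients_db: dict mapping food name -> list of ingredients
    """
    base = set(ingredients_db[chosen])
    scored = [(len(base & set(ings)), name)
              for name, ings in ingredients_db.items() if name != chosen]
    scored.sort(key=lambda t: (-t[0], t[1]))  # most overlap first
    return [name for overlap, name in scored if overlap > 0]
```

After the user picks a candidate name from the recognizer's list, this kind of lookup surfaces near-matches so a misrecognized dish is still only one tap away.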
DOI: 10.1145/2986035.2986037
Citations: 2
An Automatic Calorie Estimation System of Food Images on a Smartphone
Koichi Okamoto, Keiji Yanai
In recent years, with the rise of health consciousness about eating, many people pay attention to their eating habits, and some record their daily diet regularly. To assist them, many mobile applications for recording everyday meals have been released. Some of them employ food image recognition that can estimate not only food names but also food calories. However, most such applications have problems, especially with usability. In this paper, we propose a novel single-image-based food calorie estimation system that runs on a smartphone as a standalone application, without external recognition servers. The proposed system carries out food region segmentation, food region categorization, and calorie estimation automatically. Experiments and a user study on the proposed system confirmed its effectiveness.
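The three-stage pipeline (segment food regions, categorize each region, estimate calories) can be caricatured in a few lines once the first two stages have run: each region contributes its area times a per-category calorie density. The densities and the area-based model below are illustrative assumptions, not the paper's actual estimator.

```python
# Hypothetical per-area calorie densities (kcal per cm^2 of plate area).
CAL_PER_CM2 = {"rice": 5.2, "salad": 0.8}

def estimate_calories(regions, plate_area_cm2):
    """Sum per-region calorie estimates.

    regions: list of (category, pixel_fraction) pairs produced by the
             segmentation + categorization stages, where pixel_fraction
             is the region's share of the plate area
    plate_area_cm2: real-world plate area used to scale pixels to cm^2
    """
    total = 0.0
    for category, pixel_fraction in regions:
        total += CAL_PER_CM2[category] * pixel_fraction * plate_area_cm2
    return total
```

A real system would estimate volume rather than area and use a food-composition table per category; the sketch only shows how the stages compose into one number.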
DOI: 10.1145/2986035.2986040
Citations: 57