MADiMa'16: Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, October 16, 2016, Amsterdam, The Netherlands. Latest Articles
Food vs Non-Food Classification
F. Ragusa, V. Tomaselli, Antonino Furnari, S. Battiato, G. Farinella
DOI: 10.1145/2986035.2986041
Automatic understanding of food is an important research challenge. Food recognition engines can provide valuable aid for automatically monitoring a patient's diet and food-intake habits directly from images acquired with mobile or wearable cameras. One of the first challenges in the field is discriminating images that contain food from those that do not. Existing approaches to food vs non-food classification have used both shallow and deep representations, in combination with multi-class or one-class classification schemes. However, they have generally been evaluated with different methodologies and data, making a real comparison of their performance unfeasible. In this paper, we consider the most recent classification approaches employed for food vs non-food classification and compare them on a publicly available dataset. Different deep-learning-based representations and classification methods are considered and evaluated.
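The one-class route mentioned in the abstract can be sketched minimally: fit a model of food features only and reject anything that lies too far from them. The sketch below is illustrative, not the paper's method; the synthetic vectors stand in for deep CNN embeddings, and the `fit_one_class`/`is_food` names are hypothetical.

```python
import numpy as np

def fit_one_class(food_feats, percentile=95):
    """Fit a toy one-class 'food' model: the centroid of food features plus
    a distance threshold covering `percentile`% of the training set."""
    centroid = food_feats.mean(axis=0)
    dists = np.linalg.norm(food_feats - centroid, axis=1)
    return centroid, np.percentile(dists, percentile)

def is_food(feat, centroid, threshold):
    """Accept a feature vector as 'food' if it falls inside the threshold."""
    return np.linalg.norm(feat - centroid) <= threshold

# Synthetic stand-ins for CNN embeddings: food features cluster near 1.0
# in every dimension (purely illustrative data).
rng = np.random.default_rng(0)
food = rng.normal(1.0, 0.1, size=(200, 8))
centroid, thr = fit_one_class(food)
print(is_food(centroid, centroid, thr))          # True: zero distance
print(is_food(np.full(8, -1.0), centroid, thr))  # False: far from the food cluster
```

A multi-class variant would instead train on labelled food and non-food sets; the one-class form only needs food examples, which matches the comparison the paper sets up.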
GoCARB: A Smartphone Application for Automatic Assessment of Carbohydrate Intake
Joachim Dehais, M. Anthimopoulos, S. Mougiakakou
DOI: 10.1145/2986035.2986046
Dietary and lifestyle management rely on objective and accurate diet assessment. Assessing dietary intake, however, requires training and skill, and even trained individuals often misjudge what they eat, even under strict constraints [1]. These issues emphasize the need for objective, accurate dietary assessment tools that can be delivered to, and used directly by, the public to monitor their intake.
Session details: Poster and Demo Session
M. Anthimopoulos
DOI: 10.1145/3257999
Innovative Technology and Dietary Assessment in Low-Income Countries
J. Coates, Winnie Bell, Brooke Colaiezzi
DOI: 10.1145/2986035.2986048
Learning to Reuse Visual Knowledge
Thomas Mensink
DOI: 10.1145/2986035.2991077
The central question in my talk is how existing knowledge, in the form of available labeled datasets, can be (re-)used to solve a new, and possibly unrelated, image classification task. This brings together two of my recent research directions, both of which I will discuss. First, I will present some recent work on zero-shot learning, where we use ImageNet objects and semantic embeddings for various classification tasks. Second, I will present our work on active learning. To reuse existing knowledge, we propose using zero-shot classifiers as prior information that guides the learning process by linking the new task to existing labels. The work discussed in this talk has been published at ACM MM, CVPR, ECCV, and ICCV.
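The zero-shot mechanism the abstract alludes to can be illustrated (this is a generic sketch, not the speaker's implementation): score an image embedding against semantic embeddings of the class names and pick the closest. The toy vectors and the `zero_shot_predict` name are assumptions.

```python
import numpy as np

def zero_shot_predict(img_emb, class_embs):
    """Return the index of the class whose semantic embedding is most
    cosine-similar to the image embedding; no labelled examples of the
    target classes are needed."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = unit(class_embs) @ unit(img_emb)
    return int(np.argmax(sims))

# Toy semantic space: two classes with orthogonal 3-d embeddings.
classes = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
print(zero_shot_predict(np.array([0.9, 0.1, 0.0]), classes))  # → 0
```

Used as a prior for active learning, such scores can rank which unlabeled images are worth annotating first, linking the new task to existing labels as the abstract describes.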
Session details: Keynote Address
E. Gavves
DOI: 10.1145/3257995

Session details: Keynote Address
J. Coates
DOI: 10.1145/3257993
A Novel Perspective of Image Search for Tracking and Actions
E. Gavves
DOI: 10.1145/2986035.2991078
In this talk I will focus on how image retrieval and visual search can be repurposed for tasks traditionally considered very different. More specifically, I will first discuss a new, retrieval-inspired tracker that is radically different from state-of-the-art trackers: it requires no model updating, no occlusion detection, no combination of trackers, and no geometric matching, yet still delivers state-of-the-art performance on online tracking benchmarks (OTB) and other very challenging YouTube videos. Departing from tracking, I will then focus on the relation between image search and modalities that are not, strictly speaking, images, such as motion. Specifically, I will discuss a novel method for converting motion, or other kinds of sequential, dynamical input, into standalone single images, so-called "dynamic images". By encoding all the relevant dynamic information into simple single images, dynamic images allow the use of existing, off-the-shelf convolutional neural networks or handcrafted machine learning algorithms. The work presented in this talk has been published at CVPR 2016.
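As a rough illustration of the dynamic-image idea (a sketch, not necessarily the exact method from the talk): a clip can be collapsed into one image by a temporally weighted sum in which later frames contribute positively and earlier ones negatively. The linear weighting used here follows the common approximate-rank-pooling form, and the `dynamic_image` helper is a hypothetical name.

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a clip of shape (T, H, W) into one 'dynamic image' via a
    weighted sum: frame t (1-based) gets weight 2t - T - 1, so later frames
    count positively and earlier ones negatively, encoding the temporal trend."""
    T = len(frames)
    weights = 2.0 * np.arange(1, T + 1) - T - 1
    return np.tensordot(weights, np.asarray(frames, dtype=float), axes=1)

# A toy clip whose brightness grows linearly over time: the resulting
# dynamic image is uniformly positive, capturing the upward trend.
clip = np.stack([np.full((4, 4), t) for t in range(5)])
di = dynamic_image(clip)
print(di[0, 0])  # → 20.0
```

The single-image output is exactly what lets standard image CNNs consume motion data without any architectural change, which is the point the abstract makes.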
Session details: Oral Paper Session 1
G. Farinella
DOI: 10.1145/3257994
Food Image Segmentation for Dietary Assessment
Joachim Dehais, M. Anthimopoulos, S. Mougiakakou
DOI: 10.1145/2986035.2986047
The prevalence of diet-related chronic diseases strongly impacts global health and health services. Currently, managing or treating these diseases takes training and strong personal involvement. One way to assist with dietary assessment is through computer vision systems that recognize foods and their portion sizes from images and output the corresponding nutritional information. When multiple food items may be present, a food segmentation stage should also be applied before recognition. In this study, we propose a method to detect and segment the food on already-detected dishes in an image. The method combines region growing/merging techniques with deep CNN-based food border detection. A semi-automatic version of the method is also presented that improves the result with minimal user input. The proposed methods are trained and tested on non-overlapping subsets of a food image database of 821 images, taken under challenging conditions and annotated manually. The automatic and semi-automatic dish segmentation methods reached average accuracies of 88% and 92%, respectively, in roughly 0.5 seconds per image.
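The region-growing half of the pipeline can be sketched in isolation (the CNN border detector is omitted); the toy image and the `region_grow` helper are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    intensity differs from the running region mean by at most `tol`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    frontier.append((ny, nx))
    return mask

# Toy "plate": a bright 3x3 dish on a dark background.
img = np.zeros((6, 6))
img[1:4, 1:4] = 200
mask = region_grow(img, (2, 2), tol=10)
print(mask.sum())  # → 9 (exactly the dish pixels)
```

In the paper's setting, regions grown this way would then be merged or cut back against the CNN-predicted food borders; here the grower alone already isolates the uniform toy dish.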