AR Food Changer using Deep Learning And Cross-Modal Effects
Junya Ueda, K. Okajima
2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2019
DOI: 10.1109/AIVR46125.2019.00025
Citations: 11
Abstract
We propose an AR application that changes the appearance of food without AR markers by applying machine learning and image processing. Modifying the appearance of real food is difficult because food shapes are atypical and deform while eating. We therefore developed a real-time object region extraction method that combines two complementary approaches, one based on color information and one based on edge information, together with a deep learning module trained on a small amount of data, to extract food regions with high accuracy and stability. In addition, we implemented several novel methods to improve the accuracy and reliability of the system. Our experiments show that perceived taste and oral texture were affected by the visual texture of the food. The application can change not only the appearance of real food in real time but also its perceived taste and texture; in this sense, it can be termed an "AR food changer".
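The core idea of combining a color cue and an edge cue to isolate a deformable food region can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the threshold values, the per-channel color test, the gradient-based edge detector, and the rule for combining the two masks are all illustrative assumptions standing in for the paper's method.

```python
import numpy as np

def color_mask(img, lo, hi):
    # Illustrative color cue: True where every channel lies in [lo, hi].
    return np.all((img >= lo) & (img <= hi), axis=-1)

def edge_mask(img, thresh=30.0):
    # Illustrative edge cue: gradient magnitude of the mean channel,
    # True at pixels with a strong intensity change.
    gray = img.mean(axis=-1)
    gy, gx = np.gradient(gray)
    return np.hypot(gx, gy) > thresh

def food_region(img, lo, hi, thresh=30.0):
    # Combine the two cues: keep color-consistent pixels away from strong
    # edges, so the extracted region stays stable as the food deforms.
    return color_mask(img, lo, hi) & ~edge_mask(img, thresh)
```

For example, on a synthetic frame containing a uniformly colored patch, `food_region` marks the patch interior while excluding the background and the patch boundary. A real system would replace these hand-set thresholds with the learned module described in the abstract.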