Ada V Taylor, R. Kaufman, Michael Huang, H. Admoni
{"title":"饭店群体活动识别以解决潜在需求:个案研究","authors":"Ada V Taylor, R. Kaufman, Michael Huang, H. Admoni","doi":"10.1109/RO-MAN53752.2022.9900691","DOIUrl":null,"url":null,"abstract":"Enabling robots to identify when humans need assistance is key to being able to provide help that is both proactive and efficient. This challenge is particularly difficult for humans eating a meal in a restaurant, a context which is dense with interlaced social elements such as conversation in addition to functional tasks such as eating. We investigated the challenge of identifying human dining activities from single-viewpoint footage by collecting and annotating the individual activities of five two-person meals. From this process, we found that addressing the question of identifying meal phases and overall neediness requires identifying an underlying group state for the table as a whole. We report on the individual activities and group states, as well as the interdependencies between these factors that can be leveraged to both provide and measure effective robotic restaurant service. In addition to the insights revealed by this dataset, we describe preliminary attempts to create an automated classification system for these activities.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Group Activity Recognition in Restaurants to Address Underlying Needs: A Case Study\",\"authors\":\"Ada V Taylor, R. Kaufman, Michael Huang, H. Admoni\",\"doi\":\"10.1109/RO-MAN53752.2022.9900691\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Enabling robots to identify when humans need assistance is key to being able to provide help that is both proactive and efficient. 
This challenge is particularly difficult for humans eating a meal in a restaurant, a context which is dense with interlaced social elements such as conversation in addition to functional tasks such as eating. We investigated the challenge of identifying human dining activities from single-viewpoint footage by collecting and annotating the individual activities of five two-person meals. From this process, we found that addressing the question of identifying meal phases and overall neediness requires identifying an underlying group state for the table as a whole. We report on the individual activities and group states, as well as the interdependencies between these factors that can be leveraged to both provide and measure effective robotic restaurant service. In addition to the insights revealed by this dataset, we describe preliminary attempts to create an automated classification system for these activities.\",\"PeriodicalId\":250997,\"journal\":{\"name\":\"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)\",\"volume\":\"56 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/RO-MAN53752.2022.9900691\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 31st IEEE International Conference on Robot and Human Interactive Communication 
(RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN53752.2022.9900691","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Group Activity Recognition in Restaurants to Address Underlying Needs: A Case Study
Enabling robots to identify when humans need assistance is key to providing help that is both proactive and efficient. This challenge is particularly difficult for humans eating a meal in a restaurant, a context dense with interlaced social elements, such as conversation, in addition to functional tasks such as eating. We investigated the challenge of identifying human dining activities from single-viewpoint footage by collecting and annotating the individual activities of five two-person meals. From this process, we found that identifying meal phases and overall neediness requires identifying an underlying group state for the table as a whole. We report on the individual activities and group states, as well as the interdependencies between these factors that can be leveraged to both provide and measure effective robotic restaurant service. In addition to the insights revealed by this dataset, we describe preliminary attempts to create an automated classification system for these activities.