Long-term knowledge acquisition in a memory-based epigenetic robot architecture for verbal interaction
F. Pratama, F. Mastrogiovanni, Sungmoon Jeong, N. Chong
2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), published 2015-11-23
DOI: 10.1109/ROMAN.2015.7333563 (https://doi.org/10.1109/ROMAN.2015.7333563)
Citations: 5
Abstract
We present a robot cognitive framework based on (a) a memory-like architecture; and (b) the notion of “context”. We posit that relying solely on machine learning techniques may not be the right approach for long-term, continuous knowledge acquisition. Since we are interested in long-term human-robot interaction, we focus on a scenario where a robot “remembers” relevant events happening in the environment. By visually sensing its surroundings, the robot is expected to infer and remember snapshots of events, and to recall specific past events based on inputs and contextual information from humans. Using a COTS vision framework for the experiment, we show that the robot is able to form “memories” and recall related events based on cues and the context given during the human-robot interaction process.
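The paper itself provides no code, but the recall behaviour described in the abstract (storing event snapshots with contextual tags and retrieving them from partial cues given by a human) can be illustrated with a minimal sketch. The class and field names below (Episode, EpisodicMemory, remember, recall) are hypothetical and do not correspond to the authors' architecture.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List


@dataclass
class Episode:
    """Illustrative 'memory' entry: a snapshot of an observed event plus context tags."""
    timestamp: datetime
    description: str                                        # e.g. "person placed a cup on the table"
    context: Dict[str, str] = field(default_factory=dict)   # e.g. {"person": "Anna", "place": "kitchen"}


class EpisodicMemory:
    """Stores episodes and recalls them from partial contextual cues.
    This is an assumption-laden sketch, not the authors' implementation."""

    def __init__(self) -> None:
        self._episodes: List[Episode] = []

    def remember(self, episode: Episode) -> None:
        self._episodes.append(episode)

    def recall(self, cue_context: Dict[str, str]) -> List[Episode]:
        # Keep only episodes whose stored context matches every cue the human supplies,
        # most recent first, mimicking queries such as "what did Anna do in the kitchen?"
        matches = [
            ep for ep in self._episodes
            if all(ep.context.get(key) == value for key, value in cue_context.items())
        ]
        return sorted(matches, key=lambda ep: ep.timestamp, reverse=True)


if __name__ == "__main__":
    memory = EpisodicMemory()
    memory.remember(Episode(datetime(2015, 6, 1, 10, 15),
                            "person placed a red cup on the table",
                            {"person": "Anna", "place": "kitchen"}))
    memory.remember(Episode(datetime(2015, 6, 1, 11, 0),
                            "person left the room",
                            {"person": "Bob", "place": "kitchen"}))
    for ep in memory.recall({"person": "Anna"}):
        print(ep.timestamp, ep.description)
```

In the actual system, the snapshots would come from the COTS vision pipeline rather than being entered by hand, and the cues would be extracted from verbal interaction; the sketch only shows the cue-to-context matching idea.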