Authors: Javeria Iqbal, M. Yousaf
DOI: 10.1109/ICET.2009.5353210
Venue: 2009 International Conference on Emerging Technologies
Published: 2009-12-11
Hash table based feed forward neural networks: A scalable approach towards think aloud imitation
In this paper, we address the problem of inefficient context modules in recurrent networks (RNs), which form the basis of think aloud: a strategy for imitation. Learning from observation provides an effective way to acquire knowledge of a demonstrated task. To learn complex tasks, rather than merely learning action sequences, the think-aloud imitation learning strategy applies a recurrent network model (RNM) [1]. We propose a dynamic task-imitation architecture that is efficient in both time and storage: the inefficient recurrent nodes are replaced with an updated feed-forward network (FFN). Our modified architecture is based on a hash table, where a single hash store is used instead of multiple recurrent nodes, and a history of input usability is saved to support experience-based task learning. A performance evaluation of this approach supports its use for robot training, and the approach is well suited to applications built on recurrent neural networks, which can replace the inefficient recurrent component with our design.
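The abstract gives no implementation details, but the core idea, replacing recurrent context nodes with a single hash store consulted by a feed-forward layer, can be sketched as follows. This is an illustrative sketch under my own assumptions (class name, weight shapes, and update rule are all hypothetical), not the authors' actual architecture:

```python
import numpy as np

class HashContextFFN:
    """Sketch of a feed-forward layer whose recurrent context is
    replaced by a hash-table store: past hidden states are keyed by a
    hash of the input, so retrieving "history for input usability" is
    an average O(1) dict lookup instead of unrolling recurrent nodes.
    All design details here are assumptions for illustration only."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_hidden, n_in))   # input weights
        self.U = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context weights
        self.store = {}  # the single hash store replacing recurrent nodes

    def _key(self, x):
        # hash the raw input bytes to index the context store
        return hash(np.asarray(x, dtype=np.float64).tobytes())

    def forward(self, x):
        x = np.asarray(x, dtype=np.float64)
        # context comes from the hash store, not from a recurrent loop
        ctx = self.store.get(self._key(x), np.zeros(self.U.shape[0]))
        h = np.tanh(self.W @ x + self.U @ ctx)
        # save this input's hidden state as its stored history
        self.store[self._key(x)] = h
        return h
```

Under this reading, each forward pass costs one hash lookup plus two matrix-vector products, and storage grows with the number of distinct inputs seen, which is where the claimed time and storage efficiency relative to maintaining multiple recurrent nodes would come from.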