Investigating Neural Network Architectures, Techniques, and Datasets for Autonomous Navigation in Simulation

Oliver Chang, Christiana Marchese, Jared Mejia, A. Clark

2021 IEEE Symposium Series on Computational Intelligence (SSCI), December 5, 2021. DOI: 10.1109/SSCI50451.2021.9659907
Neural networks (NNs) are becoming an increasingly important part of mobile robot control systems. Compared with traditional methods, NNs (and other data-driven techniques) produce comparable, if not better, results while requiring less engineering know-how. Training NNs, however, still requires exploring a significant number of architectural, optimization, and evaluation options. In this study, we build a simulation environment, generate three image datasets using distinct techniques, train 652 models (including replicates) using a variety of architectures and paradigms (e.g., classification, regression), and evaluate each model's navigation ability in simulation. Our goal is to explore a large number of model possibilities so that we can select the most promising for future study on a physical device. The training datasets that led to the best-performing models were those that included a significant amount of noise from seemingly inefficient actions. The most promising models explicitly incorporated "memory," wherein the previous action is fed back as an input at the next step. Such models performed as well as or better than conventional convolutional NNs, recurrent NNs, and custom architectures that take two camera frames as input. Although the trained models perform well in an environment matching the distribution of the training dataset, they fail when the simulation environment is altered in seemingly insignificant ways. In robotics research it is often taken for granted that a model with good validation characteristics will perform well on the underlying task, but the results presented here show that the relationship between validation metrics and task performance can be loose.
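To make the "memory" idea concrete, the sketch below shows one plausible way to feed the previous action back in as an input alongside the current camera frame. This is an illustrative PyTorch sketch, not the authors' implementation: the MemoryNav name, layer sizes, input resolution, and discrete three-action command set are all assumptions introduced here for exposition.

```python
# Illustrative sketch (not the paper's code): a navigation policy that
# concatenates the previous action with CNN image features, so the
# network can condition its next command on what it just did.
import torch
import torch.nn as nn

class MemoryNav(nn.Module):  # hypothetical name
    def __init__(self, num_actions: int = 3):  # e.g., left / forward / right (assumed)
        super().__init__()
        # Small CNN encoder over the current camera frame.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Image features + one-hot previous action -> next-action logits.
        self.head = nn.Sequential(
            nn.Linear(32 + num_actions, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, frame: torch.Tensor, prev_action: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(frame)  # (B, 32)
        return self.head(torch.cat([feats, prev_action], dim=1))

# Usage: the previous action is one-hot encoded and carried across steps.
model = MemoryNav()
frame = torch.randn(1, 3, 64, 64)  # assumed input size
prev = torch.nn.functional.one_hot(torch.tensor([1]), 3).float()
logits = model(frame, prev)  # (1, 3) logits over the next action
```

Framing the output as logits over a discrete action set corresponds to the classification paradigm the abstract mentions; under the regression paradigm, the head would instead emit continuous steering or velocity commands.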