Karel Kuchar, E. Holasova, Lukas Hrboticky, Martin Rajnoha, Radim Burget
{"title":"Supervised Learning in Multi-Agent Environments Using Inverse Point of View","authors":"Karel Kuchar, E. Holasova, Lukas Hrboticky, Martin Rajnoha, Radim Burget","doi":"10.1109/TSP.2019.8768860","DOIUrl":null,"url":null,"abstract":"There are many approaches that are being used in multi-agent environment to learn agents’ behaviour. Semisupervised approaches such as reinforcement learning (RL) or genetic programming (GP) are one of the most frequently used. Disadvantage of these methods is they are relatively computational resources demanding, suffers from vanishing gradient during when machine learning approach is used and has often non-convex optimization function, which makes behaviour learning challenging. This paper introduces a method for data gathering for supervised machine learning using agent’s inverse point of view. Proposed method explores agent’s neighboring environment and collects data also from surrounding agents instead of traditional approaches that uses only agents’ sensors and knowledge. Advantage of this approach is, the collected data can be used with supervised machine learning, which is significantly less computationally demanding when compared to RL or GP. A proposed method was tested and demonstrated on Robocode game, where agents (i.e. tanks) were trained to avoid opponent tanks missiles.","PeriodicalId":399087,"journal":{"name":"2019 42nd International Conference on Telecommunications and Signal Processing (TSP)","volume":"8 9","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 42nd International Conference on Telecommunications and Signal Processing (TSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TSP.2019.8768860","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Many approaches are used in multi-agent environments to learn agents' behaviour. Semi-supervised approaches such as reinforcement learning (RL) or genetic programming (GP) are among the most frequently used. The disadvantage of these methods is that they demand relatively large computational resources, suffer from vanishing gradients when a machine learning approach is used, and often have a non-convex optimization function, which makes behaviour learning challenging. This paper introduces a method of data gathering for supervised machine learning using the agent's inverse point of view. The proposed method explores the agent's neighbouring environment and also collects data from surrounding agents, in contrast to traditional approaches that use only the agent's own sensors and knowledge. The advantage of this approach is that the collected data can be used with supervised machine learning, which is significantly less computationally demanding than RL or GP. The proposed method was tested and demonstrated on the Robocode game, where agents (i.e. tanks) were trained to avoid opponent tanks' missiles.
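The abstract contains no code, but the general idea it describes, gathering labelled samples from the perspective of surrounding agents (here, the firing opponent) and feeding them to an ordinary supervised learner instead of running RL or GP, can be illustrated with a minimal sketch. The feature set, the labelling rule, and the RandomForest classifier below are assumptions made purely for illustration; they are not taken from the paper.

```python
# Illustrative sketch only: train a plain supervised classifier on samples
# described from the opponent's ("inverse") point of view.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: each row is one fired missile, described from the
# firing tank's point of view (relative bearing to the target, distance,
# bullet power, and the target's velocity at the moment of firing).
n_samples = 1000
X = np.column_stack([
    rng.uniform(-180, 180, n_samples),   # relative bearing [deg]
    rng.uniform(0, 800, n_samples),      # distance [px]
    rng.uniform(0.1, 3.0, n_samples),    # bullet power
    rng.uniform(-8, 8, n_samples),       # target velocity [px/turn]
])
# Hypothetical label: 1 if the target's evasive manoeuvre avoided the hit, else 0.
y = rng.integers(0, 2, n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A standard supervised learner replaces the costlier RL/GP behaviour search.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With real data gathered during play, the trained classifier could be queried at decision time to choose the evasive manoeuvre predicted to avoid the incoming missile; the random data above only demonstrates the data layout and training loop.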