Aaron Rocha-Rocha, E. M. D. Cote, S. Hernández, E. Succar
{"title":"多智能体系统的冲突解决:平衡最优性和学习速度","authors":"Aaron Rocha-Rocha, E. M. D. Cote, S. Hernández, E. Succar","doi":"10.1109/MICAI.2012.16","DOIUrl":null,"url":null,"abstract":"Many real world applications demand solutions that are difficult to implement. It is common practice for system designers to recur to multiagent theory, where the problem at hand is broken in sub-problems and each is handled by an autonomous agent. Notwithstanding, new questions emerge, like How should a problem be broken? What the task of each agent should be? And What information should they need to process their task? In addition, conflicts between agents' partial solutions (actions) may arise as a consequence of their autonomy. In this spirit, another question would be how should conflicts be solved? In this paper we conduct a study to answer some of those questions under a multiagent learning framework. The proposed framework guarantees an optimal solution to the original problem, at the cost of a low learning speed, but can be tuned to balance learning speed and optimality. We present an experimental analysis that shows learning curves until convergence to optimality, illustrating the trade-offs between learning speeds and optimality.","PeriodicalId":348369,"journal":{"name":"2012 11th Mexican International Conference on Artificial Intelligence","volume":"44 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Conflict Resolution in Multiagent Systems: Balancing Optimality and Learning Speed\",\"authors\":\"Aaron Rocha-Rocha, E. M. D. Cote, S. Hernández, E. Succar\",\"doi\":\"10.1109/MICAI.2012.16\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many real world applications demand solutions that are difficult to implement. It is common practice for system designers to recur to multiagent theory, where the problem at hand is broken in sub-problems and each is handled by an autonomous agent. Notwithstanding, new questions emerge, like How should a problem be broken? What the task of each agent should be? And What information should they need to process their task? In addition, conflicts between agents' partial solutions (actions) may arise as a consequence of their autonomy. In this spirit, another question would be how should conflicts be solved? In this paper we conduct a study to answer some of those questions under a multiagent learning framework. The proposed framework guarantees an optimal solution to the original problem, at the cost of a low learning speed, but can be tuned to balance learning speed and optimality. 
We present an experimental analysis that shows learning curves until convergence to optimality, illustrating the trade-offs between learning speeds and optimality.\",\"PeriodicalId\":348369,\"journal\":{\"name\":\"2012 11th Mexican International Conference on Artificial Intelligence\",\"volume\":\"44 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-10-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 11th Mexican International Conference on Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MICAI.2012.16\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 11th Mexican International Conference on Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MICAI.2012.16","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Conflict Resolution in Multiagent Systems: Balancing Optimality and Learning Speed
Many real-world applications demand solutions that are difficult to implement. It is common practice for system designers to resort to multiagent theory, where the problem at hand is broken into sub-problems and each sub-problem is handled by an autonomous agent. Nevertheless, new questions emerge: How should a problem be decomposed? What should the task of each agent be? And what information does each agent need to carry out its task? In addition, conflicts between agents' partial solutions (actions) may arise as a consequence of their autonomy, which raises a further question: How should such conflicts be resolved? In this paper we conduct a study to answer some of these questions under a multiagent learning framework. The proposed framework guarantees an optimal solution to the original problem at the cost of a low learning speed, but it can be tuned to balance learning speed and optimality. We present an experimental analysis showing learning curves until convergence to optimality, illustrating the trade-offs between learning speed and optimality.
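The abstract does not give the details of the framework, so the following is only a minimal illustrative sketch of the trade-off it describes: two stateless Q-learners in a repeated coordination game, with a hypothetical parameter coordination_prob that mixes fast independent learning (each agent acts on its own Q-table, so their individually preferred actions may conflict) with a shared joint-action Q-table used to resolve those conflicts toward the jointly optimal choice. The payoff matrix, the parameter name, and the mixing rule are assumptions for illustration, not the authors' algorithm.

"""
Minimal sketch (not the paper's implementation): two Q-learners in a repeated
2x2 coordination game. A hypothetical parameter `coordination_prob` trades
learning speed (small per-agent Q-tables) against optimality (a shared
joint-action Q-table used to resolve conflicting action choices).
"""
import random

# Shared payoff: PAYOFF[a0][a1]. The optimal joint action is (1, 1), but
# miscoordinated exploration makes the safe pair (0, 0) attractive early on,
# so purely independent learning can settle on the suboptimal equilibrium.
PAYOFF = [[5.0, 0.0],
          [0.0, 10.0]]

ACTIONS = [0, 1]
ALPHA, EPSILON = 0.1, 0.1   # learning rate, exploration rate


def argmax(values):
    """Index of the largest value, ties broken at random."""
    best = max(values)
    return random.choice([i for i, v in enumerate(values) if v == best])


def run(coordination_prob, episodes=5000, seed=0):
    random.seed(seed)
    q_ind = [[0.0, 0.0], [0.0, 0.0]]    # one Q-table per agent, over own actions
    q_joint = [[0.0, 0.0], [0.0, 0.0]]  # shared Q-table over joint actions
    for _ in range(episodes):
        if random.random() < coordination_prob:
            # Conflict-resolution mode: pick the jointly best action pair.
            if random.random() < EPSILON:
                a0, a1 = random.choice(ACTIONS), random.choice(ACTIONS)
            else:
                flat = [(q_joint[i][j], i, j) for i in ACTIONS for j in ACTIONS]
                _, a0, a1 = max(flat)
        else:
            # Independent (fast) mode: each agent acts on its own Q-values.
            a0 = random.choice(ACTIONS) if random.random() < EPSILON else argmax(q_ind[0])
            a1 = random.choice(ACTIONS) if random.random() < EPSILON else argmax(q_ind[1])
        r = PAYOFF[a0][a1]
        # Stateless (bandit-style) Q-learning updates on the shared reward.
        q_ind[0][a0] += ALPHA * (r - q_ind[0][a0])
        q_ind[1][a1] += ALPHA * (r - q_ind[1][a1])
        q_joint[a0][a1] += ALPHA * (r - q_joint[a0][a1])
    return q_ind, q_joint


if __name__ == "__main__":
    # Sweeping the mixing parameter illustrates the speed/optimality trade-off:
    # p = 0.0 learns fastest per step but may lock onto (0, 0); p = 1.0 searches
    # the larger joint space and converges to (1, 1); intermediate values balance the two.
    for p in (0.0, 0.5, 1.0):
        _, q_joint = run(coordination_prob=p)
        print(f"coordination_prob={p}: joint Q-table = {q_joint}")

In this toy setting the joint table has only four entries, so the slowdown is mild; the point of the sketch is only the qualitative trade-off the abstract names, where larger joint-action spaces make the optimal, conflict-free policy correspondingly slower to learn.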