Deep Policy Iteration for high-dimensional mean field games
Authors: Mouhcine Assouli, Badr Missaoui
DOI: 10.1016/j.amc.2024.128923
Journal: Applied Mathematics and Computation (Q1, MATHEMATICS, APPLIED; Impact Factor 3.5)
Publication date: 2024-07-11
URL: https://www.sciencedirect.com/science/article/pii/S0096300324003849
Citations: 0
Abstract
This paper introduces Deep Policy Iteration (DPI), a novel approach that integrates the strengths of Neural Networks with the stability and convergence advantages of Policy Iteration (PI) to address high-dimensional stochastic Mean Field Games (MFGs). DPI overcomes a key limitation of PI, which the curse of dimensionality restricts to low-dimensional problems, by iteratively training three neural networks to solve the PI equations and satisfy the forward-backward conditions. Our findings indicate that DPI achieves convergence comparable to the Mean Field Deep Galerkin Method (MFDGM), with additional advantages. Furthermore, deep learning techniques show promise in handling separable Hamiltonian cases where PI alone is less effective. DPI effectively manages high-dimensional problems, extending the applicability of PI to both separable and non-separable Hamiltonians.
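To make the policy-iteration structure behind DPI concrete, the following is a minimal sketch of the alternating loop (evaluate the value function, update the population density, improve the control) on a discrete-state toy mean field game. This is an illustrative analogue only: the paper's three neural networks for the continuous stochastic setting are replaced here by tabular numpy arrays, and all dynamics, costs, and function names are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

# Discrete-state toy analogue of policy iteration for a mean field game.
# Tabular arrays stand in for the three networks (value u, density m,
# control/policy); everything below is illustrative, not the paper's method.

n_states, n_actions = 5, 2
rng = np.random.default_rng(0)
# P[a, s] is the next-state distribution under action a in state s.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
base_cost = rng.uniform(0.0, 1.0, size=(n_actions, n_states))
gamma = 0.9  # discount factor

def evaluate(policy, m):
    """Policy evaluation: solve (I - gamma * P_pi) u = c_pi for fixed density m."""
    P_pi = np.array([P[policy[s], s] for s in range(n_states)])
    # The running cost couples to the population density m (congestion term).
    c_pi = np.array([base_cost[policy[s], s] + m[s] for s in range(n_states)])
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)

def improve(u, m):
    """Policy improvement: greedy action under the current value and density."""
    q = base_cost + m[None, :] + gamma * (P @ u)  # q[a, s]
    return np.argmin(q, axis=0)

def stationary_density(policy):
    """Fokker-Planck analogue: stationary distribution of the controlled chain."""
    P_pi = np.array([P[policy[s], s] for s in range(n_states)])
    evals, evecs = np.linalg.eig(P_pi.T)
    m = np.real(evecs[:, np.argmax(np.real(evals))])
    return m / m.sum()

policy = np.zeros(n_states, dtype=int)
m = np.full(n_states, 1.0 / n_states)
for _ in range(50):  # outer fixed point over (value, density, policy)
    u = evaluate(policy, m)
    policy = improve(u, m)
    m = 0.9 * m + 0.1 * stationary_density(policy)  # damped density update
```

In DPI the two linear solves above are replaced by gradient-based training of networks on the PDE residuals, which is what lifts the scheme past the grid-based curse of dimensionality.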
About the journal:
Applied Mathematics and Computation addresses work at the interface between applied mathematics, numerical computation, and applications of systems-oriented ideas to the physical, biological, social, and behavioral sciences, and emphasizes papers of a computational nature focusing on new algorithms, their analysis, and numerical results.
In addition to presenting research papers, Applied Mathematics and Computation publishes review articles and single-topic issues.