RAIN: Reinforcement Algorithms for Improving Numerical Weather and Climate Models
Pritthijit Nath, Henry Moss, Emily Shuckburgh, Mark Webb
arXiv - PHYS - Atmospheric and Oceanic Physics · arXiv:2408.16118 · 2024-08-28
Abstract
This study explores integrating reinforcement learning (RL) with idealised
climate models to address key parameterisation challenges in climate science.
Current climate models rely on complex mathematical parameterisations to
represent sub-grid scale processes, which can introduce substantial
uncertainties. RL offers capabilities to enhance these parameterisation
schemes, including direct interaction, handling sparse or delayed feedback,
continuous online learning, and long-term optimisation. We evaluate the
performance of eight RL algorithms on two idealised environments: one for
temperature bias correction, the other for radiative-convective equilibrium (RCE),
imitating real-world computational constraints. Results show that different RL
approaches excel in different climate scenarios: exploration algorithms
perform better in bias correction, while exploitation algorithms prove
more effective for RCE. These findings support the potential of RL-based
parameterisation schemes to be integrated into global climate models, improving
accuracy and efficiency in capturing complex climate dynamics. Overall, this
work represents an important first step towards leveraging RL to enhance
climate model accuracy, which is critical for improving climate understanding and
predictions. Code is available at https://github.com/p3jitnath/climate-rl.
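
The abstract describes the temperature bias-correction environment only at a high level. As an illustrative sketch, not the paper's actual implementation, such a task can be framed as a minimal agent-environment loop: the state is a biased model temperature, the action is an additive correction, and the reward penalises the remaining bias against an observed target. Everything here is hypothetical (the `TempBiasEnv` class, the +5 K bias, and the naive running-average corrector standing in for the eight RL algorithms the paper evaluates):

```python
import numpy as np

class TempBiasEnv:
    """Hypothetical toy environment: a 'model' reports a temperature with a
    systematic +5 K warm bias; the agent's action is an additive correction."""

    def __init__(self, target=288.0, bias=5.0, seed=0):
        self.target = target          # 'observed' truth (K)
        self.bias = bias              # systematic model bias (K)
        self.rng = np.random.default_rng(seed)
        self.temp = None

    def reset(self):
        # One-step episode: draw a biased, noisy model temperature.
        self.temp = self.target + self.bias + self.rng.normal(0.0, 0.5)
        return self.temp

    def step(self, action):
        # Reward is the negative absolute bias after applying the correction.
        reward = -abs(self.temp + action - self.target)
        return reward

def train(env, episodes=200, lr=0.1):
    """Naive stand-in for an RL agent: exponentially smooth the observed
    model-minus-truth error into a constant additive correction."""
    correction = 0.0
    for _ in range(episodes):
        obs = env.reset()
        env.step(correction)                    # reward unused by this toy rule
        error = env.target - obs                # approx. -bias, plus noise
        correction += lr * (error - correction) # running average of the error
    return correction

env = TempBiasEnv()
corr = train(env)
print(f"learned correction: {corr:.2f} K")  # approx. -5 K, cancelling the bias
```

A real RL agent would learn from the reward signal alone rather than reading `env.target`, and would handle the sparse or delayed feedback the abstract highlights; this sketch only shows the interaction structure such an environment exposes.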