Interpretable machine learning for weather and climate prediction: A review

Atmospheric Environment | IF 4.2, JCR Q2 (Environmental Sciences), CAS Region 2 (Environmental Science & Ecology) | Published: 2024-09-12 | DOI: 10.1016/j.atmosenv.2024.120797
Citations: 0

Abstract

Advanced machine learning models have recently achieved high predictive accuracy for weather and climate prediction. However, these complex models often lack inherent transparency and interpretability, acting as "black boxes" that impede user trust and hinder further model improvements. As such, interpretable machine learning techniques have become crucial in enhancing the credibility and utility of weather and climate modeling. In this paper, we review current interpretable machine learning approaches applied to meteorological predictions. We categorize methods into two major paradigms: (1) post-hoc interpretability techniques that explain pre-trained models, such as perturbation-based, game-theory-based, and gradient-based attribution methods; (2) designing inherently interpretable models from scratch using architectures like tree ensembles and explainable neural networks. We summarize how each technique provides insights into the predictions, uncovering novel meteorological relationships captured by machine learning. Lastly, we discuss research challenges and provide future perspectives around achieving deeper mechanistic interpretations aligned with physical principles, developing standardized evaluation benchmarks, integrating interpretability into iterative model development workflows, and providing explainability for large foundation models.
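The perturbation-based attribution paradigm named in the abstract can be illustrated with a minimal occlusion sketch: each input feature is replaced by a baseline value and the resulting drop in the model's prediction is taken as that feature's importance. The toy "rain probability" model and the feature names below are illustrative assumptions, not taken from the paper.

```python
def occlusion_attribution(model, x, baseline):
    """Perturbation-based attribution: importance of feature i is the
    prediction change when feature i is replaced by its baseline value."""
    ref = model(x)
    scores = []
    for i in range(len(x)):
        x_perturbed = list(x)
        x_perturbed[i] = baseline[i]          # occlude feature i
        scores.append(ref - model(x_perturbed))
    return scores

# Hypothetical linear "rain probability" model over normalized inputs
# [humidity, pressure_anomaly, wind_speed] -- purely for illustration.
def toy_model(x):
    humidity, pressure_anom, wind = x
    return 0.6 * humidity - 0.3 * pressure_anom + 0.1 * wind

x = [0.9, -0.5, 0.2]          # observed (normalized) inputs
baseline = [0.5, 0.0, 0.0]    # climatological baseline values

print(occlusion_attribution(toy_model, x, baseline))
```

Here humidity receives the largest attribution because occluding it moves the prediction furthest from the reference; game-theory-based methods such as SHAP generalize this idea by averaging such perturbations over feature coalitions.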

Source journal: Atmospheric Environment (Environmental Sciences)
CiteScore: 9.40
Self-citation rate: 8.00%
Articles published per year: 458
Review time: 53 days
Journal description: Atmospheric Environment has an open access mirror journal, Atmospheric Environment: X, sharing the same aims and scope, editorial team, submission system, and rigorous peer review. Atmospheric Environment is the international journal for scientists in different disciplines related to atmospheric composition and its impacts. The journal publishes scientific articles with atmospheric relevance of emissions and depositions of gaseous and particulate compounds, chemical processes and physical effects in the atmosphere, as well as impacts of the changing atmospheric composition on human health, air quality, climate change, and ecosystems.