Marion Mittermaier, Seshagiri Rao Kolusu, Joanne Robbins
The Met Office GloSea5-GC2 sub-seasonal-to-seasonal 40-member lagged ensemble consists of members that differ in age by up to 10 days, so that the between-ensemble-member bias is not internally consistent. Reforecasts are typically used to convert these ensemble forecasts into anomalies from a normal state. Such anomalies are, however, of limited use for applications where individual ensemble members are needed to drive downstream hazard and impact models. Here we explore whether the within-ensemble bias can be corrected without using reforecasts. An investigation of the individual daily precipitation distributions from the JJAS 2019 Indian monsoon season, stratified by forecast horizon, highlights how the distribution changes with lead time and shows that the model distribution is markedly different from the observed one. Initial results suggest that it can be better to use recent model forecast distribution(s) as the reference for adjusting the model rainfall accumulations as a function of lead-day horizon; that is, rather than correcting the members towards a vastly different (observed) distribution shape, a more subtle shift is made towards the model's best guess of reality, rather than reality itself, to remove the between-ensemble-member bias. A combination of Exponential and Generalized Pareto distributions is used for parametric quantile mapping to remove this internal ensemble bias, using computationally efficient pre-computed lookup tables. Within- and out-of-sample results for the 2019 and 2020 monsoon seasons show that the method is effective in tightening precipitation gradients, with improvements in spread, accuracy and skill, especially for low accumulations.
{"title":"Mitigating against the between-ensemble-member precipitation bias in a lagged sub-seasonal ensemble","authors":"Marion Mittermaier, Seshagiri Rao Kolusu, Joanne Robbins","doi":"10.1002/met.2197","DOIUrl":"https://doi.org/10.1002/met.2197","url":null,"abstract":"<p>The Met Office GloSea5-GC2 sub-seasonal-to-seasonal 40-member lagged ensemble consists of members who are up to 10 days different in age such that the between-ensemble-member bias is not internally consistent. Reforecasts tend to be used to convert these ensemble forecasts into anomalies from a normal state. These anomalies are however not that useful for applications where individual ensemble members are needed to drive downstream applications in the hazard and impact space. Here we explore whether there is a way of correcting for the within-ensemble bias without using reforecasts. An investigation into the individual daily precipitation distributions from the JJAS 2019 Indian monsoon season, stratified by forecast horizon, highlights how the distribution changes, and shows that the model distribution is markedly different to the observed. Initial results suggest that it could be better to use recent model forecast distribution(s) as the reference for adjusting the model rainfall accumulations as a function of lead day horizon, that is, not attempting to correct the members to a vastly different (observed) distribution shape, but a more subtle shift towards the model's best guess of reality, rather than reality itself, to remove the between-ensemble-member bias. A combination of Exponential and Generalized Pareto distributions are used for parametric quantile mapping to remove this internal ensemble bias using computationally efficient pre-computed lookup tables. Within- and out-of-sample results for the 2019 and 2020 monsoon seasons show that the method is effective in tightening precipitation gradients, with improvements in spread, accuracy and skill, especially for low accumulations.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2197","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141326510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is with great pride that we mark the 30th anniversary of the journal Meteorological Applications, and we take this opportunity to provide our readers with a review of the journal's accomplishments to date and with historical context. Indeed, this journal belongs to the forecasters, applied meteorologists, climate scientists and all users or providers of meteorological and climate services, including early career scientists and both graduate and undergraduate students who read and publish contributions on all aspects of meteorological science, including both weather and climate. We hope that in this editorial we can share with our readers the pleasure that we have had in revisiting our journal's history and the excitement we feel while looking toward the future of our "Met Apps".

Founding Editor-in-Chief, Dr. Bob Riddaway, shared many stories with us so that we could give our readers a taste of what it was like to produce Met Apps in its early days. Bob told us that Professor Keith Browning approached him about the idea of creating a new journal for the publication of applied meteorological papers. Bob named our journal specifically to stand out from the plethora of journals at the time that were named "The Journal of…", and he also came up with our nickname "Met Apps".

When Met Apps was first published, it was delivered as a paper journal via a subscription service in the post. No online magic in 1994! The journal was published four times per year, and Bob had to make the journey to Bristol each time to proofread every page before it could be printed and distributed. The entire submission and review process was conducted via post which, you can imagine, slowed down time to publication compared with today.

In 1994, the published scope described Met Apps as including "Science and technology needed to support meteorological applications". Today Met Apps has a tagline encapsulating that spirit and also showing how climate is relevant to our journal: "Science and Technology for Weather and Climate".

The aims and scope have changed very little, and throughout its life, Met Apps has constantly strived to increase the depth and range of contributions from scientists, forecasters and industry colleagues from all over the world and to provide a positive author experience for all. We think that we can still achieve this by continuing to improve practices that lead to fairness, transparency and prompt, in-depth, expert scientific reviews that are not coloured by bias.

In recent years, we have made quite a few changes to the submission and review processes, always keeping the above goals in mind.

Our authors can now benefit from an easier submission process, as Met Apps has moved to free-format submission. This also supports accessibility, as there is no longer any requirement for templates or specific software to be used to create a manuscript. We have also made some adjustments to (and simplified) the author guidelines. In particular, we now provide guidance on the use of colour, encouraging more accessible figures and colour schemes. This ensures that colour-blind readers can access the information in the journal's papers, and it increases the impact of the figures we publish by making the science of each study clear to all readers.

In addition, in 2021 we began moving the review process to a double-anonymous approach. Traditionally, most scientific journals, Met Apps included, have offered anonymity to reviewers but not to authors. Under double-anonymous review, manuscripts are anonymized: author names, affiliations, acknowledgements, grant numbers and anything else that might reveal the authors' identities are removed. In this way we free reviewers from any unconscious bias they may hold, so that they can comment on the work alone without fear of offending a colleague. We must add that reviewers' comments to authors have always been courteous, helpful and honest. This approach reassures authors that it is their scientific work that is being assessed and critiqued, not their gender, ethnicity or institutional affiliation. It has taken the Met Apps team and our authors some time to learn how to anonymize manuscripts effectively, but we are now confident that we are removing barriers and bias and creating a fairer review process.

To further support our review process and to diversify the geographical origins of the journal's authors, we continue to broaden the composition and expertise of the editorial board while striving for gender balance. At the time of writing this editorial, the board comprises 16 women and 19 men.

We are, of course, very grateful to our authors for continuing to entrust their research to us, and we hope they will keep choosing to publish with us in the future. Early in our tenure we wrote an editorial (Charlton-Perez & Zardi, 2020) setting out our plans for the journal, and we believe we have exceeded those initial goals and raised our expectations for Met Apps. With new initiatives in preparation and an excellent community around us, we see a bright future for Met Apps!

Cristina Charlton-Perez: writing - original draft; writing - review and editing; conceptualization. Dino Zardi: conceptualization; writing - original draft; writing - review and editing.
{"title":"Celebrating the 30th anniversary of Meteorological Applications","authors":"Cristina Charlton-Perez, Dino Zardi","doi":"10.1002/met.2214","DOIUrl":"https://doi.org/10.1002/met.2214","url":null,"abstract":"<p>It is with great pride that we mark the 30th anniversary of the journal <i>Meteorological Applications</i>, and we take this opportunity to provide our readers with a review of the journal's accomplishments to date and with historical context. Indeed, this journal belongs to the forecasters, applied meteorologists, climate scientists and all users or providers of meteorological and climate services, including early career scientists and both graduate and undergraduate students who read and publish contributions on all aspects of meteorological science, including both weather and climate. We hope that in this editorial we can share with our readers the pleasure that we have had in revisiting our journal's history and the excitement we feel while looking toward the future of our “<i>Met Apps</i>.”</p><p>Founding Editor-in-Chief, Dr. Bob Riddaway, shared many stories with us so that we could give our readers a taste of what it was like to produce Met Apps in its early days. Bob told us that Professor Keith Browning approached him about the idea of creating a new journal for the publication of applied meteorological papers. Bob named our journal specifically to stand out from the plethora of journals at the time that were named “The Journal of…” and he also came up with our nickname “<i>Met Apps</i>.”</p><p>When Met Apps was first published, it was delivered as a paper journal via a subscription service in the post. No online magic in 1994! The journal was published four times per year, and Bob had to make the journey to Bristol each time to proofread every page before it could be printed and distributed. The entire submission and review process of manuscripts was conducted via post which, you can imagine, slowed down time to publication when compared with today.</p><p>In 1994, the published scope described Met Apps as including “<i>Science and technology needed to support meteorological applications</i>.” Today Met Apps has a tagline encapsulating that spirit and also showing how climate is relevant to our journal: “<i>Science and Technology for Weather and Climate</i>.”</p><p>The aims and scope has changed very little, and throughout its life, Met Apps has constantly strived to increase the depth and range of contributions from scientists, forecasters and industry colleagues from all over the world and to provide a positive author experience for all. We think that we can still achieve this by continuing to improve practices that lead to fairness, transparency and prompt and in–depth, expert scientific reviews that are not coloured by bias.</p><p>In recent years, we have made quite a few changes to the submission and review processes, always keeping the above goals in mind.</p><p>Our authors can now benefit from an easier submission process as Met Apps has moved to a free-format submission process. This also supports accessibility, as there is no longer any requirement for templates or specific software to be used to create a manuscript. 
We have ","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2214","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141308887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Magdalena Pasierb, Zofia Bałdysz, Jan Szturc, Grzegorz Nykiel, Anna Jurczyk, Katarzyna Ośródka, Mariusz Figurski, Marcin Wojtczak, Cezary Wojtkowski
Precipitation estimation models typically draw on rain gauges, weather radars and satellite observations. A relatively new technique for precipitation estimation relies on the networks of Commercial Microwave Links (CMLs) used in cellular communication: the rain-induced attenuation of the link signal enables the precipitation to be estimated. In this paper, we analyse the extent to which precipitation derived from CML attenuation data is useful for estimating the precipitation field at the high temporal and spatial resolution required by nowcasting models. Two methods are proposed for determining precipitation along CMLs from the attenuation of signals at several frequencies. Then, to generate the precipitation field, three approaches for assigning appropriate precipitation values to a specific point or set of pixels along the link are developed and tested. The CML-based estimates are compared with point observations from manual rain gauges and with multi-source precipitation fields using daily and half-hourly accumulations. The CML-based precipitation fields are found to be considerably less reliable than radar-derived estimates, slightly less reliable than spatially interpolated telemetric rain gauge data, and significantly more reliable than satellite estimates. Furthermore, the impact of link characteristics, such as length and frequency, on the reliability of CML-based precipitation estimates is analysed.
{"title":"Application of commercial microwave links (CMLs) attenuation for quantitative estimation of precipitation","authors":"Magdalena Pasierb, Zofia Bałdysz, Jan Szturc, Grzegorz Nykiel, Anna Jurczyk, Katarzyna Ośródka, Mariusz Figurski, Marcin Wojtczak, Cezary Wojtkowski","doi":"10.1002/met.2218","DOIUrl":"https://doi.org/10.1002/met.2218","url":null,"abstract":"<p>Precipitation estimation models are typically sourced by rain gauges, weather radars and satellite observations. A relatively new technique of precipitation estimation relies on the network of Commercial Microwave Links (CMLs) employed for cellular communication networks: the rain-inducted attenuation in the links enables the precipitation estimation. In the paper, it is analysed to what extent the precipitation derived from CML attenuation data is useful in estimation of the precipitation field with the high temporal and spatial resolution required in nowcasting models. Two methods of determination of precipitation along CMLs from attenuation of signal with several frequencies were proposed. Then, in order to generate precipitation field, three approaches for assigning appropriate precipitation values to a specific point or set of pixels along the link are developed and tested. The CML-based estimates are compared with point observations from manual rain gauges and multi-source precipitation fields using daily and half-hourly accumulations. It was found that the CML-based precipitation fields are much worse than radar-derived estimates. At the same time, they had slightly poorer reliability than spatially interpolated telemetric rain gauge data and significantly higher reliability than satellite estimates. Furthermore, the impact of link characteristics, such as length and frequency, on the reliability of CML-based precipitation estimates is analysed.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2218","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141304250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aniel Jardines, Manuel Soler, Javier García-Heras, Matteo Ponzano, Laure Raynaud
This paper aims to explore machine learning techniques for post-processing high-resolution Numerical Weather Prediction (NWP) products for the early detection of convection. Data from the Arome Ensemble Prediction System and satellite observations from the Rapidly Developing Thunderstorm (RDT) product by Météo-France are used to train a recurrent neural network model to predict areas of total convection and moderate convection. The learning task is formulated as a binary classification problem using a long short-term memory (LSTM) network architecture. Results from the LSTM model are compared with an object-based probabilistic approach to convection forecasting using metrics such as receiver operating characteristic (ROC) curves, the Brier score and reliability. Results indicate that the LSTM model performs similarly to the object-based probabilistic benchmark when classifying moderate convection areas and shows improved skill when classifying areas of total convection. Finally, the LSTM model results are presented within an air traffic management context to showcase the potential use of machine learning models within an operational application.
{"title":"Pre-tactical convection prediction for air traffic flow management using LSTM neural network","authors":"Aniel Jardines, Manuel Soler, Javier García-Heras, Matteo Ponzano, Laure Raynaud","doi":"10.1002/met.2215","DOIUrl":"https://doi.org/10.1002/met.2215","url":null,"abstract":"<p>This paper aims to explore machine learning techniques for post-processing high-resolution Numerical Weather Prediction (NWP) products for the early detection of convection. Data from the Arome Ensemble Prediction System and satellite observations from the Rapidly Developing Thunderstorm (RDT) product by Météo-France are used to train a recurrent neural network model to predict areas of total convection and moderate convection. The learning task is formulated as a binary classification problem using a long short-term memory (LSTM) network architecture. Results from the LSTM model are compared with an object-based probabilistic approach to forecast convection using metrics such as a receiver operating characteristics (ROC) curve, the Brier score and reliability. Results indicate that the LSTM model performs similarly to the object-based probabilistic benchmark when classifying moderate convection areas and shows improved skill when classifying areas of total convective. Finally, the LSTM model results are presented within an air traffic management context to showcase the potential use of machine learning models within an operational application.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2215","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141286862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Currently, there are three main shortcomings in meteorological drought indices: first, they rely on historical climate probability functions; second, the timescale used in the calculations involves a degree of subjectivity; third, the same index value may correspond to vastly different levels of actual drought in regions with different climate types. The purpose of this article is to establish a meteorological drought index that does not rely on historical probability functions of meteorological elements. Through theoretical derivation, four drought-level maintenance lines are established on the cumulative precipitation versus cumulative water-surface evaporation coordinate plane, dividing the coordinate quadrant into five drought-level areas. Through forward daily rolling accumulation, the maximum-distance point is selected from the dynamically changing coordinate points to determine the corresponding cumulative precipitation and cumulative evaporation. The meteorological drought index is then defined by the distance from the selected coordinate point to each drought-level maintenance line. Using daily precipitation and evaporation data from meteorological observation stations, the index is calculated from the established model and compared with the actual drought evolution and drought disaster records. The results show that the index captures the development of drought well, and its changes are highly consistent with drought disaster records. The index is of value for drought monitoring and assessment, and can guide water resource allocation, crop layout and urban planning. Furthermore, it offers an approach to future drought research that does not rely on historical element probabilities.
{"title":"Establish an agricultural drought index that is independent of historical element probabilities","authors":"Yongdi Pan, Jingjing Xiao, Yanhua Pan","doi":"10.1002/met.2216","DOIUrl":"https://doi.org/10.1002/met.2216","url":null,"abstract":"<p>Currently, there are three main shortcomings in meteorological drought indices: first, they rely on historical climate probability functions; second, the timescale used in calculations has a certain degree of subjectivity; third, the same index value may correspond to vastly different levels of actual drought in different climate types of regions. The purpose of this article is to establish a meteorological drought index that does not rely on historical meteorological element probability functions. Through theoretical derivation, four drought-level maintenance lines are established on the cumulative precipitation-cumulative water surface evaporation coordinate plane, and the coordinate quadrant is divided into five drought-level areas. Through forward daily rolling accumulation, the maximum distance point is selected from the dynamically changing coordinate points to determine the corresponding cumulative precipitation and cumulative evaporation. The meteorological drought index is established by the distance from the selected coordinate point to each drought-level maintenance line. Using daily precipitation and evaporation data from meteorological observation stations, the index is calculated based on the established meteorological drought index model, and compared with actual drought evolution and drought disaster records. The results show that the index can capture the development of drought well, and its changes are very consistent with drought disaster records. The index is of great significance for drought monitoring or assessment, and can provide guidance for water resource allocation, crop layout, and urban planning. Furthermore, it can also provide a way of thinking that does not rely on historical element probabilities for future drought research.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2216","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141264665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Damien B. Irving, James S. Risbey, Dougal T. Squire, Richard Matear, Carly Tozer, Didier P. Monselesan, Nandini Ramesh, P. Jyoteeshkumar Reddy, Mandy Freund
A large stretch of the east coast of Australia experienced unprecedented rainfall and flooding over a two-week period in early 2022. It is difficult to reliably estimate the likelihood of such a rare event from the relatively short observational record, so an alternative is to use data from an ensemble prediction system (e.g., a seasonal or decadal forecast system) to obtain a much larger sample of simulated weather events. This so-called ‘UNSEEN’ method has been successfully applied in several scientific studies, but those studies typically rely on a single prediction system. In this study, we use data from the Decadal Climate Prediction Project to explore the model uncertainty associated with the UNSEEN method by assessing 10 different hindcast ensembles. Using the 15-day rainfall total averaged over the river catchments impacted by the 2022 east coast event, we find that the models produce a wide range of likelihood estimates. Even after excluding a number of models that fail basic fidelity tests, estimates of the event return period ranged from 320 to 1814 years. The vast majority of models suggested the event is rarer than a standard extreme value assessment of the observational record (297 years). Such large model uncertainty suggests that multi-model analysis should become part of the standard UNSEEN procedure.
{"title":"A multi-model likelihood analysis of unprecedented extreme rainfall along the east coast of Australia","authors":"Damien B. Irving, James S. Risbey, Dougal T. Squire, Richard Matear, Carly Tozer, Didier P. Monselesan, Nandini Ramesh, P. Jyoteeshkumar Reddy, Mandy Freund","doi":"10.1002/met.2217","DOIUrl":"https://doi.org/10.1002/met.2217","url":null,"abstract":"<p>A large stretch of the east coast of Australia experienced unprecedented rainfall and flooding over a two-week period in early 2022. It is difficult to reliably estimate the likelihood of such a rare event from the relatively short observational record, so an alternative is to use data from an ensemble prediction system (e.g., a seasonal or decadal forecast system) to obtain a much larger sample of simulated weather events. This so-called ‘UNSEEN’ method has been successfully applied in several scientific studies, but those studies typically rely on a single prediction system. In this study, we use data from the Decadal Climate Prediction Project to explore the model uncertainty associated with the UNSEEN method by assessing 10 different hindcast ensembles. Using the 15-day rainfall total averaged over the river catchments impacted by the 2022 east coast event, we find that the models produce a wide range of likelihood estimates. Even after excluding a number of models that fail basic fidelity tests, estimates of the event return period ranged from 320 to 1814 years. The vast majority of models suggested the event is rarer than a standard extreme value assessment of the observational record (297 years). Such large model uncertainty suggests that multi-model analysis should become part of the standard UNSEEN procedure.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2217","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141187548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As both the population and economic output of India continue to grow, so does its demand for electricity. Coupled with an increasing determination to transition to net zero, India has responded to this rising demand by rapidly expanding its installed renewable capacity: an increase of 60% in the last 5 years has been driven largely by a quintupling of installed solar capacity. In this study, we use a broad variety of data sources to quantify potential and realized capacity over India from 1979 to 2022. For potential capacity, we identify spatiotemporal patterns in solar, wind, hydro and wave power. We show that the solar capacity factor is relatively homogeneous across India, except over the western Himalaya, and is highest during the pre-monsoon. The wind capacity factor is highest during the summer monsoon, with high values off the southern coast, along the Western Ghats and in Gujarat. We argue that wave power could be a useful source of renewable energy for the Andaman and Nicobar Islands, which are not connected to the main Indian power grid. Using gridded estimates of existing installed capacity combined with our historical capacity factor dataset, we create a simple but effective renewable production model. We use this model to identify weaknesses in the existing grid, particularly a lack of complementarity between wind and solar production in north India and vulnerability to high-deficit generation in winter. We discuss potential avenues for future renewable investment to counter existing seasonality problems, principally offshore wind and high-altitude solar.
{"title":"Quantifying renewable energy potential and realized capacity in India: Opportunities and challenges","authors":"Kieran M. R. Hunt, Hannah C. Bloomfield","doi":"10.1002/met.2196","DOIUrl":"https://doi.org/10.1002/met.2196","url":null,"abstract":"<p>As both the population and economic output of India continue to grow, so does its demand for electricity. Coupled with an increasing determination to transition to net zero, India has responded to this rising demand by rapidly expanding its installed renewable capacity: an increase of 60% in the last 5 years has been driven largely by a quintupling of installed solar capacity. In this study, we use broad variety of data sources to quantify potential and realized capacity over India from 1979 to 2022. For potential capacity, we identify spatiotemporal patterns in solar, wind, hydro and wave power. We show that solar capacity factor is relatively homogeneous across India, except over the western Himalaya, and is highest during the pre-monsoon. Wind capacity factor is highest during the summer monsoon, and has high values off the southern coast, along the Western Ghats, and in Gujarat. We argue that wave power could be a useful source of renewable energy for the Andaman and Nicobar Islands, which are not connected to the main Indian power grid. Using gridded estimates of existing installed capacity combined with our historical capacity factor dataset, we create a simple but effective renewable production model. We use this model to identify weaknesses in the existing grid—particularly a lack of complementarity between wind and solar production in north India, and vulnerability to high-deficit generation in the winter. We discuss potential avenues for future renewable investment to counter existing seasonality problems, principally offshore wind and high-altitude solar.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2196","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141182173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Atmospheric visibility profoundly impacts daily life, and accurate prediction is crucial, particularly in conditions of low visibility characterized by high aerosol loading and humidity. This study employed the WRF-Chem model to simulate a severe wintertime haze pollution episode that occurred from 17 to 19 January 2010 in Central-East China (112–122° E, 34–42° N). The results reveal that excluding aerosol–meteorology interactions led to underestimated PM2.5 concentrations and relative humidity compared with ground-based measurements, accompanied by a significant overestimation of visibility. Aerosols can interact with meteorological elements, particularly humidity, resulting in positive feedback. When these feedback interactions are considered, simulated PM2.5 concentration and relative humidity increase by 5.17% and 1.99%, respectively, compared with the original simulation, narrowing the bias between simulated and measured data. The overestimation of simulated visibility is reduced by 16% and 25% for the entire study period and the severe haze pollution period, respectively. These findings underscore the vital role of incorporating aerosol–meteorology interactions in visibility simulations with the WRF-Chem model. Notably, including aerosol–meteorological feedback significantly enhances the accuracy of visibility predictions, particularly during heavily polluted periods.
{"title":"Influence of aerosol–meteorology interactions on visibility during a wintertime heavily polluted episode in Central-East, China","authors":"Xin Zhang, Yue Wang, Zibo Zhuang, Chengduo Yuan","doi":"10.1002/met.2207","DOIUrl":"https://doi.org/10.1002/met.2207","url":null,"abstract":"<p>Atmospheric visibility profoundly impacts daily life, and accurate prediction is crucial, particularly in conditions of low visibility characterized by high aerosol loading and humidity. This study employed the WRF-Chem model to simulate a severe wintertime haze pollution episode that transpired from January 17 to 19, 2010, in Central-East China (112–122° E, 34–42° N). The results reveal that excluding aerosol–meteorology interactions led to underestimated PM<sub>2.5</sub> concentrations and relative humidity in comparison with ground-based measurement data, accompanied by a significant overestimation of visibility. Aerosols can engage with meteorological elements, particularly humidity, resulting in positive feedback. Upon considering these feedback interactions, the simulation results showed an increase of 5.17% and 1.99% in PM<sub>2.5</sub> concentration and relative humidity, respectively, compared with the original simulation. This adjustment narrowed the bias between simulated and measured data. The overestimation of simulated visibility was reduced by 16% and 25% for the entire study period and the severe haze pollution period, respectively. These findings underscore the vital role of incorporating aerosol–meteorology interactions in visibility simulations using the WRF-Chem model. Notably, the inclusion of aerosol–meteorological feedback significantly enhances the accuracy of visibility predictions, particularly during heavily polluted periods.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2207","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lewis P. Blunn, Flynn Ames, Hannah L. Croad, Adam Gainford, Ieuan Higgs, Mathew Lipson, Chun Hay Brian Lo
The urban heat island (UHI) effect exacerbates near-surface air temperature (T) extremes in cities, with negative impacts on human health, building energy consumption and infrastructure. Using conventional weather models, it is both difficult and computationally expensive to simulate the complex processes controlling neighbourhood-scale variation of T. We use machine learning (ML) to bias correct and downscale T predictions made by the Met Office operational regional forecast model (UKV) to 100 m horizontal grid length over London, UK. A set of ML models (random forest, XGBoost, multilayer perceptron) are trained using citizen weather station observations and UKV variables from eight heatwaves, along with high-resolution land cover data. The ML models improve the T mean absolute error (MAE) by up to 0.12°C (11%) relative to the UKV. They also improve the UHI diurnal and spatial representation, reducing the UHI profile MAE from 0.64°C (UKV) to 0.15°C. A multiple linear regression performs almost as well as the ML models in terms of T MAE, but cannot match the UHI bias correction performance of the ML models, only reducing the UHI profile MAE to 0.49°C. UKV latent heat flux is found to be the most important predictor of T bias. It is demonstrated that including more heatwaves and observation sites in training would reduce overfitting and improve ML model performance.
{"title":"Machine learning bias correction and downscaling of urban heatwave temperature predictions from kilometre to hectometre scale","authors":"Lewis P. Blunn, Flynn Ames, Hannah L. Croad, Adam Gainford, Ieuan Higgs, Mathew Lipson, Chun Hay Brian Lo","doi":"10.1002/met.2200","DOIUrl":"https://doi.org/10.1002/met.2200","url":null,"abstract":"<p>The urban heat island (UHI) effect exacerbates near-surface air temperature (<i>T</i>) extremes in cities, with negative impacts for human health, building energy consumption and infrastructure. Using conventional weather models, it is both difficult and computationally expensive to simulate the complex processes controlling neighbourhood-scale variation of <i>T</i>. We use machine learning (ML) to bias correct and downscale <i>T</i> predictions made by the Met Office operational regional forecast model (UKV) to 100 m horizontal grid length over London, UK. A set of ML models (random forest, XGBoost, multiplayer perceptron) are trained using citizen weather station observations and UKV variables from eight heatwaves, along with high-resolution land cover data. The ML models improve the <i>T</i> mean absolute error (MAE) by up to 0.12°C (11%) relative to the UKV. They also improve the UHI diurnal and spatial representation, reducing the UHI profile MAE from 0.64°C (UKV) to 0.15°C. A multiple linear regression performs almost as well as the ML models in terms of <i>T</i> MAE, but cannot match the UHI bias correction performance of the ML models, only reducing the UHI profile MAE to 0.49°C. UKV latent heat flux is found to be the most important predictor of <i>T</i> bias. It is demonstrated that including more heatwaves and observation sites in training would reduce overfitting and improve ML model performance.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2200","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140953081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Charlotte A. Malmborg, Alyssa M. Willson, L. M. Bradley, Meghan A. Beatty, David H. Klinges, Gerbrand Koren, Abigail S. L. Lewis, Kayode Oshinubi, Whitney M. Woelmer
Models have become a key component of scientific hypothesis testing and climate and sustainability planning, as enabled by increased data availability and computing power. As a result, understanding how the perceived ‘complexity’ of a model corresponds to its accuracy and predictive power has become a prevalent research topic. However, a wide variety of definitions of model complexity have been proposed and used, leading to an imprecise understanding of what model complexity is and its consequences across research studies, study systems, and disciplines. Here, we propose a more explicit definition of model complexity, incorporating four facets—model class, model inputs, model parameters, and computational complexity—which are modulated by the complexity of the real-world process being modelled. We illustrate these facets with several examples drawn from ecological literature. Overall, we argue that precise terminology and metrics of model complexity (e.g., number of parameters, number of inputs) may be necessary to characterize the emergent outcomes of complexity, including model comparison, model performance, model transferability and decision support.
{"title":"Defining model complexity: An ecological perspective","authors":"Charlotte A. Malmborg, Alyssa M. Willson, L. M. Bradley, Meghan A. Beatty, David H. Klinges, Gerbrand Koren, Abigail S. L. Lewis, Kayode Oshinubi, Whitney M. Woelmer","doi":"10.1002/met.2202","DOIUrl":"https://doi.org/10.1002/met.2202","url":null,"abstract":"<p>Models have become a key component of scientific hypothesis testing and climate and sustainability planning, as enabled by increased data availability and computing power. As a result, understanding how the perceived ‘complexity’ of a model corresponds to its accuracy and predictive power has become a prevalent research topic. However, a wide variety of definitions of model complexity have been proposed and used, leading to an imprecise understanding of what model complexity is and its consequences across research studies, study systems, and disciplines. Here, we propose a more explicit definition of model complexity, incorporating four facets—model class, model inputs, model parameters, and computational complexity—which are modulated by the complexity of the real-world process being modelled. We illustrate these facets with several examples drawn from ecological literature. Overall, we argue that precise terminology and metrics of model complexity (e.g., number of parameters, number of inputs) may be necessary to characterize the emergent outcomes of complexity, including model comparison, model performance, model transferability and decision support.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"31 3","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.2202","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140953080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}