Pub Date: 2024-03-13 | DOI: 10.5194/gmd-17-2077-2024
Jérémy Bernard, E. Bocher, Matthieu Gousseff, François Leconte, Elisabeth Le Saux Wiederhold
Abstract. Geographical features may have a considerable effect on local climate. The local climate zone (LCZ) system proposed by Stewart and Oke (2012) is now regarded as a standard approach for classifying any zone according to a set of urban canopy parameters. While many methods already exist to map LCZs, only a few tools are openly and freely available. This paper presents the algorithm implemented in the GeoClimate software to identify the LCZ of any place in the world based on vector data. Six types of information are needed as input: the building footprint, road and rail networks, water, vegetation, and impervious surfaces. First, the territory is partitioned into reference spatial units (RSUs) using the road and rail networks, as well as the boundaries of large vegetation and water patches. Then 14 urban canopy parameters are calculated for each RSU. Their values are used to assign each unit to a given LCZ type according to a set of rules. GeoClimate can automatically prepare the inputs and calculate the LCZ for two datasets, namely OpenStreetMap (OSM, available worldwide) and BD TOPO® v2.2 (BDT, a French dataset produced by the national mapping agency). The LCZs are calculated for 22 French communes using both datasets in order to evaluate the effect of the dataset on the results. About 55 % of all areas obtained the same LCZ type, with large differences between cities (from 30 % to 82 %). The agreement is good for large patches of forest and water, as well as for compact mid-rise and open low-rise LCZ types. It is lower for open mid-rise and open high-rise, mainly because the heights of OSM buildings located in open areas are underestimated. Through its simplicity of use, GeoClimate has great potential to foster new collaboration in the LCZ field.
The software (and its source code) used to produce the LCZ data is freely available at https://doi.org/10.5281/zenodo.6372337 (Bocher et al., 2022); the scripts and data used for the purpose of this article can be freely accessed at https://doi.org/10.5281/zenodo.7687911 (Bernard et al., 2023) and are based on the R package available at https://doi.org/10.5281/zenodo.7646866 (Gousseff, 2023).
Title: A generic algorithm to automatically classify urban fabric according to the local climate zone system: implementation in GeoClimate 0.0.1 and application to French cities
Pub Date: 2024-03-12 | DOI: 10.5194/gmd-17-2053-2024
S. Larosa, Domenico Cimini, D. Gallucci, S. Nilo, F. Romano
Abstract. This article introduces PyRTlib, a new standalone Python package for non-scattering line-by-line microwave radiative transfer simulations. PyRTlib is a flexible and user-friendly tool for computing down- and upwelling brightness temperatures and related quantities (e.g., atmospheric absorption, optical depth, opacity, mean radiating temperature), written in Python, a language now commonly used for scientific software development, especially by students and early-career scientists. PyRTlib can simulate observations from ground-based, airborne, and satellite microwave sensors in clear-sky and cloudy conditions (under the non-scattering Rayleigh approximation). PyRTlib is not intended to compete with state-of-the-art atmospheric radiative transfer codes that excel in speed and/or versatility (e.g., ARTS, the Atmospheric Radiative Transfer Simulator; RTTOV, Radiative Transfer for TOVS (Television Infrared Observation Satellite (TIROS) Operational Vertical Sounder)). The intention is instead to provide an educational tool, written entirely in Python, to readily simulate atmospheric microwave radiative transfer from a variety of input profiles, including predefined climatologies, global radiosonde archives, and model reanalyses. The paper presents quick examples for the built-in modules that access popular open data archives. The paper also presents examples of computing the simulated brightness temperature for different platforms (ground-based, airborne, and satellite) using various input profiles, showing how to easily modify other relevant parameters, such as the observing angle (zenith, nadir, slant), surface emissivity, and gas absorption model. PyRTlib can be easily embedded in other Python codes that need atmospheric microwave radiative transfer (e.g., surface emissivity models and retrievals).
Despite its simplicity, PyRTlib can readily be used to produce present-day scientific results, as demonstrated by two examples showing (i) an absorption model comparison and validation against ground-based radiometric observations and (ii) rigorous uncertainty propagation of spectroscopic parameters through the radiative transfer calculations. To our knowledge, no other currently available microwave radiative transfer code provides this uncertainty estimate, making PyRTlib unique in this respect.
Title: PyRTlib: an educational Python-based library for non-scattering atmospheric microwave radiative transfer computations
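The non-scattering radiative transfer computation at the heart of such a package can be sketched as a layer-by-layer recursion: each layer attenuates the radiation coming from above and adds its own thermal emission. This is a generic illustration and does not use PyRTlib's actual API; the layer temperatures and opacities below are made-up values.

```python
# Generic sketch of downwelling non-scattering microwave radiative transfer,
# not PyRTlib's API. Layers are ordered top to bottom; values are illustrative.
import numpy as np

def downwelling_tb(layer_temps, layer_opacities, t_cosmic=2.73, mu=1.0):
    """Brightness temperature (K) at the surface, zenith path if mu = 1."""
    tb = t_cosmic  # cosmic background enters at the top of the atmosphere
    for T, tau in zip(layer_temps, layer_opacities):
        trans = np.exp(-tau / mu)            # layer transmittance along path
        tb = tb * trans + T * (1.0 - trans)  # attenuate, then add emission
    return tb

temps = np.array([220.0, 250.0, 280.0])  # K, top to bottom
taus = np.array([0.01, 0.05, 0.2])       # per-layer opacity (nepers)
print(float(downwelling_tb(temps, taus)))
```

A transparent atmosphere (all opacities zero) returns the cosmic background unchanged, which is a quick sanity check for this kind of recursion.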
Pub Date: 2024-03-07 | DOI: 10.5194/gmd-17-2015-2024
M. Pehl, Felix Schreyer, Gunnar Luderer
Abstract. This paper presents an extension of industry modelling within the REMIND integrated assessment model to individual industry subsectors, together with projections of future subsector activity and energy demand for different baseline scenarios for use with the REMIND model. The industry sector is the largest greenhouse-gas-emitting energy demand sector and is considered a mitigation bottleneck. At the same time, industry subsectors are heterogeneous and face distinct emission mitigation challenges. By extending the multi-region general equilibrium integrated assessment model REMIND to an explicit representation of four industry subsectors (cement, chemicals, steel, and other industry production), along with subsector-specific carbon capture and sequestration (CCS), we are able to investigate industry emission mitigation strategies in the context of the entire energy–economy–climate system, covering mitigation options ranging from reduced demand for industrial goods, fuel switching, and electrification to endogenous energy efficiency increases and carbon capture. We also present the derivation of both activity and final energy demand trajectories for the industry subsectors for use with the REMIND model in baseline scenarios, based on short-term continuation of historic trends and long-term global convergence. The system allows for selective variation of specific subsector activity and final energy demand across scenarios and regions to create consistent scenarios for a wide range of socioeconomic drivers and scenario storylines, such as the Shared Socioeconomic Pathways (SSPs).
Title: Modelling long-term industry energy demand and CO2 emissions in the system context using REMIND (version 3.1.0)
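The baseline trajectory logic described above (short-term continuation of historic trends blended with long-term global convergence) can be sketched with a simple weighting scheme. The growth rate, convergence level, and fade constant below are illustrative assumptions, not REMIND's calibrated values.

```python
# Illustrative sketch of a trajectory that follows a historic growth trend in
# the near term and fades towards a global convergence level in the long term.
# All numbers are made-up placeholders, not REMIND parameters.
import math

def activity(t, a0=100.0, hist_growth=0.03, converge_level=250.0, fade=25.0):
    """Subsector activity t years after the base year (arbitrary units)."""
    w = math.exp(-t / fade)                # weight of the historic trend
    trend = a0 * (1.0 + hist_growth) ** t  # pure trend continuation
    return w * trend + (1.0 - w) * converge_level

print(activity(0))   # base year: equals a0
print(activity(50))  # mid-century: trend and convergence blended
```

Varying `hist_growth` or `converge_level` per region and scenario is the kind of selective variation the abstract refers to.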
Pub Date: 2024-03-05 | DOI: 10.5194/gmd-17-1995-2024
Joffrey Dumont Le Brazidec, P. Vanderbecken, A. Farchi, G. Broquet, G. Kuhlmann, M. Bocquet
Abstract. The quantification of emissions of greenhouse gases and air pollutants through the inversion of plumes in satellite images remains a complex problem that current methods can only assess with significant uncertainties. The anticipated launch of the CO2M (Copernicus Anthropogenic Carbon Dioxide Monitoring) satellite constellation in 2026 is expected to provide high-resolution images of CO2 (carbon dioxide) column-averaged mole fractions (XCO2), opening up new possibilities. However, the inversion of future CO2 plumes from CO2M will encounter various obstacles. One challenge is the low CO2 plume signal-to-noise ratio due to the variability in the background and instrumental errors in satellite measurements. Moreover, uncertainties in the transport and dispersion processes further complicate the inversion task. To address these challenges, deep learning techniques, such as neural networks, offer promising solutions for retrieving emissions from plumes in XCO2 images. Deep learning models can be trained to identify emissions from plume dynamics simulated using a transport model. It then becomes possible to extract relevant information from new plumes and predict their emissions. In this paper, we develop a strategy employing convolutional neural networks (CNNs) to estimate the emission fluxes from a plume in a pseudo-XCO2 image. The dataset used to train and test these methods includes pseudo-images based on simulations of hourly XCO2, NO2 (nitrogen dioxide), and wind fields near various power plants in eastern Germany, tracing plumes from anthropogenic and biogenic sources.
We find that the CNN model outperforms state-of-the-art plume inversion approaches, achieving highly accurate results with an absolute error about half of that of the cross-sectional flux method and an absolute relative error of ∼ 20 % when only the XCO2 and wind fields are used as inputs. Furthermore, we show that our estimations are only slightly affected by the absence of NO2 fields or a detection mechanism as additional information. Finally, interpretability techniques applied to our models confirm that the CNN automatically learns to identify the XCO2 plume and to assess emissions from the plume concentrations. These promising results suggest a high potential of CNNs in estimating local CO2 emissions from satellite images.
Title: Deep learning applied to CO2 power plant emissions quantification using simulated satellite images
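For context, the cross-sectional flux method mentioned above as the baseline can be sketched as a simple transect integral: the emission rate is the line integral of the column mass enhancement across a transect perpendicular to the plume, multiplied by the wind speed normal to the transect. The pixel size, wind speed, and enhancement values below are illustrative.

```python
# Sketch of the cross-sectional flux baseline for plume emission estimation.
# All numbers are illustrative, not values from the study.
import numpy as np

def cross_sectional_flux(enhancement, pixel_width, wind_speed):
    """Emission estimate (kg/s) from one across-plume transect.

    enhancement : column mass enhancement above background per pixel (kg/m^2)
    pixel_width : across-plume pixel size (m)
    wind_speed  : wind component normal to the transect (m/s)
    """
    line_density = np.sum(enhancement) * pixel_width  # kg per metre of plume
    return line_density * wind_speed                  # kg/s

enh = np.array([0.0, 0.002, 0.006, 0.009, 0.005, 0.001, 0.0])  # kg/m^2
print(cross_sectional_flux(enh, pixel_width=2000.0, wind_speed=5.0))
```

The CNN approach replaces this explicit transect geometry with features learned from simulated plume images, which is why it degrades less when the wind field or background is noisy.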
Pub Date: 2024-03-05 | DOI: 10.5194/gmd-17-1975-2024
Fernanda Alvarado-Neves, L. Aillères, Lachlan Grose, Alexander R. Cruden, R. Armit
Abstract. Over the last two decades, there have been significant advances in the 3D modelling of geological structures via the incorporation of geological knowledge into model algorithms. These methods take advantage of different structural data types and do not require manual processing, making them robust and objective. Igneous intrusions have received little attention in 3D modelling workflows, and no current method ensures the reproduction of intrusion shapes comparable to those mapped in the field or in geophysical imagery. Intrusions are usually partly or totally covered, making the generation of realistic 3D models challenging without the modeller's intervention. In this contribution, we present a method to model igneous intrusions in 3D using geometric constraints consistent with emplacement mechanisms. Contact data and the inflation and propagation directions are used to constrain the geometry of the intrusion. Conceptual models of the intrusion contact are fitted to the data, providing a characterisation of the intrusion thickness and width. The method is tested using synthetic and real-world case studies, and the results indicate that it can reproduce expected geometries without manual processing and with restricted datasets. A comparison with radial basis function (RBF) interpolation shows that our method better reproduces complex geometries, such as saucer-shaped sill complexes.
Title: Three-dimensional geological modelling of igneous intrusions in LoopStructural v1.5.10
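The RBF interpolation used as the comparison baseline can be sketched as follows: fit weights so that a sum of kernels centred on the data points passes exactly through scattered contact elevations, then evaluate the sum elsewhere. The Gaussian kernel and length scale are illustrative choices, not LoopStructural's implementation.

```python
# Sketch of Gaussian RBF interpolation of scattered 2D contact data.
# Kernel choice and length scale are illustrative assumptions.
import numpy as np

def rbf_fit_predict(xy, values, xy_new, length_scale=1.0):
    """Interpolate scattered 2D samples with Gaussian RBFs."""
    def kernel(a, b):
        # Pairwise squared distances between point sets a (n,2) and b (m,2).
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length_scale**2))
    weights = np.linalg.solve(kernel(xy, xy), values)  # exact interpolation
    return kernel(xy_new, xy) @ weights

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([0.0, 1.0, 1.0, 2.0])
print(rbf_fit_predict(pts, z, pts))  # reproduces the data at the data points
```

Because such an interpolant has no notion of emplacement mechanisms, it can smooth away features like saucer-shaped sill margins, which is the limitation the comparison above highlights.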
Pub Date: 2024-03-04 | DOI: 10.5194/gmd-17-1957-2024
A. Brodtkorb, A. Benedictow, Heiko Klein, A. Kylling, Agnes Nyiri, Á. Valdebenito, E. Sollum, Nina Kristiansen
Abstract. Accurate modeling of ash clouds from volcanic eruptions requires knowledge of the eruption source parameters, including eruption onset, duration, mass eruption rates, particle size distribution, and vertical-emission profiles. However, most of these parameters are unknown and must be estimated. Some are estimated from observed correlations and known volcano parameters, but a more accurate estimate is often needed to bring the model into closer agreement with observations. This paper describes the inversion procedure implemented at the Norwegian Meteorological Institute for estimating ash emission rates from retrieved satellite ash column amounts and a priori knowledge. The overall procedure consists of five stages: (1) generate a priori emission estimates, (2) run forward simulations with a set of unit emission profiles, (3) collocate/match observations with emission simulations, (4) build a system of linear equations, and (5) solve the overdetermined system. We go through the mathematical foundations of the inversion procedure and its performance for synthetic and real-world cases. The novelties of this paper include a memory-efficient formulation of the inversion problem, a detailed description and illustration of the mathematical formulations, evaluation of the inversion method using synthetic known-truth data as well as real data, and the inclusion of observations of ash cloud-top height. The source code used in this work is freely available under an open-source license and can be used for other similar applications.
Title: Estimating volcanic ash emissions using retrieved satellite ash columns and inverse ash transport modeling using VolcanicAshInversion v1.2.1, within the operational eEMEP (emergency European Monitoring and Evaluation Programme) volcanic plume forecasting system (version rv4_17)
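Stages (2)-(5) described above amount to a regularised linear least-squares problem: forward runs with unit emissions define a matrix mapping emission rates to observed column amounts, and the overdetermined system is solved with a pull towards the a priori estimate. The dimensions and regularisation weight below are illustrative, not the operational configuration.

```python
# Sketch of the linear inversion: min ||Mx - y||^2 + alpha*||x - x_apriori||^2,
# solved by stacking the prior as extra rows of the least-squares system.
# Matrix sizes and alpha are illustrative.
import numpy as np

def invert_emissions(M, y, x_apriori, alpha=0.1):
    """Tikhonov-regularised least squares pulled towards the a priori."""
    n = M.shape[1]
    A = np.vstack([M, np.sqrt(alpha) * np.eye(n)])       # obs rows + prior rows
    b = np.concatenate([y, np.sqrt(alpha) * x_apriori])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

M = np.array([[1.0, 0.2],                                # 3 observations,
              [0.3, 1.0],                                # 2 emission heights
              [0.5, 0.5]])
x_true = np.array([2.0, 4.0])
y = M @ x_true                                           # synthetic known truth
x_hat = invert_emissions(M, y, x_apriori=np.array([2.0, 4.0]))
print(x_hat)
```

With consistent observations and a prior equal to the truth, the solver recovers the true emission rates, which mirrors the synthetic known-truth evaluation described in the paper.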
Pub Date: 2024-03-01 | DOI: 10.5194/gmd-17-1931-2024
Kyoung‐Min Kim, Si-Wan Kim, Seunghwan Seo, Donald R. Blake, Seogju Cho, James H. Crawford, L. Emmons, Alan Fried, J. Herman, Jinkyu Hong, Jinsang Jung, Gabriele G. Pfister, A. Weinheimer, Jung‐Hun Woo, Qiang Zhang
Abstract. In this study, the WRF-Chem v4.4 model was used to evaluate the sensitivity of O3 simulations to three bottom-up emission inventories (EDGAR-HTAP v2 and v3 and KORUS v5) using surface and aircraft data in East Asia during the Korea-United States Air Quality (KORUS-AQ) campaign period in 2016. All emission inventories were found to reproduce the diurnal variations of O3 and its main precursor NO2 when compared with surface monitoring data. However, the spatial distributions of the daily maximum 8 h average (MDA8) O3 in the model do not completely align with the observations. The model MDA8 O3 had a negative (positive) bias north (south) of 30° N over China. All simulations underestimated the observed CO by 50 %–60 % over China and South Korea. In the Seoul Metropolitan Area (SMA), EDGAR-HTAP v2 and v3 and KORUS v5 simulated the vertical shapes and diurnal patterns of O3 and other precursors effectively, but the model underestimated the observed O3, CO, and HCHO concentrations. Notably, the model aromatic volatile organic compounds (VOCs) were significantly underestimated with all three bottom-up emission inventories, although KORUS v5 shows improvements. The model isoprene estimates had a positive bias relative to the observations, suggesting that the Model of Emissions of Gases and Aerosols from Nature (MEGAN) version 2.04 overestimated isoprene emissions. Additional model simulations were conducted by doubling CO and VOC emissions over China and South Korea to investigate the causes of the model O3 biases and the effects of long-range transport on O3 over South Korea. The doubled-emission simulations improved the model O3 for the local-emission-dominant case but led to O3 overestimation for the transport-dominant case, which emphasizes the need for accurate representations of local VOC emissions over South Korea.
{"title":"Sensitivity of the WRF-Chem v4.4 simulations of ozone and formaldehyde and their precursors to multiple bottom-up emission inventories over East Asia during the KORUS-AQ 2016 field campaign","authors":"Kyoung‐Min Kim, Si-Wan Kim, Seunghwan Seo, Donald R. Blake, Seogju Cho, James H. Crawford, L. Emmons, Alan Fried, J. Herman, Jinkyu Hong, Jinsang Jung, Gabriele G. Pfister, A. Weinheimer, Jung‐Hun Woo, Qiang Zhang","doi":"10.5194/gmd-17-1931-2024","DOIUrl":"https://doi.org/10.5194/gmd-17-1931-2024","url":null,"abstract":"Abstract. In this study, the WRF-Chem v4.4 model was utilized to evaluate the sensitivity of O3 simulations with three bottom-up emission inventories (EDGAR-HTAP v2 and v3 and KORUS v5) using surface and aircraft data in East Asia during the Korea-United States Air Quality (KORUS-AQ) campaign period in 2016. All emission inventories were found to reproduce the diurnal variations of O3 and its main precursor NO2 as compared to the surface monitor data. However, the spatial distributions of the daily maximum 8 h average (MDA8) O3 in the model do not completely align with the observations. The model MDA8 O3 had a negative (positive) bias north (south) of 30° N over China. All simulations underestimated the observed CO by 50 %–60 % over China and South Korea. In the Seoul Metropolitan Area (SMA), EDGAR-HTAP v2 and v3 and KORUS v5 simulated the vertical shapes and diurnal patterns of O3 and other precursors effectively, but the model underestimated the observed O3, CO, and HCHO concentrations. Notably, the model aromatic volatile organic compounds (VOCs) were significantly underestimated with the three bottom-up emission inventories, although the KORUS v5 shows improvements. The model isoprene estimations had a positive bias relative to the observations, suggesting that the Model of Emissions of Gases and Aerosols from Nature (MEGAN) version 2.04 overestimated isoprene emissions. 
Additional model simulations were conducted by doubling CO and VOC emissions over China and South Korea to investigate the causes of the model O3 biases and the effects of the long-range transport on the O3 over South Korea. The doubled CO and VOC emission simulations improved the model O3 simulations for the local-emission-dominant case but led to the model O3 overestimations for the transport-dominant case, which emphasizes the need for accurate representations of the local VOC emissions over South Korea.\u0000","PeriodicalId":12799,"journal":{"name":"Geoscientific Model Development","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140087917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
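The MDA8 metric evaluated above is straightforward to compute from an hourly series. The sketch below (plain Python, with a hypothetical idealized diurnal profile; not the paper's evaluation code) illustrates the definition: the maximum over the 8 h running means of a day. Windows are truncated here to those fitting entirely inside the day, whereas regulatory definitions let late-starting windows run into the next day.

```python
def mda8(hourly):
    """MDA8 for a 24-element list of hourly O3 concentrations (ppb).

    Maximum over all 8 h running means (restricted here to windows that
    fit entirely inside the day).
    """
    assert len(hourly) == 24
    return max(sum(hourly[i:i + 8]) / 8.0 for i in range(24 - 8 + 1))

# Idealized afternoon-peaking diurnal cycle (ppb): 30 ppb background,
# linear ramp to a 55 ppb peak at 14:00.
day = [30.0 + 25.0 * max(0.0, 1.0 - abs(h - 14) / 6.0) for h in range(24)]
peak_window_mean = mda8(day)
```

For a flat 50 ppb day `mda8` returns 50; for the idealized profile above it returns the mean of the 8 h window straddling the afternoon peak (about 46.7 ppb).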
Pub Date: 2024-03-01, DOI: 10.5194/gmd-17-1885-2024
S. Vardag, Robert Maiwald
Abstract. To design a monitoring network for estimating CO2 fluxes in an urban area, a high-resolution observing system simulation experiment (OSSE) is performed using the transport model Graz Mesoscale Model (GRAMMv19.1) coupled to the Graz Lagrangian Model (GRALv19.1). First, a high-resolution anthropogenic emission inventory which is considered as the truth serves as input to the model to simulate CO2 concentration in the urban atmosphere on 10 m horizontal resolution in a 12.3 km × 12.3 km domain centred in Heidelberg, Germany. By sampling the CO2 concentration at selected stations and feeding the measurements into a Bayesian inverse framework, CO2 fluxes on a neighbourhood scale are estimated. Different configurations of possible measurement networks are tested to assess the precision of posterior CO2 fluxes. We determine the trade-off between the quality and quantity of sensors by comparing the information content for different set-ups. Decisions on investing in a larger number or in more precise sensors can be based on this result. We further analyse optimal sensor locations for flux estimation using a Monte Carlo approach. We examine the benefit of additionally measuring carbon monoxide (CO). We find that including CO as tracer in the inversion enables the disaggregation of different emission sectors. Finally, we quantify the benefit of introducing a temporal correlation into the prior emissions. The results of this study have implications for an optimal measurement network design for a city like Heidelberg. The study showcases the general usefulness of the inverse framework developed using GRAMM/GRAL for planning and evaluating measurement networks in an urban area.
{"title":"Optimising urban measurement networks for CO2 flux estimation: a high-resolution observing system simulation experiment using GRAMM/GRAL","authors":"S. Vardag, Robert Maiwald","doi":"10.5194/gmd-17-1885-2024","DOIUrl":"https://doi.org/10.5194/gmd-17-1885-2024","url":null,"abstract":"Abstract. To design a monitoring network for estimating CO2 fluxes in an urban area, a high-resolution observing system simulation experiment (OSSE) is performed using the transport model Graz Mesoscale Model (GRAMMv19.1) coupled to the Graz Lagrangian Model (GRALv19.1). First, a high-resolution anthropogenic emission inventory which is considered as the truth serves as input to the model to simulate CO2 concentration in the urban atmosphere on 10 m horizontal resolution in a 12.3 km × 12.3 km domain centred in Heidelberg, Germany. By sampling the CO2 concentration at selected stations and feeding the measurements into a Bayesian inverse framework, CO2 fluxes on a neighbourhood scale are estimated. Different configurations of possible measurement networks are tested to assess the precision of posterior CO2 fluxes. We determine the trade-off between the quality and quantity of sensors by comparing the information content for different set-ups. Decisions on investing in a larger number or in more precise sensors can be based on this result. We further analyse optimal sensor locations for flux estimation using a Monte Carlo approach. We examine the benefit of additionally measuring carbon monoxide (CO). We find that including CO as tracer in the inversion enables the disaggregation of different emission sectors. Finally, we quantify the benefit of introducing a temporal correlation into the prior emissions. The results of this study have implications for an optimal measurement network design for a city like Heidelberg. 
The study showcases the general usefulness of the inverse framework developed using GRAMM/GRAL for planning and evaluating measurement networks in an urban area.\u0000","PeriodicalId":12799,"journal":{"name":"Geoscientific Model Development","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140090979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
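The quality-versus-quantity trade-off discussed above can be illustrated in the simplest possible setting: a single aggregated flux observed by n identical sensors, where Gaussian precisions simply add. The numbers below are hypothetical and the scalar model is far simpler than the GRAMM/GRAL-based inversion, but it shows how information content can be compared across candidate network designs.

```python
import math

def posterior_sigma(prior_sigma, n_sensors, sensor_sigma, footprint=1.0):
    """Posterior standard deviation of one flux observed by n identical
    sensors in a scalar Gaussian inversion: precisions add,
    1/s_post^2 = 1/s_prior^2 + n * (footprint / sensor_sigma)^2.
    """
    precision = 1.0 / prior_sigma**2 + n_sensors * (footprint / sensor_sigma)**2
    return 1.0 / math.sqrt(precision)

def information_content(prior_sigma, post_sigma):
    """Information gain in bits for the scalar case, log2(s_prior / s_post)."""
    return math.log2(prior_sigma / post_sigma)

# Hypothetical trade-off: ten cheap sensors (noise 2.0) versus two precise
# ones (noise 0.5), both against a unit prior flux uncertainty.
cheap = posterior_sigma(1.0, 10, 2.0)
precise = posterior_sigma(1.0, 2, 0.5)
```

With these toy numbers the two precise sensors constrain the flux better than the ten cheap ones (posterior sigma of about 0.33 versus about 0.53); the OSSE performs the same kind of comparison with a full transport model and many flux components.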
Pub Date: 2024-03-01, DOI: 10.5194/gmd-17-1903-2024
Kévin Fourteau, J. Brondex, Fanny Brun, Marie Dumont
Abstract. The surface energy budget drives the melt of the snow cover and glacier ice and its computation is thus of crucial importance in numerical models. This surface energy budget is the result of various surface energy fluxes, which depend on the input meteorological variables and surface temperature; of heat conduction towards the interior of the snow/ice; and potentially of surface melting if the melt temperature is reached. The surface temperature and melt rate of a snowpack or ice are thus driven by coupled processes. In addition, these energy fluxes are non-linear with respect to the surface temperature, making their numerical treatment challenging. To handle this complexity, some of the current numerical models tend to rely on a sequential treatment of the involved physical processes, in which surface fluxes, heat conduction, and melting are treated with some degree of decoupling. Similarly, some models do not explicitly define a surface temperature and rather use the temperature of the internal point closest to the surface instead. While these kinds of approaches simplify the implementation and increase the modularity of models, they can also introduce several problems, such as instabilities and mesh sensitivity. Here, we present a numerical methodology to treat the surface and internal energy budgets of snowpacks and glaciers in a tightly coupled manner, including potential surface melting when the melt temperature is reached. Specific care is provided to ensure that the proposed numerical scheme is as fast and robust as classical numerical treatment of the surface energy budget. Comparisons based on simple test cases show that the proposed methodology yields smaller errors for almost all time steps and mesh sizes considered and does not suffer from numerical instabilities, contrary to some classical treatments.
{"title":"A novel numerical implementation for the surface energy budget of melting snowpacks and glaciers","authors":"Kévin Fourteau, J. Brondex, Fanny Brun, Marie Dumont","doi":"10.5194/gmd-17-1903-2024","DOIUrl":"https://doi.org/10.5194/gmd-17-1903-2024","url":null,"abstract":"Abstract. The surface energy budget drives the melt of the snow cover and glacier ice and its computation is thus of crucial importance in numerical models. This surface energy budget is the result of various surface energy fluxes, which depend on the input meteorological variables and surface temperature; of heat conduction towards the interior of the snow/ice; and potentially of surface melting if the melt temperature is reached. The surface temperature and melt rate of a snowpack or ice are thus driven by coupled processes. In addition, these energy fluxes are non-linear with respect to the surface temperature, making their numerical treatment challenging. To handle this complexity, some of the current numerical models tend to rely on a sequential treatment of the involved physical processes, in which surface fluxes, heat conduction, and melting are treated with some degree of decoupling. Similarly, some models do not explicitly define a surface temperature and rather use the temperature of the internal point closest to the surface instead. While these kinds of approaches simplify the implementation and increase the modularity of models, they can also introduce several problems, such as instabilities and mesh sensitivity. Here, we present a numerical methodology to treat the surface and internal energy budgets of snowpacks and glaciers in a tightly coupled manner, including potential surface melting when the melt temperature is reached. Specific care is provided to ensure that the proposed numerical scheme is as fast and robust as classical numerical treatment of the surface energy budget. 
Comparisons based on simple test cases show that the proposed methodology yields smaller errors for almost all time steps and mesh sizes considered and does not suffer from numerical instabilities, contrary to some classical treatments.\u0000","PeriodicalId":12799,"journal":{"name":"Geoscientific Model Development","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140083327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
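A tightly coupled treatment of the surface energy budget can be sketched as a Newton iteration on the surface temperature, with melting handled by clamping the temperature at the melt point and diagnosing the leftover flux as melt energy. The turbulent-flux and conduction terms below are toy linearizations with hypothetical coefficients; this illustrates the coupled approach in general, not the paper's scheme.

```python
SIGMA = 5.67e-8    # Stefan-Boltzmann constant (W m-2 K-4)
T_MELT = 273.15    # melt temperature (K)
EMISSIVITY = 0.99  # snow/ice longwave emissivity
C_TURB = 15.0      # bulk sensible-heat exchange coefficient (W m-2 K-1)
K_COND = 50.0      # effective conduction coefficient (W m-2 K-1)

def surface_budget(ts, sw_net, lw_down, t_air, t_interior):
    """Net surface energy flux (W m-2) at surface temperature ts (K)."""
    lw_up = EMISSIVITY * SIGMA * ts**4
    turb = C_TURB * (ts - t_air)        # sensible heat loss to the air
    cond = K_COND * (ts - t_interior)   # conduction into the snow/ice
    return sw_net + EMISSIVITY * lw_down - lw_up - turb - cond

def solve_surface(sw_net, lw_down, t_air, t_interior):
    """Return (ts, melt_energy): Newton solve of surface_budget(ts) = 0,
    clamped at T_MELT with the residual flux available for melting."""
    ts = t_air  # initial guess
    for _ in range(50):
        f = surface_budget(ts, sw_net, lw_down, t_air, t_interior)
        # analytic derivative of the budget with respect to ts
        df = -(4.0 * EMISSIVITY * SIGMA * ts**3 + C_TURB + K_COND)
        step = -f / df
        ts += step
        if abs(step) < 1e-8:
            break
    if ts > T_MELT:
        melt = surface_budget(T_MELT, sw_net, lw_down, t_air, t_interior)
        return T_MELT, melt
    return ts, 0.0
```

Because every flux term is evaluated at the same surface temperature inside one solve, the scheme avoids the lagged, sequential flux/conduction/melt updates that the abstract identifies as a source of instabilities and mesh sensitivity.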
Pub Date: 2024-02-29, DOI: 10.5194/gmd-17-1869-2024
K. Findell, Zun Yin, Eunkyo Seo, P. Dirmeyer, Nathan P. Arnold, Nathaniel Chaney, Megan D. Fowler, Meng-Tian Huang, David M. Lawrence, Po-Lun Ma, Joseph A. Santanello Jr.
Abstract. Land–atmosphere (L–A) interactions are important for understanding convective processes, climate feedbacks, the development and perpetuation of droughts, heatwaves, pluvials, and other land-centered climate anomalies. Local L–A coupling (LoCo) metrics capture relevant L–A processes, highlighting the impact of soil and vegetation states on surface flux partitioning and the impact of surface fluxes on boundary layer (BL) growth and development and the entrainment of air above the BL. A primary goal of the Climate Process Team in the Coupling Land and Atmospheric Subgrid Parameterizations (CLASP) project is parameterizing and characterizing the impact of subgrid heterogeneity in global and regional Earth system models (ESMs) to improve the connection between land and atmospheric states and processes. A critical step in achieving that aim is the incorporation of L–A metrics, especially LoCo metrics, into climate model diagnostic process streams. However, because land–atmosphere interactions span timescales of minutes (e.g., turbulent fluxes), hours (e.g., BL growth and decay), days (e.g., soil moisture memory), and seasons (e.g., variability in behavioral regimes between soil moisture and latent heat flux), with multiple processes of interest happening in different geographic regions at different times of year, there is not a single metric that captures all the modes, means, and methods of interaction between the land and the atmosphere. And while monthly means of most of the LoCo-relevant variables are routinely saved from ESM simulations, data storage constraints typically preclude routine archival of the hourly data that would enable the calculation of all LoCo metrics. Here, we outline a reasonable data request that would allow for adequate characterization of sub-daily coupling processes between the land and the atmosphere, preserving enough sub-daily output to describe, analyze, and better understand L–A coupling in modern climate models. 
A secondary request involves embedding calculations within the models to determine mean properties in and above the BL to further improve characterization of model behavior. Higher-frequency model output will (i) allow for more direct comparison with observational field campaigns on process-relevant timescales, (ii) enable demonstration of inter-model spread in L–A coupling processes, and (iii) aid in targeted identification of sources of deficiencies and opportunities for improvement of the models.
{"title":"Accurate assessment of land–atmosphere coupling in climate models requires high-frequency data output","authors":"K. Findell, Zun Yin, Eunkyo Seo, P. Dirmeyer, Nathan P. Arnold, Nathaniel Chaney, Megan D. Fowler, Meng-Tian Huang, David M. Lawrence, Po-Lun Ma, Joseph A. Santanello Jr.","doi":"10.5194/gmd-17-1869-2024","DOIUrl":"https://doi.org/10.5194/gmd-17-1869-2024","url":null,"abstract":"Abstract. Land–atmosphere (L–A) interactions are important for understanding convective processes, climate feedbacks, the development and perpetuation of droughts, heatwaves, pluvials, and other land-centered climate anomalies. Local L–A coupling (LoCo) metrics capture relevant L–A processes, highlighting the impact of soil and vegetation states on surface flux partitioning and the impact of surface fluxes on boundary layer (BL) growth and development and the entrainment of air above the BL. A primary goal of the Climate Process Team in the Coupling Land and Atmospheric Subgrid Parameterizations (CLASP) project is parameterizing and characterizing the impact of subgrid heterogeneity in global and regional Earth system models (ESMs) to improve the connection between land and atmospheric states and processes. A critical step in achieving that aim is the incorporation of L–A metrics, especially LoCo metrics, into climate model diagnostic process streams. However, because land–atmosphere interactions span timescales of minutes (e.g., turbulent fluxes), hours (e.g., BL growth and decay), days (e.g., soil moisture memory), and seasons (e.g., variability in behavioral regimes between soil moisture and latent heat flux), with multiple processes of interest happening in different geographic regions at different times of year, there is not a single metric that captures all the modes, means, and methods of interaction between the land and the atmosphere. 
And while monthly means of most of the LoCo-relevant variables are routinely saved from ESM simulations, data storage constraints typically preclude routine archival of the hourly data that would enable the calculation of all LoCo metrics. Here, we outline a reasonable data request that would allow for adequate characterization of sub-daily coupling processes between the land and the atmosphere, preserving enough sub-daily output to describe, analyze, and better understand L–A coupling in modern climate models. A secondary request involves embedding calculations within the models to determine mean properties in and above the BL to further improve characterization of model behavior. Higher-frequency model output will (i) allow for more direct comparison with observational field campaigns on process-relevant timescales, (ii) enable demonstration of inter-model spread in L–A coupling processes, and (iii) aid in targeted identification of sources of deficiencies and opportunities for improvement of the models.\u0000","PeriodicalId":12799,"journal":{"name":"Geoscientific Model Development","volume":null,"pages":null},"PeriodicalIF":5.1,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140415051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
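One example of the covariance-based coupling diagnostics at stake is a terrestrial coupling index of the form sigma(SM) * d(LH)/d(SM), the regression slope of surface flux on soil moisture scaled by soil-moisture variability. The sketch below uses synthetic values and a hypothetical slope; computing such statistics on process-relevant timescales is exactly what the requested higher-frequency output enables.

```python
import math

def terrestrial_coupling_index(sm, lh):
    """sigma(SM) times the regression slope of flux on soil moisture,
    which reduces to cov(SM, LH) / sigma(SM). Population statistics;
    sm and lh are equal-length numeric sequences."""
    n = len(sm)
    m_sm, m_lh = sum(sm) / n, sum(lh) / n
    cov = sum((s - m_sm) * (l - m_lh) for s, l in zip(sm, lh)) / n
    sigma_sm = math.sqrt(sum((s - m_sm) ** 2 for s in sm) / n)
    return cov / sigma_sm

# Synthetic series: latent heat flux (W m-2) perfectly tracking soil moisture.
sm = [0.10, 0.20, 0.30, 0.40]   # soil moisture (m3 m-3)
lh = [200.0 * s for s in sm]    # LH = 200 * SM, a hypothetical slope
tci = terrestrial_coupling_index(sm, lh)
```

A positive index indicates a moisture-limited regime in which soil moisture variability projects onto flux variability; for flux series uncorrelated with soil moisture the index is zero.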