Pub Date: 2021-04-26 | DOI: 10.21203/RS.3.RS-444318/V1
Pedro H. C. Avelar, L. Lamb, S. Tsoka, Jonathan Cardoso-Silva
Background: The novel coronavirus pandemic has affected Brazil's Santa Catarina State (SC) severely. At the time of writing (24 March 2021), over 764,000 cases and over 9,800 deaths by COVID-19 had been confirmed, hospitals were fully occupied, and local news reported at least 397 people on the waiting list for an ICU bed. Despite initial state-wide measures at the outbreak of the pandemic, the state government passed most responsibilities down to city governments, leaving them to plan whether and when to apply Non-Pharmaceutical Interventions (NPIs). In an attempt to better inform local policy making, we applied an existing Bayesian algorithm to model the spread of the pandemic in the seven geographic macro-regions of the state. However, as we found that the model was too reactive to changes in data trends, here we propose changes to extend the model and improve its forecasting capabilities.
Methods: Our four proposed variations of the original method make use of daily reported infection data and take under-reporting of cases into account more explicitly. Two of the proposed versions also attempt to model the delay in test reporting. We simulated weekly forecasting of deaths over the period from 31/05/2020 until 31/01/2021. First-week data were used as a cold start for the algorithm, after which weekly calibrations of the model were able to converge in fewer iterations. Google Mobility data were used as covariates to the model, as well as to estimate the susceptible population at each simulated run.
Findings: The changes made the model significantly less reactive and more rapid in adapting to scenarios after a peak in deaths is observed. Assuming that cases are under-reported greatly benefited the stability of the model, and modelling retroactively added data (due to the "hot" nature of the data used) had a negligible impact on performance.
Interpretation: Although not as reliable as death statistics, case statistics, when modelled in conjunction with an overestimate parameter, provide a good alternative for improving model forecasts, especially in long-range predictions and after the peak of an infection wave.
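The abstract describes correcting reported case counts for under-reporting before using them to deplete the susceptible pool. A minimal sketch of that idea follows; the constant `reporting_rate` and both function names are assumptions for illustration, not the authors' actual model, which infers the correction within a Bayesian framework.

```python
# Illustrative sketch (not the paper's Bayesian model): scale reported
# cases by an assumed constant reporting rate, then deplete the
# susceptible pool by the adjusted cumulative infections.

def adjust_for_underreporting(reported_cases, reporting_rate=0.5):
    """Estimate true infections from reported cases, assuming a constant
    fraction `reporting_rate` of infections is ever reported."""
    if not 0 < reporting_rate <= 1:
        raise ValueError("reporting_rate must be in (0, 1]")
    return [c / reporting_rate for c in reported_cases]

def susceptible_series(population, reported_cases, reporting_rate=0.5):
    """Remaining susceptible population over time after subtracting the
    estimated (adjusted) cumulative infections."""
    susceptible, cumulative = [], 0.0
    for true_cases in adjust_for_underreporting(reported_cases, reporting_rate):
        cumulative += true_cases
        susceptible.append(max(population - cumulative, 0.0))
    return susceptible
```

With a reporting rate of 0.5, every reported case stands for two estimated infections, so the susceptible pool shrinks twice as fast as the raw counts alone would suggest.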
Title: "Weekly Bayesian Modelling Strategy to Predict Deaths by COVID-19: a Model and Case Study for the State of Santa Catarina, Brazil" (arXiv: Applications)
A. Banerjee, Arun G. Chandrasekhar, S. Dalpath, E. Duflo, J. Floretta, M. Jackson, Harini Kannan, F. Loza, Anirudh Sankar, A. Schrimpf, Maheshwor Shrestha
We evaluate a large-scale set of interventions to increase demand for immunization in Haryana, India. The policies under consideration include the two most frequently discussed tools--reminders and incentives--as well as an intervention inspired by the networks literature. We cross-randomize whether (a) individuals receive SMS reminders about upcoming vaccination drives; (b) individuals receive incentives for vaccinating their children; (c) influential individuals (information hubs, trusted individuals, or both) are asked to act as "ambassadors" receiving regular reminders to spread the word about immunization in their community. By taking into account different versions (or "dosages") of each intervention, we obtain 75 unique policy combinations. We develop a new statistical technique--a smart pooling and pruning procedure--for finding a best policy from a large set, which also determines which policies are effective and the effect of the best policy. We proceed in two steps. First, we use a LASSO technique to collapse the data: we pool dosages of the same treatment if the data cannot reject that they had the same impact, and prune policies deemed ineffective. Second, using the remaining (pooled) policies, we estimate the effect of the best policy, accounting for the winner's curse. The key outcomes are (i) the number of measles immunizations and (ii) the number of immunizations per dollar spent. The policy that has the largest impact (information hubs, SMS reminders, incentives that increase with each immunization) increases the number of immunizations by 44% relative to the status quo. The most cost-effective policy (information hubs, SMS reminders, no incentives) increases the number of immunizations per dollar by 9.1%.
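The two-step procedure above can be caricatured in a few lines. This is a greatly simplified, hypothetical illustration of the pooling-and-pruning idea only (pool dosages whose estimated effects are statistically indistinguishable, then prune groups indistinguishable from zero); it is not the authors' LASSO-based procedure, and the greedy merge and rough pooled standard error are assumptions.

```python
# Simplified illustration of pooling and pruning (not the paper's
# LASSO technique): merge consecutive dosage estimates whose difference
# is within sampling noise, then drop pooled groups whose effect is
# indistinguishable from zero.
import math

def pool_dosages(estimates, std_errors, z_crit=1.96):
    """Return (pooled_mean, pooled_se) pairs for the surviving groups."""
    groups = []  # each entry: (mean, se, n_pooled)
    for est, se in zip(estimates, std_errors):
        if groups:
            mean, g_se, n = groups[-1]
            z = abs(est - mean) / math.sqrt(se**2 + g_se**2)
            if z < z_crit:  # cannot reject equality: pool with last group
                new_n = n + 1
                new_mean = (mean * n + est) / new_n
                new_se = math.sqrt(g_se**2 * n**2 + se**2) / new_n  # rough
                groups[-1] = (new_mean, new_se, new_n)
                continue
        groups.append((est, se, 1))
    # prune: keep only groups whose effect is distinguishable from zero
    return [(m, s) for m, s, _ in groups if abs(m) / s >= z_crit]
```

Two near-identical dosage effects collapse into one pooled policy, shrinking the candidate set before the best-policy estimation step.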
Title: "Selecting the Most Effective Nudge: Evidence from a Large-Scale Experiment on Immunization" | Pub Date: 2021-04-19 | DOI: 10.3386/W28726
Pub Date: 2020-12-22 | DOI: 10.21203/RS.3.RS-137557/V1
Xian Yang, Shuo Wang, Yuting Xing, Ling Li, R. Xu, Karl J. Friston, Yike Guo
In epidemiological modelling, the instantaneous reproduction number, Rt, is important for understanding the transmission dynamics of infectious diseases. Current Rt estimates often suffer from problems such as lagging, averaging and uncertainties, which diminish the usefulness of Rt. To address these problems, we propose a new method in the framework of sequential Bayesian inference, where a Data Assimilation approach is taken for Rt estimation, resulting in the state-of-the-art 'DARt' system. With DARt, the problem of time misalignment caused by lagging observations is tackled by incorporating observation delays into the joint inference of infections and Rt; the drawback of averaging is addressed by instantaneous updating upon new observations and a model selection mechanism capturing abrupt changes caused by interventions; and the uncertainty is quantified and reduced by employing Bayesian smoothing. We validate the performance of DARt through simulations and demonstrate its power in revealing the transmission dynamics of COVID-19.
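For orientation, the quantity being estimated can be illustrated with the classic renewal-equation point estimate of Rt (a Cori-style calculation, far simpler than the sequential Bayesian DARt system described above, with no delay modelling or smoothing):

```python
# Minimal renewal-equation point estimate: Rt = I_t / sum_s w_s * I_{t-1-s},
# where w is the generation-interval distribution. An illustration only,
# not the DARt system.

def rt_point_estimates(incidence, gen_interval):
    """Return naive Rt estimates for each day t >= len(gen_interval)."""
    total = sum(gen_interval)
    w = [x / total for x in gen_interval]  # normalise to a pmf
    rts = []
    for t in range(len(w), len(incidence)):
        # weighted sum of recent incidence = current infectious pressure
        pressure = sum(w[s] * incidence[t - 1 - s] for s in range(len(w)))
        rts.append(incidence[t] / pressure if pressure else float("nan"))
    return rts
```

An incidence series doubling each step with a one-day generation interval yields Rt = 2 throughout; this naive estimator exhibits exactly the lagging and noise sensitivity that motivate the smoothing and delay handling in DARt.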
Title: "Revealing the Transmission Dynamics of COVID-19: A Bayesian Framework for Rt Estimation"
J. Breidenbach, J. Ivanovs, A. Kangas, T. Nord‐Larsen, M. Nilson, R. Astrup
Policy measures and management decisions aiming at enhancing the role of forests in mitigating climate change require reliable estimates of C-stock dynamics in greenhouse gas inventories (GHGIs). The aim of this study was to assemble design-based estimators that provide estimates relevant for GHGIs using national forest inventory (NFI) data. We improve basic expansion (BE) estimates of living-biomass C-stock loss, which use field data only, by leveraging remotely sensed auxiliary data in model-assisted (MA) estimates. Our case studies from Norway, Sweden, Denmark, and Latvia covered an area of >70 Mha. Landsat-based Forest Cover Loss (FCL) and one-time wall-to-wall airborne laser scanning (ALS) data served as auxiliary data. ALS provided information on the C-stock before a potential disturbance indicated by FCL. The use of FCL in MA estimators resulted in considerable efficiency gains, which in most cases were further increased by additionally using ALS. A doubling of efficiency was possible for national estimates, and even larger efficiencies were observed at the sub-national level. Average annual estimates were considerably more precise than pooled estimates using NFI data from all years at once. The combination of remotely sensed and NFI field data yields reliable estimates, which is not necessarily the case when using remotely sensed data without reference observations.
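The general shape of a model-assisted estimator can be sketched as follows. This is the textbook difference estimator (wall-to-wall predictions plus a bias correction from field-plot residuals), shown as a generic illustration under simple random sampling; it is not the paper's exact GHGI estimators, and the function name is an assumption.

```python
# Generic model-assisted (difference) estimator sketch: combine model
# predictions available for every population unit with a correction
# from residuals observed on the field sample. Not the paper's exact
# estimators.

def model_assisted_mean(pred_population, y_sample, pred_sample):
    """MA estimate of the population mean: mean prediction over the full
    population plus the mean residual (y - prediction) on the sample."""
    mean_pred = sum(pred_population) / len(pred_population)
    mean_resid = sum(y - p for y, p in zip(y_sample, pred_sample)) / len(y_sample)
    return mean_pred + mean_resid
```

The residual term keeps the estimator design-unbiased even when the remote-sensing model is poor, which is exactly why reference field observations matter in the paper's final sentence.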
Title: "Improving living biomass C-stock loss estimates by combining optical satellite, airborne laser scanning, and NFI data" | Pub Date: 2020-12-14 | DOI: 10.1139/CJFR-2020-0518
C. Armero, G. Garc'ia-Donato, Joaqu'in Jim'enez-Puerto, Salvador Pardo-Gord'o, J. Bernabeu
Dating is a key element for archaeologists. We propose a Bayesian approach to provide a chronology for sites that have neither radiocarbon dating nor clear stratigraphy and whose only information comes from lithic arrowheads. This classifier is based on the Dirichlet-multinomial inferential process and posterior predictive distributions. The procedure is applied to predict the period of a set of undated sites located in the east of the Iberian Peninsula during the IVth and IIIrd millennium cal. BC.
Title: "Bayesian classification for dating archaeological sites via projectile points" | Pub Date: 2020-12-01 | DOI: 10.2436/20.8080.02.108
Pub Date: 2020-11-23 | DOI: 10.1061/(ASCE)EM.1943-7889.0001996
Anindya Bhaduri, C. Meyer, J. Gillespie, B. Haque, M. Shields, L. Graham‐Brady
Discrete response of structures is often a key probabilistic quantity of interest. For example, one may need to identify the probability of a binary event, such as whether a structure has buckled or not. In this study, an adaptive domain-based decomposition and classification method, combined with sparse grid sampling, is used to develop an efficient classification surrogate modeling algorithm for such discrete outputs. An assumption of monotonic behaviour of the output with respect to all model parameters, based on the physics of the problem, helps to reduce the number of model evaluations and makes the algorithm more efficient. As an application problem, this paper develops a computational framework for generating the probabilistic penetration response of S-2 glass/SC-15 epoxy composite plates under ballistic impact. This enables the computationally feasible generation of the probabilistic velocity response (PVR) curve, or the $V_0-V_{100}$ curve, as a function of the impact velocity, and the prediction of the ballistic limit velocity as a function of the model parameters. The PVR curve incorporates the variability of the model input parameters and describes the probability of penetration of the plate as a function of impact velocity.
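Why monotonicity saves model evaluations can be seen in one dimension: if penetration is monotone in impact velocity, the ballistic limit can be bracketed by bisection instead of dense sampling. The sketch below is only this one-dimensional illustration, not the paper's adaptive domain-decomposition algorithm, and the function names are assumptions.

```python
# Illustration of how monotonicity cuts evaluations: bisection on a
# monotone binary response to locate the penetration threshold.

def ballistic_limit(penetrates, v_lo, v_hi, tol=1.0):
    """Find the velocity where `penetrates` flips False -> True, assuming
    it is monotone in velocity; returns the final bracket midpoint."""
    assert not penetrates(v_lo) and penetrates(v_hi), "limit must be bracketed"
    while v_hi - v_lo > tol:
        mid = 0.5 * (v_lo + v_hi)
        if penetrates(mid):
            v_hi = mid  # monotonicity: all velocities above mid penetrate
        else:
            v_lo = mid  # all velocities below mid do not penetrate
    return 0.5 * (v_lo + v_hi)
```

Each expensive impact simulation halves the uncertainty interval, so the threshold is located in O(log(range/tol)) evaluations rather than a full sweep; the paper extends the same idea to many parameters at once.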
Title: "Probabilistic modeling of discrete structural response with application to composite plate penetration models"
Pub Date: 2020-11-19 | DOI: 10.1101/2020.11.19.20235036
M. Mieskolainen, R. Bainbridge, O. Buchmueller, L. Lyons, N. Wardle
The determination of the infection fatality rate (IFR) for the novel SARS-CoV-2 coronavirus is a key aim for many of the field studies that are currently being undertaken in response to the pandemic. The IFR and the basic reproduction number R0 are the main epidemic parameters describing the severity and transmissibility of the virus, respectively. The IFR can also be used as a basis for estimating and monitoring the number of infected individuals in a population, which may subsequently be used to inform policy decisions relating to public health interventions and lockdown strategies. The interpretation of IFR measurements requires the calculation of confidence intervals. We present a number of statistical methods that are relevant in this context and develop an inverse problem formulation to determine correction factors to mitigate time-dependent effects that can lead to biased IFR estimates. We also review a number of methods to combine IFR estimates from multiple independent studies, provide example calculations throughout this note and conclude with a summary and "best practice" recommendations. The developed code is available online.
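One standard construction for such confidence intervals, shown here as a generic illustration rather than the note's own code, is the Wilson score interval for the IFR treated as a binomial proportion (deaths out of known infections):

```python
# Wilson score interval for a binomial proportion, applied to
# IFR = deaths / infections. A generic textbook construction, not the
# note's own implementation; it also ignores the time-dependent biases
# the note corrects for.
import math

def wilson_interval(deaths, infections, z=1.96):
    """Approximate 95% Wilson score interval (for z = 1.96)."""
    n = infections
    p = deaths / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half
```

Unlike the naive Wald interval, the Wilson interval stays sensible for the small death counts typical of seroprevalence cohorts, never dropping below zero.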
Title: "Statistical techniques to estimate the SARS-CoV-2 infection fatality rate"
Diameter at breast height (DBH) distributions offer valuable information for operational and strategic forest management decisions. We predicted DBH distributions using Norwegian national forest inventory and airborne laser scanning data in an 8.7 Mha study area and compared the predictive performance of parameter prediction methods using linear mixed-effects models (PPM) and generalized linear mixed models (GLM), and a k-nearest-neighbor (NN) approach. With PPM and GLM, it was assumed that the data follow a truncated Weibull distribution. While GLM resulted in slightly smaller errors than PPM, both were clearly outperformed by NN. We applied NN to study the variance of model-assisted (MA) estimates of the DBH distribution in the whole study area. The MA estimator yielded efficiencies greater than or almost equal to those of the direct estimator in the 2 cm DBH classes (6, 8, ..., 50 cm), where relative efficiencies (REs) varied in the range of 0.97$-$1.63. RE was largest in the DBH classes $\leq$ 10 cm and decreased towards the right tail of the distribution. A forest mask and tree species map introduced further uncertainty beyond the DBH distribution model, which reduced REs to 0.97$-$1.50.
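The distributional assumption behind PPM and GLM, a left-truncated Weibull, can be sketched via inverse-CDF sampling. The parameter values below are hypothetical; in the paper the parameters are predicted from ALS data, which is not reproduced here.

```python
# Sketch of sampling a left-truncated Weibull DBH (illustration with
# made-up parameters, not the paper's fitted models). Uses the inverse
# CDF of the truncated distribution.
import math
import random

def sample_trunc_weibull(shape, scale, trunc, rng=random):
    """Draw one DBH from Weibull(shape, scale) left-truncated at `trunc`:
    x = scale * ((trunc/scale)^shape - ln(1 - u))^(1/shape), u ~ U(0,1)."""
    u = rng.random()
    return scale * ((trunc / scale) ** shape - math.log(1 - u)) ** (1 / shape)
```

At u = 0 the formula returns exactly the truncation point, so no simulated tree falls below the inventory's minimum DBH.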
Title: "Prediction and model-assisted estimation of diameter distributions using Norwegian national forest inventory and airborne laser scanning data" | Authors: Janne Raty, R. Astrup, J. Breidenbach | Pub Date: 2020-10-14 | DOI: 10.1139/CJFR-2020-0440
Sanjib Sharma, Michael Gomez, K. Keller, R. Nicholas, A. Mejia
Flood-related risks to people and property are expected to increase in the future due to environmental and demographic changes. It is important to quantify and effectively communicate flood hazards and exposure to inform the design and implementation of flood risk management strategies. Here we develop an integrated modeling framework to assess projected changes in regional riverine flood inundation risks. The framework samples climate model outputs to force a hydrologic model and generate streamflow projections. Together with a statistical and hydraulic model, we use the projected streamflow to map the uncertainty of flood inundation projections for extreme flood events. We implement the framework for rivers across the state of Pennsylvania, United States. Our projections suggest that flood hazards and exposure across Pennsylvania are overall increasing with future climate change. Specific regions, including the main stem of the Susquehanna River, the lower portion of the Allegheny basin, and the central portion of the Delaware River basin, show higher flood inundation risks. In our analysis, the climate uncertainty dominates the overall uncertainty surrounding the flood inundation projection chain. The combined hydrologic and hydraulic uncertainties can account for as much as 37% of the total uncertainty. We discuss how this framework can provide regional and dynamic flood-risk assessments and help to inform the design of risk-management strategies.
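Attributing projection uncertainty to climate forcing versus the downstream hydrologic/hydraulic chain is typically done with a variance decomposition. The sketch below applies the law of total variance to a made-up ensemble; it is an illustration of the kind of attribution described above, not the paper's computation.

```python
# Illustrative variance decomposition (law of total variance) over an
# ensemble: between-climate-model variance vs. the average variance
# contributed by the hydrologic/hydraulic chain. Data are made up.
from statistics import mean, pvariance

def decompose_uncertainty(projections):
    """projections: {climate_model: [flood depths across hydrologic and
    hydraulic ensemble members]}. Returns (between, within) variances."""
    group_means = [mean(v) for v in projections.values()]
    between = pvariance(group_means)                          # climate share
    within = mean(pvariance(v) for v in projections.values()) # chain share
    return between, within
```

The ratio within / (between + within) is the chain's share of total uncertainty, the quantity reported as "as much as 37%" in the abstract.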
Regional Flood Risk Projections under Climate Change
Sanjib Sharma, Michael Gomez, K. Keller, R. Nicholas, A. Mejia
Pub Date : 2020-10-10 DOI: 10.1175/JHM-D-20-0238.1
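The abstract above attributes most projection uncertainty to the climate models, with hydrologic and hydraulic sources together reaching about 37% of the total. A minimal sketch of such a variance-based decomposition, under an additive-error assumption and with purely hypothetical spreads for each source (none of these numbers come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensembles of flood-depth perturbations (m), one per
# uncertainty source. The standard deviations are illustrative only.
n = 10_000
climate = rng.normal(0.0, 1.0, n)      # climate-model spread
hydrologic = rng.normal(0.0, 0.55, n)  # hydrologic-parameter spread
hydraulic = rng.normal(0.0, 0.45, n)   # hydraulic/roughness spread

# Additive, independent errors: variances sum, and each source's
# share of total variance measures its contribution to uncertainty.
total_var = climate.var() + hydrologic.var() + hydraulic.var()
share_climate = climate.var() / total_var
share_hydro = (hydrologic.var() + hydraulic.var()) / total_var
```

With these illustrative spreads the climate share dominates and the combined hydrologic-plus-hydraulic share lands around a third, in the same ballpark as the figure quoted above; the real framework derives these shares from full model chains rather than synthetic Gaussians.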
Pub Date : 2020-10-09 DOI: 10.21203/rs.3.rs-96544/v1
Konstantinos Demertzis, L. Magafas, D. Tsiotas
The global crisis caused by the COVID-19 pandemic, together with its economic consequences and the collapse of health systems, has raised serious concerns in Europe, the continent most affected by the pandemic: it has recorded 2,388,694 cases and 190,091 deaths (39.6% of the worldwide total), of which 71.7% (136,238) occurred in the United Kingdom (43,414), Italy (34,708), France (29,778), and Spain (28,338). Unlike other countries, Greece, with about 310 confirmed cases and 18 deaths per million inhabitants, stands out as a notable exception in the study and analysis of this phenomenon. Focusing on the peculiarities of the disease's spread in Greece, in both epidemiological and implementation terms, this paper applies an exploratory analysis of the temporal spread of COVID-19 in Greece and proposes a methodological approach for modelling and predicting the disease based on the Regression Splines algorithm and the change rate of total infections. It also proposes a hybrid spline-regression and complex-network model of social-distancing measures for evaluating and interpreting the spread of the disease. The overall approach contributes to decision making, supports the public health system, and aids the fight against the pandemic.
Flattening the COVID-19 Curve: The “Greek” case in the Global Pandemic
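The regression-splines-plus-change-rate idea in the abstract above can be illustrated with a truncated-power-basis fit to a cumulative-case series. The series below is a synthetic logistic-shaped epidemic curve and the knot placement is a hypothetical choice, not the authors' configuration or Greek data:

```python
import numpy as np

def spline_design(t, knots, degree=3):
    """Truncated-power-basis design matrix for regression splines."""
    cols = [t ** d for d in range(degree + 1)]
    cols += [np.clip(t - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

# Synthetic cumulative-infection series (logistic-shaped, inflection at day 30).
t = np.arange(60, dtype=float)
cases = 5000.0 / (1.0 + np.exp(-(t - 30.0) / 5.0))

# Fit a cubic regression spline with hypothetical knots by least squares.
X = spline_design(t, knots=[15.0, 30.0, 45.0])
beta, *_ = np.linalg.lstsq(X, cases, rcond=None)
fitted = X @ beta

# Change rate of total infections: first difference of the fitted curve.
# Its peak marks the day of fastest spread (the "flattening" target).
change_rate = np.diff(fitted)
peak_day = int(np.argmax(change_rate))
```

Differencing the smoothed fit rather than the raw counts suppresses day-to-day reporting noise, which is the practical reason for fitting a spline before computing the change rate.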