The American Indian war lasted over one hundred years and is a major event in the history of North America. As expected for a conflict that commenced in the late eighteenth century, its casualty records contain numerous sources of error, such as rounding and counting errors. Additionally, while major battles such as the Battle of the Little Bighorn were recorded, many smaller skirmishes were omitted from the records entirely. Over the last few decades, it has been observed that the number of casualties in major conflicts follows a power law distribution. This paper places this observation within the Bayesian paradigm, enabling the different error sources to be modelled and inferences to be made about the overall casualty numbers in the American Indian war.
{"title":"Estimating the number of casualties in the American Indian war: a Bayesian analysis using the power law distribution","authors":"C. Gillespie","doi":"10.1214/17-AOAS1082","DOIUrl":"https://doi.org/10.1214/17-AOAS1082","url":null,"abstract":"The American Indian war lasted over one hundred years, and is a major event in the history of North America. As expected, since the war commenced in late eighteenth century, casualty records surrounding this conflict contain numerous sources of error, such as rounding and counting. Additionally, while major battles such as the Battle of the Little Bighorn were recorded, many smaller skirmishes were completely omitted from the records. Over the last few decades, it has been observed that the number of casualties in major conflicts follows a power law distribution. This paper places this observation within the Bayesian paradigm, enabling modelling of different error sources, allowing inferences to be made about the overall casualty numbers in the American Indian war.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123497088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In an effort to combat ad annoyance in mobile apps, publishers have introduced a new ad format called "Incentivized Advertising" or "Rewarded Advertising", whereby users receive rewards in exchange for watching ads. There is much debate in the industry regarding its effectiveness. On the one hand, incentivized advertising is less intrusive and annoying; on the other hand, users might be more interested in the rewards than in the ad content. Using a large dataset of 1 million impressions from a mobile advertising platform, and in three separate quasi-experimental approaches, we find that incentivized advertising leads to lower click-through rates but a higher overall install rate for the advertised app.

In the second part, we study the mechanism through which incentivized advertising affects users' behavior. We test the hypothesis that incentivized advertising causes a temptation effect, whereby users prefer to collect and enjoy their rewards immediately instead of pursuing the ads. We find the temptation effect is stronger when (i) users have to wait longer before receiving the rewards and (ii) the value of the reward is relatively larger. We further find evidence that incentivized advertising reduces ad annoyance, an effect that is stronger on small-screen mobile devices, where advertising is more annoying. Finally, we take the publisher's perspective and quantify the overall effect on ad revenue. Our difference-in-differences estimates suggest that switching to incentivized advertising would increase the publisher's revenue by $3.10 per 1,000 impressions.
{"title":"Understanding the Effect of Incentivized Advertising along the Conversion Funnel","authors":"K. Chiong, Sha Yang, Richard Y. Chen","doi":"10.2139/ssrn.3714353","DOIUrl":"https://doi.org/10.2139/ssrn.3714353","url":null,"abstract":"In an effort to combat ad annoyance in mobile apps, publishers have introduced a new ad format called \"Incentivized Advertising\" or \"Rewarded Advertising\", whereby users receive rewards in exchange for watching ads. There is much debate in the industry regarding its' effectiveness. On the one hand, incentivized advertising is less intrusive and annoying, but on the other hand, users might be more interested in the rewards rather than the ad content. Using a large dataset of 1 million impressions from a mobile advertising platform, and in three separate quasi-experimental approaches, we find that incentivized advertising leads to lower users' click-through rates, but a higher overall install rate of the advertised app. \u0000In the second part, we study the mechanism of how incentivized advertising affects users' behavior. We test the hypothesis that incentivized advertising causes a temptation effect, whereby users prefer to collect and enjoy their rewards immediately, instead of pursuing the ads. We find the temptation effect is stronger when (i) users have to wait longer before receiving the rewards and when (ii) the value of the reward is relatively larger. We further find support that incentivized advertising has a positive effect of reducing ad annoyance -- an effect that is stronger for small-screen mobile devices, where advertising is more annoying. Finally, we take the publisher's perspective and quantify the overall effect on ad revenue. Our difference-in-differences estimates suggest switching to incentivized advertising would increase the publisher's revenue by $3.10 per 1,000 impressions.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114352921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Activity spaces are fundamental to the assessment of individuals' dynamic exposure to social and environmental risk factors associated with the multiple spatial contexts visited during activities of daily living. In this paper we survey existing approaches for measuring the geometry, size and structure of activity spaces from GPS data, and explain their limitations. We propose addressing these shortcomings through a nonparametric approach called density ranking, and through three summary curves: the mass-volume curve, the Betti number curve, and the persistence curve. We introduce a novel mixture model for human activity spaces and study its asymptotic properties. We prove that the kernel density estimator, currently one of the most widespread methods for measuring activity spaces, is not a stable estimator of their structure. We illustrate the practical value of our methods with a simulation study and with a recently collected GPS dataset comprising the locations visited by ten individuals over a six-month period.
{"title":"Measuring human activity spaces from GPS data with density ranking and summary curves","authors":"Yen-Chi Chen, A. Dobra","doi":"10.1214/19-aoas1311","DOIUrl":"https://doi.org/10.1214/19-aoas1311","url":null,"abstract":"Activity spaces are fundamental to the assessment of individuals' dynamic exposure to social and environmental risk factors associated with multiple spatial contexts that are visited during activities of daily living. In this paper we survey existing approaches for measuring the geometry, size and structure of activity spaces based on GPS data, and explain their limitations. We propose addressing these shortcomings through a nonparametric approach called density ranking, and also through three summary curves: the mass-volume curve, the Betti number curve, and the persistence curve. We introduce a novel mixture model for human activity spaces, and study its asymptotic properties. We prove that the kernel density estimator which, at the present time, is one of the most widespread methods for measuring activity spaces is not a stable estimator of their structure. We illustrate the practical value of our methods with a simulation study, and with a recently collected GPS dataset that comprises the locations visited by ten individuals over a six months period.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131535510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we describe how the hypergeometric test can be used to determine whether a given theme of interest occurs in a storyset more frequently than would be expected by chance. By a storyset we mean simply a list of stories defined according to a common attribute (e.g., author, movement, period). The test works roughly as follows: given a background storyset and a sub-storyset of interest, the test determines whether a given theme is over-represented in the sub-storyset by comparing the proportions of stories in the sub-storyset and the background storyset that feature the theme. A storyset is said to be "enriched" for a theme, with respect to a particular background storyset, when the test identifies the theme as significantly over-represented. We also introduce a toy dataset consisting of 280 manually themed Star Trek television franchise episodes. As a proof of concept, we use the hypergeometric test to analyze the Star Trek stories for enriched themes. The hypergeometric testing approach to theme enrichment analysis is implemented for the Star Trek thematic dataset in the R package stoRy. A related R Shiny web application can be found at this https URL.
{"title":"Theme Enrichment Analysis: A Statistical Test for Identifying Significantly Enriched Themes in a List of Stories with an Application to the Star Trek Television Franchise","authors":"Mikael Onsjo, Paul Sheridan","doi":"10.16995/dscn.316","DOIUrl":"https://doi.org/10.16995/dscn.316","url":null,"abstract":"In this paper, we describe how the hypergeometric test can be used to determine whether a given theme of interest occurs in a storyset at a frequency more than would be expected by chance. By a storyset we mean simply a list of stories defined according to a common attribute (e.g., author, movement, period). The test works roughly as follows: Given a background storyset and a sub-storyset of interest, the test determines whether a given theme is over-represented in the sub-storyset, based on comparing the proportions of stories in the sub-storyset and background storyset featuring the theme. A storyset is said to be \"enriched\" for a theme with respect to a particular background storyset, when the theme is identified as being significantly over-represented by the test. Furthermore, we introduce here a toy dataset consisting of 280 manually themed Star Trek television franchise episodes. As a proof of concept, we use the hypergeometric test to analyze the Star Trek stories for enriched themes. The hypergeometric testing approach to theme enrichment analysis is implemented for the Star Trek thematic dataset in the R package stoRy. A related R Shiny web application can be found at this https URL.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123953359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The integration of physical relationships into stochastic models is of major interest, e.g., in data assimilation. Here, a multivariate Gaussian random field formulation is introduced which represents the differential relations of the two-dimensional wind field and related variables such as the streamfunction, velocity potential, vorticity and divergence. The covariance model is based on a flexible bivariate Matérn covariance function for the streamfunction and velocity potential. It allows for different variances in the potentials, non-zero correlations between them, anisotropy, and a flexible smoothness parameter. The joint covariance function of the related variables is derived analytically. Further, it is shown that a consistent model with non-zero correlations between the potentials and a positive definite covariance function is possible. The statistical model is fitted to forecasts of the horizontal wind fields from a mesoscale numerical weather prediction system. Parameter uncertainty is assessed by a parametric bootstrap method. The estimates reveal only physically negligible correlations between the potentials. In contrast to the numerical estimator, the statistical estimator of the ratio between the variances of the rotational and divergent wind components is unbiased.
{"title":"A Matern based multivariate Gaussian random process for a consistent model of the horizontal wind components and related variables","authors":"Rudiger Hewer, P. Friederichs, A. Hense, M. Schlather","doi":"10.1175/JAS-D-16-0369.1","DOIUrl":"https://doi.org/10.1175/JAS-D-16-0369.1","url":null,"abstract":"The integration of physical relationships into stochastic models is of major interest e.g. in data assimilation. Here, a multivariate Gaussian random field formulation is introduced, which represents the differential relations of the two-dimensional wind field and related variables such as streamfunction, velocity potential, vorticity and divergence. The covariance model is based on a flexible bivariate Matern covariance function for streamfunction and velocity potential. It allows for different variances in the potentials, non-zero correlations between them, anisotropy and a flexible smoothness parameter. The joint covariance function of the related variables is derived analytically. Further, it is shown that a consistent model with non-zero correlations between the potentials and positive definite covariance function is possible. The statistical model is fitted to forecasts of the horizontal wind fields of a mesoscale numerical weather prediction system. Parameter uncertainty is assessed by a parametric bootstrap method. The estimates reveal only physically negligible correlations between the potentials. In contrast to the numerical estimator, the statistical estimator of the ratio between the variances of the rotational and divergent wind components is unbiased.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132302999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elizabeth L. Ogburn. "Challenges to Estimating Contagion Effects from Observational Data." arXiv: Applications, 2017-06-26. DOI: 10.1007/978-3-319-77332-2_3
The availability of massive data about sports activities now offers the opportunity to quantify the relation between performance and success. In this study, we analyze more than 6,000 games and 10 million events in six European leagues and investigate this relation in soccer competitions. We discover that a team's position in a competition's final ranking is significantly related to its typical performance, as described by a set of technical features extracted from the soccer data. Moreover, we find that while victories and defeats can be explained by a team's performance during a game, draws are difficult to detect with a machine learning approach. We then simulate the outcomes of an entire season of each league relying only on technical data, i.e., excluding goals scored, using a machine learning model trained on data from past seasons. The simulation produces a team ranking (the PC ranking) which is close to the actual ranking, suggesting that a complex-systems view of soccer has the potential to reveal hidden patterns in the relation between performance and success.
{"title":"Quantifying the relation between performance and success in soccer","authors":"L. Pappalardo, Paolo Cintia","doi":"10.1142/S021952591750014X","DOIUrl":"https://doi.org/10.1142/S021952591750014X","url":null,"abstract":"The availability of massive data about sports activities offers nowadays the opportunity to quantify the relation between performance and success. In this study, we analyze more than 6,000 games and 10 million events in six European leagues and investigate this relation in soccer competitions. We discover that a team's position in a competition's final ranking is significantly related to its typical performance, as described by a set of technical features extracted from the soccer data. Moreover we find that, while victory and defeats can be explained by the team's performance during a game, it is difficult to detect draws by using a machine learning approach. We then simulate the outcomes of an entire season of each league only relying on technical data, i.e. excluding the goals scored, exploiting a machine learning model trained on data from past seasons. The simulation produces a team ranking (the PC ranking) which is close to the actual ranking, suggesting that a complex systems' view on soccer has the potential of revealing hidden patterns regarding the relation between performance and success.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124714504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider linear regression model estimation where the covariate of interest is randomly censored. Under a non-informative censoring mechanism, one may obtain valid estimates by deleting censored observations. However, this comes at the cost of lost information and decreased efficiency, especially under heavy censoring. Other methods for dealing with censored covariates, such as ignoring censoring or replacing censored observations with a fixed number, often lead to severely biased results and are of limited practicality. Parametric methods based on maximum likelihood estimation, as well as semiparametric and nonparametric methods, have been used successfully in linear regression estimation with censored covariates when censoring is due to a limit of detection.

In this paper, we adapt some of these methods to handle randomly censored covariates and compare them with recently developed semiparametric and nonparametric methods for randomly censored covariates. Specifically, we consider both dependent and independent random censoring mechanisms, as well as the impact of using a nonparametric algorithm on the distribution of the randomly censored covariate. Through extensive simulation studies, we compare the performance of these methods under different scenarios. Finally, we illustrate and compare the methods using Framingham Heart Study data to assess the association between low-density lipoprotein (LDL) in offspring and parental age at onset of a clinically diagnosed cardiovascular event.
{"title":"Linear regression model with a randomly censored predictor:Estimation procedures","authors":"F. Atem, Roland A. Matsouaka","doi":"10.19080/BBOAJ.2017.01.555556","DOIUrl":"https://doi.org/10.19080/BBOAJ.2017.01.555556","url":null,"abstract":"We consider linear regression model estimation where the covariate of interest is randomly censored. Under a non-informative censoring mechanism, one may obtain valid estimates by deleting censored observations. However, this comes at a cost of lost information and decreased efficiency, especially under heavy censoring. Other methods for dealing with censored covariates, such as ignoring censoring or replacing censored observations with a fixed number, often lead to severely biased results and are of limited practicality. Parametric methods based on maximum likelihood estimation as well as semiparametric and non-parametric methods have been successfully used in linear regression estimation with censored covariates where censoring is due to a limit of detection. \u0000In this paper, we adapt some of these methods to handle randomly censored covariates and compare them under different scenarios to recently-developed semiparametric and nonparametric methods for randomly censored covariates. Specifically, we consider both dependent and independent randomly censored mechanisms as well as the impact of using a non-parametric algorithm on the distribution of the randomly censored covariate. Through extensive simulation studies, we compare the performance of these methods under different scenarios. Finally, we illustrate and compare the methods using the Framingham Health Study data to assess the association between low-density lipoprotein (LDL) in offspring and parental age at onset of a clinically-diagnosed cardiovascular event.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"282 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121335777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatio-temporal hierarchical modeling is an extremely attractive way to model the spread of crime or terrorism over a given region, especially when the observations are counts and must be modeled discretely. The spatio-temporal diffusion is placed, as a matter of convenience, in the process model, allowing for straightforward estimation of the diffusion parameters through Bayesian techniques. However, this way of modeling does not allow for self-excitation, a temporal dependency in the data model that has been shown to exist in criminal and terrorism data. In this manuscript we use existing theories of how violence spreads to create models that allow for both spatio-temporal diffusion in the process model and temporal diffusion, or self-excitation, in the data model. We further demonstrate how Laplace approximations, similar to their use in Integrated Nested Laplace Approximation, can be used to quickly and accurately conduct inference for self-exciting spatio-temporal models, giving practitioners a new way of fitting and comparing multiple process models. We illustrate this approach by fitting a self-exciting spatio-temporal model to terrorism data in Iraq, and demonstrate how the choice of process model leads to differing conclusions about the existence of self-excitation in the data and about how violence spreads spatio-temporally.
{"title":"Modeling and Estimation for Self-Exciting Spatio-Temporal Models of Terrorist Activity","authors":"Nicholas J. Clark, P. Dixon","doi":"10.1214/17-AOAS1112","DOIUrl":"https://doi.org/10.1214/17-AOAS1112","url":null,"abstract":"Spatio-temporal hierarchical modeling is an extremely attractive way to model the spread of crime or terrorism data over a given region, especially when the observations are counts and must be modeled discretely. The spatio-temporal diffusion is placed, as a matter of convenience, in the process model allowing for straightforward estimation of the diffusion parameters through Bayesian techniques. However, this method of modeling does not allow for the existence of self-excitation, or a temporal data model dependency, that has been shown to exist in criminal and terrorism data. In this manuscript we will use existing theories on how violence spreads to create models that allow for both spatio-temporal diffusion in the process model as well as temporal diffusion, or self-excitation, in the data model. We will further demonstrate how Laplace approximations similar to their use in Integrated Nested Laplace Approximation can be used to quickly and accurately conduct inference of self-exciting spatio-temporal models allowing practitioners a new way of fitting and comparing multiple process models. We will illustrate this approach by fitting a self-exciting spatio-temporal model to terrorism data in Iraq and demonstrate how choice of process model leads to differing conclusions on the existence of self-excitation in the data and differing conclusions on how violence is spreading spatio-temporally.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131359642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we suggest a skew Gaussian-log Gaussian model for the analysis of censored spatial data from a Bayesian point of view. This approach extends the skew log Gaussian model to accommodate skewness, heavy tails, and censoring, three pervasive features of spatial data. We use data augmentation and Markov chain Monte Carlo (MCMC) algorithms for the posterior calculations. The methodology is illustrated on simulated data as well as a real data set.

Keywords: censored data, data augmentation, non-Gaussian spatial models, outlier, unified skew Gaussian.
{"title":"Bayesian Analysis of Censored Spatial Data Based on a Non-Gaussian Model","authors":"V. Tadayon","doi":"10.18869/acadpub.jsri.13.2.155","DOIUrl":"https://doi.org/10.18869/acadpub.jsri.13.2.155","url":null,"abstract":"In this paper, we suggest using a skew Gaussian-log Gaussian model for the analysis of spatial censored data from a Bayesian point of view. This approach furnishes an extension of the skew log Gaussian model to accommodate to both skewness and heavy tails and also censored data. All of the characteristics mentioned are three pervasive features of spatial data. We utilize data augmentation method and Markov chain Monte Carlo (MCMC) algorithms to do posterior calculations. The methodology is illustrated using simulated data, as well as applying it to a real data set. Keywords: Censored data, data augmentation, non-Gaussian spatial models, outlier, unified skew Gaussian.","PeriodicalId":409996,"journal":{"name":"arXiv: Applications","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114430157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}