Abstract We apply the Monte Carlo method to solving the Dirichlet problem of linear parabolic equations with the fractional Laplacian. This method exploits the idea of weak approximation of the related stochastic differential equations driven by a symmetric stable Lévy process with jumps. We utilize a jump-adapted scheme to approximate the Lévy process, which gives the exact exit time to the boundary. When the solution has low regularity, we establish a numerical scheme by removing the small jumps of the Lévy process and then show the convergence order. When the solution has higher regularity, we build a higher-order numerical scheme by replacing the small jumps with a simple process and then display the higher convergence order. Finally, numerical experiments, including ten- and one-hundred-dimensional cases, are presented; they confirm the theoretical estimates and show the numerical efficiency of the proposed schemes for high-dimensional parabolic equations.
{"title":"Monte Carlo method for parabolic equations involving fractional Laplacian","authors":"Caiyu Jiao, Changpin Li","doi":"10.1515/mcma-2022-2129","DOIUrl":"https://doi.org/10.1515/mcma-2022-2129","url":null,"abstract":"Abstract We apply the Monte Carlo method to solving the Dirichlet problem of linear parabolic equations with fractional Laplacian. This method exploits the idea of weak approximation of related stochastic differential equations driven by the symmetric stable Lévy process with jumps. We utilize the jump-adapted scheme to approximate Lévy process which gives exact exit time to the boundary. When the solution has low regularity, we establish a numerical scheme by removing the small jumps of the Lévy process and then show the convergence order. When the solution has higher regularity, we build up a higher-order numerical scheme by replacing small jumps with a simple process and then display the higher convergence order. Finally, numerical experiments including ten- and one hundred-dimensional cases are presented, which confirm the theoretical estimates and show the numerical efficiency of the proposed schemes for high-dimensional parabolic equations.","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"29 1","pages":"33 - 53"},"PeriodicalIF":0.9,"publicationDate":"2022-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44779092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In many Monte Carlo applications, one can substitute quasirandom numbers for pseudorandom numbers and achieve improved convergence. This is because quasirandom numbers are more uniform than pseudorandom numbers. The most common measure of that uniformity is the star discrepancy. Moreover, the main error bound in quasi-Monte Carlo methods, the Koksma–Hlawka inequality, involves the star discrepancy. A difficulty with this bound is that computing the star discrepancy is very costly. The star discrepancy can be computed by evaluating a function called the local discrepancy at a number of points. The supremum of these local discrepancy values is the star discrepancy. If we have a point set in $[0,1]^s$ with N members, we need to compute the local discrepancy at $N^s$ points. In fact, computing the star discrepancy is NP-hard. In this paper, we consider an approximate algorithm for a lower bound on the star discrepancy based on a random walk through some of the $N^s$ points. This approximation is much less expensive than computing the star discrepancy, but still accurate enough to provide information on convergence. Our numerical results show that the random walk algorithm has the same convergence rate as the Monte Carlo method, namely $O(N^{-1/2})$.
{"title":"A random walk algorithm to estimate a lower bound of the star discrepancy","authors":"Maryam Alsolami, M. Mascagni","doi":"10.1515/mcma-2022-2125","DOIUrl":"https://doi.org/10.1515/mcma-2022-2125","url":null,"abstract":"Abstract In many Monte Carlo applications, one can substitute the use of pseudorandom numbers with quasirandom numbers and achieve improved convergence. This is because quasirandom numbers are more uniform that pseudorandom numbers. The most common measure of that uniformity is the star discrepancy. Moreover, the main error bound in quasi-Monte Carlo methods, called the Koksma–Hlawka inequality, has the star discrepancy in the formulation. A difficulty with this bound is that computing the star discrepancy is very costly. The star discrepancy can be computed by evaluating a function called the local discrepancy at a number of points. The supremum of these local discrepancy values is the star discrepancy. If we have a point set in [ 0 , 1 ] s {[0,1]^{s}} with N members, we need to compute the local discrepancy at N s {N^{s}} points. In fact, computing star discrepancy is NP-hard. In this paper, we will consider an approximate algorithm for a lower bound on the star discrepancy based on using a random walk through some of the N s {N^{s}} points. This approximation is much less expensive that computing the star discrepancy, but still accurate enough to provide information on convergence. Our numerical results show that the random walk algorithm has the same convergence rate as the Monte Carlo method, which is O ( N - 1 2 {O(N^{-frac{1}{2}}} ).","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"28 1","pages":"341 - 348"},"PeriodicalIF":0.9,"publicationDate":"2022-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41818243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract I consider black body radiation. The wall of the black body exchanges photons with the radiation field in equilibrium, and therefore shares a common temperature with it in Planck’s radiation law. The underlying process of radiation consists of the creation and annihilation of photons. I present an alternative model of motion, in which the process of radiation consists of small steps in the positive and negative directions that are not zero in mean. The detection of radiation consists of the storing and restoring of packages of energy. I obtain an analogue of Planck’s radiation law in which the common temperature emerges from the underlying common model of small steps. The object of the law is then not the radiation itself but a storage of packages of energy belonging to the wall of the black body.
{"title":"Superposition of forward and backward motion","authors":"Manfred Harringer","doi":"10.1515/mcma-2022-2124","DOIUrl":"https://doi.org/10.1515/mcma-2022-2124","url":null,"abstract":"Abstract I consider black body radiation. The wall of the black body exchanges photons with the radiation field in equilibrium, therefore with a common temperature in Planck’s radiation law. The underlying process of radiation consists of creation and annihilation of photons. I want to present an alternate model of motions, where the process of radiation consists of small steps in positive and negative direction, not zero in mean. The detection of radiation consists of storing and restoring of packages of energy. I get an analogue of Planck’s radiation law, where the common temperature emerges from the underlying common model of small steps. The object of the law is not the radiation, but a storage of packages of energy, which belongs to the wall of the black body.","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"28 1","pages":"329 - 339"},"PeriodicalIF":0.9,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45461671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Entropy and extropy are central measures in information theory. In this paper, Bayesian non-parametric estimators of entropy and extropy for possibly right-censored data are proposed. The approach uses the beta-Stacy process and the difference operator. Examples are presented to illustrate the performance of the estimators.
{"title":"Estimation of entropy and extropy based on right censored data: A Bayesian non-parametric approach","authors":"L. Al-Labadi, Muhammad Tahir","doi":"10.1515/mcma-2022-2123","DOIUrl":"https://doi.org/10.1515/mcma-2022-2123","url":null,"abstract":"Abstract Entropy and extropy are central measures in information theory. In this paper, Bayesian non-parametric estimators to entropy and extropy with possibly right censored data are proposed. The approach uses the beta-Stacy process and the difference operator. Examples are presented to illustrate the performance of the estimators.","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"28 1","pages":"319 - 328"},"PeriodicalIF":0.9,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42910811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In this note, we describe a new approach to the option pricing problem by introducing the notion of a safe (and acceptable) price for the writer of an option, in contrast to the fair price used in the Black–Scholes model. Our starting point is that, in practical terms, the option pricing problem is closely related to the hedging problem. Recalling that the Black–Scholes model does not give us the price of the option but the initial value of a replicating portfolio, we easily observe a serious disadvantage: the replicating portfolio must be rebuilt continuously in time, a drawback shared by any model that assumes such a construction. Here we study the problem from the practical point of view, concerning mainly the over-the-counter market. This approach is not affected by the number of underlying assets and is particularly useful for incomplete markets. In the usual Black–Scholes or binomial approaches, among others, one assumes that one can invest or borrow at the same risk-free rate $r>0$, which is not true in general. Even if it were, one immediately observes that this risk-free rate is not a universal constant but differs among people and institutions. So the fair price of an option is not so fair after all! Moreover, the two sides are not, in general, equally exposed to risk; therefore, the notion of a fair price loses its meaning. We also define a variant of the usual binomial model by estimating safe upward and downward rates $u,d$, trying to give a cheaper safe or acceptable price for the option.
{"title":"On the practical point of view of option pricing","authors":"N. Halidias","doi":"10.1515/mcma-2022-2122","DOIUrl":"https://doi.org/10.1515/mcma-2022-2122","url":null,"abstract":"Abstract In this note, we describe a new approach to the option pricing problem by introducing the notion of the safe (and acceptable) price for the writer of an option, in contrast to the fair price used in the Black–Scholes model. Our starting point is that the option pricing problem is closely related with the hedging problem by practical techniques. Recalling that the Black–Scholes model does not give us the price of the option but the initial value of a replicating portfolio, we observe easily that this has a serious disadvantage because it assumes the building of this replicating portfolio continuously in time, and this is a disadvantage of any model that assumes such a construction. Here we study the problem from the practical point of view concerning mainly the over-the-counter market. This approach is not affected by the number of the underlying assets and is particularly useful for incomplete markets. In the usual Black–Scholes or binomial approach or some other approaches, one assumes that one can invest or borrow at the same risk-free rate r > 0 r>0 , which is not true in general. Even if this is the case, one can immediately observe that this risk-free rate is not a universal constant but is different among different people or institutions. So the fair price of an option is not so much fair! Moreover, the two sides are not, in general, equivalent against the risk; therefore, the notion of a fair price has no meaning at all. We also define a variant of the usual binomial model, by estimating safe upward and downward rates u , d u,d , trying to give a cheaper safe or acceptable price for the option.","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"28 1","pages":"307 - 318"},"PeriodicalIF":0.9,"publicationDate":"2022-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46929711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract The paper describes a sensitivity function calculation method for few-view X-ray computed tomography of strongly absorbing objects. It is based on a probabilistic interpretation of energy transport through the object from a source to a detector. The PRIZMA code package is used to track photons. The code is developed at FSUE “RFNC–VNIITF named after Academ. E. I. Zababakhin” and implements a stochastic Monte Carlo method. The value of the sensitivity function in a discrete cell of the reconstruction region is assumed to be directly proportional to the fraction of photon trajectories that cross the cell among all those recorded by the detector. The method’s efficiency is validated through a numerical experiment on the reconstruction of a section of a spherical heavy-metal phantom with an air cavity and a density difference of 25%. The proposed method is shown to outperform the method based on projection approximation in the case of reconstruction from 9 views.
{"title":"Monte Carlo simulation of sensitivity functions for few-view computed tomography of strongly absorbing media","authors":"A. Konovalov, V. Vlasov, S. Kolchugin, G. Malyshkin, R. Mukhamadiyev","doi":"10.1515/mcma-2022-2120","DOIUrl":"https://doi.org/10.1515/mcma-2022-2120","url":null,"abstract":"Abstract The paper describes a sensitivity function calculation method for few-view X-ray computed tomography of strongly absorbing objects. It is based on a probabilistic interpretation of energy transport through the object from a source to a detector. A PRIZMA code package is used to track photons. The code is developed at FSUE “RFNC–VNIITF named after Academ. E. I. Zababakhin” and implements a stochastic Monte Carlo method. The value of the sensitivity function in a discrete cell of the reconstruction region is assumed to be directly proportional to the fraction of photon trajectories which cross the cell from all those recorded by the detector. The method’s efficiency is validated through a numerical experiment on the reconstruction of a section of a spherical heavy-metal phantom with an air cavity and a density difference of 25 Ṫhe proposed method is shown to outperform the method based on projection approximation in case of reconstruction from 9 views.","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"28 1","pages":"269 - 278"},"PeriodicalIF":0.9,"publicationDate":"2022-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44779220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Random numbers are used in a variety of applications including simulation, sampling, and cryptography. Fortunately, there exist many well-established methods of random number generation. An example of a well-known pseudorandom number generator is the lagged-Fibonacci generator (LFG). Marsaglia showed that the lagged-Fibonacci generator using addition failed some of his DIEHARD statistical tests when short lags were used, while it passed all of them with longer lags. This paper presents a scrambler that takes bits from a pseudorandom number generator and outputs (hopefully) improved pseudorandom numbers. The scrambler is based on a modified Feistel function, a construction used in the generation of cryptographic random numbers, followed by multiplication by a chosen multiplier. We show that this scrambler improves the quality of pseudorandom numbers by applying it to the additive LFG with small lags. The scrambler performs well on the TestU01 suite of randomness tests, which is more comprehensive than the DIEHARD tests; in fact, the specific suite of tests we used from TestU01 includes the DIEHARD tests. The scrambling of the LFG is so successful that scrambled LFGs with small lags perform as well as unscrambled LFGs with long lags. This comes at the cost of a doubling of execution time, and it provides users with generators with small memory footprints that can serve as parallel generators, like the LFGs in the SPRNG parallel random number generation package.
{"title":"Scrambling additive lagged-Fibonacci generators","authors":"Haifa Aldossari, M. Mascagni","doi":"10.1515/mcma-2022-2115","DOIUrl":"https://doi.org/10.1515/mcma-2022-2115","url":null,"abstract":"Abstract Random numbers are used in a variety of applications including simulation, sampling, and cryptography. Fortunately, there exist many well-established methods of random number generation. An example of a well-known pseudorandom number generator is the lagged-Fibonacci generator (LFG). Marsaglia showed that the lagged-Fibonacci generator using addition failed some of his DIEHARD statistical tests, while it passed all when longer lags were used. This paper presents a scrambler that takes bits from a pseudorandom number generator and outputs (hopefully) improved pseudorandom numbers. The scrambler is based on a modified Feistel function, a method used in the generation of cryptographic random numbers, and multiplication by a chosen multiplier. We show that this scrambler improves the quality of pseudorandom numbers by applying it to the additive LFG with small lags. The scrambler performs well based on its performance with the TestU01 suite of randomness tests. The TestU01 suite of randomness tests is more comprehensive than the DIEHARD tests. In fact, the specific suite of tests we used from TestU01 includes the DIEHARD tests The scrambling of the LFG is so successful that scrambled LFGs with small lags perform as well as unscrambled LFGs with long lags. This comes at the cost of a doubling of execution time, and provides users with generators with small memory footprints that can provide parallel generators like the LFGs in the SPRNG parallel random number generation package.","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"28 1","pages":"199 - 210"},"PeriodicalIF":0.9,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46604130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In numerous industrial and related activities, the sums of the values of samples, frequently of unequal size, are systematically recorded for several purposes, such as legal or quality control reasons. For the typical case where the individual values are not or no longer known, we address the point estimation, with confidence intervals, of the standard deviation (and mean) of the individual items from those sums alone. The estimation may also be useful to corroborate estimates from previous statistical process control. An everyday case of a sum is the total weight of a set of items, such as a load of bags on a truck, which is used illustratively. For the parameters mean and standard deviation of the distribution, assumed Gaussian, we derive point estimates, which lead to weighted statistics, and we derive confidence intervals. For the latter, starting with a tentative reduction to equal-size samples, we arrive at a solid conjecture for the mean and a proposal for the standard deviation. All results are verifiable by direct computation or by simulation in a general and effective way. These computations can be run on public web pages of ours, in particular for possible industrial use.
{"title":"Standard deviation estimation from sums of unequal size samples","authors":"M. Casquilho, J. Buescu","doi":"10.1515/mcma-2022-2118","DOIUrl":"https://doi.org/10.1515/mcma-2022-2118","url":null,"abstract":"Abstract In numerous industrial and related activities, the sums of the values of, frequently, unequal size samples are systematically recorded, for several purposes such as legal or quality control reasons. For the typical case where the individual values are not or no longer known, we address the point estimation, with confidence intervals, of the standard deviation (and mean) of the individual items, from those sums alone. The estimation may be useful also to corroborate estimates from previous statistical process control. An everyday case of a sum is the total weight of a set of items, such as a load of bags on a truck, which is used illustratively. For the parameters mean and standard deviation of the distribution, assumed Gaussian, we derive point estimates, which lead to weighted statistics, and we derive confidence intervals. For the latter, starting with a tentative reduction to equal size samples, we arrive at a solid conjecture for the mean, and a proposal for the standard deviation. All results are verifiable by direct computation or by simulation in a general and effective way. These computations can be run on public web pages of ours, namely for possible industrial use.","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"28 1","pages":"235 - 253"},"PeriodicalIF":0.9,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45218319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In this paper, we study the multiple recombination exciton–photon–exciton process governed by a coupled system of the drift-diffusion-recombination equation and the integral radiative transfer equation. We develop a random walk on spheres algorithm for solving this system of equations. The algorithm directly simulates the transient drift-diffusion process of the exciton’s motion. Then, at a random time, the exciton recombines to a photon that moves in accordance with the radiative transfer equation, which in turn may recombine to an exciton, and so on. This algorithm is applied to calculate fluxes of excitons and photons as functions of time, as well as some other characteristics of the process. Calculations have also been carried out to validate the constructed model.
{"title":"Simulation of transient and spatial structure of the radiative flux produced by multiple recombinations of excitons","authors":"K. Sabelfeld, V. Sapozhnikov","doi":"10.1515/mcma-2022-2117","DOIUrl":"https://doi.org/10.1515/mcma-2022-2117","url":null,"abstract":"Abstract In this paper, we study the multiple recombination exciton–photon–exciton process governed by a coupled system of the drift-diffusion-recombination equation and the integral radiative transfer equation. We develop a random walk on spheres algorithm for solving this system of equations. The algorithm directly simulates the transient drift-diffusion process of exciton’s motion. Then, at a random time the exciton recombines to a photon that moves in accordance with the radiative transfer equation, which in turn may recombine to an exciton etc. This algorithm is applied to calculate fluxes of excitons and photons as functions of time, and some other characteristics of the process. Calculations have also been carried out to validate the constructed model.","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"28 1","pages":"255 - 268"},"PeriodicalIF":0.9,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47080040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Markov chain Monte Carlo (MCMC) methods are important in a variety of statistical applications that require sampling from intractable probability distributions. Among the most common MCMC algorithms is the Gibbs sampler. When an MCMC algorithm is used, it is important to have an idea of how long it takes for the chain to become “close” to its stationary distribution. In many cases, there is high autocorrelation in the output of the chain, so the output needs to be thinned so that an approximate random sample from the desired probability distribution can be obtained by taking a state of the chain every h steps, in a process called h-thinning. This manuscript extends the work of [D. A. Spade, Estimating drift and minorization coefficients for Gibbs sampling algorithms, Monte Carlo Methods Appl. 27 (2021), no. 3, 195–209] by presenting a computational approach to obtaining an approximate upper bound on the mixing time of the h-thinned Gibbs sampler.
{"title":"Approximate bounding of mixing time for multiple-step Gibbs samplers","authors":"David A. Spade","doi":"10.1515/mcma-2022-2119","DOIUrl":"https://doi.org/10.1515/mcma-2022-2119","url":null,"abstract":"Abstract Markov chain Monte Carlo (MCMC) methods are important in a variety of statistical applications that require sampling from intractable probability distributions. Among the most common MCMC algorithms is the Gibbs sampler. When an MCMC algorithm is used, it is important to have an idea of how long it takes for the chain to become “close” to its stationary distribution. In many cases, there is high autocorrelation in the output of the chain, so the output needs to be thinned so that an approximate random sample from the desired probability distribution can be obtained by taking a state of the chain every h steps in a process called h-thinning. This manuscript extends the work of [D. A. Spade, Estimating drift and minorization coefficients for Gibbs sampling algorithms, Monte Carlo Methods Appl. 27 2021, 3, 195–209] by presenting a computational approach to obtaining an approximate upper bound on the mixing time of the h-thinned Gibbs sampler.","PeriodicalId":46576,"journal":{"name":"Monte Carlo Methods and Applications","volume":"28 1","pages":"221 - 233"},"PeriodicalIF":0.9,"publicationDate":"2022-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43553227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}