We consider the steady-state distribution of the sojourn time of a job entering an M/GI/1 queue with the foreground-background scheduling policy in heavy traffic. Both the growth rate of its mean and the limiting distribution are derived under broad conditions. Assumptions commonly used in extreme value theory play a key role in both the analysis and the results.
{"title":"Heavy-Traffic Analysis of Sojourn Time Under the Foreground–Background Scheduling Policy","authors":"B. Kamphorst, B. Zwart","doi":"10.1287/stsy.2019.0036","DOIUrl":"https://doi.org/10.1287/stsy.2019.0036","url":null,"abstract":"We consider the steady-state distribution of the sojourn time of a job entering an M/GI/1 queue with the foreground-background scheduling policy in heavy traffic. The growth rate of its mean, as well as the limiting distribution, are derived under broad conditions. Assumptions commonly used in extreme value theory play a key role in both the analysis and the results.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/stsy.2019.0036","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41779609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work we develop an effective Monte Carlo method for estimating sensitivities, or gradients of expectations of sufficiently smooth functionals, of a reflected diffusion in a convex polyhedral domain with respect to its defining parameters --- namely, its initial condition, drift and diffusion coefficients, and directions of reflection. Our method, which falls into the class of infinitesimal perturbation analysis (IPA) methods, uses a probabilistic representation for such sensitivities as the expectation of a functional of the reflected diffusion and its associated derivative process. The latter process is the unique solution to a constrained linear stochastic differential equation with jumps whose coefficients, domain and directions of reflection are modulated by the reflected diffusion. We propose an asymptotically unbiased estimator for such sensitivities using an Euler approximation of the reflected diffusion and its associated derivative process. Proving that the Euler approximation converges is challenging because the derivative process jumps whenever the reflected diffusion hits the boundary (of the domain). A key step in the proof is establishing a continuity property of the related derivative map, which is of independent interest. We compare the performance of our IPA estimator to a standard likelihood ratio estimator (whenever the latter is applicable), and provide numerical evidence that the variance of the former is substantially smaller than that of the latter. We illustrate our method with an example of a rank-based interacting diffusion model of equity markets. Interestingly, we show that estimating certain sensitivities of the rank-based interacting diffusion model using our method for a reflected Brownian motion description of the model outperforms a finite difference method for a stochastic differential equation description of the model.
{"title":"A Monte Carlo Method for Estimating Sensitivities of Reflected Diffusions in Convex Polyhedral Domains","authors":"David Lipshutz, K. Ramanan","doi":"10.1287/STSY.2019.0031","DOIUrl":"https://doi.org/10.1287/STSY.2019.0031","url":null,"abstract":"In this work we develop an effective Monte Carlo method for estimating sensitivities, or gradients of expectations of sufficiently smooth functionals, of a reflected diffusion in a convex polyhedral domain with respect to its defining parameters --- namely, its initial condition, drift and diffusion coefficients, and directions of reflection. Our method, which falls into the class of infinitesimal perturbation analysis (IPA) methods, uses a probabilistic representation for such sensitivities as the expectation of a functional of the reflected diffusion and its associated derivative process. The latter process is the unique solution to a constrained linear stochastic differential equation with jumps whose coefficients, domain and directions of reflection are modulated by the reflected diffusion. We propose an asymptotically unbiased estimator for such sensitivities using an Euler approximation of the reflected diffusion and its associated derivative process. Proving that the Euler approximation converges is challenging because the derivative process jumps whenever the reflected diffusion hits the boundary (of the domain). A key step in the proof is establishing a continuity property of the related derivative map, which is of independent interest. We compare the performance of our IPA estimator to a standard likelihood ratio estimator (whenever the latter is applicable), and provide numerical evidence that the variance of the former is substantially smaller than that of the latter. We illustrate our method with an example of a rank-based interacting diffusion model of equity markets. Interestingly, we show that estimating certain sensitivities of the rank-based interacting diffusion model using our method for a reflected Brownian motion description of the model outperforms a finite difference method for a stochastic differential equation description of the model.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/STSY.2019.0031","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42896893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cascading failure models are typically used to capture the phenomenon where failures possibly trigger further failures in succession, causing knock-on effects. In many networks this ultimately leads to a disintegrated network in which the failure propagation continues independently across the various components. In order to gain insight into the impact of network splitting on cascading failure processes, we extend a well-established cascading failure model for which the number of failures obeys a power-law distribution. We assume that a single line failure immediately splits the network into two components and examine its effect on the power-law exponent. The results provide valuable qualitative insights that are crucial first steps towards understanding more complex network splitting scenarios.
{"title":"The Impact of a Network Split on Cascading Failure Processes","authors":"F. Sloothaak, S. Borst, B. Zwart","doi":"10.1287/stsy.2019.0035","DOIUrl":"https://doi.org/10.1287/stsy.2019.0035","url":null,"abstract":"Cascading failure models are typically used to capture the phenomenon where failures possibly trigger further failures in succession, causing knock-on effects. In many networks this ultimately leads to a disintegrated network where the failure propagation continues independently across the various components. In order to gain insight in the impact of network splitting on cascading failure processes, we extend a well-established cascading failure model for which the number of failures obeys a power-law distribution. We assume that a single line failure immediately splits the network in two components, and examine its effect on the power-law exponent. The results provide valuable qualitative insights that are crucial first steps towards understanding more complex network splitting scenarios.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/stsy.2019.0035","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43131622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for nonconvex objective functions as well. An [Formula: see text] convergence rate is also proven for the algorithm in the strongly convex case. The mathematical analysis lies at the intersection of stochastic analysis and statistical learning.
{"title":"Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem","authors":"Justin A. Sirignano, K. Spiliopoulos","doi":"10.1287/stsy.2019.0050","DOIUrl":"https://doi.org/10.1287/stsy.2019.0050","url":null,"abstract":"Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for nonconvex objective functions as well. An [Formula: see text] convergence rate is also proven for the algorithm in the strongly convex case. The mathematical analysis lies at the intersection of stochastic analysis and statistical learning.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/stsy.2019.0050","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45045189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study a multiclass M/M/1 queueing control problem with finite buffers under heavy traffic, where the decision maker is uncertain about the arrival and service rates of the system and, through scheduling and admission/rejection decisions, acts to minimize a discounted cost that accounts for the uncertainty. The main result is the asymptotic optimality of a $c\mu$-type policy derived via underlying stochastic differential games studied in [16]. Under this policy, with high probability, rejections are not performed when the workload lies below some cut-off that depends on the ambiguity level. When the workload exceeds this cut-off, rejections are carried out, and only from the buffer with the cheapest rejection cost weighted by the mean service rate in some reference model. The allocation part of the policy is the same for all ambiguity levels. This is the first work to address a heavy-traffic queueing control problem with model uncertainty.
{"title":"Asymptotic Analysis of a Multiclass Queueing Control Problem Under Heavy Traffic with Model Uncertainty","authors":"A. Cohen","doi":"10.1287/stsy.2019.0034","DOIUrl":"https://doi.org/10.1287/stsy.2019.0034","url":null,"abstract":"We study a multiclass M/M/1 queueing control problem with finite buffers under heavy-traffic where the decision maker is uncertain about the rates of arrivals and service of the system and by scheduling and admission/rejection decisions acts to minimize a discounted cost that accounts for the uncertainty. The main result is the asymptotic optimality of a $cmu$-type of policy derived via underlying stochastic differential games studied in [16]. Under this policy, with high probability, rejections are not performed when the workload lies below some cut-off that depends on the ambiguity level. When the workload exceeds this cut-off, rejections are carried out and only from the buffer with the cheapest rejection cost weighted with the mean service rate in some reference model. The allocation part of the policy is the same for all the ambiguity levels. This is the first work to address a heavy-traffic queueing control problem with model uncertainty.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/stsy.2019.0034","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47076804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many stochastic systems have arrival processes that exhibit clustering behavior. In these systems, arriving entities influence additional arrivals to occur through self-excitation of the arrival process. In this paper, we analyze an infinite server queueing system in which the arrivals are driven by the self-exciting Hawkes process and where service follows a phase-type distribution or is deterministic. In the phase-type setting, we derive differential equations for the moments and a partial differential equation for the moment generating function; we also derive exact expressions for the transient and steady-state mean, variance, and covariances. Furthermore, we also derive exact expressions for the auto-covariance of the queue and provide an expression for the cumulant moment generating function in terms of a single ordinary differential equation. In the deterministic service setting, we provide exact expressions for the first and second moments and the queue auto-covariance. As motivation for our Hawkes queueing model, we demonstrate its usefulness through two novel applications. These applications are trending internet traffic and arrivals to nightclubs. In the web traffic setting, we investigate the impact of a click. In the nightclub or "Club Queue" setting, we design an optimal control problem for the rate to admit club-goers.
{"title":"Queues Driven by Hawkes Processes","authors":"A. Daw, Jamol Pender","doi":"10.2139/SSRN.3003376","DOIUrl":"https://doi.org/10.2139/SSRN.3003376","url":null,"abstract":"Many stochastic systems have arrival processes that exhibit clustering behavior. In these systems, arriving entities influence additional arrivals to occur through self-excitation of the arrival process. In this paper, we analyze an infinite server queueing system in which the arrivals are driven by the self-exciting Hawkes process and where service follows a phase-type distribution or is deterministic. In the phase-type setting, we derive differential equations for the moments and a partial differential equation for the moment generating function; we also derive exact expressions for the transient and steady-state mean, variance, and covariances. Furthermore, we also derive exact expressions for the auto-covariance of the queue and provide an expression for the cumulant moment generating function in terms of a single ordinary differential equation. In the deterministic service setting, we provide exact expressions for the first and second moments and the queue auto-covariance. As motivation for our Hawkes queueing model, we demonstrate its usefulness through two novel applications. These applications are trending internet traffic and arrivals to nightclubs. In the web traffic setting, we investigate the impact of a click. In the nightclub or \"Club Queue\" setting, we design an optimal control problem for the rate to admit club-goers.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44494648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider a discrete-time Markov chain $\boldsymbol{\Phi}$ on a general state space $\mathsf{X}$, whose transition probabilities are parameterized by a real-valued vector $\boldsymbol{\theta}$. Under the assumption that $\boldsymbol{\Phi}$ is geometrically ergodic with corresponding stationary distribution $\pi(\boldsymbol{\theta})$, we are interested in estimating the gradient $\nabla \alpha(\boldsymbol{\theta})$ of the steady-state expectation $$\alpha(\boldsymbol{\theta}) = \pi(\boldsymbol{\theta}) f.$$ To this end, we first give sufficient conditions for the differentiability of $\alpha(\boldsymbol{\theta})$ and for the calculation of its gradient via a sequence of finite-horizon expectations. We then propose two different likelihood ratio estimators and analyze their limiting behavior.
{"title":"Likelihood Ratio Gradient Estimation for Steady-State Parameters","authors":"P. Glynn, Mariana Olvera-Cravioto","doi":"10.1287/STSY.2018.0023","DOIUrl":"https://doi.org/10.1287/STSY.2018.0023","url":null,"abstract":"We consider a discrete-time Markov chain $boldsymbol{Phi}$ on a general state-space ${sf X}$, whose transition probabilities are parameterized by a real-valued vector $boldsymbol{theta}$. Under the assumption that $boldsymbol{Phi}$ is geometrically ergodic with corresponding stationary distribution $pi(boldsymbol{theta})$, we are interested in estimating the gradient $nabla alpha(boldsymbol{theta})$ of the steady-state expectation $$alpha(boldsymbol{theta}) = pi( boldsymbol{theta}) f.$$ \u0000To this end, we first give sufficient conditions for the differentiability of $alpha(boldsymbol{theta})$ and for the calculation of its gradient via a sequence of finite horizon expectations. We then propose two different likelihood ratio estimators and analyze their limiting behavior.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/STSY.2018.0023","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44742406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider a queue to which only a finite pool of n customers can arrive, at times depending on their service requirement. A customer with stochastic service requirement S arrives to the queue after an exponentially distributed time with mean S^{-α} for some α ∈ [0, 1]; therefore, larger service requirements trigger customers to join earlier. This finite-pool queue interpolates between two previously studied cases: α = 0 gives the so-called Δ(i)/G/1 queue and α = 1 is closely related to the exploration process for inhomogeneous random graphs. We consider the asymptotic regime in which the pool size n grows to infinity and establish that the scaled queue-length process converges to a diffusion process with a negative quadratic drift. We leverage this asymptotic result to characterize the head start that is needed to create a long period of activity. We also describe how this first busy period of the queue gives rise to a critically connected random forest.
{"title":"Big Jobs Arrive Early: From Critical Queues to Random Graphs","authors":"G. Bet, R. van der Hofstad, J. V. van Leeuwaarden","doi":"10.1287/stsy.2019.0057","DOIUrl":"https://doi.org/10.1287/stsy.2019.0057","url":null,"abstract":"We consider a queue to which only a finite pool of n customers can arrive, at times depending on their service requirement. A customer with stochastic service requirement S arrives to the queue after an exponentially distributed time with mean S-αfor some [Formula: see text]; therefore, larger service requirements trigger customers to join earlier. This finite-pool queue interpolates between two previously studied cases: α = 0 gives the so-called [Formula: see text] queue and α = 1 is closely related to the exploration process for inhomogeneous random graphs. We consider the asymptotic regime in which the pool size n grows to infinity and establish that the scaled queue-length process converges to a diffusion process with a negative quadratic drift. We leverage this asymptotic result to characterize the head start that is needed to create a long period of activity. We also describe how this first busy period of the queue gives rise to a critically connected random forest.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2017-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/stsy.2019.0057","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43649421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider a system of $N$ parallel single-server queues with unit exponential service rates and a single dispatcher, where tasks arrive as a Poisson process of rate $\lambda(N)$. When a task arrives, the dispatcher assigns it to a server with the shortest queue among $d(N)$ randomly selected servers ($1 \leq d(N) \leq N$). This load balancing strategy is referred to as a JSQ($d(N)$) scheme, as it subsumes the celebrated Join-the-Shortest Queue (JSQ) policy as a crucial special case for $d(N) = N$. We construct a stochastic coupling to bound the difference in the queue length processes between the JSQ policy and a scheme with an arbitrary value of $d(N)$. We use the coupling to derive the fluid limit in the regime where $\lambda(N)/N \to \lambda < 1$ as $N \to \infty$ with $d(N) \to \infty$, and the diffusion limit in the regime where $(N - \lambda(N))/\sqrt{N} \to \beta > 0$ as $N \to \infty$ with $d(N)/(\sqrt{N} \log N) \to \infty$, and show that in either regime the limit corresponds to that for the JSQ policy. These results indicate that the optimality of the JSQ policy can be preserved at the fluid level and diffusion level while reducing the overhead by nearly a factor $O(N)$ and $O(\sqrt{N}/\log N)$, respectively.
{"title":"Universality of Power-of-d Load Balancing in Many-Server Systems","authors":"Debankur Mukherjee, S. Borst, J. V. van Leeuwaarden, P. Whiting","doi":"10.1287/stsy.2018.0016","DOIUrl":"https://doi.org/10.1287/stsy.2018.0016","url":null,"abstract":"We consider a system of $N$ parallel single-server queues with unit exponential service rates and a single dispatcher where tasks arrive as a Poisson process of rate $lambda(N)$. When a task arrives, the dispatcher assigns it to a server with the shortest queue among $d(N)$ randomly selected servers ($1 leq d(N) leq N$). This load balancing strategy is referred to as a JSQ($d(N)$) scheme, marking that it subsumes the celebrated Join-the-Shortest Queue (JSQ) policy as a crucial special case for $d(N) = N$. \u0000We construct a stochastic coupling to bound the difference in the queue length processes between the JSQ policy and a scheme with an arbitrary value of $d(N)$. We use the coupling to derive the fluid limit in the regime where $lambda(N) / N to lambda 0$ as $N to infty$ with $d(N)/(sqrt{N} log (N))toinfty$ corresponds to that for the JSQ policy. These results indicate that the optimality of the JSQ policy can be preserved at the fluid-level and diffusion-level while reducing the overhead by nearly a factor O($N$) and O($sqrt{N}/log(N)$), respectively.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1287/stsy.2018.0016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66531375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We establish mean-field limits for large-scale random-access networks with buffer dynamics and arbitrary interference graphs. Although saturated buffer scenarios have been widely investigated and yield useful throughput estimates for persistent sessions, they fail to capture the fluctuations in buffer contents over time and provide no insight into the delay performance of flows with intermittent packet arrivals. Motivated by that issue, we explore in the present paper random-access networks with buffer dynamics, where flows with empty buffers refrain from competition for the medium. The occurrence of empty buffers thus results in a complex dynamic interaction between activity states and buffer contents, which severely complicates the performance analysis. Hence, we focus on a many-sources regime where the total number of nodes grows large, which not only offers mathematical tractability but is also highly relevant in view of the densification of wireless networks as the Internet of Things emerges. We exploit timescale separation properties to prove that the properly scaled buffer occupancy process converges to the solution of a deterministic initial value problem and establish the existence and uniqueness of the associated fixed point. This approach simplifies the performance analysis of networks with huge numbers of nodes to a low-dimensional fixed-point calculation. For the case of a complete interference graph, we demonstrate asymptotic stability, provide a simple closed-form expression for the fixed point, and prove interchange of the mean-field and steady-state limits. This yields asymptotically exact approximations for key performance metrics, in particular the stationary buffer content and packet delay distributions.
{"title":"Mean-Field Limits for Large-Scale Random-Access Networks","authors":"Fabio Cecchi, S. Borst, J. V. van Leeuwaarden, P. Whiting","doi":"10.1287/stsy.2021.0068","DOIUrl":"https://doi.org/10.1287/stsy.2021.0068","url":null,"abstract":"We establish mean-field limits for large-scale random-access networks with buffer dynamics and arbitrary interference graphs. Although saturated buffer scenarios have been widely investigated and yield useful throughput estimates for persistent sessions, they fail to capture the fluctuations in buffer contents over time and provide no insight in the delay performance of flows with intermittent packet arrivals. Motivated by that issue, we explore in the present paper random-access networks with buffer dynamics, where flows with empty buffers refrain from competition for the medium. The occurrence of empty buffers thus results in a complex dynamic interaction between activity states and buffer contents, which severely complicates the performance analysis. Hence, we focus on a many-sources regime where the total number of nodes grows large, which not only offers mathematical tractability but is also highly relevant with the densification of wireless networks as the Internet of Things emerges. We exploit timescale separation properties to prove that the properly scaled buffer occupancy process converges to the solution of a deterministic initial value problem and establish the existence and uniqueness of the associated fixed point. This approach simplifies the performance analysis of networks with huge numbers of nodes to a low-dimensional fixed-point calculation. For the case of a complete interference graph, we demonstrate asymptotic stability, provide a simple closed form expression for the fixed point, and prove interchange of the mean-field and steady-state limits. This yields asymptotically exact approximations for key performance metrics, in particular the stationary buffer content and packet delay distributions.","PeriodicalId":36337,"journal":{"name":"Stochastic Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"66531337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}