A Markov chain in discrete time, {X_n : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We now relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, consider the rat in the open maze. Clearly it is more realistic to be able to keep track of where the rat is at any continuous time t ≥ 0, as opposed to only where the rat is after n "steps". Assume throughout that our state space is S = Z = {· · · , −2, −1, 0, 1, 2, · · · } (or some subset thereof).

Suppose now that whenever a chain enters state i ∈ S, independent of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable H_i, called the holding time in state i. When the holding time ends, the process makes a transition into state j according to the transition probability P_ij, independent of the past, and so on. Letting X(t) denote the state at time t, we end up with a continuous-time stochastic process {X(t) : t ≥ 0} with state space S.

Our objective is to place conditions on the holding times to ensure that the continuous-time process satisfies the Markov property: the future, {X(s + t) : t ≥ 0}, given the present state, X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process will be called a continuous-time Markov chain (CTMC), and as we will conclude shortly, the holding times will have to be exponentially distributed. The formal definition is given by

Definition 1.1. A stochastic process {X(t) : t ≥ 0} with discrete state space S is called a continuous-time Markov chain (CTMC) if for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S,

P(X(s + t) = j | X(s) = i, {X(u) : 0 ≤ u < s}) = P(X(s + t) = j | X(s) = i) = P_ij(t).

P_ij(t) is the probability that the chain will be in state j, t time units from now, given that it is in state i now. For …
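The construction described above — an exponential holding time H_i in each state, followed by a jump chosen from the embedded transition probabilities P_ij — can be sketched as a short simulation. This is an illustrative sketch, not from the text: the state-dependent rates, the two-state example, and the function name `simulate_ctmc` are all assumptions made for demonstration.

```python
import random

def simulate_ctmc(rates, P, x0, t_max, rng):
    """Simulate one CTMC path on [0, t_max].

    rates[i] is the rate of the exponential holding time in state i
    (an illustrative assumption; the text only requires that holding
    times be exponential).  P[i] maps state i to a dict {j: P_ij} of
    embedded-chain transition probabilities.
    """
    t, x = 0.0, x0
    path = [(t, x)]                      # (jump time, state entered) pairs
    while True:
        h = rng.expovariate(rates[x])    # holding time H_i ~ Exp(rates[i])
        t += h
        if t >= t_max:
            break
        # Choose the next state j with probability P_ij, independent of the past.
        u, cum = rng.random(), 0.0
        for j, p in P[x].items():
            cum += p
            if u < cum:
                x = j
                break
        path.append((t, x))
    return path

# A toy two-state chain on S = {0, 1} (illustrative numbers, not from the text).
rng = random.Random(0)
rates = {0: 1.0, 1: 2.0}
P = {0: {1: 1.0}, 1: {0: 1.0}}   # deterministic alternation between the states
path = simulate_ctmc(rates, P, x0=0, t_max=10.0, rng=rng)
```

Because the holding times are strictly positive, the jump times in `path` are strictly increasing, and since each P[i] is a proper distribution, every jump lands in S.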
Title: Continuous-Time Markov Chains. Author: Gregory F. Lawler. In: Introduction to Stochastic Processes. DOI: 10.1201/b21389-11 (https://doi.org/10.1201/b21389-11). Published 2018-09-03.