Continuous-Time Markov Chains

Gregory F. Lawler
{"title":"连续时间马尔可夫链","authors":"Gregory F. Lawler","doi":"10.1201/b21389-11","DOIUrl":null,"url":null,"abstract":"A Markov chain in discrete time, {X n : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, suppose we consider the rat in the open maze. Clearly it is more realistic to be able to keep track of where the rat is at any continuous-time t ≥ 0 as oppposed to only where the rat is after n \" steps \". Assume throughout that our state space is S = Z = {· · · , −2, −1, 0, 1, 2, · · · } (or some subset thereof). Suppose now that whenever a chain enters state i ∈ S, independent of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable H i called the holding time in state i. When the holding time ends, the process then makes a transition into state j according to transition probability P ij , independent of the past, and so on. 1 Letting X(t) denote the state at time t, we end up with a continuous-time stochastic process {X(t) : t ≥ 0} with state space S. Our objective is to place conditions on the holding times to ensure that the continuous-time process satisfies the Markov property: The future, {X(s + t) : t ≥ 0}, given the present state, X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process will be called a continuous-time Markvov chain (CTMC), and as we will conclude shortly, the holding times will have to be exponentially distributed. The formal definition is given by Definition 1.1 A stochastic process {X(t) : t ≥ 0} with discrete state space S is called a continuous-time Markvov chain (CTMC) if for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S, P (X(s + t) = j|X(s) = i, {X(u) : 0 ≤ u < s}) = P (X(s + t) = j|X(s) = i) = P ij (t). P ij (t) is the probability that the chain will be in state j, t time units from now, given it is in state i now. For …","PeriodicalId":233191,"journal":{"name":"Introduction to Stochastic Processes","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Continuous-Time Markov Chains\",\"authors\":\"Gregory F. Lawler\",\"doi\":\"10.1201/b21389-11\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A Markov chain in discrete time, {X n : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, suppose we consider the rat in the open maze. Clearly it is more realistic to be able to keep track of where the rat is at any continuous-time t ≥ 0 as oppposed to only where the rat is after n \\\" steps \\\". Assume throughout that our state space is S = Z = {· · · , −2, −1, 0, 1, 2, · · · } (or some subset thereof). Suppose now that whenever a chain enters state i ∈ S, independent of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable H i called the holding time in state i. When the holding time ends, the process then makes a transition into state j according to transition probability P ij , independent of the past, and so on. 
1 Letting X(t) denote the state at time t, we end up with a continuous-time stochastic process {X(t) : t ≥ 0} with state space S. Our objective is to place conditions on the holding times to ensure that the continuous-time process satisfies the Markov property: The future, {X(s + t) : t ≥ 0}, given the present state, X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process will be called a continuous-time Markvov chain (CTMC), and as we will conclude shortly, the holding times will have to be exponentially distributed. The formal definition is given by Definition 1.1 A stochastic process {X(t) : t ≥ 0} with discrete state space S is called a continuous-time Markvov chain (CTMC) if for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S, P (X(s + t) = j|X(s) = i, {X(u) : 0 ≤ u < s}) = P (X(s + t) = j|X(s) = i) = P ij (t). P ij (t) is the probability that the chain will be in state j, t time units from now, given it is in state i now. For …\",\"PeriodicalId\":233191,\"journal\":{\"name\":\"Introduction to Stochastic Processes\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-09-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Introduction to Stochastic Processes\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1201/b21389-11\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Introduction to Stochastic Processes","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1201/b21389-11","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

A Markov chain in discrete time, {X_n : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We now relax this restriction by allowing the chain to spend a continuous amount of time in any state, in such a way that the Markov property is retained. As motivation, consider the rat in the open maze: it is clearly more realistic to keep track of where the rat is at any continuous time t ≥ 0, as opposed to only where it is after n "steps". Assume throughout that the state space is S = Z = {..., −2, −1, 0, 1, 2, ...} (or some subset thereof).

Suppose now that whenever the chain enters a state i ∈ S, independent of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable H_i, called the holding time in state i. When the holding time ends, the process makes a transition into state j according to the transition probability P_ij, independent of the past, and so on. Letting X(t) denote the state at time t, we obtain a continuous-time stochastic process {X(t) : t ≥ 0} with state space S.

Our objective is to place conditions on the holding times ensuring that the continuous-time process satisfies the Markov property: the future, {X(s + t) : t ≥ 0}, given the present state X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process is called a continuous-time Markov chain (CTMC), and, as we will conclude shortly, the holding times must be exponentially distributed.

The formal definition is as follows.

Definition 1.1. A stochastic process {X(t) : t ≥ 0} with discrete state space S is called a continuous-time Markov chain (CTMC) if for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S,

P(X(s + t) = j | X(s) = i, {X(u) : 0 ≤ u < s}) = P(X(s + t) = j | X(s) = i) = P_ij(t).

Here P_ij(t) is the probability that the chain will be in state j, t time units from now, given that it is in state i now. For …
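To make the jump-and-hold construction above concrete, here is a minimal Python sketch (not from the text): the chain holds in state i for an Exponential(q_i) time, matching the abstract's conclusion that the holding times must be exponentially distributed, and then jumps according to an embedded transition matrix P_ij. The rates q and the matrix P below are hypothetical toy values chosen purely for illustration.

```python
import random

# Toy state space S = {0, 1, 2}. q[i] is an assumed exponential holding
# rate for state i, so the mean holding time in state i is 1 / q[i].
q = {0: 1.0, 1: 2.0, 2: 0.5}

# Embedded jump chain: P[i][j] = probability of moving to j when leaving i.
# Each row sums to 1, and a jump always changes the state (no P[i][i]).
P = {
    0: {1: 0.7, 2: 0.3},
    1: {0: 0.5, 2: 0.5},
    2: {0: 1.0},
}

def simulate_ctmc(x0, t_max, rng=random):
    """Simulate one path of {X(t) : 0 <= t <= t_max}.

    Returns the jump times and the states entered at those times.
    """
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while True:
        # Holding time H_i in the current state: Exponential(q[x]).
        t += rng.expovariate(q[x])
        if t >= t_max:
            break
        # Leave state x: pick the next state j with probability P[x][j].
        u, nxt = rng.random(), None
        for j, p in P[x].items():
            u -= p
            if u <= 0:
                nxt = j
                break
        x = nxt if nxt is not None else j  # guard against rounding error
        times.append(t)
        states.append(x)
    return times, states

times, states = simulate_ctmc(x0=0, t_max=10.0)
print(list(zip(times, states)))
```

The memorylessness of the exponential distribution is what makes this construction Markov: at any time s, the remaining holding time in the current state has the same exponential distribution no matter how long the chain has already been there, so the future depends on the past only through X(s).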