
Introduction to Stochastic Processes: Latest Publications

Reversible Markov Chains
Pub Date : 2018-10-03 DOI: 10.1201/9781315273600-14
G. Lawler
{"title":"Reversible Markov Chains","authors":"G. Lawler","doi":"10.1201/9781315273600-14","DOIUrl":"https://doi.org/10.1201/9781315273600-14","url":null,"abstract":"","PeriodicalId":233191,"journal":{"name":"Introduction to Stochastic Processes","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126754721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Optimal Stopping
Pub Date : 2018-10-03 DOI: 10.1007/978-0-387-75816-9_5
Thomas Kesselheim
{"title":"Optimal Stopping","authors":"Thomas Kesselheim","doi":"10.1007/978-0-387-75816-9_5","DOIUrl":"https://doi.org/10.1007/978-0-387-75816-9_5","url":null,"abstract":"","PeriodicalId":233191,"journal":{"name":"Introduction to Stochastic Processes","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115670594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
Countable Markov Chains
Pub Date : 2018-10-03 DOI: 10.1201/9781315273600-9
G. Lawler
{"title":"Countable Markov Chains","authors":"G. Lawler","doi":"10.1201/9781315273600-9","DOIUrl":"https://doi.org/10.1201/9781315273600-9","url":null,"abstract":"","PeriodicalId":233191,"journal":{"name":"Introduction to Stochastic Processes","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132349915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Finite Markov Chains
Pub Date : 2018-10-03 DOI: 10.1201/9781315273600-8
G. Lawler
{"title":"Finite Markov Chains","authors":"G. Lawler","doi":"10.1201/9781315273600-8","DOIUrl":"https://doi.org/10.1201/9781315273600-8","url":null,"abstract":"","PeriodicalId":233191,"journal":{"name":"Introduction to Stochastic Processes","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122466871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Continuous-Time Markov Chains
Pub Date : 2018-09-03 DOI: 10.1201/b21389-11
Gregory F. Lawler
A Markov chain in discrete time, {X_n : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We now relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, consider the rat in the open maze. Clearly it is more realistic to be able to keep track of where the rat is at any continuous time t ≥ 0, as opposed to only where the rat is after n "steps". Assume throughout that our state space is S = Z = {· · · , −2, −1, 0, 1, 2, · · · } (or some subset thereof). Suppose now that whenever a chain enters state i ∈ S, independently of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable H_i called the holding time in state i. When the holding time ends, the process then makes a transition into state j according to transition probability P_ij, independently of the past, and so on. Letting X(t) denote the state at time t, we end up with a continuous-time stochastic process {X(t) : t ≥ 0} with state space S. Our objective is to place conditions on the holding times to ensure that the continuous-time process satisfies the Markov property: the future, {X(s + t) : t ≥ 0}, given the present state, X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process will be called a continuous-time Markov chain (CTMC), and as we will conclude shortly, the holding times will have to be exponentially distributed. The formal definition is given by Definition 1.1: A stochastic process {X(t) : t ≥ 0} with discrete state space S is called a continuous-time Markov chain (CTMC) if for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S, P(X(s + t) = j | X(s) = i, {X(u) : 0 ≤ u < s}) = P(X(s + t) = j | X(s) = i) = P_ij(t). P_ij(t) is the probability that the chain will be in state j, t time units from now, given it is in state i now. For …
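The construction sketched in this abstract (exponential holding time H_i in the current state, then a jump according to the embedded transition probabilities P_ij) lends itself to a short simulation. The following is a minimal, hypothetical Python sketch, not taken from the chapter; the function name simulate_ctmc, the two-state example, and its rates are illustrative assumptions.

```python
import random

def simulate_ctmc(P, rates, x0, t_max, seed=0):
    """Simulate one path {X(t) : 0 <= t <= t_max} of the CTMC construction above.

    P     -- dict: state i -> {state j: embedded transition probability P_ij}
    rates -- dict: state i -> rate of the exponential holding time H_i
    x0    -- initial state
    Returns a list of (jump_time, state) pairs, starting with (0.0, x0).
    """
    rng = random.Random(seed)
    t, state = 0.0, x0
    path = [(t, state)]
    while True:
        # Holding time in the current state: H_i ~ Exponential(rates[i]),
        # drawn independently of the past -- this preserves the Markov property.
        t += rng.expovariate(rates[state])
        if t >= t_max:
            break
        # Jump to the next state according to the embedded chain P_ij.
        targets, probs = zip(*P[state].items())
        state = rng.choices(targets, weights=probs)[0]
        path.append((t, state))
    return path

# Illustrative (assumed) two-state example: state 0 is left at rate 2.0,
# state 1 at rate 0.5, and each state always jumps to the other one.
P = {0: {1: 1.0}, 1: {0: 1.0}}
rates = {0: 2.0, 1: 0.5}
print(simulate_ctmc(P, rates, x0=0, t_max=5.0))
```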
{"title":"Continuous-Time Markov Chains","authors":"Gregory F. Lawler","doi":"10.1201/b21389-11","DOIUrl":"https://doi.org/10.1201/b21389-11","url":null,"abstract":"A Markov chain in discrete time, {X n : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, suppose we consider the rat in the open maze. Clearly it is more realistic to be able to keep track of where the rat is at any continuous-time t ≥ 0 as oppposed to only where the rat is after n \" steps \". Assume throughout that our state space is S = Z = {· · · , −2, −1, 0, 1, 2, · · · } (or some subset thereof). Suppose now that whenever a chain enters state i ∈ S, independent of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable H i called the holding time in state i. When the holding time ends, the process then makes a transition into state j according to transition probability P ij , independent of the past, and so on. 1 Letting X(t) denote the state at time t, we end up with a continuous-time stochastic process {X(t) : t ≥ 0} with state space S. Our objective is to place conditions on the holding times to ensure that the continuous-time process satisfies the Markov property: The future, {X(s + t) : t ≥ 0}, given the present state, X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process will be called a continuous-time Markvov chain (CTMC), and as we will conclude shortly, the holding times will have to be exponentially distributed. The formal definition is given by Definition 1.1 A stochastic process {X(t) : t ≥ 0} with discrete state space S is called a continuous-time Markvov chain (CTMC) if for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S, P (X(s + t) = j|X(s) = i, {X(u) : 0 ≤ u < s}) = P (X(s + t) = j|X(s) = i) = P ij (t). P ij (t) is the probability that the chain will be in state j, t time units from now, given it is in state i now. For …","PeriodicalId":233191,"journal":{"name":"Introduction to Stochastic Processes","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115075711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0