Premium Control with Reinforcement Learning

IF 1.7 · CAS Tier 3 (Economics) · JCR Q2 (ECONOMICS) · ASTIN Bulletin · Pub Date: 2023-04-11 · DOI: 10.1017/asb.2023.13
L. Palmborg, F. Lindskog
ASTIN Bulletin, pp. 233–257 · Citation count: 0

Abstract We consider a premium control problem in discrete time, formulated in terms of a Markov decision process. In a simplified setting, the optimal premium rule can be derived with dynamic programming methods. However, these classical methods are not feasible in a more realistic setting due to the dimension of the state space and lack of explicit expressions for transition probabilities. We explore reinforcement learning techniques, using function approximation, to solve the premium control problem for realistic stochastic models. We illustrate the appropriateness of the approximate optimal premium rule compared with the true optimal premium rule in a simplified setting and further demonstrate that the approximate optimal premium rule outperforms benchmark rules in more realistic settings where classical approaches fail.
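The abstract contrasts classical dynamic programming, which works in a simplified setting, with reinforcement learning for realistic models. As an illustrative sketch only (the surplus dynamics, premium levels, claim distribution, and reward below are hypothetical, not the paper's model), the following shows the kind of value-iteration computation that is feasible when the state space is small and transition probabilities are explicit:

```python
# Toy premium-control MDP solved by value iteration (classical dynamic
# programming). All model ingredients are invented for illustration.
import numpy as np

surplus_levels = np.arange(0, 21)        # discretized insurer surplus 0..20
premiums = np.array([1, 2, 3])           # candidate premium levels (actions)
claim_sizes = np.array([0, 2, 4])        # possible aggregate claims per period
claim_probs = np.array([0.3, 0.4, 0.3])  # explicit transition probabilities
gamma = 0.95                             # discount factor
target = 10                              # desired surplus level

def reward(s, a):
    # penalize deviation from the target surplus, plus a cost for
    # charging high premiums (e.g. lost business)
    return -abs(s - target) - 0.5 * a

V = np.zeros(len(surplus_levels))
for _ in range(500):
    Q = np.empty((len(surplus_levels), len(premiums)))
    for i, s in enumerate(surplus_levels):
        for j, p in enumerate(premiums):
            # next surplus under each claim outcome, clipped to the grid
            s_next = np.clip(s + p - claim_sizes, 0, 20)
            Q[i, j] = reward(s, p) + gamma * claim_probs @ V[s_next]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# optimal premium rule on the toy model: premium as a function of surplus
policy = premiums[Q.argmax(axis=1)]
```

On this toy model the rule behaves as one would expect: low surplus calls for a high premium and high surplus for a low one. The paper's point is that this exact enumeration over states and transitions is infeasible for realistic models, which motivates replacing it with reinforcement learning using function approximation.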
Source journal
ASTIN Bulletin — Mathematics, Interdisciplinary Applications
CiteScore: 3.20
Self-citation rate: 5.30%
Articles per year: 24
Review time: >12 weeks
Journal description: ASTIN Bulletin publishes papers that are relevant to any branch of actuarial science and insurance mathematics. Its papers are quantitative and scientific in nature, and draw on theory and methods developed in any branch of the mathematical sciences including actuarial mathematics, statistics, probability, financial mathematics and econometrics.
Latest articles in this journal
Construction of rating systems using global sensitivity analysis: A numerical investigation
Optimal VIX-linked structure for the target benefit pension plan
Risk sharing in equity-linked insurance products: Stackelberg equilibrium between an insurer and a reinsurer
Target benefit versus defined contribution scheme: a multi-period framework
ASB volume 53 issue 3 Cover and Front matter