Deep recurrent Q-network algorithm for carbon emission allowance trading strategy

Journal of Environmental Management | IF 8.0 | JCR Q1, ENVIRONMENTAL SCIENCES | CAS Tier 2, Environmental Science & Ecology | Pub Date: 2024-11-15 | DOI: 10.1016/j.jenvman.2024.123308
Chao Wu, Wenjie Bi, Haiying Liu
{"title":"碳排放限额交易策略的深度递归 Q 网络算法。","authors":"Chao Wu ,&nbsp;Wenjie Bi ,&nbsp;Haiying Liu","doi":"10.1016/j.jenvman.2024.123308","DOIUrl":null,"url":null,"abstract":"<div><div>Against the backdrop of global warming, the carbon trading market is considered as an effective means of emission reduction. With more and more companies and individuals participating in carbon markets for trading, it is of great theoretical and practical significance to help them automatically identify carbon trading investment opportunities and achieve intelligent carbon trading decisions. Based on the characteristics of the carbon trading market, we propose a novel deep reinforcement learning (DRL) trading strategy - Deep Recurrent Q-Network (DRQN). The experimental results show that the carbon allowance trading model based on the DRQN algorithm can provide optimal trading strategies and adapt to market changes. Specifically, the annualized returns for the DRQN algorithm strategy in the Guangdong (GD) and Hubei (HB) carbon markets are 15.43% and 34.75%, respectively, significantly outperforming other strategies. To better meet the needs of the actual implementation scenarios of the model, we analyze the impacts of discount factors and trading costs. The research results indicate that discount factors can provide participants with clearer expectations. In both carbon markets (GD and HB), there exists an optimal discount factor value of 0.4, as both excessively small or large values can have adverse effects on trading. Simultaneously, the government can ensure the fairness of carbon trading by regulating the costs of carbon trading to limit the speculative behavior of participants.</div></div>","PeriodicalId":356,"journal":{"name":"Journal of Environmental Management","volume":"372 ","pages":"Article 123308"},"PeriodicalIF":8.0000,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep recurrent Q-network algorithm for carbon emission allowance trading strategy\",\"authors\":\"Chao Wu ,&nbsp;Wenjie Bi ,&nbsp;Haiying Liu\",\"doi\":\"10.1016/j.jenvman.2024.123308\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Against the backdrop of global warming, the carbon trading market is considered as an effective means of emission reduction. With more and more companies and individuals participating in carbon markets for trading, it is of great theoretical and practical significance to help them automatically identify carbon trading investment opportunities and achieve intelligent carbon trading decisions. Based on the characteristics of the carbon trading market, we propose a novel deep reinforcement learning (DRL) trading strategy - Deep Recurrent Q-Network (DRQN). The experimental results show that the carbon allowance trading model based on the DRQN algorithm can provide optimal trading strategies and adapt to market changes. Specifically, the annualized returns for the DRQN algorithm strategy in the Guangdong (GD) and Hubei (HB) carbon markets are 15.43% and 34.75%, respectively, significantly outperforming other strategies. To better meet the needs of the actual implementation scenarios of the model, we analyze the impacts of discount factors and trading costs. The research results indicate that discount factors can provide participants with clearer expectations. 
In both carbon markets (GD and HB), there exists an optimal discount factor value of 0.4, as both excessively small or large values can have adverse effects on trading. Simultaneously, the government can ensure the fairness of carbon trading by regulating the costs of carbon trading to limit the speculative behavior of participants.</div></div>\",\"PeriodicalId\":356,\"journal\":{\"name\":\"Journal of Environmental Management\",\"volume\":\"372 \",\"pages\":\"Article 123308\"},\"PeriodicalIF\":8.0000,\"publicationDate\":\"2024-11-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Environmental Management\",\"FirstCategoryId\":\"93\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0301479724032948\",\"RegionNum\":2,\"RegionCategory\":\"环境科学与生态学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENVIRONMENTAL SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Environmental Management","FirstCategoryId":"93","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0301479724032948","RegionNum":2,"RegionCategory":"环境科学与生态学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENVIRONMENTAL SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Against the backdrop of global warming, the carbon trading market is considered an effective means of emission reduction. With more and more companies and individuals participating in carbon markets, it is of great theoretical and practical significance to help them automatically identify carbon trading investment opportunities and make intelligent carbon trading decisions. Based on the characteristics of the carbon trading market, we propose a novel deep reinforcement learning (DRL) trading strategy, the Deep Recurrent Q-Network (DRQN). The experimental results show that the carbon allowance trading model based on the DRQN algorithm can provide optimal trading strategies and adapt to market changes. Specifically, the annualized returns of the DRQN strategy in the Guangdong (GD) and Hubei (HB) carbon markets are 15.43% and 34.75%, respectively, significantly outperforming other strategies. To better meet the needs of the model's actual implementation scenarios, we analyze the impacts of discount factors and trading costs. The results indicate that discount factors can provide participants with clearer expectations. In both carbon markets (GD and HB) there exists an optimal discount factor of 0.4, as both excessively small and excessively large values have adverse effects on trading. At the same time, the government can ensure the fairness of carbon trading by regulating trading costs to limit the speculative behavior of participants.
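The abstract describes the method only at a high level. As a rough, hypothetical sketch of what a DRQN-style trading agent for this setting could look like, the snippet below wires an LSTM-based Q-network to a discrete sell/hold/buy action space, uses the discount factor of 0.4 that the paper reports as optimal, and subtracts a proportional trading cost from the reward. Everything else (the state features, layer sizes, cost rate, and the helper functions `td_target` and `trading_reward`) is an assumption made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: an LSTM-based Q-network (DRQN-style) for a discrete
# carbon-allowance trading task. Not the authors' implementation; features,
# sizes, reward shaping, and cost_rate are assumptions. gamma = 0.4 follows the
# discount factor the paper reports as optimal for the GD and HB markets.
import torch
import torch.nn as nn


class DRQN(nn.Module):
    """Maps a sequence of market observations to Q-values over trading actions."""

    def __init__(self, n_features: int = 4, hidden_size: int = 64, n_actions: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_actions)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, seq_len, n_features), e.g. price, volume, daily return.
        out, hidden = self.lstm(obs_seq, hidden)
        q_values = self.head(out[:, -1, :])  # Q-values at the final time step
        return q_values, hidden


def td_target(reward, max_q_next, gamma=0.4, done=False):
    """One-step temporal-difference target used when training the Q-network."""
    return reward + (0.0 if done else gamma * max_q_next)


def trading_reward(position, price_change, trade_size, cost_rate=0.001):
    """Hypothetical reward: mark-to-market P&L minus a proportional trading cost."""
    return position * price_change - cost_rate * abs(trade_size)


if __name__ == "__main__":
    net = DRQN()
    obs = torch.randn(1, 30, 4)          # 30 days of 4 synthetic market features
    q, _ = net(obs)
    action = int(q.argmax(dim=1))        # 0 = sell, 1 = hold, 2 = buy
    print("Q-values:", q.detach().numpy(), "| chosen action:", action)
```

The recurrence is what distinguishes a DRQN from a plain DQN in this kind of setup: the LSTM hidden state lets the agent condition its Q-values on a history of market observations rather than a single snapshot, which is how the adaptability to market changes claimed in the abstract is typically realized.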
Source journal
Journal of Environmental Management (Environmental Science)
CiteScore: 13.70
Self-citation rate: 5.70%
Articles published: 2477
Review time: 84 days
Journal description: The Journal of Environmental Management is a journal for the publication of peer-reviewed, original research for all aspects of management and the managed use of the environment, both natural and man-made. Critical review articles are also welcome; submission of these is strongly encouraged.
Latest articles in this journal
The farmgate phosphorus balance as a measure to achieve river and lake water quality targets.
A conceptual framework to inform conservation status assessments of non-charismatic species.
A mouse in the spotlight: Response capacity to artificial light at night in a rodent pest species, the southern multimammate mouse (Mastomys coucha).
Application of advance oxidation processes for elimination of carbamazepine residues in soils.
Changes in soil inorganic carbon following vegetation restoration in the cropland on the Loess Plateau in China: A meta-analysis.