Multi-Agent Reinforcement Learning for Strategic Bidding in Power Markets

A. C. Tellidou and A. Bakirtzis, Senior Member, IEEE
{"title":"电力市场竞价策略的多智能体强化学习","authors":"A. C. Tellidou, A. Bakirtzis, Senior Member","doi":"10.1109/IS.2006.348454","DOIUrl":null,"url":null,"abstract":"In the agent-based simulation discussed in this paper, we study the dynamics of the power market, when suppliers act following a Q-learning based bidding strategy. Power suppliers aim to satisfy two objectives: the maximization of their profit and their utilization rate. To meet with success their goals, they need to acquire a complex behavior by learning through a continuous exploiting and exploring process. Reinforcement learning theory provides a formal framework, along with a family of learning methods. In this paper we use Q-learning algorithm, perhaps the most popular among temporal difference methods. Q-learning offers suppliers the ability to evaluate their actions and to retain the most profitable of them. A five bus power system is used for our case studies; our experiments are contacted with three supplier-agents in all cases but the last one where sine agents participate. The locational marginal pricing (LMP) system serves as the market clearing mechanism","PeriodicalId":116809,"journal":{"name":"2006 3rd International IEEE Conference Intelligent Systems","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":"{\"title\":\"Multi-Agent Reinforcement Learning for Strategic Bidding in Power Markets\",\"authors\":\"A. C. Tellidou, A. Bakirtzis, Senior Member\",\"doi\":\"10.1109/IS.2006.348454\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the agent-based simulation discussed in this paper, we study the dynamics of the power market, when suppliers act following a Q-learning based bidding strategy. Power suppliers aim to satisfy two objectives: the maximization of their profit and their utilization rate. To meet with success their goals, they need to acquire a complex behavior by learning through a continuous exploiting and exploring process. Reinforcement learning theory provides a formal framework, along with a family of learning methods. In this paper we use Q-learning algorithm, perhaps the most popular among temporal difference methods. Q-learning offers suppliers the ability to evaluate their actions and to retain the most profitable of them. A five bus power system is used for our case studies; our experiments are contacted with three supplier-agents in all cases but the last one where sine agents participate. 
The locational marginal pricing (LMP) system serves as the market clearing mechanism\",\"PeriodicalId\":116809,\"journal\":{\"name\":\"2006 3rd International IEEE Conference Intelligent Systems\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"22\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2006 3rd International IEEE Conference Intelligent Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IS.2006.348454\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 3rd International IEEE Conference Intelligent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IS.2006.348454","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 22

Abstract

In the agent-based simulation discussed in this paper, we study the dynamics of the power market when suppliers follow a Q-learning based bidding strategy. Power suppliers aim to satisfy two objectives: the maximization of their profit and of their utilization rate. To achieve these goals, they need to acquire complex behavior by learning through a continuous process of exploration and exploitation. Reinforcement learning theory provides a formal framework for this, along with a family of learning methods. In this paper we use the Q-learning algorithm, perhaps the most popular of the temporal difference methods. Q-learning offers suppliers the ability to evaluate their actions and to retain the most profitable of them. A five-bus power system is used for our case studies; the experiments are conducted with three supplier-agents in all cases except the last one, in which nine agents participate. The locational marginal pricing (LMP) system serves as the market clearing mechanism.
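The paper provides no code, but the bidding scheme the abstract describes, where each supplier uses Q-learning with an exploration/exploitation policy to pick bids and reinforces the most profitable ones, can be illustrated with a minimal sketch. The names below (QLearningSupplier, clear_market), the discrete bid-price grid, and the uniform-price clearing stub are assumptions made for illustration only; the authors' actual case studies use a five-bus network cleared by locational marginal pricing, which is not reproduced here.

```python
import random

# Minimal sketch (not the authors' implementation) of a tabular Q-learning
# supplier agent in a repeated bidding game. Assumptions: a single aggregate
# state, bids discretized to a fixed price grid, reward = profit from dispatch.

class QLearningSupplier:
    def __init__(self, bid_prices, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.bid_prices = bid_prices            # candidate bid prices ($/MWh)
        self.alpha = alpha                      # learning rate
        self.gamma = gamma                      # discount factor
        self.epsilon = epsilon                  # exploration probability
        self.q = {p: 0.0 for p in bid_prices}   # Q-value per bid (stateless variant)

    def choose_bid(self):
        # Epsilon-greedy: explore a random bid with probability epsilon,
        # otherwise exploit the bid with the highest current Q-value.
        if random.random() < self.epsilon:
            return random.choice(self.bid_prices)
        return max(self.bid_prices, key=lambda p: self.q[p])

    def update(self, bid, reward):
        # Standard Q-learning update; with a single state the bootstrap term
        # is the best Q-value over all bids.
        best_next = max(self.q.values())
        self.q[bid] += self.alpha * (reward + self.gamma * best_next - self.q[bid])


def clear_market(bids, demand_mw, capacity_mw):
    # Placeholder uniform-price clearing: cheapest bids are dispatched until
    # demand is met and every dispatched supplier is paid the marginal bid.
    # The paper instead clears a five-bus network with locational marginal prices.
    order = sorted(bids.items(), key=lambda kv: kv[1])
    dispatch, remaining, price = {}, demand_mw, 0.0
    for name, bid in order:
        if remaining <= 0:
            dispatch[name] = 0.0
            continue
        mw = min(capacity_mw, remaining)
        dispatch[name] = mw
        remaining -= mw
        price = bid
    return dispatch, price


if __name__ == "__main__":
    cost, capacity, demand = 20.0, 100.0, 250.0   # hypothetical numbers
    agents = {f"G{i}": QLearningSupplier([25, 30, 35, 40, 45]) for i in range(3)}
    for _ in range(5000):
        bids = {name: agent.choose_bid() for name, agent in agents.items()}
        dispatch, price = clear_market(bids, demand, capacity)
        for name, agent in agents.items():
            profit = (price - cost) * dispatch[name]
            agent.update(bids[name], profit)
    # Bid each agent has learned to prefer after repeated play.
    print({name: max(a.q, key=a.q.get) for name, a in agents.items()})
```

The epsilon-greedy rule is one simple way to realize the continuous exploration/exploitation process the abstract refers to; the paper's exact policy, reward shaping, and state definition may differ.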