Deep Reinforcement Learning-based Auto-tuning Algorithm for Cavity Filters

Daniel Poul Mtowe, Seongho Son, D. Ahn, Dong Min Kim
{"title":"基于深度强化学习的空腔滤波器自调谐算法","authors":"Daniel Poul Mtowe, Seongho Son, D. Ahn, Dong Min Kim","doi":"10.1109/PIERS59004.2023.10221259","DOIUrl":null,"url":null,"abstract":"Over the past few decades, the tuning of cavity filters has often been done by trial and error, using human experience and intuition, due to the imprecision of the design and manufacturing tolerances, which often results in detuning the filters and requiring costly post-production fine-tuning. Various techniques using optimization and machine learning have been investigated to automate the process. The superiority of a deep reinforcement learning approach, which can properly explore various possibilities and operate them in the desired way according to the well-defined reward, has motivated us to apply it to our problem. To meet the demand for an automatic tuning algorithm for cavity filters with high accuracy and efficiency, this study proposes an automatic tuning algorithm for cavity filters based on the deep reinforcement learning. For the efficiency of the tuning process, we limit the order of the elements to be tuned, inspired by the experience of experts based on domain knowledge. In addition, the coarse tuning process is performed first, followed by the fine tuning process to improve the tuning accuracy. The proposed method has demonstrated the ability of the deep reinforcement learning to learn the complex relationship between impedance values of equivalent circuit elements and S-parameters to effectively satisfy filter design requirements within an acceptable time range. The performance of the proposed automatic tuning algorithm has been evaluated through simulation experiments. The effectiveness of the proposed algorithm is demonstrated by the fact that it is able to tune a detuned filter from random starting point to meet its design requirements.","PeriodicalId":354610,"journal":{"name":"2023 Photonics & Electromagnetics Research Symposium (PIERS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Deep Reinforcement Learning-based Auto-tuning Algorithm for Cavity Filters\",\"authors\":\"Daniel Poul Mtowe, Seongho Son, D. Ahn, Dong Min Kim\",\"doi\":\"10.1109/PIERS59004.2023.10221259\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Over the past few decades, the tuning of cavity filters has often been done by trial and error, using human experience and intuition, due to the imprecision of the design and manufacturing tolerances, which often results in detuning the filters and requiring costly post-production fine-tuning. Various techniques using optimization and machine learning have been investigated to automate the process. The superiority of a deep reinforcement learning approach, which can properly explore various possibilities and operate them in the desired way according to the well-defined reward, has motivated us to apply it to our problem. To meet the demand for an automatic tuning algorithm for cavity filters with high accuracy and efficiency, this study proposes an automatic tuning algorithm for cavity filters based on the deep reinforcement learning. For the efficiency of the tuning process, we limit the order of the elements to be tuned, inspired by the experience of experts based on domain knowledge. In addition, the coarse tuning process is performed first, followed by the fine tuning process to improve the tuning accuracy. 
The proposed method has demonstrated the ability of the deep reinforcement learning to learn the complex relationship between impedance values of equivalent circuit elements and S-parameters to effectively satisfy filter design requirements within an acceptable time range. The performance of the proposed automatic tuning algorithm has been evaluated through simulation experiments. The effectiveness of the proposed algorithm is demonstrated by the fact that it is able to tune a detuned filter from random starting point to meet its design requirements.\",\"PeriodicalId\":354610,\"journal\":{\"name\":\"2023 Photonics & Electromagnetics Research Symposium (PIERS)\",\"volume\":\"35 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 Photonics & Electromagnetics Research Symposium (PIERS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PIERS59004.2023.10221259\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 Photonics & Electromagnetics Research Symposium (PIERS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PIERS59004.2023.10221259","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

For the past few decades, cavity filters have commonly been tuned by trial and error, relying on human experience and intuition, because design and manufacturing tolerances leave the assembled filters detuned and in need of costly post-production fine-tuning. Various optimization and machine-learning techniques have been investigated to automate this process. Deep reinforcement learning is attractive here because it can explore a wide range of tuning actions and steer them toward a well-defined reward, which motivated us to apply it to this problem. To meet the demand for an accurate and efficient automatic tuning algorithm for cavity filters, this study proposes an auto-tuning algorithm based on deep reinforcement learning. To keep the tuning process efficient, we restrict the order in which elements are tuned, following expert practice grounded in domain knowledge. In addition, a coarse-tuning stage is performed first, followed by a fine-tuning stage that improves tuning accuracy. The proposed method demonstrates that deep reinforcement learning can learn the complex relationship between the impedance values of equivalent-circuit elements and the S-parameters, and can satisfy the filter design requirements within an acceptable time. The performance of the proposed algorithm is evaluated through simulation experiments; its effectiveness is demonstrated by its ability to tune a detuned filter from a random starting point until the design requirements are met.
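The abstract gives no implementation details, but the formulation it describes (states derived from S-parameters, actions that adjust equivalent-circuit element values, a reward tied to the design requirements, and evaluation from random detuned starting points) can be sketched as a small reinforcement-learning environment. The sketch below is illustrative only: the `ToyCavityFilterEnv` class, its toy |S11| surrogate, the element count, step sizes, and the -20 dB return-loss target are assumptions, not the authors' model or settings.

```python
# Minimal sketch of the tuning task as an RL environment (Python / NumPy).
# The |S11| "simulator" is a toy stand-in for the equivalent-circuit evaluation
# described in the abstract; all names and numbers are illustrative assumptions.
import numpy as np

class ToyCavityFilterEnv:
    """State: sampled in-band |S11| (dB). Action: nudge one equivalent-circuit
    element up or down. Reward: margin to the in-band return-loss spec."""

    def __init__(self, n_elements=4, spec_s11_db=-20.0, step_size=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_elements = n_elements
        self.spec_s11_db = spec_s11_db              # in-band |S11| target (dB)
        self.step_size = step_size                  # per-action adjustment size
        self.freqs = np.linspace(-1.0, 1.0, 41)     # normalized in-band frequencies
        self.target = np.zeros(n_elements)          # hidden "well-tuned" settings
        self.reset()

    def _response_db(self):
        # Toy surrogate: detuning any element degrades the in-band match.
        # A real implementation would evaluate the equivalent circuit here.
        detune = np.abs(self.x - self.target).sum()
        s11 = -25.0 + 2.0 * detune + 2.0 * np.abs(self.freqs)
        return np.minimum(s11, 0.0)                 # keep |S11| <= 0 dB

    def reset(self):
        # Random detuned starting point, matching the evaluation setup in the abstract.
        self.x = self.rng.uniform(-3.0, 3.0, self.n_elements)
        return self._response_db()

    def step(self, action):
        # 2 * n_elements discrete actions: raise or lower one element's value.
        idx, direction = divmod(action, 2)
        self.x[idx] += self.step_size if direction == 0 else -self.step_size
        s11 = self._response_db()
        worst = s11.max()                           # worst in-band |S11| (dB)
        reward = self.spec_s11_db - worst           # positive once the spec is met
        done = worst <= self.spec_s11_db
        return s11, reward, done
```

Any standard value-based or actor-critic agent could be trained against such an environment; the abstract does not state which deep reinforcement learning algorithm the authors use.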
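The coarse-then-fine strategy mentioned in the abstract can likewise be sketched as two passes over the environment above: a first pass with a large step size and a relaxed target, then a second pass with a small step size and the full specification. The `tune` function, the `random_policy` placeholder, and the stage settings below are illustrative assumptions; in the paper the actions would come from the trained agent.

```python
# Illustrative coarse-then-fine tuning loop around the toy environment above
# (uses ToyCavityFilterEnv from the previous sketch).
import numpy as np

def tune(env, policy, stages=((1.0, -15.0), (0.1, -20.0)), max_steps_per_stage=300):
    """Run a coarse stage (large steps, relaxed spec), then a fine stage
    (small steps, full spec), continuing from the coarse result."""
    state = env.reset()
    rewards = []
    for step_size, spec_db in stages:
        env.step_size = step_size
        env.spec_s11_db = spec_db
        for _ in range(max_steps_per_stage):
            action = policy(state, env.n_elements)
            state, reward, done = env.step(action)
            rewards.append(reward)
            if done:                      # stage target met, move to the next stage
                break
    return rewards

def random_policy(state, n_elements):
    # Stand-in policy; a trained agent would map the S-parameter state to an action.
    return np.random.randint(2 * n_elements)

if __name__ == "__main__":
    history = tune(ToyCavityFilterEnv(), random_policy)
    print(f"steps taken: {len(history)}, final reward margin: {history[-1]:.2f} dB")
```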