{"title":"RL4CEP: reinforcement learning for updating CEP rules","authors":"Afef Mdhaffar, Ghassen Baklouti, Yassine Rebai, Mohamed Jmaiel, Bernd Freisleben","doi":"10.1007/s40747-024-01742-3","DOIUrl":null,"url":null,"abstract":"<p>This paper presents RL4CEP, a reinforcement learning (RL) approach to dynamically update complex event processing (CEP) rules. RL4CEP uses Double Deep Q-Networks to update the threshold values used by CEP rules. It is implemented using Apache Flink as a CEP engine and Apache Kafka for message distribution. RL4CEP is a generic approach for scenarios in which CEP rules need to be updated dynamically. In this paper, we use RL4CEP in a financial trading use case. Our experimental results based on three financial trading rules and eight financial datasets demonstrate the merits of RL4CEP in improving the overall profit, when compared to baseline and state-of-the-art approaches, with a reasonable consumption of resources, i.e., RAM and CPU. Finally, our experiments indicate that RL4CEP is executed quite fast compared to traditional CEP engines processing static rules.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"204 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01742-3","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
This paper presents RL4CEP, a reinforcement learning (RL) approach to dynamically update complex event processing (CEP) rules. RL4CEP uses Double Deep Q-Networks to update the threshold values used by CEP rules. It is implemented using Apache Flink as a CEP engine and Apache Kafka for message distribution. RL4CEP is a generic approach for scenarios in which CEP rules need to be updated dynamically. In this paper, we use RL4CEP in a financial trading use case. Our experimental results, based on three financial trading rules and eight financial datasets, demonstrate the merits of RL4CEP in improving overall profit compared to baseline and state-of-the-art approaches, with reasonable resource consumption in terms of RAM and CPU. Finally, our experiments indicate that RL4CEP runs quite fast compared to traditional CEP engines processing static rules.
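To make the core idea concrete, the sketch below shows a minimal Double DQN agent in Python/PyTorch whose discrete actions lower, keep, or raise a single CEP rule threshold, with the reward standing in for a profit signal. This is an illustrative assumption of how such an agent could look, not the authors' implementation; the class and method names (ThresholdAgent, act, learn), the network sizes, and the three-action scheme are all made up for this example, and the integration with Apache Flink and Kafka is omitted.

```python
# Hedged sketch: a minimal Double DQN that nudges one CEP rule threshold.
# Names and hyperparameters are illustrative assumptions, not the paper's code.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNet(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class ThresholdAgent:
    """Double DQN over three actions: lower, keep, or raise the threshold."""

    ACTIONS = (-1.0, 0.0, 1.0)  # direction of the threshold adjustment

    def __init__(self, state_dim: int, step: float = 0.5, gamma: float = 0.99):
        self.online = QNet(state_dim, len(self.ACTIONS))
        self.target = QNet(state_dim, len(self.ACTIONS))
        self.target.load_state_dict(self.online.state_dict())
        self.opt = optim.Adam(self.online.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=10_000)
        self.step_size, self.gamma, self.eps = step, gamma, 0.1

    def act(self, state, threshold):
        # Epsilon-greedy action selection; returns the action index and
        # the adjusted threshold to be pushed to the CEP rule.
        if random.random() < self.eps:
            a = random.randrange(len(self.ACTIONS))
        else:
            with torch.no_grad():
                a = int(self.online(torch.tensor(state).float()).argmax())
        return a, threshold + self.ACTIONS[a] * self.step_size

    def remember(self, s, a, r, s2):
        self.buffer.append((s, a, r, s2))

    def learn(self, batch_size: int = 64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2 = map(lambda x: torch.tensor(x).float(), zip(*batch))
        a = a.long()
        # Double DQN target: the online net selects the next action,
        # the target net evaluates it.
        with torch.no_grad():
            best = self.online(s2).argmax(dim=1, keepdim=True)
            target_q = r + self.gamma * self.target(s2).gather(1, best).squeeze(1)
        q = self.online(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(q, target_q)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target.load_state_dict(self.online.state_dict())
```

In such a setup, each decision step would read a small feature vector from the event stream (state), apply the adjusted threshold in the CEP rule, observe the resulting profit (reward), store the transition with remember, and call learn; the actual state, reward, and update schedule used by RL4CEP are described in the paper itself.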
Journal description:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.