Pub Date: 2024-03-15 · DOI: 10.1109/TETCI.2024.3398024
Thanveer Shaik;Xiaohui Tao;Haoran Xie;Lin Li;Jianming Yong;Yuefeng Li
Reinforcement learning (RL) is renowned for its proficiency in modeling sequential tasks and adaptively learning latent data patterns. Deep learning models have been extensively explored and adopted in regression and classification tasks. However, deep learning has limitations, such as the assumption of equally spaced and ordered data, and the inability to incorporate graph structure into time-series prediction. Graph Neural Networks (GNNs) can overcome these challenges by effectively capturing the temporal dependencies in time-series data. In this study, we propose a novel approach for predicting time-series data using a GNN augmented with Reinforcement Learning (GraphRL) for monitoring. GNNs explicitly integrate the graph structure of the data into the model, enabling them to naturally capture temporal dependencies. This approach facilitates more accurate predictions in complex temporal structures, as encountered in the healthcare, traffic, and weather forecasting domains. We further enhance our GraphRL model's performance through fine-tuning with a Bayesian optimization technique. The proposed framework surpasses baseline models in time-series forecasting and monitoring. This study's contributions include introducing a novel GraphRL framework for time-series prediction and demonstrating GNNs' efficacy compared to traditional deep learning models, such as Recurrent Neural Networks (RNN) and Long Short-Term Memory networks (LSTM). Overall, this study underscores the potential of GraphRL in yielding accurate and efficient predictions within dynamic RL environments.
Title: Graph-Enabled Reinforcement Learning for Time Series Forecasting With Adaptive Intelligence (IEEE Transactions on Emerging Topics in Computational Intelligence)
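The abstract above describes forecasting over a graph of related time series, where each node's prediction draws on its neighbors. As a purely illustrative sketch (not the authors' GraphRL implementation — the adjacency matrix, mixing weight, and data below are hypothetical), one graph message-passing step might blend a node's latest value with the mean of its neighbors':

```python
# Hypothetical sketch of one graph message-passing step for forecasting:
# each sensor node's next-step estimate mixes its own recent value with
# the mean of its graph neighbors' values. Not the paper's GraphRL model.

def message_passing_forecast(values, adjacency, self_weight=0.6):
    """One propagation step: weighted mix of each node's own value
    and the mean of its neighbors' values (per the adjacency matrix)."""
    n = len(values)
    forecasts = []
    for i in range(n):
        neighbors = [values[j] for j in range(n) if adjacency[i][j]]
        neighbor_mean = sum(neighbors) / len(neighbors) if neighbors else values[i]
        forecasts.append(self_weight * values[i] + (1 - self_weight) * neighbor_mean)
    return forecasts

# Three sensors on a line graph: 0 - 1 - 2
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
print(message_passing_forecast([1.0, 2.0, 3.0], adj))
```

A trained GNN would learn the mixing weights (and an RL agent, per the abstract, would use such predictions for monitoring decisions); this fixed-weight version only shows the neighborhood-aggregation structure.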
Pub Date: 2024-03-14 · DOI: 10.1109/TETCI.2024.3369858
Bo Peng;Jia Zhang;Zhe Zhang;Qingming Huang;Liqun Chen;Jianjun Lei
Enhancing low-light images in an unsupervised manner has become a popular topic due to the challenge of obtaining paired real-world low/normal-light images. Driven by the massive availability of normal-light images, learning a low-light image enhancement network from unpaired data is more practical and valuable. This paper presents an unsupervised low-light image enhancement method (DeULLE) via luminance mask and luminance-independent representation decoupling based on unpaired data. Specifically, by estimating a luminance mask from the low-light image, a luminance mask-guided low-light image generation (LMLIG) module is presented to darken the reference normal-light image. In addition, a luminance-independent representation-based low-light image enhancement (LRLIE) module is developed to enhance the low-light image by learning a luminance-independent representation and incorporating the luminance cue of the reference normal-light image. With the LMLIG and LRLIE modules, a bidirectional mapping-based cycle supervision (BMCS) is constructed to facilitate the decoupling of the luminance mask and luminance-independent representation, which further promotes unsupervised low-light enhancement learning with unpaired data. Comprehensive experiments on various challenging benchmark datasets demonstrate that the proposed DeULLE exhibits superior performance.
Title: Unsupervised Low-Light Image Enhancement via Luminance Mask and Luminance-Independent Representation Decoupling
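The abstract above centers on deriving a luminance mask from a low-light image and using it to darken a reference normal-light image. A minimal, hypothetical sketch of that idea (not the DeULLE network — the tiny images and the use of Rec. 601 luma weights as the mask are simplifications for illustration):

```python
# Hypothetical sketch of the luminance-mask idea behind LMLIG, not DeULLE
# itself: estimate a per-pixel luminance mask from a low-light image, then
# darken a reference normal-light image by scaling it with that mask.

def luminance_mask(image):
    """Per-pixel luma in [0, 1] from an RGB image (channel values in [0, 1]),
    using Rec. 601 luma weights as a stand-in for a learned mask estimator."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in image]

def darken(normal_image, mask):
    """Scale each RGB pixel of a normal-light image by its mask value,
    producing a pseudo low-light version of the reference image."""
    return [[tuple(c * m for c in px) for px, m in zip(row, mrow)]
            for row, mrow in zip(normal_image, mask)]

low = [[(0.1, 0.1, 0.1), (0.2, 0.2, 0.2)]]     # dim 1x2 RGB image
normal = [[(1.0, 1.0, 1.0), (1.0, 1.0, 1.0)]]  # bright reference image
mask = luminance_mask(low)
print(darken(normal, mask))
```

In the paper the mask and the luminance-independent representation are learned jointly under cycle supervision; this sketch only shows the darkening direction of that cycle.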
Pub Date: 2024-03-14 · DOI: 10.1109/TETCI.2024.3369866
Qing Song;Zilong Jia;Wenhe Jia;Wenyi Zhao;Mengjie Hu;Chun Liu
In complex long-term news videos, the fundamental component is the news excerpt, which consists of many studio and interview screens. Spotting and identifying the correct news excerpt in such a complex long-term video is a challenging task. Apart from the inherent temporal semantics and the complex interactions among generic events, the varied richness of semantics within the text and visual modalities further complicates matters. In this paper, we delve into the nuanced realm of video temporal understanding, examining it through a multimodal and multitask perspective. Our research involves presenting a more fine-grained challenge, which we refer to as M