
Latest Publications: IEEE Transactions on Emerging Topics in Computational Intelligence

Adversarial Examples Detection With Bayesian Neural Network
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-18 | DOI: 10.1109/TETCI.2024.3372383
Yao Li;Tongyi Tang;Cho-Jui Hsieh;Thomas C. M. Lee
In this paper, we propose a new framework to detect adversarial examples, motivated by the observation that random components can improve the smoothness of predictors and make it easier to simulate the output distribution of a deep neural network. Building on this observation, we propose a novel Bayesian adversarial example detector, abbreviated BATer, to improve the performance of adversarial example detection. Specifically, we study the distributional difference in hidden-layer output between natural and adversarial examples, and propose to use the randomness of a Bayesian neural network to simulate the hidden-layer output distribution and to leverage the dispersion of that distribution to detect adversarial examples. The advantage of a Bayesian neural network is that its output is stochastic, whereas a deep neural network without random components lacks this property. Empirical results on several benchmark datasets against popular attacks show that the proposed BATer outperforms state-of-the-art detectors in adversarial example detection.
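As an illustration of the dispersion idea, here is a minimal sketch in Python, assuming Monte Carlo dropout as a cheap stand-in for a true Bayesian neural network and a simple threshold calibrated on clean data; the network, scoring rule, and threshold are illustrative, not the paper's implementation.

```python
# Dispersion-based detection in the spirit of BATer (a sketch, not the paper's code).
# Assumptions: MC dropout approximates a Bayesian NN; std of hidden activations
# across stochastic passes serves as the dispersion score.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, d_in=32, d_hid=64, n_cls=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(), nn.Dropout(0.3))
        self.head = nn.Linear(d_hid, n_cls)

    def forward(self, x):
        h = self.body(x)          # hidden-layer output whose distribution we probe
        return h, self.head(h)

def dispersion_score(model, x, n_samples=20):
    """Std of hidden activations across stochastic forward passes."""
    model.train()  # keep dropout active so each pass acts like a posterior sample
    with torch.no_grad():
        hs = torch.stack([model(x)[0] for _ in range(n_samples)])  # (T, B, d_hid)
    return hs.std(dim=0).mean(dim=-1)  # one dispersion value per example

model = SmallNet()
x_nat = torch.randn(8, 32)                 # placeholder "natural" inputs
x_adv = x_nat + 0.5 * torch.randn(8, 32)   # placeholder perturbed inputs
score_nat = dispersion_score(model, x_nat)
score_adv = dispersion_score(model, x_adv)
threshold = score_nat.mean() + 2 * score_nat.std()   # calibrated on clean data only
print("flagged as adversarial:", (score_adv > threshold).tolist())
```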
Citations: 0
Local Dimming for Video Based on an Improved Surrogate Model Assisted Evolutionary Algorithm
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-18 | DOI: 10.1109/TETCI.2024.3370033
Yahui Cao;Tao Zhang;Xin Zhao;Yuzheng Yan;Shuxin Cui
Compared with traditional liquid crystal display (LCD) systems, local dimming systems can achieve higher display quality with lower power consumption. By treating local dimming of a static image as an optimization problem and solving it with an evolutionary algorithm, an optimal backlight matrix can be obtained. However, local dimming based on an evolutionary algorithm is no longer applicable to video sequences, because the computation is very time-consuming. This paper proposes a local dimming algorithm based on an improved surrogate-model-assisted evolutionary algorithm (ISAEA-LD). In this algorithm, a surrogate-model-assisted evolutionary algorithm is applied to solve the local dimming problem for video sequences, with the surrogate model reducing the cost of individual fitness evaluation. First, a surrogate model based on a convolutional neural network is adopted to improve the accuracy of individual fitness evaluation. Second, the algorithm introduces a backlight update strategy based on the content correlation between adjacent frames of the video sequence, and a model transfer strategy based on transfer learning, to improve efficiency. Experimental results show that the proposed ISAEA-LD algorithm achieves better visual quality and higher algorithm efficiency.
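To make the surrogate-assisted loop concrete, here is a minimal sketch assuming a toy quadratic stand-in for the display-quality objective and a linear least-squares surrogate in place of the paper's CNN; only the candidates the surrogate ranks best receive the expensive true evaluation.

```python
# Surrogate-assisted evolutionary loop in the spirit of ISAEA-LD (a sketch).
# Assumptions: the "true" fitness is a toy stand-in for display-quality
# evaluation; a linear model stands in for the CNN surrogate.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16                                   # number of backlight zones (illustrative)
target = rng.uniform(0.2, 1.0, DIM)       # toy "ideal" backlight levels

def true_fitness(pop):                     # expensive evaluation (lower is better)
    return ((pop - target) ** 2).sum(axis=1)

pop = rng.uniform(0, 1, (20, DIM))
fit = true_fitness(pop)
for gen in range(30):
    # generate many cheap candidates by Gaussian mutation
    cands = pop[rng.integers(0, len(pop), 100)] + 0.1 * rng.normal(size=(100, DIM))
    cands = np.clip(cands, 0, 1)
    # surrogate: fit a linear model on evaluated data, then pre-screen candidates
    X = np.hstack([pop, np.ones((len(pop), 1))])
    w, *_ = np.linalg.lstsq(X, fit, rcond=None)
    pred = np.hstack([cands, np.ones((len(cands), 1))]) @ w
    elite = cands[np.argsort(pred)[:20]]   # only the most promising candidates...
    elite_fit = true_fitness(elite)        # ...get the expensive true evaluation
    merged = np.vstack([pop, elite])
    merged_fit = np.concatenate([fit, elite_fit])
    keep = np.argsort(merged_fit)[:20]     # (mu + lambda) survivor selection
    pop, fit = merged[keep], merged_fit[keep]
print("best fitness:", fit.min())
```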
Citations: 0
Reinforcement Learning and Transformer for Fast Magnetic Resonance Imaging Scan
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 Mathematics | Pub Date: 2024-03-18 | DOI: 10.1109/TETCI.2024.3358180
Yiming Liu;Yanwei Pang;Ruiqi Jin;Yonghong Hou;Xuelong Li
A major drawback of Magnetic Resonance Imaging (MRI) is the long scan time needed to acquire complete K-space matrices using phase encoding. This paper proposes a transformer-based deep Reinforcement Learning (RL) framework (called TITLE) that reduces scan time by sequentially selecting partial phases in real time, so that a slice can be accurately reconstructed from the resulting slice-specific incomplete K-space matrix. As a deep learning based, slice-specific method, TITLE has the following characteristics and merits: (1) It is real-time, because the decision of which phase to encode next can be made within the period between the time an echo signal is obtained and the time the next 180° RF pulse is activated. (2) It exploits the powerful feature-representation ability of the transformer, a self-attention based neural network, to predict phases within a deep reinforcement learning framework. (3) Both the history of selected phases (the phase-indicator vector) and the corresponding undersampled image of the slice being scanned are used by the transformer for feature extraction. Experimental results on the fastMRI dataset demonstrate that the proposed method is 150 times faster than the state-of-the-art reinforcement learning based method and outperforms state-of-the-art deep learning based methods in reconstruction accuracy. The source code is available.
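The sequential-acquisition loop can be sketched as follows, assuming a toy image, zero-filled inverse-FFT reconstruction, and a random policy as a placeholder for the transformer-based RL agent; only the select-acquire-reconstruct structure is meant to mirror the paper.

```python
# Sequential phase (k-space row) selection in the spirit of TITLE (a sketch).
# Assumptions: toy image; random policy in place of the transformer agent;
# zero-filled inverse FFT in place of the learned reconstruction.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))             # stand-in for a slice
kspace = np.fft.fft2(img)
acquired = np.zeros(64, dtype=bool)    # phase-indicator vector: rows sampled so far

def policy(acquired, recon):
    """Placeholder for the transformer policy: pick any unsampled row."""
    candidates = np.flatnonzero(~acquired)
    return rng.choice(candidates)

recon = np.zeros_like(img)
for step in range(16):                 # acquire 16 of 64 phase encodes
    row = policy(acquired, recon)      # decided between echo and next RF pulse
    acquired[row] = True
    masked = np.where(acquired[:, None], kspace, 0)  # slice-specific incomplete k-space
    recon = np.abs(np.fft.ifft2(masked))             # zero-filled reconstruction
print("sampled rows:", int(acquired.sum()),
      "MSE:", float(((recon - img) ** 2).mean()))
```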
Citations: 0
A Survey of Deep Learning Video Super-Resolution
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-17 | DOI: 10.1109/TETCI.2024.3398015
Arbind Agrahari Baniya;Tsz-Kwan Lee;Peter W. Eklund;Sunil Aryal
Video super-resolution (VSR) is a prominent research topic in low-level computer vision, where deep learning technologies have played a significant role. The rapid progress in deep learning and its applications in VSR has led to a proliferation of tools and techniques in the literature. However, the usage of these methods is often not adequately explained, and decisions are primarily driven by quantitative improvements. Given the significance of VSR's potential influence across multiple domains, it is imperative to conduct a comprehensive analysis of the elements and deep learning methodologies employed in VSR research. This methodical analysis will facilitate the informed development of models tailored to specific application needs. In this paper, we present an overarching overview of deep learning-based video super-resolution models, investigating each component and discussing its implications. Furthermore, we provide a synopsis of key components and technologies employed by state-of-the-art and earlier VSR models. By elucidating the underlying methodologies and categorising them systematically, we identified trends, requirements, and challenges in the domain. As a first-of-its-kind survey of deep learning-based VSR models, this work also establishes a multi-level taxonomy to guide current and future VSR research, enhancing the maturation and interpretation of VSR practices for various practical applications.
Citations: 0
Intensive Class Imbalance Learning in Drifting Data Streams
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-17 | DOI: 10.1109/TETCI.2024.3399657
Muhammad Usman;Huanhuan Chen
Streaming data analysis faces two primary challenges: concept drift and class imbalance. The co-occurrence of virtual drift and class imbalance is a common real-world scenario requiring dedicated solutions. This paper presents Intensive Class Imbalance Learning (ICIL), a novel supervised classification method for virtually drifting data streams. ICIL detects virtual drift through a feature-sensitive change detection method. It calibrates the data over time to resolve within-class imbalance, class overlap, and small-sample-size problems. A weighted voting ensemble is proposed for enhanced performance, wherein the weights are continually updated based on the recent performance of the member classifiers. Experiments are conducted on 14 synthetic and real-world data streams to demonstrate the efficacy of the proposed method. A comparative analysis against 11 state-of-the-art methods shows that the proposed method outperforms them on 9 of the 14 data streams on the G-mean metric.
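The performance-weighted ensemble component can be sketched as below, assuming SGD base learners, a synthetic imbalanced stream with a virtual drift halfway through, and an exponential-decay weight update; ICIL's drift detector and data-calibration steps are omitted.

```python
# Performance-weighted voting ensemble for a drifting stream (a sketch of the
# ensemble component only, not ICIL itself). Assumptions: SGD base learners;
# synthetic 1:4 imbalanced stream; exponential weight decay by recent accuracy.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])
members = [SGDClassifier(loss="log_loss", random_state=s) for s in range(3)]
weights = np.ones(3)
correct = total = 0
for t in range(200):                        # stream of small batches
    shift = 2.0 if t < 100 else -2.0        # virtual drift at t = 100
    y = (rng.random(16) < 0.2).astype(int)  # roughly 1:4 class imbalance
    X = rng.normal(loc=y[:, None] * shift, size=(16, 4))
    if t > 0:
        # weighted soft vote for prediction on the incoming batch
        proba = sum(w * m.predict_proba(X) for m, w in zip(members, weights))
        pred = classes[np.argmax(proba, axis=1)]
        correct += int((pred == y).sum())
        total += len(y)
        # update weights from each member's recent accuracy
        acc = np.array([(m.predict(X) == y).mean() for m in members])
        weights = 0.9 * weights + 0.1 * acc
    for m in members:                       # then train on the batch (prequential)
        m.partial_fit(X, y, classes=classes)
print(f"prequential accuracy: {correct / total:.3f}, weights: {np.round(weights, 3)}")
```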
Citations: 0
Graph-Enabled Reinforcement Learning for Time Series Forecasting With Adaptive Intelligence
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-15 | DOI: 10.1109/TETCI.2024.3398024
Thanveer Shaik;Xiaohui Tao;Haoran Xie;Lin Li;Jianming Yong;Yuefeng Li
Reinforcement learning (RL) is renowned for its proficiency in modeling sequential tasks and adaptively learning latent data patterns. Deep learning models have been extensively explored and adopted in regression and classification tasks. However, deep learning has limitations, such as the assumption of equally spaced and ordered data and the inability to incorporate graph structure into time-series prediction. Graph Neural Networks (GNNs) can overcome these challenges by effectively capturing the temporal dependencies in time-series data. In this study, we propose a novel approach for predicting time-series data using GNNs, augmented with Reinforcement Learning (GraphRL) for monitoring. GNNs explicitly integrate the graph structure of the data into the model, enabling them to capture temporal dependencies naturally. This approach facilitates more accurate predictions in complex temporal structures, such as those encountered in the healthcare, traffic, and weather-forecasting domains. We further enhance the GraphRL model's performance through fine-tuning with a Bayesian optimization technique. The proposed framework surpasses baseline models in time-series forecasting and monitoring. The contributions of this study include introducing a novel GraphRL framework for time-series prediction and demonstrating the efficacy of GNNs compared to traditional deep learning models, such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs). Overall, this study underscores the potential of GraphRL to yield accurate and efficient predictions in dynamic RL environments.
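A minimal sketch of the graph-based forecasting component follows, assuming a toy five-node graph, a single GCN-style layer with symmetric normalization, and synthetic diffusion dynamics; the RL monitoring loop and Bayesian fine-tuning are left out.

```python
# Graph-structured one-step forecasting in the spirit of GraphRL's GNN
# component (a sketch). Assumptions: toy 5-node ring-like graph, one
# GCN-style layer, synthetic diffusion data as the "time series".
import torch

A = torch.tensor([[0, 1, 0, 0, 1],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [1, 0, 0, 1, 0]], dtype=torch.float)
A_hat = A + torch.eye(5)                       # add self-loops
d = A_hat.sum(1)
A_norm = A_hat / torch.sqrt(d[:, None] * d[None, :])   # symmetric normalization

W = torch.randn(1, 1, requires_grad=True)      # the single learnable weight
opt = torch.optim.Adam([W], lr=0.05)
series = [torch.randn(5, 1)]
for _ in range(50):                            # synthetic dynamics: graph diffusion
    series.append(A_norm @ series[-1])
for step in range(200):                        # train: predict x_{t+1} from x_t
    loss = 0.0
    for t in range(50):
        pred = A_norm @ series[t] @ W          # one graph-convolution layer
        loss = loss + ((pred - series[t + 1]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", float(loss))              # W should converge near 1.0
```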
Citations: 0
Unsupervised Low-Light Image Enhancement via Luminance Mask and Luminance-Independent Representation Decoupling
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-14 | DOI: 10.1109/TETCI.2024.3369858
Bo Peng;Jia Zhang;Zhe Zhang;Qingming Huang;Liqun Chen;Jianjun Lei
Enhancing low-light images in an unsupervised manner has become a popular topic due to the difficulty of obtaining paired real-world low/normal-light images. Driven by the massive number of available normal-light images, learning a low-light image enhancement network from unpaired data is more practical and valuable. This paper presents an unsupervised low-light image enhancement method (DeULLE) based on luminance-mask and luminance-independent representation decoupling with unpaired data. Specifically, by estimating a luminance mask from a low-light image, a luminance mask-guided low-light image generation (LMLIG) module is presented to darken a reference normal-light image. In addition, a luminance-independent representation-based low-light image enhancement (LRLIE) module is developed to enhance low-light images by learning a luminance-independent representation and incorporating the luminance cue of the reference normal-light image. With the LMLIG and LRLIE modules, a bidirectional mapping-based cycle supervision (BMCS) is constructed to facilitate the decoupling of the luminance mask and the luminance-independent representation, which further promotes unsupervised low-light enhancement learning with unpaired data. Comprehensive experiments on various challenging benchmark datasets demonstrate that the proposed DeULLE exhibits superior performance.
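The luminance-mask idea can be illustrated as below, assuming a smoothed max-over-channels estimate as the mask and plain division as a stand-in for the learned LRLIE enhancement; this Retinex-style sketch only mirrors the direction of the LMLIG/LRLIE mappings, not the trained networks.

```python
# Luminance-mask darkening/enhancement in the spirit of DeULLE (a sketch).
# Assumptions: the mask is a Gaussian-smoothed max-over-channels luminance
# estimate; division by the mask stands in for the learned enhancement.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
normal = rng.uniform(0.3, 1.0, (64, 64, 3))   # reference normal-light image
low = normal * rng.uniform(0.05, 0.25)        # synthetic low-light counterpart

def luminance_mask(img, sigma=4.0):
    lum = img.max(axis=2)                     # per-pixel luminance estimate
    return np.clip(gaussian_filter(lum, sigma), 1e-3, 1.0)[..., None]

mask = luminance_mask(low)
darkened = normal * mask                      # LMLIG direction: darken reference
enhanced = np.clip(low / mask, 0, 1)          # LRLIE stand-in: undo luminance
print("mean brightness low/enhanced:",
      low.mean().round(3), enhanced.mean().round(3))
```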
Citations: 0
News-MESI: A Dataset for Multimodal News Excerpt Segmentation and Identification
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-14 | DOI: 10.1109/TETCI.2024.3369866
Qing Song;Zilong Jia;Wenhe Jia;Wenyi Zhao;Mengjie Hu;Chun Liu
In complex long-form news videos, the fundamental component is the news excerpt, which consists of many studio and interview shots. Spotting and identifying the correct news excerpt in such a complex long-form video is a challenging task. Apart from the inherent temporal semantics and the complex interactions among generic events, the varied richness of semantics within the textual and visual modalities further complicates matters. In this paper, we delve into the nuanced realm of video temporal understanding, examining it through a multimodal and multitask perspective. We present a fine-grained challenge, which we refer to as Multimodal News Excerpt Segmentation and Identification: the objective is to segment news videos into individual frame-level excerpts while accurately assigning elaborate tags to each segment by utilizing multimodal semantics. As no multimodal fine-grained temporal segmentation dataset currently exists, we set up a new benchmark called News-MESI to support our research. News-MESI comprises over 150 high-quality news videos sourced from digital media, approximately 150 hours in total and encompassing more than 2000 news excerpts. Annotated with frame-level excerpt boundaries and an elaborate categorization hierarchy, this collection offers a valuable opportunity for multimodal semantic understanding of these distinctive videos. We also present a novel algorithm employing coarse-to-fine multimodal fusion and hierarchical classification to address this problem. Extensive experiments on our benchmark show how news content evolves over time. Further analysis shows that multimodal solutions are significantly superior to single-modal solutions.
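Frame-level excerpt segmentation can be sketched as follows, assuming synthetic frame embeddings, a cosine-similarity change-point rule for boundaries, and nearest-centroid tagging in place of the paper's coarse-to-fine fusion and hierarchical classifier.

```python
# Frame-level excerpt segmentation in the spirit of the News-MESI task
# (a sketch). Assumptions: synthetic frame embeddings; boundary where
# adjacent-frame cosine similarity drops; nearest-centroid "excerpt id".
import numpy as np

rng = np.random.default_rng(0)
# three ground-truth excerpts, each a cluster of similar frame embeddings
centers = rng.normal(size=(3, 16))
frames = np.vstack([c + 0.1 * rng.normal(size=(40, 16)) for c in centers])

def cosine(a, b):
    return (a * b).sum(-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

sim = cosine(frames[:-1], frames[1:])
boundaries = np.flatnonzero(sim < 0.8) + 1            # change-point rule
segments = np.split(np.arange(len(frames)), boundaries)
for seg in segments:
    rep = frames[seg].mean(axis=0)                    # segment representative
    tag = int(np.argmax(cosine(centers, rep[None])))  # nearest-centroid tag
    print(f"frames {seg[0]:3d}-{seg[-1]:3d} -> excerpt tag {tag}")
```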
Citations: 0
PATReId: Pose Apprise Transformer Network for Vehicle Re-Identification
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-14 | DOI: 10.1109/TETCI.2024.3372391
Rishi Kishore;Nazia Aslam;Maheshkumar H. Kolekar
Vehicle re-identification is the task of identifying a vehicle across multiple non-overlapping cameras. Using licence plates for re-identification has limitations, because a licence plate may not be visible owing to viewpoint differences. Moreover, high intra-class variability (due to shape and appearance differing across viewing angles) and small inter-class variability (due to the similarity in appearance and shape of vehicles from different manufacturers) make the task more challenging. To address these issues, we propose PATReId, a novel Pose Apprise Transformer network for vehicle re-identification. The network is two-fold: 1) it generates the poses of vehicles using heatmaps, keypoints, and segments, eliminating viewpoint dependencies, and 2) it jointly classifies the attributes of the vehicles (colour and type) while performing ReID, utilizing multitask learning through a two-stream neural network integrated with the pose. Vision transformer and ResNet50 networks are employed to create the two-stream neural network. Extensive experiments on the Veri776, VehicleID, and Veri Wild datasets demonstrate the accuracy and efficacy of the proposed PATReId framework.
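A minimal sketch of the two-stream multitask design follows, assuming tiny convolutional encoders in place of the ViT and ResNet50 streams and a keypoint-heatmap tensor as the pose input; the head sizes and plain cross-entropy losses are illustrative.

```python
# Two-stream, multitask ReID head in the spirit of PATReId (a sketch).
# Assumptions: tiny conv encoders stand in for the ViT/ResNet50 streams;
# the pose input is a single-channel keypoint heatmap.
import torch
import torch.nn as nn

class TwoStreamReId(nn.Module):
    def __init__(self, n_ids=100, n_colors=10, n_types=6):
        super().__init__()
        def enc(c_in):  # stand-in encoder producing a 16-d feature vector
            return nn.Sequential(
                nn.Conv2d(c_in, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.appearance = enc(3)              # image stream
        self.pose = enc(1)                    # pose-heatmap stream
        self.id_head = nn.Linear(32, n_ids)   # ReID over fused features
        self.color_head = nn.Linear(32, n_colors)  # jointly classified attributes
        self.type_head = nn.Linear(32, n_types)

    def forward(self, img, pose_map):
        f = torch.cat([self.appearance(img), self.pose(pose_map)], dim=1)
        return self.id_head(f), self.color_head(f), self.type_head(f)

model = TwoStreamReId()
img, pose = torch.randn(4, 3, 64, 64), torch.randn(4, 1, 64, 64)
ids = torch.randint(0, 100, (4,))
colors, types = torch.randint(0, 10, (4,)), torch.randint(0, 6, (4,))
logit_id, logit_c, logit_t = model(img, pose)
loss = sum(nn.functional.cross_entropy(l, y) for l, y in
           [(logit_id, ids), (logit_c, colors), (logit_t, types)])  # multitask sum
loss.backward()
print("multitask loss:", float(loss))
```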
Citations: 0
Diversity-Induced Bipartite Graph Fusion for Multiview Graph Clustering
IF 5.3 | CAS Tier 3 (Computer Science) | Q1 Mathematics | Pub Date: 2024-03-14 | DOI: 10.1109/TETCI.2024.3369316
Weiqing Yan;Xinying Zhao;Guanghui Yue;Jinlai Ren;Jindong Xu;Zhaowei Liu;Chang Tang
Multi-view graph clustering groups similar objects into the same category by learning the relationships among samples. To improve clustering efficiency, bipartite graph learning avoids building a graph over all samples and instead establishes a graph between the data points and a few anchors, so it has become an important research topic. However, most bipartite graph-based multi-view clustering approaches focus on learning information that is consistent across views and ignore the diversity information of each view, which limits clustering precision. To address this issue, diversity-induced bipartite graph fusion for multiview graph clustering (DiBGF-MGC) is proposed to consider the consistency and diversity of multiple views simultaneously. In our method, the diversity constraint is enforced by minimizing the diversity of each view and by minimizing the inconsistency of diversity across views. The former ensures that the diversity information is sparse, and the latter ensures that the diversity information is private to each view. Specifically, we separate each bipartite graph into a consistent part and a divergent part in order to remove the diversity parts while preserving the consistency among views. The consistent parts are used to learn the consensus bipartite graph, which yields a clear clustering structure because the diversity part has been eliminated from the original bipartite graph. The diversity part is formulated via an intra-view constraint and an inter-view inconsistency constraint, which better distinguish the diversity part from the original bipartite graph. Consistency learning and diversity learning are improved iteratively, each leveraging the results of the other. Experiments show that the proposed DiBGF-MGC method obtains better clustering results than state-of-the-art methods on several benchmark datasets.
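The decomposition into consistent and divergent parts can be sketched as below, assuming Gaussian-kernel anchor graphs and an element-wise minimum as a crude consensus in place of the paper's optimization; the residual serves as each view's diversity part.

```python
# Bipartite-graph decomposition in the spirit of DiBGF-MGC (a sketch).
# Assumptions: Gaussian-kernel sample-to-anchor graphs; element-wise
# minimum as a crude consensus; residuals as the view-private parts.
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 10                               # samples and anchors
anchors = rng.normal(size=(m, 5))
assign = rng.integers(0, m, n)               # shared cluster structure across views
B = []
for v in range(2):                           # two views: same samples, different noise
    X = anchors[assign] + 0.3 * rng.normal(size=(n, 5))
    d2 = ((X[:, None, :] - anchors[None]) ** 2).sum(-1)
    W = np.exp(-d2)                          # Gaussian-kernel affinities
    B.append(W / W.sum(axis=1, keepdims=True))
consensus = np.minimum(B[0], B[1])           # crude consistent part shared by views
diversity = [b - consensus for b in B]       # view-private (divergent) residuals
consensus /= consensus.sum(axis=1, keepdims=True)
labels = consensus.argmax(axis=1)            # read a clustering off the consensus graph
print("cluster recovery:", float((labels == assign).mean()))
print("diversity mass per view:", [round(float(d.mean()), 4) for d in diversity])
```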
Citations: 0