
IEEE Transactions on Emerging Topics in Computational Intelligence: Latest Publications

Semi-Fragile Neural Network Watermarking Based on Adversarial Examples
IF 5.3; Tier 3, Computer Science; Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2024-03-18; DOI: 10.1109/TETCI.2024.3372373
Zihan Yuan;Xinpeng Zhang;Zichi Wang;Zhaoxia Yin
Deep neural networks (DNNs) may be subject to various modifications during transmission and use. Regular processing operations do not affect the functionality of a model, while malicious tampering will cause serious damage. Therefore, it is crucial to determine the availability of a DNN model. To address this issue, we propose a semi-fragile black-box watermarking method that can distinguish between accidental modification and malicious tampering of DNNs, focusing on the privacy and security of neural network models. Specifically, for a given model, a strategy is designed to generate semi-fragile and sensitive samples using adversarial example techniques without decreasing the model accuracy. The model outputs for these samples are extremely sensitive to malicious tampering and robust to accidental modification. According to these properties, accidental modification and malicious tampering can be distinguished to assess the availability of a watermarked model. Extensive experiments demonstrate that the proposed method can detect malicious model tampering with accuracy of up to 100% while tolerating accidental modifications such as fine-tuning, pruning, and quantization with accuracy exceeding 75%. Moreover, our semi-fragile neural network watermarking approach can be easily extended to various DNNs.
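To make the idea concrete, the sketch below shows one way such boundary-sensitive key samples could be generated and later used to verify a model. It assumes a generic PyTorch classifier; the margin-based loss, step size, and acceptance threshold are illustrative assumptions, not the authors' exact procedure.

# Sketch: generate boundary-sensitive key samples, then verify model integrity.
# `model` is assumed to be a PyTorch classifier; hyperparameters are illustrative.
import torch

def generate_sensitive_samples(model, images, steps=10, alpha=0.01):
    """Push clean samples toward the decision boundary with small gradient steps,
    so their predicted labels flip easily under malicious tampering."""
    model.eval()
    x = images.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(x)
        # Minimise the margin between the top-2 classes to approach the boundary.
        top2 = logits.topk(2, dim=1).values
        margin = (top2[:, 0] - top2[:, 1]).mean()
        grad = torch.autograd.grad(margin, x)[0]
        x = (x - alpha * grad.sign()).detach().requires_grad_(True)
    keys = x.detach()
    with torch.no_grad():
        key_labels = model(keys).argmax(dim=1)   # record reference outputs
    return keys, key_labels

def verify(model, keys, key_labels, tolerance=0.75):
    """Accept the model if enough key samples keep their recorded labels."""
    with torch.no_grad():
        preds = model(keys).argmax(dim=1)
    match_rate = (preds == key_labels).float().mean().item()
    return match_rate >= tolerance, match_rate

A fine-tuned or pruned model would be expected to keep most key labels (match rate above the tolerance), whereas tampering that alters the decision boundary would flip many of them.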
Citations: 0
Attribute-Based Injection Transformer for Personalized Sentiment Analysis
IF 5.3; Tier 3, Computer Science; Q1 Mathematics; Pub Date: 2024-03-18; DOI: 10.1109/TETCI.2024.3369323
You Zhang;Jin Wang;Liang-Chih Yu;Dan Xu;Xuejie Zhang
Personal attributes have been proven to be useful for sentiment analysis. However, previous models for learning attribute-specific language representations are suboptimal because only context- or content-wise injection is adopted. This study proposes a transformer structure that combines both context- and content-wise injections on top of a well-pretrained transformer encoder. For context-wise injection, self-interactive attention is implemented by incorporating personal attributes into multi-head attention. From the content-wise perspective, an attribute-based layer normalization is used to align text representations with personal attributes. In particular, the proposed transformer layer can serve as a universal layer compatible with the original Google Transformer layer. Instead of training from scratch, the proposed transformer layer can be initialized from a well-pretrained checkpoint for downstream tasks. Extensive experiments were conducted on three benchmarks of document-level sentiment analysis, including IMDB, Yelp-2013, and Yelp-2014. The results show that the proposed method outperforms previous methods for personalized sentiment analysis, demonstrating that the combination of both context- and content-wise injections can facilitate model learning of attribute-specific language representations.
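The content-wise injection can be pictured as a layer normalization whose scale and shift are predicted from the attribute embedding. The sketch below is a minimal PyTorch rendering of that idea; the module and parameter names are illustrative, not the paper's code.

# Sketch: attribute-conditioned layer normalization (content-wise injection).
import torch
import torch.nn as nn

class AttributeLayerNorm(nn.Module):
    def __init__(self, hidden_size, attr_size, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.to_gamma = nn.Linear(attr_size, hidden_size)  # attribute -> scale
        self.to_beta = nn.Linear(attr_size, hidden_size)   # attribute -> shift

    def forward(self, hidden, attr_emb):
        # hidden: (batch, seq_len, hidden_size); attr_emb: (batch, attr_size)
        mean = hidden.mean(dim=-1, keepdim=True)
        var = hidden.var(dim=-1, keepdim=True, unbiased=False)
        normed = (hidden - mean) / torch.sqrt(var + self.eps)
        gamma = self.to_gamma(attr_emb).unsqueeze(1)  # broadcast over the sequence
        beta = self.to_beta(attr_emb).unsqueeze(1)
        return gamma * normed + beta

Replacing the standard LayerNorm inside a transformer block with such a module keeps the block shape-compatible with the original layer, which is what allows initialization from a pretrained checkpoint.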
Citations: 0
3DAttGAN: A 3D Attention-Based Generative Adversarial Network for Joint Space-Time Video Super-Resolution
IF 5.3; Tier 3, Computer Science; Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2024-03-18; DOI: 10.1109/TETCI.2024.3369994
Congrui Fu;Hui Yuan;Liquan Shen;Raouf Hamzaoui;Hao Zhang
Joint space-time video super-resolution aims to increase both the spatial resolution and the frame rate of a video sequence. As a result, details become more apparent, leading to a better and more realistic viewing experience. This is particularly valuable for applications such as video streaming, video surveillance (object recognition and tracking), and digital entertainment. Over the last few years, several joint space-time video super-resolution methods have been proposed. While those built on deep learning have shown great potential, their performance still falls short. One major reason is that they heavily rely on two-dimensional (2D) convolutional networks, which restricts their capacity to effectively exploit spatio-temporal information. To address this limitation, we propose a novel generative adversarial network for joint space-time video super-resolution. The novelty of our network is twofold. First, we propose a three-dimensional (3D) attention mechanism instead of traditional two-dimensional attention mechanisms. Our generator uses 3D convolutions associated with the proposed 3D attention mechanism to process temporal and spatial information simultaneously and focus on the most important channel and spatial features. Second, we design two discriminator strategies to enhance the performance of the generator. The discriminative network uses a two-branch structure to handle the intra-frame texture details and inter-frame motion occlusions in parallel, making the generated results more accurate. Experimental results on the Vid4, Vimeo-90K, and REDS datasets demonstrate the effectiveness of the proposed method.
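The shape of a 3D attention block of this kind can be sketched as channel attention followed by spatial(-temporal) attention over 3D-convolutional features. The layout below follows a common CBAM-style pattern and is an assumption for illustration, not the paper's exact architecture.

# Sketch: a 3D attention block over spatio-temporal feature maps.
import torch
import torch.nn as nn

class Attention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                       # (B, C, 1, 1, 1)
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv3d(1, 1, kernel_size=7, padding=3),     # attention over (T, H, W)
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, frames, height, width) from a 3D convolution
        x = x * self.channel_mlp(x)                        # channel attention
        spatial_map = self.spatial(x.mean(dim=1, keepdim=True))
        return x * spatial_map                             # spatial-temporal attention

Because the pooling and convolutions act jointly over the frame and spatial axes, the block weights temporal and spatial positions together rather than treating each frame independently.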
Citations: 0
Improving Topic Tracing with a Textual Reader for Conversational Knowledge Based Question Answering
IF 5.3; Tier 3, Computer Science; Q1 Mathematics; Pub Date: 2024-03-18; DOI: 10.1109/TETCI.2024.3369478
Zhipeng Liu;Jing He;Tao Gong;Heng Weng;Fu Lee Wang;Hai Liu;Tianyong Hao
Conversational KBQA (Knowledge Based Question Answering) is a sequential question-answering process carried out as a conversation grounded in a knowledge base, and it has received great attention in recent years. One of the major challenges in conversational KBQA is the ellipsis and co-reference of topic entities in follow-up questions, which affects the performance of the whole conversational KBQA system. Previous approaches identified the topics of current-turn questions by encoding conversation records or modeling entities in conversation records. However, they ignored the meanings carried by the entities themselves in the modeling process. To solve this problem and mitigate its impact on the whole KBQA system, we propose a new textual reader to integrate entity-related textual information and construct a graph-based neural network containing the textual reader to determine the topics of questions. The graph-based neural network scores the entities in each question of a conversation. Further, these scores are combined with the similarity between questions and answers to obtain the correct answers in conversational KBQA systems. Compared with baseline methods in more realistic settings, our proposed method improved accuracy by 5.5% on topic entity prediction and by 1.5% on conversational KBQA on benchmark datasets. Experiment results on two datasets demonstrate that our proposed method improves the performance of topic tracing and conversational KBQA.
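The final ranking step, in which entity scores from the graph network are combined with question-answer similarity, could look roughly like the sketch below. The linear weighting and the use of cosine similarity are assumptions for illustration, not the paper's exact formulation.

# Sketch: combine graph-network entity scores with question-answer similarity.
import torch
import torch.nn.functional as F

def rank_answers(entity_scores, question_emb, answer_embs, alpha=0.5):
    """
    entity_scores: (num_candidates,) topic-entity scores from the graph network
    question_emb:  (dim,) encoding of the current-turn question
    answer_embs:   (num_candidates, dim) encodings of candidate answers
    """
    sim = F.cosine_similarity(question_emb.unsqueeze(0), answer_embs, dim=-1)
    combined = alpha * entity_scores + (1.0 - alpha) * sim
    return combined.argsort(descending=True)   # best candidates first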
Citations: 0
Collaborative Neural Solution for Time-Varying Nonconvex Optimization With Noise Rejection
IF 5.3; Tier 3, Computer Science; Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2024-03-18; DOI: 10.1109/TETCI.2024.3369482
Lin Wei;Long Jin
This paper focuses on an emerging topic: current neural dynamics methods generally fail to accurately solve time-varying nonconvex optimization problems, especially when noise is taken into consideration. A collaborative neural solution that fuses the advantages of evolutionary computation and neural dynamics methods is proposed, which follows a meta-heuristic rule and exploits a robust gradient-based neural solution to deal with different noises. The gradient-based neural solution with robustness (GNSR) is proven to converge under noise disturbances and excels in local search. Besides, theoretical analysis ensures that the meta-heuristic rule guarantees the optimal solution for the global search with probability one. Lastly, simulative comparisons with existing methods and an application to manipulability optimization on a redundant manipulator substantiate the superiority of the proposed collaborative neural solution in solving nonconvex time-varying optimization problems.
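To illustrate only the gradient-based neural dynamics part (the meta-heuristic global search is omitted), the sketch below integrates a simple time-varying gradient flow under additive noise with Euler steps. The example objective, gains, and noise model are assumptions for illustration, not the paper's formulation.

# Sketch: Euler-integrated gradient-flow neural dynamics for a time-varying objective.
import numpy as np

def grad_f(x, t):
    # Illustrative time-varying, mildly nonconvex objective:
    # f(x, t) = (x - sin(t))^2 + 0.1 * cos(3x)
    return 2.0 * (x - np.sin(t)) - 0.3 * np.sin(3.0 * x)

def neural_dynamics(x0, t_end=10.0, dt=1e-3, gamma=20.0, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x, trajectory = x0, []
    for step in range(int(t_end / dt)):
        t = step * dt
        noise = noise_std * rng.standard_normal()
        x = x + dt * (-gamma * grad_f(x, t) + noise)   # gradient flow + injected noise
        trajectory.append(x)
    return np.array(trajectory)

traj = neural_dynamics(x0=0.0)
print(traj[-1])   # should track the moving minimizer near sin(t_end)

A sufficiently large gain gamma lets the state track the moving minimizer despite the injected noise, which is the local-search role the GNSR plays inside the collaborative framework.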
Citations: 0
GAA: Ghost Adversarial Attack for Object Tracking
IF 5.3; Tier 3, Computer Science; Q1 Mathematics; Pub Date: 2024-03-18; DOI: 10.1109/TETCI.2024.3369403
Mingyang Lei;Hong Song;Jingfan Fan;Deqiang Xiao;Danni Ai;Ying Gu;Jian Yang
Adversarial attack on convolutional neural networks (CNNs) is a technique for deceiving models with perturbations, which provides a way to evaluate the robustness of models. Adversarial attack research has primarily focused on single images. However, videos are more widely used. Existing attack methods generally require iterative optimization on different video sequences, which is highly time-consuming. In this paper, we propose a simple and effective approach for attacking video sequences, called Ghost Adversarial Attack (GAA), to greatly degrade the tracking performance of state-of-the-art (SOTA) CNN-based trackers with minimal ghost perturbations. Considering the timeliness of the attack, we generate the ghost adversarial example only once with a novel ghost-generator and use a computationally cheaper attack in subsequent frames. The ghost-generator is used to extract the target region and generate indistinguishable ghost noise for the target, hence misleading the tracker. Moreover, we propose a novel combined loss that includes the content loss, the ghost loss, and the transferred-fixed loss, which are used in different parts of the proposed method. The combined loss helps to generate similar adversarial examples with slight noise, like a ghost of the real target. Experiments were conducted on six benchmark datasets (UAV123, UAV20L, NFS, LaSOT, OTB50, and OTB100). The experimental results indicate that the ghost adversarial examples produced by GAA are highly stealthy while remaining effective in fooling SOTA trackers with high transferability. GAA can reduce the tracking success rate by an average of 66.6% and the precision rate by an average of 68.3%.
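The per-frame reuse of a once-generated ghost perturbation could be pictured as below: the perturbation is pasted, with a norm bound, into the target region of each new frame. The function names, bound, and clipping scheme are assumptions for illustration, not the paper's exact procedure.

# Sketch: applying a once-generated ghost perturbation to the target region of a frame.
import numpy as np

def apply_ghost(frame, ghost, bbox, epsilon=8.0):
    """frame: (H, W, 3) uint8; ghost: (h, w, 3) float perturbation; bbox: (x, y, w, h)."""
    x, y, w, h = bbox
    region = frame[y:y + h, x:x + w].astype(np.float32)
    # Resizing the ghost is omitted for brevity; assume it matches the bbox size.
    perturbed = region + np.clip(ghost, -epsilon, epsilon)   # bounded perturbation
    out = frame.copy()
    out[y:y + h, x:x + w] = np.clip(perturbed, 0, 255).astype(np.uint8)
    return out

Because the expensive optimization happens only once, each subsequent frame costs little more than a crop, an addition, and a clip, which is what makes the attack practical for video.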
Citations: 0
VTFR-AT: Adversarial Training With Visual Transformation and Feature Robustness
IF 5.3; Tier 3, Computer Science; Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2024-03-18; DOI: 10.1109/TETCI.2024.3370004
Xiang Li;Changfei Zhao;Xinyang Deng;Wen Jiang
Research on the robustness of deep neural networks to adversarial samples has grown rapidly since studies showed that deep learning is susceptible to adversarial perturbation noise. Among many defence strategies, adversarial training is widely regarded as the most powerful defence against adversarial attacks. It has been shown that the adversarial vulnerability of models is due to non-robust features learned from the data. However, few methods have attempted to improve adversarial training by enhancing the critical information in the data, i.e., the important region of the object. Moreover, adversarial training is prone to overfitting due to the overuse of training set samples. In this paper, we propose a new adversarial training framework with visual transformation and feature robustness, named VTFR-AT. The visual transformation (VT) module enhances principal information in images, weakens background information, and eliminates nuisance noise by pre-processing images. The feature robustness (FR) loss function makes the network's feature extraction partly robust to perturbations by constraining the feature similarity of the network on similar images. Extensive experiments have shown that the VTFR framework can substantially improve the performance of models on adversarial samples and improve adversarial robustness and generalization capabilities. As a plug-and-play module, the proposed framework can be easily combined with various existing adversarial training methods.
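A feature-robustness term of this kind can be sketched as an extra loss that pulls together the features of a clean image and of its transformed or adversarial counterpart. The model interface (returning features and logits), the cosine-distance form, and the weight lam are assumptions for illustration, not the exact VTFR-AT objective.

# Sketch: adversarial training loss plus a feature-robustness term.
import torch.nn.functional as F

def vtfr_loss(model, x_clean, x_adv, y, lam=0.1):
    """model(x) is assumed to return (features, logits)."""
    feat_clean, _ = model(x_clean)
    feat_adv, logits_adv = model(x_adv)
    ce = F.cross_entropy(logits_adv, y)                                   # adversarial CE loss
    fr = 1.0 - F.cosine_similarity(feat_clean, feat_adv, dim=-1).mean()   # feature robustness
    return ce + lam * fr

Because the extra term only touches the loss, it can be bolted onto existing adversarial training loops (PGD-AT, TRADES-style, etc.) without changing the attack generation itself, which is what makes the framework plug-and-play.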
Citations: 0
Reinforcement Learning and Transformer for Fast Magnetic Resonance Imaging Scan
IF 5.3; Tier 3, Computer Science; Q1 Mathematics; Pub Date: 2024-03-18; DOI: 10.1109/TETCI.2024.3358180
Yiming Liu;Yanwei Pang;Ruiqi Jin;Yonghong Hou;Xuelong Li
A major drawback of Magnetic Resonance Imaging (MRI) is the long scan time necessary to acquire complete K-space matrices using phase encoding. This paper proposes a transformer-based deep Reinforcement Learning (RL) framework (called TITLE) to reduce the scan time by sequentially selecting partial phases in real time so that a slice can be accurately reconstructed from the resultant slice-specific incomplete K-space matrix. As a deep learning based slice-specific method, the TITLE method has the following characteristics and merits: (1) It is real-time because the decision of which phase to encode next can be made within the interval between acquiring an echo signal and activating the next 180° RF pulse. (2) It exploits the powerful feature representation ability of the transformer, a self-attention based neural network, to predict phases within a deep reinforcement learning framework. (3) Both the historically selected phases (called the phase-indicator vector) and the corresponding undersampled image of the slice being scanned are used by the transformer to extract features. Experimental results on the fastMRI dataset demonstrate that the proposed method is 150 times faster than the state-of-the-art reinforcement learning based method and outperforms state-of-the-art deep learning based methods in reconstruction accuracy. The source code is available.
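A policy of the kind described, which scores candidate phase-encoding lines from the phase-indicator vector and the current undersampled image, could be organized roughly as below. The network layout, the way the image context is injected, and the masking of already-acquired phases are assumptions for illustration, not the TITLE architecture.

# Sketch: a transformer policy that scores the next phase-encoding line.
import torch
import torch.nn as nn

class PhasePolicy(nn.Module):
    def __init__(self, num_phases=256, d_model=128, nhead=4, nlayers=2):
        super().__init__()
        self.image_enc = nn.Sequential(                 # crude encoder for the undersampled image
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(16 * 16, d_model),
        )
        self.phase_emb = nn.Linear(1, d_model)          # embed each indicator bit as a token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 1)               # per-phase score (Q-value / logit)

    def forward(self, image, phase_indicator):
        # image: (B, 1, H, W); phase_indicator: (B, num_phases), 1 = already acquired
        tokens = self.phase_emb(phase_indicator.unsqueeze(-1))           # (B, P, d_model)
        tokens = tokens + self.image_enc(image).unsqueeze(1)             # broadcast image context
        scores = self.head(self.encoder(tokens)).squeeze(-1)             # (B, P)
        return scores.masked_fill(phase_indicator.bool(), float('-inf'))  # never reselect a phase

At each echo, the highest-scoring unacquired phase would be selected as the next line to encode, which is what allows the decision to fit inside the echo-to-RF-pulse interval.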
Citations: 0
Local Dimming for Video Based on an Improved Surrogate Model Assisted Evolutionary Algorithm
IF 5.3; Tier 3, Computer Science; Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2024-03-18; DOI: 10.1109/TETCI.2024.3370033
Yahui Cao;Tao Zhang;Xin Zhao;Yuzheng Yan;Shuxin Cui
Compared with traditional liquid crystal display (LCD) systems, local dimming systems can achieve higher display quality with lower power consumption. By treating local dimming of a static image as an optimization problem and solving it with an evolutionary algorithm, an optimal backlight matrix can be obtained. However, a local dimming algorithm based on an evolutionary algorithm is no longer applicable to video sequences because the computation is very time-consuming. This paper proposes a local dimming algorithm based on an improved surrogate model assisted evolutionary algorithm (ISAEA-LD). In this algorithm, a surrogate model assisted evolutionary algorithm is applied to solve the local dimming problem for video sequences. The surrogate model is used to reduce the cost of individual fitness evaluation in the evolutionary algorithm. Firstly, a surrogate model based on a convolutional neural network is adopted to improve the accuracy of the surrogate's fitness estimates. Secondly, the algorithm introduces a backlight update strategy based on the content correlation between adjacent frames of a video sequence and a model transfer strategy based on transfer learning to improve efficiency. Experimental results show that the proposed ISAEA-LD algorithm achieves better visual quality and higher algorithm efficiency.
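The surrogate-assisted evaluation step can be pictured as a small CNN that predicts the fitness of candidate backlight matrices, so that only the most promising candidates receive the expensive exact evaluation. The surrogate architecture and selection ratio below are assumptions for illustration, not the paper's design.

# Sketch: CNN fitness surrogate used to pre-screen candidate backlight matrices.
import torch
import torch.nn as nn

class FitnessSurrogate(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, backlight):                 # backlight: (B, 1, rows, cols)
        return self.net(backlight).squeeze(-1)    # predicted fitness per candidate

def select_for_exact_eval(surrogate, population, top_k=5):
    """population: (N, 1, rows, cols) candidate backlight matrices."""
    with torch.no_grad():
        predicted = surrogate(population)
    return predicted.topk(top_k).indices          # candidates worth the exact evaluation

Each generation, the surrogate ranks the whole population cheaply and only the top-k candidates are rendered and scored exactly, which is how the evolutionary search becomes fast enough for video.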
Citations: 0
A Survey of Deep Learning Video Super-Resolution
IF 5.3; Tier 3, Computer Science; Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2024-03-17; DOI: 10.1109/TETCI.2024.3398015
Arbind Agrahari Baniya;Tsz-Kwan Lee;Peter W. Eklund;Sunil Aryal
Video super-resolution (VSR) is a prominent research topic in low-level computer vision, where deep learning technologies have played a significant role. The rapid progress in deep learning and its applications in VSR has led to a proliferation of tools and techniques in the literature. However, the usage of these methods is often not adequately explained, and decisions are primarily driven by quantitative improvements. Given the significance of VSR's potential influence across multiple domains, it is imperative to conduct a comprehensive analysis of the elements and deep learning methodologies employed in VSR research. This methodical analysis will facilitate the informed development of models tailored to specific application needs. In this paper, we present an overarching overview of deep learning-based video super-resolution models, investigating each component and discussing its implications. Furthermore, we provide a synopsis of key components and technologies employed by state-of-the-art and earlier VSR models. By elucidating the underlying methodologies and categorising them systematically, we identified trends, requirements, and challenges in the domain. As a first-of-its-kind survey of deep learning-based VSR models, this work also establishes a multi-level taxonomy to guide current and future VSR research, enhancing the maturation and interpretation of VSR practices for various practical applications.
Citations: 0