
Latest publications in Knowledge-Based Systems

Multivariate time series generation based on dual-channel Transformer conditional GAN for industrial remaining useful life prediction
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112749
Zhizheng Zhang, Hui Gao, Wenxu Sun, Wen Song, Qiqiang Li
Remaining useful life (RUL) prediction is a key enabler of predictive maintenance. While deep learning based prediction methods have made great progress, the data imbalance caused by limited run-to-failure data severely undermines their performance. Some recent works employ generative adversarial networks (GANs) to tackle this issue. However, most GAN-based generative methods have difficulty simultaneously extracting correlations across different time steps and sensors. In this paper, we propose the dual-channel Transformer conditional GAN (DCTC-GAN), a novel multivariate time series (MTS) generation framework that generates high-quality MTS to enhance deep learning based RUL prediction models. We design a novel dual-channel Transformer architecture for both the generator and the discriminator, consisting of a temporal encoder and a spatial encoder that work in parallel to pay different attention to different time steps and sensors. Based on this, DCTC-GAN can directly extract long-distance temporal relations across time steps while capturing the spatial correlations among sensors to synthesize high-quality MTS data. Experimental analysis on the widely used turbofan engine and FEMTO bearing datasets demonstrates that our DCTC-GAN significantly enhances the performance of existing deep learning models for RUL prediction without changing their structure, and exceeds the capabilities of current representative generative methods.
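As a rough illustration of the dual-channel idea, the sketch below runs one self-attention branch over time steps and another over sensors, then fuses the two by pooling and concatenation; the hidden size, layer counts, and fusion rule are assumptions for illustration, not the authors' implementation.

```python
# Illustrative dual-channel encoder: one attention branch over time steps,
# one over sensors, fused by pooling and concatenation (an assumed fusion rule).
import torch
import torch.nn as nn

class DualChannelEncoder(nn.Module):
    def __init__(self, n_sensors: int, seq_len: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.time_proj = nn.Linear(n_sensors, d_model)    # one token per time step
        self.sensor_proj = nn.Linear(seq_len, d_model)    # one token per sensor
        self.temporal = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.spatial = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, x):                                   # x: (batch, seq_len, n_sensors)
        t_tokens = self.temporal(self.time_proj(x))                       # (B, T, d)
        s_tokens = self.spatial(self.sensor_proj(x.transpose(1, 2)))      # (B, S, d)
        return torch.cat([t_tokens.mean(dim=1), s_tokens.mean(dim=1)], dim=-1)

enc = DualChannelEncoder(n_sensors=14, seq_len=30)
print(enc(torch.randn(8, 30, 14)).shape)                    # torch.Size([8, 128])
```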
Citations: 0
Can question-texts improve the recognition of handwritten mathematical expressions in respondents’ solutions?
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112731
Ting Zhang, Xinxin Jin, Xiaoyang Ma, Xinzi Peng, Yiyang Zhao, Jinzheng Liu, Xinguo Yu
The accurate recognition of respondents’ handwritten solutions is important for implementing intelligent diagnosis and tutoring. This task is significantly challenging because of scribbled and irregular writing, especially for primary or secondary students whose handwriting has not yet fully developed. In such cases, recognition is difficult even for humans who rely only on the visual signals of the handwritten content without any context. However, despite decades of work on handwriting recognition, few studies have explored utilizing external information (question priors) to improve accuracy. Based on the correlation between questions and solutions, this study explores whether question-texts can improve the recognition of handwritten mathematical expressions (HMEs) in respondents’ solutions. Building on the encoder–decoder framework, the mainstream approach to HME recognition, we propose two models for fusing question-text signals and handwriting-vision signals at the encoder and decoder stages, respectively. The first, called encoder-fusion, adopts a static query to implement the interaction between the two modalities at the encoder stage; the second, called decoder-attend, uses a dynamic query at the decoder stage to better capture and interpret the interaction. The two models were evaluated on a self-collected dataset comprising approximately 7k samples and achieved expression-level accuracies of 62.61% and 64.20%, respectively. The experimental results demonstrate that both models outperform the baseline model, which uses only visual information, and that encoder-fusion achieves results comparable to other state-of-the-art methods.
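A minimal sketch of how a static query might fuse the two modalities at the encoder stage is given below; the dimensions, the single learnable query, and the additive fusion are assumptions for illustration rather than the paper's exact design.

```python
# A static learnable query summarizes the question-text tokens via cross-attention;
# the summary is broadcast-added to the handwriting (vision) features.
import torch
import torch.nn as nn

class EncoderFusion(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.static_query = nn.Parameter(torch.randn(1, 1, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, vision_feats, text_feats):
        # vision_feats: (B, N_patches, d); text_feats: (B, N_tokens, d)
        q = self.static_query.expand(vision_feats.size(0), -1, -1)       # (B, 1, d)
        text_summary, _ = self.cross_attn(q, text_feats, text_feats)     # (B, 1, d)
        return vision_feats + text_summary                               # broadcast fusion

fusion = EncoderFusion()
print(fusion(torch.randn(2, 100, 256), torch.randn(2, 20, 256)).shape)   # (2, 100, 256)
```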
Citations: 0
Mutual information-driven self-supervised point cloud pre-training
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112741
Weichen Xu, Tianhao Fu, Jian Cao, Xinyu Zhao, Xinxin Xu, Xixin Cao, Xing Zhang
Learning universal representations from unlabeled 3D point clouds is essential to improve the generalization and safety of autonomous driving. Generative self-supervised point cloud pre-training with low-level features as pretext tasks is a mainstream paradigm. However, from the perspective of mutual information, this approach is constrained by spatial information and entangled representations. In this study, we propose a generalized generative self-supervised point cloud pre-training framework called GPICTURE. High-level features were used as an additional pretext task to enhance the understanding of semantic information. Considering the varying difficulty caused by the discrimination of voxel features, we designed inter-class and intra-class discrimination-guided masking (I2Mask) to set the masking ratio adaptively. Furthermore, to ensure a hierarchical and stable reconstruction process, centered kernel alignment-guided hierarchical reconstruction and differential-gated progressive learning were employed to control the multiple reconstruction tasks. Complete theoretical analyses demonstrated that high-level features can enhance the mutual information between the latent features and both the high-level features and the input point cloud. On Waymo, nuScenes, and SemanticKITTI, we achieved a 75.55% mAP for 3D object detection, 79.7% mIoU for 3D semantic segmentation, and 18.8% mIoU for occupancy prediction. Notably, with only 50% of the fine-tuning data, the performance of GPICTURE was close to that of training from scratch with 100% of the fine-tuning data. In addition, consistent visualization with downstream tasks and a 57% reduction in weight disparity demonstrated a better fine-tuning starting point. The project page is hosted at https://gpicture-page.github.io/.
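The adaptive masking ratio can be pictured with a toy function: tokens (voxels) with higher discrimination scores are masked more often while the overall ratio stays near a target value. The linear scaling rule below is an assumption for illustration, not the I2Mask formula.

```python
# Toy discrimination-guided masking: voxels with higher discrimination scores are
# masked more often, while the mean masking ratio stays close to a target value.
import numpy as np

def adaptive_masking(scores: np.ndarray, target_ratio: float = 0.6, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    probs = scores / scores.sum() * target_ratio * scores.size   # mean(probs) == target_ratio
    probs = np.clip(probs, 0.0, 1.0)                             # keep valid probabilities
    return rng.random(scores.shape) < probs                      # boolean mask per voxel

mask = adaptive_masking(np.random.rand(1024))
print(mask.mean())                                               # roughly 0.6
```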
Citations: 0
Data augmentation based on large language models for radiological report classification
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112745
Jaime Collado-Montañez, María-Teresa Martín-Valdivia, Eugenio Martínez-Cámara
The International Classification of Diseases (ICD) is fundamental in the field of healthcare, as it provides a standardized framework for the classification and coding of medical diagnoses and procedures, enabling the understanding of international public health patterns and trends. However, manually classifying medical reports according to this standard is a slow, tedious and error-prone process, which shows the need for automated systems that offload this task from healthcare professionals and reduce the number of errors. In this paper, we propose an automated classification system based on Natural Language Processing to analyze radiological reports and classify them according to the ICD-10. Given the specialized language of radiological reports and the typically imbalanced distribution of medical report sets, we propose a methodology grounded in leveraging large language models to augment the data of under-represented classes and to adapt the classification language models to the specific language of radiological reports. The results show that the proposed methodology enhances classification performance on the CARES corpus of radiological reports.
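A minimal sketch of the augmentation loop described above might look like the following; `llm_generate` is a placeholder for whatever LLM client is used, and the prompt wording and the minimum-count threshold are assumptions.

```python
# Sketch of LLM-based augmentation for under-represented ICD-10 codes;
# `llm_generate(prompt)` is a placeholder for the actual LLM client.
from collections import Counter

def augment_minority_classes(reports, labels, llm_generate, min_count=50):
    counts = Counter(labels)
    new_reports, new_labels = [], []
    for code, n in counts.items():
        for _ in range(max(0, min_count - n)):               # only top up rare codes
            prompt = (f"Write a short, realistic radiological report consistent with "
                      f"ICD-10 code {code}. Do not copy existing reports.")
            new_reports.append(llm_generate(prompt))
            new_labels.append(code)
    return reports + new_reports, labels + new_labels

# Example with a dummy generator:
reports, labels = augment_minority_classes(["chest x-ray ..."], ["J18.9"],
                                            llm_generate=lambda p: "<synthetic report>",
                                            min_count=3)
print(len(reports), labels)
```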
Citations: 0
Individualized image steganography method with Dynamic Separable Key and Adaptive Redundancy Anchor
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112729
Junchao Zhou, Yao Lu, Guangming Lu
Image steganography hides several secret images inside a single cover image to produce a stego image. For transmission security, the stego image is visually indistinguishable from the cover image. Furthermore, for effective transmission of secret information, the receivers should recover the secret images with high quality. With increasing steganography capacity, a stego image containing many secret images is transmitted through public channels. However, in existing image steganography methods, all the secret images are usually revealed to every recipient without isolation, which poses a security threat in the recovery process. To overcome this issue, we propose the Individualized Image Steganography (IIS) method with Dynamic Separable Key (DSK) and Adaptive Redundancy Anchor (ARA). Specifically, in the process of hiding secret images, the proposed DSK dynamically generates a global key and a local key and appropriately fuses them together. In the same batch of transmission, all recipients share the same global key, but each has a different local key. Only when both the global key and the local key match can the secret image be restored by the specific receiver, which individualizes the secret image for the target recipient. Additionally, in the process of revealing secret images, the proposed ARA learns an adaptive redundancy anchor for the inverse training that drives the input redundancy of the revealing (backward) process and the output redundancy of the hiding (forward) process to be close. This achieves a better trade-off between the hiding and revealing processes and further enhances the quality of both the restored secret images and the stego images. Jointly using the DSK and ARA, a series of experiments verified that our IIS method achieves satisfactory performance improvements across extensive aspects. Code is available at https://github.com/Revive624/Individualized-Invertible-Steganography.
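The global/local key idea can be illustrated with a toy conditioning vector: the two keys are fused (here simply by hashing, whereas the paper's DSK fusion is learned inside the network), and a secret can only be revealed when the receiver reproduces exactly the same vector. The key strings and vector dimension are hypothetical.

```python
# Toy key fusion: a shared global key and a per-recipient local key are hashed into
# a conditioning vector; revealing succeeds only when both keys match exactly.
import hashlib
import numpy as np

def key_vector(global_key: str, local_key: str, dim: int = 128) -> np.ndarray:
    digest = hashlib.sha256(f"{global_key}|{local_key}".encode()).digest()
    seed = int.from_bytes(digest[:8], "big")
    return np.random.default_rng(seed).standard_normal(dim)

hide_vec = key_vector("batch-2024-11", "recipient-A")        # used when hiding
ok = np.allclose(hide_vec, key_vector("batch-2024-11", "recipient-A"))
bad = np.allclose(hide_vec, key_vector("batch-2024-11", "recipient-B"))
print(ok, bad)                                                # True False
```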
{"title":"Individualized image steganography method with Dynamic Separable Key and Adaptive Redundancy Anchor","authors":"Junchao Zhou,&nbsp;Yao Lu,&nbsp;Guangming Lu","doi":"10.1016/j.knosys.2024.112729","DOIUrl":"10.1016/j.knosys.2024.112729","url":null,"abstract":"<div><div>Image steganography hides several secret images into a single cover image to produce a stego image. For transmission security, the stego image is visually indistinguishable from the cover image. Furthermore, for effective transmission of secret information, the receivers should recover the secret images with high quality. With the increasing steganography capacity, a stego image containing many secret images is transmitted through public channels. However, in the existing image steganography methods, all the secret images are usually revealed without quarantine among various recipients. This problem casts a threat to security in the recovery process. In order to overcome this issue, we propose the Individualized Image Steganography (<strong>IIS</strong>) Method with Dynamic Separable Key (DSK) and Adaptive Redundancy Anchor (ARA). Specifically, in the process of hiding secret images, the proposed DSK dynamically generates a global key and a local key and appropriately fuses them together. In the same batch of transmission, all recipients share the same global key, but each has a different local key. Only by matching both the global key and the local key simultaneously, can the secret image be restored by the specific receiver, which makes the secret image individualized for the target recipient. Additionally, in the process of revealing secret images, the proposed ARA learns the adaptive redundancy anchor for the inverse training to drive the input redundancy of revealing (backward) process and output redundancy of hiding (forward) process to be close. This achieves a better trade-off between the performances of hiding and revealing processes, and further enhances both the quality of restored secret images and stego images. Jointly using the DSK and ARA, a series of experiments have verified that our <strong>IIS</strong> method has achieved satisfactory performance improvements in extensive aspects. Code is available in <span><span>https://github.com/Revive624/Individualized-Invertible-Steganography</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"309 ","pages":"Article 112729"},"PeriodicalIF":7.2,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142748703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A transformer based visual tracker with restricted token interaction and knowledge distillation
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112736
Nian Liu, Yi Zhang
Recently, one-stream pipelines, in which the template and search images interact from early stages, have made significant progress in visual object tracking (VOT). However, one-stream pipelines have a potential problem: they treat the object and the background (or other irrelevant parts) equally, leading to weak discriminability of the extracted features. To remedy this issue, this paper proposes a restricted token interaction module based on an asymmetric attention mechanism, which divides the search image into a valuable part and a remaining part. Only the valuable part is selected for cross-attention with the template, so as to better distinguish the object from the background, which ultimately improves localization accuracy and robustness. In addition, to avoid heavy computational overhead, we utilize logit distillation and localization distillation to optimize the outputs of the classification and regression heads, respectively. At the same time, we separate the distillation regions and apply different knowledge distillation methods in different regions to effectively determine which regions are most beneficial for classification or localization learning. Extensive experiments have been conducted on mainstream datasets, in which our tracker (dubbed RIDTrack) achieves appealing results while meeting the real-time requirement.
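One plausible reading of "restricted token interaction" is sketched below: search tokens are scored against a pooled template descriptor, only the top-k take part in cross-attention with the template, and the rest pass through unchanged. The scoring rule and the value of k are assumptions, not the paper's exact mechanism.

```python
# Sketch of restricted token interaction: only the top-k search tokens most similar
# to the pooled template descriptor attend to the template; others pass through.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RestrictedTokenInteraction(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, k: int = 64):
        super().__init__()
        self.k = k
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, search_tokens, template_tokens):
        # search_tokens: (B, Ns, d); template_tokens: (B, Nt, d)
        anchor = template_tokens.mean(dim=1, keepdim=True)                  # (B, 1, d)
        scores = F.cosine_similarity(search_tokens, anchor, dim=-1)         # (B, Ns)
        idx = scores.topk(self.k, dim=1).indices                            # (B, k)
        idx_exp = idx.unsqueeze(-1).expand(-1, -1, search_tokens.size(-1))  # (B, k, d)
        selected = search_tokens.gather(1, idx_exp)                         # valuable part
        attended, _ = self.cross_attn(selected, template_tokens, template_tokens)
        return search_tokens.scatter(1, idx_exp, attended)                  # write back

rti = RestrictedTokenInteraction()
print(rti(torch.randn(2, 256, 256), torch.randn(2, 64, 256)).shape)         # (2, 256, 256)
```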
Citations: 0
Multi-source partial domain adaptation with Gaussian-based dual-level weighting for PPG-based heart rate estimation
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112769
Jihyun Kim, Hansam Cho, Minjung Lee, Seoung Bum Kim
Photoplethysmography (PPG) signals from wearable devices have expanded the accessibility of heart rate estimation. Recent advances in deep learning have significantly improved the generalizability of heart rate estimation from PPG signals. However, these models exhibit performance degradation when used for new subjects with different PPG distributions. Although previous studies have attempted subject-specific training and fine-tuning techniques, they require labeled data for each new subject, limiting their practicality. In response, we explore the application of domain adaptation techniques using only unlabeled PPG signals from the target subject. However, naive domain adaptation approaches do not adequately account for the variability in PPG signals among different subjects in the training dataset. Furthermore, they overlook the possibility that the heart rate range of the target subject may only partially overlap with that of the source subjects. To address these limitations, we propose a novel multi-source partial domain adaptation method, GAussian-based dUaL-level weighting (GAUL), designed for the PPG-based heart rate estimation, formulated as a regression task. GAUL considers and adjusts the contribution of relevant source data at the domain and sample levels during domain adaptation. The experimental results on three benchmark datasets demonstrate that our method outperforms existing domain adaptation approaches, enhancing the heart rate estimation accuracy for new subjects without requiring additional labeled data. The code is available at: https://github.com/Im-JihyunKim/GAUL.
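As a rough illustration of Gaussian-based dual-level weighting, the toy function below weights each source domain by how well a diagonal Gaussian fitted to it explains the target features, and weights samples within each domain by how well the target's Gaussian explains them. The diagonal-Gaussian assumption and softmax normalization are simplifications, not the exact GAUL formulation.

```python
# Toy Gaussian-based dual-level weighting: domain weights come from how well each
# source domain's Gaussian explains the target; sample weights come from how well
# the target's Gaussian explains each source sample.
import numpy as np

def log_gauss(x, mu, var):
    # Diagonal-Gaussian log-density, summed over feature dimensions.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

def dual_level_weights(source_feats, target_feats, eps=1e-6):
    t_mu, t_var = target_feats.mean(0), target_feats.var(0) + eps
    domain_scores, sample_weights = [], []
    for feats in source_feats:                               # list of (n_i, d) arrays
        mu, var = feats.mean(0), feats.var(0) + eps
        domain_scores.append(log_gauss(target_feats, mu, var).mean())
        s = log_gauss(feats, t_mu, t_var)
        w = np.exp(s - s.max())
        sample_weights.append(w / w.sum())
    d = np.array(domain_scores)
    domain_weights = np.exp(d - d.max())
    return domain_weights / domain_weights.sum(), sample_weights

dw, sw = dual_level_weights([np.random.randn(50, 8), np.random.randn(80, 8) + 1.0],
                            np.random.randn(40, 8))
print(dw, sw[0].shape)
```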
{"title":"Multi-source partial domain adaptation with Gaussian-based dual-level weighting for PPG-based heart rate estimation","authors":"Jihyun Kim ,&nbsp;Hansam Cho ,&nbsp;Minjung Lee ,&nbsp;Seoung Bum Kim","doi":"10.1016/j.knosys.2024.112769","DOIUrl":"10.1016/j.knosys.2024.112769","url":null,"abstract":"<div><div>Photoplethysmography (PPG) signals from wearable devices have expanded the accessibility of heart rate estimation. Recent advances in deep learning have significantly improved the generalizability of heart rate estimation from PPG signals. However, these models exhibit performance degradation when used for new subjects with different PPG distributions. Although previous studies have attempted subject-specific training and fine-tuning techniques, they require labeled data for each new subject, limiting their practicality. In response, we explore the application of domain adaptation techniques using only unlabeled PPG signals from the target subject. However, naive domain adaptation approaches do not adequately account for the variability in PPG signals among different subjects in the training dataset. Furthermore, they overlook the possibility that the heart rate range of the target subject may only partially overlap with that of the source subjects. To address these limitations, we propose a novel multi-source partial domain adaptation method, GAussian-based dUaL-level weighting (GAUL), designed for the PPG-based heart rate estimation, formulated as a regression task. GAUL considers and adjusts the contribution of relevant source data at the domain and sample levels during domain adaptation. The experimental results on three benchmark datasets demonstrate that our method outperforms existing domain adaptation approaches, enhancing the heart rate estimation accuracy for new subjects without requiring additional labeled data. The code is available at: <span><span>https://github.com/Im-JihyunKim/GAUL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"309 ","pages":"Article 112769"},"PeriodicalIF":7.2,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142748611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Affective body expression recognition framework based on temporal and spatial fusion features
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112744
Tao Wang, Shuang Liu, Feng He, Minghao Du, Weina Dai, Yufeng Ke, Dong Ming
Affective body expression recognition technology enables machines to interpret non-verbal emotional signals from human movements, which is crucial for facilitating natural and empathetic human–machine interaction (HCI). This work proposes a new framework for emotion recognition from body movements, providing a universal and effective solution for decoding the temporal–spatial mapping between emotions and body expressions. Compared with previous studies, our approach extracted interpretable temporal and spatial features by constructing a body expression energy model (BEEM) and a multi-input symmetric positive definite matrix network (MSPDnet). In particular, the temporal features extracted from the BEEM reveal the energy distribution, dynamical complexity, and frequency activity of the body expression under different emotions, while the spatial features obtained by MSPDnet capture the spatial Riemannian properties between body joints. Furthermore, this paper introduces an attentional temporal–spatial feature fusion (ATSFF) algorithm to adaptively fuse temporal and spatial features with different semantics and scales, significantly improving the discriminability and generalizability of the fused features. The proposed method achieves recognition accuracies over 90% across four public datasets, outperforming most state-of-the-art approaches.
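MSPDnet operates on symmetric positive definite (SPD) matrices built from body joints. A common way to obtain such a matrix is the covariance of joint coordinates over a clip, mapped through the matrix logarithm so Euclidean tools apply; the sketch below uses that standard recipe as an assumption, since the abstract does not state exactly how MSPDnet constructs its SPD inputs.

```python
# A standard SPD feature for a motion clip: the covariance of joint coordinates,
# regularized and mapped through the matrix logarithm (log-Euclidean representation).
import numpy as np
from scipy.linalg import logm

def spd_feature(joints: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    # joints: (n_frames, n_joints * 3) flattened 3D coordinates per frame
    cov = np.cov(joints, rowvar=False) + eps * np.eye(joints.shape[1])
    return np.real(logm(cov))        # symmetric matrix usable with Euclidean tools

feat = spd_feature(np.random.randn(120, 25 * 3))
print(feat.shape)                    # (75, 75)
```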
Citations: 0
DTDA: Dual-channel Triple-to-quintuple Data Augmentation for Comparative Opinion Quintuple Extraction
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112734
Qingting Xu, Kaisong Song, Yangyang Kang, Chaoqun Liu, Yu Hong, Guodong Zhou
Comparative Opinion Quintuple Extraction (COQE) is an essential task in sentiment analysis that entails the extraction of quintuples from comparative sentences. Each quintuple comprises a subject, an object, a shared aspect for comparison, a comparative opinion and a distinct preference. The prevalent reliance on extensively annotated datasets inherently constrains the efficiency of training. Manual data labeling is both time-consuming and labor-intensive, especially labeling quintuple data. Herein, we propose a Dual-channel Triple-to-quintuple Data Augmentation (DTDA) approach for the COQE task. In particular, we leverage ChatGPT to generate domain-specific triple data. Subsequently, we utilize these generated data and existing Aspect Sentiment Triplet Extraction (ASTE) data for separate preliminary fine-tuning. On this basis, we employ the two fine-tuned triple models for warm-up and construct a dual-channel quintuple model using the unabridged quintuples. We evaluate our approach on three benchmark datasets: Camera-COQE, Car-COQE and Ele-COQE. Our approach exhibits substantial improvements versus pipeline-based, joint, and T5-based baselines. Notably, the DTDA method significantly outperforms the best pipeline method, with exact match F1-score increasing by 10.32%, 8.97%, and 10.65% on Camera-COQE, Car-COQE and Ele-COQE, respectively. More importantly, our data augmentation method can adapt to any baselines. When integrated with the current SOTA UniCOQE method, it further improves performance by 0.34%, 1.65%, and 2.22%, respectively. We will make all related models and source code publicly available upon acceptance.
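To make the two annotation levels concrete, the sketch below spells out the quintuple fields named in the abstract and the smaller ASTE-style triple used for warm-up; the field names and example values are illustrative only.

```python
# The two annotation levels referred to above, spelled out as data classes;
# field names follow the abstract, example values are illustrative.
from dataclasses import dataclass

@dataclass
class OpinionTriple:            # ASTE-style triple used for warm-up
    aspect: str
    opinion: str
    sentiment: str

@dataclass
class ComparativeQuintuple:     # COQE target structure
    subject: str
    object: str
    aspect: str                 # shared aspect for comparison
    opinion: str                # comparative opinion
    preference: str             # e.g. "better", "worse", "equal"

q = ComparativeQuintuple("Camera A", "Camera B", "battery life", "lasts longer", "better")
print(q)
```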
Citations: 0
Enhancing person re-identification via Uncertainty Feature Fusion Method and Auto-weighted Measure Combination
IF 7.2, CAS Tier 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2024-11-20. DOI: 10.1016/j.knosys.2024.112737
Quang-Huy Che, Le-Chuong Nguyen, Duc-Tuan Luu, Vinh-Tiep Nguyen
Person re-identification (Re-ID) is a challenging task that involves identifying the same person across different camera views in surveillance systems. Current methods usually rely on features from single-camera views, which can be limiting when dealing with multiple cameras and challenges such as changing viewpoints and occlusions. In this paper, a new approach is introduced that enhances the capability of ReID models through the Uncertain Feature Fusion Method (UFFM) and Auto-weighted Measure Combination (AMC). UFFM generates multi-view features using features extracted independently from multiple images to mitigate view bias. However, relying only on similarity based on multi-view features is limited because these features ignore the details represented in single-view features. Therefore, we propose the AMC method to generate a more robust similarity measure by combining various measures. Our method significantly improves Rank@1 (Rank-1 accuracy) and Mean Average Precision (mAP) when evaluated on person re-identification datasets. Combined with the BoT Baseline on challenging datasets, we achieve impressive results, with a 7.9% improvement in Rank@1 and a 12.1% improvement in mAP on the MSMT17 dataset. On the Occluded-DukeMTMC dataset, our method increases Rank@1 by 22.0% and mAP by 18.4%.
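A bare-bones version of the two ideas is sketched below: single-view features are normalized and averaged into a multi-view descriptor (UFFM additionally models uncertainty, which is omitted here), and ranking uses a weighted combination of distance measures whose weights AMC would set automatically. The fixed weights and the choice of cosine plus Euclidean distance are assumptions.

```python
# Bare-bones multi-view fusion and weighted measure combination: average normalized
# single-view features, then rank the gallery by a weighted sum of two distances.
import numpy as np

def fuse_multi_view(feats: np.ndarray) -> np.ndarray:
    feats = feats / np.linalg.norm(feats, axis=-1, keepdims=True)   # L2-normalize each view
    return feats.mean(axis=0)                                       # multi-view descriptor

def combined_distance(query, gallery, weights=(0.5, 0.5)):
    cos_d = 1.0 - gallery @ query / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(query) + 1e-12)
    euc_d = np.linalg.norm(gallery - query, axis=1)
    return weights[0] * cos_d + weights[1] * euc_d

gallery = np.random.randn(100, 512)
query = fuse_multi_view(np.random.randn(5, 512))                    # 5 views of one identity
print(np.argsort(combined_distance(query, gallery))[:5])            # top-5 gallery indices
```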
Citations: 0