
Latest publications in IEEE Transactions on Artificial Intelligence

Location, Neighborhood, and Semantic Guidance Network for RGB-D Co-Salient Object Detection
Pub Date : 2025-04-24 DOI: 10.1109/TAI.2025.3564238
Wujie Zhou;Bingying Wang;Xiena Dong;Caie Xu;Fangfang Qiang
Red–green–blue-depth (RGB-D) deep learning-based co-salient object detection (Co-SOD) automatically detects and segments common salient objects in images. However, such computationally intensive models cannot run on mobile devices. To help overcome this limitation, this article proposes a localization, neighborhood, and semantic guidance network (LNSNet) with knowledge distillation (KD), called LNSNet-S*, for RGB-D Co-SOD that minimizes the number of parameters while improving accuracy. Apart from their backbone networks, the LNSNet student (LNSNet-S) and teacher (LNSNet-T) models use the same structure to capture similarity knowledge in the category, channel, and pixel-point dimensions, training an LNSNet-S with KD for superior lightweight performance. For optimization, a positioning-path progressive activation uses hierarchical transformers to fuse features from low to high levels, generating class-activation localization maps from the fused bimodal information to obtain location information. The high-level neighborhood-guidance information is then used to guide the low-level features. Next, a multisource semantic enhancement embedding module progressively fuses multiscale cross-modal semantic information guided by the class-activated localization information. A class-based progressive triplet loss facilitates the transfer of category, channel, and pixel-point information. Extensive experiments demonstrated the effectiveness and robustness of LNSNet-S* at different sizes, with significant improvements. The smallest LNSNet-S* model reduced the number of parameters by more than 92% compared to LNSNet-T, requiring only 15.9 M parameters.
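A minimal, hypothetical sketch of one ingredient of this kind of distillation—matching channel-wise feature similarity between a frozen teacher and a lightweight student; it is not the authors' implementation, and the tensor shapes are assumptions:

```python
# Hypothetical sketch of channel-similarity distillation (one of the three
# similarity dimensions mentioned above); not the authors' code.
import torch
import torch.nn.functional as F

def channel_similarity(feat):
    # feat: (B, C, H, W) -> (B, C, C) normalized channel-affinity matrix
    b, c, h, w = feat.shape
    flat = F.normalize(feat.view(b, c, h * w), dim=2)
    return flat @ flat.transpose(1, 2)

def kd_similarity_loss(student_feat, teacher_feat):
    # Student learns to reproduce the (detached) teacher's channel affinities.
    return F.mse_loss(channel_similarity(student_feat),
                      channel_similarity(teacher_feat).detach())
```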
{"title":"Location, Neighborhood, and Semantic Guidance Network for RGB-D Co-Salient Object Detection","authors":"Wujie Zhou;Bingying Wang;Xiena Dong;Caie Xu;Fangfang Qiang","doi":"10.1109/TAI.2025.3564238","DOIUrl":"https://doi.org/10.1109/TAI.2025.3564238","url":null,"abstract":"Red–green–blue-depth (RGB-D) deep learning-based co-salient object detection (Co-SOD) automatically detects and segments common salient objects in images. However, this computationally intensive model cannot be run on mobile devices. To help overcome this limitation, this article proposes a localization, neighborhood, and semantic guidance network (LNSNet) with knowledge distillation (KD), called LNSNet-S<sup>*</sup>, for RGB-D Co-SOD to minimize the number of parameters and improve the accuracy. Apart from their backbone networks, the LNSNet student (LNSNet-S) and teacher (LNSNet-T) models use the same structure to capture similarity knowledge in category, channel, and pixel-point dimensions to train an LNSNet-S with KD for superior lightweight performance. For optimization, a positioning path progressive activation uses hierarchical transformers to fuse features from low to high levels, generating class activation localization maps using the fused bimodal information to obtain location information. The high-level neighborhood-guidance information is then used to guide the low-level features. Next, a multisource semantic enhancement embedding module progressively fuses multiscale cross-modal semantic information guided by class-activated localization information. A class-based progressive triplet loss facilitates the transfer of category, channel, and pixel-point information. Extensive experiments demonstrated the effectiveness and robustness of the novel LNSNet-S<sup>*</sup> in different sizes, and significant improvements were observed. The smallest LNSNet-S<sup>*</sup> model reduced the number of parameters by more than 92% compared to that of LNSNet-T, requiring only 15.9 M parameters.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 12","pages":"3297-3311"},"PeriodicalIF":0.0,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient Transformer Inference Through Hybrid Dynamic Pruning
Pub Date : 2025-04-23 DOI: 10.1109/TAI.2025.3563144
Ghadeer A. Jaradat;Mohammed F. Tolba;Ghada Alsuhli;Hani Saleh;Mahmoud Al-Qutayri;Thanos Stouraitis
In the world of deep learning, transformer models have become very significant, driving improvements in many areas, from language understanding to image recognition, and covering a wide range of applications. Despite their success, deploying these models in real-time applications, particularly on edge devices, poses significant challenges due to their computational intensity and memory demands. To overcome these challenges, we introduce a novel hybrid dynamic pruning (HDP) technique, an efficient algorithm–architecture codesign approach that accelerates transformers using head sparsity, block sparsity, and approximation to reduce attention computations and memory access. Observing the large redundancy in attention scores and attention heads, we propose a novel integer-based block pruning to prune unimportant blocks in the attention matrix at run time. We also propose integer-based head pruning to detect and prune unimportant heads at an early stage at run time, as well as an approximation method that reduces attention computations. To efficiently support these methods with lower latency, we propose the HDP accelerator (HDPA) as a coprocessor architecture, synthesized in two configurations—HDPA-edge and HDPA-server—to meet the needs of mobile and server platforms. Extensive experiments with different transformer models and benchmarks demonstrate that HDPA-server achieves 481× and 381× speedup in attention-layer computation over an Intel i7-1185G7 CPU and an NVIDIA T4 GPU, respectively. Compared to other state-of-the-art (SOTA) accelerators, HDPA achieves 1.26×–2.08× higher throughput, 1.3×–18× greater MAC efficiency, and 1.1×–5.1× improved energy efficiency when normalized to the same computational load.
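A minimal sketch of run-time block pruning of the attention-score matrix, under assumed shapes and a simple magnitude-based block saliency (not the HDPA design or its integer arithmetic):

```python
# Hypothetical block pruning of attention scores: keep only the top-k score
# blocks per query block before the softmax. Assumes seq divisible by `block`
# and keep <= number of blocks per row.
import torch

def block_pruned_attention(q, k, v, block=32, keep=4):
    # q, k, v: (seq, dim)
    scores = q @ k.t() / q.shape[-1] ** 0.5
    nb = scores.shape[0] // block
    tiles = scores.view(nb, block, nb, block)
    importance = tiles.abs().mean(dim=(1, 3))               # (nb, nb) block saliency
    keep_idx = importance.topk(keep, dim=1).indices
    mask = torch.zeros(nb, nb, dtype=torch.bool).scatter_(1, keep_idx, True)
    mask = mask.repeat_interleave(block, 0).repeat_interleave(block, 1)
    scores = scores.masked_fill(~mask, float("-inf"))       # drop pruned blocks
    return torch.softmax(scores, dim=-1) @ v
```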
{"title":"Efficient Transformer Inference Through Hybrid Dynamic Pruning","authors":"Ghadeer A. Jaradat;Mohammed F. Tolba;Ghada Alsuhli;Hani Saleh;Mahmoud Al-Qutayri;Thanos Stouraitis","doi":"10.1109/TAI.2025.3563144","DOIUrl":"https://doi.org/10.1109/TAI.2025.3563144","url":null,"abstract":"In the world of deep learning, transformer models have become very significant, leading to improvements in many areas, from understanding language to recognizing images, covering a wide range of applications. Despite their success, the deployment of these models in real-time applications, particularly on edge devices, poses significant challenges due to their computational intensity and memory demands. To overcome these challenges, we introduce a novel hybrid dynamic pruning (HDP) technique, an efficient algorithm-architecture codesign approach that accelerates transformers using head sparsity, block sparsity, and approximation to reduce computations in attention and reduce memory access. With the observation of the huge redundancy in attention scores and attention heads, we propose a novel integer-based block pruning to prune unimportant blocks in the attention matrix at run time. We also propose integer-based head pruning to detect and prune unimportant heads at an early stage at run time. Also, we propose an approximation method that reduces attention computations. To efficiently support these methods with lower latency, we propose the HDP accelerator (HDPA) as a coprocessor architecture, synthesized in two configurations—HDPA-edge and HDPA-server—to meet the needs of mobile and server platforms. Extensive experiments with different transformer models and benchmarks demonstrate that HDPA-server achieves <inline-formula> <tex-math>$481times$</tex-math></inline-formula> and <inline-formula> <tex-math>$381times$</tex-math></inline-formula> speedup in attention layer computation over Intel i7-1185G7 CPU and NVIDIA T4 GPU, respectively. Compared to other state-of-the-art (SOTA) accelerators, HDPA achieves <inline-formula> <tex-math>$1.26times$</tex-math></inline-formula> to <inline-formula> <tex-math>$2.08times$</tex-math></inline-formula> higher throughput, <inline-formula> <tex-math>$1.3times$</tex-math></inline-formula> to <inline-formula> <tex-math>$18times$</tex-math></inline-formula> greater MAC efficiency, and <inline-formula> <tex-math>$1.1times$</tex-math></inline-formula> to <inline-formula> <tex-math>$5.1times$</tex-math></inline-formula> improved energy efficiency, when normalized to the same computational load.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 12","pages":"3273-3286"},"PeriodicalIF":0.0,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Is Perceptual Encryption Secure? A Security Benchmark for Perceptual Encryption Methods
Pub Date : 2025-04-22 DOI: 10.1109/TAI.2025.3563438
Umesh Kashyap;Sudev Kumar Padhi;Sk. Subidh Ali
Perceptual encryption (PE) methods are the key enablers for protecting image privacy in deep learning-based applications in the cloud. In PE, the image content is obfuscated such that deep learning models can still work on the obfuscated data. The key advantage of PE over homomorphic encryption is that the features required by the target deep learning model are preserved in the encrypted data, so the model does not need to be retrained on the encrypted data. Recently, a significant number of PE methods have been proposed in the literature, each improving over the others. In this article, we perform a detailed security analysis of three best-known PE methods designed to protect image privacy, namely, adversarial visual information hiding, learnable encryption, and encryption-then-compression. We propose a new generative adversarial network (GAN)-based security evaluation framework that successfully reconstructs the original images encrypted by these methods, revealing clear security flaws. We conducted extensive experiments using different datasets and deep learning models. The results show significant vulnerabilities in the existing key-based PE methods.
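For context, a minimal sketch of the kind of key-based block scrambling that typifies perceptual encryption; this is a generic illustration, not one of the three schemes evaluated above:

```python
# Generic, hypothetical perceptual-encryption sketch: key-driven block
# shuffling that hides image content while keeping block statistics learnable.
import numpy as np

def perceptually_encrypt(img, key, block=16):
    # img: (H, W, C) array with H and W divisible by `block`
    h, w, c = img.shape
    gh, gw = h // block, w // block
    tiles = img.reshape(gh, block, gw, block, c).transpose(0, 2, 1, 3, 4)
    tiles = tiles.reshape(gh * gw, block, block, c)
    perm = np.random.default_rng(key).permutation(gh * gw)  # secret permutation
    tiles = tiles[perm]
    out = tiles.reshape(gh, gw, block, block, c).transpose(0, 2, 1, 3, 4)
    return out.reshape(h, w, c)
```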
{"title":"Is Perceptual Encryption Secure? A Security Benchmark for Perceptual Encryption Methods","authors":"Umesh Kashyap;Sudev Kumar Padhi;Sk. Subidh Ali","doi":"10.1109/TAI.2025.3563438","DOIUrl":"https://doi.org/10.1109/TAI.2025.3563438","url":null,"abstract":"Perceptual encryption (PE) methods are the key enablers for protecting image privacy for deep learning-based applications in the cloud. In PE, the image content is obfuscated such that the deep learning models can work on the obfuscated data. The key advantage of PE over holomorphic encryption is that, unlike holomorphic encryption, the feature required by the target deep learning model is preserved in the encrypted data. Therefore, the model is not required to be retrained on the encrypted data. Recently, a significant number of PE methods have been proposed in the literature, each improving over the others. In this article, we perform a detailed security analysis of three best-known PE methods, namely, adversarial visual information hiding, learnable encryption, and encryption-then-compression methods designed to protect the privacy of images. We proposed a new generative adversarial network (GAN)-based security evaluation framework to successfully reconstruct the original images encrypted by these methods, showing clear security flaws. We conducted extensive experiments using different datasets and deep learning models. The results show significant vulnerabilities in the existing key-based PE methods.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 12","pages":"3287-3296"},"PeriodicalIF":0.0,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fuzzy Information Quantity Measurement and Feature Selection by Macrogranular Entropy
Pub Date : 2025-04-21 DOI: 10.1109/TAI.2025.3562839
Zhilin Zhu;Chucai Zhang;Jianhua Dai
Feature selection is an important data preprocessing step in artificial intelligence that aims to eliminate redundant features while retaining essential ones. Measuring feature significance and the relevance between features is a significant challenge. Fuzzy information entropy is an extension of Shannon entropy and is widely used to quantify the information of fuzzy divisions. However, it has significant limitations, notably the lack of monotonicity in the fuzzy conditional entropy measure of decision uncertainty during feature selection. We introduce a novel measure, macrogranular entropy (ME), and construct generalized forms such as conditional ME, mutual macrogranular information, and joint ME. The conditional ME exhibits monotonicity when measuring decision uncertainty. In addition, we propose two feature selection algorithms: one based on monotonic conditional ME (MCME) and the other based on the degree of symmetric association (ADSA). The ADSA and MCME algorithms are compared against eight other feature selection algorithms through a series of experiments, based on the classification performance of SVM and NB classifiers and evaluation metrics including F1-score and recall. In terms of all four evaluation metrics, ADSA and MCME achieved the top two rankings, respectively. Specifically, on the NB and SVM classifiers, the ADSA algorithm improves average accuracy by 12.22% and 2.88% over the original feature set, while MCME improves accuracy by 10.07% and 1.01%, respectively. Experimental comparisons demonstrate that the ADSA algorithm effectively removes redundant information from the dataset during feature selection.
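A minimal sketch of entropy-style greedy feature selection, using mutual information as a stand-in for the relevance and redundancy measures; this is a generic relevance-minus-redundancy loop, not the MCME or ADSA algorithms:

```python
# Hypothetical greedy selection: add the feature with the best trade-off
# between relevance to the labels and redundancy with already-chosen features.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def greedy_select(X, y, k):
    relevance = mutual_info_classif(X, y, random_state=0)      # I(f; y)
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best, best_score = None, -np.inf
        for f in remaining:
            # average redundancy with features already selected
            red = np.mean([mutual_info_regression(X[:, [s]], X[:, f],
                                                  random_state=0)[0]
                           for s in selected]) if selected else 0.0
            score = relevance[f] - red
            if score > best_score:
                best, best_score = f, score
        selected.append(best)
        remaining.remove(best)
    return selected
```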
{"title":"Fuzzy Information Quantity Measurement and Feature Selection by Macrogranular Entropy","authors":"Zhilin Zhu;Chucai Zhang;Jianhua Dai","doi":"10.1109/TAI.2025.3562839","DOIUrl":"https://doi.org/10.1109/TAI.2025.3562839","url":null,"abstract":"Feature selection is an important data preprocessing process in artificial intelligence, which aims to eliminate redundant features while retaining essential features. Measuring feature significance and relevance between features is a significant challenge. Fuzzy information entropy is an extension of Shannon entropy. It is widely used for quantifying the information of fuzzy divisions. However, it has significant limitations, notably the lack of monotonicity in fuzzy conditional entropy measure of decision uncertainty in the feature selection process. We introduce a novel measurement macrogranular entropy (ME) and construct some generalized forms, such as conditional ME, mutual macrogranular information, and joint ME. The conditional ME exhibits monotonicity when measuring decision uncertainty. In addition, we propose two feature selection algorithms: one based on monotonic conditional ME (MCME), and the other based on the degree of symmetric association (ADSA). The ADSA algorithm and the MCME algorithm are compared against eight other feature selection algorithms through a series of experiments. The comparison was conducted based on classification performance using SVM and NB classifiers, and evaluation metrics including F1-score and recall. In terms of all four evaluation metrics, ADSA and MCME achieved the top two rankings, respectively. Specifically, on the NB and SVM classifiers, the ADSA algorithm improves the average accuracy by 12.22% and 2.88% compared to the original feature set, while MCME improves the accuracy by 10.07% and 1.01%, respectively. Experimental comparisons demonstrate that ADSA algorithm effectively removes redundant information from the dataset during feature selection.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 12","pages":"3258-3272"},"PeriodicalIF":0.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Model Selection of Anomaly Detectors in the Absence of Labeled Validation Data
Pub Date : 2025-04-21 DOI: 10.1109/TAI.2025.3562505
Clement Fung;Chen Qiu;Aodong Li;Maja Rudolph
Anomaly detection is the task of identifying abnormal samples in large unlabeled datasets. Although the advent of foundation models has produced powerful zero-shot anomaly detection methods, their deployment in practice is often hindered by the absence of labeled validation data—without it, detection performance cannot be evaluated reliably. In this work, we propose selection with synthetic anomalies (SWSA): a general-purpose framework to select image-based anomaly detectors without labeled validation data. Instead of collecting labeled validation data, we generate synthetic anomalies from a small support set of normal images without using any training or fine-tuning. Our synthetic anomalies are then used to create detection tasks that compose a validation framework for model selection. In an empirical study, we evaluate SWSA with three types of synthetic anomalies and on two selection tasks: model selection of image-based anomaly detectors and prompt selection for CLIP-based anomaly detection. SWSA often selects models and prompts that match selections made with a ground-truth validation set, outperforming baseline selection strategies.
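A minimal sketch of the selection idea, with a CutPaste-style patch swap standing in for the synthetic-anomaly generator; the exact anomaly types and scoring protocol here are assumptions, not the paper's specification:

```python
# Hypothetical SWSA-style selection: build a proxy detection task from normal
# images plus synthetic anomalies, then pick the detector with the best AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

def cutpaste(src, dst, size=32, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = dst.shape[:2]
    y, x = rng.integers(0, h - size), rng.integers(0, w - size)
    out = dst.copy()
    out[y:y + size, x:x + size] = src[y:y + size, x:x + size]  # pasted patch
    return out

def select_detector(detectors, normals, rng=None):
    # detectors: dict {name: score_fn}, score_fn(img) -> anomaly score
    rng = rng or np.random.default_rng(0)
    fakes = [cutpaste(normals[rng.integers(len(normals))], img, rng=rng)
             for img in normals]
    images = list(normals) + fakes
    labels = [0] * len(normals) + [1] * len(fakes)
    aurocs = {name: roc_auc_score(labels, [score_fn(im) for im in images])
              for name, score_fn in detectors.items()}
    return max(aurocs, key=aurocs.get)
```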
{"title":"Model Selection of Anomaly Detectors in the Absence of Labeled Validation Data","authors":"Clement Fung;Chen Qiu;Aodong Li;Maja Rudolph","doi":"10.1109/TAI.2025.3562505","DOIUrl":"https://doi.org/10.1109/TAI.2025.3562505","url":null,"abstract":"Anomaly detection is the task of identifying abnormal samples in large unlabeled datasets. Although the advent of foundation models has produced powerful zero-shot anomaly detection methods, their deployment in practice is often hindered by the absence of labeled validation data—without it, detection performance cannot be evaluated reliably. In this work, we propose selection with synthetic anomalies (SWSA): a general-purpose framework to select image-based anomaly detectors without labeled validation data. Instead of collecting labeled validation data, we generate synthetic anomalies from a small support set of normal images without using any training or fine-tuning. Our synthetic anomalies are then used to create detection tasks that compose a validation framework for model selection. In an empirical study, we evaluate SWSA with three types of synthetic anomalies and on two selection tasks: model selection of image-based anomaly detectors and prompt selection for CLIP-based anomaly detection. SWSA often selects models and prompts that match selections made with a ground-truth validation set, outperforming baseline selection strategies.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 12","pages":"3248-3257"},"PeriodicalIF":0.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Phenotype and Genotype Based Sample Aware Surrogate-Assisted Genetic Programming in Dynamic Flexible Job Shop Scheduling
Pub Date : 2025-04-17 DOI: 10.1109/TAI.2025.3562161
Luyao Zhu;Fangfang Zhang;Xiaodong Zhu;Ke Chen;Mengjie Zhang
Genetic programming (GP) has been widely applied to evolve scheduling heuristics for dynamic flexible job shop scheduling (DFJSS). However, evaluating GP individuals is computationally expensive, especially in large-scale DFJSS scenarios. A k-nearest neighbor (KNN) based surrogate has been successfully used to reduce individual evaluation time for GP by predicting the fitness of an individual from its most similar archived sample. In particular, the phenotypes of GP individuals have been used to generate samples for KNN-based surrogates, on the precondition that individuals with the same phenotype have the same or similar fitness. However, their real fitness may differ greatly due to the different input decision situations used for fitness calculations in DFJSS. Thus, extracting samples based only on the phenotypes of GP individuals can decrease the accuracy of KNN surrogates. This article proposes a KNN-based surrogate-assisted GP algorithm that considers both the phenotype and genotype of GP individuals when generating samples. Specifically, a genotypic characterization based on terminal frequency is designed to measure the similarity of individual genotypes. The results show that, with the same training time, the proposed algorithm converges faster and achieves better scheduling heuristics than the state-of-the-art algorithms in most examined scenarios. With the same number of generations, the proposed algorithm obtains comparable performance while requiring only about one third of the training time of baseline GP. The effectiveness of the proposed algorithm is also verified from different aspects, e.g., the relation between genotype correlation and individual fitness differences, and population diversity.
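A minimal sketch of the surrogate idea: characterize an individual by a terminal-frequency (genotype) vector, optionally concatenated with a phenotype vector, and predict fitness from the nearest archived sample. The representation details are assumptions, not the paper's exact design:

```python
# Hypothetical 1-NN surrogate for GP fitness prediction.
import numpy as np

def terminal_frequency(tree_terminals, vocabulary):
    # tree_terminals: list of terminal symbols appearing in a GP tree
    counts = np.array([tree_terminals.count(t) for t in vocabulary], float)
    return counts / max(counts.sum(), 1.0)

def surrogate_fitness(query_vec, archive_vecs, archive_fitness):
    # Predict the fitness of the most similar archived individual.
    dists = np.linalg.norm(archive_vecs - query_vec, axis=1)
    return archive_fitness[int(np.argmin(dists))]
```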
{"title":"Phenotype and Genotype Based Sample Aware Surrogate-Assisted Genetic Programming in Dynamic Flexible Job Shop Scheduling","authors":"Luyao Zhu;Fangfang Zhang;Xiaodong Zhu;Ke Chen;Mengjie Zhang","doi":"10.1109/TAI.2025.3562161","DOIUrl":"https://doi.org/10.1109/TAI.2025.3562161","url":null,"abstract":"Genetic programming (GP) has been widely applied to evolve scheduling heuristics for dynamic flexible job shop scheduling (DFJSS). However, the evaluation of GP individuals is computationally expensive, especially in large scale DFJSS scenarios. A k-nearest neighbor (KNN) based surrogate has been successfully used to reduce individual evaluation time for GP by predicting the fitness of an individual with the most similar sample in KNN. Particularly, the phenotypes of GP individuals have been utilized to generate samples for KNN-based surrogates with a precondition that the fitness of individuals with the same phenotype is the same or similar. However, their real fitness may differ greatly due to different input decision situations for fitness calculations in DFJSS. Thus, only considering phenotypes of GP individuals to extract samples could decrease the accuracy of KNN surrogates. This article proposes a KNN-based surrogate assisted GP algorithm by considering both the phenotype and genotype of GP individuals to generate samples. Specifically, a genotypic characterization based on terminal frequency is designed to measure the similarity of individual genotypes. The results show that with the same training time, the proposed algorithm can converge fast and achieve better scheduling heuristics than the state-of-the-art algorithms in most examined scenarios. With the same number of generations, the proposed algorithm can obtain comparable performance but only needs about one third of the training time of baseline GP. The effectiveness of the proposed algorithm is also verified from different aspects, e.g., relation between genotype correlation and fitness difference of individuals, and population diversity.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 12","pages":"3232-3247"},"PeriodicalIF":0.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reinforcement Learning for Efficient Multiagent Task Allocation in Potential Game Model
Pub Date : 2025-04-17 DOI: 10.1109/TAI.2025.3562160
Yuxing Xing;Caixia Chen;Jie Wu;Jie Chen
The potential game has been widely used to describe multiagent task allocation. However, traditional game-theoretic algorithms perform unsatisfactorily in scenarios with a high agent count. To address this, we employ a reinforcement learning algorithm that enables each agent to independently make decisions in response to other agents' decisions and to variations in the number of agents, ultimately working toward a desired goal. First, we construct a potential game for multiagent task allocation and design a corresponding utility function for each agent. Then, we propose a deep Q-network algorithm based on a graph neural network and enhance the agent selection mechanism in this learning algorithm. During each iteration, a task is randomly selected for an agent from the participant set, and each agent updates its strategy accordingly. Finally, numerical simulations comparing several representative game-theoretic algorithms highlight the advantages and performance of our proposed GDQ-Net algorithm across various tasks and numbers of agents under the constructed model.
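A minimal sketch of the learning step such a method relies on: a standard deep Q-network temporal-difference update, with the graph-neural-network encoder left abstract. The function and tensor names here are assumptions, not the GDQ-Net implementation:

```python
# Hypothetical DQN loss: each agent scores candidate tasks with a Q-network
# and is trained toward the usual temporal-difference target.
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    # batch: dict of tensors (state, action, reward, next_state, done)
    q = q_net(batch["state"]).gather(1, batch["action"].unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_q = target_net(batch["next_state"]).max(dim=1).values
        target = batch["reward"] + gamma * (1 - batch["done"]) * next_q
    return F.smooth_l1_loss(q, target)
```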
{"title":"Reinforcement Learning for Efficient Multiagent Task Allocation in Potential Game Model","authors":"Yuxing Xing;Caixia Chen;Jie Wu;Jie Chen","doi":"10.1109/TAI.2025.3562160","DOIUrl":"https://doi.org/10.1109/TAI.2025.3562160","url":null,"abstract":"The potential game has been widely used to describe multiagent task allocation. However, the application of traditional game-theoretic algorithms has shown unsatisfactory performance in scenarios with a high agent count. For this, we employ reinforcement learning algorithm to enable each agent to independently make decision in response to other agents’ decisions and variations in the number of agents, ultimately working towards achieving a desired goal. First, we construct a potential game for multiagent task allocation and design a corresponding utility function for each agent. Then, we propose a deep q-network algorithm based on graph neural network, and enhance the agent selection mechanism in this learning algorithm. During each iteration, a task is randomly selected for an agent from the participant set, and each agent updates its strategy accordingly. Finally, by comparing several representative game theoretical algorithms, the numerical simulations highlight the advantages and performance of our proposed GDQ-Net algorithm across various tasks and numbers of agents under the constructed model.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 12","pages":"3217-3231"},"PeriodicalIF":0.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ownership Infringement Detection for Generative Adversarial Networks Against Model Stealing
Pub Date : 2025-04-16 DOI: 10.1109/TAI.2025.3560921
Hailong Hu;Jun Pang
Generative adversarial networks (GANs) have shown remarkable success in image synthesis, making GAN models themselves commercially valuable to legitimate model owners. Therefore, it is critical to technically protect the intellectual property of GANs. Prior works need to tamper with the training set or training process to verify the ownership of a GAN. In this article, we show that these methods are not robust to emerging model extraction attacks. Then, we propose a new method GAN-Guards which utilizes the common characteristics of a target model and its stolen models for ownership infringement detection. Our method can be directly applicable to all well-trained GANs as it does not require retraining target models. Extensive experimental results show that our new method achieves superior detection performance, compared with the watermark-based and fingerprint-based methods. Finally, we demonstrate the effectiveness of our method with respect to the number of generations of model extraction attacks, the number of generated samples, and adaptive attacks.
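As a rough illustration of comparing a suspect generator against a target by the common characteristics of their outputs, the sketch below measures the distance between feature statistics of generated samples. This is a generic proxy of my own framing; the actual GAN-Guards detection statistic is not reproduced here, and `encoder` is an assumed pretrained feature extractor:

```python
# Hypothetical fingerprint comparison: a suspiciously small distance between
# sample statistics flags a possibly stolen generator.
import torch

def feature_stats(samples, encoder):
    feats = encoder(samples)                      # (N, D) embeddings
    return feats.mean(0), torch.cov(feats.t())

def generator_distance(gen_a, gen_b, encoder, n=256, z_dim=128):
    za, zb = torch.randn(n, z_dim), torch.randn(n, z_dim)
    ma, ca = feature_stats(gen_a(za), encoder)
    mb, cb = feature_stats(gen_b(zb), encoder)
    return torch.norm(ma - mb) + torch.norm(ca - cb)   # crude Fréchet-style proxy
```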
{"title":"Ownership Infringement Detection for Generative Adversarial Networks Against Model Stealing","authors":"Hailong Hu;Jun Pang","doi":"10.1109/TAI.2025.3560921","DOIUrl":"https://doi.org/10.1109/TAI.2025.3560921","url":null,"abstract":"Generative adversarial networks (GANs) have shown remarkable success in image synthesis, making GAN models themselves commercially valuable to legitimate model owners. Therefore, it is critical to technically protect the intellectual property of GANs. Prior works need to tamper with the training set or training process to verify the ownership of a GAN. In this article, we show that these methods are not robust to emerging model extraction attacks. Then, we propose a new method GAN-Guards which utilizes the common characteristics of a target model and its stolen models for ownership infringement detection. Our method can be directly applicable to all well-trained GANs as it does not require retraining target models. Extensive experimental results show that our new method achieves superior detection performance, compared with the watermark-based and fingerprint-based methods. Finally, we demonstrate the effectiveness of our method with respect to the number of generations of model extraction attacks, the number of generated samples, and adaptive attacks.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 11","pages":"3018-3029"},"PeriodicalIF":0.0,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Novel Privacy-Enhancing Framework for Low-Dose CT Denoising
Pub Date : 2025-04-16 DOI: 10.1109/TAI.2025.3561092
Ziyuan Yang;Huijie Huangfu;Maosong Ran;Zhiwen Wang;Hui Yu;Mengyu Sun;Yi Zhang
Deep learning (DL) has made significant advances in tomographic imaging, particularly in low-dose computed tomography (LDCT) denoising. A recent trend involves servers training powerful models with enormous self-collected data and providing application programming interfaces (APIs) for users, such as Chat-GPT. To avoid model leakage, users are required to upload their data to the server. This approach is particularly advantageous for devices with limited computational capabilities, as it offloads computation to the server, easing the workload on the devices themselves. However, it raises public concerns about the risk of privacy disclosure. Hence, to alleviate these concerns, we propose to denoise LDCT directly in the encrypted domain, achieving privacy-preserving cloud services without exposing private data to the server. Concretely, we employ homomorphic encryption to encrypt private LDCT, which is then transferred to the server model trained with plaintext LDCT for further denoising. Since fundamental DL operations, such as convolution and linear transformation, cannot be used directly in the encrypted domain, we transform these mathematical operations from the plaintext domain into operations in the encrypted domain. Moreover, we present two interactive frameworks, for linear and nonlinear models, both of which achieve lossless operation. In this way, the proposed methods achieve two merits: data privacy is well protected, and the server model is free from the risk of model leakage. We also provide theoretical proof of the lossless property of our framework. Finally, experiments demonstrate that the transferred contents are well protected and cannot be reconstructed.

The code is released at https://github.com/Zi-YuanYang/Encrypt_LDCT_Recon
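To illustrate why linear operations survive encryption, here is a toy additively homomorphic example using textbook Paillier with deliberately tiny, insecure parameters; it is only a conceptual sketch, not the article's scheme or parameter choice:

```python
# Toy Paillier demo (insecure demo primes): a server can form a weighted sum
# of encrypted pixels without ever seeing them; only the key holder decrypts.
import math, random

p, q = 1009, 1013                       # demo primes only, far too small for real use
n, n2 = p * q, (p * q) ** 2
g, lam = n + 1, math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

def encrypted_weighted_sum(ciphertexts, weights):
    # E(sum_i w_i * x_i) is obtained by multiplying E(x_i) ** w_i mod n^2
    out = 1
    for c, w in zip(ciphertexts, weights):
        out = out * pow(c, w, n2) % n2
    return out

pixels, weights = [12, 40, 7], [3, 1, 2]            # hypothetical integer data
enc = [encrypt(x) for x in pixels]
assert decrypt(encrypted_weighted_sum(enc, weights)) == 3 * 12 + 1 * 40 + 2 * 7
```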

{"title":"A Novel Privacy-Enhancing Framework for Low-Dose CT Denoising","authors":"Ziyuan Yang;Huijie Huangfu;Maosong Ran;Zhiwen Wang;Hui Yu;Mengyu Sun;Yi Zhang","doi":"10.1109/TAI.2025.3561092","DOIUrl":"https://doi.org/10.1109/TAI.2025.3561092","url":null,"abstract":"Deep learning (DL) has made significant advancements in tomographic imaging, particularly in low-dose computed tomography (LDCT) denoising. A recent trend involves servers training powerful models with enormous self-collected data and providing application programming interfaces (APIs) for users, such as Chat-GPT. To avoid model leakage, users are required to upload their data to the server. This approach is particularly advantageous for devices with limited computational capabilities, as it offloads computation to the server, easing the workload on the devices themselves. However, this way raises public concerns about the privacy disclosure risk. Hence, to alleviate related concerns, we propose to directly denoise LDCT in the encrypted domain to achieve privacy-preserving cloud services without exposing private data to the server. Concretely, we employ homomorphic encryption to encrypt private LDCT, which is then transferred to the server model trained with plaintext LDCT for further denoising. Since fundamental DL operations, such as convolution and linear transformation, cannot be directly used in the encrypted domain, we transform the fundamental mathematic operations in the plaintext domain into the operations in the encrypted domain. Moreover, we present two interactive frameworks for linear and nonlinear models, both of which can achieve lossless operating. In this way, the proposed methods can achieve two merits, the data privacy is well protected, and the server model is free from the risk of model leakage. Moreover, we provide theoretical proof to validate the lossless property of our framework. Finally, experiments were conducted to demonstrate that the transferred contents are well protected and cannot be reconstructed.<xref><sup>1</sup></xref><fn><label><sup>1</sup></label><p>The codes are released at <uri>https://github.com/Zi-YuanYang/Encrypt_LDCT_Recon</uri></p></fn>","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 11","pages":"3043-3055"},"PeriodicalIF":0.0,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integrating Large Language Model for Improved Causal Discovery
Pub Date : 2025-04-16 DOI: 10.1109/TAI.2025.3560927
Taiyu Ban;Lyuzhou Chen;Derui Lyu;Xiangyu Wang;Qinrui Zhu;Qiang Tu;Huanhuan Chen
Recovering the structure of causal graphical models from observational data is an essential yet challenging task for causal discovery in scientific scenarios. Domain-specific causal discovery usually relies on expert validation or prior analysis to improve the reliability of the recovered causality, which is limited by the scarcity of expert resources. Recently, large language models (LLMs) have been used for causal analysis across various domain-specific scenarios, suggesting their potential to act as autonomous experts guiding data-based structure learning. However, integrating LLMs into causal discovery is challenging because LLM-based reasoning about the actual causal structure can be inaccurate. To address this challenge, we propose an error-tolerant LLM-driven causal discovery framework. The error-tolerant mechanism is threefold, with sufficient consideration of potential inaccuracies. In the LLM-based reasoning process, an accuracy-oriented prompting strategy restricts causal analysis to a reliable range. Next, a knowledge-to-structure transition aligns LLM-derived causal statements with structural causal interactions. In the structure learning process, goodness-of-fit to the data and adherence to LLM-derived priors are balanced to further address prior inaccuracies. Evaluation on eight real-world causal structures demonstrates the efficacy of our LLM-driven approach in improving data-based causal discovery, along with its robustness to inaccurate LLM-derived priors.
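A minimal sketch of balancing data fit against soft LLM-derived priors: a generic Gaussian BIC score plus a penalty for ignoring LLM-suggested edges. The article's actual objective and prior encoding are not reproduced here:

```python
# Hypothetical score-with-soft-priors objective for DAG structure search.
import numpy as np

def bic_score(X, adj):
    # Gaussian BIC of a DAG: regress each node on its parents (adj[i, j] = 1
    # means an edge i -> j).
    n, d = X.shape
    score = 0.0
    for j in range(d):
        parents = np.flatnonzero(adj[:, j])
        if len(parents):
            beta, *_ = np.linalg.lstsq(X[:, parents], X[:, j], rcond=None)
            resid = X[:, j] - X[:, parents] @ beta
        else:
            resid = X[:, j] - X[:, j].mean()
        score += -n / 2 * np.log(resid.var() + 1e-12) - len(parents) / 2 * np.log(n)
    return score

def prior_penalty(adj, llm_edges, weight=2.0):
    # llm_edges: dict {(i, j): confidence in [0, 1]} from LLM reasoning;
    # missing a confident LLM edge costs more than missing an uncertain one.
    return weight * sum(conf * (1 - adj[i, j]) for (i, j), conf in llm_edges.items())

def total_score(X, adj, llm_edges):
    return bic_score(X, adj) - prior_penalty(adj, llm_edges)
```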
{"title":"Integrating Large Language Model for Improved Causal Discovery","authors":"Taiyu Ban;Lyuzhou Chen;Derui Lyu;Xiangyu Wang;Qinrui Zhu;Qiang Tu;Huanhuan Chen","doi":"10.1109/TAI.2025.3560927","DOIUrl":"https://doi.org/10.1109/TAI.2025.3560927","url":null,"abstract":"Recovering the structure of causal graphical models from observational data is an essential yet challenging task for causal discovery in scientific scenarios. Domain-specific causal discovery usually relies on expert validation or prior analysis to improve the reliability of recovered causality, which is yet limited by the scarcity of expert resources. Recently, large language models (LLM) have been used for causal analysis across various domain-specific scenarios, suggesting its potential as autonomous expert roles in guiding data-based structure learning. However, integrating LLMs into causal discovery faces challenges due to inaccuracies in LLM-based reasoning on revealing the actual causal structure. To address this challenge, we propose an error-tolerant LLM-driven causal discovery framework. The error-tolerant mechanism is designed three-fold with sufficient consideration on potential inaccuracies. In the LLM-based reasoning process, an accuracy-oriented prompting strategy restricts causal analysis to a reliable range. Next, a knowledge-to-structure transition aligns LLM-derived causal statements with structural causal interactions. In the structure learning process, the goodness-of-fit to data and adherence to LLM-derived priors are balanced to further address prior inaccuracies. Evaluation of eight real-world causal structures demonstrates the efficacy of our LLM-driven approach in improving data-based causal discovery, along with its robustness to inaccurate LLM-derived priors.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"6 11","pages":"3030-3042"},"PeriodicalIF":0.0,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145456013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0