IEEE Transactions on Artificial Intelligence: Latest Publications

Malicious Clients and Contribution Co-Aware Federated Unlearning
Pub Date : 2025-03-28 DOI: 10.1109/TAI.2025.3556092
Yang Wang;Xue Li;Siguang Chen
Existing federated unlearning methods that eliminate the negative impact of malicious clients on the global model rely on unreasonable assumptions (e.g., an auxiliary dataset) or fail to balance model performance and efficiency. To overcome these shortcomings, we propose a malicious clients and contribution co-aware federated unlearning (MCC-Fed) method. Specifically, we introduce a method for detecting malicious clients to reduce their impact on the global model. Next, we design a contribution-aware metric, which accurately quantifies the negative impact of malicious clients on the global model by calculating their historical contribution ratio. Then, based on this metric, we propose a novel federated unlearning method in which benign clients use the contribution-aware metric as a regularization term to unlearn the influence of malicious clients, thereby restoring model performance. Experimental results demonstrate that our method effectively addresses the issue of excessive unlearning during the unlearning process, improves the efficiency of performance recovery, and enhances robustness against malicious clients. Federated unlearning effectively removes malicious clients’ influence while reducing training costs compared to retraining.
IEEE Transactions on Artificial Intelligence, vol. 6, no. 10, pp. 2848–2857.
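The regularized unlearning step can be illustrated with a small sketch. The penalty form, the contribution-ratio definition, and all function names below are assumptions for illustration only; the paper's actual MCC-Fed formulation is not reproduced here:

```python
import numpy as np

def contribution_ratio(malicious_updates, all_updates):
    """Assumed historical contribution ratio: the fraction of total update
    magnitude contributed by the detected malicious clients."""
    mal = sum(np.linalg.norm(u) for u in malicious_updates)
    tot = sum(np.linalg.norm(u) for u in all_updates)
    return mal / tot

def unlearn_step(w, grad_benign, mal_direction, ratio, lr=0.1):
    """One benign-client unlearning step: the usual gradient plus a
    regularization term that pushes the model away from the malicious
    clients' (normalized) aggregate update direction, weighted by their
    contribution ratio."""
    d = mal_direction / np.linalg.norm(mal_direction)
    return w - lr * (grad_benign + ratio * d)
```

Benign clients would apply `unlearn_step` during recovery rounds in place of a plain SGD step; detecting which clients are malicious is a separate component of the method.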
Citations: 0
COSMIC: A Novel Contextualized Orientation Similarity Metric Incorporating Consistency for NLG Assessment
Pub Date : 2025-03-27 DOI: 10.1109/TAI.2025.3574292
Hadi Al Khansa;Mariette Awad
The field of natural language generation (NLG) has undergone remarkable expansion, largely enabled by enhanced model architectures, affordable computing, and the availability of large datasets. With NLG systems finding increasing adoption across many applications, the imperative to evaluate their performance has grown exponentially. However, relying solely on human evaluation is not scalable. To address this challenge, it is important to explore more scalable evaluation methodologies that can ensure the continued development and efficacy of NLG systems. Presently, only a few automated evaluation metrics are commonly utilized, with BLEU and ROUGE being the predominant choices. Yet, these metrics have faced criticism for their limited correlation with human judgment, their focus on surface-level similarity, and their tendency to overlook semantic nuances. While transformer metrics have been introduced to capture semantic similarity, our study reveals scenarios where even these metrics fail. Considering these limitations, we propose and validate a novel metric called “COSMIC,” which incorporates contradiction detection with contextual embedding similarity. To illustrate these limitations and showcase the performance of COSMIC, we conducted a case study using a fine-tuned LLAMA model to transform questions and short answers into declarative sentences. This task, despite its significance in generating natural language inference datasets, has not received widespread exploration since 2018.
Results show that COSMIC can capture cases of contradiction between the reference and generated text while staying highly correlated with embedding similarity when the reference and generated text are consistent and semantically similar. BLEU, ROUGE, and most transformer-based metrics demonstrate an inability to identify contradictions.

IEEE Transactions on Artificial Intelligence, vol. 7, no. 1, pp. 332–346.
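The combination rule at the heart of COSMIC can be caricatured in a few lines. The embedding vectors and contradiction probability are treated as inputs here (a real system would obtain them from a contextual encoder and an NLI model), and the multiplicative gating is an assumption for illustration, not the paper's exact formula:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosmic_score(emb_ref, emb_gen, p_contradiction):
    """Embedding similarity gated by a contradiction check: the score
    collapses when an NLI model flags the generated text as contradicting
    the reference."""
    return cosine(emb_ref, emb_gen) * (1.0 - p_contradiction)
```

With `p_contradiction` near 1 the score collapses even when the embeddings are nearly parallel, which is exactly the failure mode that BLEU, ROUGE, and plain embedding similarity miss.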
Citations: 0
IT2-ENFIS: Interval Type-2 Exclusionary Neuro-Fuzzy Inference System, an Attempt Toward Trustworthy Regression Learning
Pub Date : 2025-03-27 DOI: 10.1109/TAI.2025.3574299
Chuan Xue;Jianli Gao;Zhou Gu
As machine learning technologies progress and are increasingly applied to critical and sensitive fields, the reliability issues of earlier technologies are becoming more evident. For the new generation of machine learning solutions, trustworthiness frequently takes precedence over performance when evaluating their applicability for specific applications. This manuscript introduces the IT2-ENFIS neuro-fuzzy model, a robust and trustworthy single-network solution specifically designed for data regression tasks affected by substantial label noise and outliers. The primary architecture applies interval type-2 fuzzy logic and the Sugeno inference engine. A meta-heuristic gradient-based optimizer (GBO), the Huber loss function, and the Cauchy M-estimator are employed for robust learning. IT2-ENFIS demonstrates superior performance on noise-contaminated datasets and excels in real-world scenarios, with excellent generalization capability and interpretability.
IEEE Transactions on Artificial Intelligence, vol. 7, no. 1, pp. 347–361.
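The two robust-loss ingredients named in the abstract are standard and can be written down directly; only `delta` and the Cauchy tuning constant `c` below are conventional defaults, not values taken from the paper:

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for outliers,
    so single large label errors cannot dominate training."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def cauchy_weight(r, c=2.385):
    """Cauchy M-estimator weight: smoothly down-weights large residuals
    (weight -> 0 as |r| grows), giving robustness to outliers."""
    return 1.0 / (1.0 + (r / c) ** 2)
```

In an iteratively reweighted scheme, `cauchy_weight` would scale each sample's contribution to the gradient, while `huber` replaces the squared-error objective.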
Citations: 0
Modeling Deep Unfolded Quantum Machine Learning Framework
Pub Date : 2025-03-26 DOI: 10.1109/TAI.2025.3573303
Shanika Iroshi Nanayakkara;Shiva Raj Pokhrel
Quantum machine learning models, like quantum neural networks (QNN) and quantum support vector classifiers (QSVC), often struggle with overfitting, slow convergence, and suboptimal generalization across various datasets. This article explores the advantages of integrating deep unfolding techniques into quantum models and develops a framework focusing on deep unfolded variational quantum classifiers (DVQC), deep unfolded quantum neural networks (DQNN), and deep unfolded QSVC (DQSVC). Our novel unfolding transforms quantum circuit training into a sequence of learnable layers, with each layer representing an optimization step that concurrently renews both circuit parameters and QNN hyperparameters. The proposed framework significantly improves training and test accuracy by dynamically adjusting learning rate, perturbations, and other similar hyperparameters, particularly on complex datasets like genomic and breast cancer. Our evaluation and experiment show that proposed DVQC and DQNN outperform baseline VQC and QNN, achieving 90% training accuracy and up to 20% higher test accuracy on genomic and adhoc datasets. DQSVC achieves 100% accuracy on adhoc and 97% on genomic datasets, surpassing the 90% test accuracy of traditional QSVC. Our implementation details will be publicly available.
IEEE Transactions on Artificial Intelligence, vol. 7, no. 1, pp. 321–331.
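Deep unfolding itself is easy to illustrate classically: an iterative optimizer is unrolled into a fixed stack of layers, each owning learnable hyperparameters (here, a per-layer step size). This toy on a quadratic objective shows only the unrolling idea, not the paper's quantum circuits:

```python
import numpy as np

def unfolded_descent(x0, grad_fn, step_sizes):
    """One forward pass through the unrolled optimizer: each 'layer'
    applies a single gradient step with its own learnable step size.
    Training the unfolded model means tuning step_sizes (and any other
    per-layer hyperparameters) by backpropagation."""
    x = x0
    for eta in step_sizes:
        x = x - eta * grad_fn(x)
    return x

# Toy objective f(x) = 0.5 * x^2, so grad f(x) = x.
grad = lambda x: x
x_final = unfolded_descent(4.0, grad, step_sizes=[0.5, 0.5, 0.5])
```

In the paper's setting each layer would instead renew both quantum-circuit parameters and QNN hyperparameters; the classical sketch only shows why a fixed number of unrolled steps becomes a trainable network.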
Citations: 0
Nuclei Segmentation Using Multiheaded U-Net and Shearlet-Based Unsharp Masking
Pub Date : 2025-03-26 DOI: 10.1109/TAI.2025.3572849
Shivam Mishra;Amit Vishwakarma;Anil Kumar
Automated nuclei segmentation is an important technique for understanding and analyzing cellular characteristics; it eases computer-aided digital pathology and is useful for disease diagnosis. However, the task is difficult because of diversity in nuclei size, blurry boundaries, and the variety of imaging modalities. A convolutional neural network (CNN)-based multiheaded U-Net (M-UNet) framework is proposed to address these issues. The architecture uses filters of different kernel sizes in multiple heads to extract multiresolution features of an image. A shearlet-based unsharp masking (SBUM) method is proposed for preprocessing, which primarily emphasizes features such as contours, boundaries, and minute details of the source image. In this article, a hybrid loss function is formulated that combines intersection over union (IOU) loss, Dice loss, and binary cross-entropy loss. The optimization algorithm minimizes this hybrid loss function, and higher metric values during the testing phase indicate better segmentation performance in the spatial domain. The proposed method yields superior segmentation images and quantitative results compared to state-of-the-art nuclei segmentation techniques. The proposed technique attains IOU, F1-score, accuracy, and precision values of 0.8325, 0.9086, 0.9651, and 0.9001, respectively.
IEEE Transactions on Artificial Intelligence, vol. 7, no. 1, pp. 297–307.
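The hybrid loss (binary cross-entropy plus Dice and IOU terms) can be sketched directly; equal weighting of the three terms is an assumption here, as the abstract does not give the paper's weights:

```python
import numpy as np

def hybrid_loss(p, t, eps=1e-7):
    """BCE + Dice + IOU loss on predicted probabilities p and binary
    targets t (equal weights assumed for illustration)."""
    p = np.clip(p, eps, 1 - eps)                       # avoid log(0)
    bce = -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
    inter = np.sum(p * t)                              # soft intersection
    dice = 1 - (2 * inter + eps) / (np.sum(p) + np.sum(t) + eps)
    iou = 1 - (inter + eps) / (np.sum(p) + np.sum(t) - inter + eps)
    return bce + dice + iou
```

The region-based Dice and IOU terms counter the class imbalance between small nuclei and large backgrounds, while the BCE term keeps per-pixel gradients well behaved.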
Citations: 0
Lorentz-Equivariant Quantum Graph Neural Network for High-Energy Physics
Pub Date : 2025-03-24 DOI: 10.1109/TAI.2025.3554461
Md Abrar Jahin;Md. Akmol Masud;Md Wahiduzzaman Suva;M. F. Mridha;Nilanjan Dey
The rapid data surge from the high-luminosity Large Hadron Collider introduces critical computational challenges requiring novel approaches for efficient data processing in particle physics. Quantum machine learning, with its capability to leverage the extensive Hilbert space of quantum hardware, offers a promising solution. However, current quantum graph neural networks (GNNs) lack robustness to noise and are often constrained by fixed symmetry groups, limiting adaptability in complex particle interaction modeling. This article demonstrates that replacing the classical Lorentz group equivariant block modules in LorentzNet with a dressed quantum circuit significantly enhances performance despite using $\approx 5.5$ times fewer parameters. Additionally, quantum circuits effectively replace MLPs by inherently preserving symmetries, with Lorentz symmetry integration ensuring robust handling of relativistic invariance. Our Lorentz-equivariant quantum graph neural network (Lorentz-EQGNN) achieved 74.00% test accuracy and an AUC of 87.38% on the Quark-Gluon jet tagging dataset, outperforming the classical and quantum GNNs with a reduced architecture using only 4 qubits. On the electron–photon dataset, Lorentz-EQGNN reached 67.00% test accuracy and an AUC of 68.20%, demonstrating competitive results with just 800 training samples. Evaluation of our model on generic MNIST and FashionMNIST datasets confirmed Lorentz-EQGNN’s efficiency, achieving 88.10% and 74.80% test accuracy, respectively. Ablation studies validated the impact of quantum components on performance, with notable improvements in background rejection rates over classical counterparts. These results highlight Lorentz-EQGNN’s potential for immediate applications in noise-resilient jet tagging, event classification, and broader data-scarce HEP tasks.
IEEE Transactions on Artificial Intelligence, vol. 6, no. 12, pp. 3195–3206.
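Lorentz equivariance ultimately rests on building features from Minkowski inner products, which boosts leave unchanged. A minimal numeric check of that invariance (not the paper's network):

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

def minkowski_dot(p, q):
    """Lorentz-invariant inner product <p, q> = E_p E_q - p_vec . q_vec."""
    return float(p @ ETA @ q)

def boost_x(rapidity):
    """Lorentz boost along the x-axis as a 4x4 matrix acting on (E, px, py, pz)."""
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    B = np.eye(4)
    B[0, 0] = B[1, 1] = ch
    B[0, 1] = B[1, 0] = sh
    return B

p = np.array([5.0, 1.0, 2.0, 0.5])   # two example four-momenta
q = np.array([3.0, 0.5, -1.0, 1.0])
B = boost_x(0.7)
invariant_before = minkowski_dot(p, q)
invariant_after = minkowski_dot(B @ p, B @ q)
```

A network whose learned features depend on the four-momenta only through such inner products is Lorentz-invariant by construction, which is the symmetry property the equivariant blocks enforce.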
Citations: 0
QSVM-QNN: Quantum Support Vector Machine Based Quantum Neural Network Learning Algorithm for Brain–Computer Interfacing Systems
Pub Date : 2025-03-23 DOI: 10.1109/TAI.2025.3572852
Bikash K. Behera;Saif Al-Kuwari;Ahmed Farouk
A brain–computer interface (BCI) system enables direct communication between the brain and external devices, offering significant potential for assistive technologies and advanced human–computer interaction. Despite progress, BCI systems face persistent challenges, including signal variability, classification inefficiency, and difficulty adapting to individual users in real time. In this study, we propose a novel hybrid quantum learning model, termed QSVM-QNN, which integrates a quantum support vector machine (QSVM) with a quantum neural network (QNN), to improve classification accuracy and robustness in EEG-based BCI tasks. Unlike existing models, QSVM-QNN combines the decision boundary capabilities of QSVM with the expressive learning power of QNN, leading to superior generalization performance. The proposed model is evaluated on two benchmark EEG datasets, achieving high accuracies of 0.990 and 0.950, outperforming both classical and standalone quantum models. To demonstrate real-world viability, we further validated the robustness of QNN, QSVM, and QSVM-QNN against six realistic quantum noise models, including bit flip and phase damping. These experiments reveal that QSVM-QNN maintains stable performance under noisy conditions, establishing its applicability for deployment in practical, noisy quantum environments. Beyond BCI, the proposed hybrid quantum architecture is generalizable to other biomedical and time-series classification tasks, offering a scalable and noise-resilient solution for next-generation neurotechnological systems.
IEEE Transactions on Artificial Intelligence, vol. 7, no. 1, pp. 308–320.
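The quantum-kernel idea behind a QSVM can be simulated classically for intuition: encode each sample into a quantum state and use the state fidelity as the SVM kernel. The single-qubit angle encoding below is a deliberately tiny stand-in for the paper's feature map, not its actual circuit:

```python
import numpy as np

def feature_state(x):
    """Angle-encode a scalar into a one-qubit state (cos(x/2), sin(x/2))."""
    return np.array([np.cos(x / 2.0), np.sin(x / 2.0)])

def fidelity_kernel(xs, ys):
    """Gram matrix of squared overlaps |<phi(x)|phi(y)>|^2 between
    encoded states; this plays the role of the SVM kernel."""
    return np.array([[abs(feature_state(a) @ feature_state(b)) ** 2
                      for b in ys] for a in xs])
```

The resulting Gram matrix can be handed to any SVM implementation that accepts precomputed kernels; on hardware, the same overlaps would be estimated from measurement statistics, which is where the noise models studied in the paper enter.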
Citations: 0
Maximum Margin-Based Activation Clipping for Posttraining Overfitting Mitigation in DNN Classifiers
Pub Date : 2025-03-19 DOI: 10.1109/TAI.2025.3552686
Hang Wang;David J. Miller;George Kesidis
Sources of overfitting in deep neural net (DNN) classifiers include: 1) large class imbalances; 2) insufficient training set diversity; and 3) overtraining. Recently, it was shown that backdoor data-poisoning also induces overfitting, with unusually large maximum classification margins (MMs) to the attacker’s target class. This is enabled by (unbounded) ReLU activation functions, which allow large signals to propagate in the DNN. Thus, an effective posttraining backdoor mitigation approach (with no knowledge of the training set and no knowledge or control of the training process) was proposed, informed by a small, clean (poisoning-free) data set and choosing saturation levels on neural activations to limit the DNN’s MMs. Here, we show that nonmalicious sources of overfitting also exhibit unusually large MMs. Thus, we propose novel posttraining MM-based regularization that substantially mitigates nonmalicious overfitting due to class imbalances and overtraining. Whereas backdoor mitigation and other adversarial learning defenses often trade off a classifier’s accuracy to achieve robustness against attacks, our approach, inspired by ideas from adversarial learning, helps the classifier’s generalization accuracy: as shown for CIFAR-10 and CIFAR-100, our approach improves both the accuracy for rare categories as well as overall. Moreover, unlike other overfitting mitigation methods, it does so with no knowledge of class imbalances, no knowledge of the training set, and without control of the training process.
IEEE Transactions on Artificial Intelligence, vol. 6, no. 10, pp. 2840–2847.
Citations: 0
Learning From N-Tuple Similarities and Unlabeled Data 从n元组相似性和未标记数据中学习
Pub Date : 2025-03-18 DOI: 10.1109/TAI.2025.3552687
Junpeng Li;Shuying Huang;Changchun Hua;Yana Yang
Learning from pairwise similarity and unlabeled data (SU) is a recently emerging weakly-supervised learning method, which learns a classifier from similar data pairs (two instances belonging to the same class) and unlabeled data. However, this framework cannot handle triplet similarities and unlabeled data. To address this limitation, this article develops a framework for learning from triplet similarities (three instances belonging to the same class) and unlabeled data points, denoted as TSU. This framework not only demonstrates the feasibility of constructing a TSU classifier but also motivates the broader challenge of addressing N-tuple similarities (N ≥ 2) and unlabeled data points. To tackle this more general problem, this article develops a weakly-supervised framework for learning from N-tuple similarities (N instances belonging to the same class) and unlabeled data points, named NSU. This framework provides a solid foundation for handling diverse similarity scenarios. Based on these findings, we propose empirical risk minimization estimators for both TSU and NSU classification. Estimation error bounds are also established for the proposed methods. Finally, experiments are performed to verify the effectiveness of the proposed algorithm.
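The weak-supervision setting is easiest to see in how the training data are generated: groups of N instances known only to share some hidden class, plus an unlabeled pool. A minimal sketch, using a synthetic labeled set purely to simulate the tuple-sampling process (the risk estimators themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labeled pool; labels are used only to *generate* the weak supervision.
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)

def sample_same_class_tuples(X, y, n_tuples, tuple_size):
    """Draw n_tuples groups of `tuple_size` indices sharing a class.

    The class label itself is discarded -- only the similarity
    ("these N points belong to the same, unknown class") is kept.
    """
    tuples = []
    for _ in range(n_tuples):
        c = rng.integers(0, 2)  # pick a hidden class
        idx = rng.choice(np.flatnonzero(y == c), size=tuple_size, replace=False)
        tuples.append(X[idx])
    return np.stack(tuples)     # shape (n_tuples, tuple_size, d)

triplets = sample_same_class_tuples(X, y, n_tuples=50, tuple_size=3)  # TSU data
unlabeled = X[rng.choice(len(X), size=100, replace=False)]            # U data
```

With `tuple_size=2` this reduces to the SU setting, and larger values give the NSU setting; a TSU/NSU learner would see only `triplets` and `unlabeled`, never `y`.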
IEEE Transactions on Artificial Intelligence, vol. 6, no. 9, pp. 2542–2551.
Citations: 0
Ensuring Reliable Learning in Graph Convolutional Networks: Convergence Analysis and Training Methodology 确保图卷积网络的可靠学习:收敛分析和训练方法
Pub Date : 2025-03-17 DOI: 10.1109/TAI.2025.3550458
Xinge Zhao;Chien Chern Cheah
Recent advancements in learning from graph-structured data have highlighted the importance of graph convolutional networks (GCNs). Despite some research efforts on the theoretical aspects of GCNs, a gap remains in understanding their training process, especially concerning convergence analysis. This study introduces a two-stage training methodology for GCNs, incorporating both pretraining and fine-tuning phases. A two-layer GCN model is used for the convergence analysis and case studies. A convergence analysis employing a Lyapunov-like approach is performed on the proposed learning algorithm, yielding conditions that ensure convergence of the model learning. Additionally, an automated learning rate scheduler is proposed based on the convergence conditions to prevent divergence and eliminate the need for manual tuning of the initial learning rate. The efficacy of the proposed method is demonstrated through case studies on the node classification problem. The results reveal that the proposed method outperforms gradient descent-based optimizers by achieving consistent training accuracies within a variation of 0.1% across various initial learning rates, without requiring manual tuning.
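The automated learning-rate scheduler can be caricatured as a divergence guard: shrink the step size whenever the convergence condition is violated. The sketch below uses a toy quadratic objective and a simple "reject loss-increasing steps" rule in place of the paper's Lyapunov-derived conditions — an assumption for illustration only:

```python
import numpy as np

def train_with_auto_lr(grad, x0, lr0=1.0, steps=100, backoff=0.5):
    """Gradient descent with a divergence-guard learning-rate scheduler.

    If a step increases the loss, the step is rejected and the learning
    rate is scaled down by `backoff` -- a crude stand-in for the
    convergence-condition-based scheduler described in the abstract.
    """
    loss = lambda x: 0.5 * np.sum(x ** 2)  # toy quadratic objective
    x, lr = np.asarray(x0, dtype=float), lr0
    for _ in range(steps):
        x_new = x - lr * grad(x)
        if loss(x_new) > loss(x):  # divergence detected
            lr *= backoff          # shrink lr, reject the step
        else:
            x = x_new
    return x, lr

# Deliberately too-large initial lr: the guard backs off 10 -> 5 -> 2.5 -> 1.25,
# after which every step is accepted and the iterate converges.
x_star, lr_final = train_with_auto_lr(lambda x: x, x0=[5.0, -3.0], lr0=10.0)
```

The same guard pattern applies regardless of the initial learning rate chosen, which mirrors the abstract's claim that manual tuning of the initial rate becomes unnecessary.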
IEEE Transactions on Artificial Intelligence, vol. 6, no. 9, pp. 2510–2525.
Citations: 0
Journal
IEEE transactions on artificial intelligence