
Latest articles in Information Sciences

LPCLNet: Leveraging local pixel-wise contrastive learning for image tampering localization
IF 6.8 · Q1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-02-06 · DOI: 10.1016/j.ins.2026.123205
Jun Sang , Xiaowen Chen , Wenhui Gong , Sergey Gorbachev , Shanjun Zhang
To address poor generalization caused by the scarcity of real samples in image tampering localization, this paper proposes a Local Pixel-level Contrastive Learning Network (LPCLNet). The main contributions are: (1) a contour patch-oriented contrastive learning mechanism that categorizes patches into tampered, authentic, and contour classes, applying pixel-level and patch-level contrastive losses alongside binary cross-entropy loss to leverage boundary information and reduce dependence on synthetic data; (2) an LPCLNet architecture that integrates a multi-scale feature fusion module and an Atrous Spatial Pyramid Pooling module to aggregate fine-grained features and embed contextual information for multi-scale representation of tampered regions; (3) a joint optimization strategy combining InfoNCE contrastive loss with binary cross-entropy loss to enhance feature discriminability and localization accuracy. Experiments on the Columbia, NIST16, CASIA v1, and Coverage datasets demonstrate that LPCLNet achieves comparable or superior performance to mainstream methods without requiring synthetic data pre-training. Specifically, it attains leading F1 scores of 0.529 and 0.369 on CASIA v1 and NIST16, respectively, as well as the highest average IoU of 0.500 and AUC of 0.830 across benchmarks, validating its stable and highly generalizable performance with limited real samples.
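The joint objective, InfoNCE contrastive loss combined with binary cross-entropy on the predicted mask, can be sketched in a few lines. This is a minimal numpy illustration under assumed choices (cosine similarity, temperature `tau`, weighting `lam`), not the paper's implementation:

```python
import numpy as np

def info_nce(anchor, positives, negatives, tau=0.1):
    """InfoNCE loss for one anchor embedding, using cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(np.array([cos(anchor, p) for p in positives]) / tau)
    neg = np.exp(np.array([cos(anchor, n) for n in negatives]) / tau)
    # pull the anchor toward positives, push it away from negatives
    return float(-np.log(pos.sum() / (pos.sum() + neg.sum())))

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy over a predicted tamper-probability mask."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def joint_loss(anchor, positives, negatives, pred_mask, true_mask, lam=0.5):
    """Joint optimization: localization loss plus a weighted contrastive term."""
    return bce(pred_mask, true_mask) + lam * info_nce(anchor, positives, negatives)
```

An anchor embedding that already sits close to its positives and far from its negatives contributes a near-zero contrastive term, so the gradient concentrates on the localization loss.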
Citations: 0
UHTS-DRL: A deep reinforcement learning framework for integrated agile satellite observation and data transmission scheduling
IF 6.8 · Q1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-02-06 · DOI: 10.1016/j.ins.2026.123200
Jiaqi Cheng , Mingfeng Fan , Yi Gu , Wei Tang , Qizhang Luo , Yalin Wang , Xinwei Wang , Guohua Wu
Efficient Earth observation satellite scheduling requires integrated management of both observation and data transmission tasks to optimize the overall system efficiency in practice. However, conventional approaches typically treat these processes separately, leading to suboptimal resource utilization. This paper formulates the Earth observation satellite scheduling as a Multi-agile Satellite and Multi-ground Station Integrated Observation and Data Transmission Scheduling Problem (MSIODTSP) and proposes a Unified Hierarchical Two-stage Scheduling Deep Reinforcement Learning (UHTS-DRL) framework for solving MSIODTSP. The UHTS-DRL approach formulates MSIODTSP as a unified Markov Decision Process (MDP), enabling joint optimization of observation and transmission scheduling while accounting for multi-orbit time windows and complex operational constraints. The framework employs a hierarchical policy network comprising a time window encoder, a state encoder, and a dual-decoder for two-stage scheduling, facilitating end-to-end optimization without the need for handcrafted heuristics. Experimental results demonstrate that UHTS-DRL consistently outperforms existing approaches across various problem scales, resource configurations, and geographical target distributions, achieving up to 12.5% relative improvement in total profit while maintaining computational efficiency.
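The unified MDP view can be illustrated with a toy single-satellite episode: the state is (current time, remaining tasks), an action schedules one task inside its visibility window, and the reward is the task's profit. This is a hypothetical simplification for intuition, not the UHTS-DRL environment (which also models transmission, multiple satellites, and ground stations):

```python
# Toy tasks: (window_start, window_end, duration, profit). Hypothetical instance.
def step(state, action, tasks):
    """One MDP transition: try to schedule task `action`; reward is its profit."""
    t, remaining = state
    ws, we, dur, profit = tasks[action]
    start = max(t, ws)                      # wait for the window to open
    if action not in remaining or start + dur > we:
        return (t, remaining), 0.0          # infeasible: nothing changes
    return (start + dur, remaining - {action}), profit

def greedy_rollout(tasks):
    """Baseline policy a learned agent should beat: most profitable feasible task first."""
    state, total = (0.0, set(range(len(tasks)))), 0.0
    while True:
        feasible = [a for a in state[1]
                    if max(state[0], tasks[a][0]) + tasks[a][2] <= tasks[a][1]]
        if not feasible:
            return total
        state, r = step(state, max(feasible, key=lambda i: tasks[i][3]), tasks)
        total += r
```

A learned policy replaces the greedy argmax with a value-guided choice; everything else about the transition dynamics stays the same.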
Citations: 0
Multi-attribute intuitionistic fuzzy twin support vector machine based on data distribution
IF 6.8 · Q1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-02-05 · DOI: 10.1016/j.ins.2026.123203
Jianxiang Qiu, Jialiang Xie
Twin support vector machine (TSVM) is sensitive to noise because it cannot differentiate sample contributions. Constructing a fuzzy weight assignment strategy that describes each sample's contribution is therefore key to resolving TSVM's noise sensitivity. However, existing strategies still struggle to describe the nonlinear characteristics of complex sample distributions, over-rely on specific parameter settings, and neglect global distribution information. To address these challenges, this paper constructs a weight assignment strategy based on multi-attribute intuitionistic fuzzy sets (IFSs) and further proposes a noise-robust multi-attribute intuitionistic fuzzy TSVM based on data distribution (MIFTSVM). First, MIFTSVM constructs a multi-attribute IFS for each training sample from the data distribution and the generalized bell function. Then, inspired by the concepts of fuzzy absolute deviation and feature weighting, a novel multi-attribute IFS distance measure is developed. The proposed strategy assigns fuzzy weights to training samples based on this distance measure, which integrates data distribution information and can accurately identify noise. Numerical experiments show that MIFTSVM outperforms state-of-the-art baseline models in generalization performance and noise resistance, demonstrating promising applicability in brain tumor classification.
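The generalized bell function that drives the membership construction is standard; the intuitionistic (membership, non-membership) pair below is an illustrative construction with a fixed hesitation degree `pi`, since the abstract does not spell out the paper's exact formula:

```python
import numpy as np

def bell(x, a, b, c):
    """Generalized bell membership: 1 / (1 + |(x - c) / a|^(2b)); peaks at x = c."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def intuitionistic_pair(x, a, b, c, pi=0.1):
    """Assumed construction: scale membership to leave room for hesitation pi,
    then take non-membership as the floored complement."""
    mu = (1.0 - pi) * bell(x, a, b, c)
    nu = np.maximum(0.0, 1.0 - mu - pi)
    return mu, nu
```

Samples near a class center (x close to c) receive membership near 1 - pi and near-zero non-membership, so likely noise points far from the center can be down-weighted.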
Citations: 0
Machine learning on dynamic functional connectivity: Promise, pitfalls, and interpretations
IF 6.8 · Q1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-02-05 · DOI: 10.1016/j.ins.2026.123184
Jiaqi Ding , Tingting Dan , Ziquan Wei , Paul J. Laurienti , Guorong Wu
An unprecedented amount of existing functional Magnetic Resonance Imaging (fMRI) data provides a new opportunity to understand how functional fluctuations relate to human cognition/behavior using data-driven approaches. To this end, tremendous efforts have been made in machine learning to decode cognitive states from evolving volumetric images of blood-oxygen-level-dependent (BOLD) signals. However, due to the complex nature of brain function, the performance and findings of current deep learning models remain inconsistent across tasks, datasets, and evaluation settings. In this work, by capitalizing on large-scale existing neuroimaging data (39,784 fMRI samples from seven databases), we seek to establish a well-founded empirical guideline for designing deep models in functional neuroimaging by linking the methodological underpinnings with neuroscientific understanding. Specifically, we put the spotlight on (1) What is the performance landscape of various models in cognitive task recognition and disease diagnosis? (2) What are the key limitations and trade-offs of current deep models? and (3) What is the general guideline for selecting a suitable machine learning backbone for a specific neuroimaging application? We have conducted comprehensive evaluations and statistical analyses across cognitive and clinical scenarios to answer the above outstanding questions. Our findings demonstrate that no universal model dominates all scenarios; instead, model effectiveness depends on factors such as demographics, task type, and disease stage. Furthermore, we introduce an attention-based interpretability method to reveal spatial patterns of brain activation associated with tasks and disorders.
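Dynamic functional connectivity is most commonly estimated with sliding-window correlations over the regional BOLD series; a minimal sketch (window length and stride are arbitrary analysis choices, not values from the paper):

```python
import numpy as np

def sliding_window_fc(bold, win, stride=1):
    """Dynamic functional connectivity: one correlation matrix per sliding window.
    bold: (T, R) array of BOLD time series for R brain regions."""
    T, _ = bold.shape
    mats = [np.corrcoef(bold[s:s + win].T) for s in range(0, T - win + 1, stride)]
    return np.stack(mats)  # shape (n_windows, R, R)
```

The resulting sequence of R-by-R matrices is the typical input that the surveyed deep models consume, either as flattened vectors or as a graph per window.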
Citations: 0
A framework for technological bottleneck detection and collaborative optimization in heterogeneous parallel networks
IF 6.8 · Q1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-02-05 · DOI: 10.1016/j.ins.2026.123189
Zhanxin Ma , Chuanzhe Zhang , Meixia Sun
Accurately identifying internal technological bottlenecks that constrain system efficiency and effectively coordinating interactions among system components represent critical challenges in contemporary social science research. However, studies on parallel network systems with structural heterogeneity in economic and social contexts remain limited, and several existing models still suffer from issues such as the infeasibility of projection-point techniques. To address these limitations, this study makes three main contributions. First, based on the dual feasibility framework, a DEA model is proposed for evaluating heterogeneous parallel networks. Second, to uncover latent technological bottlenecks within parallel network systems, a method for identifying internal technological bottlenecks is developed. Third, to promote the collaborative optimization of members within the system, a re-adjustment model considering pressure dispersion of subunits is presented. Finally, the proposed approach is applied to evaluate the research and development (R&D) innovation efficiency of 11 regions in western China. The empirical results indicate that the proposed method is capable of assessing parallel network systems with more complex structures while effectively overcoming the limitations of existing models. In particular, it not only identifies technological bottlenecks within decision-making units (DMUs) but also explicitly accounts for the feasibility and balance of each subunit in task allocation.
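The paper's model for heterogeneous parallel networks is more elaborate, but the classical input-oriented CCR envelopment program that DEA evaluation builds on can be sketched with `scipy.optimize.linprog`; this is the textbook baseline, not the proposed dual-feasibility model:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o:
    minimize theta s.t. X lam <= theta * x_o, Y lam >= y_o, lam >= 0.
    X: (m, n) inputs, Y: (s, n) outputs; columns index the n DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # objective: minimize theta
    A_ub = np.vstack([
        np.hstack([-X[:, [o]], X]),            # X lam - theta * x_o <= 0
        np.hstack([np.zeros((s, 1)), -Y]),     # -Y lam <= -y_o
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]
```

With one input and one output, a DMU producing the same output from twice the input scores 0.5, i.e. it could radially contract its input by half and still be dominated by the frontier.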
Citations: 0
Post-hoc explainability of graph neural networks: A comprehensive survey
IF 6.8 · Q1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-02-04 · DOI: 10.1016/j.ins.2026.123202
Wenzheng Ma , Xiaofeng Liu , Yihu Liu , Yinglong Ma
Graph Neural Networks (GNNs) are powerful tools for analyzing graph-structured data and are widely applied in areas such as molecular structure prediction and social network analysis. However, GNN models are inherently nonlinear and opaque, making their internal mechanisms and the rationale behind their predictions difficult to understand. To address this issue, numerous explainability methods have been proposed to uncover the underlying decision-making mechanisms of GNNs. Among these, post-hoc explanation techniques offer significant flexibility, as they can be applied to any pre-trained GNN model without requiring modifications to the model itself. In this paper, we provide a comprehensive survey of existing post-hoc explainability methods for GNNs and propose a technology-oriented taxonomy based on the theoretical techniques they rely on. We analyze the strengths and limitations of each method and review commonly used datasets and evaluation protocols in the field. We further conduct a quantitative comparison of representative methods on selected datasets using commonly adopted evaluation metrics. Moreover, we outline promising research directions to advance the field. Altogether, this survey aims to provide researchers with a comprehensive understanding of the current landscape of post-hoc GNN explainability methods, identify their technical limitations, and facilitate the further advancement of explainable graph-based machine learning.
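A representative perturbation-based post-hoc explainer of the kind surveyed can be demonstrated on a toy one-layer GNN: occlude each edge and record how much the graph-level score changes. Both the model and the scoring below are illustrative assumptions, not any specific surveyed method:

```python
import numpy as np

def gnn_score(adj, feats, w):
    """Toy one-layer GNN: mean-aggregate neighbor features, linear sum readout."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    h = (adj @ feats) / deg
    return float((h @ w).sum())

def edge_importance(adj, feats, w):
    """Post-hoc occlusion: each edge's importance = |score change when it is removed|.
    Works for any pre-trained scoring function without modifying it."""
    base = gnn_score(adj, feats, w)
    imp = {}
    for i, j in zip(*np.nonzero(np.triu(adj))):
        pert = adj.copy()
        pert[i, j] = pert[j, i] = 0.0
        imp[(int(i), int(j))] = abs(base - gnn_score(pert, feats, w))
    return imp
```

Because the explainer only queries the model's output, it has the flexibility highlighted above: it applies to any pre-trained GNN as a black box.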
Citations: 0
A large-scale multi-objective optimization algorithm integrating multi-directional fuzzy sampling and multi-source learning competitive swarm optimizer
IF 6.8 · Q1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-02-04 · DOI: 10.1016/j.ins.2026.123194
Wenyan Guo, Shenglong Li, Fang Dai, Junfeng Wang, Mengzhen Zhang
Existing directional sampling-based methods for large-scale multi-objective optimization problems (LSMOPs) show promise but are often constrained by their reliance on singular representative solutions and insufficient diversity in search directions. To overcome these limitations, this paper proposes LSMDCSO, a novel algorithm integrating multi-directional fuzzy sampling (MDFS) and a multi-source learning competitive swarm optimizer (MLCSO). First, LSMDCSO utilizes angle-penalized distance to select representative solutions as guiding anchors. It then constructs a comprehensive set of search directions by synergizing approximate gradient-based sampling for convergence and orthogonal sampling for diversity. A fuzzy variable operator is incorporated to further enhance solution adaptability in high-dimensional spaces. Additionally, an MLCSO is designed to perform fine-grained exploitation, compensating for sampling imprecision. Experimental evaluations on the LSMOP, ZCAT, and real-world TREE benchmarks demonstrate that LSMDCSO outperforms state-of-the-art algorithms, exhibiting superior capabilities in balancing convergence and diversity for solving complex LSMOPs.
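The competitive swarm optimizer that MLCSO builds on updates only the loser of each random pairwise competition, pulling it toward the winner and the swarm mean. A single-objective baseline sketch follows; `phi`, the population size, and the bounds are arbitrary illustrative settings, and the multi-source learning extension is not shown:

```python
import numpy as np

def cso(f, dim, n=40, iters=200, lb=-5.0, ub=5.0, phi=0.1, seed=0):
    """Competitive swarm optimizer: pairwise competitions; losers learn, winners
    pass through unchanged. n must be even so the swarm splits into pairs."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))
    v = np.zeros((n, dim))
    for _ in range(iters):
        mean = x.mean(axis=0)
        for a, b in rng.permutation(n).reshape(-1, 2):
            w, l = (a, b) if f(x[a]) < f(x[b]) else (b, a)
            r1, r2, r3 = rng.random((3, dim))
            v[l] = r1 * v[l] + r2 * (x[w] - x[l]) + phi * r3 * (mean - x[l])
            x[l] = np.clip(x[l] + v[l], lb, ub)
    best = min(x, key=f)
    return best, f(best)
```

Because no global best is stored, the pairwise scheme scales well with dimensionality, which is why CSO variants are popular for large-scale problems like LSMOPs.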
Citations: 0
An effective and robust deep clustering approach for time series with spatial information
IF 6.8 · Q1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-02-04 · DOI: 10.1016/j.ins.2026.123201
Ze Deng , Haibo Zeng
Deep clustering of time series with spatial information is a challenging task, as representation learning during clustering must embed both the temporal and spatial features of the data. Existing deep clustering models fail to achieve high clustering quality because their representation learning does not support complex temporal patterns and spatial entities. Meanwhile, these models ignore the effects of noise and outliers. Therefore, in this paper, we propose an effective and robust deep clustering model for time series data with spatial information called ER-Spatial-DEC. Our clustering model can effectively embed complex temporal patterns and spatial entities through a representation learning method that combines a spatial transformer and a quantum representational network. We further enhance the robustness of our clustering model with a collaborative multi-contrastive learning method. The experimental results demonstrate that ER-Spatial-DEC achieves superior clustering performance compared with all baseline methods across different sequence lengths. In addition, ER-Spatial-DEC exhibits strong robustness under various levels of noise and outlier perturbations, maintaining stable clustering quality compared with baselines.
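Deep-embedding clustering pipelines in this family typically rely on a Student-t soft assignment between embeddings and centroids plus a sharpened auxiliary target distribution for self-training; whether ER-Spatial-DEC uses exactly this pair is an assumption suggested by its name. A minimal sketch of the two ingredients:

```python
import numpy as np

def soft_assign(z, centers, alpha=1.0):
    """DEC-style soft cluster assignment q_ij via a Student-t kernel."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened target p_ij proportional to q_ij^2 / f_j, where f_j is the
    soft cluster frequency; training pulls q toward p to refine clusters."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)
```

In training, the encoder is updated to minimize KL(p || q), so confident assignments are reinforced while each cluster's influence is normalized by its size.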
Information Sciences, Volume 740, Article 123201. Published: 2026-02-04.
Citations: 0
Mathematical guarantees for trust region policy optimization
IF 6.8 | CAS Tier 1, Computer Science | COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-03 | DOI: 10.1016/j.ins.2026.123190
Li Li, Xiangyu Luo, Xiaoyu Song
Policy gradient methods have achieved remarkable success in reinforcement learning, yet their performance critically depends on the step size selection during policy updates. Inappropriate step sizes can lead to drastic performance degradation or even training collapse. To mitigate this challenge, the trust region mechanism in TRPO formally guarantees stable policy gradient training through a bounded total variation divergence in consecutive policy iterations. This work establishes a tighter performance difference bound for the discounted return, $|\eta(\check{\pi}) - L_{\pi}(\check{\pi})| \le \frac{2\epsilon\gamma}{(1-\gamma)^{2}}\alpha^{2}$, where $\alpha$ measures policy divergence and $\epsilon$ bounds advantage estimation errors. Leveraging mathematical induction, we rigorously analyze the total variation divergence between policy pairs, systematically quantifying the relationship between state advantage disparities and trajectory probability discrepancies. This formal proof reveals the fundamental mechanisms underlying policy improvement constraints, addressing key gaps in the intuitive proof of TRPO theory. Furthermore, our generalized framework demonstrates that any divergence metric satisfying specific axiomatic properties preserves the structural form of the monotonic improvement guarantee. These theoretical advances translate into practical engineering benefits, enabling more precise trust region sizing for safety-critical applications, including autonomous driving, robotic control, and large language model alignment. The tighter bounds provide concrete mathematical guidance for algorithm designers to balance the stability-efficiency tradeoff, minimizing reliance on exhaustive hyperparameter search.
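The stated bound is a closed-form expression, so its practical implication for trust-region sizing can be checked numerically. The sketch below evaluates $2\epsilon\gamma/(1-\gamma)^{2}\cdot\alpha^{2}$ with hypothetical values of $\epsilon$, $\gamma$, and $\alpha$; the key observation is the quadratic dependence on the policy divergence $\alpha$, so halving the trust-region radius quarters the guaranteed gap.

```python
def trpo_gap_bound(epsilon, gamma, alpha):
    """Upper bound 2*eps*gamma / (1 - gamma)^2 * alpha^2 on the gap
    between the true discounted return and its surrogate objective,
    following the tightened TRPO-style guarantee in the abstract."""
    return 2.0 * epsilon * gamma / (1.0 - gamma) ** 2 * alpha ** 2

# Hypothetical values: advantage-error bound eps=1.0, discount gamma=0.9.
b1 = trpo_gap_bound(epsilon=1.0, gamma=0.9, alpha=0.2)
b2 = trpo_gap_bound(epsilon=1.0, gamma=0.9, alpha=0.1)
print(round(b1 / b2))  # 4: halving alpha quarters the bound
```

This quadratic scaling is exactly what makes the tighter $\alpha^{2}$ bound more useful than a linear-in-$\alpha$ one when choosing a step size: small trust regions come with disproportionately strong guarantees.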
Information Sciences, Volume 740, Article 123190. Published: 2026-02-03.
Citations: 0
Dual-branch Meso-Xception network for hybrid-domain feature of deepfake detection
IF 6.8 | CAS Tier 1, Computer Science | COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-27 | DOI: 10.1016/j.ins.2026.123132
Yanan Song, Xiangyuan Chen, Ronghua Xu
The rapid advancement of deep learning-based generative technologies has led to remarkable achievements in deepfake applications using video and image media, particularly in areas such as face swapping and expression transfer. However, these developments have also triggered significant concerns regarding media authenticity and information security. Deepfake content often exhibits various artifacts: in the spatial domain, it may suffer from over-smoothed textures, loss of edge details, or jagged distortions; in the frequency domain, abnormal peaks in high-frequency spectra or noise-induced distortions may appear; and at the semantic level, misaligned keypoints and poor temporal coherence are frequently observed. To address these limitations, this study proposes a network architecture that first performs hybrid-domain feature extraction on deepfake samples. The Xception backbone, optimized through a knowledge distillation strategy to remove redundant layers, is combined with the lightweight MesoNet4 architecture to form a dual-branch backbone that can capture semantic features at different levels. While preserving semantic representation, the overall model size is compressed to just 8.0M parameters, achieving both high-precision detection of deepfake samples (accuracy ≥ 99%) and real-time inference performance (single-frame latency ≤ 10 ms).
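The frequency-domain artifact the abstract mentions, suppressed high-frequency content from over-smoothed textures, can be illustrated with a toy proxy. This is not the paper's detection method; it is a minimal sketch using the mean absolute first difference of a 1-D pixel row (hypothetical data) as a crude high-frequency energy measure, showing that a smoothed, deepfake-like signal scores lower than a textured one.

```python
def high_freq_energy(signal):
    """Crude high-frequency proxy: mean absolute first difference.
    Over-smoothed (deepfake-like) regions tend to score lower."""
    return sum(abs(b - a) for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

def box_smooth(signal, k=3):
    """Simple moving-average smoothing, mimicking the texture loss
    introduced by some generative pipelines."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

# Toy 1-D "texture": alternating intensities standing in for a pixel row.
textured = [0.0, 1.0] * 8
smoothed = box_smooth(textured)
print(high_freq_energy(smoothed) < high_freq_energy(textured))  # True
```

Real detectors operate on full spectra (e.g. DFT/DCT statistics) rather than a first-difference proxy, but the underlying cue, depleted high-frequency energy, is the same one the frequency branch of a hybrid-domain network exploits.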
Information Sciences, Volume 739, Article 123132. Published: 2026-01-27.
Citations: 0