
Latest publications from IEEE Transactions on Knowledge and Data Engineering

Hierarchical Deep Document Model
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-29 DOI: 10.1109/TKDE.2024.3487523
Yi Yang;John P. Lalor;Ahmed Abbasi;Daniel Dajun Zeng
Topic modeling is a commonly used text analysis tool for discovering latent topics in a text corpus. However, while topics in a text corpus often exhibit a hierarchical structure (e.g., cellphone is a sub-topic of electronics), most topic modeling methods assume a flat topic structure that ignores the hierarchical dependency among topics, or utilize a predefined topic hierarchy. In this work, we present a novel Hierarchical Deep Document Model (HDDM) to learn topic hierarchies using a variational autoencoder framework. We propose a novel objective function, sum of log likelihood, instead of the widely used evidence lower bound, to facilitate the learning of hierarchical latent topic structure. The proposed objective function can directly model and optimize the hierarchical topic-word distributions at all topic levels. We conduct experiments on four real-world text datasets to evaluate the topic modeling capability of the proposed HDDM method compared to state-of-the-art hierarchical topic modeling benchmarks. Experimental results show that HDDM achieves considerable improvement over benchmarks and is capable of learning meaningful topics and topic hierarchies. To further demonstrate the practical utility of HDDM, we apply it to a real-world medical notes dataset for clinical prediction. Experimental results show that HDDM can better summarize topics in medical notes, resulting in more accurate clinical predictions.
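To make the "sum of log likelihood" idea concrete, here is a minimal sketch (not the authors' implementation; the two-level setup, sizes, and untrained random parameters are assumptions): each hierarchy level keeps its own topic-word distribution, and the training signal sums per-level document log-likelihoods rather than using a single ELBO.

```python
# Minimal sketch, assuming a two-level topic hierarchy with independent topic-word
# distributions per level; the objective sums the per-level document log-likelihoods.
import numpy as np

rng = np.random.default_rng(0)
V, K_top, K_sub = 1000, 5, 20                      # vocabulary size, top-level topics, sub-topics
beta_top = rng.dirichlet(np.ones(V), size=K_top)   # top-level topic-word distributions
beta_sub = rng.dirichlet(np.ones(V), size=K_sub)   # sub-topic topic-word distributions

def doc_loglik(bow, theta, beta):
    """Log-likelihood of a bag-of-words vector under a mixture of topics."""
    word_probs = theta @ beta                      # (V,) mixture distribution over the vocabulary
    return float(bow @ np.log(word_probs + 1e-12))

bow = rng.integers(0, 3, size=V)                   # toy document as word counts
theta_top = rng.dirichlet(np.ones(K_top))          # document's top-level topic mixture
theta_sub = rng.dirichlet(np.ones(K_sub))          # document's sub-topic mixture

# "Sum of log likelihood" across hierarchy levels (my reading of the abstract).
objective = doc_loglik(bow, theta_top, beta_top) + doc_loglik(bow, theta_sub, beta_sub)
print(objective)
```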
Citations: 0
Handling Low Homophily in Recommender Systems With Partitioned Graph Transformer
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-28 DOI: 10.1109/TKDE.2024.3485880
Thanh Tam Nguyen;Thanh Toan Nguyen;Matthias Weidlich;Jun Jo;Quoc Viet Hung Nguyen;Hongzhi Yin;Alan Wee-Chung Liew
Modern recommender systems derive predictions from an interaction graph that links users and items. To this end, many of today's state-of-the-art systems use graph neural networks (GNNs) to learn effective representations of these graphs under the assumption of homophily, i.e., the idea that similar users sit close to each other in the graph. However, recent studies have revealed that real-world recommendation graphs are often heterophilous, i.e., dissimilar users also often sit close to each other. One of the reasons for this heterophily is shilling attacks, which obscure the inherent characteristics of the graph and, as a consequence, make the derived recommendations less accurate. Hence, to cope with low homophily in recommender systems, we propose a recommendation model called PGT4Rec that is based on a Partitioned Graph Transformer. The model integrates label information into the learning process, which allows discriminative neighbourhoods of users to be generated. As such, the framework can both detect shilling attacks and predict user ratings for items. Extensive experiments on real and synthetic datasets show that PGT4Rec not only provides superior performance on these two tasks but also exhibits significant robustness under a range of adversarial conditions.
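As a small illustration of the low-homophily setting the paper targets (toy data, not from the paper), the edge-homophily ratio below measures the fraction of edges whose endpoints share a label; values well below 1 indicate heterophilous neighbourhoods such as those produced by shilling attacks.

```python
# Minimal sketch: edge-homophily ratio of a labelled toy graph.
import numpy as np

edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])  # toy interaction graph
labels = np.array([0, 0, 1, 1])                             # e.g., genuine vs. shilling users

homophily = float(np.mean(labels[edges[:, 0]] == labels[edges[:, 1]]))
print(f"edge homophily = {homophily:.2f}")   # well below 1.0 -> low homophily
```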
Citations: 0
Expressiveness Analysis and Enhancing Framework for Geometric Knowledge Graph Embedding Models
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-28 DOI: 10.1109/TKDE.2024.3486915
Tengwei Song;Long Yin;Yang Liu;Long Liao;Jie Luo;Zhiqiang Xu
Existing geometric knowledge graph embedding methods employ various relational transformations, such as translation, rotation, and projection, to model different relation patterns, aiming to enhance the expressiveness of the models. In contrast to current approaches that treat the expressiveness of a model as a binary issue, we aim to delve deeper into analyzing how difficult it is for geometric knowledge graph embedding models to represent relation patterns. In this paper, we provide a theoretical analysis framework that measures the expressiveness of a model on relation patterns by quantifying the size of the solution space of the induced linear equation systems. Additionally, we propose a mechanism for imposing relational constraints on geometric knowledge graph embedding models by setting “traps” near relational optimal solutions, which enables the model to better converge to the optimal solution. Empirically, we analyze and compare several typical knowledge graph embedding models built on different geometric algebras, revealing that some models have insufficient solution space due to their design, which leads to performance weaknesses. We also demonstrate that the proposed relational constraint operations can improve the performance on certain relation patterns. The experimental results on public benchmarks and a relation-pattern-specific dataset are consistent with our theoretical analysis.
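The "size of the solution space" idea can be illustrated with a well-known case (my example, not taken from the paper): under a pure translation model (h + r = t), the symmetry pattern r(h,t) together with r(t,h) induces a linear system whose only solutions have r = 0 and h = t, and the null-space dimension of that system quantifies how little freedom the model has for this pattern.

```python
# Minimal sketch: measure the constraint a relation pattern places on a translation-based
# embedding by the null-space dimension of the induced linear system A z = 0, z = [h; t; r].
import numpy as np

d = 2                                   # embedding dimension (toy)
I = np.eye(d)
A = np.block([[ I, -I,  I],             # h + r - t = 0
              [-I,  I,  I]])            # t + r - h = 0   (symmetry pattern)
nullity = 3 * d - np.linalg.matrix_rank(A)
print(nullity)                          # = d: only r = 0 with h = t survives -> tiny solution space
```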
Citations: 0
A Fine-Grained Network for Joint Multimodal Entity-Relation Extraction
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-25 DOI: 10.1109/TKDE.2024.3485107
Li Yuan;Yi Cai;Jingyu Xu;Qing Li;Tao Wang
Joint multimodal entity-relation extraction (JMERE) is a challenging task that involves two joint subtasks, i.e., named entity recognition and relation extraction, from multimodal data such as text sentences with associated images. Previous JMERE methods have primarily employed 1) pipeline models, which apply pre-trained unimodal models separately and ignore the interaction between tasks, or 2) word-pair relation tagging methods, which neglect neighboring word pairs. To address these limitations, we propose a fine-grained network for JMERE. Specifically, we introduce a fine-grained alignment module that uses phrase-patch matching to establish connections between text phrases and visual objects. This module can learn consistent multimodal representations from multimodal data. Furthermore, we address the issue of task-irrelevant image information by proposing a gate fusion module, which mitigates the impact of image noise and ensures a balanced representation between image objects and text representations. In addition, we design a multi-word decoder that enables ensemble prediction of tags for each word pair. This approach leverages the predicted results of neighboring word pairs, improving the ability to extract multi-word entities. Evaluation results from a series of experiments demonstrate the superiority of our proposed model over state-of-the-art models in JMERE.
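A generic gated fusion of a text representation with a projected visual feature, in the spirit of the gate fusion module described above, might look like the following sketch (dimensions and parameter names are invented for illustration; this is not the paper's architecture).

```python
# Minimal sketch: gate visual features against the text representation so that
# noisy or task-irrelevant image information is down-weighted element-wise.
import numpy as np

rng = np.random.default_rng(0)
d_t, d_v = 8, 6
t = rng.normal(size=d_t)                         # text token representation
v = rng.normal(size=d_v)                         # visual object feature

W_v = rng.normal(size=(d_t, d_v)) * 0.1          # hypothetical projection into text space
W_g = rng.normal(size=(d_t, 2 * d_t)) * 0.1      # hypothetical gate parameters

v_proj = W_v @ v
gate = 1.0 / (1.0 + np.exp(-(W_g @ np.concatenate([t, v_proj]))))  # element-wise gate in (0, 1)
fused = gate * t + (1.0 - gate) * v_proj         # noisy images push the gate toward the text side
print(fused.shape)
```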
Citations: 0
Scalable Semi-Supervised Clustering via Structural Entropy With Different Constraints
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-25 DOI: 10.1109/TKDE.2024.3486530
Guangjie Zeng;Hao Peng;Angsheng Li;Jia Wu;Chunyang Liu;Philip S. Yu
Semi-supervised clustering leverages prior information in the form of constraints to achieve higher-quality clustering outcomes. However, most existing methods struggle with large-scale datasets owing to their high time and space complexity. Moreover, they encounter the challenge of seamlessly integrating various constraints, thereby limiting their applicability. In this paper, we present Scalable Semi-supervised clustering via Structural Entropy (SSSE), a novel method that tackles scalable datasets with different types of constraints from diverse sources to perform both semi-supervised partitioning and hierarchical clustering, which is fully explainable compared to deep learning-based methods. Specifically, we design objectives based on structural entropy, integrating constraints for semi-supervised partitioning and hierarchical clustering. To achieve scalability on data size, we develop efficient algorithms based on graph sampling to reduce the time and space complexity. To achieve generalization on constraint types, we formulate a uniform view for widely used pairwise and label constraints. Extensive experiments on real-world clustering datasets at different scales demonstrate the superiority of SSSE in clustering accuracy and scalability with different constraints. Additionally, cell clustering experiments on single-cell RNA-seq datasets demonstrate the functionality of SSSE for biological data analysis.
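One plausible reading of the "uniform view" over constraint types (an assumption on my part, not the paper's exact formulation) is that label constraints can be rewritten as pairwise must-link/cannot-link constraints, so a single mechanism can serve both, as sketched below.

```python
# Minimal sketch: cast label constraints as pairwise constraints --
# same label -> must-link, different labels -> cannot-link -- and merge them
# with constraints that were already given in pairwise form.
import itertools

labeled = {0: "A", 3: "A", 5: "B"}                 # node -> known label (toy)
must_link, cannot_link = set(), set()
for (u, lu), (v, lv) in itertools.combinations(labeled.items(), 2):
    (must_link if lu == lv else cannot_link).add((u, v))

pairwise = {(1, 2)}                                # constraints given directly as pairs
must_link |= pairwise                              # both sources now share one representation
print(must_link, cannot_link)
```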
Citations: 0
Y-Graph: A Max-Ascent-Angle Graph for Detecting Clusters
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-24 DOI: 10.1109/TKDE.2024.3486221
Junyi Guan;Sheng Li;Xiongxiong He;Jiajia Chen;Yangyang Zhao;Yuxuan Zhang
Graph clustering techniques are highly effective in detecting complex-shaped clusters, and graph building is a crucial step in them. Nevertheless, building a reasonable graph that exhibits high connectivity within clusters and low connectivity across clusters is challenging. Herein, we design a max-ascent-angle graph called the “Y-graph”, a highly sparse graph that automatically allocates dense edges within clusters and sparse edges across clusters, regardless of their shapes or dimensionality. In the graph, every point $x$ is allowed to connect its nearest higher-density neighbor $\delta$ and another higher-density neighbor $\gamma$ such that the angle $\angle \delta x\gamma$ is the largest, called the “max-ascent-angle”. By seeking the max-ascent-angle, points are automatically connected into the Y-graph, a reasonable graph that can effectively balance intra-cluster connectivity and inter-cluster non-connectivity. Besides, an edge weight function is designed to capture the similarity of the neighbor probability distributions, which effectively represents the density connectivity between points. By employing the Normalized-Cut (Ncut) technique, an Ncut-Y algorithm is proposed. Benefiting from the excellent performance of the Y-graph, Ncut-Y can quickly seek and cut the edges located in the low-density boundaries between clusters, thereby capturing clusters effectively. Experimental results on both synthetic and real datasets demonstrate the effectiveness of Y-graph and Ncut-Y.
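The connection rule is concrete enough to sketch directly from the abstract; in the sketch below the density estimator is a simple kNN-based stand-in, which is an assumption and not necessarily the estimator used in the paper.

```python
# Minimal sketch of the max-ascent-angle rule: each point x connects to its nearest
# higher-density neighbor delta, plus the higher-density neighbor gamma that maximizes
# the angle delta-x-gamma. Density is a simple kNN estimate (assumed, for illustration).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
density = 1.0 / (np.sort(D, axis=1)[:, 1:6].mean(axis=1) + 1e-12)   # kNN-based density

edges = []
for i in range(len(X)):
    higher = np.where(density > density[i])[0]
    if len(higher) == 0:                     # the global density peak gets no ascent edge
        continue
    delta = higher[np.argmin(D[i, higher])]  # nearest higher-density neighbor
    best_gamma, best_angle = None, -1.0
    for g in higher:
        if g == delta:
            continue
        u, v = X[delta] - X[i], X[g] - X[i]
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        angle = np.arccos(np.clip(cos, -1.0, 1.0))
        if angle > best_angle:
            best_angle, best_gamma = angle, g
    edges.append((i, delta))
    if best_gamma is not None:
        edges.append((i, best_gamma))        # the "max-ascent-angle" edge
print(len(edges))
```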
Citations: 0
PUMA: Efficient Continual Graph Learning for Node Classification With Graph Condensation
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-24 DOI: 10.1109/TKDE.2024.3485691
Yilun Liu;Ruihong Qiu;Yanran Tang;Hongzhi Yin;Zi Huang
When handling streaming graphs, existing graph representation learning models encounter a catastrophic forgetting problem, where the knowledge these models previously learned is easily overwritten when learning from newly incoming graphs. In response, Continual Graph Learning (CGL) has emerged as a novel paradigm enabling graph representation learning to move from static to streaming graphs. Our prior work, Condense and Train (CaT) (Liu et al. 2023), is a replay-based CGL framework with a balanced continual learning procedure, which designs a small yet effective memory bank for replaying data by condensing incoming graphs. Although CaT alleviates the catastrophic forgetting problem, three issues remain: (1) the graph condensation algorithm derived in CaT only focuses on labelled nodes while neglecting the abundant information carried by unlabelled nodes; (2) the continual training scheme of CaT overemphasizes previously learned knowledge, limiting the model's capacity to learn from newly added memories; (3) both the condensation process and the replaying process of CaT are time-consuming. In this paper, we propose a PsUdo-label guided Memory bAnk (PUMA) CGL framework, extending CaT to enhance its efficiency and effectiveness by overcoming the above-mentioned weaknesses and limits. To fully exploit the information in a graph, PUMA expands the coverage of nodes during graph condensation to both labelled and unlabelled nodes. Furthermore, a training-from-scratch strategy is proposed to upgrade the previous continual learning scheme for balanced training between the historical and new graphs. Besides, PUMA uses one-time propagation and wide graph encoders to accelerate the graph condensation and graph encoding processes in the training stage, improving the efficiency of the whole framework. Extensive experiments on seven datasets for the node classification task demonstrate state-of-the-art performance and efficiency over existing methods.
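At a schematic level (toy stand-ins only; graph condensation and GNN training are abstracted away as callables, so this is a sketch of the replay pattern described above rather than PUMA itself), the condense-then-retrain-from-scratch loop looks like this.

```python
# Minimal sketch: condense each incoming graph into a small memory, then retrain the
# model from scratch on the whole memory bank so old and new tasks are weighted evenly.
from typing import Callable, List

def continual_loop(graph_stream, condense: Callable, train_from_scratch: Callable):
    memory_bank: List[object] = []
    models = []
    for g in graph_stream:
        memory_bank.append(condense(g))                  # small condensed graph per task
        models.append(train_from_scratch(memory_bank))   # balanced training over all memories
    return models[-1] if models else None

# Toy stand-ins so the sketch runs end-to-end.
graphs = ["G1", "G2", "G3"]
model = continual_loop(graphs, condense=lambda g: f"mem({g})",
                       train_from_scratch=lambda bank: tuple(bank))
print(model)
```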
Citations: 0
Domain Adversarial Active Learning for Domain Generalization Classification
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-24 DOI: 10.1109/TKDE.2024.3486204
Jianting Chen;Ling Ding;Yunxiao Yang;Zaiyuan Di;Yang Xiang
Domain generalization (DG) tasks aim to learn cross-domain models from source domains and apply them to unknown target domains. Recent research has demonstrated that diverse and rich source domain samples can enhance domain generalization capability. This work argues that the impact of each sample on the model's generalization ability varies: even a small-scale but high-quality dataset can achieve a notable level of generalization. Motivated by this, we propose a domain-adversarial active learning (DAAL) algorithm for classification tasks in DG. First, our analysis shows that the objective of DG tasks is to maximize the inter-class distance within the same domain and minimize the intra-class distance across different domains. We design a domain-adversarial selection method that prioritizes challenging samples in an active learning (AL) framework. Second, we hypothesize that even in a converged model, some feature subsets lack discriminatory power within each domain. We develop a method to identify and optimize these feature subsets, thereby maximizing the inter-class distance of features. Lastly, we experimentally compare our DAAL algorithm with various DG and AL algorithms across four datasets. The results demonstrate that the DAAL algorithm can achieve strong generalization ability with fewer data resources, thereby significantly reducing data annotation costs in DG tasks.
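The two quantities the analysis centres on can be made concrete with toy embeddings (random data, purely illustrative, not the paper's procedure): inter-class centroid distance within one domain, which DG training should maximize, and intra-class centroid distance across domains, which it should minimize.

```python
# Minimal sketch: compute the two distances named in the abstract from
# class/domain centroids of toy feature vectors.
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
classes = rng.integers(0, 3, size=100)
domains = rng.integers(0, 2, size=100)

def centroid(mask):
    return feats[mask].mean(axis=0)

# Inter-class distance inside domain 0 (to be maximized).
inter = np.mean([np.linalg.norm(centroid((domains == 0) & (classes == a)) -
                                centroid((domains == 0) & (classes == b)))
                 for a in range(3) for b in range(a + 1, 3)])
# Intra-class distance of class 0 across the two domains (to be minimized).
intra = np.linalg.norm(centroid((domains == 0) & (classes == 0)) -
                       centroid((domains == 1) & (classes == 0)))
print(f"inter-class (within domain): {inter:.3f}, intra-class (across domains): {intra:.3f}")
```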
Citations: 0
Hierarchy-Aware Adaptive Graph Neural Network
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-24 DOI: 10.1109/TKDE.2024.3485736
Dengsheng Wu;Huidong Wu;Jianping Li
Graph Neural Networks (GNNs) have gained attention for their ability to capture node interactions and generate node representations. However, their performance is frequently limited on real-world directed networks with natural hierarchical structures. Most current GNNs incorporate information from immediate neighbors or within predefined receptive fields, potentially overlooking long-range dependencies inherent in hierarchical structures. They also tend to neglect node adaptability, which varies with a node's position. To address these limitations, we propose a new model called Hierarchy-Aware Adaptive Graph Neural Network (HAGNN) to adaptively capture hierarchical long-range dependencies. Technically, HAGNN creates a hierarchical structure based on directional pair-wise node interactions, revealing underlying hierarchical relationships among nodes. The inferred hierarchy helps to identify certain key nodes, named Source Hubs in our research, which serve as hierarchical contexts for individual nodes. Shortcuts adaptively connect these Source Hubs with distant nodes, enabling efficient message passing for informative long-range interactions. Through comprehensive experiments across multiple datasets, our proposed model outperforms several baseline methods, establishing a new state of the art in performance. Further analysis demonstrates the effectiveness of our approach in capturing relevant adaptive hierarchical contexts, leading to improved and explainable node representations.
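As a generic illustration of deriving a hierarchy from directional pair-wise interactions (standard topological levelling on an assumed acyclic toy graph, not HAGNN's actual mechanism), each node can be assigned a level by its longest distance from source nodes; levels of this kind are the sort of structure from which key hub nodes could then be identified.

```python
# Minimal sketch: hierarchy levels of a directed acyclic interaction graph via a
# Kahn-style topological sweep (level = longest path from any source node).
from collections import defaultdict, deque

edges = [("a", "c"), ("b", "c"), ("c", "d"), ("c", "e"), ("e", "f")]
nodes = {n for e in edges for n in e}
indeg, succ = defaultdict(int), defaultdict(list)
for u, v in edges:
    succ[u].append(v)
    indeg[v] += 1

level = {n: 0 for n in nodes}
queue = deque(n for n in nodes if indeg[n] == 0)     # sources start at level 0
while queue:
    u = queue.popleft()
    for v in succ[u]:
        level[v] = max(level[v], level[u] + 1)
        indeg[v] -= 1
        if indeg[v] == 0:
            queue.append(v)
print(level)
```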
Citations: 0
Efficient Projection-Based Algorithms for Tip Decomposition on Dynamic Bipartite Graphs
IF 8.9 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-10-24 DOI: 10.1109/TKDE.2024.3486310
Tongfeng Weng;Yumeng Liu;Mo Sha;Xinyuan Chen;Xu Zhou;Kenli Li;Kian-Lee Tan
This paper addresses the pressing need for effective k-tips decomposition in dynamic bipartite graphs, a crucial aspect of real-time applications that analyze and mine binary relationship patterns. Recognizing the dynamic nature of these graphs, our study is the first to provide a solution for k-tips decomposition in such evolving environments. We introduce a pioneering projection-based algorithm, coupled with advanced incremental maintenance strategies for edge modifications, tailored specifically for dynamic graphs. This novel approach not only fills a significant gap in the analysis of dynamic bipartite graphs but also substantially enhances the accuracy and timeliness of data-driven decisions in critical areas like public health. Our contributions set a new benchmark in the field, paving the way for more nuanced and responsive analyses in various domains reliant on dynamic data interpretation.
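For background (a naive static illustration, not the paper's dynamic algorithm): tip decomposition peels vertices of one side of a bipartite graph according to their butterfly (2×2 biclique) counts, so the core primitive is per-vertex butterfly counting, sketched below on a toy graph.

```python
# Minimal sketch: count butterflies per left vertex of a bipartite graph.
# Each pair of left vertices contributes C(shared, 2) butterflies, where "shared"
# is their number of common right neighbours.
from itertools import combinations
from math import comb

adj = {"u1": {"a", "b", "c"}, "u2": {"a", "b"}, "u3": {"b", "c"}, "u4": {"d"}}

butterflies = {u: 0 for u in adj}
for u, w in combinations(adj, 2):
    shared = len(adj[u] & adj[w])          # common right neighbours
    count = comb(shared, 2)                # each pair of shared neighbours forms one butterfly
    butterflies[u] += count
    butterflies[w] += count

# Tip decomposition would then iteratively peel the left vertex with the fewest
# butterflies, in the same spirit as k-core peeling.
print(butterflies)
```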
Citations: 0