
Latest Publications in IEEE Transactions on Big Data

Core Maintenance on Dynamic Graphs: A Distributed Approach Built on H-Index
IF 7.5 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-01-11 · DOI: 10.1109/TBDATA.2024.3352973
Qiang-Sheng Hua;Hongen Wang;Hai Jin;Xuanhua Shi
Core number is an essential tool for analyzing graph structure. Graphs in the real world are typically large and dynamic, which calls for distributed algorithms to avoid expensive I/O operations and for maintenance algorithms to handle dynamism. Core maintenance updates the core number of each vertex upon the insertion/deletion of vertices/edges. Although the state-of-the-art distributed maintenance algorithm (Weng et al. 2022) can handle multiple edge insertions/deletions simultaneously, it has two shortcomings. (I) Parallel processing is not allowed when inserting/removing edges with the same core number, which reduces the degree of parallelism and raises the number of rounds. (II) During the implementation phase, only one thread is assigned to the vertices with the same core number, so the distributed computing power cannot be fully utilized. Furthermore, the h-index-based (Lü et al. 2016) distributed core decomposition algorithm (Montresor et al. 2013) can fully utilize the distributed computing power, since all vertices can be processed in parallel. However, it requires all vertices to recompute their core numbers upon graph changes. In this article, we propose a distributed core maintenance algorithm based on the h-index, which circumvents the issues of the algorithm of Weng et al. (2022). In addition, our algorithm avoids recalculating core numbers that do not change. Compared with the state-of-the-art distributed maintenance algorithm (Weng et al. 2022), the time speedup ratio is at least 100 for both insertion and deletion. Compared with the distributed core decomposition algorithm (Montresor et al. 2013), the average time speedup ratios are 2 and 8 for insertion and deletion, respectively.
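The h-index update at the heart of the cited core decomposition algorithm (Montresor et al. 2013) is compact enough to sketch: every vertex starts from its degree and repeatedly replaces its estimate with the h-index of its neighbors' estimates; the fixed point is exactly the core number. A minimal single-process sketch of this update rule (the distributed version runs the same per-vertex update in parallel; function names here are ours):

```python
def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    vals = sorted(values, reverse=True)
    h = 0
    for i, v in enumerate(vals, start=1):
        if v >= i:
            h = i
        else:
            break
    return h

def core_numbers_by_h_index(adj):
    """Iterate x(v) <- H(x(u) for neighbors u), starting from x(v) = deg(v);
    the fixed point of this monotone update is the core number of each vertex."""
    est = {v: len(nbrs) for v, nbrs in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in adj:
            new = h_index(est[u] for u in adj[v])
            if new < est[v]:
                est[v] = new
                changed = True
    return est

# A triangle {1, 2, 3} (core number 2) with a pendant vertex 4 (core number 1)
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(core_numbers_by_h_index(adj))  # {1: 2, 2: 2, 3: 2, 4: 1}
```

The maintenance problem the paper tackles is precisely avoiding this full recomputation when only a few edges change.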
Online Heterogeneous Streaming Feature Selection Without Feature Type Information
IF 7.5 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-01-08 · DOI: 10.1109/TBDATA.2024.3350630
Peng Zhou;Yunyun Zhang;Zhaolong Ling;Yuanting Yan;Shu Zhao;Xindong Wu
Feature selection aims to select an optimal minimal feature subset from the original dataset and has become an indispensable preprocessing step before data mining and machine learning, especially in the era of Big Data. In practice, however, features may be generated dynamically and arrive individually over time; we call these streaming features. Most existing streaming feature selection methods assume that all dynamically generated features are of the same type, or that the type of each newly arriving feature is known in advance, which is unreasonable and unrealistic. Therefore, this paper studies the practical problem of Online Heterogeneous Streaming Feature Selection without feature type information before learning, named OHSFS. Specifically, we first model the streaming feature selection issue as a minimax problem. Then, based on MIC (Maximal Information Coefficient), we derive a new metric $MIC_{Gain}$ to determine whether a newly arriving streaming feature should be selected. To improve the efficiency of OHSFS, we present the metric $MIC_{Cor}$, which can directly discard low-correlation features. Finally, extensive experimental results indicate the effectiveness of OHSFS. Moreover, OHSFS is nonparametric and does not need to know feature types before learning, which aligns with practical application needs.
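The streaming decision loop described above can be sketched in a few lines. This is not the paper's algorithm: MIC needs a dedicated estimator (e.g. the minepy library), so absolute Pearson correlation stands in for it, and the gain/discard rules are simplified analogues with hypothetical names:

```python
import numpy as np

def abs_corr(x, y):
    """Absolute Pearson correlation, a cheap stand-in for MIC here."""
    x = x - x.mean()
    y = y - y.mean()
    denom = np.sqrt((x ** 2).sum() * (y ** 2).sum())
    return 0.0 if denom == 0 else float(abs((x * y).sum() / denom))

def stream_select(features, label, low_thresh=0.1):
    """Process features one at a time as they arrive (streaming setting):
    discard a feature immediately if its relevance to the label is below
    low_thresh (the fast-discard role the paper gives MIC_Cor); otherwise
    keep it only if no already-selected feature correlates with it more
    strongly than the label does (a crude analogue of the MIC_Gain check)."""
    selected = []
    for name, x in features:
        rel = abs_corr(x, label)
        if rel < low_thresh:
            continue  # fast discard of low-correlation features
        if all(abs_corr(x, fx) < rel for _, fx in selected):
            selected.append((name, x))
    return [name for name, _ in selected]
```

Note the streaming constraint: each feature is judged once, on arrival, using only the features already selected.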
Enhanced Multi-Scale Features Mutual Mapping Fusion Based on Reverse Knowledge Distillation for Industrial Anomaly Detection and Localization
IF 7.5 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-01-08 · DOI: 10.1109/TBDATA.2024.3350539
Guoxiang Tong;Quanquan Li;Yan Song
Unsupervised anomaly detection methods based on knowledge distillation have exhibited promising results. However, there is still room for improvement in the differential characterization of anomalous samples. In this article, a novel anomaly detection and localization model based on reverse knowledge distillation is proposed, in which an enhanced multi-scale feature mutual mapping fusion module extracts discrepant features at different scales. This module enhances the difference in anomaly-region representation in the teacher-student structure by inhomogeneously fusing features at different levels. Then, the coordinate attention mechanism is introduced into the reverse distillation structure to focus on dominant features, providing direction guidance and position encoding. Furthermore, an innovative single-category embedding memory bank, inspired by human memory mechanisms, is developed to normalize single-category embeddings and encourage high-quality model reconstruction. Finally, on the well-known MVTec dataset, our model achieves better results than state-of-the-art models in terms of AUROC and PRO, with overall averages of 98.1%, 98.3%, and 95.0% for detection AUROC, localization AUROC, and localization PRO, respectively, across 15 categories. An extensive ablation study validates the contribution of each component of the model.
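The localization principle behind teacher-student (reverse) distillation is that anomalies appear where the student fails to reproduce the teacher's features. A minimal numpy sketch of that discrepancy map, assuming per-scale feature tensors of shape (C, H, W); the paper's mutual-mapping fusion module is far more elaborate than this plain average over scales:

```python
import numpy as np

def nearest_resize(m, size):
    """Nearest-neighbour upsample of a 2-D map to (size, size)."""
    H, W = m.shape
    rows = np.arange(size) * H // size
    cols = np.arange(size) * W // size
    return m[np.ix_(rows, cols)]

def anomaly_map(teacher_feats, student_feats, out_size):
    """Per-pixel anomaly score: 1 - cosine similarity between teacher and
    student feature maps at each scale, upsampled to a common resolution
    and averaged. High values mark regions the student cannot reconstruct,
    i.e. candidate anomalies."""
    maps = []
    for t, s in zip(teacher_feats, student_feats):
        num = (t * s).sum(axis=0)                                  # (H, W)
        den = np.linalg.norm(t, axis=0) * np.linalg.norm(s, axis=0) + 1e-8
        maps.append(nearest_resize(1.0 - num / den, out_size))
    return np.mean(maps, axis=0)
```

If the student matches the teacher exactly, the map is (numerically) zero everywhere; maximal disagreement scores 2.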
Scalable Unsupervised Hashing via Exploiting Robust Cross-Modal Consistency
IF 7.5 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-01-08 · DOI: 10.1109/TBDATA.2024.3350541
Xingbo Liu;Jiamin Li;Xiushan Nie;Xuening Zhang;Shaohua Wang;Yilong Yin
Unsupervised cross-modal hashing has received increasing attention because of its efficiency and scalability for large-scale data retrieval and analysis. However, existing unsupervised cross-modal hashing methods primarily focus on learning a shared feature embedding, ignoring robustness and consistency across different modalities. To this end, this study proposes a novel method called scalable unsupervised hashing (SUH) for large-scale cross-modal retrieval. In the proposed method, latent semantic information and common semantic embedding within heterogeneous data are exploited simultaneously using multimodal clustering and collective matrix factorization, respectively. Furthermore, the robust norm is seamlessly integrated into the two processes, making SUH insensitive to outliers. Based on the robust consistency exploited from the latent semantic information and feature embedding, hash codes can be learned discretely to avoid cumulative quantization loss. The experimental results on five benchmark datasets demonstrate the effectiveness of the proposed method under various scenarios.
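The core idea — map both modalities into one shared latent space, then binarize — can be illustrated in a toy form. This is emphatically not SUH: a plain SVD of the concatenated features stands in for its multimodal clustering, collective matrix factorization, robust norm, and discrete optimization; it shows only the "shared embedding → binary codes" step:

```python
import numpy as np

def cross_modal_hash(X_img, X_txt, n_bits):
    """Toy cross-modal hashing sketch: learn one latent embedding from the
    concatenated (paired) image and text features and binarize with sign."""
    Z = np.hstack([X_img, X_txt])             # (n_samples, d_img + d_txt)
    Z = Z - Z.mean(axis=0)                    # center the joint features
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    emb = U[:, :n_bits] * S[:n_bits]          # top-n_bits latent embedding
    return (emb > 0).astype(np.uint8)         # one n_bits binary code per sample
```

Binarizing a continuous embedding this way incurs exactly the quantization loss that SUH's discrete code learning is designed to avoid accumulating.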
Few-Shot Learning With Multi-Granularity Knowledge Fusion and Decision-Making
IF 7.5 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-01-08 · DOI: 10.1109/TBDATA.2024.3350542
Yuling Su;Hong Zhao;Yifeng Zheng;Yu Wang
Few-shot learning (FSL) is the challenging task of classifying new classes from few labelled examples. Many existing models embed class structural knowledge as prior knowledge to strengthen FSL against data scarcity. However, they fall short of connecting the class structural knowledge with the limited visual information, which plays a decisive role in FSL model performance. In this paper, we propose a unified FSL framework with multi-granularity knowledge fusion and decision-making (MGKFD) to overcome this limitation. We aim to explore the visual information and structural knowledge simultaneously, in a mutually reinforcing way, to enhance FSL. On the one hand, we tightly connect global and local visual information with multi-granularity class knowledge to explore intra-image and inter-class relationships, generating specific multi-granularity class representations from limited images. On the other hand, a weight fusion strategy is introduced to integrate multi-granularity knowledge and visual information to make the FSL classification decision. This enables models to learn more effectively from limited labelled examples and to generalize to new classes. Moreover, considering varying erroneous predictions, a hierarchical loss is built on the structural knowledge to minimize the classification loss, where greater degrees of misclassification are penalized more heavily. Experimental results on three benchmark datasets show the advantages of MGKFD over several advanced models.
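The "greater misclassification penalized more" idea admits a compact illustration: weight each class's predicted probability by its distance to the target class in the class hierarchy. This is only the distance-weighted-penalty idea, not MGKFD's actual loss; `tree_dist` is a hypothetical precomputed matrix of class-tree distances:

```python
import numpy as np

def hierarchical_loss(probs, target, tree_dist):
    """Expected hierarchy-distance cost of a prediction: probs is the
    predicted class distribution, tree_dist[i, j] the distance between
    classes i and j in the class hierarchy. Mass placed on classes far
    from the target (coarse mistakes) costs more than mass on siblings."""
    return float((probs * tree_dist[target]).sum())
```

A perfect prediction costs 0; confusing the target with a distant class costs its full tree distance, so the loss ranks a near-miss above a coarse mistake.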
SCOREH+: A High-Order Node Proximity Spectral Clustering on Ratios-of-Eigenvectors Algorithm for Community Detection
IF 7.2 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-12-25 · DOI: 10.1109/TBDATA.2023.3346715
Yanhui Zhu;Fang Hu;Lei Hsin Kuo;Jia Liu
The research on complex networks has achieved significant progress in revealing the mesoscopic features of networks. Community detection is an important aspect of understanding real-world complex systems. In this paper, we present a High-order node proximity Spectral Clustering on Ratios-of-Eigenvectors (SCOREH+) algorithm for locating communities in complex networks. The algorithm improves upon SCORE and SCORE+ and preserves high-order transitivity information of the network affinity matrix. We optimize the high-order proximity matrix from the initial affinity matrix using Radial Basis Functions (RBFs) and the Katz index. In addition to optimizing the Laplacian matrix, we implement a procedure that joins an additional eigenvector (the $(k+1)$-th leading eigenvector) to the spectral domain for clustering if the network is considered to be a "weak-signal" graph. The algorithm has been successfully applied to both real-world and synthetic data sets. The proposed algorithm is compared with state-of-the-art algorithms such as ASE, Louvain, Fast-Greedy, Spectral Clustering (SC), SCORE, and SCORE+. To demonstrate its efficacy, we conducted comparison experiments on eleven real-world networks and a number of synthetic networks with noise. The experimental results on most of these networks demonstrate that SCOREH+ outperforms the baseline methods. Moreover, by tuning the RBFs and their shaping parameters, we can generate state-of-the-art community structures on all real-world networks and even on noisy synthetic networks.
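The ratios-of-eigenvectors device that SCOREH+ inherits from SCORE is easy to sketch for two communities: divide the second leading eigenvector of the adjacency matrix entrywise by the first, which cancels degree heterogeneity, then split by sign. This shows only the base SCORE idea; SCOREH+'s high-order proximity matrix (RBFs + Katz index) and the optional $(k+1)$-th eigenvector are not included:

```python
import numpy as np

def score_communities(A):
    """Minimal two-community SCORE sketch on adjacency matrix A: the
    leading eigenvector carries degree information, so the entrywise
    ratio v2 / v1 removes degree effects and its sign splits the nodes.
    For k > 2 one would k-means the (k-1) ratio columns instead."""
    vals, vecs = np.linalg.eigh(A)            # eigh: ascending eigenvalues
    order = np.argsort(vals)[::-1]            # leading eigenvalues first
    v1 = vecs[:, order[0]].copy()
    v2 = vecs[:, order[1]]
    v1[v1 == 0] = 1e-12                       # guard against division by zero
    return (v2 / v1 > 0).astype(int)          # ratios-of-eigenvectors split
```

On two 4-cliques joined by a single bridge edge, the sign of the ratio recovers the two blocks exactly.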
Learning Causal Chain Graph Structure via Alternate Learning and Double Pruning
IF 7.5 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2023-12-25 · DOI: 10.1109/TBDATA.2023.3346712
Shujing Yang;Fuyuan Cao;Kui Yu;Jiye Liang
Causal chain graphs model the dependency structure between individuals when the assumption of individual independence in causal inference is violated. However, causal chain graphs are often unknown in practice and must be learned from data. Existing learning algorithms have certain limitations. Specifically, learning local information requires multiple subset searches, building the skeleton requires additional conditional independence tests, and orienting the edges requires obtaining local information from the skeleton again. To remedy these problems, we propose a novel algorithm for learning causal chain graph structure. The algorithm alternately learns the adjacencies and spouses of each variable as local information and doubly prunes them to obtain more accurate local information, which reduces subset searches, improves accuracy, and facilitates subsequent learning. It then constructs the chain graph skeleton directly from the learned adjacencies, without conditional independence tests. Finally, it orients the edges of complexes using the learned adjacencies and spouses to learn chain graphs without reacquiring local information, further improving efficiency. We conduct a theoretical analysis to prove the correctness of our algorithm and compare it with state-of-the-art algorithms on synthetic and real-world datasets. The experimental results demonstrate that our algorithm is more reliable than its rivals.
当因果推理中的个体独立性假设被违反时,因果链图可以模拟个体之间的依赖结构。然而,因果链图在实践中往往是未知的,需要从数据中学习。现有的学习算法有一定的局限性。具体来说,学习局部信息需要多次子集搜索,构建骨架需要额外的条件独立性测试,而引导边缘则需要再次从骨架中获取局部信息。为了解决这些问题,我们提出了一种学习因果链图结构的新算法。该算法将每个变量的邻接关系和配偶关系作为局部信息交替学习,并对其进行双重修剪,以获得更准确的局部信息,从而减少子集搜索,提高准确性,方便后续学习。然后,它利用学习到的邻接关系直接构建链图骨架,而无需进行条件独立性测试。最后,它利用学习到的邻接和配偶来引导复合物的边,从而在不重新获取局部信息的情况下学习链图,进一步提高了效率。我们进行了理论分析以证明我们算法的正确性,并在合成数据集和实际数据集上与最先进的算法进行了比较。实验结果表明,我们的算法比竞争对手更可靠。
{"title":"Learning Causal Chain Graph Structure via Alternate Learning and Double Pruning","authors":"Shujing Yang;Fuyuan Cao;Kui Yu;Jiye Liang","doi":"10.1109/TBDATA.2023.3346712","DOIUrl":"https://doi.org/10.1109/TBDATA.2023.3346712","url":null,"abstract":"Causal chain graphs model the dependency structure between individuals when the assumption of individual independence in causal inference is violated. However, causal chain graphs are often unknown in practice and require learning from data. Existing learning algorithms have certain limitations. Specifically, learning local information requires multiple subset searches, building the skeleton requires additional conditional independence testing, and directing the edges requires obtaining local information from the skeleton again. To remedy these problems, we propose a novel algorithm for learning causal chain graph structure. The algorithm alternately learns the adjacencies and spouses of each variable as local information and doubly prunes them to obtain more accurate local information, which reduces subset searches, improves its accuracy, and facilitates subsequent learning. It then directly constructs the chain graphs skeleton using the learned adjacencies without conditional independence testing. Finally, it directs the edges of complexes using the learned adjacencies and spouses to learn chain graphs without reacquiring local information, further improving its efficiency. We conduct theoretical analysis to prove the correctness of our algorithm and compare it with the state-of-the-art algorithms on synthetic and real-world datasets. 
The experimental results demonstrate our algorithm is more reliable than its rivals.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"10 4","pages":"442-456"},"PeriodicalIF":7.5,"publicationDate":"2023-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141602541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
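Constraint-based structure learners of this family prune a fully connected skeleton with conditional independence (CI) tests. The sketch below is a minimal PC-style skeleton search, not the paper's alternate-learning/double-pruning algorithm: `ci_oracle` is a hand-coded perfect CI oracle for the toy chain A → B → C, standing in for statistical tests on data.

```python
from itertools import combinations

def ci_oracle(x, y, cond):
    """Perfect CI oracle for the chain A -> B -> C: the only conditional
    independence among the three pairs is A ⟂ C given a set containing B."""
    return {x, y} == {"A", "C"} and "B" in cond

def learn_skeleton(variables, ci_test):
    """PC-style skeleton search: start fully connected, then drop edge x-y
    whenever x and y are independent given some subset of x's other
    neighbors, growing the conditioning-set size one step at a time."""
    adj = {v: set(variables) - {v} for v in variables}
    depth = 0
    while any(len(adj[v]) - 1 >= depth for v in variables):
        for x in variables:
            for y in list(adj[x]):
                others = adj[x] - {y}
                if any(ci_test(x, y, set(cond))
                       for cond in combinations(sorted(others), depth)):
                    adj[x].discard(y)
                    adj[y].discard(x)
        depth += 1
    return adj

skeleton = learn_skeleton(["A", "B", "C"], ci_oracle)
edges = sorted({tuple(sorted((x, y))) for x in skeleton for y in skeleton[x]})
```

With the oracle above, the spurious A–C edge is pruned once conditioning sets of size 1 are tried, leaving the true skeleton A–B, B–C. The paper's contribution lies in learning adjacencies and spouses alternately with double pruning, so that far fewer such subset searches are needed and no extra CI tests are run at the skeleton stage.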
Cascaded Knowledge-Level Fusion Network for Online Course Recommendation System
IF 7.5 3区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-12-25 DOI: 10.1109/TBDATA.2023.3346711
Wenjun Ma;Yibing Zhao;Xiaomao Fan
In light of the global spread of the COVID-19 pandemic, public interest in Massive Open Online Courses (MOOCs) has surged recently. Within the realm of personalized course-learning services, a large number of online course recommendation systems have been developed to cater to the diverse needs of learners. Despite these advancements, three challenges remain unsolved: 1) how to effectively utilize course information spanning from the title level down to the more granular keyword level; 2) how to capture the sequential information among learning courses; 3) how to identify highly correlated courses in the course corpora. To address these challenges, we propose a novel solution, the Cascaded Knowledge-level Fusion Network (CKFN), for online course recommendation, which incorporates a three-fold approach to maximize the utilization of course information: 1) two knowledge graphs spanning from the keyword level to the title level; 2) a two-stage attention fusion mechanism; 3) a novel knowledge-aware negative sampling method. Experimental results on a real dataset from XuetangX demonstrate that CKFN surpasses existing baseline models by a substantial margin, achieving state-of-the-art recommendation performance. This means that CKFN can potentially be deployed on MOOC platforms as a pivotal component providing personalized course recommendation services.
IEEE Transactions on Big Data, 10(4): 457-469.
Citations: 0
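The two-stage fusion idea — pool keyword-level embeddings into one course vector, then fuse it with the title-level vector — can be sketched with plain dot-product attention. Everything here is a toy stand-in: the 2-D vectors, the `learner` query, and the unlearned dot-product attention are illustrative assumptions, not CKFN's learned attention networks or knowledge-graph embeddings.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(query, vectors):
    """Dot-product attention: average the vectors, weighted by their
    softmax-normalised similarity to the query."""
    weights = softmax([sum(q * v for q, v in zip(query, vec)) for vec in vectors])
    dim = len(vectors[0])
    return [sum(w * vec[d] for w, vec in zip(weights, vectors)) for d in range(dim)]

def two_stage_fusion(learner, keyword_vecs, title_vec):
    """Stage 1: pool a course's keyword-level embeddings into one vector.
    Stage 2: fuse that pooled vector with the title-level embedding."""
    keyword_level = attend(learner, keyword_vecs)
    return attend(learner, [keyword_level, title_vec])

learner = [1.0, 0.0]                                  # hypothetical learner profile
course = two_stage_fusion(learner,
                          [[0.9, 0.1], [0.2, 0.8]],   # toy keyword embeddings
                          [0.5, 0.5])                 # toy title embedding
```

Because every input vector here sums to 1 and attention takes convex combinations, the fused course representation also sums to 1 and leans toward the keyword content most similar to the learner profile.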
Bi-Selection of Instances and Features Based on Neighborhood Importance Degree
IF 7.5 3区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-12-14 DOI: 10.1109/TBDATA.2023.3342643
Xiao Zhang;Zhaoqian He;Jinhai Li;Changlin Mei;Yanyan Yang
As one of the most important concepts in classification learning, neighborhood granules, obtained by grouping adjacent objects or instances, can be regarded as the minimal elements that simulate human cognition. Neighborhood granules have been successfully applied to knowledge acquisition; nevertheless, little work has been devoted to the simultaneous selection of features and instances using them. To fill this gap, we investigate the bi-selection of instances and features based on neighborhood importance degree (NID). First, the conditional neighborhood entropy is defined to measure the decision uncertainty of a neighborhood granule. Considering both the decision uncertainty and the coverage ability of a neighborhood granule, we propose the concept of NID. Then, an instance selection algorithm is formulated to select representative instances based on NID. Furthermore, an NID-based feature selection algorithm is provided for a neighborhood decision system. By integrating the instance selection and feature selection methods, a bi-selection approach based on NID (BSNID) is finally proposed to select instances and features. Lastly, numerical experiments are conducted to evaluate the performance of BSNID. The results demonstrate that BSNID accounts for both the reduction ratio and the classification accuracy, and therefore performs satisfactorily.
IEEE Transactions on Big Data, 10(4): 415-428.
Citations: 0
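The notion of a neighborhood granule and its decision uncertainty can be sketched directly: build each instance's δ-neighborhood, score it with the Shannon entropy of the labels inside, and keep only instances whose granules are pure. This is a bare-bones illustration (the δ value, the zero entropy threshold, and the toy data are assumptions); the paper's NID additionally weighs a granule's coverage ability, and BSNID couples this with feature selection.

```python
import math

def neighborhood(i, data, delta):
    """δ-neighborhood granule of instance i: all instances whose feature
    vectors lie within Euclidean distance δ of instance i."""
    xi = data[i][0]
    return [j for j, (xj, _) in enumerate(data) if math.dist(xi, xj) <= delta]

def granule_entropy(granule, data):
    """Shannon entropy of the class labels inside one neighborhood granule:
    a rough proxy for its conditional neighborhood entropy."""
    counts = {}
    for j in granule:
        label = data[j][1]
        counts[label] = counts.get(label, 0) + 1
    n = len(granule)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def select_instances(data, delta, max_entropy=0.0):
    """Keep instances whose granule entropy is at most max_entropy:
    low-uncertainty points lying away from the class boundary."""
    return [i for i in range(len(data))
            if granule_entropy(neighborhood(i, data, delta), data) <= max_entropy]

# (feature vector, label); instance 4 sits on the class boundary.
data = [([0.0], "a"), ([0.1], "a"), ([0.9], "b"), ([1.0], "b"), ([0.5], "a")]
kept = select_instances(data, delta=0.45)
```

With δ = 0.45 the boundary instance 4 falls into a mixed granule and is discarded, as is instance 2, whose granule instance 4 pollutes; the low-uncertainty representatives [0, 1, 3] remain.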
FIG: Feature-Weighted Information Granules With High Consistency Rate
IF 7.5 3区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-12-14 DOI: 10.1109/TBDATA.2023.3343348
Jianghe Cai;Yuhui Deng;Yi Zhou;Jiande Huang;Geyong Min
Information granules are effective in revealing the structure of data, so using them to classify datasets is common practice in data mining. In existing granular classifiers, information granules are often built according to the standard membership function alone, without considering the influence of different feature weights on granule quality and label classification results. In this article, we exploit the feature weighting of data to produce information granules with a high consistency rate, called FIG. First, we use the consistency rate and contribution scores to generate information granules. Then, we propose GTC, a granular two-stage classifier based on FIG: it divides the data into fuzzy and fixed points, and in the second stage calculates the interval matching degree to assign each data point to the most suitable cluster. Finally, we compare FIG with two state-of-the-art granular models (T-GrM and FGC-rule), and compare classification accuracy against other classification algorithms. Extensive experiments on synthetic datasets and public UCI datasets show that FIG describes the data structure well and performs excellently under the constructed granular classifier GTC. Compared with T-GrM and FGC-rule, the time overhead required for FIG to obtain information granules is reduced by an average of 51.07%, and the per-unit quality of the granules is increased by more than 14.74%. GTC improves accuracy by an average of 5.04% over the other classification algorithms.
IEEE Transactions on Big Data, 10(4): 400-414.
Citations: 0
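The role of feature weights in granule quality can be illustrated with a tiny weighted-assignment sketch: features that are consistent within clusters get larger weights, and points are matched to granule prototypes under the weighted distance. The weighting scheme here (inverse within-cluster variance) and the toy data are illustrative assumptions, not FIG's consistency-rate and contribution-score construction or GTC's interval matching degree.

```python
import math

def feature_weights(clusters):
    """Weight each feature by the inverse of its total within-cluster
    variance, normalised to sum to 1: consistent features dominate."""
    dim = len(clusters[0][0])
    spread = []
    for d in range(dim):
        s = 0.0
        for pts in clusters:
            mean = sum(p[d] for p in pts) / len(pts)
            s += sum((p[d] - mean) ** 2 for p in pts) / len(pts)
        spread.append(s)
    inv = [1.0 / (s + 1e-9) for s in spread]
    z = sum(inv)
    return [v / z for v in inv]

def assign(point, prototypes, weights):
    """Index of the prototype closest to the point under the
    feature-weighted Euclidean distance."""
    def wdist(p, q):
        return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, p, q)))
    return min(range(len(prototypes)), key=lambda k: wdist(point, prototypes[k]))

# Feature 0 separates the two clusters; feature 1 is noise.
c0 = [[0.0, 0.0], [0.1, 0.9], [0.0, 0.5]]
c1 = [[1.0, 0.1], [0.9, 0.8], [1.0, 0.4]]
weights = feature_weights([c0, c1])
prototypes = [[sum(p[d] for p in c) / len(c) for d in range(2)] for c in (c0, c1)]
label = assign([0.2, 0.9], prototypes, weights)
```

The query point [0.2, 0.9] is far from cluster 0's prototype in the noisy feature, but the weighting downweights that feature, so the point is still assigned to cluster 0 on the strength of the consistent feature.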