
IEEE Transactions on Big Data: Latest Publications

2025 Reviewers List*
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-16 · DOI: 10.1109/TBDATA.2026.3652336
Citations: 0
Dual-Channel Learning Framework for miRNA-Drug Interaction Prediction Based on Structural Features and Signed Bipartite Graph Neural Network
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-04 · DOI: 10.1109/TBDATA.2025.3639954
Xiaoxuan Zhang;Xiujuan Lei;Ling Guo;Ming Chen;Fang-Xiang Wu;Yi Pan
MicroRNAs (miRNAs) play a vital role in regulating a wide range of biological functions and are key players in the development of many complex human diseases, making them novel therapeutic targets for drug development. Given the high expense and time demands of traditional experimental methods, it is essential to develop efficient computational approaches for predicting miRNA-drug interactions (MDIs). This article presents SSMDI, a dual-channel learning framework based on structural features and a Signed Bipartite Graph Neural Network (SBGNN) for predicting MDIs. First, a Graph Isomorphism Network (GIN) is employed to extract molecular graph features of drugs. Meanwhile, a combined framework of a Convolutional Neural Network (CNN), a Bidirectional Long Short-Term Memory (BiLSTM) network, and a self-attention mechanism is used to capture sequence features of miRNAs. Compared with traditional networks, signed networks deliver richer semantic information about drugs and miRNAs; an SBGNN is therefore used to aggregate and update the signed topological features of miRNAs and drugs. Finally, the structural and signed topological features are integrated to predict MDIs. The model's predictive performance is evaluated using 5-fold cross-validation (CV), achieving an AUC of 0.9447 and an AUPR of 0.9238. A case study further demonstrates the effectiveness of SSMDI in predicting MDIs. In summary, the SSMDI model proves to be an accurate tool for predicting MDIs, with significant implications for drug development and miRNA-based therapeutic research.
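The signed-aggregation idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the helper name `signed_aggregate`, the mean aggregation, and all shapes are illustrative assumptions; the key point is that positive and negative miRNA-drug edges are aggregated separately and concatenated, so the sign information survives.

```python
import numpy as np

def signed_aggregate(H_drug, pos_adj, neg_adj):
    """Illustrative SBGNN-style layer for miRNA nodes: mean-aggregate
    drug features separately over positive and negative miRNA-drug
    edges, then concatenate the two views (assumed design, not the
    paper's exact update rule)."""
    deg_p = np.maximum(pos_adj.sum(axis=1, keepdims=True), 1.0)
    deg_n = np.maximum(neg_adj.sum(axis=1, keepdims=True), 1.0)
    h_pos = pos_adj @ H_drug / deg_p   # mean over positive neighbours
    h_neg = neg_adj @ H_drug / deg_n   # mean over negative neighbours
    return np.concatenate([h_pos, h_neg], axis=1)

rng = np.random.default_rng(0)
H_drug = rng.standard_normal((5, 8))            # 5 drugs, 8-dim features
pos = (rng.random((3, 5)) > 0.5).astype(float)  # 3 miRNAs x 5 drugs
neg = (rng.random((3, 5)) > 0.5).astype(float)
H_mirna = signed_aggregate(H_drug, pos, neg)    # shape (3, 16)
```

A downstream predictor would concatenate these signed topological features with the GIN/CNN-BiLSTM structural features before scoring each miRNA-drug pair.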
Citations: 0
Lafa: Unlocking Superior Memory Efficiency via Adaptive Metadata Strategy for Scalable Large-Scale Dataset Loading
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-04 · DOI: 10.1109/TBDATA.2025.3640011
Cong Wang;Yang Luo;Ke Wang;Hui Zhang;Naijie Gu;Ran Zhang;Wenzhuo Du;Fan Yu;Jun Yu
The rapid growth of deep learning models and the increasing demand for large-scale datasets have posed unprecedented challenges for data loading and memory management. Existing frameworks (e.g., PyTorch, TensorFlow) often encounter performance bottlenecks when handling large datasets, resulting in inefficiencies and excessive memory usage. To address these issues, we propose Lafa, a dynamic metadata loading mechanism optimized for efficient large-scale dataset processing. Lafa introduces the .Lafa format and an adaptive loading strategy with three modes to balance memory usage and loading performance, along with a local shuffle approach that reduces memory overhead and computational complexity while preserving data randomness. Experimental results on GPU (RTX 3090) and Ascend 910A platforms demonstrate that Lafa significantly improves memory efficiency compared to existing frameworks. Specifically, for every 10 million samples loaded, Lafa reduces additional memory consumption by a factor of 1.33× to 31.34× across various dataset types, relative to the most memory-efficient baseline among PyTorch, TensorFlow, and MindSpore.
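The local-shuffle idea can be sketched in a few lines. This is a generic windowed shuffle, not Lafa's actual code: the function name, the window parameter, and the buffering claim in the docstring are assumptions; the trade-off it shows (randomness within a bounded buffer instead of a full in-memory permutation) is the one the abstract describes.

```python
import random

def local_shuffle(indices, window, seed=0):
    """Illustrative local shuffle: permute sample indices only within
    fixed-size windows, so at most `window` entries need to be
    buffered at a time while the stream still gets randomised."""
    rng = random.Random(seed)
    out = []
    for start in range(0, len(indices), window):
        block = list(indices[start:start + window])
        rng.shuffle(block)
        out.extend(block)
    return out

order = local_shuffle(list(range(10)), window=4, seed=42)
```

Every index still appears exactly once, but no element ever moves outside its window, which is what bounds the memory overhead.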
Citations: 0
A Fast Linearithmic Graph Clustering Approach for Big Data Using Gravitational Attraction Principle
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-12-04 · DOI: 10.1109/TBDATA.2025.3639917
Mohammad Maksood Akhter;Abdul Atif Khan;Rashmi Maheshwari;Sraban Kumar Mohanty
With the exponential growth of Big Data in domains such as healthcare, genomics, and sensor networks, computationally efficient and effective clustering techniques have become essential for uncovering meaningful patterns. Traditional clustering methods face fundamental limitations in Big Data analysis. K-means is among the fastest known approaches, but it fails to capture non-spherical clusters. Hierarchical clustering can detect arbitrary shapes but suffers from sub-cubic complexity, while many state-of-the-art methods still incur quadratic complexity. Moreover, most existing approaches fail to capture the intrinsic structure of data. In this context, graph-based clustering has emerged as a powerful alternative due to its ability to model geometric relationships and reveal underlying structures. However, existing graph-based techniques typically incur quadratic complexity, limiting their scalability. The objective of this work is to develop a scalable graph-based clustering framework that reduces complexity while preserving clustering quality on large, noisy, and high-dimensional datasets. To achieve this, we propose a fast graph clustering framework with overall complexity $\mathcal{O}(N \lg N)$, where $N$ denotes the number of data points. The method employs a two-stage dispersion-based partitioning to generate cohesive sub-clusters, followed by the construction of a sparse graph on sub-cluster centers to efficiently capture adjacency. Sub-clusters are then merged iteratively using a gravitational-force-inspired attraction model, enabling the discovery of coherent structures with reduced computation. Extensive experiments on 41 multi-scale datasets demonstrate that our method consistently outperforms traditional and state-of-the-art approaches, achieving on average 27.33% higher clustering accuracy while reducing runtime by more than 86.64% on average.
These results highlight both the innovation and the effectiveness of the proposed approach, making it highly suitable for Big Data analytics.
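The gravitational merge criterion can be sketched as follows. The force law $F = m_i m_j / d_{ij}^2$ (mass = sub-cluster size, distance = centre separation) is an assumed reading of "gravitational-force-inspired"; the paper's actual model, constants, and tie-breaking may differ, and a real implementation would use the sparse adjacency graph rather than this brute-force pairwise scan.

```python
import numpy as np

def strongest_attraction(centers, masses):
    """Return the pair of sub-cluster centres with the largest
    assumed gravitational attraction F = m_i * m_j / d_ij^2.
    Brute-force for clarity; the paper restricts candidate pairs
    via a sparse graph on sub-cluster centres."""
    best_f, best_pair = -1.0, None
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            d2 = float(np.sum((centers[i] - centers[j]) ** 2)) + 1e-12
            f = masses[i] * masses[j] / d2
            if f > best_f:
                best_f, best_pair = f, (i, j)
    return best_pair

# Two nearby, heavy sub-clusters and one light, distant outlier:
centers = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
masses = np.array([10.0, 12.0, 1.0])
pair = strongest_attraction(centers, masses)
```

The two close, populous sub-clusters attract each other most strongly, so they are merged first while the distant outlier stays separate.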
Citations: 0
Temporal Recommendation Based on Adaptive Deep Matrix Factorization
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-10-13 · DOI: 10.1109/TBDATA.2025.3621144
Yali Feng;Zhifeng Hao;Wen Wen;Ruichu Cai
Temporal recommendation is an important class of tasks in recommender systems, which focuses on modeling and capturing temporal patterns in user behavior to achieve finer-grained and higher-quality recommendations. In real-world scenarios, users’ temporal behaviors are characterized not only by sequential dependencies among consecutive items, but also by periodic correlations of different items and time-varying similarity of different users. In this paper, we propose an Adaptive Temporal Recommendation (AdaTR) algorithm to capture the inherent features of temporal behaviors and dynamic collaborative signals. First, based on the periodic characteristics of user behaviors, the user-item interactions are counted and aggregated in different time segments across multiple periods, forming the temporal user-item interaction matrix. Then, in order to capture the time-varying collaborative signals between different users, a deep spectral clustering (DSC) method is applied to the temporal user-item interaction matrix, where the original representation of user-item interaction is projected into a latent space and users’ temporal behaviors are clustered into different groups. Furthermore, an Adaptive Deep Matrix Factorization (AdaDMF) module is designed to learn the time-varying representations of user preferences on each cluster of temporal user behaviors, which incorporates dynamic collaborative signals among different users. Finally, we combine users’ short-term and long-term preferences to generate personalized temporal recommendations. Extensive experiments on four datasets demonstrate that AdaTR performs significantly better than the state-of-the-art baselines.
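The first step, building the temporal user-item interaction matrix, can be sketched directly. The event format, segment count, and period length here are illustrative assumptions (e.g. day-of-week segments aggregated across many weeks); the point is that each interaction is bucketed by its position *within* a period, then counts are aggregated *across* periods.

```python
import numpy as np

def temporal_interaction_matrix(events, n_users, n_items, n_segments, period):
    """Bucket (user, item, timestamp) interactions into within-period
    time segments, aggregated across periods. Shapes and the event
    tuple format are illustrative, not the paper's exact encoding."""
    M = np.zeros((n_segments, n_users, n_items))
    for u, i, t in events:
        seg = int((t % period) * n_segments // period)
        M[seg, u, i] += 1.0
    return M

# Two interactions in the same weekly segment (one week apart),
# plus one interaction in a different segment:
events = [(0, 1, 3.0), (0, 1, 3.0 + 7.0), (1, 0, 6.9)]
M = temporal_interaction_matrix(events, n_users=2, n_items=2,
                                n_segments=7, period=7.0)
```

Slicing `M[seg]` then gives one user-item count matrix per time segment, which is what the DSC and AdaDMF stages operate on.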
Citations: 0
Bridging User Dynamic Preferences: A Unified Bridge-Based Diffusion Model for Next POI Recommendation
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-10-07 · DOI: 10.1109/TBDATA.2025.3618453
Jiankai Zuo;Zihao Yao;Yaying Zhang
Next POI recommendation plays a crucial role in delivering personalized location-based services, but it faces significant challenges in capturing complex user behavior and adapting to dynamic interest distributions. Most methods often provide insufficient modeling of implicit features in user trajectories, such as directional transitions and latent edge relationships, which are essential for understanding user behavior. Moreover, existing diffusion models, constrained by Gaussian priors, struggle to handle the diverse and evolving nature of user preferences. The lack of a unified scheduling for noise and sampling also limits the flexibility of diffusion models. In this paper, we propose a Unified Bridge-based Diffusion model (UB-Diff) for the next POI recommendation. UB-Diff incorporates a direction-aware POI transition graph learning, which jointly captures spatio-temporal and directional features. To overcome the limitations of Gaussian priors, we introduce a bridge-based diffusion POI generative model. It can achieve distribution translation from the user’s historical distribution to the target distribution by learning a bridge to associate user behavior with POI recommendation, adapting to dynamic user interests. In the end, we design a novel intermediate function to unify the diffusion process, enabling precise control over noise scheduling and modular optimization. Extensive experiments on five real-world datasets demonstrate the superiority of UB-Diff over advanced baseline methods.
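A bridge diffusion, unlike a Gaussian-prior diffusion, is pinned at both endpoints: it interpolates from a sample of the user's historical distribution to the target rather than from pure noise. A minimal sketch using a standard Brownian-bridge interpolant follows; this specific formula and the noise scale `sigma` are assumptions for illustration, not UB-Diff's actual transition kernel.

```python
import numpy as np

def bridge_sample(x_hist, x_target, t, sigma, rng):
    """Illustrative Brownian-bridge interpolant between a sample of
    the user's historical distribution (t=0) and the target (t=1):
    mean interpolates linearly, noise vanishes at both endpoints."""
    mean = (1.0 - t) * x_hist + t * x_target
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(np.shape(x_hist))

rng = np.random.default_rng(0)
x0 = np.array([1.0, -2.0])   # historical-preference embedding (assumed)
x1 = np.array([0.5, 3.0])    # target POI embedding (assumed)
mid = bridge_sample(x0, x1, t=0.5, sigma=0.1, rng=rng)
```

Because the noise term `sqrt(t * (1 - t))` is zero at t=0 and t=1, the process starts exactly at the historical sample and ends exactly at the target, which is what lets the model translate between the two distributions without a Gaussian prior.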
Citations: 0
Two-Step Nyström Sampling for Large-Scale Kernel Approximation
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-10-07 · DOI: 10.1109/TBDATA.2025.3618472
Li He;Hong Zhang
Nyström approximation is one of the most popular approximation methods to accelerate kernel analysis on large-scale data sets. Nyström employs one single landmark set to obtain eigenvectors (low-rank decomposition) and projects the entire data set to the eigenvectors (embedding). Most existing methods focus on accelerating landmark selection. For extremely large-scale data sets, however, the embedding time cost, rather than that of low-rank decomposition, is critical. In addition, both accuracy and embedding time cost are dominated by the landmark set size. As a result, using more landmarks is the only way to improve accuracy at the cost of extremely high embedding costs. In this paper, we propose a method for the first time to decouple embedding cost from that of low-rank decomposition. We first obtain the eigenvectors from a large landmark set for a low error, and then optimize a small landmark set that minimizes the landmark-set-embedding error to ensure a low embedding cost. In return, our accuracy is close to that of the large landmark set but the small one dominates the embedding time cost. Our method can deal with popular kernels and be plugged into most existing methods. Experimental results demonstrate the superiority of the proposed method.
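For reference, the classical single-landmark-set Nyström pipeline the paper builds on looks like this: eigendecompose the small landmark-landmark kernel, then embed every point through the cross-kernel. The sketch below is the standard method, not the paper's two-step variant; the RBF kernel and its `gamma` are assumptions. The embedding step costs $O(N m d)$ kernel evaluations for $m$ landmarks, which is exactly why the paper decouples the (large) decomposition landmark set from the (small) embedding landmark set.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between row sets X and Y (assumed kernel)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_embed(X, landmarks, k):
    """Classical Nystrom: eigendecompose the landmark kernel W, then
    project all points through the cross-kernel C so that E @ E.T
    approximates the full kernel matrix."""
    W = rbf(landmarks, landmarks)
    C = rbf(X, landmarks)
    vals, vecs = np.linalg.eigh(W)            # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]        # keep top-k
    vals, vecs = vals[order], vecs[:, order]
    return C @ vecs / np.sqrt(np.maximum(vals, 1e-10))

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
E = nystrom_embed(X, X, k=20)   # landmarks = all points => exact recovery
K = rbf(X, X)
```

With the landmark set equal to the full data set the approximation is exact (E @ E.T recovers K); with fewer landmarks it degrades gracefully, and the paper's contribution is keeping a large set for the eigenvectors while optimizing a small one for this final projection.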
Citations: 0
Differential Encoding for Improved Representation Learning Over Graphs
IF 5.7 · CAS Tier 3, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-10-07 · DOI: 10.1109/TBDATA.2025.3618447
Haimin Zhang;Jiaohao Xia;Min Xu
Combining the message-passing paradigm with the global attention mechanism has emerged as an effective framework for learning over graphs. The message-passing paradigm and the global attention mechanism generate node embeddings by taking the sum of information from a node’s local neighbourhood and from the entire graph, respectively. However, this simple summation aggregation fails to distinguish between information from a node itself and information from the node’s neighbours. As a result, information is lost at each layer of embedding generation, and this loss can accumulate and become more severe in deeper model layers. In this paper, we present a differential encoding method to address this information loss. Instead of simply summing to aggregate local or global information, we explicitly encode the difference between the information from a node itself and that from the node’s local neighbours (or from the rest of the graph’s nodes). The obtained differential encoding is then combined with the original aggregated representation to generate the updated node embedding. By incorporating differential encodings, the representational ability of the generated node embeddings is improved, and therefore so is model performance. The differential encoding method is empirically evaluated on different graph tasks on seven benchmark datasets. The results show that it is a general method that improves both the message-passing update and the global attention update, advancing the state-of-the-art performance for graph representation learning on these benchmark datasets.
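The core update can be sketched in a few lines of NumPy. Mean aggregation and plain concatenation are illustrative choices here, not the paper's exact layer: the point is that the difference `X - h_nbr` is computed explicitly and carried forward alongside the aggregated message, so the node-vs-neighbour distinction is not lost in the sum.

```python
import numpy as np

def differential_update(X, adj):
    """Illustrative differential-encoding update: alongside the usual
    aggregated neighbourhood message, explicitly encode the difference
    between a node's own features and that message, and concatenate."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    h_nbr = adj @ X / deg          # standard neighbourhood aggregation
    diff = X - h_nbr               # differential encoding
    return np.concatenate([h_nbr, diff], axis=1)

# Tiny 3-node graph: node 0 connected to nodes 1 and 2.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = differential_update(X, adj)
```

The same pattern applies to the global-attention branch by replacing `h_nbr` with the attention-weighted summary of the remaining graph nodes.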
Citations: 0
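The differential-encoding idea in the abstract above (keep the summed aggregate, but also explicitly encode the difference between a node's own state and its aggregated neighbourhood, then mix the two) can be sketched in a few lines. This is a minimal illustration under assumptions: the plain-sum aggregation, the mixing weight `w_diff`, and the toy graph are not the authors' actual architecture.

```python
def differential_encoding_update(h_self, h_neigh_sum, w_diff=0.5):
    """One hypothetical message-passing update with differential encoding.

    Instead of returning only the summed aggregate, the difference between
    the node's own features and the neighbourhood aggregate is encoded
    explicitly and mixed back in, so the update can tell the two
    information sources apart (a sketch of the paper's idea, not the
    authors' exact layer).
    """
    aggregate = [a + b for a, b in zip(h_self, h_neigh_sum)]      # plain sum
    differential = [a - b for a, b in zip(h_self, h_neigh_sum)]   # difference encoding
    return [s + w_diff * d for s, d in zip(aggregate, differential)]

# Toy graph: node 0 has neighbours 1 and 2, each with a 2-d feature vector.
h = {0: [1.0, 0.0], 1: [0.5, 0.5], 2: [0.0, 1.0]}
neigh_sum = [a + b for a, b in zip(h[1], h[2])]   # sum over node 0's neighbours
updated = differential_encoding_update(h[0], neigh_sum)
```

With plain summation, `[1.0, 0.0]` and `[0.5, 1.5]` would collapse into a single vector; the difference term preserves which part came from the node itself.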
Fast Convergent Federated Learning via Decaying SGD Updates
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-10-06 DOI: 10.1109/TBDATA.2025.3618454
Md Palash Uddin;Yong Xiang;Mahmudul Hasan;Yao Zhao;Youyang Qu;Longxiang Gao
Federated Learning (FL), a groundbreaking approach for collaborative model training across decentralized devices, maintains data privacy while constructing a decent global machine learning model. Conventional FL methods typically demand more communication rounds to achieve convergence in non-Independent and non-Identically Distributed (non-IID) data scenarios due to their reliance on fixed Stochastic Gradient Descent (SGD) updates at each Communication Round (CR). In this paper, we introduce a novel strategy to expedite the convergence of FL models, inspired by the insights from McMahan et al.’s seminal work. We focus on FL convergence via traditional SGD decay by introducing a dynamic adjusting mechanism for local epochs and local batch size. Our method adapts the decay of SGD updates during the training process, akin to decaying learning rates in classical optimization. Particularly, by adaptively reducing local epochs and increasing local batch size using their ongoing values and the CR as the model progresses, our method enhances convergence speed without compromising accuracy, specifically by effectively addressing challenges posed by non-IID data. We provide theoretical results of the benefits of the dynamic decay of SGD updates in FL scenarios. We demonstrate our method’s consistent outperformance regarding the global model’s communication speedup and convergence behavior through comprehensive experiments.
{"title":"Fast Convergent Federated Learning via Decaying SGD Updates","authors":"Md Palash Uddin;Yong Xiang;Mahmudul Hasan;Yao Zhao;Youyang Qu;Longxiang Gao","doi":"10.1109/TBDATA.2025.3618454","DOIUrl":"https://doi.org/10.1109/TBDATA.2025.3618454","url":null,"abstract":"Federated Learning (FL), a groundbreaking approach for collaborative model training across decentralized devices, maintains data privacy while constructing a decent global machine learning model. Conventional FL methods typically demand more communication rounds to achieve convergence in non-Independent and non-Identically Distributed (non-IID) data scenarios due to their reliance on fixed Stochastic Gradient Descent (SGD) updates at each Communication Round (CR). In this paper, we introduce a novel strategy to expedite the convergence of FL models, inspired by the insights from McMahan et al.’s seminal work. We focus on FL convergence via traditional SGD decay by introducing a dynamic adjusting mechanism for local epochs and local batch size. Our method adapts the decay of SGD updates during the training process, akin to decaying learning rates in classical optimization. Particularly, by adaptively reducing local epochs and increasing local batch size using their ongoing values and the CR as the model progresses, our method enhances convergence speed without compromising accuracy, specifically by effectively addressing challenges posed by non-IID data. We provide theoretical results of the benefits of the dynamic decay of SGD updates in FL scenarios. We demonstrate our method’s consistent outperformance regarding the global model’s communication speedup and convergence behavior through comprehensive experiments.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"12 1","pages":"186-199"},"PeriodicalIF":5.7,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
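The decay mechanism this abstract describes (fewer local epochs and larger local batches as communication rounds progress, by analogy with learning-rate decay) can be sketched as a simple per-round schedule. The function name, decay/growth factors, and clamps below are illustrative assumptions, not the paper's exact rule.

```python
def decayed_local_schedule(round_idx, epochs0=10, batch0=32,
                           epoch_decay=0.8, batch_growth=1.25,
                           min_epochs=1, max_batch=512):
    """Hypothetical per-round schedule of local SGD effort in FL.

    Local epochs shrink geometrically and the local batch size grows
    geometrically with the communication round index, so each client does
    fewer, less noisy local updates as the global model matures.
    """
    epochs = max(min_epochs, int(epochs0 * epoch_decay ** round_idx))
    batch = min(max_batch, int(batch0 * batch_growth ** round_idx))
    return epochs, batch

# (epochs, batch) pairs over the first five communication rounds.
schedule = [decayed_local_schedule(r) for r in range(5)]
```

A server would query this schedule each round and broadcast the resulting hyperparameters to clients along with the global model.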
STGym: A Modular Benchmark for Spatio-Temporal Networks With a Survey and Case Study on Traffic Forecasting
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-10-06 DOI: 10.1109/TBDATA.2025.3618482
Chun-Wei Shen;Jia-Wei Jiang;Hsun-Ping Hsieh
The rapid advancement of spatio-temporal domain has led to a surge of novel models. These models can typically be decomposed into different modules, such as various types of graph neural networks and temporal networks. Notably, many of these models share identical or similar modules. However, the existing literature often relies on fragmented and self-constructed experimental frameworks. This fragmentation hinders a comprehensive understanding of model interrelationships and makes fair comparisons difficult due to inconsistent training and evaluation processes. To address these issues, we introduce Spatio-Temporal Gym (STGym), an innovative modular benchmark that provides a platform for exploring various spatio-temporal models and supports research for developers. The modular design of STGym facilitates an in-depth analysis of model components and promotes the seamless adoption and extension of existing methods. By standardizing the training and evaluation processes, STGym ensures reproducibility and scalability, enabling fair comparisons across different models. In this paper, we use traffic forecasting, a popular research topic in the spatio-temporal domain, as a case to demonstrate the capabilities of the STGym. Our detailed survey systematically utilizes the modular framework of STGym to organize key modules into various models, thereby facilitating deeper insights into their structures and mechanisms. We also evaluate 18 models on six widely used traffic forecasting datasets and analyze critical hyperparameters to reveal their impact on performance. This study provides valuable resources and insights for developers and researchers.
{"title":"STGym: A Modular Benchmark for Spatio-Temporal Networks With a Survey and Case Study on Traffic Forecasting","authors":"Chun-Wei Shen;Jia-Wei Jiang;Hsun-Ping Hsieh","doi":"10.1109/TBDATA.2025.3618482","DOIUrl":"https://doi.org/10.1109/TBDATA.2025.3618482","url":null,"abstract":"The rapid advancement of spatio-temporal domain has led to a surge of novel models. These models can typically be decomposed into different modules, such as various types of graph neural networks and temporal networks. Notably, many of these models share identical or similar modules. However, the existing literature often relies on fragmented and self-constructed experimental frameworks. This fragmentation hinders a comprehensive understanding of model interrelationships and makes fair comparisons difficult due to inconsistent training and evaluation processes. To address these issues, we introduce Spatio-Temporal Gym (STGym), an innovative modular benchmark that provides a platform for exploring various spatio-temporal models and supports research for developers. The modular design of STGym facilitates an in-depth analysis of model components and promotes the seamless adoption and extension of existing methods. By standardizing the training and evaluation processes, STGym ensures reproducibility and scalability, enabling fair comparisons across different models. In this paper, we use traffic forecasting, a popular research topic in the spatio-temporal domain, as a case to demonstrate the capabilities of the STGym. Our detailed survey systematically utilizes the modular framework of STGym to organize key modules into various models, thereby facilitating deeper insights into their structures and mechanisms. We also evaluate 18 models on six widely used traffic forecasting datasets and analyze critical hyperparameters to reveal their impact on performance. This study provides valuable resources and insights for developers and researchers.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"12 1","pages":"15-33"},"PeriodicalIF":5.7,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
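STGym's central premise in the abstract above, that spatio-temporal models decompose into interchangeable spatial and temporal modules, can be illustrated with toy stand-ins. The module implementations and the `compose` helper below are hypothetical; the real benchmark's API and its GNN/RNN modules differ.

```python
from typing import Callable, List

Series = List[List[float]]  # readings[t][node]

def mean_spatial_module(x: Series) -> Series:
    """Toy spatial module: replace each node's reading with the mean over
    all nodes at that time step (a stand-in for a graph neural network layer)."""
    return [[sum(row) / len(row)] * len(row) for row in x]

def last_step_temporal_module(x: Series) -> List[float]:
    """Toy temporal module: forecast by repeating the final time step
    (a stand-in for an RNN/TCN forecasting head)."""
    return x[-1]

def compose(spatial: Callable[[Series], Series],
            temporal: Callable[[Series], List[float]]) -> Callable[[Series], List[float]]:
    """Assemble a forecasting model from interchangeable modules, in the
    spirit of STGym's modular design (hypothetical helper, not its API)."""
    def model(x: Series) -> List[float]:
        return temporal(spatial(x))
    return model

model = compose(mean_spatial_module, last_step_temporal_module)
forecast = model([[1.0, 3.0], [2.0, 4.0]])  # two time steps, two sensor nodes
```

Swapping either module for another implementation with the same signature yields a new model without touching training or evaluation code, which is what makes fair comparison across module combinations possible.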