
IEEE Transactions on Big Data: Latest Publications

Utility-Driven Data Analytics Algorithm for Transaction Modifications Using Pre-Large Concept With Single Database Scan
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-01 | DOI: 10.1109/TBDATA.2025.3556615
Unil Yun;Hanju Kim;Myungha Cho;Taewoong Ryu;Seungwan Park;Doyoon Kim;Doyoung Kim;Chanhee Lee;Witold Pedrycz
Utility-driven pattern analysis is a fundamental method for discovering noteworthy, high-utility patterns in diverse quantitative transactional databases. Recently, various approaches have emerged that handle large, dynamic database environments more efficiently by using the pre-large concept to reduce the number of data scans and pattern expansion operations. However, existing pre-large-based high utility pattern mining methods either fail to handle real-time transaction modifications or require additional data scans to validate candidate patterns. In this paper, we propose a novel, efficient utility-driven pattern mining algorithm that applies the pre-large concept to transaction modifications. Our method incorporates a single-scan framework that manages actual utility values and discovers high utility patterns without candidate generation, enabling efficient utility-driven dynamic data analysis in the modification environment. We compared the proposed method with state-of-the-art methods through extensive performance evaluation on real and synthetic datasets. According to the evaluation results and a case study, the suggested method runs at least 1.5 times faster than state-of-the-art methods with only a minimal compromise in memory, and it scales well as the database size increases. Further statistical analyses indicate that the proposed method reduces the pattern search space compared with previous methods while delivering a complete set of accurate results without loss.
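The abstract does not define the pre-large concept; the sketch below illustrates the general idea of keeping an upper and a lower utility threshold so that patterns are classified as large, pre-large, or small and re-checked after a transaction modification. The threshold values, names, and update example are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch of the pre-large concept for utility mining.
# Thresholds, names, and the update example are assumptions for exposition,
# not the algorithm proposed in the paper.

def classify(pattern_utility: float, db_utility: float,
             upper_ratio: float = 0.05, lower_ratio: float = 0.03) -> str:
    """Classify a pattern as large, pre-large, or small by its utility share."""
    if pattern_utility >= upper_ratio * db_utility:
        return "large"            # high utility pattern
    if pattern_utility >= lower_ratio * db_utility:
        return "pre-large"        # buffered: may become large after updates
    return "small"

# Example: a pattern is re-classified after a transaction modification
# changes both its utility and the total database utility.
before = classify(480.0, 10_000.0)   # 'pre-large'
after = classify(530.0, 10_050.0)    # 'large'
print(before, "->", after)
```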
Citations: 0
A Novel Concept-Cognitive Learning Model Oriented to Three-Way Concept for Knowledge Acquisition
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-04-01 | DOI: 10.1109/TBDATA.2025.3556637
Weihua Xu;Di Jiang
Concept-cognitive learning (CCL) is the process of enabling machines to simulate the concept learning of the human brain. Existing CCL models focus on the formal context while neglecting the importance of the skill context. Furthermore, CCL models that focus solely on positive information restrict learning capacity by neglecting negative information, greatly impeding the acquisition of knowledge. To overcome these issues, we propose a novel concept-cognitive learning model oriented to the three-way concept for knowledge acquisition. First, this paper explains and investigates the relationship between skills and knowledge based on the three-way concept and its properties. Then, in order to simultaneously consider positive and negative information, describe more detailed information, learn more skills, and acquire accurate knowledge, a three-way information granule is described from the perspective of cognitive learning. Next, a transformation method is proposed for converting between different three-way information granules, allowing an arbitrary three-way information granule to be transformed into necessary, sufficient, and sufficient-and-necessary three-way information granules. Finally, an algorithm corresponding to the transformation method is designed and subsequently tested across diverse UCI datasets. The experimental outcomes affirm the effectiveness and excellence of the suggested model and algorithm.
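As a rough illustration of what a three-way information granule looks like, the sketch below pairs the attributes shared by all objects in a set with the attributes possessed by none of them, over a toy formal context. The context and function names are hypothetical and do not reproduce the paper's transformation method.

```python
# Minimal sketch of a three-way information granule over a formal context.
# The toy context, operator names, and granule layout are illustrative
# assumptions, not the transformation method proposed in the paper.

# Formal context: object -> set of attributes it possesses.
context = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}
attributes = {"a", "b", "c", "d"}

def three_way_granule(objects: set[str]) -> tuple[frozenset, frozenset]:
    """Return (positive, negative): attributes shared by all objects,
    and attributes possessed by none of them."""
    positive = set(attributes)
    negative = set(attributes)
    for o in objects:
        positive &= context[o]                 # common attributes
        negative &= attributes - context[o]    # commonly absent attributes
    return frozenset(positive), frozenset(negative)

print(three_way_granule({"o1", "o2"}))  # (frozenset({'a'}), frozenset({'d'}))
```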
Citations: 0
PViTGAtt-IP: Severity Quantification of Lung Infections in Chest X-Rays and CT Scans via Parallel and Cross-Attended Encoders
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-31 | DOI: 10.1109/TBDATA.2025.3556612
Bouthaina Slika;Fadi Dornaika;Fares Bougourzi;Karim Hammoudi
Developing a robust and adaptive deep learning technique for diagnosing pneumonia and assessing its severity remains a major challenge. Indeed, both chest X-rays (CXR) and CT scans have been widely studied for the diagnosis, detection, and quantification of pneumonia. In this paper, a novel approach (PViTGAtt-IP) based on a parallel array of vision transformers is presented, in which the input image is divided into regions of interest. Each region is fed into an individual model, and the collective output gives the severity score. Three parallel architectures were also derived and tested. The proposed models were subjected to rigorous tests on two different datasets: RALO CXRs and Per COVID-19 CT scans. The experimental results showed that the proposed models accurately predict severity scores on both datasets. In particular, the parallel transformers with multi-gate attention proved to be the best-performing model. Furthermore, a comparative analysis against state-of-the-art methods showed that our proposed approach consistently achieved competitive or even better performance in terms of the Mean Absolute Error (MAE) and the Pearson Correlation Coefficient (PC). This emphasizes the effectiveness and superiority of our models in the context of diagnosing and assessing the severity of pneumonia.
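A minimal sketch of the parallel-encoder idea follows: the image is split into regions of interest, each region passes through its own encoder, and the per-region scores are fused into one severity score. The tiny CNN encoder, the four-region split, and fusion by averaging are assumptions for illustration; the paper uses vision transformers with multi-gate and cross attention.

```python
# Sketch of "parallel encoders over regions of interest" in PyTorch.
# Encoder choice, region count, and mean fusion are illustrative assumptions.
import torch
import torch.nn as nn

class RegionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)          # one severity score per region

class ParallelSeverityModel(nn.Module):
    def __init__(self, n_regions: int = 4):
        super().__init__()
        # one independent encoder per region of interest
        self.encoders = nn.ModuleList(RegionEncoder() for _ in range(n_regions))

    def forward(self, regions):     # regions: list of (B, 3, H, W) tensors
        scores = [enc(r) for enc, r in zip(self.encoders, regions)]
        return torch.stack(scores, dim=0).mean(dim=0)   # collective score

model = ParallelSeverityModel()
quadrants = [torch.randn(2, 3, 112, 112) for _ in range(4)]
print(model(quadrants).shape)       # torch.Size([2, 1])
```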
Citations: 0
Split Learning on Segmented Healthcare Data
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-31 | DOI: 10.1109/TBDATA.2025.3556639
Ling Hu;Tongqing Zhou;Zhihuang Liu;Fang Liu;Zhiping Cai
Sequential data learning is vital to harnessing the rich knowledge such data encompass for diverse downstream tasks, particularly in healthcare (e.g., disease prediction). Considering data sensitivity, privacy-preserving learning methods based on federated learning (FL) and split learning (SL) have been widely investigated. Yet this work identifies, for the first time, that existing methods overlook the fact that sequential data are generated by different patients at different times and stored in different hospitals, and therefore fail to learn the sequential correlations between different temporal segments. To fill this void, a novel distributed learning framework, STSL, is proposed that trains a model on the segments in order. Considering that patients have different visit sequences, STSL first implements privacy-preserving visit ordering based on a secure multi-party computation mechanism. Batch scheduling then groups patients with similar visit (sub-)sequences into the same training batch, facilitating subsequent split learning on batches. The scheduling process is formulated as an NP-hard optimization problem that balances learning loss and efficiency, and a greedy solution is presented. Theoretical analysis proves the privacy preservation property of STSL. Experimental results on real-world eICU data show its superior performance compared with FL and SL (5% to 28% better accuracy) and its effectiveness (a remarkable 75% reduction in communication costs).
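The batch scheduling step can be pictured with the greedy sketch below, which groups patients whose visit sequences overlap the most into the same training batch. The Jaccard similarity over visited hospitals, the batch size, and the greedy seeding rule are assumptions; STSL's privacy-preserving visit ordering and its exact objective are omitted.

```python
# Illustrative greedy batch scheduling by visit-sequence similarity.
# Similarity measure, batch size, and seeding rule are assumptions,
# not STSL's actual scheduler.

def similarity(seq_a: list[str], seq_b: list[str]) -> float:
    """Jaccard overlap of the hospitals appearing in two visit sequences."""
    a, b = set(seq_a), set(seq_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def greedy_batches(visits: dict[str, list[str]], batch_size: int = 2):
    remaining = dict(visits)
    batches = []
    while remaining:
        seed_id, seed_seq = next(iter(remaining.items()))
        del remaining[seed_id]
        # pick the patients most similar to the seed for this batch
        ranked = sorted(remaining,
                        key=lambda p: similarity(seed_seq, remaining[p]),
                        reverse=True)
        batch = [seed_id] + ranked[:batch_size - 1]
        for p in batch[1:]:
            del remaining[p]
        batches.append(batch)
    return batches

visits = {"p1": ["H1", "H2"], "p2": ["H1", "H2", "H3"],
          "p3": ["H4"], "p4": ["H4", "H5"]}
print(greedy_batches(visits))   # [['p1', 'p2'], ['p3', 'p4']]
```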
Citations: 0
Incorporating Confused Phraseological Knowledge Based on Pinyin Input Method for Chinese Spelling Correction
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-26 | DOI: 10.1109/TBDATA.2025.3552344
Weidong Zhao;Xiaoyu Wang;Liqing Qiu
Chinese Spelling Correction (CSC) is designed to detect and correct spelling errors that occur in Chinese text. In real life, most keyboard input scenarios use the pinyin input method, so researching spelling errors in this scenario is practical and valuable. However, no existing research has truly proposed a model suited to this scenario. With this concern in mind, this paper proposes IPCK-IME, a model that incorporates confused phraseological knowledge based on the pinyin input method. The model integrates its own phonetic features with external similarity knowledge to guide it toward outputting the correct characters. Furthermore, to mitigate the influence of spelling errors on sentence semantics, a Gaussian bias is introduced into the model's self-attention network. This approach aims to reduce the focus on typos and improve attention to the local context. Empirical evidence indicates that our method surpasses existing models in correcting spelling errors generated by the pinyin input method, and it is better suited to correcting Chinese spelling errors in real input scenarios.
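The Gaussian bias mentioned above can be illustrated as an additive locality term on the attention scores before the softmax, as in the sketch below. The fixed Gaussian over position distance and the value of sigma are assumptions, not the exact formulation used in IPCK-IME.

```python
# Sketch of adding a Gaussian locality bias to self-attention scores.
# The bias form and sigma are illustrative assumptions.
import numpy as np

def attention_with_gaussian_bias(q, k, v, sigma: float = 2.0):
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                       # (L, L) raw attention
    pos = np.arange(len(q))
    bias = -((pos[None, :] - pos[:, None]) ** 2) / (2 * sigma ** 2)
    scores = scores + bias                              # favor nearby tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax
    return weights @ v

L, d = 5, 8
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(L, d)) for _ in range(3))
print(attention_with_gaussian_bias(q, k, v).shape)      # (5, 8)
```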
Citations: 0
Adaptive Graph Structure Learning Neural Rough Differential Equations for Multivariate Time Series Forecasting
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-24 | DOI: 10.1109/TBDATA.2025.3552334
Yuming Su;Tinghuai Ma;Huan Rong;Mohamed Magdy Abdel Wahab
Multivariate time series forecasting has extensive applications in urban computing, such as financial analysis, weather prediction, and traffic forecasting. Using graph structures to model the complex correlations among variables in a time series, and leveraging graph neural networks and recurrent neural networks for the temporal aggregation and spatial propagation stages, has shown promise. However, the graph structure learning and discrete neural architectures of traditional methods are not well suited to the sudden changes, time variance, and irregular sampling often found in real-world data. To address these challenges, we propose a method called Adaptive Graph structure Learning neural Rough Differential Equations (AGLRDE). Specifically, we combine dynamic and static graph structure learning to adaptively generate a more robust graph representation. We then employ a spatio-temporal encoder-decoder based on Neural Rough Differential Equations (Neural RDEs) to model spatio-temporal dependencies. Additionally, we introduce a path reconstruction loss to constrain the path generation stage. We conduct experiments on six benchmark datasets, demonstrating that our proposed method outperforms existing state-of-the-art methods. The results show that AGLRDE effectively handles the aforementioned challenges, significantly improving the accuracy of multivariate time series forecasting.
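The dynamic part of the graph structure learning can be illustrated with the common adaptive-adjacency construction below, where an adjacency learned from node embeddings is blended with a predefined static graph. The softmax(ReLU(.)) form and the blending weight follow general practice in spatio-temporal GNNs and are assumptions rather than AGLRDE's exact design.

```python
# Generic sketch of an adaptive adjacency matrix learned from node embeddings,
# blended with a static (predefined) adjacency. Form and weights are assumed.
import torch
import torch.nn.functional as F

num_nodes, emb_dim = 6, 4
e1 = torch.nn.Parameter(torch.randn(num_nodes, emb_dim))   # learnable source embeddings
e2 = torch.nn.Parameter(torch.randn(num_nodes, emb_dim))   # learnable target embeddings
static_adj = torch.eye(num_nodes)                           # placeholder predefined graph

def adaptive_adjacency(alpha: float = 0.5) -> torch.Tensor:
    dynamic = F.softmax(F.relu(e1 @ e2.t()), dim=-1)        # learned, row-normalized
    return alpha * dynamic + (1 - alpha) * static_adj       # fuse dynamic and static

print(adaptive_adjacency().shape)   # torch.Size([6, 6])
```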
Citations: 0
Multi-Objective Graph Contrastive Learning for Recommendation
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-18 | DOI: 10.1109/TBDATA.2025.3552341
Lei Zhang;Mingren Ke;Likang Wu;Wuji Zhang;Zihao Chen;Hongke Zhao
Recently, numerous studies have integrated self-supervised contrastive learning with Graph Convolutional Networks (GCNs) to address data sparsity and popularity bias and to enhance recommendation performance. While such studies have made breakthroughs on accuracy metrics, they often neglect non-accuracy objectives such as diversity, novelty, and the percentage of long-tail items, which greatly degrades the user experience in real-world applications. To this end, we propose a novel graph collaborative filtering model named Multi-Objective Graph Contrastive Learning for recommendation (MOGCL), designed to provide more comprehensive recommendations by considering multiple objectives. Specifically, MOGCL comprises three modules: a multi-objective embedding generation module, an embedding fusion module, and a transfer learning module. In the multi-objective embedding generation module, we employ two GCN encoders with different goal orientations to generate node embeddings targeting accuracy and non-accuracy objectives, respectively. These embeddings are then effectively fused with complementary weights in the embedding fusion module. In the transfer learning module, we introduce an auxiliary self-supervised task that maximizes the mutual information between the two sets of embeddings, so that the resulting final embeddings are more stable and comprehensive. Experimental results on three real-world datasets show that MOGCL achieves optimal trade-offs between multiple objectives compared with state-of-the-art methods.
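The auxiliary self-supervised task that maximizes mutual information between the two embedding sets is commonly realized with an InfoNCE-style contrastive loss; a minimal sketch under that assumption is given below. The temperature and cosine-similarity formulation are illustrative, and the paper's exact loss may differ.

```python
# Sketch of an InfoNCE-style loss between two sets of node embeddings
# (accuracy-oriented vs. non-accuracy-oriented). Temperature is assumed.
import torch
import torch.nn.functional as F

def info_nce(z_acc: torch.Tensor, z_div: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """z_acc, z_div: (N, d) embeddings of the same nodes from the two objectives."""
    z_acc = F.normalize(z_acc, dim=-1)
    z_div = F.normalize(z_div, dim=-1)
    logits = z_acc @ z_div.t() / tau            # (N, N) cosine similarities
    targets = torch.arange(z_acc.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 16), torch.randn(8, 16))
print(float(loss))
```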
Citations: 0
Self-Guided Graph Refinement With Progressive Fusion for Multiplex Graph Contrastive Representation Learning
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-18 | DOI: 10.1109/TBDATA.2025.3552331
Qi Dai;Yu Gu;Xiaofeng Zhu;Xiaohua Li;Fangfang Li;Ge Yu
Multiplex Graph Contrastive Learning (MGCL) has attracted significant attention. However, existing MGCL methods often struggle with suboptimal graph structures and fail to fully capture intricate interdependencies across multiplex views. To address these issues, we propose a novel self-supervised framework, Multiplex Graph Refinement with progressive fusion (MGRefine), for multiplex graph contrastive representation learning. Specifically, MGRefine introduces a multi-view learning module to extract a structural guidance matrix by exploring the underlying relationships between nodes. Then, a progressive fusion module is employed to progressively enhance and fuse representations from different views, capturing and leveraging nuanced interdependencies and comprehensive information across the multiplex graphs. The fused representation is then used to construct a consensus guidance matrix. A self-enhanced refinement module continuously refines the multiplex graphs using these guidance matrices while providing effective supervision signals. MGRefine achieves mutual reinforcement between graph structures and representations, ensuring continuous optimization of the model throughout the learning process in a self-enhanced manner. Extensive experiments demonstrate that MGRefine outperforms state-of-the-art methods and also verify the effectiveness of MGRefine across various downstream tasks on several benchmark datasets.
Citations: 0
MSST: Multi-Scale Spatial-Temporal Representation Learning for Trajectory Similarity Computation
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-18 | DOI: 10.1109/TBDATA.2025.3552340
Li Li;Junjun Si;Jinna Lv;Junting Lu;Jianyu Zhang;Shuaifu Dai
Computing trajectory similarity is a fundamental task in trajectory analysis. Traditional heuristic methods suffer from quadratic computational complexity, which limits their scalability to large datasets. Recently, Trajectory Representation Learning (TRL) has been extensively studied to address this limitation. However, most existing TRL algorithms face two key challenges. First, they prioritize spatial similarity while neglecting the intricate spatio-temporal dynamics of trajectories, particularly temporal regularities. Second, these methods are often constrained by predefined single spatial or temporal scales, which can significantly impact performance, since the measurement of trajectory similarity depends on spatial and temporal resolution. To address these issues, we propose MSST, a Multi-Scale Self-supervised Trajectory representation learning framework. MSST simultaneously processes spatial and temporal information by generating 3D spatial-temporal tokens, thereby capturing the spatio-temporal characteristics of trajectories more effectively. Further, MSST explores the multi-scale characteristics of trajectories; to the best of our knowledge, this is the first effort to do so in the TRL literature. Finally, self-supervised contrastive learning is employed to enhance the consistency between trajectory representations from different views. Experimental results on three real-world datasets for trajectory similarity computation provide insight into the design properties of our approach and demonstrate its superiority over existing TRL methods: MSST significantly surpasses all state-of-the-art competitors in terms of effectiveness, efficiency, and robustness. Compared with previous TRL research, the proposed method balances the noise and the details of trajectories, enabling a more comprehensive analysis by accounting for the variability inherent in trajectory data across different scales.
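The 3D spatial-temporal tokens can be pictured as grid cells over latitude, longitude, and time, computed at several resolutions; the sketch below shows one such tokenizer. The cell sizes and the tuple encoding are assumptions, not MSST's actual token construction.

```python
# Sketch of turning trajectory points into 3D spatial-temporal grid tokens
# at several scales. Cell sizes and the encoding are illustrative assumptions.

def tokenize(points, cell_deg: float, cell_sec: float):
    """points: list of (lat, lon, t). Returns one (x, y, t) grid token per point."""
    return [(int(lat // cell_deg), int(lon // cell_deg), int(t // cell_sec))
            for lat, lon, t in points]

trajectory = [(39.90, 116.40, 0.0), (39.91, 116.42, 60.0), (39.95, 116.50, 300.0)]
scales = [(0.01, 60.0), (0.05, 300.0)]          # (spatial cell size, temporal cell size)
multi_scale_tokens = {s: tokenize(trajectory, *s) for s in scales}
for scale, toks in multi_scale_tokens.items():
    print(scale, toks)
```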
Citations: 0
MHT-Net: A Matching-Based Hierarchical Transfer Network for Glaucoma Detection From Fundus Images
IF 5.7 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-03-18 | DOI: 10.1109/TBDATA.2025.3552342
Linna Zhao;Jianqiang Li;Li Li;Xi Xu
Glaucoma is a chronic and irreversible eye disease; early detection and treatment can effectively prevent severe consequences. Deep transfer learning is widely used in fundus image analysis to remedy the shortage of glaucoma training data. However, a model trained on the source domain may struggle to predict glaucoma in the target domain due to distribution differences, and several limitations cannot be ignored: (1) image matching: enhancing global and local image consistency through bidirectional matching; (2) hierarchical transfer: developing a strategy for transferring different hierarchical features. To this end, we propose a novel Matching-Based Hierarchical Transfer Network (MHT-Net) to achieve automatic glaucoma detection. We first create a fundus structure detector that matches global and local images using the intermediate layers of a diagnostic model pre-trained on source domain data. Next, a hierarchical transfer network is implemented, sharing parameters for general features and using a domain discriminator for specific features. By integrating adversarial and classification losses, the model acquires domain-invariant features, facilitating precise and seamless transfer of fundus information from the source to the target domain. Extensive experiments demonstrate the effectiveness of our proposed method, which outperforms existing glaucoma detection methods. These advantages make our algorithm a promising and efficient assistive tool for glaucoma screening.
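The combination of adversarial and classification losses described above follows the standard domain-adversarial recipe, sketched below with a gradient-reversal layer so that the feature extractor learns domain-invariant features. The network sizes and the unit trade-off weight are assumptions; MHT-Net's matching and hierarchical sharing stages are omitted.

```python
# Sketch of combining a classification loss with an adversarial domain loss via
# a gradient-reversal layer. Architecture and weighting are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None    # reverse gradients for the extractor

feature_extractor = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
classifier = nn.Linear(16, 2)                   # glaucoma vs. normal
domain_discriminator = nn.Linear(16, 2)         # source vs. target domain

x, y, d = torch.randn(8, 32), torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
feats = feature_extractor(x)
cls_loss = nn.functional.cross_entropy(classifier(feats), y)
dom_loss = nn.functional.cross_entropy(
    domain_discriminator(GradReverse.apply(feats, 1.0)), d)
(cls_loss + dom_loss).backward()                # extractor learns domain-invariant feats
print(float(cls_loss), float(dom_loss))
```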
Citations: 0