
IEEE Transactions on Big Data: Latest Publications

Expertise or Hallucination? A Comprehensive Evaluation of ChatGPT's Aptitude in Clinical Genetics
IF 7.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-30 · DOI: 10.1109/TBDATA.2025.3536939
Yingbo Zhang;Shumin Ren;Jiao Wang;Chaoying Zhan;Mengqiao He;Xingyun Liu;Rongrong Wu;Jing Zhao;Cong Wu;Chuanzhu Fan;Bairong Shen
Whether viewed as an expert or as a source of ‘knowledge hallucination’, the use of ChatGPT in medical practice has stirred ongoing debate. This study sought to evaluate ChatGPT's capabilities in the field of clinical genetics, focusing on tasks such as ‘Clinical genetics exams’, ‘Associations between genetic diseases and pathogenic genes’, and ‘Limitations and trends in clinical genetics’. Results indicated that ChatGPT performed exceptionally well in question-answering tasks, particularly in clinical genetics exams and diagnosing single-gene diseases. It also effectively outlined the current limitations and prospective trends in clinical genetics. However, ChatGPT struggled to provide comprehensive answers regarding multi-gene or epigenetic diseases, particularly with respect to genetic variations or chromosomal abnormalities. In terms of systematic summarization and inference, some randomness was evident in ChatGPT's responses. In summary, while ChatGPT possesses a foundational understanding of general knowledge in clinical genetics due to hyperparameter learning, it encounters significant challenges when delving into specialized knowledge and navigating the complexities of clinical genetics, particularly in mitigating ‘Knowledge Hallucination’. To optimize its performance and depth of expertise in clinical genetics, integration with specialized knowledge databases and knowledge graphs is imperative.
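The exam-style evaluation described above can be sketched as a small scoring harness. This is purely illustrative: `ask_model` is a hypothetical stand-in for a real ChatGPT API call, and the canned questions and answers are invented examples, not items from the study.

```python
# Hypothetical sketch of an exam-style evaluation loop.
# `ask_model` is a placeholder for a real chat-API call; the canned
# question/answer pairs below are illustrative, not from the paper.
def ask_model(question):
    canned = {
        "Which gene is mutated in cystic fibrosis?": "CFTR",
        "What inheritance pattern does Huntington disease follow?": "autosomal dominant",
    }
    return canned.get(question, "unknown")

def score_exam(items):
    """items: list of (question, gold_answer); returns the fraction of
    questions whose model answer contains the gold answer."""
    correct = sum(
        1 for q, gold in items
        if gold.lower() in ask_model(q).lower()
    )
    return correct / len(items)

exam = [
    ("Which gene is mutated in cystic fibrosis?", "CFTR"),
    ("What inheritance pattern does Huntington disease follow?", "autosomal dominant"),
    ("Which chromosome is trisomic in Down syndrome?", "21"),
]
print(score_exam(exam))  # two of the three canned answers match
```

A real harness would also repeat each question to measure the response randomness the study reports.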
IEEE Transactions on Big Data, vol. 11, no. 3, pp. 919-932.
Citations: 0
A Multi-Modal Assessment Framework for Comparison of Specialized Deep Learning and General-Purpose Large Language Models
IF 7.5 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-30 · DOI: 10.1109/TBDATA.2025.3536937
Mohammad Nadeem;Shahab Saquib Sohail;Dag Øivind Madsen;Ahmed Ibrahim Alzahrani;Javier Del Ser;Khan Muhammad
Recent years have witnessed tremendous advancements in AI tools (e.g., ChatGPT, GPT-4, and Bard), driven by the growing power, reasoning, and efficiency of Large Language Models (LLMs). LLMs have been shown to excel in tasks ranging from poem writing and coding to essay generation and puzzle solving. Despite their proficiency in general queries, specialized tasks such as metaphor understanding and fake news detection often require finely tuned models, posing a comparison challenge with specialized Deep Learning (DL). We propose an assessment framework to compare task-specific intelligence with general-purpose LLMs on suicide and depression tendency identification. For this purpose, we trained two DL models on a suicide and depression detection dataset, then tested their performance on a held-out test set. The same test set was then used to evaluate four LLMs (GPT-3.5, GPT-4, Google Bard, and MS Bing) on four classification metrics. The BERT-based DL model performed the best among all, with a testing accuracy of 94.61%, while GPT-4 was the runner-up with an accuracy of 92.5%. Results demonstrate that LLMs do not outperform the specialized DL models but are able to achieve comparable performance, making them a decent option for downstream tasks without specialized training. However, LLMs outperformed specialized models on the reduced dataset.
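The four classification metrics used in the comparison are presumably the standard ones for binary classification; a minimal sketch of computing them from predicted labels (with invented toy labels, not the study's data):

```python
# Standard binary-classification metrics (accuracy, precision, recall, F1)
# computed from true and predicted labels; the toy labels are illustrative.
def classification_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# toy labels: 1 = at-risk post, 0 = neutral post (illustrative only)
m = classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```

Applying the same function to each model's predictions on a shared test set yields the side-by-side comparison the framework requires.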
IEEE Transactions on Big Data, vol. 11, no. 3, pp. 1001-1012.
Citations: 0
SRGTNet: Subregion-Guided Transformer Hash Network for Fine-Grained Image Retrieval
IF 5.7 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1109/TBDATA.2025.3533916
Hongchun Lu;Songlin He;Xue Li;Min Han;Chase Wu
Fine-grained image retrieval (FGIR) is a crucial task in computer vision, with broad applications in areas such as biodiversity monitoring, e-commerce, and medical diagnostics. However, capturing discriminative feature information to generate binary codes is difficult because of high intraclass variance and low interclass variance. To address this challenge, we (i) build a novel and highly reliable fine-grained deep hash learning framework for more accurate retrieval of fine-grained images. (ii) We propose a part significant region erasure method that forces the network to generate compact binary codes. (iii) We introduce a CNN-guided Transformer structure for use in fine-grained retrieval tasks to capture fine-grained images effectively in contextual feature relationships to mine more discriminative regional features. (iv) A multistage mixture loss is designed to optimize network training and enhance feature representation. Experiments were conducted on three publicly available fine-grained datasets. The results show that our method effectively improves the performance of fine-grained image retrieval.
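The retrieval side of deep hashing is simple once the binary codes exist: rank database items by Hamming distance to the query code. A minimal sketch (codes as Python ints, one bit per hash bit; the codes shown are made up):

```python
# Retrieval with compact binary codes: rank database images by Hamming
# distance to the query code. Codes are Python ints, one bit per hash bit.
def hamming(a, b):
    return bin(a ^ b).count("1")

def retrieve(query_code, db_codes, k=3):
    """Return indices of the k database codes closest to the query."""
    order = sorted(range(len(db_codes)), key=lambda i: hamming(query_code, db_codes[i]))
    return order[:k]

db = [0b10110010, 0b10110011, 0b01001100, 0b11110000]
print(retrieve(0b10110010, db, k=2))  # exact match first, 1-bit neighbour second
```

The hard part, which SRGTNet addresses, is learning codes in which small inter-class differences survive this coarse binary quantization.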
IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2388-2400.
Citations: 0
Anomaly Detection in Multi-Level Model Space
IF 5.7 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1109/TBDATA.2025.3534625
Ao Chen;Xiren Zhou;Yizhan Fan;Huanhuan Chen
Anomaly detection (AD) is gaining prominence, especially in situations with limited labeled data or unknown anomalies, demanding an efficient approach with minimal reliance on labeled data or prior knowledge. Building upon the framework of Learning in the Model Space (LMS), this paper proposes conducting AD through Learning in the Multi-Level Model Spaces (MLMS). LMS transforms the data from the data space to the model space by representing each data instance with a fitted model. In MLMS, to fully capture the dynamic characteristics within the data, multi-level details of the original data instance are decomposed. These details are individually fitted, resulting in a set of fitted models that capture the multi-level dynamic characteristics of the original instance. Representing each data instance with a set of fitted models, rather than a single one, transforms it from the data space into the multi-level model spaces. The pairwise difference measurement between model sets is introduced, fully considering the distance between fitted models and the intra-class aggregation of similar models at each level of detail. Subsequently, effective AD can be implemented in the multi-level model spaces, with or without sufficient multi-class labeled data. Experiments on multiple AD datasets demonstrate the effectiveness of the proposed method.
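The core "data space to model space" idea can be illustrated with a deliberately simple fitted model: represent each series by the least-squares coefficients of an AR(2) fit, then compare series by the distance between their coefficients rather than their raw values. This is a sketch of the general LMS principle only, not the specific model family or distance measure used in MLMS.

```python
# Sketch of "learning in the model space": represent each series by its
# AR(2) least-squares coefficients, then compare series in coefficient space.
def ar2_fit(series):
    # solve min ||y - a*y[t-1] - b*y[t-2]|| via the 2x2 normal equations
    y, x1, x2 = series[2:], series[1:-1], series[:-2]
    s11 = sum(v * v for v in x1)
    s22 = sum(v * v for v in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    r1 = sum(u * v for u, v in zip(x1, y))
    r2 = sum(u * v for u, v in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det)

def model_distance(s_a, s_b):
    (a1, b1), (a2, b2) = ar2_fit(s_a), ar2_fit(s_b)
    return ((a1 - a2) ** 2 + (b1 - b2) ** 2) ** 0.5

normal = [1, 2, 3, 4, 5, 6, 7, 8]    # smooth trend: y_t = 2*y_{t-1} - y_{t-2}
anomaly = [1, 9, 1, 9, 1, 9, 1, 9]   # oscillation: y_t = y_{t-2}
```

Two series with similar dynamics land near each other in coefficient space even if their raw values differ, which is exactly what makes model-space distances useful for detecting dynamically anomalous instances. MLMS extends this by fitting one model per level of decomposed detail.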
IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2376-2387.
Citations: 0
MultiTec: A Data-Driven Multimodal Short Video Detection Framework for Healthcare Misinformation on TikTok
IF 5.7 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1109/TBDATA.2025.3533919
Lanyu Shang;Yang Zhang;Yawen Deng;Dong Wang
With the prevalence of social media and short video sharing platforms (e.g., TikTok, YouTube Shorts), the proliferation of healthcare misinformation has become a widespread and concerning issue that threatens public health and undermines trust in mass media. This paper focuses on an important problem of detecting multimodal healthcare misinformation in short videos on TikTok. Our objective is to accurately identify misleading healthcare information that is jointly conveyed by the visual, audio, and textual content within the TikTok short videos. Three critical challenges exist in solving our problem: i) how to effectively extract information from distractive and manipulated visual content in short videos? ii) How to efficiently identify the interrelation of the heterogeneous visual and speech content in short videos? iii) How to accurately capture the complex dependency of the densely connected sequential content in short videos? To address the above challenges, we develop MultiTec, a multimodal detector that explicitly explores the audio and visual content in short videos to investigate both the sequential relation of video elements and their inter-modality dependencies to jointly detect misinformation in healthcare videos on TikTok. To the best of our knowledge, MultiTec is the first modality-aware dual-attentive short video detection model for multimodal healthcare misinformation on TikTok. We evaluate MultiTec on two real-world healthcare video datasets collected from TikTok. Evaluation results show that MultiTec achieves substantial performance gains compared to state-of-the-art baselines in accurately detecting misleading healthcare short videos.
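Modeling inter-modality dependencies typically comes down to some form of cross-modal attention. The sketch below is generic and illustrative, not the authors' dual-attentive architecture: a text-token query attends over per-frame visual features via scaled dot-product attention.

```python
# Illustrative cross-modal attention step (not the MultiTec architecture):
# a text-token query attends over per-frame visual features.
import math

def cross_modal_attention(query, keys, values):
    """query: d-dim list; keys/values: lists of d-dim lists (one per frame)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    top = max(scores)
    exp = [math.exp(s - top) for s in scores]          # numerically stable softmax
    weights = [e / sum(exp) for e in exp]
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]

text_q = [1.0, 0.0]                    # an illustrative text-token embedding
frames = [[1.0, 0.0], [0.0, 1.0]]      # two illustrative visual-frame embeddings
fused = cross_modal_attention(text_q, frames, frames)
```

The fused vector leans toward the frame most aligned with the text token, which is the basic mechanism for tying spoken or written health claims to the visual content they accompany.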
IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2471-2488. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10854802
Citations: 0
CTDI: CNN-Transformer-Based Spatial-Temporal Missing Air Pollution Data Imputation
IF 5.7 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1109/TBDATA.2025.3533882
Yangwen Yu;Victor O. K. Li;Jacqueline C. K. Lam;Kelvin Chan;Qi Zhang
Accurate and comprehensive air pollution data is essential for understanding and addressing environmental challenges. Missing data can impair accurate analysis and decision-making. This study presents a novel approach, named CNN-Transformer-based Spatial-Temporal Data Imputation (CTDI), for imputing missing air pollution data. Data pre-processing incorporates observed air pollution data and related urban data to produce 24-hour period tensors as input samples. 1-by-1 CNN layers capture the interaction between different types of input data. Deep learning transformer architecture is employed in a spatial-temporal (S-T) transformer module to capture long-range dependencies and extract complex relationships in both spatial and temporal dimensions. Hong Kong air pollution data is statistically analyzed and used to evaluate CTDI in its recovery of generated and actual patterns of missing data. Experimental results show that CTDI consistently outperforms existing imputation methods across all evaluated scenarios, including cases with higher rates of missing data, thereby demonstrating its robustness and effectiveness in enhancing air quality monitoring. Additionally, ablation experiments reveal that each component significantly contributes to the model's performance, with the temporal transformer proving particularly crucial under varying rates of missing data.
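The pre-processing and 1-by-1 convolution steps can be sketched as follows. The channel choices and mask convention here are assumptions for illustration; a 1-by-1 convolution over channels is just a per-timestep linear map, which is how it mixes different input data types without touching the time axis.

```python
# Sketch of the pre-processing described above: pack 24 hourly readings of
# several pollutant/urban channels into one sample, mark gaps with a mask,
# then mix channels with a 1x1 convolution (a per-timestep linear map).
def one_by_one_conv(sample, weights):
    """sample: 24 x C list of lists; weights: C_out x C. Returns 24 x C_out."""
    return [
        [sum(w * x for w, x in zip(row, hour)) for row in weights]
        for hour in sample
    ]

# three illustrative channels: e.g. PM2.5, NO2, traffic volume (assumed)
raw = [[10.0 + h, 20.0, 5.0] for h in range(24)]
mask = [[1, 1, 1] for _ in range(24)]        # 1 = observed, 0 = missing
raw[6][0], mask[6][0] = 0.0, 0               # simulate a missing PM2.5 reading
sample = [r + m for r, m in zip(raw, mask)]  # 24 x 6: value channels + mask channels

# one illustrative output channel blending the first two value channels
mixed = one_by_one_conv(sample, [[0.5, 0.5, 0.0, 0.0, 0.0, 0.0]])
```

In CTDI the mixed per-hour features would then feed the spatial-temporal transformer, which handles the long-range dependencies across hours and stations.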
IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2443-2456.
Citations: 0
Enhancing the Transferability of Adversarial Examples With Random Diversity Ensemble and Variance Reduction Augmentation
IF 5.7 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1109/TBDATA.2025.3533892
Sensen Zhang;Haibo Hong;Mande Xie
Currently, deep neural networks (DNNs) are susceptible to adversarial attacks, particularly when the network's structure and parameters are known, while most of the existing attacks do not perform satisfactorily in black-box settings. In this context, model augmentation is considered effective for improving the success rates of black-box adversarial attacks. However, the existing model augmentation methods tend to rely on a single transformation, which limits the diversity of augmented model collections and thus affects the transferability of adversarial examples. In this paper, we first propose the random diversity ensemble method (RDE-MI-FGSM) to effectively enhance the diversity of the augmented model collection, thereby improving the transferability of the generated adversarial examples. Afterwards, we put forward the random diversity variance ensemble method (RDE-VRA-MI-FGSM), which adopts variance reduction augmentation (VRA) to improve the gradient variance of the enhanced model set and avoid falling into a poor local optimum, so as to further improve the transferability of adversarial examples. Furthermore, experimental results demonstrate that our approaches are compatible with many existing transfer-based attacks and can effectively improve the transferability of gradient-based adversarial attacks on the ImageNet dataset. Also, our proposals have achieved higher attack success rates even if the target model adopts advanced defenses. Specifically, we have achieved an average attack success rate of 91.4% on the defense model, which is higher than other baseline approaches.
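Both proposed methods build on the momentum iterative step of MI-FGSM. A minimal sketch of that shared update on plain Python lists (a real attack would use framework autograd on image tensors; here the ensemble of augmented models is represented simply by a list of their gradients):

```python
# Sketch of the MI-FGSM update underlying the RDE variants: average the
# gradients from the (augmented) model ensemble, accumulate them into an
# L1-normalized momentum term, then step by the sign of the momentum.
def mi_fgsm_step(x, grads, momentum, mu=1.0, alpha=0.01):
    """x, momentum: n-dim lists; grads: one n-dim gradient per ensemble model."""
    n = len(x)
    avg = [sum(g[i] for g in grads) / len(grads) for i in range(n)]
    l1 = sum(abs(v) for v in avg) or 1.0
    new_m = [mu * m + v / l1 for m, v in zip(momentum, avg)]
    sign = lambda v: (v > 0) - (v < 0)
    new_x = [xi + alpha * sign(mi) for xi, mi in zip(x, new_m)]
    return new_x, new_m

x0 = [0.2, -0.1]
grads = [[1.0, -2.0], [3.0, -2.0]]   # illustrative gradients from two models
x1, m1 = mi_fgsm_step(x0, grads, momentum=[0.0, 0.0])
```

The paper's contribution lies in how the ensemble behind `grads` is built: random diverse transformations (RDE) and variance-reduction augmentation (VRA) rather than a single fixed transformation.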
IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2417-2430.
Citations: 0
Scalable Learning-Based Community-Preserving Graph Generation
IF 5.7 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1109/TBDATA.2025.3533898
Sheng Xiang;Chenhao Xu;Dawei Cheng;Ying Zhang
Graph generation plays an essential role in understanding the formation of complex network structures across various fields, such as biological and social networks. Recent studies have shifted towards employing deep learning methods to grasp the topology of graphs. Yet, most current graph generators fail to adequately capture the community structure, which stands out as a critical and distinctive aspect of graphs. Additionally, these generators are generally limited to smaller graphs because of their inefficiencies and scaling challenges. This paper introduces the Community-Preserving Graph Adversarial Network (CPGAN), designed to effectively simulate graphs. CPGAN leverages graph convolution networks within its encoder and maintains shared parameters during generation to encapsulate community structure data and ensure permutation invariance. We also present the Scalable Community-Preserving Graph Attention Network (SCPGAN), aimed at enhancing the scalability of our model. SCPGAN considerably cuts down on inference and training durations, as well as GPU memory usage, through the use of an ego-graph sampling approach and a short-pipeline autoencoder framework. Tests conducted on six real-world graph datasets reveal that CPGAN manages a beneficial balance between efficiency and simulation quality when compared to leading-edge baselines. Moreover, SCPGAN marks substantial strides in model efficiency and scalability, successfully increasing the size of generated graphs to the 10 million node level while maintaining competitive quality, on par with other advanced learning models.
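The ego-graph sampling that gives SCPGAN its scalability can be sketched with a plain breadth-first search: for a seed node, take its k-hop neighbourhood as one small training subgraph, so the model never has to hold the full graph in memory. The adjacency dict below is an invented toy graph.

```python
# Sketch of ego-graph sampling: the k-hop neighbourhood of a seed node
# becomes one small training subgraph, enabling large-graph training.
from collections import deque

def ego_graph(adj, seed, k):
    """adj: dict node -> list of neighbours; returns the set of nodes
    within k hops of seed (the ego network of radius k)."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return seen

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
print(ego_graph(adj, 0, 1))  # the radius-1 ego network of node 0
```

Sampling many such ego graphs from different seeds yields a stream of bounded-size training examples, which is what keeps inference time and GPU memory roughly independent of the full graph's node count.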
IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2457-2470. Published 2025-01-27. DOI: 10.1109/TBDATA.2025.3533898.
Citations: 0
Data Exchange for the Metaverse With Accountable Decentralized TTPs and Incentive Mechanisms
IF 5.7 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1109/TBDATA.2025.3533924
Liang Zhang;Xingyu Wu;Yuhang Ma;Haibin Kan
As a global virtual environment, the metaverse poses various challenges regarding data storage, sharing, interoperability, and privacy preservation. Typically, a trusted third party (TTP) is considered necessary in these scenarios. However, relying on a single TTP may introduce biases, compromise privacy, or lead to a single point of failure. To address these challenges and enable secure data exchange in the metaverse, we propose a system based on decentralized TTPs and the Ethereum blockchain. First, we use the threshold ElGamal cryptosystem to create the decentralized TTPs, employing verifiable secret sharing (VSS) to force owners to share data honestly. Second, we leverage the Ethereum blockchain to serve as the public communication channel, automatic verification machine, and smart contract engine. Third, we apply discrete logarithm equality (DLEQ) algorithms to generate non-interactive zero knowledge (NIZK) proofs when encrypted data is uploaded to the blockchain. Fourth, we present an incentive mechanism to benefit data owners and TTPs from data-sharing activities, as well as a penalty policy if malicious behavior is detected. Consequently, we construct a data exchange framework for the metaverse, in which all involved entities are accountable. Finally, we perform comprehensive experiments to demonstrate the feasibility and analyze the properties of the proposed system.
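The DLEQ proofs the abstract mentions are typically instances of the classic Chaum-Pedersen protocol made non-interactive via Fiat-Shamir: the prover shows that two group elements u and v share the same discrete logarithm under two generators, without revealing it. The sketch below is an assumption about the construction, not the paper's exact scheme, and the tiny modulus is for demonstration only (real deployments use elliptic-curve or 2048-bit-plus groups):

```python
import hashlib
import random

# Toy parameters: safe prime P = 2Q + 1, with G and H generating
# the order-Q subgroup of squares mod P. Far too small for real use.
P = 2039
Q = 1019
G, H = 4, 9

def _challenge(*vals):
    """Fiat-Shamir challenge: hash the transcript into Z_Q."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def dleq_prove(x, seed=None):
    """Prove knowledge of x with u = G^x and v = H^x (same exponent)."""
    u, v = pow(G, x, P), pow(H, x, P)
    w = random.Random(seed).randrange(1, Q)     # commitment nonce
    a1, a2 = pow(G, w, P), pow(H, w, P)
    c = _challenge(G, H, u, v, a1, a2)
    r = (w - c * x) % Q                          # response
    return (u, v, c, r)

def dleq_verify(u, v, c, r):
    """Recompute the commitments and check the challenge matches."""
    a1 = (pow(G, r, P) * pow(u, c, P)) % P       # = G^w iff honest
    a2 = (pow(H, r, P) * pow(v, c, P)) % P       # = H^w iff honest
    return c == _challenge(G, H, u, v, a1, a2)
```

In the paper's setting such a proof would bind an uploaded ElGamal ciphertext to the committed key material, letting the blockchain's contract verify well-formedness without any secret input.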
IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2431-2442. Published 2025-01-27. DOI: 10.1109/TBDATA.2025.3533924.
Citations: 0
A Multiplex Hypergraph Attribute-Based Graph Collaborative Filtering for Cold-Start POI Recommendation
IF 5.7 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2025-01-27 · DOI: 10.1109/TBDATA.2025.3533908
Simon Nandwa Anjiri;Derui Ding;Yan Song;Ying Sun
Within the scope of location-based services and personalized recommendations, the challenges of recommending new and unvisited points of interest (POIs) to mobile users are compounded by the sparsity of check-in data. Traditional recommendation models often overlook user and POI attributes, which exacerbates data sparsity and cold-start problems. To address this issue, a novel multiplex hypergraph attribute-based graph collaborative filtering is proposed for POI recommendation to create a robust recommendation system capable of handling sparse data and cold-start scenarios. Specifically, a multiplex network hypergraph is first constructed to capture complex relationships between users, POIs, and attributes based on the similarities of attributes, visit frequencies, and preferences. Then, an adaptive variational graph auto-encoder adversarial network is developed to accurately infer the users’/POIs’ preference embeddings from their attribute distributions, which reflect complex attribute dependencies and latent structures within the data. Moreover, a dual graph neural network variant based on both GraphSAGE K-nearest neighbor networks and gated recurrent units is created to effectively capture various attributes of different modalities in a neighborhood, including temporal dependencies in user preferences and spatial attributes of POIs. Finally, experiments conducted on Foursquare and Yelp datasets reveal the superiority and robustness of the developed model compared to some typical state-of-the-art approaches and adequately illustrate its effectiveness on cold-start users and POIs.
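The first step the abstract describes, building hyperedges from attribute similarity, can be sketched as a K-nearest-neighbor grouping: each node gets one hyperedge containing itself plus its most attribute-similar peers, which is what lets a cold-start user with no check-ins still be connected to the graph. The function names and the cosine-similarity choice here are illustrative assumptions, not the paper's exact construction.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length attribute vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def knn_hyperedges(attrs, k=2):
    """One hyperedge per node: the node plus its k most similar peers.

    `attrs` maps node id -> attribute vector. A node with no
    interaction history still lands in a hyperedge, since membership
    depends only on attributes.
    """
    edges = {}
    for u, vec in attrs.items():
        sims = sorted(
            ((cosine(vec, w), v) for v, w in attrs.items() if v != u),
            reverse=True,
        )
        edges[u] = frozenset([u] + [v for _, v in sims[:k]])
    return edges
```

Downstream, message passing over such hyperedges (rather than observed check-in edges) is what gives cold-start users and POIs non-trivial embeddings.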
IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2401-2416. Published 2025-01-27. DOI: 10.1109/TBDATA.2025.3533908.
Citations: 0
Journal: IEEE Transactions on Big Data