
Latest Publications in IEEE Transactions on Big Data

Terrain Scene Generation Using a Lightweight Vector Quantized Generative Adversarial Network
IF 7.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-30 | DOI: 10.1109/TBDATA.2025.3536926
Yan Wang;Huiyu Zhou;Xinghui Dong
Natural terrain scene images play an important role in geographical research and applications. However, it is challenging to collect a large set of terrain scene images. Recently, great progress has been made in image generation. Although impressive results can be achieved, the efficiency of state-of-the-art methods, e.g., the Vector Quantized Generative Adversarial Network (VQGAN), is still unsatisfactory. The VQGAN faces two issues: high space complexity and heavy computational demand. To efficiently fulfill the terrain scene generation task, we first collect a Natural Terrain Scene Data Set (NTSD), which contains 36,672 images divided into 38 classes. We then propose a Lightweight VQGAN (Lit-VQGAN), which uses fewer parameters and has lower computational complexity than the VQGAN. A lightweight super-resolution network is further adopted to quickly derive a high-resolution image from the image that the Lit-VQGAN generates. The Lit-VQGAN can be trained and tested on the NTSD. To our knowledge, neither the NTSD nor the Lit-VQGAN has been exploited before. Experimental results show that the Lit-VQGAN is more efficient and effective than the VQGAN for the image generation task. These promising results can be attributed to the lightweight yet effective networks that we design.
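Since the abstract centres on vector quantization, a minimal sketch of the nearest-codebook lookup at the heart of any VQGAN-style model may help. This is an illustrative NumPy reconstruction under stated assumptions, not the authors' implementation; `quantize` and the toy codebook are hypothetical.

```python
import numpy as np

def quantize(z, codebook):
    """Map each encoder output vector to its nearest codebook entry.

    z:        (n, d) array of continuous latent vectors.
    codebook: (k, d) array of learned code vectors.
    Returns the quantized vectors and their codebook indices.
    """
    # Squared Euclidean distance between every latent and every code.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

# Toy codebook and latents for illustration only.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, -1.0]])
z = np.array([[0.9, 1.1], [-0.8, -1.2], [0.1, -0.1]])
zq, idx = quantize(z, codebook)
print(idx)  # → [1 2 0]
```

In a full VQGAN the codebook is learned jointly with the encoder and decoder; here both are fixed toy arrays.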
Published in IEEE Transactions on Big Data, vol. 11, no. 3, pp. 988-1000.
Citations: 0
Adapt Anything: Tailor Any Image Classifier Across Domains and Categories Using Text-to-Image Diffusion Models
IF 7.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-30 | DOI: 10.1109/TBDATA.2025.3536933
Weijie Chen;Haoyu Wang;Shicai Yang;Lei Zhang;Wei Wei;Yanning Zhang;Luojun Lin;Di Xie;Yueting Zhuang
We study a novel problem in this paper: whether a modern text-to-image diffusion model can tailor any image classifier across domains and categories. Existing domain adaptation works exploit both source and target data for domain alignment so as to transfer knowledge from the labeled source data to the unlabeled target data. However, with the development of text-to-image diffusion models, we ask whether high-fidelity synthetic data can serve as a surrogate for real-world source data. In this way, we do not need to collect and annotate the source data for each image classification task in a one-for-one manner. Instead, we utilize only one off-the-shelf text-to-image model to synthesize images with labels derived from text prompts, and then leverage them as a bridge to transfer knowledge from the task-agnostic text-to-image generator to the task-oriented image classifier via domain adaptation. Such a one-for-all adaptation paradigm allows us to adapt anything in the world using only one text-to-image generator as well as any unlabeled target data. Extensive experiments validate the feasibility of this idea, which surprisingly even surpasses state-of-the-art domain adaptation works that use source data collected and annotated in the real world.
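The one-for-all idea, deriving labeled training pairs from class names alone via a text-to-image generator, can be sketched as follows. `text_to_image` is a stub standing in for a real diffusion model, and all names here are hypothetical illustrations, not the paper's API.

```python
def text_to_image(prompt):
    # Stand-in for an off-the-shelf diffusion model; returns a fake "image"
    # so the sketch stays runnable without any model weights.
    return f"<image of: {prompt}>"

def synthesize_dataset(class_names, per_class=2):
    """Build (image, label) pairs from class names via prompt templates."""
    dataset = []
    for label, name in enumerate(class_names):
        for _ in range(per_class):
            prompt = f"a photo of a {name}"
            dataset.append((text_to_image(prompt), label))
    return dataset

data = synthesize_dataset(["cat", "dog"], per_class=1)
print(data)
```

The synthetic pairs would then feed a standard domain-adaptation pipeline against the unlabeled target data, as the abstract describes.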
Published in IEEE Transactions on Big Data, vol. 11, no. 3, pp. 1013-1026.
Citations: 0
AugGPT: Leveraging ChatGPT for Text Data Augmentation
IF 7.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-30 | DOI: 10.1109/TBDATA.2025.3536934
Haixing Dai;Zhengliang Liu;Wenxiong Liao;Xiaoke Huang;Yihan Cao;Zihao Wu;Lin Zhao;Shaochen Xu;Fang Zeng;Wei Liu;Ninghao Liu;Sheng Li;Dajiang Zhu;Hongmin Cai;Lichao Sun;Quanzheng Li;Dinggang Shen;Tianming Liu;Xiang Li
Text data augmentation is an effective strategy for overcoming the challenge of limited sample sizes in many natural language processing (NLP) tasks. This challenge is especially prominent in the few-shot learning (FSL) scenario, where the data in the target domain is generally much scarcer and of lower quality. A natural and widely used strategy to mitigate such challenges is to perform data augmentation to better capture data invariance and increase the sample size. However, current text data augmentation methods either cannot ensure the correct labeling of the generated data (lacking faithfulness), cannot ensure sufficient diversity in the generated data (lacking compactness), or both. Inspired by the recent success of large language models (LLMs), especially the development of ChatGPT, we propose a text data augmentation approach based on ChatGPT (named "AugGPT"). AugGPT rephrases each sentence in the training samples into multiple conceptually similar but semantically different samples. The augmented samples can then be used in downstream model training. Experimental results on multiple few-shot text classification tasks show the superior performance of the proposed AugGPT approach over state-of-the-art text data augmentation methods in terms of testing accuracy and the distribution of the augmented samples.
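The augmentation loop described above can be sketched as follows. `rephrase` is a deterministic template stub standing in for the ChatGPT call so the sketch stays runnable; all names are illustrative, not the paper's code.

```python
def rephrase(sentence, n):
    """Stub paraphraser: in AugGPT this would be an LLM call."""
    templates = [
        "In other words, {s}",
        "Put differently, {s}",
        "That is to say, {s}",
    ]
    return [templates[i % len(templates)].format(s=sentence) for i in range(n)]

def augment(samples, n_variants=2):
    """Expand each (sentence, label) pair into labeled paraphrase variants."""
    augmented = []
    for sentence, label in samples:
        augmented.append((sentence, label))             # keep the original
        for variant in rephrase(sentence, n_variants):  # add paraphrases
            augmented.append((variant, label))
    return augmented

train = [("the battery drains too fast", "negative")]
print(augment(train, n_variants=2))
```

Each paraphrase inherits the original label, which is the faithfulness property the abstract stresses; diversity would come from the LLM rather than fixed templates.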
Published in IEEE Transactions on Big Data, vol. 11, no. 3, pp. 907-918.
Citations: 0
Expertise or Hallucination? A Comprehensive Evaluation of ChatGPT's Aptitude in Clinical Genetics
IF 7.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-30 | DOI: 10.1109/TBDATA.2025.3536939
Yingbo Zhang;Shumin Ren;Jiao Wang;Chaoying Zhan;Mengqiao He;Xingyun Liu;Rongrong Wu;Jing Zhao;Cong Wu;Chuanzhu Fan;Bairong Shen
Whether viewed as an expert or as a source of ‘knowledge hallucination’, the use of ChatGPT in medical practice has stirred ongoing debate. This study sought to evaluate ChatGPT's capabilities in the field of clinical genetics, focusing on tasks such as ‘Clinical genetics exams’, ‘Associations between genetic diseases and pathogenic genes’, and ‘Limitations and trends in clinical genetics’. Results indicated that ChatGPT performed exceptionally well in question-answering tasks, particularly in clinical genetics exams and diagnosing single-gene diseases. It also effectively outlined the current limitations and prospective trends in clinical genetics. However, ChatGPT struggled to provide comprehensive answers regarding multi-gene or epigenetic diseases, particularly with respect to genetic variations or chromosomal abnormalities. In terms of systematic summarization and inference, some randomness was evident in ChatGPT's responses. In summary, while ChatGPT possesses a foundational understanding of general knowledge in clinical genetics due to hyperparameter learning, it encounters significant challenges when delving into specialized knowledge and navigating the complexities of clinical genetics, particularly in mitigating ‘Knowledge Hallucination’. To optimize its performance and depth of expertise in clinical genetics, integration with specialized knowledge databases and knowledge graphs is imperative.
Published in IEEE Transactions on Big Data, vol. 11, no. 3, pp. 919-932.
Citations: 0
A Multi-Modal Assessment Framework for Comparison of Specialized Deep Learning and General-Purpose Large Language Models
IF 7.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-30 | DOI: 10.1109/TBDATA.2025.3536937
Mohammad Nadeem;Shahab Saquib Sohail;Dag Øivind Madsen;Ahmed Ibrahim Alzahrani;Javier Del Ser;Khan Muhammad
Recent years have witnessed tremendous advancements in AI tools (e.g., ChatGPT, GPT-4, and Bard), driven by the growing power, reasoning, and efficiency of Large Language Models (LLMs). LLMs have been shown to excel in tasks ranging from poem writing and coding to essay generation and puzzle solving. Despite their proficiency in general queries, specialized tasks such as metaphor understanding and fake news detection often require finely tuned models, posing a comparison challenge with specialized Deep Learning (DL). We propose an assessment framework to compare task-specific intelligence with general-purpose LLMs on suicide and depression tendency identification. For this purpose, we trained two DL models on a suicide and depression detection dataset and then tested their performance on a test set. Afterward, the same test dataset was used to evaluate the performance of four LLMs (GPT-3.5, GPT-4, Google Bard, and MS Bing) using four classification metrics. The BERT-based DL model performed best among all, with a testing accuracy of 94.61%, while GPT-4 was the runner-up with an accuracy of 92.5%. The results demonstrate that LLMs do not outperform the specialized DL models but are able to achieve comparable performance, making them a decent option for downstream tasks without specialized training. However, LLMs outperformed the specialized models on the reduced dataset.
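The four classification metrics used in the comparison are presumably the standard accuracy, precision, recall, and F1 (an assumption: the abstract does not name them). A self-contained sketch of computing them on binary labels:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for a binary label set."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Toy ground truth and predictions, e.g. 1 = at-risk, 0 = not at-risk.
y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1]
print(classification_metrics(y_true, y_pred))  # → (0.6, 0.666..., 0.666..., 0.666...)
```

The same function can score both the fine-tuned DL models and the LLM outputs once the latter are mapped to the binary label space.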
Published in IEEE Transactions on Big Data, vol. 11, no. 3, pp. 1001-1012.
Citations: 0
SRGTNet: Subregion-Guided Transformer Hash Network for Fine-Grained Image Retrieval
IF 5.7 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-27 | DOI: 10.1109/TBDATA.2025.3533916
Hongchun Lu;Songlin He;Xue Li;Min Han;Chase Wu
Fine-grained image retrieval (FGIR) is a crucial task in computer vision, with broad applications in areas such as biodiversity monitoring, e-commerce, and medical diagnostics. However, capturing discriminative feature information to generate binary codes is difficult because of high intraclass variance and low interclass variance. To address this challenge, we (i) build a novel and highly reliable fine-grained deep hash learning framework for more accurate retrieval of fine-grained images. (ii) We propose a part significant region erasure method that forces the network to generate compact binary codes. (iii) We introduce a CNN-guided Transformer structure for use in fine-grained retrieval tasks to capture fine-grained images effectively in contextual feature relationships to mine more discriminative regional features. (iv) A multistage mixture loss is designed to optimize network training and enhance feature representation. Experiments were conducted on three publicly available fine-grained datasets. The results show that our method effectively improves the performance of fine-grained image retrieval.
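Once a hash network emits compact binary codes, retrieval reduces to ranking by Hamming distance. A minimal NumPy sketch of that final step (illustrative only, not SRGTNet itself, with toy 4-bit codes):

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to a query binary code."""
    dists = (db_codes != query_code).sum(axis=1)   # count differing bits
    order = np.argsort(dists, kind="stable")       # nearest codes first
    return order, dists[order]

db = np.array([[0, 1, 1, 0],
               [1, 1, 1, 0],
               [0, 0, 0, 1]])
query = np.array([0, 1, 1, 1])
order, dists = hamming_rank(query, db)
print(order, dists)  # → [0 1 2] [1 2 2]
```

In practice the codes are much longer (e.g., 32-64 bits) and the XOR-and-popcount comparison is what makes hashing-based retrieval fast on large databases.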
Published in IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2388-2400.
Citations: 0
Anomaly Detection in Multi-Level Model Space
IF 5.7 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-27 | DOI: 10.1109/TBDATA.2025.3534625
Ao Chen;Xiren Zhou;Yizhan Fan;Huanhuan Chen
Anomaly detection (AD) is gaining prominence, especially in situations with limited labeled data or unknown anomalies, demanding an efficient approach with minimal reliance on labeled data or prior knowledge. Building upon the framework of Learning in the Model Space (LMS), this paper proposes conducting AD through Learning in the Multi-Level Model Spaces (MLMS). LMS transforms the data from the data space to the model space by representing each data instance with a fitted model. In MLMS, to fully capture the dynamic characteristics within the data, multi-level details of the original data instance are decomposed. These details are individually fitted, resulting in a set of fitted models that capture the multi-level dynamic characteristics of the original instance. Representing each data instance with a set of fitted models, rather than a single one, transforms it from the data space into the multi-level model spaces. The pairwise difference measurement between model sets is introduced, fully considering the distance between fitted models and the intra-class aggregation of similar models at each level of detail. Subsequently, effective AD can be implemented in the multi-level model spaces, with or without sufficient multi-class labeled data. Experiments on multiple AD datasets demonstrate the effectiveness of the proposed method.
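The core LMS move, representing each series by a fitted model and measuring distances between models rather than raw signals, can be sketched with a simple first-order linear fit. This single-level toy is an assumption-laden illustration, not the paper's multi-level method.

```python
import numpy as np

def fit_model(series):
    """Fit x[t+1] ≈ a*x[t] + b by least squares; the pair (a, b) is the
    series' representation in model space."""
    x, y = series[:-1], series[1:]
    A = np.stack([x, np.ones_like(x)], axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # (a, b)

def anomaly_scores(dataset):
    """Score each series by its model's distance to the mean fitted model."""
    models = np.array([fit_model(s) for s in dataset])
    center = models.mean(axis=0)
    return np.linalg.norm(models - center, axis=1)

t = np.arange(50, dtype=float)
normal = [np.sin(0.3 * t + p) for p in (0.0, 0.5, 1.0)]     # oscillating series
odd = [np.linspace(0.0, 49.0, 50)]                          # one drifting series
scores = anomaly_scores(normal + odd)
print(scores)  # the drifting series gets the largest score
```

MLMS would additionally decompose each series into multi-level details and fit one model per level, then compare model *sets* instead of single (a, b) pairs.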
Published in IEEE Transactions on Big Data, vol. 11, no. 5, pp. 2376-2387.
Citations: 0
MultiTec: A Data-Driven Multimodal Short Video Detection Framework for Healthcare Misinformation on TikTok
IF 5.7 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-01-27 | DOI: 10.1109/TBDATA.2025.3533919
Lanyu Shang;Yang Zhang;Yawen Deng;Dong Wang
With the prevalence of social media and short-video sharing platforms (e.g., TikTok, YouTube Shorts), the proliferation of healthcare misinformation has become a widespread and concerning issue that threatens public health and undermines trust in mass media. This paper focuses on the important problem of detecting multimodal healthcare misinformation in short videos on TikTok. Our objective is to accurately identify misleading healthcare information that is jointly conveyed by the visual, audio, and textual content within TikTok short videos. Three critical challenges exist in solving our problem: i) How to effectively extract information from distractive and manipulated visual content in short videos? ii) How to efficiently identify the interrelation of the heterogeneous visual and speech content in short videos? iii) How to accurately capture the complex dependency of the densely connected sequential content in short videos? To address the above challenges, we develop MultiTec, a multimodal detector that explicitly explores the audio and visual content in short videos to investigate both the sequential relation of video elements and their inter-modality dependencies to jointly detect misinformation in healthcare videos on TikTok. To the best of our knowledge, MultiTec is the first modality-aware dual-attentive short video detection model for multimodal healthcare misinformation on TikTok. We evaluate MultiTec on two real-world healthcare video datasets collected from TikTok. Evaluation results show that MultiTec achieves substantial performance gains compared to state-of-the-art baselines in accurately detecting misleading healthcare short videos.
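A generic attention-weighted late-fusion step, the kind of inter-modality weighting a modality-aware detector needs, can be sketched as follows. This toy is hypothetical and far simpler than MultiTec's dual-attentive design; the embeddings and query vector are made up for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def fuse_modalities(embeddings, query):
    """Score each modality embedding against a query vector, then take the
    attention-weighted sum as the fused representation."""
    scores = np.array([e @ query for e in embeddings])
    weights = softmax(scores)
    fused = sum(w * e for w, e in zip(weights, embeddings))
    return fused, weights

visual = np.array([1.0, 0.0, 0.0])
audio = np.array([0.0, 1.0, 0.0])
text = np.array([0.0, 0.0, 1.0])
query = np.array([2.0, 0.0, 0.0])  # toy query favouring the visual modality
fused, w = fuse_modalities([visual, audio, text], query)
print(w)  # visual gets the largest weight
```

In a real detector the embeddings come from per-modality encoders and the query is learned, so the weighting adapts per video rather than being fixed.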
{"title":"MultiTec: A Data-Driven Multimodal Short Video Detection Framework for Healthcare Misinformation on TikTok","authors":"Lanyu Shang;Yang Zhang;Yawen Deng;Dong Wang","doi":"10.1109/TBDATA.2025.3533919","DOIUrl":"https://doi.org/10.1109/TBDATA.2025.3533919","url":null,"abstract":"With the prevalence of social media and short video sharing platforms (e.g., TikTok, YouTube Shorts), the proliferation of healthcare misinformation has become a widespread and concerning issue that threatens public health and undermines trust in mass media. This paper focuses on an important problem of detecting multimodal healthcare misinformation in short videos on TikTok. Our objective is to accurately identify misleading healthcare information that is jointly conveyed by the visual, audio, and textual content within the TikTok short videos. Three critical challenges exist in solving our problem: i) how to effectively extract information from distractive and manipulated visual content in short videos? ii) How to efficiently identify the interrelation of the heterogeneous visual and speech content in short videos? iii) How to accurately capture the complex dependency of the densely connected sequential content in short videos? To address the above challenges, we develop <i>MultiTec</i>, a multimodal detector that explicitly explores the audio and visual content in short videos to investigate both the sequential relation of video elements and their inter-modality dependencies to jointly detect misinformation in healthcare videos on TikTok. To the best of our knowledge, MultiTec is the first modality-aware dual-attentive short video detection model for multimodal healthcare misinformation on TikTok. We evaluate MultiTec on two real-world healthcare video datasets collected from TikTok. Evaluation results show that MultiTec achieves substantial performance gains compared to state-of-the-art baselines in accurately detecting misleading healthcare short videos.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 5","pages":"2471-2488"},"PeriodicalIF":5.7,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10854802","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CTDI: CNN-Transformer-Based Spatial-Temporal Missing Air Pollution Data Imputation
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-01-27 DOI: 10.1109/TBDATA.2025.3533882
Yangwen Yu;Victor O. K. Li;Jacqueline C. K. Lam;Kelvin Chan;Qi Zhang
Accurate and comprehensive air pollution data is essential for understanding and addressing environmental challenges. Missing data can impair accurate analysis and decision-making. This study presents a novel approach, named CNN-Transformer-based Spatial-Temporal Data Imputation (CTDI), for imputing missing air pollution data. Data pre-processing incorporates observed air pollution data and related urban data to produce 24-hour period tensors as input samples. 1-by-1 CNN layers capture the interaction between different types of input data. Deep learning transformer architecture is employed in a spatial-temporal (S-T) transformer module to capture long-range dependencies and extract complex relationships in both spatial and temporal dimensions. Hong Kong air pollution data is statistically analyzed and used to evaluate CTDI in its recovery of generated and actual patterns of missing data. Experimental results show that CTDI consistently outperforms existing imputation methods across all evaluated scenarios, including cases with higher rates of missing data, thereby demonstrating its robustness and effectiveness in enhancing air quality monitoring. Additionally, ablation experiments reveal that each component significantly contributes to the model's performance, with the temporal transformer proving particularly crucial under varying rates of missing data.
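As the abstract notes, the 1-by-1 CNN layers capture interactions between the different input data types: a 1×1 convolution mixes feature channels independently at each (station, hour) position, with no spatial or temporal aggregation. A minimal NumPy sketch, where the channel counts, station/hour sizes, and variable names are assumptions rather than the paper's actual configuration:

```python
import numpy as np

def conv1x1(x, w, b):
    """1-by-1 convolution over a (C_in, S, T) tensor: a per-position
    linear mix of the C_in input channels into C_out output channels.
    x: (C_in, S, T); w: (C_out, C_in); b: (C_out,)."""
    # tensordot contracts w's input-channel axis against x's channel axis.
    return np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]

rng = np.random.default_rng(1)
# Hypothetical 24-hour input sample: 3 channels (e.g., PM2.5 readings,
# temperature, traffic) x 5 monitoring stations x 24 hours.
x = rng.normal(size=(3, 5, 24))
w = rng.normal(size=(8, 3))   # lift 3 input channels to 8 features
b = np.zeros(8)
h = conv1x1(x, w, b)
print(h.shape)  # (8, 5, 24) -- spatial-temporal grid unchanged
```

In the described pipeline, a representation like `h` would then feed the spatial-temporal transformer module, which handles the long-range dependencies the 1×1 layers deliberately leave untouched.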
{"title":"CTDI: CNN-Transformer-Based Spatial-Temporal Missing Air Pollution Data Imputation","authors":"Yangwen Yu;Victor O. K. Li;Jacqueline C. K. Lam;Kelvin Chan;Qi Zhang","doi":"10.1109/TBDATA.2025.3533882","DOIUrl":"https://doi.org/10.1109/TBDATA.2025.3533882","url":null,"abstract":"Accurate and comprehensive air pollution data is essential for understanding and addressing environmental challenges. Missing data can impair accurate analysis and decision-making. This study presents a novel approach, named CNN-Transformer-based Spatial-Temporal Data Imputation (CTDI), for imputing missing air pollution data. Data pre-processing incorporates observed air pollution data and related urban data to produce 24-hour period tensors as input samples. 1-by-1 CNN layers capture the interaction between different types of input data. Deep learning transformer architecture is employed in a spatial-temporal (S-T) transformer module to capture long-range dependencies and extract complex relationships in both spatial and temporal dimensions. Hong Kong air pollution data is statistically analyzed and used to evaluate CTDI in its recovery of generated and actual patterns of missing data. Experimental results show that CTDI consistently outperforms existing imputation methods across all evaluated scenarios, including cases with higher rates of missing data, thereby demonstrating its robustness and effectiveness in enhancing air quality monitoring. Additionally, ablation experiments reveal that each component significantly contributes to the model's performance, with the temporal transformer proving particularly crucial under varying rates of missing data.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 5","pages":"2443-2456"},"PeriodicalIF":5.7,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing the Transferability of Adversarial Examples With Random Diversity Ensemble and Variance Reduction Augmentation
IF 5.7 CAS Tier 3 (Computer Science) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-01-27 DOI: 10.1109/TBDATA.2025.3533892
Sensen Zhang;Haibo Hong;Mande Xie
Currently, deep neural networks (DNNs) are susceptible to adversarial attacks, particularly when the network's structure and parameters are known, while most existing attacks perform unsatisfactorily under black-box settings. In this context, model augmentation is considered to be effective to improve the success rates of black-box attacks on adversarial examples. However, the existing model augmentation methods tend to rely on a single transformation, which limits the diversity of augmented model collections and thus affects the transferability of adversarial examples. In this paper, we first propose the random diversity ensemble method (RDE-MI-FGSM) to effectively enhance the diversity of the augmented model collection, thereby improving the transferability of the generated adversarial examples. Afterwards, we put forward the random diversity variance ensemble method (RDE-VRA-MI-FGSM), which adopts variance reduction augmentation (VRA) to improve the gradient variance of the enhanced model set and avoid falling into a poor local optimum, so as to further improve the transferability of adversarial examples. Furthermore, experimental results demonstrate that our approaches are compatible with many existing transfer-based attacks and can effectively improve the transferability of gradient-based adversarial attacks on the ImageNet dataset. Also, our proposals have achieved higher attack success rates even if the target model adopts advanced defenses. Specifically, we have achieved an average attack success rate of 91.4% on the defense model, which is higher than other baseline approaches.
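Both proposed methods build on the well-known MI-FGSM attack. The core momentum update (independent of the paper's RDE/VRA additions) can be sketched in NumPy; the toy all-ones gradient below is a stand-in for a real surrogate model's loss gradient, which in RDE-style model augmentation would instead be averaged over several randomly transformed copies of the input:

```python
import numpy as np

def mi_fgsm_step(x_adv, grad, momentum, x_orig, eps, alpha, mu=1.0):
    """One MI-FGSM iteration: fold the L1-normalized loss gradient into
    a momentum buffer, step by alpha in its sign direction, then project
    back into the eps-ball around the original input (pixels in [0, 1])."""
    momentum = mu * momentum + grad / (np.abs(grad).sum() + 1e-12)
    x_adv = x_adv + alpha * np.sign(momentum)
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    return np.clip(x_adv, 0.0, 1.0), momentum

# Toy surrogate: loss = sum(x), so the gradient is all ones everywhere.
x = np.full((4, 4), 0.5)
grad = np.ones_like(x)
m = np.zeros_like(x)
x_adv, m = mi_fgsm_step(x, grad, m, x, eps=0.05, alpha=0.01)
print(np.abs(x_adv - x).max())  # one alpha-sized step, ~0.01
```

Running the step in a loop (recomputing `grad` each iteration) and keeping `m` across iterations gives the full iterative attack; the momentum term is what stabilizes the update direction and improves transferability over plain FGSM.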
{"title":"Enhancing the Transferability of Adversarial Examples With Random Diversity Ensemble and Variance Reduction Augmentation","authors":"Sensen Zhang;Haibo Hong;Mande Xie","doi":"10.1109/TBDATA.2025.3533892","DOIUrl":"https://doi.org/10.1109/TBDATA.2025.3533892","url":null,"abstract":"Currently, deep neural networks (DNNs) are susceptible to adversarial attacks, particularly when the network's structure and parameters are known, while most of the existing attacks do not perform satisfactorily in the presence of black-box settings. In this context, model augmentation is considered to be effective to improve the success rates of black-box attacks on adversarial examples. However, the existing model augmentation methods tend to rely on a single transformation, which limits the diversity of augmented model collections and thus affects the transferability of adversarial examples. In this paper, we first propose the random diversity ensemble method (RDE-MI-FGSM) to effectively enhance the diversity of the augmented model collection, thereby improving the transferability of the generated adversarial examples. Afterwards, we put forward the random diversity variance ensemble method (RDE-VRA-MI-FGSM), which adopts variance reduction augmentation (VRA) to improve the gradient variance of the enhanced model set and avoid falling into a poor local optimum, so as to further improve the transferability of adversarial examples. Furthermore, experimental results demonstrate that our approaches are compatible with many existing transfer-based attacks and can effectively improve the transferability of gradient-based adversarial attacks on the ImageNet dataset. Also, our proposals have achieved higher attack success rates even if the target model adopts advanced defenses. Specifically, we have achieved an average attack success rate of 91.4% on the defense model, which is higher than other baseline approaches.","PeriodicalId":13106,"journal":{"name":"IEEE Transactions on Big Data","volume":"11 5","pages":"2417-2430"},"PeriodicalIF":5.7,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144990138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0