
IEEE Transactions on Computational Social Systems: Latest Publications

IEEE Transactions on Computational Social Systems
IF 5 | CAS Tier 2 (Computer Science) | Q1 Social Sciences | Pub Date: 2024-04-02 | DOI: 10.1109/TCSS.2024.3377286
{"title":"IEEE Transactions on Computational Social Systems","authors":"","doi":"10.1109/TCSS.2024.3377286","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3377286","url":null,"abstract":"","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10488827","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140348414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Federated Graph Augmentation for Semisupervised Node Classification
IF 5 | CAS Tier 2 (Computer Science) | Q1 Social Sciences | Pub Date: 2024-04-01 | DOI: 10.1109/TCSS.2024.3373633
Zhichang Xia;Xinglin Zhang;Lingyu Liang;Yun Li;Yuejiao Gong
Semisupervised node classification is a prevalent task on graphs, which involves predicting the labels of unlabeled nodes based on the limited labeled data available. At present, centralized approaches to training models for this task are unsustainable due to the increasing demands for computational power, storage capacity, and privacy. A promising alternative is federated graph learning (FGL), which allows multiple clients to collaborate on learning a model while maintaining data privacy. However, current methods fail to consider the topology of the graph data and make inadequate use of unlabeled data. To address these issues, we propose federated graph augmentation (FedGA), which combines graph neural network (GNN) models to exploit similar topologies existing in different client graphs and augment the client data. Furthermore, we develop FedGA-L based on FedGA, which integrates pseudolabeling and label injection to improve the utilization of unlabeled data. FedGA-L allows pseudolabels to be used as additional information to enhance data augmentation and further improve the accuracy of node classification. We evaluate the effectiveness of FedGA and FedGA-L through experiments on multiple datasets. The results demonstrate improved accuracy on typical classification tasks and compatibility with a variety of federated learning (FL) frameworks. On widely recognized graph learning datasets, we achieve an accuracy improvement of 5%–7% compared to vanilla federated learning algorithms.
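As a rough illustration of the two mechanisms named above, namely server-side aggregation of client GNN weights and confidence-thresholded pseudolabeling of unlabeled nodes, the following NumPy sketch is a minimal, hypothetical example; the function names, the one-layer GCN, and the FedAvg-style aggregation are assumptions made for illustration and are not the authors' implementation.

import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^(-1/2) (A + I) D^(-1/2).
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    return A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)

def gcn_forward(A_norm, X, W):
    # One-layer GCN with a softmax head: softmax(A_norm @ X @ W).
    logits = A_norm @ X @ W
    e = np.exp(logits - logits.max(1, keepdims=True))
    return e / e.sum(1, keepdims=True)

def local_update(A, X, y, labeled, W, lr=0.1, steps=50, thresh=0.9):
    # One client's local training; confident unlabeled nodes receive pseudolabels.
    A_norm = normalize_adj(A)
    for _ in range(steps):
        P = gcn_forward(A_norm, X, W)
        train, targets = labeled.copy(), y.copy()
        confident = (~labeled) & (P.max(1) > thresh)
        train[confident] = True
        targets[confident] = P[confident].argmax(1)
        Y = np.eye(P.shape[1])[targets]               # one-hot training targets
        grad = (A_norm @ X).T @ ((P - Y) * train[:, None]) / max(train.sum(), 1)
        W = W - lr * grad                              # cross-entropy gradient step
    return W, int(labeled.sum())

def fedavg(client_results):
    # Server step: size-weighted average of the clients' GCN weight matrices.
    total = sum(n for _, n in client_results)
    return sum(W * (n / total) for W, n in client_results)

A communication round would run local_update on each client's graph and pass the returned (weights, size) pairs to fedavg; the sizes here are simply the labeled-node counts.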
Citations: 0
Traffic Origin-Destination Demand Prediction via Multichannel Hypergraph Convolutional Networks
IF 4.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science, Cybernetics | Pub Date: 2024-03-29 | DOI: 10.1109/TCSS.2024.3372856
Ming Wang;Yong Zhang;Xia Zhao;Yongli Hu;Baocai Yin
Accurate prediction of origin-destination (OD) demand is critical for service providers to efficiently allocate limited resources in regions with high travel demands. However, OD distributions pose significant challenges, characterized by high sparsity, complex spatial correlations within regions or chains, and potential repetition due to the recurrence of similar semantic contexts. These challenges impede traditional graph-based approaches, which connect two vertices through an edge, from performing effectively in OD prediction. Thus, we present a novel multichannel hypergraph convolutional neural network (MC-HGCN) to overcome the above challenges. The model innovatively extracts distinctive features from the channels of inflows, outflows, and OD flows to overcome the high sparsity of OD matrices. High-order spatial proximity within regions and OD chains is then modeled by three adjacency hypergraphs constructed for the above three channels. In each adjacency hypergraph, multiple neighboring stations are treated as vertices, while multiple OD pairs constitute hyperedges. These structures are learned by hypergraph convolutional networks to capture latent spatial correlations. On this basis, a semantic hypergraph is created for the OD channel to model OD distributions that lack spatial proximity but share semantic correlations. It utilizes hyperedges to represent semantic correlations among OD pairs whose origins and destinations both possess similar point-of-interest (POI) functions, before being learned by a hypergraph convolutional network (HGCN). Both spatial and semantic correlations intrinsic to OD flows are accordingly captured and embedded into a gated recurrent unit (GRU) to unveil hidden spatiotemporal dependencies among OD distributions. These embedded correlations are ultimately integrated through a multichannel fusion module to enhance the prediction of OD flows, even for minor ones. Our model is validated through experiments on three public datasets, demonstrating its robust performance across long and short time steps. The findings may offer theoretical insights for practical applications such as coordinating traffic scheduling or route planning.
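The core building block described above is hypergraph convolution over an incidence structure in which stations are vertices and OD pairs are hyperedges. The following NumPy sketch shows the standard HGNN-style propagation rule for one such layer, assuming an incidence matrix H has already been built; the matrix names and the single-layer form are illustrative assumptions, and the multichannel fusion and GRU stages are omitted.

import numpy as np

def hypergraph_conv(H, X, Theta, edge_w=None):
    # One hypergraph convolution layer (standard HGNN-style propagation):
    #   X' = ReLU(Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2) X Theta)
    # H: |V| x |E| incidence matrix (stations x OD hyperedges),
    # X: |V| x F node features, Theta: F x F' layer weights.
    n_e = H.shape[1]
    W = np.ones(n_e) if edge_w is None else edge_w         # hyperedge weights
    Dv = (H * W).sum(axis=1)                                # vertex degrees
    De = H.sum(axis=0)                                      # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(Dv, 1e-12)))
    De_inv = np.diag(1.0 / np.maximum(De, 1e-12))
    prop = Dv_inv_sqrt @ H @ np.diag(W) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(prop @ X @ Theta, 0.0)

# A multichannel use would build one incidence matrix per channel (inflow, outflow,
# OD) plus a semantic hypergraph, pass each through its own layer, and fuse the outputs.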
Citations: 0
conteNXt: A Graph-Based Approach to Assimilate Content and Context for Event Detection in OSN
IF 4.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science, Cybernetics | Pub Date: 2024-03-29 | DOI: 10.1109/TCSS.2024.3372399
Sielvie Sharma;Muhammad Abulaish;Tanvir Ahmad
Social networks are rapidly expanding due to their imperative role in disseminating information in a split second, emerging as the primary source for breaking news. As a result, the rich, user-generated information entices researchers to delve deeper and extract valuable insights. Event detection in online social networks (OSNs) is a research problem that has shifted researchers' attention from traditional news media to online social media data. Event detection in OSNs is an automated process, addressing the impractical task of manually filtering potential events from vast amounts of online data. Unfortunately, the informality and semantic sparsity of online social networking text pose significant challenges to the event detection task. To this end, we present an approach named conteNXt for detecting events from Twitter (currently “X”) posts (also known as Tweets). To handle large amounts of data, the proposed method divides tweets into bins and uses postprocessing methods to extract bursty keyphrases. These keyphrases are then used to generate a weighted keyphrase graph using the Word2Vec model. Finally, Markov clustering is employed to cluster and detect events in the bursty keyphrase graph. conteNXt is evaluated on the EventCorpus2012 benchmark dataset and two additional datasets extracted from the archive, Archive2020 and Archive2021, using performance evaluation metrics: #events, precision, recall, and F1-score. The proposed approach outperforms state-of-the-art methods, including SEDTWik, Twevent, Sentence-BERT, MABED, EDED, CommunityINDICATOR, and EventX. Additionally, the proposed approach is capable of detecting vital events that are not identified by the aforementioned state-of-the-art methods. https://github.com/Sielvi/conteNXt
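The final stage of the pipeline described above clusters a similarity-weighted keyphrase graph with Markov clustering. The sketch below is a plain NumPy implementation of the generic Markov clustering (MCL) procedure on such a matrix; the similarity matrix S, the expansion and inflation settings, and the mapping from clusters to candidate events are illustrative assumptions rather than the authors' code.

import numpy as np

def markov_cluster(S, expansion=2, inflation=2.0, iters=50, tol=1e-6):
    # Plain Markov clustering (MCL) on a nonnegative keyphrase similarity matrix S
    # (e.g., Word2Vec cosine similarities between bursty keyphrases).
    M = S + np.eye(S.shape[0])                    # self-loops stabilize the walk
    M = M / M.sum(axis=0, keepdims=True)          # make columns stochastic
    for _ in range(iters):
        prev = M.copy()
        M = np.linalg.matrix_power(M, expansion)  # expansion: spread random-walk flow
        M = M ** inflation                        # inflation: favor strong flows
        M = M / M.sum(axis=0, keepdims=True)
        if np.abs(M - prev).max() < tol:
            break
    # Rows that keep mass on their own node act as attractors; the nonzero entries
    # of an attractor row form one cluster, i.e., one candidate event.
    return [np.where(M[i] > 1e-9)[0] for i in range(M.shape[0]) if M[i, i] > 1e-9]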
Citations: 0
Layer-Adapted Implicit Distribution Alignment Networks for Cross-Corpus Speech Emotion Recognition
IF 4.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science, Cybernetics | Pub Date: 2024-03-27 | DOI: 10.1109/TCSS.2024.3362690
Yan Zhao;Yuan Zong;Jincen Wang;Hailun Lian;Cheng Lu;Li Zhao;Wenming Zheng
In this article, we propose a new unsupervised domain adaptation (DA) method called layer-adapted implicit distribution alignment networks (LIDAN) to address the challenge of cross-corpus speech emotion recognition (SER). LIDAN extends our previous ICASSP work, deep implicit distribution alignment networks (DIDAN), whose key contribution lies in the introduction of a novel regularization term called implicit distribution alignment (IDA). This term allows DIDAN trained on source (training) speech samples to remain applicable to predicting emotion labels for target (testing) speech samples, regardless of corpus variance in cross-corpus SER. To further enhance this method, we extend IDA to layer-adapted IDA (LIDA), resulting in LIDAN. This layer-adapted extension consists of three modified IDA terms that consider emotion labels at different levels of granularity. These terms are strategically arranged within different fully connected layers in LIDAN, aligning with the increasing emotion-discriminative abilities with respect to layer depth. This arrangement enables LIDAN to learn emotion-discriminative and corpus-invariant features for SER across various corpora more effectively than DIDAN. It is also worth mentioning that, unlike most existing methods that rely on estimating statistical moments to describe preassumed explicit distributions, both IDA and LIDA take a different approach. They utilize the idea of target sample reconstruction to directly bridge the feature distribution gap without making assumptions about their distribution type. As a result, DIDAN and LIDAN can be viewed as implicit cross-corpus SER methods. To evaluate LIDAN, we conducted extensive cross-corpus SER experiments on the EmoDB, eNTERFACE, and CASIA corpora. The experimental results demonstrate that LIDAN surpasses recent state-of-the-art explicit unsupervised DA methods in tackling cross-corpus SER tasks.
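The abstract characterizes IDA only at a high level: target samples are reconstructed from source features so that the distribution gap is bridged without assuming a distribution form. As one hypothetical way to express such a reconstruction-based alignment penalty, the NumPy sketch below uses a closed-form ridge reconstruction; the function name, the ridge formulation, and the regularization constant are assumptions, not the paper's actual loss.

import numpy as np

def ida_style_loss(F_src, F_tgt, reg=1e-2):
    # Reconstruct every target feature as a linear combination of source features
    # (closed-form ridge solution), then penalize the residual. A small residual
    # means the target distribution is well covered by the source feature span.
    #   C = argmin_C ||F_tgt - C @ F_src||_F^2 + reg * ||C||_F^2
    G = F_src @ F_src.T + reg * np.eye(F_src.shape[0])
    C = F_tgt @ F_src.T @ np.linalg.inv(G)
    residual = F_tgt - C @ F_src
    return np.mean(np.sum(residual ** 2, axis=1))

# A layer-adapted variant would evaluate such a term on the features of several
# fully connected layers, with coarser or finer emotion labels guiding each depth.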
Citations: 0
Cross-Modal Attention Network for Detecting Multimodal Misinformation From Multiple Platforms
IF 4.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science, Cybernetics | Pub Date: 2024-03-26 | DOI: 10.1109/TCSS.2024.3373661
Zhiwei Guo;Yang Li;Zhenguo Yang;Xiaoping Li;Lap-Kei Lee;Qing Li;Wenyin Liu
Misinformation detection in short videos on social media has become a pressing issue owing to the popularity of such content. However, datasets for misinformation detection are limited in terms of modality and sources, hindering the development of effective detection methods. In this article, we introduce a novel dataset, denoted the multiplatform multimodal misinformation (3M) dataset. Our dataset is collected specifically to investigate and address misinformation in a multimodal context. A total of 17 352 videos were collected from two prominent social media platforms, namely TikTok and Weibo. The 3M dataset covers 30 different topics, such as sports, health, news, and art, providing a diverse range of content for analysis. We propose a novel approach named cross-modal attention misinformation detection (CAMD) for effectively detecting and addressing multimodal misinformation. CAMD leverages a cross-modal attention module to facilitate effective information exchange and fusion between modalities by learning the correlations and weights among them. The cross-modal attention module is capable of learning multilevel modality correlations, focuses primarily on the interaction between multimodal sequences across different time steps, and simultaneously adjusts the information from the source modality based on the information of the target modality. Extensive experiments on the 3M dataset show that the proposed method achieves state-of-the-art performance. Specifically, CAMD achieves accuracy, F1-score, precision, and recall values of 76.86%, 58.05%, 87.86%, and 58.70%, respectively, on the 3M dataset.
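Since the abstract does not spell out the attention formulation, the following NumPy sketch shows a generic single-head cross-modal attention block in which one modality (e.g., text) queries another (e.g., video frames); the shapes, projection matrices, and variable names are illustrative assumptions rather than CAMD's actual module.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(Q_feat, KV_feat, Wq, Wk, Wv):
    # The target modality (e.g., text tokens) queries the source modality
    # (e.g., video frames); the output re-expresses source information per query step.
    Q, K, V = Q_feat @ Wq, KV_feat @ Wk, KV_feat @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # (Tq, Tk) cross-modal weights
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
d, dk = 64, 32
text = rng.normal(size=(20, d))                    # 20 text tokens
frames = rng.normal(size=(50, d))                  # 50 video frames
fused = cross_modal_attention(text, frames,
                              rng.normal(size=(d, dk)),
                              rng.normal(size=(d, dk)),
                              rng.normal(size=(d, dk)))
print(fused.shape)                                 # (20, 32)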
Citations: 0
A Comprehensive Understanding of Code-Mixed Language Semantics Using Hierarchical Transformer
IF 5 | CAS Tier 2 (Computer Science) | Q1 Social Sciences | Pub Date: 2024-03-25 | DOI: 10.1109/TCSS.2024.3360378
Tharun Suresh;Ayan Sengupta;Md Shad Akhtar;Tanmoy Chakraborty
Being a popular mode of text-based communication in multilingual communities, code mixing in online social media has become an important subject to study. Learning the semantics and morphology of code-mixed language remains a key challenge due to the scarcity of data and the unavailability of robust, language-invariant representation learning techniques. Any morphologically rich language can benefit from character, subword, and word-level embeddings, aiding in learning meaningful correlations. In this article, we explore a hierarchical transformer (HIT)-based architecture to learn the semantics of code-mixed languages. HIT consists of multiheaded self-attention (MSA) and outer product attention components to simultaneously comprehend the semantic and syntactic structures of code-mixed texts. We evaluate the proposed method across six Indian languages (Bengali, Gujarati, Hindi, Tamil, Telugu, and Malayalam) and Spanish for nine tasks on 17 datasets. The HIT model outperforms state-of-the-art code-mixed representation learning and multilingual language models on 13 datasets across eight tasks. We further demonstrate the generalizability of the HIT architecture using masked language modeling (MLM)-based pretraining, zero-shot learning (ZSL), and transfer learning approaches. Our empirical results show that the pretraining objectives significantly improve performance on downstream tasks.
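The abstract names the ingredients (character, subword, and word-level embeddings, multiheaded self-attention, outer product attention) without giving the equations. The NumPy sketch below illustrates only the hierarchical idea, attention-pooling subword vectors into word vectors and then applying plain self-attention over words; the pooling vector, the projection matrices, and the omission of the outer product attention component are simplifying assumptions, not the HIT architecture itself.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over a sequence of word vectors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1) @ V

def hierarchical_encode(subword_seqs, word_attn, v):
    # subword_seqs: list of (n_subwords_i, d) arrays, one entry per word;
    # v: (d,) pooling vector; word_attn: (Wq, Wk, Wv) for the word level.
    words = []
    for S in subword_seqs:
        weights = softmax(S @ v)        # attention-pool subwords into a word vector
        words.append(weights @ S)
    return self_attention(np.stack(words), *word_attn)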
Citations: 0
Extracting Higher Order Topological Semantic via Motif-Based Deep Graph Neural Networks
IF 4.5 | CAS Tier 2 (Computer Science) | Q1 Computer Science, Cybernetics | Pub Date: 2024-03-21 | DOI: 10.1109/TCSS.2024.3372775
Ke-Jia Zhang;Xiao Ding;Bing-Bing Xiang;Hai-Feng Zhang;Zhong-Kui Bao
Graph neural networks (GNNs) are efficient techniques for learning graph representations and have shown remarkable success in tackling diverse graph-related tasks. However, in the context of the neighborhood aggregation paradigm, conventional GNNs have limited capabilities in capturing the higher order structures and topological semantics of graphs. Researchers have attempted to overcome this limitation by designing new GNNs that explore the impacts of motifs to capture potentially higher order graph information. However, existing motif-based GNNs often ignore lower order connectivity patterns such as nodes and edges, which leads to poor representation of sparse networks. To address these limitations, we propose an innovative approach. First, we design convolution kernels on both motif-based and simple graphs. Second, we introduce a multilevel graph convolution framework for extracting higher order topological semantics of graphs. Our approach overcomes the limitations of prior methods, demonstrating state-of-the-art performance in downstream tasks with excellent scalability. Extensive experiments on real-world datasets validate the effectiveness of our proposed method.
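One concrete way to read "convolution kernels on both motif-based and simple graphs" is to derive a motif adjacency from the plain adjacency and run a propagation step over each; the NumPy sketch below does this for the triangle motif. The triangle choice, the summation fusion, and all function names are illustrative assumptions rather than the authors' design.

import numpy as np

def triangle_motif_adjacency(A):
    # M[i, j] counts the triangles that edge (i, j) participates in:
    # common-neighbor counts (A @ A) masked to existing edges.
    A = (A > 0).astype(float)
    return (A @ A) * A

def normalize(A):
    # D^(-1/2) (A + I) D^(-1/2) with self-loops.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * np.outer(d_inv_sqrt, d_inv_sqrt)

def two_channel_gcn_layer(A, X, W_edge, W_motif):
    # Aggregate over the simple graph and its triangle-motif graph, then sum,
    # so that both lower order (edge) and higher order (motif) patterns contribute.
    M = triangle_motif_adjacency(A)
    H_edge = normalize(A) @ X @ W_edge
    H_motif = normalize(M) @ X @ W_motif
    return np.maximum(H_edge + H_motif, 0.0)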
Citations: 0
Robust Asymmetric Cross-Modal Hashing Retrieval With Dual Semantic Enhancement
IF 5 | CAS Tier 2 (Computer Science) | Q1 Social Sciences | Pub Date: 2024-03-20 | DOI: 10.1109/TCSS.2024.3352494
Shaohua Teng;Tuhong Xu;Zefeng Zheng;NaiQi Wu;Wei Zhang;Luyao Teng
As social media involves large amounts of data with multimodal properties, cross-modal hashing (CMH) retrieval has gained extensive application thanks to its high efficiency and low storage consumption. However, two issues hinder the performance of existing semantics-learning-based CMH methods: 1) there exist some nonlinear relationships, noises, and outliers in the data, which may degrade the learning effectiveness of a model; and 2) the complementary relationships between the label semantics and sample semantics may be inadequately explored. To address the above two problems, a method called robust asymmetric cross-modal hashing retrieval with dual semantic enhancement (RADSE) is proposed. RADSE consists of three parts: 1) cross-modal data alignment (CDA) that applies kernel mapping and establishes a unified linear representation in the neighborhood to capture the nonlinear relationships between cross-modal data; 2) relaxed label semantic learning for robustness (RLSLR) that uses a relaxation strategy to expand label distinctiveness and leverages the $\ell_{2,1}$ norm to enhance the robustness of the model against noise and outliers; and 3) dual semantic enhancement learning (DSEL) that learns more interrelationships between samples under label semantic guidance to ensure the mutual enhancement of semantic information. Extensive experiments and analyses on three popular datasets demonstrate that RADSE outperforms most existing methods in terms of mean average precision (MAP), precision-recall (P–R) curves, and top-N precision curves. In the MAP comparisons, RADSE improves by an average of 2%–3% on two retrieval tasks.
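The robustness mechanism cited above is the $\ell_{2,1}$ norm, which penalizes the rows of a residual matrix by their Euclidean length so that outlier samples grow the objective linearly rather than quadratically. The NumPy sketch below computes the norm and the diagonal row weights used in the common iterative-reweighting scheme; it is a generic illustration of the norm, not RADSE's optimization procedure.

import numpy as np

def l21_norm(E):
    # ||E||_{2,1}: sum of row-wise Euclidean norms, so an outlier row adds its
    # error linearly rather than quadratically.
    return np.sqrt((E ** 2).sum(axis=1)).sum()

def l21_row_weights(E, eps=1e-8):
    # Diagonal weights for the usual iterative-reweighting scheme: minimizing
    # ||E||_{2,1} is handled via tr(E^T D E) with D = diag(1 / (2 * ||e_i||_2)).
    return 1.0 / (2.0 * np.sqrt((E ** 2).sum(axis=1)) + eps)

E = np.vstack([np.random.default_rng(0).normal(0.0, 0.1, (5, 4)),
               np.full((1, 4), 10.0)])      # the last row mimics an outlier sample
print(l21_norm(E))
print(l21_row_weights(E))                   # the outlier row receives a tiny weight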
Citations: 0
TT-GCN: Temporal-Tightly Graph Convolutional Network for Emotion Recognition From Gaits
IF 5 | CAS Tier 2 (Computer Science) | Q1 Social Sciences | Pub Date: 2024-03-20 | DOI: 10.1109/TCSS.2024.3364378
Tong Zhang;Yelin Chen;Shuzhen Li;Xiping Hu;C. L. Philip Chen
The human gait reflects substantial information about individual emotions. Current gait emotion recognition methods focus on capturing gait topology information and ignore the importance of fine-grained temporal features. This article proposes the temporal-tightly graph convolutional network (TT-GCN) to extract temporal features. TT-GCN comprises three significant mechanisms: the causal temporal convolution network (causal-TCN), the walking direction recognition auxiliary task, and the feature mapping layer. To obtain tight temporal dependencies and enhance the relevance among gait periods, the causal-TCN is introduced. Based on the assumption of emotional consistency across walking directions, the auxiliary task is proposed to enhance the ability of fine-grained feature extraction. Through the feature mapping layer, affective features can be mapped into an appropriate representation and fused with deep learning features. TT-GCN shows the best performance across five comprehensive metrics. All experimental results verify the necessity and feasibility of exploring fine-grained temporal feature extraction.
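A causal temporal convolution, the first mechanism listed above, lets the output at frame t depend only on frames at or before t by padding the sequence on the left. The NumPy sketch below shows this building block for a single output channel; the array shapes, the dilation handling, and the omission of residual connections are simplifying assumptions and not the TT-GCN implementation.

import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    # x: (T, C) per-frame skeleton features; kernel: (K, C) filter for one output channel.
    # Output[t] uses only x[t], x[t - dilation], ..., x[t - (K - 1) * dilation].
    T, C = x.shape
    K = kernel.shape[0]
    pad = (K - 1) * dilation
    xp = np.vstack([np.zeros((pad, C)), x])        # left padding keeps the layer causal
    out = np.empty(T)
    for t in range(T):
        window = xp[t : t + pad + 1 : dilation]    # the K causal frames ending at t
        out[t] = np.sum(window * kernel)
    return out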
Citations: 0