Latest Publications in Information Fusion
CMVF: Cross-Modal Unregistered Video Fusion via Spatio-Temporal Consistency
IF 18.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-07 · DOI: 10.1016/j.inffus.2026.104212
Jianfeng Ding, Hao Zhang, Zhongyuan Wang, Jinsheng Xiao, Xin Tian, Zhen Han, Jiayi Ma
{"title":"CMVF: Cross-Modal Unregistered Video Fusion via Spatio-Temporal Consistency","authors":"Jianfeng Ding, Hao Zhang, Zhongyuan Wang, Jinsheng Xiao, Xin Tian, Zhen Han, Jiayi Ma","doi":"10.1016/j.inffus.2026.104212","DOIUrl":"https://doi.org/10.1016/j.inffus.2026.104212","url":null,"abstract":"","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"91 1","pages":""},"PeriodicalIF":18.6,"publicationDate":"2026-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146138678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Cross-modal contrastive learning for 3D point cloud-text fusion via implicit semantic alignment
IF 18.6 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-05 · DOI: 10.1016/j.inffus.2026.104208
Xiangtian Zheng, Chen Ji, Wei Cai, Xianghua Tang, Xiaolin Yang, Liang Cheng
{"title":"Cross-modal contrastive learning for 3D point cloud-text fusion via implicit semantic alignment","authors":"Xiangtian Zheng, Chen Ji, Wei Cai, Xianghua Tang, Xiaolin Yang, Liang Cheng","doi":"10.1016/j.inffus.2026.104208","DOIUrl":"https://doi.org/10.1016/j.inffus.2026.104208","url":null,"abstract":"","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"1 1","pages":""},"PeriodicalIF":18.6,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146134526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
CATCH: Causal attention enhanced meta-path semantic fusion for robust hyperbolic heterogeneous graph embedding
IF 15.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-05 · DOI: 10.1016/j.inffus.2026.104206
Bojia Liu, Conghui Zheng, Li Pan
Heterogeneous graph representation learning seeks to capture the complex structural and semantic properties in heterogeneous graphs. The integration of hyperbolic space, which is well-suited to modeling the intrinsic degree power-law distribution of graphs, has facilitated significant advancements in this area. Recent methods leverage hyperbolic attention mechanisms to fuse semantic information within metapath-induced subgraphs. Despite this progress, a major limitation remains: these methods leverage attention for information aggregation but fail to model the causal relationship between semantic fusion and downstream task performance, leading to spurious semantic associations that reduce robustness to noise and impair cross-task generalization. To address this challenge, we propose a Causal ATtention enhanCed Hyperbolic Heterogeneous Graph Neural Network (CATCH), intending to achieve sufficient semantic information fusion. To the best of our knowledge, CATCH is the first to integrate hyperbolic space with causal inference for heterogeneous graph representations, directly targeting spurious semantic correlations at the source. Specifically, CATCH explicitly encodes the Euclidean node attributes of different types into a shared semantic hyperbolic space. To capture the underlying semantics, context subgraphs based on one-order and high-order metapaths are constructed to facilitate hyperbolic attention-based intra-level and inter-level information aggregation, thus forming comprehensive representations. Finally, a causal attention enhancement mechanism is implemented with direct supervision on attention learning, leveraging counterfactual causal inference to generate counterfactual representations for computing direct causal effects. By jointly optimizing a task-specific objective alongside a causal loss, CATCH promotes more faithful semantic encoding, leading to improved robustness and generalization. Extensive experiments on four real-world datasets validate the superior performance of CATCH across multiple tasks. The implementation is available at https://github.com/Crystal-LiuBojia/CATCH.
Graphical abstract: Recommendation performance on Amazon-CD and Amazon-Book.
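The abstract gives no implementation details, but the core mechanism of attention computed in hyperbolic space can be conveyed with a small sketch. The Python snippet below is a hypothetical toy example, not the authors' code: it lifts Euclidean node features onto the Poincaré ball with an exponential map and weights a node's neighbors by their negative hyperbolic distance; the curvature value and the Euclidean-space aggregation are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def expmap0(v, c=1.0, eps=1e-6):
    # Exponential map at the origin of the Poincare ball with curvature -c.
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(c**0.5 * norm) * v / (c**0.5 * norm)

def poincare_dist(x, y, c=1.0, eps=1e-6):
    # Geodesic distance between two points on the Poincare ball.
    sq = ((x - y) ** 2).sum(-1)
    den = (1 - c * (x**2).sum(-1)) * (1 - c * (y**2).sum(-1))
    arg = 1 + 2 * c * sq / den.clamp_min(eps)
    return torch.acosh(arg.clamp_min(1 + eps)) / c**0.5

def hyperbolic_attention(center, neighbors, c=1.0):
    # center: (d,) node feature; neighbors: (k, d) neighbor features (Euclidean inputs).
    h_c = expmap0(center, c)
    h_n = expmap0(neighbors, c)
    scores = -poincare_dist(h_c.unsqueeze(0), h_n, c)   # closer in hyperbolic space => larger weight
    alpha = F.softmax(scores, dim=0)                    # (k,) attention weights
    return (alpha.unsqueeze(-1) * neighbors).sum(0)     # weighted aggregation (kept Euclidean for brevity)

if __name__ == "__main__":
    torch.manual_seed(0)
    out = hyperbolic_attention(torch.randn(16), torch.randn(5, 16))
    print(out.shape)  # torch.Size([16])
```

CATCH additionally supervises the attention with the counterfactual causal mechanism described above; that component is omitted here for brevity.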
Citations: 0
Multi-lingual approach for multi-modal emotion and sentiment recognition based on triple fusion
IF 15.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-04 · DOI: 10.1016/j.inffus.2026.104207
Maxim Markitantov, Elena Ryumina, Anastasia Dvoynikova, Alexey Karpov
Affective states recognition is a challenging task that requires a large amount of input data, such as audio, video, and text. Current multi-modal approaches are often single-task and corpus-specific, resulting in overfitting, poor generalization across corpora, and reduced real-world performance. In this work, we address these limitations by: (1) multi-lingual training on corpora that include Russian (RAMAS) and English (MELD, CMU-MOSEI) speech; (2) multi-task learning for joint emotion and sentiment recognition; and (3) a novel Triple Fusion strategy that employs cross-modal integration at both hierarchical uni-modal and fused multi-modal feature levels, enhancing intra- and inter-modal relationships of different affective states and modalities. Additionally, to optimize performance of the approach proposed, we compare temporal encoders (Transformer-based, Mamba, xLSTM) and fusion strategies (double and triple fusion strategies with and without a label encoder) to comprehensively understand their capabilities and limitations. On the Test subset of the CMU-MOSEI corpus, the proposed approach showed mean weighted F1-score (mWF) of 88.6% for emotion recognition and weighted F1-score (WF) of 84.8% for sentiment recognition (respectively +9.5% and +6.0% absolute over prior multi-task baselines). On the Test subset of the MELD corpus, the proposed approach showed WF of 49.6% for emotion and 60.0% for sentiment (+8.4% WF for emotion recognition over the strongest multi-task baseline). On the Test subset of the RAMAS corpus, the proposed approach showed a competitive performance with WF of 71.8% and 90.0% for emotion and sentiment, respectively. We compare the performance of the approach proposed with that of the state-of-the-art ones. The source code and demo of the developed approach is publicly available at https://smil-spcras.github.io/MASAI/.
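To make the "triple fusion" idea more concrete, here is a minimal hypothetical PyTorch sketch of cross-modal attention applied first between uni-modal streams and then over the fused multi-modal stream; the module layout, feature dimensions, and the 7-class head are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TripleFusionSketch(nn.Module):
    """Toy three-stage fusion: pairwise cross-modal attention over uni-modal
    sequences, followed by attention over the concatenated multi-modal stream."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.av = nn.MultiheadAttention(dim, heads, batch_first=True)  # audio attends to video
        self.at = nn.MultiheadAttention(dim, heads, batch_first=True)  # audio attends to text
        self.joint = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 7)  # e.g. 7 emotion classes (assumption)

    def forward(self, audio, video, text):
        # audio/video/text: (batch, seq_len, dim) uni-modal token sequences.
        a2v, _ = self.av(audio, video, video)        # stage 1: uni-modal cross-attention
        a2t, _ = self.at(audio, text, text)
        fused = torch.cat([a2v, a2t, audio], dim=1)  # stage 2: build a joint multi-modal stream
        joint, _ = self.joint(fused, fused, fused)   # stage 3: attention over the fused stream
        return self.head(joint.mean(dim=1))          # pooled logits

if __name__ == "__main__":
    m = TripleFusionSketch()
    logits = m(torch.randn(2, 10, 128), torch.randn(2, 12, 128), torch.randn(2, 8, 128))
    print(logits.shape)  # torch.Size([2, 7])
```

The label encoder and the Mamba/xLSTM temporal encoders compared in the paper are omitted from this sketch.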
Citations: 0
Graph-guided cross-image correlation learning with adaptive global-local feature fusion for fine-grained visual representation
IF 15.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-02-03 · DOI: 10.1016/j.inffus.2026.104204
Hongxing You, Yangtao Wang, Xiaocui Li, Yanzhao Xie, Da Chen, Xinyu Zhang, Wensheng Zhang
Fine-grained visual classification (FGVC) has been challenging due to the difficulty of distinguishing between highly similar local regions. Recent studies leverage graph neural networks (GNNs) to learn local representations, but they focus solely on patch interactions within each image, failing to capture semantic relationships across different samples and leaving fine-grained features semantically disconnected from each other. To address these challenges, we propose Graph-guided Cross-image Correlation Learning with Adaptive Global-local Feature Fusion for Fine-grained Visual Representation (termed GCCR). We design a Cross-image Correlation Learning (CCL) module in which spatially corresponding patches across images are connected as graph nodes, enabling inter-image interactions to capture semantically rich local features. In this CCL module, we introduce a Ranking Loss to address the limitation of traditional classification losses, which focus solely on maximizing individual sample confidence without explicitly constraining feature discriminability among visually similar categories. In addition, GCCR constructs a lightweight fusion module that dynamically balances the contributions of global and local features, leading to unbiased image representations. We conduct extensive experiments on four popular FGVC datasets: CUB-200-2011, Stanford Cars, FGVC-Aircraft, and iNaturalist 2017. Experimental results verify that GCCR achieves much higher performance than state-of-the-art (SOTA) FGVC methods while maintaining lower model complexity. On the most challenging dataset, iNaturalist 2017, GCCR gains at least 7.51% in accuracy while using over 4.42M fewer parameters and 80M fewer FLOPs than the best competing solution. We release the pretrained model and code at GitHub: https://github.com/dislie/GCCR.
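The Ranking Loss is described only at a high level; a common way to realize such a constraint is a margin-based hinge that pushes the ground-truth logit above the hardest competing class. The sketch below is one plausible formulation under that assumption — the margin value and the hardest-negative choice are illustrative, not necessarily GCCR's exact loss.

```python
import torch

def margin_ranking_loss(logits, labels, margin=0.3):
    """Encourage the true-class logit to exceed the best wrong-class logit
    by at least `margin` (hinge on the hardest negative per sample)."""
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)   # (B,)
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))          # hide the true class
    hardest_other = masked.max(dim=1).values                        # (B,) strongest confusable class
    return torch.clamp(margin + hardest_other - true_logit, min=0).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(4, 200)          # e.g. 200 fine-grained classes
    labels = torch.randint(0, 200, (4,))
    print(margin_ranking_loss(logits, labels).item())
```

Such a term would typically be added to the standard classification loss rather than replace it.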
Citations: 0
Crowdsourced federated learning with inconsistent label representation
IF 15.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-30 · DOI: 10.1016/j.inffus.2026.104194
Yunlong He, Fei Chen, Hanlin Zhang, Jia Yu
When personalized federated learning meets crowdsourced label annotation, it can potentially form a complete ecosystem, from large-scale data labeling, through model training on massive numbers of devices, to flexible services for diverse end users. In practice, however, crowdsourced annotators rarely follow a uniform annotation guideline and instead annotate in their own way. Even when annotators share a consistent perception of the data, their label annotations can still be expressed in different ways. This issue is especially serious in the federated learning scenario, where diverse label expressions are kept locally on distributed clients for privacy reasons and can hardly be unified. In this work, we propose CrowdFed, a systematic solution for crowdsourced federated learning systems with an underlying label representation skew issue. Specifically, the global model is trained through federated learning for global categorical alignment, and personalized layers are learned through an auxiliary network in each client for local representation alignment. Furthermore, a category-level similarity matching strategy is presented to align inconsistent label representations between local and global categories. Evaluated on four benchmark datasets, the proposed strategy demonstrates its superiority in terms of system efficiency and cost.
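The category-level similarity matching between local and global label representations is not specified here; one simplified, hypothetical realization is to build a prototype embedding per class and match local classes to global ones by cosine similarity, as sketched below (the prototype averaging and argmax assignment are assumptions for illustration).

```python
import torch
import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    # Mean embedding per local class; features: (N, d), labels: (N,) integer class ids.
    protos = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(0)
    return protos

def match_local_to_global(local_protos, global_protos):
    # Cosine similarity between every (local, global) prototype pair,
    # then assign each local class to its most similar global class.
    sim = F.cosine_similarity(local_protos.unsqueeze(1), global_protos.unsqueeze(0), dim=-1)
    return sim.argmax(dim=1)   # (num_local_classes,) indices into the global categories

if __name__ == "__main__":
    torch.manual_seed(0)
    feats, labels = torch.randn(100, 32), torch.randint(0, 5, (100,))
    local = class_prototypes(feats, labels, 5)
    mapping = match_local_to_global(local, torch.randn(10, 32))
    print(mapping.shape)  # torch.Size([5])
```

In a federated setting, such matching could operate on shared prototypes rather than raw data, in line with the privacy constraints described above.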
Citations: 0
A two-stage learning network for PVINS modeling and fusion estimation in challenging environments
IF 15.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-30 · DOI: 10.1016/j.inffus.2026.104192
Xuanyu Wu, Jiankai Yin, Jian Yang, Xin Liu, Wenshuo Li, Lei Guo
In the polarization-based visual-inertial navigation system (PVINS), information from polarization sensor (PS) and visual-inertial navigation system (VINS) is fused to enable position and attitude estimation, thereby offering an effective solution for autonomous navigation in global navigation satellite system (GNSS)-denied environments. However, under challenging conditions such as complex weather, the state-space model of PVINS becomes susceptible to uncertain model error, limiting the accuracy and adaptability of the system. To address this issue, we propose a tightly coupled PVINS integration scheme based on a two-stage learning network, which consists of model error compensation and adaptive Kalman gain learning. In the first stage, a deep neural network with a shared-weight architecture is designed to learn and compensate for the state-space model error, thereby reducing network complexity and enabling more precise system modeling. In the second stage, to improve fusion accuracy of PVINS, a Kalman gain learning network (KGLN)-based intelligent fusion method is proposed. This approach enables the adaptive learning of Kalman gains, circumventing the dependency of the system on knowledge of the noise statistics. Finally, the performance of the system is verified through the semi-physical simulation and flight test. The experimental results confirm that the proposed method outperforms conventional PVINS in terms of both position and heading estimation.
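To illustrate the general idea of replacing the analytic Kalman gain with a learned one, the sketch below runs a predict/update step in which a small MLP maps the predicted state and the innovation to a gain matrix. This is a generic illustration under assumed state and measurement dimensions, not the paper's KGLN architecture.

```python
import torch
import torch.nn as nn

class LearnedGainFilter(nn.Module):
    """Kalman-style predict/update loop where the gain is produced by an MLP
    instead of being computed from noise covariances (illustrative only)."""

    def __init__(self, state_dim=4, meas_dim=2, hidden=64):
        super().__init__()
        self.state_dim, self.meas_dim = state_dim, meas_dim
        self.gain_net = nn.Sequential(
            nn.Linear(state_dim + meas_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim * meas_dim),
        )

    def forward(self, x, F_mat, H_mat, z):
        # x: (B, state_dim) previous state estimate, z: (B, meas_dim) measurement.
        x_pred = x @ F_mat.T                       # predict with the transition model
        innovation = z - x_pred @ H_mat.T          # measurement residual
        K = self.gain_net(torch.cat([x_pred, innovation], dim=-1))
        K = K.view(-1, self.state_dim, self.meas_dim)
        return x_pred + (K @ innovation.unsqueeze(-1)).squeeze(-1)   # learned-gain update

if __name__ == "__main__":
    B, filt = 3, LearnedGainFilter()
    F_mat, H_mat = torch.eye(4), torch.eye(2, 4)
    x_new = filt(torch.randn(B, 4), F_mat, H_mat, torch.randn(B, 2))
    print(x_new.shape)  # torch.Size([3, 4])
```

In the paper's two-stage scheme, a separate network first compensates the state-space model error; that stage is not shown here.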
Citations: 0
Vision-language model with siamese bilateral difference network and text-guided image feature enhancement for acute ischemic stroke outcome prediction on CT angiography
IF 15.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-29 · DOI: 10.1016/j.inffus.2026.104195
Hulin Kuang, Bin Hu, Shuai Yang, Dongcui Wang, Guanghua Luo, Weihua Liao, Wu Qiu, Shulin Liu, Jianxin Wang
Acute ischemic stroke (AIS) outcome prediction is crucial for treatment decisions. However, AIS outcome prediction is challenging due to the combined influence of lesion characteristics, vascular status, and other health conditions. In this study, we introduce a vision-language model with a Siamese bilateral difference network and a text-guided image feature enhancement module for predicting AIS outcome (e.g., modified Rankin Scale, mRS) on CT angiography. In the Siamese bilateral difference network, based on fine-tuning the foundation model LVM-Med, we design an interactive Transformer fine-tuning encoder and a vision question answering guided bilateral difference awareness module, which generates bilateral difference text via image-text pair question answering as a prompt to enhance the extracted brain vascular difference features. Additionally, in the text-guided image feature enhancement module, we propose a text feature extraction module to extract patient phrase-level and inter-phrase embeddings from clinical notes, and employ a multi-scale image-text interaction module to obtain fine-grained phrase-enhanced image attention feature and coarse-grained phrase context-aware image attention feature. We validate our model on the public ISLES2024 dataset, a private dataset A, and an external AIS dataset. It achieves accuracies of 81.11%, 83.05%, and 80.00% and AUCs of 80.06%, 85.48% and 82.62% for 90-day mRS prediction on the 3 datasets, respectively, outperforming several state-of-the-art methods and demonstrating its generalization ability. Moreover, the proposed method can be effectively extended to glaucoma visual field progression prediction, which is also related to vascular differences and clinical notes.
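The Siamese bilateral-difference idea — comparing a brain image with its left-right mirror through a shared encoder — can be conveyed with a small hypothetical sketch; the tiny encoder, the horizontal flip, and the absolute feature difference below are illustrative assumptions rather than the paper's pipeline.

```python
import torch
import torch.nn as nn

class BilateralDifferenceSketch(nn.Module):
    """Shared (siamese) encoder applied to a brain image and its left-right
    mirror; the feature difference highlights asymmetries between hemispheres."""

    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(              # tiny stand-in encoder
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(3 * feat, 2)   # e.g. favorable vs unfavorable outcome

    def forward(self, img):
        mirrored = torch.flip(img, dims=[-1])      # flip left-right
        f, f_m = self.encoder(img), self.encoder(mirrored)   # shared weights
        diff = torch.abs(f - f_m)                  # bilateral difference feature
        return self.classifier(torch.cat([f, f_m, diff], dim=-1))

if __name__ == "__main__":
    model = BilateralDifferenceSketch()
    print(model(torch.randn(2, 1, 64, 64)).shape)  # torch.Size([2, 2])
```

The text-guided image feature enhancement branch described above is omitted from this sketch.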
Citations: 0
Information-theoretic graph fusion with vision-language-action model for policy reasoning and dual robotic control
IF 15.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-29 · DOI: 10.1016/j.inffus.2026.104193
Shunlei Li, Longsen Gao, Jin Wang, Chang Che, Xi Xiao, Jiuwen Cao, Yingbai Hu, Hamid Reza Karimi
Teaching robots dexterous skills from human videos remains challenging due to the reliance on low-level trajectory imitation, which fails to generalize across object types, spatial layouts, and manipulator configurations. We propose Graph-Fused Vision-Language-Action (GF-VLA), a framework that enables dual-arm robotic systems to perform task-level reasoning and execution directly from RGB(-D) human demonstrations. GF-VLA first extracts Shannon-information-based cues to identify hands and objects with the highest task relevance, then encodes these cues into temporally ordered scene graphs that capture both hand-object and object-object interactions. These graphs are fused with a language-conditioned transformer that generates hierarchical behavior trees and interpretable Cartesian motion commands. To improve execution efficiency in bimanual settings, we further introduce a cross-hand selection policy that infers optimal gripper assignment without explicit geometric reasoning. We evaluate GF-VLA on four structured dual-arm block assembly tasks involving symbolic shape construction and spatial generalization. Experimental results show that the information-theoretic scene representation achieves over 95% graph accuracy and 93% subtask segmentation, supporting the LLM planner in generating reliable and human-readable task policies. When executed by the dual-arm robot, these policies yield 94% grasp success, 89% placement accuracy, and 90% overall task success across stacking, letter-building, and geometric reconfiguration scenarios, demonstrating strong generalization and robustness across diverse spatial and semantic variations.
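As a rough illustration of "Shannon-information-based cues" for ranking which hands and objects matter most in a demonstration, the sketch below scores each detected entity by the entropy of a hypothetical histogram of its interaction events; the input format and the entropy-as-relevance rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Entropy (in bits) of a discrete distribution p (need not be pre-normalized)."""
    p = np.asarray(p, dtype=float)
    p = p / (p.sum() + eps)
    return float(-(p * np.log2(p + eps)).sum())

def rank_entities_by_information(interaction_counts):
    """interaction_counts: dict mapping entity name -> histogram of its interaction
    events over the demonstration. Entities with more varied (higher-entropy)
    interaction patterns are treated as more informative for the task."""
    scores = {name: shannon_entropy(hist) for name, hist in interaction_counts.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    demo = {
        "right_hand": [5, 4, 6, 5],   # interacts with many objects -> high entropy
        "red_block":  [9, 1, 0, 0],   # mostly one interaction type  -> lower entropy
        "table":      [1, 0, 0, 0],
    }
    for name, score in rank_entities_by_information(demo):
        print(f"{name}: {score:.2f} bits")
```

The ranked entities would then seed the temporally ordered scene graphs and behavior-tree generation described above.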
Citations: 0
Shape-aware osteoarthritis network: Bidirectional fusion of MRI and 3D point clouds for knee osteoarthritis diagnosis
IF 15.5 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-29 · DOI: 10.1016/j.inffus.2026.104198
Dawei Zhang, Chenglin Sang, Tianyi Lyu
Knee osteoarthritis (KOA) is a common degenerative joint disease, and accurate diagnosis and severity grading are crucial for effective treatment. Although deep learning techniques based on X-rays or magnetic resonance imaging (MRI) have greatly improved diagnostic accuracy, two-dimensional images often cannot fully capture the complex three-dimensional morphology and texture changes related to KOA. To address these challenges, we propose a shape-aware osteoarthritis diagnostic network, a novel bidirectional cross-modal fusion framework that integrates 3D point clouds and MRI sequences. The framework consists of three parts: (1) a local-relation-aware dynamic graph convolutional neural network (CNN) that extracts complex geometric features from point clouds representing the surfaces of knee bones and cartilage; (2) a sequence aggregation method for MRI that combines a 2D CNN for spatial feature extraction with a self-attention mechanism across slices; and (3) a bidirectional cross-modal fusion module that performs in-depth interactive feature learning between the geometric domain of the point clouds and the textural spatiotemporal domain of the MRI, enabling the two modalities to enhance each other's representations. Extensive experiments on a large cohort from the Osteoarthritis Initiative (OAI) show that our model achieves state-of-the-art performance. Its accuracy on the challenging five-level Kellgren-Lawrence (KL) classification is 0.73, an improvement of approximately 23.7% over the 0.59 achieved using 3D shape features alone on the ShapeMed-Knee benchmark. Furthermore, its AUC for binary OA diagnosis is 0.95, significantly better than existing unimodal and multimodal baselines.
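A bidirectional fusion block between point-cloud tokens and MRI slice tokens can be sketched as two cross-attention passes, one in each direction, before pooling for grading; the use of standard multi-head attention, the token dimensions, and the five-class head below are illustrative assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class BidirectionalFusionSketch(nn.Module):
    """Two cross-attention passes: point-cloud tokens attend to MRI tokens and
    vice versa, so each modality is enhanced by the other before pooling."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.pc_from_mri = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mri_from_pc = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.grade_head = nn.Linear(2 * dim, 5)   # e.g. 5 Kellgren-Lawrence grades

    def forward(self, pc_tokens, mri_tokens):
        # pc_tokens: (B, Np, dim) point-cloud features; mri_tokens: (B, Ns, dim) slice features.
        pc_enh, _ = self.pc_from_mri(pc_tokens, mri_tokens, mri_tokens)
        mri_enh, _ = self.mri_from_pc(mri_tokens, pc_tokens, pc_tokens)
        pooled = torch.cat([pc_enh.mean(1), mri_enh.mean(1)], dim=-1)
        return self.grade_head(pooled)

if __name__ == "__main__":
    m = BidirectionalFusionSketch()
    print(m(torch.randn(2, 1024, 128), torch.randn(2, 24, 128)).shape)  # torch.Size([2, 5])
```

The DGCNN point-cloud encoder and the MRI slice encoder described above would produce the token inputs assumed here.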
Citations: 0