
Information Fusion: Latest Publications

StegaFusion: Steganography for information hiding and fusion in multimodality
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.inffus.2026.104150
Zihao Xu, Dawei Xu, Zihan Li, Juan Hu, Baokun Zheng, Chuan Zhang, Liehuang Zhu
Current generative steganography techniques have attracted considerable attention due to their security. However, different platforms and social environments exhibit varying preferred modalities, and existing generative steganography techniques are often restricted to a single modality. Inspired by advancements in inpainting techniques, we observe that the inpainting process is inherently generative. Moreover, cross-modal inpainting minimally perturbs unchanged regions and shares a consistent masking-and-fill procedure. Based on these insights, we introduce StegaFusion, a novel framework for unifying multimodal generative steganography. StegaFusion leverages shared generation seeds and conditional information, which enables the receiver to deterministically reconstruct the reference content. The receiver then performs differential analysis on the inpainting-generated stego content to extract the secret message. Compared to traditional unimodal methods, StegaFusion enhances controllability, security, compatibility, and interpretability without requiring additional model training. To the best of our knowledge, StegaFusion is the first framework to formalize and unify cross-modal generative steganography, offering wide applicability. Extensive qualitative and quantitative experiments demonstrate the superior performance of StegaFusion in terms of controllability, security, and cross-modal compatibility.
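To make the shared-seed idea concrete, the toy sketch below (plain NumPy, not the StegaFusion pipeline) shows how a deterministically regenerable reference plus differential analysis recovers hidden bits; the seeded PRNG stands in for the inpainting generator, and the +/-1 perturbation inside masked rows is an illustrative assumption.

```python
# Toy sketch of the shared-seed, differential-extraction idea described above.
# NOT the StegaFusion pipeline: a real system would use a cross-modal inpainting
# model; here a seeded PRNG stands in for the generator.
import numpy as np

def generate_reference(seed, shape=(8, 8)):
    """Deterministic 'generated' content that the receiver can reconstruct."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=shape).astype(np.int16)

def embed(seed, bits, mask_rows):
    """Sender: fill masked rows with reference content +/- 1 depending on the bit."""
    stego = generate_reference(seed)
    for row, bit in zip(mask_rows, bits):
        stego[row] += 1 if bit else -1   # minimal perturbation inside the mask
    return stego

def extract(seed, stego, mask_rows):
    """Receiver: regenerate the reference and read bits from the sign of the difference."""
    ref = generate_reference(seed)
    diff = stego - ref
    return [int(diff[row].mean() > 0) for row in mask_rows]

seed, bits, mask_rows = 42, [1, 0, 1, 1], [0, 2, 4, 6]
assert extract(seed, embed(seed, bits, mask_rows), mask_rows) == bits
```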
Citations: 0
EDGCN: An embedding-driven fusion framework for heterogeneity-aware motor imagery decoding
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.inffus.2026.104170
Chaowen Shen, Yanwen Zhang, Zejing Zhao, Akio Namiki
Motor imagery electroencephalography (MI-EEG) captures neural activity associated with imagined motor tasks and has been widely applied in both basic neuroscience and clinical research. However, the intrinsic spatio-temporal heterogeneity of MI-EEG signals and pronounced inter-subject variability present major challenges for accurate decoding. Most existing deep learning methods rely on fixed architectures and shared parameters, which limits their ability to capture the complex, dynamic patterns driven by individual differences. To address these limitations, we propose an Embedding-Driven Graph Convolutional Network (EDGCN), which leverages a heterogeneity-aware spatio-temporal embedding fusion mechanism to adaptively generate graph convolutional kernel parameters from a shared embedding-driven parameter bank. Specifically, we design a Multi-Resolution Temporal Embedding (MRTE) strategy based on multi-resolution power spectral features and a Structure-Aware Spatial Embedding (SASE) mechanism that integrates both local and global connectivity structures. On this basis, we construct a heterogeneity-aware parameter generation mechanism based on Chebyshev graph convolution to effectively capture the spatiotemporal heterogeneity of EEG signals, with an orthogonality-constrained parameter space that enhances diversity and representational fusion. Experimental results demonstrate that the proposed model achieves superior classification accuracies of 86.50% and 90.14% on the BCIC-IV-2a and BCIC-IV-2b datasets, respectively, outperforming current state-of-the-art methods.
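The sketch below illustrates, under simplifying assumptions, the embedding-driven kernel-generation idea: Chebyshev graph-convolution weights are produced by attending over a shared parameter bank with a spatio-temporal embedding. Module names, tensor shapes, and the omission of the MRTE/SASE embeddings and the orthogonality constraint are all illustrative, not the paper's implementation.

```python
# Minimal PyTorch sketch: Chebyshev graph-convolution kernels generated from an
# embedding via attention over a shared parameter bank.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingDrivenChebConv(nn.Module):
    def __init__(self, in_dim, out_dim, K=3, bank_size=4, emb_dim=16):
        super().__init__()
        # Shared bank of candidate kernels: bank_size entries of K weight matrices each.
        self.bank = nn.Parameter(torch.randn(bank_size, K, in_dim, out_dim) * 0.1)
        self.keys = nn.Parameter(torch.randn(bank_size, emb_dim))
        self.K = K

    def forward(self, x, lap, emb):
        # x: (N, in_dim) node features, lap: (N, N) scaled Laplacian, emb: (emb_dim,)
        attn = F.softmax(self.keys @ emb, dim=0)              # attention over the bank
        W = torch.einsum('b,bkio->kio', attn, self.bank)      # embedding-specific kernels
        Tk_prev, Tk = x, lap @ x                              # Chebyshev recurrence T0, T1
        out = Tk_prev @ W[0] + (Tk @ W[1] if self.K > 1 else 0)
        for k in range(2, self.K):
            Tk_prev, Tk = Tk, 2 * lap @ Tk - Tk_prev
            out = out + Tk @ W[k]
        return out

conv = EmbeddingDrivenChebConv(in_dim=8, out_dim=4)
x, lap, emb = torch.randn(22, 8), torch.eye(22), torch.randn(16)
print(conv(x, lap, emb).shape)  # torch.Size([22, 4])
```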
Citations: 0
Unleashing Mamba’s expressive power: A non-tradeoff approach to spatio-temporal forecasting
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.inffus.2026.104172
Zhiqi Shao, Ze Wang, Haoning Xi, Michael G.H. Bell, Xusheng Yao, D. Glenn Geers, Junbin Gao
Real-time spatiotemporal forecasting, particularly in traffic systems, requires balancing computational cost and predictive accuracy, a challenge that conventional methods struggle to address effectively. In this work, we propose a non-trade-off framework called Spatial-Temporal Selective State Space (ST-Mamba), which leverages two key components to achieve both efficiency and accuracy concurrently. The Spatial-Temporal Mixer (ST-Mixer) dynamically fuses spatial and temporal features to capture complex dependencies, and the STF-Mamba layer incorporates Mamba’s selective state-space formulation to capture long-range dynamics efficiently. Beyond empirical improvements, we address a critical gap in the literature by presenting a theoretical analysis of ST-Mamba’s expressive power. Specifically, we establish its ability to approximate a broad class of Transformers and formally demonstrate its equivalence to at least two consecutive attention layers within the same framework. This result highlights ST-Mamba’s capacity to capture long-range dependencies while reducing computational overhead efficiently, reinforcing its theoretical and practical advantages over conventional transformer-based models. Through extensive evaluations of real-world traffic datasets, ST-Mamba demonstrates a 61.11% reduction in runtime alongside a 0.67% improvement in predictive performance compared to leading approaches, underscoring its potential to set a new benchmark for real-time spatiotemporal forecasting.
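For readers unfamiliar with selective state-space layers, the minimal PyTorch sketch below shows the input-dependent (selective) recurrence such layers are built on; the sequential loop, layer sizes, and omission of the ST-Mixer and hardware-aware parallel scan are simplifications, not the ST-Mamba implementation.

```python
# A minimal, self-contained sketch of a selective (input-dependent) state-space scan.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveSSM(nn.Module):
    def __init__(self, dim, state_dim=16):
        super().__init__()
        self.A = nn.Parameter(-torch.rand(dim, state_dim))   # stable diagonal dynamics
        self.to_B = nn.Linear(dim, state_dim)
        self.to_C = nn.Linear(dim, state_dim)
        self.to_dt = nn.Linear(dim, dim)

    def forward(self, x):                                     # x: (batch, seq, dim)
        b, T, d = x.shape
        h = x.new_zeros(b, d, self.A.shape[1])
        ys = []
        for t in range(T):                                    # sequential scan for clarity
            xt = x[:, t]                                      # (b, d)
            dt = F.softplus(self.to_dt(xt))                   # input-dependent step size
            A_bar = torch.exp(dt.unsqueeze(-1) * self.A)      # (b, d, state)
            B, C = self.to_B(xt), self.to_C(xt)               # selection depends on input
            h = A_bar * h + dt.unsqueeze(-1) * B.unsqueeze(1) * xt.unsqueeze(-1)
            ys.append((h * C.unsqueeze(1)).sum(-1))           # read out: (b, d)
        return torch.stack(ys, dim=1)

layer = SelectiveSSM(dim=8)
print(layer(torch.randn(2, 12, 8)).shape)  # torch.Size([2, 12, 8])
```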
Citations: 0
MRFNet: Multi-reference fusion for image deblurring
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.inffus.2026.104169
Tingrui Guo, Chi Xu, Kaifeng Tang, Hao Qian
Motion blur is a persistent challenge in visual data processing. While single-image deblurring methods have made significant progress, using multiple reference images from the same scene for deblurring remains an overlooked problem. Existing methods struggle to integrate information from multiple reference images with differences in lighting, color, and perspective. Herein, we propose MRFNet, a novel framework that leverages any number of discontinuous reference images for deblurring. The framework consists of two key components: (1) the Offset Fusion Module (OFM) guided by dense matching, which aggregates features from discontinuous reference images through high-frequency detail enhancement and permutation-invariant units; and (2) the Deformable Enrichment Module (DEM), which refines misaligned features using deformable convolutions for precise detail recovery. Quantitative and qualitative evaluations on synthetic and real-world datasets show that the proposed method outperforms state-of-the-art deblurring approaches. Additionally, a new real-world dataset is provided to fill the gap in evaluating discontinuous reference problems.
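A key property of the OFM is that it must accept any number of reference images, which suggests a permutation-invariant aggregation. The sketch below shows one such aggregation (mean and max pooling over the reference axis) fused with the blurred-image features; dense matching, high-frequency enhancement, and the deformable DEM refinement are omitted, and all channel sizes are illustrative.

```python
# Hedged sketch of permutation-invariant fusion over an arbitrary number of references.
import torch
import torch.nn as nn

class ReferenceFusion(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(3 * ch, ch, 1)   # blur feats + mean-pooled + max-pooled refs

    def forward(self, blurred, refs):
        # blurred: (B, 3, H, W); refs: (B, N, 3, H, W) with any N >= 1
        B, N, C, H, W = refs.shape
        f_blur = self.encode(blurred)
        f_ref = self.encode(refs.view(B * N, C, H, W)).view(B, N, -1, H, W)
        # Order-independent pooling over the reference axis keeps the module usable
        # for any number of reference images.
        pooled = torch.cat([f_ref.mean(dim=1), f_ref.max(dim=1).values], dim=1)
        return self.fuse(torch.cat([f_blur, pooled], dim=1))

m = ReferenceFusion()
out = m(torch.randn(2, 3, 32, 32), torch.randn(2, 5, 3, 32, 32))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```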
Citations: 0
A data fusion approach to synthesize microwave imagery of tropical cyclones from infrared data using vision transformers
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-22 | DOI: 10.1016/j.inffus.2026.104167
Fan Meng, Tao Song, Xianxuan Lin, Kunlin Yang
Microwave images with high spatiotemporal resolution are essential for observing and predicting tropical cyclones (TCs), including TC positioning, intensity estimation, and detection of concentric eyewalls. Nevertheless, the temporal resolution of tropical cyclone microwave (TCMW) images is limited due to satellite quantity and orbit constraints, presenting a challenging problem for TC disaster forecasting. This research proposes a multi-sensor data fusion approach, using high-temporal-resolution tropical cyclone infrared (TCIR) images to generate synthetic TCMW images, offering a solution to this data scarcity problem. In particular, we introduce a deep learning network based on the Vision Transformer (TCA-ViT) to translate TCIR images into TCMW images. This can be viewed as a form of synthetic data generation, enhancing the available information for decision-making. We integrate a phase-based physical guidance mechanism into the training process. Furthermore, we have developed a dataset of TC infrared-to-microwave image conversions (TCIR2MW) for training and testing the model. Experimental results demonstrate the method’s capability in rapidly and accurately extracting key features of TCs. Leveraging techniques such as masking and transfer learning, it addresses the absence of TCMW images by generating MW images from IR images, thereby aiding downstream tasks like TC intensity and precipitation forecasting. This study introduces a novel approach to the field of TC image research, with the potential to advance deep learning in this direction and provide vital insights for real-time observation and prediction of global TCs. Our source code and data are publicly available online at https://github.com/kleenY/TCIR2MW.
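As a rough illustration of the image-to-image translation setup (not the TCA-ViT architecture), the sketch below maps a single-channel infrared field to a microwave-like field with a patchify / Transformer-encoder / unpatchify pipeline; the phase-based physical guidance and masking strategy are not modeled, and all hyperparameters are placeholders.

```python
# Minimal ViT-style image-to-image translator: patchify, encode, unpatchify.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyImg2ImgViT(nn.Module):
    def __init__(self, img=64, patch=8, dim=128, depth=4, heads=4):
        super().__init__()
        self.patch = patch
        n_patches = (img // patch) ** 2
        self.to_tokens = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_pixels = nn.Linear(dim, patch * patch)

    def forward(self, ir):                                # ir: (B, 1, H, W) infrared input
        B, _, H, W = ir.shape
        tok = self.to_tokens(ir).flatten(2).transpose(1, 2) + self.pos
        tok = self.encoder(tok)                           # global attention over patches
        patches = self.to_pixels(tok).transpose(1, 2)     # (B, patch*patch, n_patches)
        return F.fold(patches, (H, W), kernel_size=self.patch, stride=self.patch)

model = TinyImg2ImgViT()
print(model(torch.randn(2, 1, 64, 64)).shape)             # torch.Size([2, 1, 64, 64])
```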
Citations: 0
ELOGOnet: Knowledge-enhanced local-global learning for cardiac diagnosis
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-21 | DOI: 10.1016/j.inffus.2026.104168
Yizhuo Feng, Beibei Wang, Zirui Wang, Ke Jiang, Peng Wang, Lidong Du, Xianxiang Chen, Pang Wu, Zhenfeng Li, Junxian Song, Libin Jiang, Zhen Fang
The diagnostic process of a human cardiologist is a holistic act of reasoning that seamlessly integrates two key components: (1) a synergistic analysis of the ECG signal itself, combining insights from both global rhythmic patterns and local morphologies; and (2) a prior-informed interpretation process that leverages internalized medical priors and external patient-specific information. However, existing deep learning models struggle to emulate this complex expert reasoning, often facing a dual dilemma: a failure to synergize local and global features within a unified framework, and a widespread neglect of valuable, low-cost prior knowledge sources like disease associations and patient metadata. To bridge this gap, we propose ELOGOnet, a novel deep learning framework designed to model the expert diagnostic workflow. Modeling the expert’s synergistic signal analysis, ELOGOnet employs a parallel hybrid architecture that integrates a State Space Model (SSM) for global rhythms and a CNN for local morphologies. Enabling a prior-informed interpretation, the framework incorporates two key innovations: an association loss that enhances clinical coherence by modeling disease comorbidity and mutual exclusivity, and an adaptive cross-gating module for the robust fusion of patient metadata. Extensive experiments on several mainstream public benchmarks demonstrate that ELOGOnet establishes a new state-of-the-art by achieving an average Macro-F1 of 63.8% across 8 multi-label tasks and consistently outperforming 16 competitive baselines, thereby setting a new performance benchmark for automated cardiac diagnosis from ECG.
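To illustrate what an association loss of this kind can look like, the sketch below penalizes co-activation of mutually exclusive label pairs and pulls comorbid pairs together on sigmoid outputs; the pair lists, functional form, and weighting are assumptions for illustration rather than ELOGOnet's exact objective.

```python
# Hedged sketch of an association-style auxiliary loss for multi-label outputs.
import torch

def association_loss(logits, comorbid_pairs, exclusive_pairs, weight=0.1):
    """logits: (batch, n_labels) raw multi-label outputs."""
    p = torch.sigmoid(logits)
    loss = logits.new_zeros(())
    for i, j in exclusive_pairs:          # discourage predicting both labels together
        loss = loss + (p[:, i] * p[:, j]).mean()
    for i, j in comorbid_pairs:           # encourage agreement of related labels
        loss = loss + (p[:, i] - p[:, j]).pow(2).mean()
    return weight * loss

logits = torch.randn(4, 8, requires_grad=True)
aux = association_loss(logits, comorbid_pairs=[(0, 1)], exclusive_pairs=[(2, 3)])
aux.backward()          # combines with the usual BCE term during training
print(float(aux))
```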
Citations: 0
Multimodal fusion of 3D point cloud and intraoperative imaging to enhance surgical robot navigation
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-21 | DOI: 10.1016/j.inffus.2026.104171
Yiheng Wang, Tianlun Wang, Tao Liu, Yihao Huang
To address the insufficient navigation accuracy of surgical robots in dynamic, complex, and non-rigid intraoperative environments, this paper proposes an enhanced multimodal fusion framework—EMF-RSN. This framework achieves spatial consistency between point clouds and intraoperative images through a depth-guided Geometry-Vision Alignment Module (GVAN), implements dynamic weighted fusion of geometric and visual features through a cross-modal attention fusion module (CAFM), and constructs a closed-loop optimization mechanism from perception to decision through a Task Feedback Optimization (TFO) module, thereby improving navigation accuracy and stability.
Experiments on the public dataset (Hamlyn) and the self-built simulation dataset (Sim-Surgical Fusion) demonstrate that EMF-RSN significantly outperforms existing methods in terms of geometric accuracy, semantic consistency, and task robustness. Compared to traditional registration algorithms, point cloud errors are reduced by approximately 50%, trajectory errors are reduced by over 20%, and real-time performance of 44 FPS is maintained even in complex deformation and occlusion environments. This research provides a new technical approach and model foundation for realizing intelligent surgical navigation that integrates virtual and real elements, and is of great significance for the perception and autonomous control of surgical robots.
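A minimal sketch of the cross-modal attention fusion step, under assumed shapes and sizes: image tokens query point-cloud tokens, and a learned gate weights geometric against visual features per token. The depth-guided alignment (GVAN) and the task feedback loop (TFO) are not modeled.

```python
# Cross-modal attention fusion between point-cloud (geometric) and image (visual) tokens.
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, img_tokens, pc_tokens):
        # img_tokens: (B, N_img, dim) visual features; pc_tokens: (B, N_pts, dim) geometry
        geo, _ = self.attn(query=img_tokens, key=pc_tokens, value=pc_tokens)
        g = self.gate(torch.cat([img_tokens, geo], dim=-1))   # per-token modality weight
        return g * geo + (1.0 - g) * img_tokens

fusion = CrossModalAttentionFusion()
out = fusion(torch.randn(2, 100, 64), torch.randn(2, 2048, 64))
print(out.shape)   # torch.Size([2, 100, 64])
```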
Citations: 0
Multimodal spatio-temporal fusion: A generalizable GCN-LSTM with attention framework for urban application
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1016/j.inffus.2026.104164
Yunfei Guo
The proliferation of urban big data presents unprecedented opportunities for understanding cities, yet the analytical methods to harness this data are often fragmented and domain-specific. Existing predictive models in urban computing are typically highly specialized, creating analytical silos that inhibit knowledge transfer and are difficult to adapt across domains such as public safety, housing and transport. This paper confronts this critical gap by developing a generalizable, multimodal spatio-temporal deep learning framework engineered for both high predictive performance and interpretability, which is capable of mastering diverse urban prediction tasks without architectural modification. The hybrid architecture fuses a Multi-Head Graph Convolutional Network (GCN) for spatial diffusion, a Long Short-Term Memory (LSTM) network for temporal dynamics, and a learnable Gating Mechanism that weights the influence of spatial graph versus static external features. To validate this generalizability, the framework was tested on three distinct urban domains in London: crime forecasting, housing price estimation and transport network demand. The model outperformed traditional baselines (ARIMA, XGBoost) and state-of-the-art deep learning models (TabNet, TFT). Moreover, the framework moves beyond prediction to explanation by incorporating attention mechanisms and permutation feature importance analysis.
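The sketch below shows one plausible wiring of the described pipeline: a single graph-convolution step for spatial diffusion, an LSTM over each node's sequence, and a sigmoid gate that weights graph-derived features against projected static covariates; layer sizes and the single graph step are illustrative assumptions, not the paper's configuration.

```python
# Hedged GCN -> LSTM -> gated-fusion sketch for node-level urban forecasting.
import torch
import torch.nn as nn

class GCNLSTMGate(nn.Module):
    def __init__(self, in_dim, static_dim, hid=32):
        super().__init__()
        self.gcn_w = nn.Linear(in_dim, hid)          # one graph-conv step: A_hat X W
        self.lstm = nn.LSTM(hid, hid, batch_first=True)
        self.static_proj = nn.Linear(static_dim, hid)
        self.gate = nn.Sequential(nn.Linear(2 * hid, hid), nn.Sigmoid())
        self.head = nn.Linear(hid, 1)

    def forward(self, x, adj, static_feats):
        # x: (B, T, N, in_dim) node sequences, adj: (N, N) normalized adjacency,
        # static_feats: (B, N, static_dim) external covariates per node
        h = torch.relu(torch.einsum('nm,btmf->btnf', adj, self.gcn_w(x)))  # spatial diffusion
        B, T, N, H = h.shape
        temporal, _ = self.lstm(h.permute(0, 2, 1, 3).reshape(B * N, T, H))
        temporal = temporal[:, -1].reshape(B, N, H)                        # last step per node
        s = self.static_proj(static_feats)
        g = self.gate(torch.cat([temporal, s], dim=-1))                    # learnable weighting
        return self.head(g * temporal + (1 - g) * s).squeeze(-1)           # (B, N) forecasts

model = GCNLSTMGate(in_dim=4, static_dim=6)
pred = model(torch.randn(2, 12, 30, 4), torch.eye(30), torch.randn(2, 30, 6))
print(pred.shape)   # torch.Size([2, 30])
```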
Citations: 0
Code-driven programming prediction enhanced by LLM with a feature fusion approach
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1016/j.inffus.2026.104165
Shengyingjie Liu, Jianxin Li, Qian Wan, Bo He, Zhijun Huang, Qing Li
Programming education is essential for equipping individuals with digital literacy skills and developing the problem-solving abilities necessary for success in the modern workforce. In online programming tutoring systems, knowledge tracing (KT) techniques are crucial for programming prediction, as they monitor user performance and model user cognition. However, both universal and programming-specific knowledge transfer methods depend on traditional state-driven paradigms that indirectly predict programming outcomes based on users’ knowledge states. This does not align with the core objective of programming prediction, which is to determine whether submitted code can solve the question. To address this, we present the code-driven feature fusion KT (CFKT), which integrates large language models (LLMs) and encoders for both individualized and common code features. It consists of two modules: pass prediction and code prediction. The pass prediction module leverages an LLM to incorporate semantic information from the question and code through embedding, extracting key features that determine code correctness through proxy tasks and effectively narrowing the solution space with vectorization. The code prediction module integrates user historical data and data from other users through feature fusion blocks, allowing for accurate predictions of submitted code and effectively mitigating the cold start problem. Experiments on multiple real-world public programming datasets demonstrate that CFKT significantly outperforms existing baseline methods.
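As a toy illustration of the pass-prediction module's fusion step, the sketch below concatenates question and code embeddings (random placeholders standing in for LLM embeddings) with a user-history vector and maps them to a pass probability; every name and dimension here is assumed for illustration, not CFKT's actual design.

```python
# Hedged sketch: fuse question/code embeddings with user history to predict pass probability.
import torch
import torch.nn as nn

class PassPredictor(nn.Module):
    def __init__(self, text_dim=768, hist_dim=32, hid=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * text_dim + hist_dim, hid), nn.ReLU(),
            nn.Linear(hid, 1),
        )

    def forward(self, question_emb, code_emb, user_history):
        z = torch.cat([question_emb, code_emb, user_history], dim=-1)
        return torch.sigmoid(self.fuse(z)).squeeze(-1)   # probability the submission passes

# Placeholder embeddings; a real system would obtain these from an LLM encoder.
q_emb, c_emb, hist = torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 32)
print(PassPredictor()(q_emb, c_emb, hist).shape)   # torch.Size([4])
```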
Citations: 0
A novel knowledge distillation method for graph neural networks with gradient mapping and fusion
IF 15.5 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1016/j.inffus.2026.104163
Kang Liu, Shunzhi Yang, Chang-Dong Wang, Yunwen Chen, Zhenhua Huang
The primary goal of graph knowledge distillation (GKD) is to transfer knowledge from a complex graph neural network (GNN) teacher to a smaller, yet more efficient GNN or multi-layer perceptron student. Although existing methods address network scalability, they rely on a frozen teacher that fails to explain how to derive results, thus limiting performance and hindering the improvement of a student. Therefore, we propose a novel GKD method, termed Dynamic Gradient Distillation (DGD), consisting of Generative Adversarial Imitation Learning (GAIL)-based Gradient Mapping and Two-Stage Gradient Fusion modules. The former builds the teacher’s learning process to understand knowledge by drawing on the principle of GAIL. The latter consists of attention fusion and weighted bias operations. Through the attentional fusion operation, it captures and fuses the responses of the teacher to change the gradient of the student at each layer. The fused gradients are then updated by combining them with the student’s backpropagated gradients using the weighted bias operation. DGD allows the student to inherit and extend the teacher’s learning process efficiently. Extensive experiments conducted with seven publicly available datasets show that DGD could significantly outperform some existing methods in node classification tasks. Our code and data are released at https://github.com/KangL-G/Dynamic-Gradient-Distillation.
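The sketch below illustrates the general idea of fusing teacher-derived gradients with the student's own task gradients before the optimizer step; DGD's GAIL-based gradient mapping and per-layer attention fusion are replaced by a single scalar mixing weight, and the linear models are placeholders for GNN/MLP teacher and student.

```python
# Hedged sketch: mix task-loss gradients with teacher-guided (distillation) gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Linear(16, 4)
teacher = nn.Linear(16, 4)           # stands in for a frozen, pretrained teacher network
opt = torch.optim.SGD(student.parameters(), lr=0.1)
alpha = 0.5                           # fusion weight between task and teacher gradients

x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
task_loss = F.cross_entropy(student(x), y)
with torch.no_grad():
    t_logits = teacher(x)
kd_loss = F.kl_div(F.log_softmax(student(x), dim=-1),
                   F.softmax(t_logits, dim=-1), reduction='batchmean')

params = list(student.parameters())
g_task = torch.autograd.grad(task_loss, params)    # student's own backpropagated gradients
g_teach = torch.autograd.grad(kd_loss, params)     # gradients shaped by the teacher's outputs
for p, gt, gk in zip(params, g_task, g_teach):
    p.grad = (1 - alpha) * gt + alpha * gk          # fused gradient drives the update
opt.step()
```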
Citations: 0