MulMoSenT: Multimodal Sentiment Analysis for a Low-Resource Language Using Textual-Visual Cross-Attention and Fusion
Sadia Afroze, Md Rajib Hossain, Mohammed Moshiul Hoque, Nazmul Siddique
Pub Date: 2026-01-15 | DOI: 10.1016/j.inffus.2026.104129
{"title":"GeoCraft: A Diffusion Model-Based 3D Reconstruction Method Driven by Image and Point Cloud Fusion","authors":"Weixuan Ma, Yamin Li, Chujin Liu, Hao Zhang, Jie Li, Kansong Chen, Weixuan Gao","doi":"10.1016/j.inffus.2026.104149","DOIUrl":"https://doi.org/10.1016/j.inffus.2026.104149","url":null,"abstract":"","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"55 1","pages":""},"PeriodicalIF":18.6,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145961755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SymUnet-DynCFC: Multimodal MRI fusion for robust cartilage segmentation and clinically confirmed moderate-to-severe KOA diagnosis
Li Li, Jianbing Ma, Beiji Zou, Hao Xu, Shenghui Liao, Wenyi Xiong, Liqiang Zhi
Pub Date: 2026-01-12 | DOI: 10.1016/j.inffus.2026.104145 | Information Fusion, Vol. 130, Article 104145
Knee osteoarthritis (KOA) is a globally prevalent degenerative joint disorder. A central challenge in its automated diagnosis is fusing multimodal MRI data efficiently, so as to improve the accuracy and generalizability of clinical cartilage segmentation while minimizing healthcare resource consumption. To this end, this study introduces dynamic confidence fuzzy control (DynCFC) within a symmetric UNet architecture (SymUnet), referred to as SymUnet-DynCFC, designed to improve the accuracy and robustness of cartilage segmentation. First, the SymUnet architecture is developed with separate inputs for the T1W and T2W modalities to enable comprehensive segmentation evaluation. Second, the DynCFC mechanism computes an optimal weighting for each modality, enabling the fusion and optimization of multimodal features. Finally, SymUnet-DynCFC is evaluated on clinical datasets from a multi-campus hospital system. Experimental results show that it outperforms the baselines, with mean Dice, IoU, and HD95 values of 87.96%, 79.93%, and 1.29, respectively, and exhibits improved robustness over the baseline methods. This may facilitate automated cartilage segmentation in clinical workflows and could support the assessment of moderate-to-severe KOA by flagging outlier metrics.
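Implementation details of DynCFC are not given in this abstract. Purely as an illustrative sketch of the general pattern (not the authors' method), the PyTorch snippet below derives a softmax-normalized confidence weight for each modality's feature map (e.g., T1W and T2W encoder outputs) and fuses them by a weighted sum; the module name, scoring heads, and tensor shapes are all assumptions.

```python
# Minimal sketch (not the paper's DynCFC): fuse two modality feature maps
# with learned, softmax-normalized per-modality confidence weights.
# Module names, scoring-head design, and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class ConfidenceWeightedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # One tiny scoring head per modality: feature map -> scalar confidence logit.
        self.score_t1 = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1)
        )
        self.score_t2 = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 1)
        )

    def forward(self, f_t1: torch.Tensor, f_t2: torch.Tensor) -> torch.Tensor:
        # Per-sample confidence logits, normalized across the two modalities.
        logits = torch.cat([self.score_t1(f_t1), self.score_t2(f_t2)], dim=1)  # (B, 2)
        w = torch.softmax(logits, dim=1).view(-1, 2, 1, 1, 1)                  # (B, 2, 1, 1, 1)
        stacked = torch.stack([f_t1, f_t2], dim=1)                             # (B, 2, C, H, W)
        return (w * stacked).sum(dim=1)                                        # (B, C, H, W)


if __name__ == "__main__":
    fuse = ConfidenceWeightedFusion(channels=64)
    t1 = torch.randn(2, 64, 32, 32)
    t2 = torch.randn(2, 64, 32, 32)
    print(fuse(t1, t2).shape)  # torch.Size([2, 64, 32, 32])
```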
{"title":"SymUnet-DynCFC: Multimodal MRI fusion for robust cartilage segmentation and clinically confirmed moderate-to-severe KOA diagnosis","authors":"Li Li , Jianbing Ma , Beiji Zou , Hao Xu , Shenghui Liao , Wenyi Xiong , Liqiang Zhi","doi":"10.1016/j.inffus.2026.104145","DOIUrl":"10.1016/j.inffus.2026.104145","url":null,"abstract":"<div><div>Knee osteoarthritis (KOA) is a globally prevalent degenerative joint disorder. A central challenge in its automated diagnosis is the efficient fusion of multimodal MRI data. This fusion aims to enhance the accuracy and generalizability of clinical cartilage segmentation, while simultaneously minimizing healthcare resource consumption. Therefore, this study introduces dynamic confidence fuzzy control (DynCFC) within the symmetric unet architecture (SymUnet), referred to as SymUnet-DynCFC, which is designed to enhance the accuracy and robustness of cartilage segmentation. Firstly, the SymUnet architecture is developed, with separate inputs from T1W and T2W modalities to facilitate comprehensive segmentation evaluation. Secondly, the DynCFC mechanism is implemented to compute the optimal weighting for each modality, enabling the fusion and optimization of multimodal features. Finally, the performance of the proposed SymUnet-DynCFC method is evaluated on clinical datasets from a multi-campus hospital system. Experimental results show that SymUnet-DynCFC achieves better segmentation performance than the baselines, with mean Dice, IoU, and HD95 values of 87.96 %, 79.93 %, and 1.29, respectively. In particular, SymUnet-DynCFC exhibits improved robustness compared to the baseline methods. This may facilitate automated cartilage segmentation in clinical workflows and could support the assessment of moderate-to-severe KOA by detecting outlier metrics.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"130 ","pages":"Article 104145"},"PeriodicalIF":15.5,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145957304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data science: a natural ecosystem
Emilio Porcu, Roy El Moukari, Laurent Najman, Francisco Herrera, Horst Simon
Pub Date: 2026-01-12 | DOI: 10.1016/j.inffus.2025.104113 | Information Fusion, Vol. 130, Article 104113
This manuscript provides a systemic and data-centric view of what we term essential data science: a natural ecosystem whose challenges and missions stem from fusing the data universe, with its multiple combinations of the 5D complexities (data structure, domain, cardinality, causality, and ethics), with the phases of the data life cycle. Data agents perform tasks driven by specific goals. The data scientist is an abstract entity arising from the logical organization of data agents and their actions. Data scientists face challenges that are defined according to their missions. We define specific discipline-induced data science, which in turn allows for the definition of pan-data science, a natural ecosystem that integrates specific disciplines with essential data science. We semantically split essential data science into a computational and a foundational part. By formalizing this ecosystemic view, we contribute a general-purpose, fusion-oriented architecture for integrating heterogeneous knowledge, agents, and workflows, relevant to a wide range of disciplines and high-impact applications.
{"title":"Data science: a natural ecosystem","authors":"Emilio Porcu , Roy El Moukari , Laurent Najman , Francisco Herrera , Horst Simon","doi":"10.1016/j.inffus.2025.104113","DOIUrl":"10.1016/j.inffus.2025.104113","url":null,"abstract":"<div><div>This manuscript provides a systemic and data-centric view of what we term <em>essential</em> data science, as a <em>natural</em> ecosystem with challenges and missions stemming from the fusion of data universe with its multiple combinations of the 5D complexities (data structure, domain, cardinality, causality, and ethics) with the phases of the data life cycle. Data agents perform tasks driven by specific <em>goals</em>. The data scientist is an abstract entity that comes from the logical organization of data agents with their actions. Data scientists face challenges that are defined according to the <em>missions</em>. We define specific discipline-induced data science, which in turn allows for the definition of <em>pan</em>-data science, a natural ecosystem that integrates specific disciplines with the essential data science. We semantically split the essential data science into computational, and foundational. By formalizing this ecosystemic view, we contribute a general-purpose, fusion-oriented architecture for integrating heterogeneous knowledge, agents, and workflows-relevant to a wide range of disciplines and high-impact applications.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"130 ","pages":"Article 104113"},"PeriodicalIF":15.5,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145957302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arbitrary-Scale Spatial–Spectral Fusion using Kernel Integral and Progressive Resampling
Wei Li, Honghui Xu, Yueqian Quan, Zhe Chen, Jianwei Zheng
Pub Date: 2026-01-12 | DOI: 10.1016/j.inffus.2026.104143
Benefiting from booming deep learning techniques, spatial-spectral fusion (SSF) is regarded as an appealing alternative to acquiring hyperspectral images (HSI) with costly devices. Despite this remarkable progress, current solutions require training and storing separate models for different scaling factors. To overcome this dilemma, we propose a spatial–spectral fusion neural operator (SFNO) that performs arbitrary-scale SSF within the operator learning framework. Specifically, SFNO approaches the problem from the perspective of approximation theory by embedding the features of two degraded functions into a high-dimensional latent space through pointwise convolution layers, thereby capturing richer spectral feature information. The mapping between function spaces is then approximated via the Galerkin integral (GI) mechanism, followed by a final dimensionality-reduction step that produces a high-resolution HSI. Moreover, we propose a progressive resampling integration (PR) that resamples the integrand's domain in the triple kernel integration to provide non-local multi-scale information. The synergistic action of both integration mechanisms enables SFNO to handle magnification factors never encountered during training. Extensive experiments on the CAVE, Chikusei, Pavia Centre, Harvard, and real-world datasets demonstrate that SFNO delivers substantial improvements over existing state-of-the-art methods. In particular, under the 8× upsampling setting on the CAVE, Chikusei, and Pavia Centre datasets, SFNO surpasses the second-best model by 0.56 dB, 1.05 dB, and 0.72 dB in PSNR, respectively. Our code is publicly available at https://github.com/weili419/SFNO.
Fusing time- and frequency-domain information for effort-independent lung function evaluation using oscillometry
Sunxiaohe Li, Dongfang Zhao, Zirui Wang, Hao Zhang, Pang Wu, Zhenfeng Li, Lidong Du, Xianxiang Chen, Hongtao Niu, Xiaopan Li, Jingen Xia, Ting Yang, Peng Wang, Zhen Fang
Pub Date: 2026-01-11 | DOI: 10.1016/j.inffus.2026.104147
Current methods for evaluating lung function require substantial patient cooperation and rigorous quality control. In contrast, impulse oscillometry (IOS) is a promising alternative that can measure lung mechanics with minimal patient effort and operational ease. IOS applies pressure oscillations to the airways and analyzes the resulting signals. However, previous studies on IOS have been limited to frequency-domain features derived from its response signals, neglecting valuable time-domain information. To bridge this gap, we developed a deep learning model that fuses time- and frequency-domain IOS data for lung function evaluation. An internal dataset (2,702 cases) and an external dataset (335 cases) were retrospectively collected for model training and validation. Model performance was first evaluated through ablation studies and then tested across different demographic subgroups. Finally, Grad-CAM was employed to improve model interpretability. Results showed that our model accurately predicted lung function parameters, including FEV1/FVC (mean absolute errors [MAEs] of 3.78% and 4.33%), FEV1 (MAEs of 0.235 and 0.270 L), and FVC (MAEs of 0.264 and 0.315 L), in the internal and external validation sets. The model also demonstrated strong performance in respiratory disease prescreening, achieving AUCs of 0.989 and 0.980 with sensitivities of 73.97% and 71.47% for detecting airway obstruction, and AUCs of 0.938 and 0.925 with sensitivities of 76.41% and 66.24% for classifying four ventilation patterns across the two sets. By fusing time- and frequency-domain IOS data, this study offers a new strategy for pulmonary function evaluation, facilitating more efficient prescreening for pulmonary diseases.