
Information Fusion: Latest Publications

Review of multimodal machine learning approaches in healthcare
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-14 | DOI: 10.1016/j.inffus.2024.102690

Machine learning methods in healthcare have traditionally focused on using data from a single modality, limiting their ability to effectively replicate the clinical practice of integrating multiple sources of information for improved decision making. Clinicians typically rely on a variety of data sources, including patients’ demographic information, laboratory data, vital signs and various imaging modalities, to make informed decisions and contextualise their findings. Recent advances in machine learning have facilitated the more efficient incorporation of multimodal data, resulting in applications that better represent the clinician’s approach. Here, we provide an overview of multimodal machine learning approaches in healthcare, encompassing the data modalities commonly used in clinical diagnosis, such as imaging, text, time series and tabular data. We discuss key stages of model development, including pre-training, fine-tuning and evaluation. Additionally, we explore common data fusion approaches used in modelling, highlighting their advantages and performance challenges. We provide an overview of 17 multimodal clinical datasets, with a detailed description of the specific data modalities used in each. More than 50 studies are reviewed, with a predominant focus on the integration of imaging and tabular data. While multimodal techniques have shown potential to improve predictive accuracy across many healthcare areas, our review highlights that the effectiveness of a method is contingent on the specific data and task at hand.
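
To make the fusion taxonomy concrete, here is a minimal PyTorch sketch, written for this listing rather than taken from any reviewed study, that contrasts early (feature-level) and late (decision-level) fusion of an imaging branch and a tabular branch; all dimensions, layers, and class counts are illustrative assumptions.

```python
# Minimal sketch (not from the paper) contrasting two common fusion
# strategies the review discusses: early (feature-level) and late
# (decision-level) fusion of imaging and tabular inputs.
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    def __init__(self, img_dim=512, tab_dim=32, n_classes=2):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU())
        self.tab_enc = nn.Sequential(nn.Linear(tab_dim, 128), nn.ReLU())
        # Concatenate modality features, then classify jointly.
        self.head = nn.Linear(256, n_classes)

    def forward(self, img_feat, tab_feat):
        z = torch.cat([self.img_enc(img_feat), self.tab_enc(tab_feat)], dim=-1)
        return self.head(z)

class LateFusion(nn.Module):
    def __init__(self, img_dim=512, tab_dim=32, n_classes=2):
        super().__init__()
        # Each modality gets its own classifier; predictions are averaged.
        self.img_head = nn.Linear(img_dim, n_classes)
        self.tab_head = nn.Linear(tab_dim, n_classes)

    def forward(self, img_feat, tab_feat):
        return 0.5 * (self.img_head(img_feat) + self.tab_head(tab_feat))

img, tab = torch.randn(4, 512), torch.randn(4, 32)
print(EarlyFusion()(img, tab).shape, LateFusion()(img, tab).shape)
```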

{"title":"Review of multimodal machine learning approaches in healthcare","authors":"","doi":"10.1016/j.inffus.2024.102690","DOIUrl":"10.1016/j.inffus.2024.102690","url":null,"abstract":"<div><p>Machine learning methods in healthcare have traditionally focused on using data from a single modality, limiting their ability to effectively replicate the clinical practice of integrating multiple sources of information for improved decision making. Clinicians typically rely on a variety of data sources including patients’ demographic information, laboratory data, vital signs and various imaging data modalities to make informed decisions and contextualise their findings. Recent advances in machine learning have facilitated the more efficient incorporation of multimodal data, resulting in applications that better represent the clinician’s approach. Here, we provide an overview of multimodal machine learning approaches in healthcare, encompassing various data modalities commonly used in clinical diagnoses, such as imaging, text, time series and tabular data. We discuss key stages of model development, including pre-training, fine-tuning and evaluation. Additionally, we explore common data fusion approaches used in modelling, highlighting their advantages and performance challenges. An overview is provided of 17 multimodal clinical datasets with detailed description of the specific data modalities used in each dataset. Over 50 studies have been reviewed, with a predominant focus on the integration of imaging and tabular data. While multimodal techniques have shown potential in improving predictive accuracy across many healthcare areas, our review highlights that the effectiveness of a method is contingent upon the specific data and task at hand.</p></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":null,"pages":null},"PeriodicalIF":14.7,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1566253524004688/pdfft?md5=c13f0b2819a78d412d45575c042d7e61&pid=1-s2.0-S1566253524004688-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142240687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multimodal fusion for large-scale traffic prediction with heterogeneous retentive networks
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-13 | DOI: 10.1016/j.inffus.2024.102695

Traffic speed prediction is a critical challenge in transportation research due to the complex spatiotemporal dynamics of urban mobility. This study proposes a novel framework for fusing diverse data modalities to enhance short-term traffic speed forecasting accuracy. We introduce the Heterogeneous Retentive Network (H-RetNet), which integrates multisource urban data into high-dimensional representations encoded with geospatial relationships. By combining the H-RetNet with a Gated Recurrent Unit (GRU), our model captures intricate spatial and temporal correlations. We validate the approach using a real-world Beijing traffic dataset encompassing social media, real estate, and point of interest data. Experiments demonstrate superior performance over existing methods, with the fusion architecture improving robustness. Specifically, we observe a 21.91% reduction in MSE, underscoring the potential of our framework to inform and enhance traffic management strategies.
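
As a rough structural sketch of the pattern the abstract describes (per-source encoding, fusion, then a GRU over time), the snippet below substitutes a plain linear fusion for the actual retention-based H-RetNet; every module name and dimension here is an assumption, not the published architecture.

```python
# Hedged sketch of the overall pattern only: encode multisource features
# per time step, fuse them, and let a GRU model the temporal dynamics.
# The real H-RetNet uses retention-based mixing; a linear sum stands in.
import torch
import torch.nn as nn

class FusionGRUPredictor(nn.Module):
    def __init__(self, src_dims=(16, 8, 4), hidden=64):
        super().__init__()
        # One encoder per urban data source (e.g. social media, real estate, POI).
        self.encoders = nn.ModuleList([nn.Linear(d, hidden) for d in src_dims])
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)  # next-step traffic speed

    def forward(self, sources):
        # sources: list of (batch, time, src_dim) tensors, one per source
        fused = sum(enc(x) for enc, x in zip(self.encoders, sources))
        h, _ = self.gru(fused)
        return self.out(h[:, -1])  # predict from the last time step

batch, time = 2, 12
xs = [torch.randn(batch, time, d) for d in (16, 8, 4)]
print(FusionGRUPredictor()(xs).shape)  # -> torch.Size([2, 1])
```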

Citations: 0
Artificial intelligence-based suicide prevention and prediction: A systematic review (2019–2023)
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-12 | DOI: 10.1016/j.inffus.2024.102673

Suicide is a major global public health concern, and the application of artificial intelligence (AI) methods, such as natural language processing (NLP), machine learning (ML), and deep learning (DL), has shown promise in advancing suicide prediction and prevention efforts. Recent advancements in AI, particularly NLP and DL, have opened up new avenues of research in suicide prediction and prevention. While several papers have reviewed specific detection techniques such as NLP or DL, no recent study has acted as a one-stop shop providing a comprehensive overview of all AI-based studies in this field. In this work, we conduct a systematic literature review to identify relevant studies published between 2019 and 2023, resulting in the inclusion of 156 studies. We provide a comprehensive overview of the current state of research on AI-driven suicide prevention and prediction, focusing on the different data types and AI techniques employed. We discuss the benefits and challenges of these approaches and propose future research directions to improve the practical application of AI in suicide research. AI is well suited to improving the accuracy and efficiency of risk assessment, enabling personalized interventions, and enhancing our understanding of risk and protective factors. Multidisciplinary approaches combining diverse data sources and AI methods can help identify individuals at risk by analyzing social media content, patient histories, and data from mobile devices, enabling timely intervention. However, challenges related to data privacy, algorithmic bias, model interpretability, and real-world implementation must be addressed to realize the full potential of these technologies. Future research should focus on integrating prediction and prevention strategies, harnessing multimodal data, and expanding the scope to include diverse populations. Collaboration across disciplines and stakeholders is essential to ensure that AI-driven suicide prevention and prediction efforts are ethical, culturally sensitive, and person-centered.
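
For a sense of the kind of NLP baseline the surveyed studies build on, here is a deliberately minimal, hypothetical text-classification sketch (TF-IDF plus logistic regression); the toy sentences and labels are fabricated placeholders, and nothing like this should be read as a clinical tool.

```python
# Illustrative sketch only: a minimal text-classification pipeline of the
# kind many surveyed NLP studies extend. The toy labels below are
# fabricated placeholders, not clinical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["feeling hopeful after therapy", "I can't see a way forward",
         "great day with friends", "nothing matters anymore"]
labels = [0, 1, 0, 1]  # 1 = flag for human review (toy example)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["no way forward"]))  # a screening aid, never a diagnosis
```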

{"title":"Artificial intelligence-based suicide prevention and prediction: A systematic review (2019–2023)","authors":"","doi":"10.1016/j.inffus.2024.102673","DOIUrl":"10.1016/j.inffus.2024.102673","url":null,"abstract":"<div><p>Suicide is a major global public health concern, and the application of artificial intelligence (AI) methods, such as natural language processing (NLP), machine learning (ML), and deep learning (DL), has shown promise in advancing suicide prediction and prevention efforts. Recent advancements in AI – particularly NLP and DL have opened up new avenues of research in suicide prediction and prevention. While several papers have reviewed specific detection techniques like NLP or DL, there has been no recent study that acts as a one-stop-shop, providing a comprehensive overview of all AI-based studies in this field. In this work, we conduct a systematic literature review to identify relevant studies published between 2019 and 2023, resulting in the inclusion of 156 studies. We provide a comprehensive overview of the current state of research conducted on AI-driven suicide prevention and prediction, focusing on different data types and AI techniques employed. We discuss the benefits and challenges of these approaches and propose future research directions to improve the practical application of AI in suicide research. AI is highly capable of improving the accuracy and efficiency of risk assessment, enabling personalized interventions, and enhancing our understanding of risk and protective factors. Multidisciplinary approaches combining diverse data sources and AI methods can help identify individuals at risk by analyzing social media content, patient histories, and data from mobile devices, enabling timely intervention. However, challenges related to data privacy, algorithmic bias, model interpretability, and real-world implementation must be addressed to realize the full potential of these technologies. Future research should focus on integrating prediction and prevention strategies, harnessing multimodal data, and expanding the scope to include diverse populations. Collaboration across disciplines and stakeholders is essential to ensure that AI-driven suicide prevention and prediction efforts are ethical, culturally sensitive, and person-centered.</p></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":null,"pages":null},"PeriodicalIF":14.7,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142240680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
High performance RGB-Thermal Video Object Detection via hybrid fusion with progressive interaction and temporal-modal difference
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-12 | DOI: 10.1016/j.inffus.2024.102665

RGB-Thermal Video Object Detection (RGBT VOD) aims to localize and classify predefined objects in visible and thermal spectrum videos. The key issue in RGBT VOD lies in integrating multi-modal information effectively to improve detection performance. Current multi-modal fusion methods predominantly employ middle fusion strategies, but the inherent modal difference directly influences the effect of multi-modal fusion. Although the early fusion strategy reduces the modality gap in the middle stage of the network, achieving in-depth feature interaction between different modalities remains challenging. In this work, we propose a novel hybrid fusion network called PTMNet, which effectively combines an early fusion strategy with progressive interaction and a middle fusion strategy with temporal-modal difference, for high-performance RGBT VOD. In particular, we take each modality as a master modality and achieve early fusion with the other modality as auxiliary information through progressive interaction. Such a design not only alleviates the modality gap but also facilitates middle fusion. The temporal-modal difference models temporal information through spatial offsets and utilizes feature erasure between modalities to motivate the network to focus on objects shared by both modalities. The hybrid fusion achieves high detection accuracy using only three input frames, which gives PTMNet a high inference speed. Experimental results show that our approach achieves state-of-the-art performance on the VT-VOD50 dataset while operating at over 70 FPS. The code will be freely released at https://github.com/tzz-ahu for academic purposes.
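
The snippet below is a loose, hypothetical rendering of the "each modality as master, the other as auxiliary" early-fusion idea, using a simple gated sum; PTMNet's actual progressive-interaction and temporal-modal-difference blocks are not reproduced.

```python
# Loose sketch of the "each modality as master" early-fusion idea: RGB and
# thermal each fold in the other as auxiliary context via a learned gate.
# The mixing here is a simple gated sum; PTMNet's real blocks differ.
import torch
import torch.nn as nn

class ProgressiveInteraction(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.gate_rgb = nn.Conv2d(2 * ch, ch, kernel_size=1)
        self.gate_th = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, rgb, thermal):
        # Gate how much auxiliary signal each master modality absorbs.
        g_r = torch.sigmoid(self.gate_rgb(torch.cat([rgb, thermal], dim=1)))
        g_t = torch.sigmoid(self.gate_th(torch.cat([thermal, rgb], dim=1)))
        rgb_out = rgb + g_r * thermal   # thermal assists master RGB
        th_out = thermal + g_t * rgb    # RGB assists master thermal
        return rgb_out, th_out

r, t = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
out_r, out_t = ProgressiveInteraction()(r, t)
print(out_r.shape, out_t.shape)
```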

Citations: 0
Scalable data fusion via a scale-based hierarchical framework: Adapting to multi-source and multi-scale scenarios
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-12 | DOI: 10.1016/j.inffus.2024.102694

Multi-source information fusion addresses the challenges of integrating and transforming complementary data from diverse sources to facilitate unified information representation for centralized knowledge discovery. However, traditional methods face difficulties when applied to multi-scale data: optimal scale selection can effectively resolve these issues, but it typically cannot identify the optimal and simplest data across different data-source relationships. Moreover, in multi-source, multi-scale environments, heterogeneous data, where identical samples have different features and scales in different sources, frequently arise. To address these challenges, this study proposes a novel approach with two key stages: first, aggregating heterogeneous data sources and refining datasets using information gain; second, employing a customized Scale-based Tree (SbT) structure for each attribute to extract scale-specific information values, thereby achieving effective data fusion. Extensive experimental evaluations cover ten different datasets, reporting detailed performance across multiple metrics, including Approximation Precision (AP), Approximation Quality (AQ), classification accuracy, and computational efficiency. The results highlight the robustness and effectiveness of the proposed algorithm in handling complex multi-source, multi-scale data environments, demonstrating its potential and practicality for real-world data fusion challenges.
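
The first stage above relies on information gain to refine aggregated sources; as a self-contained illustration of that measure (not the paper's scale-based tree stage), the toy numpy sketch below scores discrete attributes against a decision label.

```python
# Toy sketch of the information-gain scoring used in the first stage:
# rank attributes from aggregated heterogeneous sources by how much they
# reduce label entropy, so lower-value columns can be filtered out.
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(x, y):
    # Gain = H(y) - sum_v P(x=v) * H(y | x=v), for a discrete attribute x.
    gain = entropy(y)
    for v in np.unique(x):
        mask = x == v
        gain -= mask.mean() * entropy(y[mask])
    return gain

X = np.array([[0, 1], [0, 0], [1, 1], [1, 0]])  # two discrete attributes
y = np.array([0, 0, 1, 1])
print([information_gain(X[:, j], y) for j in range(X.shape[1])])  # [1.0, 0.0]
```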

Citations: 0
Tensor-based unsupervised feature selection for error-robust handling of unbalanced incomplete multi-view data
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-12 | DOI: 10.1016/j.inffus.2024.102693

Recent advancements in multi-view unsupervised feature selection (MUFS) have been notable, yet two primary challenges persist. First, real-world datasets frequently consist of unbalanced incomplete multi-view data, a scenario not adequately addressed by current MUFS methodologies. Second, the inherent complexity and heterogeneity of multi-view data often introduce significant noise, an aspect largely neglected by existing approaches, compromising their noise robustness. To tackle these issues, this paper introduces a Tensor-Based Error Robust Unbalanced Incomplete Multi-view Unsupervised Feature Selection (TERUIMUFS) strategy. The proposed MUFS framework specifically caters to unbalanced incomplete multi-view data, incorporating self-representation learning with a tensor low-rank constraint and sample diversity learning. This approach not only mitigates errors in the self-representation process but also corrects errors in the self-representation tensor, significantly enhancing the model’s resilience to noise. Furthermore, graph learning serves as a pivotal link between MUFS and self-representation learning. An innovative iterative optimization algorithm is developed for TERUIMUFS, complete with a thorough analysis of its convergence and computational complexity. Experimental results demonstrate TERUIMUFS’s effectiveness and competitiveness in addressing unbalanced incomplete multi-view unsupervised feature selection (UIMUFS), marking a significant advancement in the field.
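
As a hedged orientation for readers, a generic self-representation objective with a tensor low-rank penalty, of the family this paper belongs to but not its exact formulation, can be written as:

```latex
% Generic shape of a self-representation objective with a tensor low-rank
% penalty (illustrative, not the paper's exact formulation): each view
% X^{(v)} is reconstructed from itself through Z^{(v)}, the Z^{(v)} are
% stacked into a tensor \mathcal{Z} whose tensor nuclear norm is penalized,
% and a structured error term E^{(v)} absorbs view-specific noise.
\min_{\{Z^{(v)}, E^{(v)}\}_{v=1}^{V}}
  \sum_{v=1}^{V} \left\| X^{(v)} - X^{(v)} Z^{(v)} - E^{(v)} \right\|_F^2
  + \lambda_1 \left\| \mathcal{Z} \right\|_{*}
  + \lambda_2 \sum_{v=1}^{V} \left\| E^{(v)} \right\|_{2,1}
```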

Citations: 0
Unsupervised multi-view graph representation learning with dual weight-net
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-10 | DOI: 10.1016/j.inffus.2024.102669

Unsupervised multi-view graph representation learning (UMGRL) aims to capture the complex relationships in a multi-view graph without human annotations, and it has therefore been widely applied in real-world applications. However, existing UMGRL methods still face the following issues: (i) previous UMGRL methods tend to overlook the importance of nodes with different influences and of graphs with different relationships, so they may lose discriminative information from highly influential nodes and from graphs with important relationships; (ii) previous UMGRL methods generally ignore heterophilic edges in the multi-view graph, potentially introducing noise from different classes into node representations. To address these issues, we propose a novel bi-level optimization UMGRL framework with a dual weight-net. Specifically, the lower level optimizes the parameters of the encoders to obtain node representations of different graphs, while the upper level optimizes the parameters of the dual weight-net to adaptively and dynamically capture importance at the node, graph, and edge levels, thus obtaining discriminative fused representations for downstream tasks. Moreover, theoretical analysis demonstrates that the proposed method shows better generalization ability on downstream tasks than previous UMGRL methods. Extensive experiments on public datasets verify the effectiveness of the proposed method across different downstream tasks in comparison with numerous competing methods.
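
The sketch below shows only the bi-level alternation described above, with both objectives reduced to placeholder losses; the real encoders, weight-net, and losses are substantially more involved, so treat every name and objective here as an assumption.

```python
# Structural sketch of the bi-level loop only: the lower level updates an
# encoder under frozen importance weights, the upper level updates the
# weight-net against the frozen encoder. Both losses are placeholders.
import torch
import torch.nn as nn

encoder = nn.Linear(16, 8)                                  # lower level
weight_net = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())   # upper level
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_w = torch.optim.Adam(weight_net.parameters(), lr=1e-3)

x = torch.randn(32, 16)   # stand-in for multi-view node features
for step in range(100):
    # Lower level: fit representations under the current, frozen weights.
    z = encoder(x)
    w = weight_net(z).detach()
    loss_lower = (w * z.pow(2).sum(dim=1, keepdim=True)).mean()
    opt_enc.zero_grad(); loss_lower.backward(); opt_enc.step()

    # Upper level: adapt the weight-net against the (now frozen) encoder.
    z = encoder(x).detach()
    loss_upper = (weight_net(z) - 0.5).pow(2).mean()  # placeholder objective
    opt_w.zero_grad(); loss_upper.backward(); opt_w.step()
```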

Citations: 0
Evolving intra- and inter-session graph fusion for next item recommendation
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-10 | DOI: 10.1016/j.inffus.2024.102691

Next-item recommendation aims to predict users’ subsequent behaviors from their historical sequence data. However, sessions are often anonymous, short, and time-varying, making it challenging to capture accurate and evolving item representations. Existing methods using static graphs may fail to model the evolving semantics of items over time. To address this problem, we propose the Evolving Intra-session and Inter-session Graph Neural Network (EII-GNN), which captures evolving item semantics by fusing global and local graph information. EII-GNN utilizes a global dynamic graph to model inter-session item transitions and update item embeddings at each timestamp. It also constructs a per-session graph with shortcut edges to learn complex intra-session patterns. To personalize recommendations, a history-aware GRU incorporates the user’s past sessions. We fuse the inter-session graph, intra-session graph, and history embeddings to obtain the session representation for the final recommendation. Our model performed well against state-of-the-art counterparts in experiments on three real-world datasets.
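
As a minimal sketch of the final fusion step the abstract outlines, the snippet below concatenates stand-in inter-session, intra-session, and history embeddings into one session vector and scores candidate items by dot product; all components and dimensions are placeholders.

```python
# Minimal sketch of the final fusion step: combine the three embedding
# streams into one session vector, then rank items by dot product.
import torch
import torch.nn as nn

d, n_items = 32, 1000
fuse = nn.Linear(3 * d, d)
item_emb = nn.Embedding(n_items, d)

inter = torch.randn(1, d)   # from the global (inter-session) dynamic graph
intra = torch.randn(1, d)   # from the per-session graph with shortcut edges
hist = torch.randn(1, d)    # from the history-aware GRU over past sessions

session = fuse(torch.cat([inter, intra, hist], dim=-1))
scores = session @ item_emb.weight.T   # one score per candidate item
print(scores.topk(5).indices)          # top-5 next-item recommendation
```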

Citations: 0
Competitive resource allocation on a network considering opinion dynamics with self-confidence evolution
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-10 | DOI: 10.1016/j.inffus.2024.102680

The formation of public opinion is typically influenced by different stakeholders, such as governments and firms. Recently, various real-world problems related to the management of public opinion have emerged, requiring stakeholders to strategically allocate resources on networks to achieve their objectives. Addressing this requires considering the dynamics of opinion formation. Notably, in existing opinion dynamics models, individuals possess self-confidence parameters reflecting their adherence to their historical opinions. However, most extant studies assume that individuals’ self-confidence levels remain constant over time, which cannot accurately capture the intricacies of human behavior. In response to this gap, we first introduce a self-confidence evolution model that encompasses two influencing factors: the self-confidence levels of one's group mates and the passage of time. Furthermore, we present a social network DeGroot model with self-confidence evolution and conduct theoretical analyses. Moreover, we propose a game model to identify the optimal resource allocation strategies of players on a network. Finally, we provide sensitivity analyses, comparative studies, and a case study. This paper highlights the significance of incorporating self-confidence evolution into the process of opinion dynamics, and the results can provide valuable practical insights for players seeking to improve their resource allocation on a network to manage public opinion more effectively.
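
To make the modeling idea tangible, here is a toy numpy simulation of a DeGroot update in which each agent's self-weight evolves over time; the specific evolution rule below is an illustrative guess, not the paper's model.

```python
# Toy DeGroot simulation where self-confidence (the weight each agent puts
# on its own previous opinion) drifts over time. The evolution rule is an
# illustrative assumption, not the model proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n, steps = 5, 50
x = rng.random(n)                      # initial opinions in [0, 1]
c = np.full(n, 0.5)                    # initial self-confidence levels
A = rng.random((n, n))
np.fill_diagonal(A, 0.0)
A /= A.sum(axis=1, keepdims=True)      # row-stochastic neighbor weights

for t in range(steps):
    x = c * x + (1.0 - c) * (A @ x)    # DeGroot step with self-weight c
    # Illustrative evolution: confidence relaxes toward the group mean
    # (group-mate influence) and grows slightly with time, clipped to [0, 1].
    c = np.clip(0.9 * c + 0.1 * c.mean() + 0.001 * t, 0.0, 1.0)

print(np.round(x, 3))                  # opinions after 50 rounds
```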

Citations: 0
STSNet: A cross-spatial resolution multi-modal remote sensing deep fusion network for high resolution land-cover segmentation
IF 14.7 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-08 | DOI: 10.1016/j.inffus.2024.102689

Recently, deep learning models have found extensive application in high-resolution land-cover segmentation research. However, most current research still suffers from issues such as insufficient utilization of multi-modal information, which limits further improvement in segmentation accuracy. Moreover, differences in the size and spatial resolution of multi-modal datasets collectively pose challenges to multi-modal land-cover segmentation. Therefore, we propose a high-resolution land-cover segmentation network (STSNet) with cross-spatial-resolution spatio-temporal-spectral deep fusion. This network effectively utilizes spatio-temporal-spectral features to achieve information complementarity among multi-modal data. Specifically, STSNet consists of four components: (1) a high-resolution, multi-scale spatial-spectral encoder to jointly extract subtle spatial-spectral features from hyperspectral and high spatial resolution images; (2) a long-term spatio-temporal encoder, formulated by spectral convolution and a spatio-temporal transformer block, to simultaneously delineate the spatial, temporal and spectral information in dense time-series Sentinel-2 imagery; (3) a cross-resolution fusion module to alleviate the spatial resolution differences between multi-modal data and effectively leverage complementary spatio-temporal-spectral information; and (4) a multi-scale decoder that integrates multi-scale information from multi-modal data. We utilized airborne hyperspectral remote sensing imagery from the Shenyang region of China in 2020, with a spatial resolution of 1 m, 249 spectral bands, and a spectral resolution ≤ 5 nm, together with Sentinel-2 dense time-series images acquired in the same period with a spatial resolution of 10 m, 10 spectral bands, and 31 time steps. These datasets were combined to generate a multi-modal dataset called WHU-H2SR-MT, the first openly accessible large-scale high spatio-temporal-spectral satellite remote sensing dataset (i.e., with >2500 image pairs sized 300 m × 300 m each). Additionally, we employed two open-source datasets to validate the effectiveness of the proposed modules. Extensive experiments show that our multi-scale spatial-spectral encoder, spatio-temporal encoder, and cross-resolution fusion module outperform existing state-of-the-art (SOTA) algorithms in overall performance on high-resolution land-cover segmentation. The new multi-modal dataset will be made available at http://irsip.whu.edu.cn/resources/resources_en_v2.php, along with the corresponding code for accessing and utilizing the dataset at https://github.com/RS-Mage/STSNet.
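
A rough sketch of the cross-resolution fusion idea, upsampling 10 m time-series features to the 1 m grid before merging, is given below; STSNet's actual module is more elaborate, and all channel counts here are assumptions.

```python
# Rough sketch of cross-resolution fusion: bring coarse (10 m) time-series
# features up to the fine (1 m) grid, then mix the two feature stacks.
# STSNet's real module is more elaborate; channel counts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossResolutionFusion(nn.Module):
    def __init__(self, hi_ch=64, lo_ch=32):
        super().__init__()
        self.mix = nn.Conv2d(hi_ch + lo_ch, hi_ch, kernel_size=3, padding=1)

    def forward(self, hi, lo):
        # hi: (B, hi_ch, H, W) at 1 m; lo: (B, lo_ch, h, w) at 10 m.
        lo_up = F.interpolate(lo, size=hi.shape[-2:], mode="bilinear",
                              align_corners=False)
        return self.mix(torch.cat([hi, lo_up], dim=1))

hi = torch.randn(1, 64, 100, 100)   # hyperspectral branch features
lo = torch.randn(1, 32, 10, 10)     # Sentinel-2 time-series branch features
print(CrossResolutionFusion()(hi, lo).shape)  # torch.Size([1, 64, 100, 100])
```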

Citations: 0