
Latest Publications in Neurocomputing

Improving rule-based classifiers by Bayes point aggregation
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-16 | DOI: 10.1016/j.neucom.2024.128699
The widespread adoption of artificial intelligence systems with continuously higher capabilities is causing ethical concerns. The lack of transparency, particularly for state-of-the-art models such as deep neural networks, hinders the applicability of such black-box methods in many domains, like the medical or the financial ones, where model transparency is a mandatory requirement, and hence white-box models are largely preferred over potentially more accurate but opaque techniques.
For this reason, in this paper, we focus on ruleset learning, arguably the most interpretable class of learning techniques. Specifically, we propose Bayes Point Rule Classifier, an ensemble methodology inspired by the Bayes Point Machine, to improve the performance and robustness of rule-based classifiers. In addition, to improve interpretability, we propose a technique to retain the most relevant rules based on their importance, thus increasing the transparency of the ensemble, making it easier to understand its decision-making process.
We also propose FIND-RS, a greedy ruleset learning algorithm that, under mild conditions, is guaranteed to learn a hypothesis with perfect accuracy on the training set while preserving good generalization to unseen data points.
Extensive experiments show that FIND-RS achieves state-of-the-art classification performance at the cost of a slight increase in ruleset complexity relative to its competitors. However, when paired with the Bayes Point Rule Classifier, FIND-RS outperforms all the considered baselines.
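To make the aggregation idea concrete, below is a minimal sketch (an assumption for illustration, not the paper's implementation) that trains a rule-like base learner on bootstrap resamples and averages the members' votes, in the spirit of approximating a Bayes point by the centre of mass of the ensemble; a shallow scikit-learn decision tree stands in for the FIND-RS rule learner.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bayes_point_rule_classifier(X, y, n_members=25, seed=0):
    """Sketch: learn one shallow, rule-like model per bootstrap resample and
    average their votes, approximating a Bayes-point-style aggregate decision.
    Assumes binary 0/1 labels."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))            # bootstrap resample
        stump = DecisionTreeClassifier(max_depth=3, random_state=0)
        members.append(stump.fit(X[idx], y[idx]))             # stand-in rule learner
    def predict(Z):
        votes = np.mean([m.predict(Z) for m in members], axis=0)
        return (votes >= 0.5).astype(int)                      # majority vote
    return predict
```

For example, `predict = bayes_point_rule_classifier(X_train, y_train)` followed by `predict(X_test)` yields the aggregated binary predictions.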
Citations: 0
A hybrid novel SWARA-ELECTRE-I method using probabilistic uncertain linguistic information for feature selection in image recognition
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-16 | DOI: 10.1016/j.neucom.2024.128615
In the digital age, the exponential growth of data poses significant challenges for analysts and machine learning algorithms in pattern detection due to its high dimensionality. This study addresses the dimensionality problem by leveraging Probabilistic Uncertain Linguistic Term Sets (PULTS), which combine Uncertain Linguistic Term Sets (ULTS) with associated probabilities to handle uncertainty in decision-making. We introduce the PUL-weighted average operator to integrate the opinions of multiple decision-makers and propose a novel ELimination and Choice Translating REality (ELECTRE-I) method for optimizing alternatives in multiple attribute group decision-making (MAGDM) scenarios. This method is enhanced by the Stepwise Weight Assessment Ratio Analysis (SWARA) method to determine the relative weight of each attribute. By integrating SWARA with the ELECTRE-I method, we develop a comprehensive approach to tackle MAGDM problems using PULTS. A numerical example involving feature selection in image recognition demonstrates the method's effectiveness and accuracy. Comparative studies highlight the advantages of our approach in producing a small feature set with high classification accuracy. The proposed method offers a robust solution for feature selection in image recognition and other MAGDM problems, significantly improving decision-making accuracy and efficiency. The methodology's simplicity and computational ease make it applicable across various domains requiring effective dimensionality reduction.
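For readers unfamiliar with the SWARA step, the snippet below sketches the standard SWARA weight recurrence; the comparative-importance values passed in are hypothetical expert inputs, not numbers from the paper.

```python
def swara_weights(s):
    """Standard SWARA recurrence. `s[j]` is the expert's comparative importance
    of the j-th ranked criterion relative to the (j-1)-th; s[0] is ignored."""
    q = []
    for j, sj in enumerate(s):
        if j == 0:
            q.append(1.0)                 # top-ranked criterion: q_0 = 1
        else:
            k_j = 1.0 + sj                # coefficient k_j = s_j + 1
            q.append(q[-1] / k_j)         # recalculated weight q_j = q_{j-1} / k_j
    total = sum(q)
    return [qj / total for qj in q]       # final normalized weights

# Hypothetical comparative-importance values for four ranked criteria.
print(swara_weights([0.0, 0.30, 0.20, 0.15]))
```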
Citations: 0
A novel ensemble over-sampling approach based Chebyshev inequality for imbalanced multi-label data
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-16 | DOI: 10.1016/j.neucom.2024.128717
With the development of intelligent technology, data exhibit characteristics of multi-label and imbalanced distribution, which lead to the degradation of classification model performance. Therefore, addressing multi-label class imbalance has become a hot research topic. Nowadays, over-sampling approaches aim to generate a superset of the original dataset to deal with imbalanced data. However, traditional over-sampling methods only employ the central data point and its nearest-neighbor samples to synthesize samples, without considering the impact of the data distribution. To address these issues, in this paper we propose an ensemble multi-label over-sampling algorithm (MLCIO) based on the Chebyshev inequality and a group optimization strategy. Firstly, to generate more representative and diverse samples, with the seed sample serving as the sphere's center, the Chebyshev inequality is utilized to ensure that synthetic samples fall within m times the standard deviation of the seed sample. Secondly, a group optimization ranking weighting approach is employed to obtain more reliable and stable label information. Finally, comparative experiments are conducted on 11 imbalanced datasets from various domains using different evaluation metrics. The results demonstrate that our proposal achieves better performance than other approaches.
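The core sampling idea can be sketched as follows; the function is one plausible realisation (draw candidates around the seed and clip each feature to m standard deviations of the local statistics), assumed for illustration rather than the exact MLCIO procedure.

```python
import numpy as np

def chebyshev_bounded_samples(seed, neighbors, n_new=5, m=3.0, rng=None):
    """Generate synthetic minority samples near `seed`, clipped to lie within
    m standard deviations of the local (seed + neighbors) feature-wise mean,
    so that by Chebyshev's inequality they stay in a high-probability region."""
    rng = np.random.default_rng(rng)
    local = np.vstack([seed[None, :], neighbors])
    mu, sigma = local.mean(axis=0), local.std(axis=0) + 1e-12
    new = rng.normal(loc=seed, scale=sigma, size=(n_new, seed.size))
    return np.clip(new, mu - m * sigma, mu + m * sigma)   # Chebyshev-style bound

# Hypothetical minority-class seed and its nearest neighbors.
seed = np.array([1.0, 2.0])
neighbors = np.array([[0.9, 2.2], [1.1, 1.8], [1.2, 2.1]])
print(chebyshev_bounded_samples(seed, neighbors, n_new=3, m=3.0, rng=0))
```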
Citations: 0
MARA: A deep learning based framework for multilayer graph simplification
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neucom.2024.128712
In many scientific fields, complex systems are characterized by a multitude of heterogeneous interactions/relationships that are challenging to model. Multilayer graphs constitute valuable tools that can represent such complex systems, thus making their analysis possible for downstream decision-making processes. Nevertheless, modeling such complex information still remains challenging in real-world scenarios. On the one hand, holistically including all relationships may lead to noisy or computationally intensive graphs. On the other hand, limiting the amount of information to model through the selection of a portion of the available relationships can introduce boundary specification biases. However, current research is demonstrating that it is more beneficial to retain as much information as possible and, at a later stage, perform graph simplification, i.e., removing uninformative or redundant parts of the graph to facilitate the final analysis. While simplification strategies based on deep learning methods have already been extensively explored in the context of single-layer graphs, only limited effort has been devoted to simplification strategies for multilayer graphs. In this work, we propose the MultilAyer gRaph simplificAtion (MARA) framework, a GNN-based approach designed to simplify multilayer graphs based on the downstream task. MARA generates node embeddings for a specific task by jointly training two main components: (i) an edge simplification module and (ii) a (multilayer) graph neural network. We tested MARA on different real-world multilayer graphs for node classification tasks. Experimental results show the effectiveness of the proposed approach: MARA reduces the dimension of the input graph while keeping and even improving the performance of node classification tasks in different domains and across graphs characterized by different structures. Moreover, deep learning-based simplification allows MARA to preserve and enhance important graph properties for the downstream task. To our knowledge, MARA represents the first simplification framework especially tailored for multilayer graph analysis.
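As a rough illustration of what a learnable edge simplification module can look like (a generic PyTorch sketch under assumed design choices, not MARA's actual architecture), one can score each edge from its endpoint embeddings and keep only the top-scoring fraction for the downstream GNN.

```python
import torch
import torch.nn as nn

class EdgeSimplifier(nn.Module):
    """Sketch of a learnable edge-scoring module: score each edge from the
    embeddings of its endpoints and keep only the top-k fraction, so that the
    downstream GNN operates on a simplified graph."""
    def __init__(self, dim, keep_ratio=0.5):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                    nn.Linear(dim, 1))
        self.keep_ratio = keep_ratio

    def forward(self, node_emb, edge_index):
        # node_emb: (N, d) embeddings; edge_index: (2, E) source/target ids of one layer.
        src, dst = edge_index
        pair = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
        scores = self.scorer(pair).squeeze(-1)                    # one score per edge
        k = max(1, int(self.keep_ratio * scores.numel()))
        keep = torch.topk(scores, k).indices                      # indices of kept edges
        return edge_index[:, keep], torch.sigmoid(scores[keep])   # simplified edges + weights
```

In a full pipeline such a module would be trained jointly with the GNN on the downstream node-classification loss.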
Citations: 0
Advancing MRI segmentation with CLIP-driven semi-supervised learning and semantic alignment
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neucom.2024.128690
Precise segmentation and reconstruction of multiple structures within MRI are crucial for clinical applications such as surgical navigation. However, medical image segmentation faces several challenges. Although semi-supervised methods can reduce the annotation workload, they often suffer from limited robustness. To address this issue, this study proposes a novel CLIP-driven semi-supervised model that includes two branches and a module. In the image branch, copy-paste is used as a data augmentation method to enhance consistency learning. In the text branch, patient-level information is encoded via CLIP to drive the image branch. Notably, a novel cross-modal fusion module is designed to enhance the alignment and representation of text and image. Additionally, a semantic spatial alignment module is introduced to register segmentation results from different axial MRIs into a unified space. Three multi-modal datasets (one private and two public) were constructed to demonstrate the model's performance. Compared to previous state-of-the-art methods, this model shows a significant advantage with both 5% and 10% labeled data. This study constructs a robust semi-supervised medical segmentation model, particularly effective in addressing label inconsistency and abnormal organ deformations. It also tackles the axial non-orthogonality challenges inherent in MRI, providing a consistent view of multiple structures.
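For context, copy-paste augmentation in segmentation typically transplants a labeled structure from one image onto another together with its mask; the generic sketch below illustrates this idea (identical array shapes assumed; not the paper's exact pipeline).

```python
import numpy as np

def copy_paste(src_img, src_mask, dst_img, dst_mask):
    """Generic copy-paste augmentation: transplant the foreground region of
    (src_img, src_mask) onto (dst_img, dst_mask). Arrays share the same shape;
    masks are binary (1 = foreground structure)."""
    fg = src_mask.astype(bool)
    out_img, out_mask = dst_img.copy(), dst_mask.copy()
    out_img[fg] = src_img[fg]      # paste the pixels of the copied structure
    out_mask[fg] = src_mask[fg]    # paste the corresponding labels
    return out_img, out_mask

# Hypothetical 2D slices and binary masks of identical shape.
rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))
ma = np.zeros((64, 64), dtype=np.uint8); ma[20:30, 20:30] = 1
mb = np.zeros((64, 64), dtype=np.uint8)
img, mask = copy_paste(a, ma, b, mb)
```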
Citations: 0
Enhancing aspect-based sentiment analysis with linking words-guided emotional augmentation and hybrid learning
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-15 | DOI: 10.1016/j.neucom.2024.128705
Aspect-based sentiment analysis (ABSA) is a sophisticated task in the field of natural language processing that aims to identify emotional tendencies related to specific aspects of text. However, ABSA often faces significant data shortages, which limit the availability of annotated data for training and affect the robustness of models. Moreover, when a text contains multiple emotional dimensions, these dimensions can interact, complicating judgments of emotional polarity. In response to these challenges, this study proposes an innovative training framework: Linking words-guided multidimensional emotional data augmentation and adversarial contrastive training (LWEDA-ACT). Specifically, this method alleviates the issue of data scarcity by synthesizing additional training samples using four different text generators. To obtain the most representative samples, we select them by calculating sentence entropy. Meanwhile, to reduce potential noise, we introduce linking words to ensure text coherence. Additionally, by applying adversarial training, the model is able to learn generalized feature representations that handle minor input perturbations, thereby enhancing its robustness and accuracy in complex emotional dimension interactions. Through contrastive learning, we construct positive and negative sample pairs, enabling the model to more accurately identify and distinguish the sentiment polarity of different aspect terms. We conducted comprehensive experiments on three popular ABSA datasets, namely Restaurant, Laptop, and Twitter, and compared our method against current state-of-the-art techniques. The experimental results demonstrate that our approach achieved an accuracy improvement of +0.98% and a macro F1 score increase of +0.52% on the Restaurant dataset. Additionally, on the challenging Twitter dataset, our method improved accuracy by +0.77% and the macro F1 score by +1.14%.
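The sample-selection step relies on sentence entropy; a simple interpretation, assumed here for illustration (Shannon entropy over the token frequency distribution of each candidate sentence), can be computed as follows.

```python
import math
from collections import Counter

def sentence_entropy(sentence):
    """Shannon entropy of the token frequency distribution of a sentence;
    higher entropy roughly corresponds to more varied, informative wording."""
    tokens = sentence.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical synthetic candidates: pick the most "representative" one by entropy.
candidates = ["the food was great great great",
              "service was slow but the pasta tasted wonderful"]
best = max(candidates, key=sentence_entropy)
```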
Citations: 0
A Diverse Knowledge Perception and Fusion network for detecting targets and key parts in UAV images
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-13 | DOI: 10.1016/j.neucom.2024.128748
Detecting targets and their key parts in UAV images is crucial for both military and civilian applications, including optimizing damage assessment, evaluating infrastructure, and facilitating disaster response efforts. Traditional top-down approaches impose excessive constraints that struggle to address challenges such as variable definitions and quantities of key parts, potential target occlusion, and model redundancy. Conversely, end-to-end approaches often overlook the relationships between targets and key parts, resulting in low detection accuracy. Inspired by the remarkable human reasoning process, we propose the Diverse Knowledge Perception and Fusion (DKPF) network, which skillfully balances the trade-offs between stringent constraints and unconstrained methods while ensuring both detection precision and real-time performance. Specifically, our model integrates reasoning guided by three distinct forms of knowledge: contextual knowledge at the image level in an unsupervised manner; explicit semantic knowledge regarding the interactions between targets and key parts at the instance level; and implicit comprehensive knowledge about the relationships among different types of targets or key parts, such as shape similarity. These specific knowledge forms are extracted through a novel adaptive fusion strategy for multi-scale features, a binary region-to-region semantic knowledge graph, and a data-driven self-attention architecture, respectively. Experiments conducted on both simulated and real-world datasets reveal that our method significantly outperforms state-of-the-art techniques, regardless of the number of key parts in the target. Furthermore, extensive ablation studies and visualization analyses validate both the efficacy of our approach and the interpretability of the generated features.
Citations: 0
Deep belief network with fuzzy parameters and its membership function sensitivity analysis
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1016/j.neucom.2024.128716
Over the last few years, deep belief networks (DBNs) have been extensively utilized for efficient and reliable performance in several complex systems. One critical factor contributing to the enhanced learning of the DBN layers is the handling of network parameters, such as weights and biases. The efficient training of these parameters significantly influences the overall performance of the DBN. However, the initialization of these parameters is often random, and the data samples are normally corrupted by unwanted noise. This causes uncertainty to arise in the weights and biases of the DBNs, which ultimately hinders the performance of the network. To address this challenge, we propose a novel DBN model with weights and biases represented using fuzzy sets. The approach systematically handles inherent uncertainties in the parameters, resulting in a more robust and reliable training process. We demonstrate the working of the proposed algorithm on four widely used benchmark datasets: MNIST, n-MNIST (MNIST with additive white Gaussian noise (AWGN) and MNIST with motion blur) and CIFAR-10. The experimental results show the superiority of the proposed approach compared to the classical DBN in terms of robustness and enhanced performance. Moreover, it has the capability to produce equivalent results with a smaller number of nodes in the hidden layer, thus reducing the computational complexity of the network architecture. Additionally, we perform a sensitivity analysis of stability and consistency by considering different membership functions to model the uncertain weights and biases. Further, we establish the statistical significance of the obtained results by conducting both one-way and Kruskal-Wallis analyses of variance tests.
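To illustrate what a fuzzy-valued network parameter might look like, the sketch below represents a single weight as a triangular fuzzy number and defuzzifies it with the centroid rule; this is a generic construction for illustration, not the paper's specific formulation or membership functions.

```python
class TriangularFuzzyWeight:
    """A weight represented as a triangular fuzzy number (a, b, c) with peak b;
    membership is 1 at b and falls linearly to 0 at a and c."""
    def __init__(self, a, b, c):
        assert a <= b <= c
        self.a, self.b, self.c = a, b, c

    def membership(self, x):
        if self.a < x <= self.b:
            return (x - self.a) / (self.b - self.a)
        if self.b < x < self.c:
            return (self.c - x) / (self.c - self.b)
        return 1.0 if x == self.b else 0.0

    def defuzzify(self):
        # Centroid of a triangular fuzzy number: the mean of (a, b, c).
        return (self.a + self.b + self.c) / 3.0

w = TriangularFuzzyWeight(-0.2, 0.1, 0.5)   # hypothetical uncertain weight
print(w.membership(0.1), w.defuzzify())
```

Sensitivity analysis of the kind described above would then repeat training with different membership shapes (e.g. triangular vs. Gaussian) and compare the resulting performance.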
Citations: 0
Dual-level adaptive incongruity-enhanced model for multimodal sarcasm detection
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-11 | DOI: 10.1016/j.neucom.2024.128689
Multimodal sarcasm detection leverages multimodal information, such as images and text, to identify special instances whose superficial emotional expression is contrary to the actual emotion. Existing methods primarily focus on the incongruity between text and image information for sarcasm detection. They suffer from the tendency of image encoders to encode similar images into similar vectors, and from noise introduced during graph-level feature extraction, caused by negative correlations arising from the accumulation of GAT layers and the lack of representations for non-neighboring nodes. To address these limitations, we propose a Dual-Level Adaptive Incongruity-Enhanced Model (DAIE) to extract the incongruity between the text and image at both token and graph levels. At the token level, we bolster token-level contrastive learning with patch-based reconstructed images to capture common and specific features of images, thereby amplifying incongruities between text and images. At the graph level, we introduce adaptive graph contrastive learning, coupled with negative pair similarity weights, to refine the feature representation of the model's textual and visual graph nodes, while also enhancing the information exchange among neighboring nodes. We conduct experiments using a publicly available sarcasm detection dataset. The results demonstrate the effectiveness of our method, outperforming several state-of-the-art approaches by 3.33% and 4.34% on accuracy and F1 score, respectively.
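The positive/negative pair construction mentioned above is usually trained with an InfoNCE-style objective; the snippet below shows such a generic contrastive loss, given as an illustrative assumption rather than the exact loss used in DAIE.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the anchor towards its positive embedding and
    push it away from the negatives. anchor/positive: (d,), negatives: (k, d)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1, keepdim=True)       # similarity to the positive
    neg_sim = negatives @ anchor                               # similarities to negatives
    logits = torch.cat([pos_sim, neg_sim]) / temperature
    return -F.log_softmax(logits, dim=0)[0]                    # -log p(positive)

# Hypothetical embeddings of an aspect/image token and candidate samples.
a, p = torch.randn(16), torch.randn(16)
n = torch.randn(8, 16)
loss = contrastive_loss(a, p, n)
```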
Citations: 0
Deep learning-based image encryption techniques: Fundamentals, current trends, challenges and future directions
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-09 | DOI: 10.1016/j.neucom.2024.128714
In recent years, the number of digital images has grown exponentially because of the widespread use of fast internet and smart devices. The integrity authentication of these images is a major concern for the research community. So, the encryption schemes that are commonly used to protect these images are an important subject for many potential applications. This paper presents a comprehensive survey of recent image encryption techniques using deep learning models. First, we explain the reasons that image encryption using deep learning models is beneficial to researchers and the public. Second, we discuss various state-of-art encryption techniques using deep learning models and offer technical summaries of popular techniques. Third, we provide a comparative analysis of our survey and existing state-of-the-art surveys. Finally, by investigating existing deep learning-based encryption, we identify several important research challenges and possible solutions including standard security metrics. To the best of our knowledge, we are the first researchers to do a detailed survey of deep learning-based image encryption for digital images.
Citations: 0