
Knowledge-Based Systems: Latest Publications

Joint multimodal entity-relation extraction based on temporal enhancement and similarity-gated attention
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-11 | DOI: 10.1016/j.knosys.2024.112504

Joint Multimodal Entity and Relation Extraction (JMERE), which must combine complex image information to extract entity-relation quintuples from text sequences, places higher demands on a model's multimodal feature fusion and selection capabilities. With the advancement of large pre-trained language models, existing studies focus on improving feature alignment between the textual and visual modalities. However, there remains a noticeable gap in capturing the temporal information present in textual sequences. In addition, these methods are deficient at distinguishing irrelevant images when integrating image and text features, making them susceptible to interference from image information unrelated to the text. To address these challenges, we propose a temporally enhanced and similarity-gated attention network (TESGA) for joint multimodal entity-relation extraction. Specifically, we first incorporate an LSTM-based Text Temporal Enhancement module to strengthen the model's ability to capture temporal information from the text. Next, we introduce a Text-Image Similarity-Gated Attention mechanism, which controls how much image information is incorporated based on the consistency between image and text features. Subsequently, we design the entity and relation prediction module using a form-filling approach based on entity and relation types, and predict entity-relation quintuples. Notably, apart from the JMERE task, our approach can also be applied to other tasks involving text-visual enhancement, such as Multimodal Named Entity Recognition (MNER) and Multimodal Relation Extraction (MRE). To demonstrate its effectiveness, we evaluate our model extensively on three benchmark datasets, where it achieves state-of-the-art performance. Our code will be made available upon paper acceptance.
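The similarity-gated fusion idea can be sketched compactly. The cosine-based gate and additive fusion below are illustrative assumptions, not the paper's exact formulation (TESGA applies its gate inside an attention mechanism over learned features):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity_gated_fusion(text_feat, image_feat):
    # Gate in [0, 1]: high text-image consistency admits more image signal;
    # low consistency suppresses a likely irrelevant image.
    gate = max(0.0, cosine(text_feat, image_feat))
    return [t + gate * i for t, i in zip(text_feat, image_feat)]
```

With an image feature aligned to the text the gate opens fully, while an orthogonal, unrelated image contributes nothing to the fused representation.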

Citations: 0
StAlK: Structural Alignment based Self Knowledge distillation for Medical Image Classification
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-11 | DOI: 10.1016/j.knosys.2024.112503

In the realm of medical image analysis, where challenges such as high class imbalance, inter-class similarity, and intra-class variance are prevalent, knowledge distillation has emerged as a powerful mechanism for model compression and regularization. Existing methodologies, including label smoothing, contrastive learning, and relational knowledge transfer, aim to address these challenges but show limitations in effectively managing either class imbalance or the intricate inter- and intra-class relations within input samples. In response, this paper introduces StAlK (Structural Alignment based Self Knowledge distillation) for medical image classification, a novel approach that aligns complex high-order discriminative features from a mean teacher model. This alignment enhances the student model's ability to distinguish examples across different classes. StAlK demonstrates superior performance in scenarios involving both inter- and intra-class relationships and proves significantly more robust in handling class imbalance than baseline methods. Extensive investigations across multiple benchmark datasets reveal that StAlK achieves a substantial improvement of 6%–7% in top-1 accuracy over various state-of-the-art baselines. The code is available at: https://github.com/philsaurabh/StAlK_KBS.
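The mean-teacher component referenced above is commonly realized as an exponential moving average (EMA) of the student's weights; this minimal sketch uses plain lists for parameters and an illustrative decay value:

```python
def ema_update(teacher, student, decay=0.99):
    # Mean-teacher update: after each training step the teacher's weights
    # track an exponential moving average of the student's weights, giving
    # a smoothed model whose features the student can be aligned against.
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher, student)]
```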

Citations: 0
Physically-guided temporal diffusion transformer for long-term time series forecasting
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-10 | DOI: 10.1016/j.knosys.2024.112508

The Transformer has shown excellent performance in long-term time series forecasting because of its capability to capture long-term dependencies. However, existing Transformer-based approaches often overlook characteristics inherent to time series, particularly multi-scale periodicity, which leaves a gap in their inductive biases. To address this oversight, this study develops the temporal diffusion Transformer (TDT) to reveal the intrinsic evolution processes of time series. First, to uncover the connections among the periods of multi-periodic time series, the series are transformed into various types of patches using a multi-scale patching method. Inspired by the principles of heat conduction, TDT conceptualizes the evolution of a time series as a diffusion process and aims to achieve global consistency by minimizing an energy constraint, accomplished through the iterative updating of patches. Finally, the results of these iterations across multiple periods are aggregated to form the TDT output. Compared with previous advanced models, TDT achieved state-of-the-art predictive performance in our experiments, outperforming the baseline model by approximately 2% in mean squared error (MSE) and mean absolute error (MAE) on most datasets. Its effectiveness was further validated through ablation, efficiency, and hyperparameter analyses. TDT also offers intuitive explanations by elucidating the diffusion process of the time series patches throughout the iterative procedure.
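The heat-conduction analogy can be made concrete with a discrete heat-equation step over a sequence of (here scalar) patch values; this is a minimal sketch of diffusion-style patch updating, not TDT's actual energy-minimizing iteration:

```python
def diffuse(patches, alpha=0.25, steps=10):
    # One explicit heat-equation step per iteration: each interior patch
    # moves toward the average of its neighbours, so repeated updates
    # smooth the sequence toward global consistency (endpoints are held
    # fixed for simplicity).
    x = list(patches)
    for _ in range(steps):
        nxt = x[:]
        for i in range(1, len(x) - 1):
            nxt[i] = x[i] + alpha * (x[i - 1] - 2 * x[i] + x[i + 1])
        x = nxt
    return x
```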

Citations: 0
Multidimensional time series motif group discovery based on matrix profile
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-10 | DOI: 10.1016/j.knosys.2024.112509

With continuous advancements in sensor technology and growing capabilities for data collection and storage, acquiring time series data across diverse domains has become significantly easier. Consequently, there is a growing demand for identifying potential motifs within multidimensional time series. The introduction of the Matrix Profile (MP) structure and the mSTOMP algorithm enables the detection of multidimensional motifs in large-scale time series datasets. However, the MP does not provide information about how often these motifs occur. As a result, it is challenging to determine whether a motif appears frequently or to identify the specific time periods during which it typically occurs, which limits further analysis of the discovered motifs. To address this limitation, we propose the Index Link Motif Group Discovery (ILMGD) algorithm, which uses index linking to rapidly merge and group multidimensional motifs. Based on the results of the ILMGD algorithm, we can determine the frequency and temporal positions of motifs, facilitating deeper analysis. Our method requires minimal additional parameters and reduces the need for extensive manual intervention. We validate the effectiveness of our algorithm on synthetic datasets and demonstrate its applicability on three real-world datasets, highlighting how it enables a comprehensive understanding of the discovered motifs.
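For readers unfamiliar with the MP structure the algorithm builds on, a naive version is easy to state: for every subsequence, record the distance to (and index of) its nearest non-overlapping match. The plain Euclidean distance and brute-force search below are simplifications; mSTOMP uses z-normalized distances and is far more efficient:

```python
import math

def matrix_profile(series, m):
    # Naive matrix profile of a 1-D series with subsequence length m.
    n = len(series) - m + 1
    subs = [series[i:i + m] for i in range(n)]
    profile, index = [], []
    for i in range(n):
        best, best_j = math.inf, -1
        for j in range(n):
            if abs(i - j) < m:  # exclusion zone: skip trivial self-matches
                continue
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(subs[i], subs[j])))
            if d < best:
                best, best_j = d, j
        profile.append(best)
        index.append(best_j)
    return profile, index
```

Low profile values flag motifs, and the accompanying index column is exactly the kind of link information that index-linking approaches exploit to merge motif occurrences into groups.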

Citations: 0
CIRA: Class imbalance resilient adaptive Gaussian process classifier
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-07 | DOI: 10.1016/j.knosys.2024.112500

The problem of class imbalance is pervasive across real-world applications, causing machine learning classifiers to exhibit bias towards majority classes. Algorithm-level balancing approaches adapt machine learning algorithms to learn from imbalanced datasets while preserving the data's original distribution. The Gaussian process classifier is a powerful machine learning classification algorithm; however, as with other standard classifiers, its performance can be degraded by class imbalance. In this work, we propose the Class Imbalance Resilient Adaptive Gaussian process classifier (CIRA), an algorithm-level adaptation of the binary Gaussian process classifier that alleviates class imbalance. To the best of our knowledge, CIRA is the first adaptive method for the Gaussian process classifier that handles unbalanced data. CIRA makes two balancing modifications to the original classifier. The first balances the posterior mean approximation to learn a more balanced decision boundary between the majority and minority classes. The second adopts an asymmetric conditional prediction model that gives more emphasis to the minority points during training. We conduct extensive experiments and statistical significance tests on forty-two real-world unbalanced datasets. Across these experiments, CIRA surpasses six popular data sampling methods by an average of 2.29%, 3.25%, 3.67%, and 1.81% in the geometric mean, F1-measure, Matthews correlation coefficient, and area under the receiver operating characteristic curve, respectively.
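The evaluation metrics named above are standard for imbalanced classification and can be computed directly from confusion-matrix counts; this helper is ours, not part of CIRA:

```python
import math

def imbalance_metrics(tp, fp, tn, fn):
    # Geometric mean of the per-class recalls and the Matthews correlation
    # coefficient; both stay informative under heavy class imbalance.
    sens = tp / (tp + fn) if tp + fn else 0.0  # minority-class recall
    spec = tn / (tn + fp) if tn + fp else 0.0  # majority-class recall
    gmean = math.sqrt(sens * spec)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return gmean, mcc
```

A classifier that ignores a 5%-minority class still scores 95% accuracy, but its G-mean collapses to zero, which is why such metrics are preferred here.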

Citations: 0
A survey on temporal knowledge graph embedding: Models and applications
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-07 | DOI: 10.1016/j.knosys.2024.112454

Knowledge graph embedding (KGE), as a pivotal technology in artificial intelligence, plays a significant role in enhancing the logical reasoning and management efficiency of downstream tasks in knowledge graphs (KGs). It maps the intricate structure of a KG to a continuous vector space. Conventional KGE techniques primarily focus on representing static data within a KG. However, in the real world, facts frequently change over time, as exemplified by evolving social relationships and news events. The effective utilization of embedding technologies to represent KGs that integrate temporal data has gained significant scholarly interest. This paper comprehensively reviews the existing methods for learning KG representations that incorporate temporal data. It offers a highly intuitive perspective by categorizing temporal KGE (TKGE) methods into seven main classes based on dynamic evolution models and extensions of static KGE. The review covers various aspects of TKGE, including the background, problem definition, symbolic representation, training process, commonly used datasets, evaluation schemes, and relevant research. Furthermore, detailed descriptions of related embedding models are provided, followed by an introduction to typical downstream tasks in temporal KG scenarios. Finally, the paper concludes by summarizing the challenges faced in TKGE and outlining future research directions.
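As a flavour of the dynamic-evolution family such surveys cover, one of the simplest temporal extensions of translational KGE treats the timestamp as an extra translation (a TTransE-style score); the toy list embeddings below are purely illustrative:

```python
import math

def ttranse_score(h, r, tau, t):
    # TTransE-style plausibility of a temporal fact (h, r, t, tau): the
    # head, relation, and timestamp embeddings should translate onto the
    # tail, so a smaller distance (higher score) means a more plausible fact.
    dist = math.sqrt(sum((hi + ri + taui - ti) ** 2
                         for hi, ri, taui, ti in zip(h, r, tau, t)))
    return -dist
```

Training then pushes true quadruples toward score 0 and corrupted ones away, as in static TransE but with time folded into the translation.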

Citations: 0
Optimizing multi-time series forecasting for enhanced cloud resource utilization based on machine learning
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-07 | DOI: 10.1016/j.knosys.2024.112489

Due to its flexibility, cloud computing has become essential in modern operational schemes. However, managing cloud resources effectively, so as to ensure cost-effectiveness while maintaining high performance, presents significant challenges. The pay-as-you-go pricing model, while convenient, can lead to escalating expenses and hinder long-term planning. Consequently, FinOps advocates proactive management strategies, with resource usage prediction emerging as a crucial optimization category. In this research, we introduce the multi-time series forecasting system (MSFS), a novel approach for data-driven resource optimization, alongside the hybrid ensemble anomaly detection algorithm (HEADA). Our method prioritizes a concept-centric approach, focusing on factors such as prediction uncertainty, interpretability, and domain-specific measures. Furthermore, we introduce the similarity-based time-series grouping (STG) method as a core component of MSFS for optimizing multi-time series forecasting, ensuring that it scales with the rapid growth of cloud environments. The experiments performed demonstrate that our group-specific forecasting model (GSFM) approach enabled MSFS to achieve a significant cost reduction of up to 44%.
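The grouping step can be illustrated with a greedy sketch in which each series joins the first group whose representative it correlates with strongly enough. The Pearson measure and first-member representative are assumptions for illustration; STG's actual similarity measure and grouping strategy may differ:

```python
def pearson(x, y):
    # Pearson correlation between two equal-length series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def group_series(series_list, threshold=0.9):
    # Greedy grouping: a series joins the first group whose representative
    # (its first member) it matches above the threshold, else it starts a
    # new group; one forecasting model can then be fitted per group.
    groups = []
    for s in series_list:
        for g in groups:
            if pearson(s, g[0]) >= threshold:
                g.append(s)
                break
        else:
            groups.append([s])
    return groups
```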

Citations: 0
CENN: Capsule-enhanced neural network with innovative metrics for robust speech emotion recognition CENN:采用创新指标的胶囊增强型神经网络,用于稳健的语音情感识别
IF 7.2 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-07 DOI: 10.1016/j.knosys.2024.112499

Speech emotion recognition (SER) plays a pivotal role in enhancing Human-computer interaction (HCI) systems. This paper introduces a groundbreaking Capsule-enhanced neural network (CENN) that significantly advances the state of SER through a robust and reproducible deep learning framework. The CENN architecture seamlessly integrates advanced components, including Multi-head attention (MHA), residual module, and capsule module, which collectively enhance the model's capacity to capture both global and local features essential for precise emotion classification. A key contribution of this work is the development of a comprehensive reproducibility framework, featuring novel metrics: General learning reproducibility (GLR) and Correct learning reproducibility (CLR). These metrics, alongside their fractional and perfect variants, offer a multi-dimensional evaluation of the model's consistency and correctness across multiple executions, thereby ensuring the reliability and credibility of the results. To tackle the persistent challenge of overfitting in deep learning models, we propose an innovative overfitting metric that considers the intricate relationship between training and testing errors, model complexity, and data complexity. This metric, in conjunction with the newly introduced generalization and robustness metrics, provides a holistic assessment of the model's performance, guiding the application of regularization techniques to maintain generalizability and resilience. Extensive experiments conducted on benchmark SER datasets demonstrate that the CENN model not only surpasses existing approaches in terms of accuracy but also sets a new benchmark in reproducibility. This work establishes a new paradigm for deep learning model development in SER, underscoring the vital importance of reproducibility and offering a rigorous framework for future research.
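The abstract names two reproducibility metrics, GLR and CLR, without giving their formulas, so the following is a hedged sketch of one natural reading: GLR as the fraction of test samples that receive the same predicted label in every run, and CLR as the fraction predicted correctly in every run. The function name and both definitions are assumptions, not the paper's.

```python
import numpy as np

def reproducibility_metrics(runs: list[np.ndarray],
                            labels: np.ndarray) -> tuple[float, float]:
    """Toy run-to-run reproducibility scores.

    glr: fraction of samples assigned the same label in every run
         (agreement, regardless of correctness).
    clr: fraction of samples assigned the correct label in every run.
    """
    preds = np.stack(runs)                  # shape: (n_runs, n_samples)
    same = (preds == preds[0]).all(axis=0)  # identical prediction across runs
    correct = (preds == labels).all(axis=0)  # correct in every run
    return float(same.mean()), float(correct.mean())

labels = np.array([0, 1, 1, 0])
runs = [np.array([0, 1, 1, 1]),  # run 1: wrong only on the last sample
        np.array([0, 1, 0, 1])]  # run 2: additionally flips sample 3
glr, clr = reproducibility_metrics(runs, labels)
print(glr, clr)  # 0.75 0.5
```

Here samples 1, 2, and 4 are predicted identically across runs (GLR = 0.75), but only samples 1 and 2 are correct in both runs (CLR = 0.5).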

{"title":"CENN: Capsule-enhanced neural network with innovative metrics for robust speech emotion recognition","authors":"","doi":"10.1016/j.knosys.2024.112499","DOIUrl":"10.1016/j.knosys.2024.112499","url":null,"abstract":"<div><p>Speech emotion recognition (SER) plays a pivotal role in enhancing Human-computer interaction (HCI) systems. This paper introduces a groundbreaking Capsule-enhanced neural network (CENN) that significantly advances the state of SER through a robust and reproducible deep learning framework. The CENN architecture seamlessly integrates advanced components, including Multi-head attention (MHA), residual module, and capsule module, which collectively enhance the model's capacity to capture both global and local features essential for precise emotion classification. A key contribution of this work is the development of a comprehensive reproducibility framework, featuring novel metrics: General learning reproducibility (GLR) and Correct learning reproducibility (CLR). These metrics, alongside their fractional and perfect variants, offer a multi-dimensional evaluation of the model's consistency and correctness across multiple executions, thereby ensuring the reliability and credibility of the results. To tackle the persistent challenge of overfitting in deep learning models, we propose an innovative overfitting metric that considers the intricate relationship between training and testing errors, model complexity, and data complexity. This metric, in conjunction with the newly introduced generalization and robustness metrics, provides a holistic assessment of the model's performance, guiding the application of regularization techniques to maintain generalizability and resilience. Extensive experiments conducted on benchmark SER datasets demonstrate that the CENN model not only surpasses existing approaches in terms of accuracy but also sets a new benchmark in reproducibility. 
This work establishes a new paradigm for deep learning model development in SER, underscoring the vital importance of reproducibility and offering a rigorous framework for future research.</p></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142168961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Elliptic geometry-based kernel matrix for improved biological sequence classification
IF 7.2 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-07 DOI: 10.1016/j.knosys.2024.112479

Protein sequence classification plays a pivotal role in bioinformatics as it enables the comprehension of protein functions and their involvement in diverse biological processes. While numerous machine learning models have been proposed to tackle this challenge, traditional approaches face limitations in capturing the intricate relationships and hierarchical structures inherent in genomic sequences. These limitations stem from operating within high-dimensional non-Euclidean spaces. To address this issue, we introduce the application of the elliptic geometry-based approach for protein sequence classification. First, we transform the problem in elliptic geometry and integrate it with the Gaussian kernel to map the problem into the Mercer kernel. The Gaussian-Elliptic approach allows for the implicit mapping of data into a higher-dimensional feature space, enabling the capture of complex nonlinear relationships. This feature becomes particularly advantageous when dealing with hierarchical or tree-like structures commonly encountered in biological sequences. Experimental results highlight the effectiveness of the proposed model in protein sequence classification, showcasing the advantages of utilizing elliptic geometry in bioinformatics analyses. It outperforms state-of-the-art methods by achieving 76% and 84% accuracies for DNA and Protein datasets, respectively. Furthermore, we provide theoretical justifications for the proposed model. This study contributes to the burgeoning field of geometric deep learning, offering insights into the potential applications of elliptic representations in the analysis of biological data.
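The Gaussian-Elliptic construction is only described at a high level, so here is a hedged sketch of one common way to combine the two ingredients: a Gaussian kernel computed over geodesic (great-circle) distances on the unit sphere, a standard model of elliptic geometry. Whether this matches the paper's kernel is an assumption, and geodesic Gaussian kernels are not guaranteed to be positive definite for every sigma; this is an illustration, not the authors' implementation.

```python
import numpy as np

def elliptic_gaussian_kernel(X: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Gaussian kernel over geodesic distances on the unit sphere."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # project onto sphere
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)                # pairwise cosines
    d = np.arccos(cos)                                 # geodesic distances
    return np.exp(-d**2 / (2 * sigma**2))

# Hypothetical feature vectors standing in for embedded sequences.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = elliptic_gaussian_kernel(X)
print(K.shape, np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))
```

The resulting matrix is symmetric with unit diagonal (each point is at geodesic distance zero from itself) and could be passed to any kernel classifier that accepts a precomputed Gram matrix.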

{"title":"Elliptic geometry-based kernel matrix for improved biological sequence classification","authors":"","doi":"10.1016/j.knosys.2024.112479","DOIUrl":"10.1016/j.knosys.2024.112479","url":null,"abstract":"<div><p>Protein sequence classification plays a pivotal role in bioinformatics as it enables the comprehension of protein functions and their involvement in diverse biological processes. While numerous machine learning models have been proposed to tackle this challenge, traditional approaches face limitations in capturing the intricate relationships and hierarchical structures inherent in genomic sequences. These limitations stem from operating within high-dimensional non-Euclidean spaces. To address this issue, we introduce the application of the elliptic geometry-based approach for protein sequence classification. First, we transform the problem in elliptic geometry and integrate it with the Gaussian kernel to map the problem into the Mercer kernel. The Gaussian-Elliptic approach allows for the implicit mapping of data into a higher-dimensional feature space, enabling the capture of complex nonlinear relationships. This feature becomes particularly advantageous when dealing with hierarchical or tree-like structures commonly encountered in biological sequences. Experimental results highlight the effectiveness of the proposed model in protein sequence classification, showcasing the advantages of utilizing elliptic geometry in bioinformatics analyses. It outperforms state-of-the-art methods by achieving 76% and 84% accuracies for DNA and Protein datasets, respectively. Furthermore, we provide theoretical justifications for the proposed model. 
This study contributes to the burgeoning field of geometric deep learning, offering insights into the potential applications of elliptic representations in the analysis of biological data.</p></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142162520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-timescale attention residual shrinkage network with adaptive global-local denoising for rolling-bearing fault diagnosis
IF 7.2 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-07 DOI: 10.1016/j.knosys.2024.112478

In actual engineering scenarios, bearing fault signals are inevitably overwhelmed by strong background noise from various sources. However, most deep-learning-based diagnostic models tend to broaden the feature extraction scale to extract rich fault features for bearing-fault identification under noise interference, with little attention paid to multi-timescale discriminative feature mining with adaptive noise rejection, which affects the diagnostic performance. Thus, a multi-timescale attention residual shrinkage network with adaptive global-local denoising (AMARSN) was proposed for rolling-bearing fault diagnosis by learning discriminative multi-timescale fault features from signals and fully eliminating noise components in the multi-timescale fault features. First, a multi-timescale attention learning module (MALMod) was developed to capture multi-timescale fault features and enhance their discriminability under noise interference. Subsequently, an adaptive global-local denoising module (AGDMod) was constructed to fully eliminate noise in multiscale fault features by constructing specific global-local denoising thresholds and designing an adaptive smooth soft thresholding function. Finally, end-to-end bearing fault diagnosis tasks were realized using a softmax classifier located at the end of the AMARSN. The AMARSN was validated using two bearing datasets. The extensive results demonstrated that the AMARSN can mine more effective fault features from signals and achieve average diagnostic accuracies of 85.24% and 80.09% under different noise with different levels.
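The paper's adaptive smooth soft thresholding function is not specified in the abstract. Residual shrinkage networks conventionally build on classic soft thresholding, sketched below; the fixed threshold `tau` stands in for the adaptively learned per-feature threshold, so treat this as a simplified stand-in rather than AGDMod itself.

```python
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    """Classic soft thresholding: entries with magnitude below tau are
    zeroed (treated as noise); the rest are shrunk toward zero by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# Hypothetical feature activations; small values mimic noise components.
features = np.array([-1.5, -0.2, 0.1, 0.8, 2.0])
print(soft_threshold(features, tau=0.3))  # small magnitudes become 0
```

In a shrinkage network this operation sits after the attention branch, which would output `tau` per channel instead of the constant used here.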

{"title":"Multi-timescale attention residual shrinkage network with adaptive global-local denoising for rolling-bearing fault diagnosis","authors":"","doi":"10.1016/j.knosys.2024.112478","DOIUrl":"10.1016/j.knosys.2024.112478","url":null,"abstract":"<div><p>In actual engineering scenarios, bearing fault signals are inevitably overwhelmed by strong background noise from various sources. However, most deep-learning-based diagnostic models tend to broaden the feature extraction scale to extract rich fault features for bearing-fault identification under noise interference, with little attention paid to multi-timescale discriminative feature mining with adaptive noise rejection, which affects the diagnostic performance. Thus, a multi-timescale attention residual shrinkage network with adaptive global-local denoising (AMARSN) was proposed for rolling-bearing fault diagnosis by learning discriminative multi-timescale fault features from signals and fully eliminating noise components in the multi-timescale fault features. First, a multi-timescale attention learning module (MALMod) was developed to capture multi-timescale fault features and enhance their discriminability under noise interference. Subsequently, an adaptive global-local denoising module (AGDMod) was constructed to fully eliminate noise in multiscale fault features by constructing specific global-local denoising thresholds and designing an adaptive smooth soft thresholding function. Finally, end-to-end bearing fault diagnosis tasks were realized using a softmax classifier located at the end of the AMARSN. The AMARSN was validated using two bearing datasets. 
The extensive results demonstrated that the AMARSN can mine more effective fault features from signals and achieve average diagnostic accuracies of 85.24% and 80.09% under different noise with different levels.</p></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142171649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0