
Knowledge-Based Systems: Latest Publications

Joint-optimized coverage path planning framework for USV-assisted offshore bathymetric mapping: From theory to practice
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-09-03 · DOI: 10.1016/j.knosys.2024.112449

Designing effective coverage routes for unmanned surface vehicles (USVs) is crucial for improving the efficiency of offshore bathymetric surveys. However, existing coverage planning methods are of limited practical use, primarily because of the large-scale survey areas and the intricate region geometries created by coastal features. This study addresses these challenges by introducing a coverage path planning framework for USV-assisted bathymetric mapping, specifically aimed at the joint optimization of paths covering numerous complex regions. Initially, we conceptualize the large-scale bathymetric survey mission as an integer programming model. The model uses four distinct decision variables to formulate length calculations, inter-regional connections, entry and exit point selections, and line sweep directions. Then, a novel hierarchical algorithm is devised to solve the problem. The method first incorporates a bisection-based convex decomposition to achieve optimal partitioning of complex regions. Additionally, a hierarchical heuristic optimization algorithm that seamlessly integrates the optimization of all influencing factors is designed, comprising order generation, candidate pattern finding, tour finding, and final optimization. The reliability of the framework is validated through semi-physical simulations and lake trials using a real USV. In comparative studies, our model demonstrates clear advantages in computational efficiency and optimization capability over state-of-the-art methods, and its superiority becomes more pronounced as the problem scale increases. The results from the lake trials further affirm the efficient and reliable performance of our model in practical bathymetric survey tasks.
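
As an illustration of the kind of per-region length calculation such a model's decision variables encode, the sketch below estimates the lawnmower path length of one convex survey region for a candidate sweep direction and line spacing. The function names and the area-based approximation are assumptions for illustration only, not the authors' implementation.

```python
import math

def polygon_area(vertices):
    # Shoelace formula for a simple polygon given as a list of (x, y) tuples.
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def sweep_path_length(vertices, direction_deg, spacing):
    """Approximate coverage path length: on-line distance plus inter-line transitions."""
    theta = math.radians(direction_deg)
    # Unit vector perpendicular to the sweep direction.
    nx, ny = -math.sin(theta), math.cos(theta)
    offsets = [x * nx + y * ny for x, y in vertices]
    width = max(offsets) - min(offsets)           # extent to be covered by parallel lines
    n_lines = max(1, math.ceil(width / spacing))  # number of survey lines
    on_line = polygon_area(vertices) / spacing    # total distance travelled along survey lines
    transitions = (n_lines - 1) * spacing         # connecting segments between adjacent lines
    return on_line + transitions

# Example: a 200 m x 100 m rectangle surveyed with 20 m line spacing.
rect = [(0, 0), (200, 0), (200, 100), (0, 100)]
print(sweep_path_length(rect, 0.0, 20.0))         # -> 1080.0
```

Evaluating this quantity for every candidate sweep direction of every sub-region is the kind of term a joint optimizer would trade off against inter-regional connection costs.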

Citations: 0
Which is better? Taxonomy induction with learning the optimal structure via contrastive learning
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-09-03 · DOI: 10.1016/j.knosys.2024.112405

A taxonomy represents a hierarchically structured knowledge graph that forms the infrastructure for various downstream applications, including recommender systems, web search, and question answering. The exploration of automated induction from text corpora has yielded notable taxonomies such as CN-probase, CN-DBpedia, and Zhishi.schema. Despite these efforts, existing taxonomies still face two critical issues that result in sub-optimal hierarchical structures. On the one hand, commonly observed taxonomies exhibit a coarse-grained and “flat” structure, stemming from a noticeable lack of diversity in both nodes and edges. This limitation primarily originates from the biased and homogeneous data distribution. On the other hand, the semantic granularity among “siblings” within these taxonomies remains inconsistent, presenting a challenge in accurately and comprehensively identifying hierarchical relations. To address these issues, this study introduces a novel taxonomy induction framework composed of three meticulously designed components. Initially, we establish a seed schema by leveraging statistical information from external data sources as distant supervision to append nodes and edges containing “generic semantics”, thereby rectifying biased data distributions. Subsequently, a clustering algorithm is employed to group the nodes based on their similarities, followed by a refinement of the hierarchical relations among these nodes. Building on this seed schema, we propose a fine-grained contrastive learning method in the expansion module to strengthen the utilization of taxonomic structures, consequently boosting the precision of query-anchor matching. Finally, we meticulously scrutinize the hierarchical relations between each query and its siblings to ensure the integrity of the constructed taxonomy. Extensive experiments on real-world datasets validate the efficacy of our proposed framework for constructing well-structured taxonomies.
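
A minimal sketch of the contrastive ingredient described above: an InfoNCE-style loss that pulls a query node's embedding toward its true anchor and away from other candidate anchors. The cosine-similarity scoring, temperature, and variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def info_nce_loss(query, positive, negatives, temperature=0.1):
    """query: (d,), positive: (d,), negatives: (k, d); all are embedding vectors."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    # Similarity of the query to its true anchor followed by the negative candidates.
    logits = np.array([cos(query, positive)] + [cos(query, n) for n in negatives])
    logits /= temperature
    log_prob = logits[0] - np.log(np.exp(logits).sum())   # log-softmax of the positive
    return -log_prob

rng = np.random.default_rng(0)
q, pos = rng.normal(size=64), rng.normal(size=64)
negs = rng.normal(size=(8, 64))
print(info_nce_loss(q, pos, negs))
```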

Citations: 0
Feature/vector entity retrieval and disambiguation techniques to create a supervised and unsupervised semantic table interpretation approach
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-09-03 · DOI: 10.1016/j.knosys.2024.112447

Recently, there has been increasing interest in extracting and annotating tables on the Web. This activity allows the transformation of textual data into machine-readable formats to enable the execution of various artificial intelligence tasks, e.g., semantic search and dataset extension. Semantic Table Interpretation (STI) is the process of annotating the elements in a table. The paper explores Semantic Table Interpretation, addressing the challenges of Entity Retrieval and Entity Disambiguation in the context of Knowledge Graphs (KGs). It introduces LamAPI, an Information Retrieval system with string/type-based filtering, and s-elBat, an Entity Disambiguation technique that combines heuristic and ML-based approaches. By applying the know-how acquired in the field and extracting algorithms, techniques, and components from our previous STI approaches and the state of the art, we have created a new platform capable of annotating any tabular data while ensuring a high level of quality.
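
The sketch below illustrates the general idea of string/type-based candidate filtering for a table cell mention. The dictionary layout, threshold, and function signature are hypothetical and not LamAPI's actual API.

```python
from difflib import SequenceMatcher

def retrieve_candidates(mention, column_type, kb_entities, threshold=0.6):
    """kb_entities: iterable of dicts like {"id": ..., "label": ..., "types": [...]}."""
    candidates = []
    for ent in kb_entities:
        if column_type and column_type not in ent["types"]:
            continue                              # type-based filter
        score = SequenceMatcher(None, mention.lower(), ent["label"].lower()).ratio()
        if score >= threshold:                    # string-based filter
            candidates.append((score, ent["id"]))
    # Best string matches first; a disambiguation step would re-rank these candidates.
    return [eid for _, eid in sorted(candidates, reverse=True)]

kb = [
    {"id": "Q90", "label": "Paris", "types": ["city"]},
    {"id": "Q167646", "label": "Paris Hilton", "types": ["human"]},
]
print(retrieve_candidates("paris", "city", kb))   # -> ['Q90']
```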

Citations: 0
Fair swarm learning: Improving incentives for collaboration by a fair reward mechanism
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-09-02 · DOI: 10.1016/j.knosys.2024.112451

Swarm learning is an emerging technique for collaborative machine learning in which several participants train machine learning models without sharing private data. In a standard swarm network, all nodes receive identical final models regardless of their individual contributions. This mechanism may be deemed unfair from an economic perspective, discouraging organizations with more resources from participating in any collaboration. Here, we present a framework for swarm learning in which nodes receive personalized models based on their contributions. The results of this study demonstrate the efficacy of this approach by showing that all participants experience performance enhancements compared with their local models. However, participants with higher contributions receive better models than those with lower contributions. This fair mechanism yields the highest possible accuracy for the most contributive participant, comparable to the standard swarm learning model. Such an incentive structure can motivate resource-rich organizations to engage in collaboration, leading to machine learning models that incorporate data from more sources, which is ultimately beneficial for every party.
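
One plausible way to realize a contribution-proportional reward is sketched below: each node receives a personalized blend of the aggregated model and its own local model, with the top contributor receiving the full aggregate. This mixing rule is an assumption for illustration only, not the paper's exact mechanism.

```python
import numpy as np

def personalized_models(local_weights, contributions):
    """local_weights: list of 1-D parameter vectors; contributions: non-negative scores."""
    c = np.asarray(contributions, dtype=float)
    share = c / c.sum()                            # normalized contribution of each node
    global_model = np.average(local_weights, axis=0, weights=share)
    personalized = []
    for w_i, s_i in zip(local_weights, share):
        alpha = s_i / share.max()                  # 1.0 for the top contributor
        # Higher contributors get a model closer to the full aggregate.
        personalized.append(alpha * global_model + (1 - alpha) * w_i)
    return personalized

locals_ = [np.zeros(4), np.ones(4), 2 * np.ones(4)]
print(personalized_models(locals_, contributions=[1, 2, 3])[0])
```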

Citations: 0
MLP-AIR: An effective MLP-based module for actor interaction relation learning in group activity recognition
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-09-02 · DOI: 10.1016/j.knosys.2024.112453

Modeling actor interaction relations is crucial for group activity recognition. Previous approaches often adopt a fixed paradigm that calculates an affinity matrix to model these interaction relations, achieving strong performance. On the one hand, the affinity matrix introduces an inductive bias that actor interaction relations should be dynamically computed from the input actor features. On the other hand, MLPs with static parameterization, in which parameters are fixed after training, can represent arbitrary functions. Therefore, it is an open question whether this inductive bias is necessary for modeling actor interaction relations. To explore its impact, we propose an affinity-matrix-free paradigm that directly uses MLPs with static parameterization to model actor interaction relations. We term this approach MLP-AIR. This paradigm overcomes the limitations of the inductive bias and enhances the capture of implicit actor interaction relations. Specifically, MLP-AIR consists of two sub-modules: the MLP-based Interaction relation modeling module (MLP-I) and the MLP-based Relation refining module (MLP-R). MLP-I models the spatial-temporal interaction relations by emphasizing cross-actor and cross-frame feature learning, while MLP-R refines the relation between different channels of each relation feature, thereby enhancing the expressive ability of the features. MLP-AIR is a plug-and-play module. To evaluate it, we applied MLP-AIR to replicate three representative methods and conducted extensive experiments on two widely used benchmarks: the Volleyball and Collective Activity datasets. The experiments demonstrate that MLP-AIR achieves favorable results. The code is available at https://github.com/Xuguoliang12/MLP-AIR.
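
The sketch below illustrates the affinity-matrix-free idea: static fully connected layers mix features across the actor, frame, and channel axes, so interaction modeling needs no dynamically computed attention matrix. The weight shapes, residual connections, and the channel step standing in for an MLP-R-like refinement are assumptions for illustration, not the released MLP-AIR code.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, D = 8, 12, 64              # frames, actors, feature channels
X = rng.normal(size=(T, N, D))   # per-actor features per frame

# Static MLP parameters (fixed after training); here randomly initialized.
W_actor = rng.normal(size=(N, N)) * 0.1   # cross-actor mixing
W_frame = rng.normal(size=(T, T)) * 0.1   # cross-frame mixing
W_chan = rng.normal(size=(D, D)) * 0.1    # channel-wise refinement

def relu(x):
    return np.maximum(x, 0.0)

H = relu(np.einsum('tnd,nm->tmd', X, W_actor)) + X   # mix information across actors
H = relu(np.einsum('tnd,ts->snd', H, W_frame)) + H   # mix information across frames
Y = relu(H @ W_chan) + H                             # refine channel-wise relations
print(Y.shape)                                       # (8, 12, 64)
```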

Citations: 0
A knowledge enhanced learning and semantic composition model for multi-claim fact checking
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-09-02 · DOI: 10.1016/j.knosys.2024.112439

To inhibit the spread of rumorous information and its severe impacts, fact checking aims at retrieving relevant evidence to verify the veracity of a given statement. Fact checking methods typically use knowledge graphs (KGs) as external repositories and develop reasoning mechanisms to retrieve evidence for verifying the statement. Existing fact checking methods have focused on verifying a statement consisting of a single claim expressed by one clause. However, as real-world rumorous information is usually complex and a textual statement is often composed of multiple clauses (i.e., represented as multiple claims instead of a single one), multi-claim fact checking is not only necessary but also more important for practical applications. Multi-claim statements imply rich contextual information, and modeling the interactions of multiple claims can facilitate better verification. In this paper, we propose a knowledge enhanced learning and semantic composition model for multi-claim fact checking. Our model consists of two modules: KG-based learning enhancement and multi-claim semantic composition. To fully utilize the contextual information implied in multiple claims, the KG-based learning enhancement module learns dynamic context-specific representations by selectively aggregating relevant attributes of entities. To verify multiple claims robustly, the multi-claim semantic composition module learns a unified representation for multiple claims by modeling inter-claim interactions and then verifies them as a whole on this basis. We conduct experimental studies to validate the proposed method, and the results on three typical datasets confirm the efficacy of our model for multi-claim fact checking.
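
As a sketch of the context-specific representation idea, the snippet below softmax-weights a KG entity's attribute embeddings by their relevance to the claim being verified. The dot-product scoring and the shapes are illustrative assumptions, not the paper's model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def context_entity_embedding(claim_vec, attribute_vecs):
    """claim_vec: (d,); attribute_vecs: (k, d) embeddings of the entity's attributes."""
    scores = attribute_vecs @ claim_vec            # relevance of each attribute to the claim
    weights = softmax(scores)
    return weights @ attribute_vecs                # context-specific entity representation

rng = np.random.default_rng(1)
claim = rng.normal(size=32)
attrs = rng.normal(size=(5, 32))
print(context_entity_embedding(claim, attrs).shape)   # (32,)
```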

Citations: 0
Temporal Graph Network for continuous-time dynamic event sequence
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-09-02 · DOI: 10.1016/j.knosys.2024.112452

Continuous-Time Dynamic Graph (CTDG) methods have shown a superior ability to learn representations for dynamic graph-structured data. These methods split the sequential updating process into discrete batches to reduce computation costs; as a result, the message constructor in existing CTDG methods cannot be optimized by gradient descent and is designed to be parameter-free. In particular, this layer fails to embed complex event subgraphs and ignores structure information, even though most real-world events are structured and complex. For example, a paper publication event in an academic graph contains different relations such as authorship and citations. Furthermore, the corresponding nodes cannot receive position-wise messages to make precise representation updates. To tackle this issue, we propose a new method called Temporal Graph Network for continuous-time dynamic Event sequence (TGNE), which uses a structure-aware message constructor to update node representations from complex event subgraphs. By treating message construction and delivery as a message-passing process, the message constructor can be formalized as a graph neural network layer. TGNE extends the input of CTDG methods to subgraphs with complex structures and preserves more information in message delivery. Extensive experiments demonstrate that the proposed method achieves competitive performance on traditional tasks on bipartite graphs and on event sequence learning tasks on heterogeneous graphs.
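
A minimal sketch of a learnable, structure-aware message constructor expressed as one message-passing step over an event subgraph is shown below; the parameterization and mean aggregation are assumptions for illustration, not the TGNE implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def event_messages(node_feats, edges, W_msg):
    """node_feats: (n, d); edges: list of (src, dst) pairs within one event subgraph;
    W_msg: (2*d, d) parameters of the message constructor (trainable in practice)."""
    n, d = node_feats.shape
    messages = np.zeros((n, d))
    degree = np.zeros(n)
    for src, dst in edges:
        pair = np.concatenate([node_feats[src], node_feats[dst]])
        messages[dst] += relu(pair @ W_msg)        # position-wise message to the target node
        degree[dst] += 1
    degree[degree == 0] = 1.0
    return messages / degree[:, None]              # mean-aggregated per-node messages

rng = np.random.default_rng(2)
feats = rng.normal(size=(4, 16))                   # e.g. a paper, two authors, one venue
W = rng.normal(size=(32, 16)) * 0.1
msgs = event_messages(feats, edges=[(1, 0), (2, 0), (3, 0)], W_msg=W)
print(msgs.shape)                                  # (4, 16)
```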

Citations: 0
Semi-supervised noise-resilient anomaly detection with feature autoencoder
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-08-31 · DOI: 10.1016/j.knosys.2024.112445

Most methods use only normal samples to learn anomaly detection (AD) models in an unsupervised manner. However, these samples may be noisy in real-world applications, leaving the models unable to accurately identify anomalous objects. In addition, real industrial production yields a small number of anomaly samples that should be fully exploited to aid model discrimination. Existing methods that introduce anomaly samples still face bottlenecks in model identification capability. In this paper, by introducing both normal and a few abnormal samples, we propose a novel semi-supervised learning method for anomaly detection, named RobustPatch, which improves model discriminability through a self-cross scoring mechanism and the learning of a feature AutoEncoder. Our approach contains two core designs. First, we propose a self-cross scoring module that computes the weights of normal and anomaly features extracted from the corresponding images in a self-scoring and cross-scoring manner, respectively. Second, we propose a fully connected feature AutoEncoder to rate the extracted features, trained under the supervision of the scored weights. Extensive experiments on the MVTecAD and BTAD datasets validate the superior anomaly-boundary discriminability of our approach and its superior performance in noise-polluted scenarios.
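
The sketch below shows the basic building block of feature-autoencoder scoring: features with large reconstruction error are rated as more likely anomalous. The network shapes and scoring rule are illustrative assumptions, not RobustPatch itself.

```python
import numpy as np

rng = np.random.default_rng(3)
D, H = 64, 16
W_enc = rng.normal(size=(D, H)) * 0.1   # encoder weights (trained in practice)
W_dec = rng.normal(size=(H, D)) * 0.1   # decoder weights

def relu(x):
    return np.maximum(x, 0.0)

def anomaly_score(features):
    """features: (n, D) patch features; returns one reconstruction-error score per feature."""
    recon = relu(features @ W_enc) @ W_dec
    return np.linalg.norm(features - recon, axis=1)

feats = rng.normal(size=(10, D))
print(anomaly_score(feats).round(2))    # higher score -> more likely anomalous
```

In a semi-supervised setting, the per-feature weights produced by a scoring mechanism would supervise how strongly each feature's reconstruction error contributes to the training loss.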

Citations: 0
Global sparse attention network for remote sensing image super-resolution
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-08-31 · DOI: 10.1016/j.knosys.2024.112448

Recently, remote sensing images have become popular in various tasks, including resource exploration. However, limited by hardware conditions and formation processes, the obtained remote sensing images often suffer from low resolution. Unlike the high hardware cost of acquiring high-resolution images, super-resolution software methods are good alternatives for restoring low-resolution images. In addition, remote sensing images share a common property: similar visual patterns repeatedly appear across distant locations. To fully capture these long-range satellite image contexts, we first introduce the global attention network super-resolution method to reconstruct the images. This network improves performance but introduces unessential information while significantly increasing the computational effort. To address these problems, we propose an innovative method named the global sparse attention network (GSAN) that integrates both sparsity constraints and global attention. Specifically, our method applies spherical locality sensitive hashing (SLSH) to convert feature elements into hash codes, constructs attention groups based on the hash codes, and computes the attention matrix according to similar elements in the attention group. Our method captures valid and useful global information and reduces the computational effort from quadratic to asymptotically linear with respect to the spatial size. Extensive qualitative and quantitative experiments demonstrate that our GSAN has significant competitive advantages in performance and computational cost compared with other state-of-the-art methods.
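
A minimal sketch of bucketed sparse attention via random-hyperplane hashing on the unit sphere is given below: attention is only computed among elements that share a hash code, which is what removes the quadratic cost. The hashing scheme, single-head attention, and bucket handling are simplifying assumptions, not the GSAN implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sparse_bucket_attention(X, n_planes=4, seed=0):
    """X: (n, d) feature elements; returns an attention output of the same shape."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)   # points on the unit sphere
    planes = rng.normal(size=(d, n_planes))
    # Sign pattern of random projections -> one integer hash code per element.
    codes = (Xn @ planes > 0).astype(int).dot(1 << np.arange(n_planes))
    out = np.zeros_like(X)
    for code in np.unique(codes):
        idx = np.where(codes == code)[0]                         # one attention group
        Q = K = V = X[idx]
        attn = softmax(Q @ K.T / np.sqrt(d))                     # attention within the bucket only
        out[idx] = attn @ V
    return out

X = np.random.default_rng(4).normal(size=(100, 32))
print(sparse_bucket_attention(X).shape)   # (100, 32)
```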

Citations: 0
A Clean-Label Graph Backdoor Attack Method in Node Classification Task
IF 7.2 · CAS Tier 1 (Computer Science) · Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · Pub Date: 2024-08-31 · DOI: 10.1016/j.knosys.2024.112433

Backdoor attacks in the traditional graph neural networks (GNNs) field are easily detectable due to the dilemma of confusing labels. To explore the backdoor vulnerability of GNNs and create a stealthier backdoor attack method, this paper proposes a clean-label graph backdoor attack method (CGBA) for the node classification task. Unlike existing backdoor attack methods, CGBA requires neither modification of node labels nor of the graph structure. Specifically, to solve the problem of inconsistency between the contents and labels of the samples, CGBA selects poisoning samples within a specific target class and uses the samples’ own label as the target label (i.e., clean label) after injecting triggers into the target samples. To guarantee the similarity of neighboring nodes, the raw features of the nodes are carefully selected as triggers to further improve their concealment. Extensive experimental results show the effectiveness of our method. When the poisoning rate is 0.04, CGBA achieves average attack success rates of 87.8%, 98.9%, 89.1%, and 98.5%, respectively.

Citations: 0