Information Sciences: Latest Publications

Knowledge-informed randomized machine learning and data fusion for anomaly areas detection in multimodal 3D images
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-22. DOI: 10.1016/j.ins.2024.121354

We consider a long-standing yet hard and largely open machine learning problem: detecting anomalous areas in multimodal 3D images. Purely data-driven methods often fail at such tasks because they rarely incorporate domain-specific knowledge into the algorithm and do not fully utilize information from multiple modalities. We address these issues by proposing a novel framework with data fusion technology that leverages domain-specific knowledge and multimodal labeled data, and that employs the power of randomized learning techniques. To demonstrate the efficiency of the proposed framework, we apply it to the challenging task of detecting subtle pathologies in MRI scans. A distinct feature of the resulting solution is that it explicitly incorporates evidence-based medical knowledge about pathologies into the feature maps. Our experiments show that the method achieves lesion detection in 71% of subjects using just one such feature. Integrating information from all feature maps and data modalities raises the detection rate to 78%. Using stochastic configuration networks to initialize the weights of the classification model increases the precision metric by 18% compared to deterministic approaches. This demonstrates the possibility and practical viability of building efficient and interpretable randomized algorithms for automated anomaly detection in complex multimodal data.
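As a rough illustration of the randomized-learning ingredient mentioned in this abstract, the sketch below fits a single-hidden-layer classifier whose hidden weights are drawn at random and whose output weights are solved in closed form. This is the generic random-feature flavour of such learners, not the stochastic configuration network construction used in the paper, and the data, function names and parameters are synthetic placeholders.

import numpy as np

def fit_randomized_classifier(X, y, n_hidden=100, reg=1e-3, seed=0):
    """Single-hidden-layer randomized learner: random input weights,
    output weights obtained by ridge least squares."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)                                    # random feature map
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy usage: voxel-level feature vectors with synthetic binary lesion labels
X = np.random.rand(200, 8)
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(float)
W, b, beta = fit_randomized_classifier(X, y)
print(predict(X[:5], W, b, beta))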

Citations: 0
Understanding world models through multi-step pruning policy via reinforcement learning
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-22. DOI: 10.1016/j.ins.2024.121361

In model-based reinforcement learning, the conventional approach to addressing world model bias is to use gradient optimization methods. However, relying on a single policy obtained from gradient optimization in response to world model bias inevitably results in an inherently biased policy, because the policy is constrained by the imperfect and dynamic data of state-action pairs. The gap between the world model and the real environment can never be completely eliminated. This article introduces a novel approach that explores a variety of policies instead of focusing on either world model bias or singular policy bias. Specifically, we introduce the Multi-Step Pruning Policy (MSPP), which aims to reduce redundant actions and compress the action and state spaces. This approach encourages a different perspective within the same world model. To achieve this, we use multiple pruning policies in parallel and integrate their outputs using the cross-entropy method. Additionally, we provide a convergence analysis of the pruning policy theory in tabular form and an updated parameter theoretical framework. In the experimental section, the newly proposed MSPP method demonstrates a comprehensive understanding of the world model and outperforms existing state-of-the-art model-based reinforcement learning baselines.
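The integration step described above, several pruning policies combined through the cross-entropy method, can be pictured with the following sketch: the cross-entropy method refines a Gaussian over actions seeded by the policies' mean proposal. The policies, the score function standing in for a world-model rollout, and all hyperparameters are illustrative assumptions rather than the paper's algorithm.

import numpy as np

def cem_select_action(policies, score_fn, act_dim, iters=5, pop=64, elite_frac=0.2, seed=0):
    """Cross-entropy method seeded by the mean proposal of several policies.
    score_fn evaluates a candidate action (placeholder for a world-model rollout)."""
    rng = np.random.default_rng(seed)
    proposals = np.stack([p() for p in policies])            # one proposal per policy
    mu, sigma = proposals.mean(axis=0), proposals.std(axis=0) + 0.5
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        cand = rng.normal(mu, sigma, size=(pop, act_dim))
        scores = np.array([score_fn(a) for a in cand])
        elite = cand[np.argsort(scores)[-n_elite:]]           # keep the best candidates
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mu

# toy usage with two dummy pruning policies and a quadratic score
act_dim = 3
policies = [lambda: np.zeros(act_dim), lambda: np.ones(act_dim)]
best = cem_select_action(policies, lambda a: -np.sum((a - 0.7) ** 2), act_dim)
print(best)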

Citations: 0
Multi-objective evolutionary algorithm based on transfer learning and neural networks: Dual operator feature fusion and weight vector adaptation
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-22. DOI: 10.1016/j.ins.2024.121364

In multi-objective evolutionary algorithms, one of the focal points is finding a balance between diversity and convergence. In decomposition-based algorithms, the role of weight vectors is crucial. Despite numerous studies dedicated to these aspects, few works utilize transfer learning algorithms for dual-operator feature fusion or employ neural networks for accurate partitioning of the objective-space population. To address these issues, this paper proposes the following improvements: (1) A Balanced Distribution Adaptation (BDA) transfer learning algorithm is implemented to achieve dual-operator feature fusion, producing a transfer population that guides the adaptive adjustment of the weight vectors. (2) Integrating the BDA algorithm with multi-objective algorithms requires labeled data, which is a challenge in multi-objective evolutionary algorithms. To tackle this issue, non-dominated sorting is introduced as a bridge connecting BDA and the multi-objective evolutionary algorithm, combining the advantages of decomposition-based and Pareto-dominance-based multi-objective algorithms. (3) To overcome the impact of the traditional Euclidean distance on population sparsity, a neural network is employed to determine the population's distribution in the objective space accurately, ensuring the precise identification of individuals to be removed from the current population and of the areas where additions are needed. To fully validate the effectiveness of the proposed algorithm, four sets of experiments are conducted in the experimental section: comparisons on three sets of benchmark problems against a variety of algorithms that have received much attention in recent years, as well as ablation experiments.
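Point (2) above relies on non-dominated sorting to supply the labels that BDA needs. A minimal sketch of Pareto-rank labelling by repeated front peeling is given below; it only illustrates the labelling idea, not the paper's full pipeline, and the toy objective values are invented.

import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated_ranks(F):
    """Assign each individual a Pareto rank (0 = first front) by peeling fronts;
    the ranks can serve as class labels for the transfer-learning step."""
    F = np.asarray(F)
    remaining = list(range(len(F)))
    ranks = np.empty(len(F), dtype=int)
    rank = 0
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        for i in front:
            ranks[i] = rank
        remaining = [i for i in remaining if i not in front]
        rank += 1
    return ranks

# toy usage: six individuals, two objectives to minimize
F = [[1, 5], [2, 3], [3, 1], [4, 4], [2, 6], [5, 5]]
print(non_dominated_ranks(F))   # [0 0 0 1 1 2]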

Citations: 0
Simplified rough sets
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-21. DOI: 10.1016/j.ins.2024.121367

Z. Pawlak first proposed the rough set (RS) in 1982. For over forty years, scholars have developed a large number of RS models to solve various data problems. However, most RS models are designed based on inherent rules, and their mathematical structures are similar and complex. For this reason, the efficiency of RS methods in analyzing data has not been significantly improved. To address this issue, we propose some new rules to simplify traditional RS models. These simplified RS models, which are equivalent to traditional RS models, can mine data more quickly. In this paper, we take the Pawlak RS as an example to compare the computational efficiency of the simplified Pawlak RS (SPRS) with that of traditional RSs. Numerical experiments confirm that the computational efficiency of the SPRS is not only far superior to that of the traditional Pawlak RS (TPRS), but also higher than that of most existing RSs.
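For reference, the classic Pawlak approximations computed by the traditional model (TPRS) can be sketched as follows; the simplified rules proposed in the paper are not spelled out in the abstract, so only the baseline computation is shown, on an invented toy decision table.

from collections import defaultdict

def equivalence_classes(objects, attributes):
    """Partition objects by their values on the chosen attributes
    (the indiscernibility relation of a Pawlak information system)."""
    blocks = defaultdict(set)
    for obj, desc in objects.items():
        blocks[tuple(desc[a] for a in attributes)].add(obj)
    return list(blocks.values())

def approximations(objects, attributes, target):
    """Classic Pawlak lower and upper approximations of a target concept."""
    lower, upper = set(), set()
    for block in equivalence_classes(objects, attributes):
        if block <= target:
            lower |= block          # block lies entirely inside the concept
        if block & target:
            upper |= block          # block overlaps the concept
    return lower, upper

# toy information system: objects described by two attributes
objects = {
    "x1": {"color": "red",  "size": "big"},
    "x2": {"color": "red",  "size": "big"},
    "x3": {"color": "blue", "size": "small"},
    "x4": {"color": "blue", "size": "big"},
}
print(approximations(objects, ["color", "size"], {"x1", "x3"}))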

Citations: 0
Regulation-aware graph learning for drug repositioning over heterogeneous biological network
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-21. DOI: 10.1016/j.ins.2024.121360

Drug repositioning (DR) is crucial for identifying new disease indications for existing drugs and enhancing their clinical utility. Despite the effectiveness of various artificial intelligence techniques in discovering novel drug-disease associations (DDAs), many algorithms primarily focus on incorporating biological knowledge of drugs and diseases into DDA networks, often overlooking the rich connectivity patterns inherent in heterogeneous biological networks. In this study, we leveraged diverse connectivity patterns to gain new insights into the regulatory mechanisms of drugs acting on target proteins in diseases. We defined a set of meta-paths to reveal different regulatory mechanisms, each corresponding to distinct connectivity patterns. For each meta-path, we constructed a regulation graph through random-walk sampling of its instances in the network and obtained drug and disease embeddings through regulation-aware graph representation learning. Subsequently, we proposed a novel multi-view attention mechanism to enhance drug and disease representations. The task of predicting DDAs was accomplished using the XGBoost classifier based on the final representations of drugs and diseases. The experimental results demonstrated the superior performance of our method, RGLDR, on three benchmark datasets under ten-fold cross-validation, outperforming state-of-the-art DR algorithms across several evaluation metrics. Furthermore, case studies on two diseases indicated that RGLDR is a promising DR tool that leverages meaningful connectivity patterns for improved efficacy.
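The meta-path random-walk sampling step can be illustrated roughly as follows. The graph layout (a nested dict of relation-typed adjacency lists), the relation names, and the toy drug-protein-disease network are assumptions for illustration, not the paper's data model.

import random

def meta_path_walks(graph, meta_path, start_nodes, walks_per_node=5, seed=0):
    """Sample walks that follow a fixed relation sequence (meta-path) over a
    heterogeneous graph stored as graph[node][relation] -> list of neighbors."""
    rng = random.Random(seed)
    walks = []
    for start in start_nodes:
        for _ in range(walks_per_node):
            walk, node, completed = [start], start, True
            for rel in meta_path:
                neighbors = graph.get(node, {}).get(rel, [])
                if not neighbors:
                    completed = False
                    break
                node = rng.choice(neighbors)
                walk.append(node)
            if completed:
                walks.append(walk)
    return walks

# toy heterogeneous network: drug -targets-> protein -involved_in-> disease
graph = {
    "drugA": {"targets": ["prot1", "prot2"]},
    "prot1": {"involved_in": ["disease1"]},
    "prot2": {"involved_in": ["disease1", "disease2"]},
}
print(meta_path_walks(graph, ["targets", "involved_in"], ["drugA"], walks_per_node=3))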

Citations: 0
A hierarchical and interlamination graph self-attention mechanism-based knowledge graph reasoning architecture
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-21. DOI: 10.1016/j.ins.2024.121345

The Knowledge Graph (KG) is an essential research field in graph theory, but its inherent incompleteness and sparsity limit its performance in several fields. Knowledge Graph Reasoning (KGR) aims to ameliorate these problems by mining new knowledge from existing knowledge. As one of the downstream tasks of KGR, link prediction is of great significance for improving the quality of KGs. Recently, Graph Neural Network (GNN)-based methods have become the most effective way to achieve the link prediction task. However, they still suffer from problems such as incomplete neighbor- and relation-level information aggregation and unstable learning of entity features. To address these issues, a Hierarchical and Interlamination Graph Self-attention Mechanism-based (HIGSM) plug-and-play architecture is proposed for KGR in this paper. It is composed of three layers: a feature extractor, an encoder, and a decoder. The feature extractor makes our architecture more effective and stable in retrieving new features. The encoder is equipped with a two-stage encoding mechanism accompanied by two mixture-of-experts strategies, which enables our architecture to capture more practical reasoning information and thereby improve the prediction accuracy and generalization of the model. The decoder can use existing KGR models and compute the scores of triples in the KG. Extensive experimental results and ablation studies on four KGs unambiguously demonstrate the state-of-the-art prediction performance of the proposed HIGSM architecture compared to current GNN-based methods.
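A single-head graph self-attention update of the kind such encoders build on can be sketched as below; the projection matrices and embeddings are random placeholders, and the paper's hierarchical/interlamination structure and mixture-of-experts strategies are not reproduced.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def graph_self_attention(h_center, h_neighbors, Wq, Wk, Wv):
    """One attention head: the center entity attends over its neighbors and
    returns the attention-weighted sum of their value projections."""
    q = Wq @ h_center
    keys = h_neighbors @ Wk.T                     # (num_neighbors, d)
    values = h_neighbors @ Wv.T
    alpha = softmax(keys @ q / np.sqrt(len(q)))   # attention weights over neighbors
    return alpha @ values

# toy usage: 4-dimensional entity embeddings, three neighbors
rng = np.random.default_rng(0)
d = 4
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
print(graph_self_attention(rng.standard_normal(d), rng.standard_normal((3, d)), Wq, Wk, Wv))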

Citations: 0
Entropy measures of multigranular unbalanced hesitant fuzzy linguistic term sets for multiple criteria decision making
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-21. DOI: 10.1016/j.ins.2024.121346

The hesitant fuzzy linguistic term set (HFLTS) is an efficient tool for modeling linguistic information in multi-criteria decision making (MCDM), and the entropy measure of HFLTSs, as a substantial representation of uncertainty, merits further investigation. This article develops a general framework to facilitate the construction of entropy measures for multigranular unbalanced HFLTSs. An axiomatic definition of the entropy for HFLTSs that considers both types of uncertainty (fuzziness and hesitation) is presented, with the entropy measure subsequently derived from a distance-based mapping. From this definition, several results are deduced for the mapping that yields the entropy expression, so that such functions can be obtained with ease. Thereafter, an MCDM weight-determining model for multigranular unbalanced linguistic information without preset weights is devised, and an empirical application of the suggested model in MCDM is illustrated. Finally, comparisons and analyses with existing studies are conducted to demonstrate the advantages of the proposed work.
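As a toy illustration of a distance-based entropy that grows with both fuzziness (closeness of the terms to the middle of the scale) and hesitation (spread of the envelope), one might write the following; this construction is invented for illustration only and is not the measure proposed in the article.

def hflts_entropy(indices, g, w=0.5):
    """Toy entropy of an HFLTS element given as a set of term indices on
    S = {s_0, ..., s_g}: fuzziness peaks at the middle term, hesitation
    grows with the envelope width; w balances the two components."""
    idx = sorted(indices)
    mid = g / 2.0
    fuzziness = 1.0 - (2.0 / (g * len(idx))) * sum(abs(i - mid) for i in idx)
    hesitation = (idx[-1] - idx[0]) / g
    return w * fuzziness + (1 - w) * hesitation

# examples on a 7-term scale (g = 6): a crisp extreme term vs. a hesitant span
print(hflts_entropy({0}, g=6))        # extreme term, no hesitation -> 0.0
print(hflts_entropy({2, 3, 4}, g=6))  # centred and spread out -> about 0.56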

Citations: 0
LeaDCD: Leadership concept-based method for community detection in social networks
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-21. DOI: 10.1016/j.ins.2024.121341

Community discovery plays an essential role in analyzing and understanding the behavior and relationships of users in social networks. For this reason, various algorithms have been developed in the last decade for discovering the optimal community structure. In social networks, some individuals have special characteristics that make them well known to others. These groups of users are called leaders and often have a significant impact on others, with an exceptional ability to build communities. In this paper, we propose an efficient method to detect communities in social networks using the concept of leadership (LeaDCD). The proposed algorithm involves three main phases. First, based on nodes' degree centrality and maximal cliques, small groups of nodes (leaders) considered as seeds for communities are discovered. Next, unassigned nodes are added to the seeds through an expansion process to generate the initial community structure. Finally, small communities are merged to form the final community structure. To demonstrate the effectiveness of our proposal, we carried out comprehensive experiments on real-world and artificial graphs. The findings indicate that our algorithm outperforms other commonly used methods, demonstrating its high efficiency and reliability in discovering communities within social graphs.
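The first phase (leader seeds from degree centrality and maximal cliques) might look roughly like the sketch below, written with networkx. The fraction of leaders, the minimum clique size, and the way cliques are attached to leaders are guesses, and the expansion and merging phases are omitted.

import networkx as nx

def leader_seeds(G, top_fraction=0.1, min_clique_size=3):
    """Pick the highest degree-centrality nodes as candidate leaders, then use
    the maximal cliques containing a leader as initial community seeds."""
    centrality = nx.degree_centrality(G)
    k = max(1, int(top_fraction * G.number_of_nodes()))
    leaders = sorted(centrality, key=centrality.get, reverse=True)[:k]
    seeds = [set(c) for c in nx.find_cliques(G)
             if len(c) >= min_clique_size and any(v in leaders for v in c)]
    return leaders, seeds

# toy usage on a classic small social graph
G = nx.karate_club_graph()
leaders, seeds = leader_seeds(G)
print(leaders, seeds[:2])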

Citations: 0
Adaptive event-triggered sliding mode control for platooning of heterogeneous vehicular systems and its L2 input-to-output string stability
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-21. DOI: 10.1016/j.ins.2024.121342

Platooning of vehicular systems is an effective technique for enhancing transportation efficiency. As the scale of vehicular platoon systems increases, disturbances on individual vehicles can affect the whole platoon through their connections. Besides, excessive numbers of vehicles impose a significant burden on communication devices. To this end, this work investigates the distributed platoon control problem of connected vehicular systems subject to disturbances by employing a resource-efficient communication mechanism. The proposed adaptive event-triggered mechanism (AETM) avoids periodic data transmission and reduces the communication burden among vehicles. Besides, the AETM regulates the triggering threshold dynamically via the perception of spacing errors and avoids continuous inter-vehicle communication. Next, an AETM-based finite-time extended state observer (AFESO) is designed to alleviate the impact of external disturbances. Then, an adaptive event-triggered distributed sliding mode control (DSMC) framework is developed to guarantee platoon stability. It is proved that, under the proposed control method, the closed-loop system subject to the disturbances satisfies L2 input-to-output string stability (L2-IOSS). The salient feature of the AETM-based DSMC is that the AETM effectively reduces communication consumption, while the DSMC mitigates the performance degradation caused by triggering errors and disturbances. Finally, numerical simulations demonstrate the effectiveness of the proposed algorithm.
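The flavour of the event-triggered mechanism can be conveyed with a toy rule in which a vehicle rebroadcasts its spacing error only when it drifts sufficiently far from the last transmitted value, with a threshold that adapts to the error magnitude. The actual AETM triggering law, observer, and controller of the paper are not reproduced here; the trajectory and constants are invented.

import numpy as np

def simulate_event_triggering(spacing_error, base_threshold=0.05, kappa=0.5):
    """Transmit only when the deviation from the last broadcast value exceeds
    an adaptive threshold that shrinks as the spacing error grows, so large
    errors are reported more promptly (illustrative rule only)."""
    last_sent = spacing_error[0]
    events = [0]
    for k, e in enumerate(spacing_error[1:], start=1):
        threshold = base_threshold / (1.0 + kappa * abs(e))   # adaptive threshold
        if abs(e - last_sent) > threshold:                     # triggering condition
            last_sent = e
            events.append(k)
    return events

# toy spacing-error trajectory with a disturbance appearing halfway through
t = np.linspace(0, 10, 200)
err = 0.3 * np.exp(-0.5 * t) + 0.1 * np.sin(2 * t) * (t > 4)
events = simulate_event_triggering(err)
print(f"{len(events)} transmissions out of {len(err)} samples")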

Citations: 0
Coherence mode: Characterizing local graph structural information for temporal knowledge graph
IF 8.1, Q1 Computer Science, COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-21. DOI: 10.1016/j.ins.2024.121357

A Temporal Knowledge Graph (TKG) is designed for effectively modelling the temporal relationships and dynamics of entities, events and concepts. Owing to its temporal attributes, a TKG offers greater benefits for reasoning than a static knowledge graph (KG). However, existing approaches for TKG reasoning do not consider coherent relationships between numerous facts, a term borrowed from domain-specific parlance reflecting the interaction between objectives, such as observation and removal agents. This characteristic suggests that a model can obtain more insights from simultaneous coherent relationships. To address this problem, we develop a label-based process to construct a TKG from temporal event data in these domains. Based on this process, we build a specific TKG called the Simulated Agent Interaction Knowledge Graph (SAIKG). In addition, we propose a novel TKG reasoning mechanism, termed the Coherence Mode. It is premised on event coherence, enabling the prediction of unknown facts. Extensive experimental studies on different datasets demonstrate the effectiveness of the Coherence Mode integrated with typical models.
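The label-based construction of a TKG from temporal event data can be pictured as assembling timestamped quadruples, as in the sketch below. The record fields and the observation/removal relations are illustrative assumptions, not the SAIKG schema.

from collections import defaultdict

def build_tkg(event_records):
    """Assemble a temporal knowledge graph as (head, relation, tail, time)
    quadruples, also indexed by timestamp for snapshot-based reasoning."""
    quadruples = []
    snapshots = defaultdict(list)
    for rec in event_records:
        quad = (rec["subject"], rec["relation"], rec["object"], rec["time"])
        quadruples.append(quad)
        snapshots[rec["time"]].append(quad[:3])
    return quadruples, dict(snapshots)

# toy event log in the observation/removal-agent setting mentioned above
events = [
    {"subject": "agent_1", "relation": "observes", "object": "target_A", "time": 0},
    {"subject": "agent_2", "relation": "removes",  "object": "target_A", "time": 1},
    {"subject": "agent_1", "relation": "observes", "object": "target_B", "time": 1},
]
quads, snapshots = build_tkg(events)
print(quads)
print(snapshots[1])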

Citations: 0