
Knowledge-Based Systems: Latest Publications

Deceptive evidence detection of belief functions based on reinforcement learning in partial label environment
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.knosys.2024.112623
Yuhang Chang , Junhao Pan , Xuan Zhao , Bingyi Kang
Counter-deception evidence fusion is a critical issue in the application of Dempster–Shafer Theory (DST). Effectively detecting deceptive evidence poses a significant challenge in DST-based information fusion. Existing research on this topic is limited and often lacks a clear distinction between deceptive and credible evidence. Recently, two explicit definitions of deceptive evidence have been proposed to address different scenarios: one for cases with label information and another for cases without. However, these definitions are somewhat counter-intuitive and do not address situations where partial label information is available.
To address this gap, our paper introduces a new, explicit definition of deceptive evidence that considers both the characteristics of the evidence and the fusion system. This definition encompasses cases with label information, without label information, and with partial label information. It extends the two previously mentioned definitions and, in certain circumstances, aligns with them.
Based on our new definition, we propose a mathematical model for counter-deception evidence fusion across these three scenarios and apply reinforcement learning to solve it. We present several numerical simulations, a data-driven counter-deception test, and a practical application to demonstrate that our method outperforms previous approaches in both detecting deceptive evidence and in practical applications, showcasing superior effectiveness and robustness.
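For readers unfamiliar with DST evidence fusion, a minimal sketch of the classic Dempster combination rule, which this line of work builds on (this is the textbook rule, not the paper's counter-deception model), is:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions given as dicts
    mapping frozenset hypotheses to mass values."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # renormalize the remaining mass after discarding conflict
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two pieces of evidence over the frame {'a', 'b'}
m1 = {frozenset({'a'}): 0.8, frozenset({'a', 'b'}): 0.2}
m2 = {frozenset({'a'}): 0.6, frozenset({'b'}): 0.3, frozenset({'a', 'b'}): 0.1}
fused = dempster_combine(m1, m2)
```

Deceptive evidence is precisely evidence whose combination under this rule drags the fused belief away from the truth, which is why detecting it before fusion matters.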
Citations: 0
Agent-DA: Enhancing low-resource event extraction with collaborative multi-agent data augmentation
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.knosys.2024.112625
Xuemeng Tian , Yikai Guo , Bin Ge , Xiaoguang Yuan , Hang Zhang , Yuting Yang , Wenjun Ke , Guozheng Li
Low-resource event extraction presents a significant challenge in real-world applications, particularly in domains such as pharmaceuticals, the military, and law, where data is frequently insufficient. Data augmentation, a direct method for expanding samples, is considered an effective solution. However, existing data augmentation methods often suffer from text-fluency issues and label hallucination. To address these challenges, we propose a framework called Agent-DA, which leverages multi-agent collaboration for event extraction data augmentation. Specifically, Agent-DA follows a three-step process: data generation by a large language model; collaborative filtering by both the large language model and a small language model to discriminate easy samples; and the use of an adjudicator to identify hard samples. Through iterative and selective augmentation, our method significantly enhances both the quantity and quality of event samples, improving text fluency and label consistency. Extensive experiments on the ACE2005-EN and ACE2005-EN+ datasets demonstrate the effectiveness of Agent-DA, with F1-score improvements ranging from 0.15% to 16.18% in trigger classification and from 2.2% to 15.67% in argument classification.
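The filter-then-adjudicate step described above can be sketched as follows; the function names, scoring interface, and threshold are illustrative assumptions, not the Agent-DA API:

```python
def augment_round(candidates, llm_score, slm_score, adjudicate, thr=0.5):
    """One Agent-DA-style round over pre-generated candidates: keep
    samples both scorers accept (easy), defer disagreements to an
    adjudicator (hard), and drop samples both reject."""
    easy, hard = [], []
    for cand in candidates:
        big, small = llm_score(cand), slm_score(cand)
        if big >= thr and small >= thr:
            easy.append(cand)      # both agents agree: easy sample
        elif big >= thr or small >= thr:
            hard.append(cand)      # disagreement: hard sample
    return easy + [c for c in hard if adjudicate(c)]

# Toy scorers standing in for the large/small language models
kept = augment_round(
    ["good sample", "debatable sample", "bad sample"],
    llm_score=lambda c: 0.9 if "bad" not in c else 0.1,
    slm_score=lambda c: 0.9 if c == "good sample" else 0.2,
    adjudicate=lambda c: "debatable" in c,
)
```

The design point is that easy samples cost only two cheap scoring passes, while the (presumably more expensive) adjudicator is consulted only on disagreements.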
Citations: 0
Deep attributed graph clustering with feature consistency contrastive and topology enhanced network
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.knosys.2024.112634
Xin Huang , Fan Yang , Guanqiu Qi , Yuanyuan Li , Ranqiao Zhang , Zhiqin Zhu
Deep attributed graph clustering has attracted considerable interest lately due to its capability to uncover meaningful latent knowledge from heterogeneous spaces, thereby improving our comprehension of real-world systems. However, ensuring the consistency of the clustering assignments generated from topological and attribute information remains a key issue, and is one of the reasons for the low performance of clustering. To tackle these issues, a novel deep clustering approach with Feature Consistency Contrastive and Topology Enhanced Network (FCC-TEN) is proposed; it consists of a graph attention network (GAT) and an auto-encoder (AE) that mine the topological and attribute information and perform consistency contrastive learning to improve clustering performance. First, a Fusion Graph Convolutional Auto-encoder module is proposed to fuse the attribute information captured by each layer of the AE and enrich topological information, improving the feature extraction capability of the AE. Then, a Feature Consistency Contrastive module uncovers consistency information between the GAT and AE through contrastive learning at the feature and label levels. Finally, clustering results are obtained directly from the clustering assignment produced at the label level. Comprehensive testing on five improved datasets shows that our method provides advanced clustering performance. Moreover, visual analyses of the clustering results corroborate a gradual refinement of the clustering structure, proving the validity of our approach.
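As a rough illustration of feature-level consistency contrastive learning, here is a generic InfoNCE-style loss between two feature views (a common formulation, not necessarily FCC-TEN's exact objective):

```python
import numpy as np

def consistency_contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss: row i of z1 and row i of z2 form a positive
    pair; all other rows of z2 act as negatives for row i."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # cosine similarity / temperature
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # minimizing aligns matched pairs
```

Minimizing such a loss pushes the GAT view and the AE view of the same node together, which is the "consistency" the abstract refers to.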
Citations: 0
A cross-domain transfer learning model for author name disambiguation on heterogeneous graph with pretrained language model
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.knosys.2024.112624
Zhenyuan Huang , Hui Zhang , Chengqian Hao , Haijun Yang , Harris Wu
Author names in scientific literature are often ambiguous, complicating the accurate retrieval of academic information. Furthermore, many author names are shared by multiple scholars, making it challenging to construct academic search engine knowledge bases. These issues highlight the need for effective author name disambiguation. Existing methods have limitations in handling text content and heterogeneous graph node representations and often require extensive annotated training data. This study introduces an academic heterogeneous graph embedding neural network, HGNN-S, which leverages a pretrained semantic language model to integrate semantic information from texts, heterogeneous attribute relationships, and heterogeneous neighbor data. Trained on a small amount of single-domain annotated data, HGNN-S can disambiguate names across multiple domains. Experimental results demonstrate that our model outperforms current state-of-the-art methods and enhances search performance on the China National Platform, Kejso.
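For context on the task, a naive embedding-similarity baseline for name disambiguation (emphatically not HGNN-S; the greedy assignment and threshold are assumptions for illustration) might look like:

```python
import numpy as np

def disambiguate(papers, vecs, threshold=0.8):
    """Greedy baseline: papers sharing an author name are merged into
    one identity when their embedding cosine similarity >= threshold."""
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    labels, next_id = [-1] * len(papers), 0
    for i in range(len(papers)):
        for j in range(i):
            if float(vecs[i] @ vecs[j]) >= threshold:
                labels[i] = labels[j]  # join an existing identity
                break
        if labels[i] == -1:
            labels[i] = next_id        # open a new identity
            next_id += 1
    return labels

# Three papers under one author name: two on graphs, one on biology
vecs = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
labels = disambiguate(["p1", "p2", "p3"], vecs)
```

Methods like HGNN-S improve on this by learning the representations jointly from text semantics and heterogeneous graph structure rather than thresholding raw embeddings.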
Citations: 0
Progressive Supervision via Label Decomposition: An long-term and large-scale wireless traffic forecasting method
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-18 | DOI: 10.1016/j.knosys.2024.112622
Daojun Liang , Haixia Zhang , Dongfeng Yuan , Minggao Zhang
Long-term and Large-scale Wireless Traffic Forecasting (LL-WTF) is pivotal for strategic network management and comprehensive planning on a macro scale. However, LL-WTF poses greater challenges than short-term ones due to the pronounced non-stationarity of extended wireless traffic and the vast number of nodes distributed at the city scale. To cope with this, we propose a Progressive Supervision method based on Label Decomposition (PSLD). Specifically, we first introduce a Random Subgraph Sampling (RSS) algorithm designed to sample a tractable subset from large-scale traffic data, thereby enabling efficient network training. Then, PSLD employs label decomposition to obtain multiple easy-to-learn components, which are learned progressively at shallow layers and combined at deep layers to effectively cope with the non-stationary problem raised by LL-WTF tasks. Finally, we compare the proposed method with various state-of-the-art (SOTA) methods on three large-scale WT datasets. Extensive experimental results demonstrate that the proposed PSLD significantly outperforms existing methods, with an average 2%, 4%, and 11% performance improvement on three WT datasets, respectively. In addition, we built an open source library for WT forecasting (WTFlib) to facilitate related research, which contains numerous SOTA methods and provides a strong benchmark. Experiments can be reproduced through https://github.com/Anoise/WTFlib.
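The abstract does not spell out RSS; one plausible reading — sampling a tractable, connected induced subgraph by BFS from a random seed node — can be sketched as follows (the traversal strategy is an assumption):

```python
import random
from collections import deque

def random_subgraph(adj, k, rng=None):
    """BFS from a random seed node until k nodes are collected, then
    return the induced subgraph. `adj` maps node -> neighbor list."""
    rng = rng or random.Random(0)
    seed = rng.choice(list(adj))
    seen, queue = {seed}, deque([seed])
    while queue and len(seen) < k:
        node = queue.popleft()
        for nb in adj[node]:
            if nb not in seen and len(seen) < k:
                seen.add(nb)
                queue.append(nb)
    # keep only edges whose both endpoints were sampled
    return {u: [v for v in adj[u] if v in seen] for u in seen}

# A 6-node chain 0-1-2-3-4-5; sample a 3-node connected piece of it
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
sub = random_subgraph(adj, 3)
```

Training on many such subsets keeps per-step memory bounded even when the full city-scale graph does not fit on one device.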
Citations: 0
A novel method for identifying key nodes in multi-layer networks based on dynamic influence range and community importance
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-17 | DOI: 10.1016/j.knosys.2024.112639
Zhengyi An , Xianghui Hu , Ruixia Jiang , Yichuan Jiang
Identifying key nodes in multi-layer networks is a hot research topic in complex network science and has broad application prospects, such as mining enterprises that significantly affect multi-layer industrial chains. Unlike single-layer networks, nodes in multi-layer networks exhibit heterogeneity due to varying connections and locations. There are also correlations between different network layers, which is particularly evident in industrial chains where companies operate across multiple layers of production, supply and distribution. It is necessary to consider the impact of these layers on the global performance of key node identification. In addition, because connections change, the community structure of each network layer differs, reflecting the dynamic nature of industrial collaborations and partnerships. However, existing research lacks a model that addresses the above problems. Therefore, this paper proposes a key node identification method based on Dynamic Influence Range and Community Importance (DIRCI), which uses both local and global information of the multi-layer network simultaneously. DIRCI determines the importance of nodes through three centrality measures: dynamic influence range-based centrality, network layer centrality and community-based centrality. Dynamic influence range-based centrality models node heterogeneity by combining the influence ranges of nodes and their neighbors at lower computational cost. Network layer centrality captures the corresponding importance of different network layers. Community-based centrality comprehensively considers the importance of each community, and the importance of each node within its community and between different communities. Experimental results on nineteen multi-layer networks show that DIRCI achieves better key node identification performance than the latest algorithms.
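A toy rendition of the layer-weighting idea, using plain degree centrality as a stand-in for DIRCI's richer measures (the weights and the centrality choice are assumptions for illustration):

```python
def layer_weighted_degree(layers, weights):
    """Weighted average of degree centrality across layers.
    Each layer is an adjacency dict over the same node set."""
    nodes = set().union(*(layer.keys() for layer in layers))
    n = len(nodes)
    score = {v: 0.0 for v in nodes}
    for layer, w in zip(layers, weights):
        for v in nodes:
            # degree centrality of v in this layer, scaled by layer weight
            score[v] += w * len(layer.get(v, [])) / (n - 1)
    return score

# Node 'a' is central in the second layer, so the weighted score favors it
layers = [
    {'a': ['b'], 'b': ['a'], 'c': []},
    {'a': ['b', 'c'], 'b': ['a'], 'c': ['a']},
]
score = layer_weighted_degree(layers, weights=[0.5, 0.5])
```

The point of the sketch is that a node's rank can differ per layer, so the aggregation weights materially change which nodes come out as "key".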
Citations: 0
An automated intrusion detection system in IoT system using Attention based Deep Bidirectional Sparse Auto Encoder model
IF 7.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-10-16 | DOI: 10.1016/j.knosys.2024.112633
K. Swathi , G. Hima Bindu
Nowadays, the Internet of Things (IoT) is a smart network connected to the Internet for transmitting gathered data with verified protocols. Attackers frequently use communication protocol defects as the basis for their attacks. Better protection measures are required, since attacks affect the reputations of service providers. Both machine learning (ML) and deep learning (DL) methods have been developed in a number of research works to detect network intrusions. However, the system's security is limited by the rising number of new threats, and such attacks cause critical problems in IoT platforms, cyber-physical systems, wireless networks, and fog computing. The growth of cyber-security attacks reinforces the need for a strong intrusion detection system (IDS) on the IoT platform. The proposed study introduces a robust deep-feature learning mechanism for automatically detecting network intruders on the IoT platform. Initially, input data are gathered from the given dataset. Pre-processing reduces noise in the data and improves data quality through cleaning, outlier removal, and min-max normalization. The proposed Attention-based Deep Bidirectional Sparse Auto Encoder (AD-BiSA) model retrieves the most important features using an attention-based deep Bi-LSTM, and the different IoT network threats are categorized using a sparse Autoencoder approach. The chaotic Seagull Optimization (CSGO) algorithm decreases the loss and enhances the weights in the proposed DL technique. The proposed technique achieves accuracy rates of 99.71% and 98.97% on the UNSW NB15_IDS and NSL-KDD datasets, respectively, and achieves better performance than existing approaches.
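The min-max normalization named in the pre-processing stage is a standard column-wise rescaling to [0, 1]; a dependency-free sketch (not the paper's code):

```python
def min_max_normalize(rows):
    """Column-wise min-max scaling to [0, 1].
    Constant columns are mapped to 0.0 to avoid division by zero."""
    cols = list(zip(*rows))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in rows]

# Each feature column is rescaled independently of the others
normalized = min_max_normalize([[0, 10], [5, 20], [10, 30]])
```

In an IDS pipeline the min/max would be computed on the training split only and reused at inference, so that test traffic cannot shift the scaling.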
Bridge to better understanding: Syntax extension with virtual linking-phrase for natural language inference
IF 7.2 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-10-16 DOI: 10.1016/j.knosys.2024.112608
Seulgi Kim, Seokwon Jeong, Harksoo Kim
Natural language inference (NLI) models based on pretrained language models frequently mispredict the relations between premise and hypothesis sentences, a failure often attributed to overreliance on simple heuristics such as lexical overlap and the presence of negation. To address this problem, we introduce BridgeNet, a novel approach that improves NLI performance and model robustness by generating virtual linking-phrase representations that effectively bridge sentence pairs and by emulating the syntactic structure of hypothesis sentences. We conducted two main experiments to evaluate the effectiveness of BridgeNet. In the first, on four representative NLI benchmarks, BridgeNet improved average accuracy by 1.5 percentage points over previous models by incorporating virtual linking-phrase representations into syntactic features. In the second, which assessed the robustness of NLI models, BridgeNet improved average accuracy by 7.0 percentage points compared with other models. These results reveal the promising potential of our proposed method of bridging premise and hypothesis sentences through virtual linking-phrases.
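The lexical-overlap heuristic the abstract warns about is easy to make concrete; the toy premise–hypothesis pair below (not from the paper or its benchmarks) shows full word overlap without entailment:

```python
def lexical_overlap(premise: str, hypothesis: str) -> float:
    """Fraction of hypothesis tokens that also occur in the premise."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(h & p) / len(h) if h else 0.0

# Every hypothesis word appears in the premise, yet the premise
# (the DOCTOR danced) does not entail the hypothesis (the ACTOR danced).
score = lexical_overlap("the doctor near the actor danced",
                        "the actor danced")
print(score)  # 1.0
```

A classifier that leans on this score alone would wrongly predict entailment here, which is the failure mode that BridgeNet's virtual linking-phrases are meant to mitigate.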
{"title":"Bridge to better understanding: Syntax extension with virtual linking-phrase for natural language inference","authors":"Seulgi Kim ,&nbsp;Seokwon Jeong ,&nbsp;Harksoo Kim","doi":"10.1016/j.knosys.2024.112608","DOIUrl":"10.1016/j.knosys.2024.112608","url":null,"abstract":"<div><div>Natural language inference (NLI) models based on pretrained language models frequently mispredict the relations between premise and hypothesis sentences, attributing this inaccuracy to an overreliance on simple heuristics such as lexical overlap and negation presence. To address this problem, we introduce BridgeNet, a novel approach that improves NLI performance and model robustness by generating virtual linking-phrase representations to effectively bridge sentence pairs and by emulating the syntactic structure of hypothesis sentences. We conducted two main experiments to evaluate the effectiveness of BridgeNet. In the first experiment using four representative NLI benchmarks, BridgeNet improved the average accuracy by 1.5%p over the previous models by incorporating virtual linking-phrase representations into syntactic features. In the second experiment assessing the robustness of NLI models, BridgeNet improved the average accuracy by 7.0%p compared with other models. 
These results reveal the promising potential of our proposed method of bridging premise and hypothesis sentences through virtual linking-phrases.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"305 ","pages":"Article 112608"},"PeriodicalIF":7.2,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142529373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Directed Lobish-based explainable feature engineering model with TTPat and CWINCA for EEG artifact classification
IF 7.2 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-10-16 DOI: 10.1016/j.knosys.2024.112555
Turker Tuncer, Sengul Dogan, Mehmet Baygin, Irem Tasci, Bulent Mungen, Burak Tasci, Prabal Datta Barua, U.R. Acharya

Background and Objective

Electroencephalography (EEG) signals are crucial for deciphering various brain activities. However, these signals are subtle and contaminated by artifacts that can arise for many reasons. The main aim of this paper is to develop a novel, explainable machine learning model that can identify the cause of these artifacts.

Material and method

A new EEG signal dataset was collected to classify various types of artifacts. This dataset contains eight classes: seven are artifacts, and one is the EEG signal without artifacts. A novel feature engineering model has been proposed to classify these artifact classes automatically. The model contains three main steps: (i) feature generation with the proposed transition table pattern (TTPat), (ii) feature selection with the proposed cumulative weight-based iterative neighborhood component analysis (CWINCA), and (iii) classification using t algorithm-based k-nearest neighbors (tkNN). The novelty of this work lies in the TTPat feature extractor and the CWINCA feature selector: TTPat performs a channel-based transformation and extracts 392 features from the transformed EEG signal, and CWINCA selects the most informative of them. The artifacts are then classified using the tkNN algorithm.

Results

The proposed TTPat and CWINCA-based feature engineering model obtained a classification accuracy ranging from 66.39% to 97.69% for 30 cases. We presented the explainable results using a new symbolic language termed Directed Lobish.

Conclusions

The results demonstrate that the proposed explainable feature engineering (EFE) model performs well at artifact detection and classification. Directed Lobish, a new symbolic language, was introduced to obtain explainable results.
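The classification stage builds on k-nearest neighbors; setting aside the paper's t-algorithm refinement (tkNN), the underlying majority-vote kNN over selected feature vectors can be sketched as follows (the feature values and labels are toy stand-ins, not TTPat/CWINCA output):

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbors majority vote with Euclidean distance."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest samples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]         # most frequent label among them

# Toy 2-D feature vectors standing in for selected EEG features
train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_y = np.array([0, 0, 1, 1])  # 0 = artifact-free EEG, 1 = artifact (toy labels)
pred = knn_predict(train_X, train_y, np.array([0.85, 0.85]))
print(pred)  # 1
```

The k=3 vote makes the prediction robust to a single mislabeled or atypical neighbor, which matters when artifact classes overlap in feature space.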
{"title":"Directed Lobish-based explainable feature engineering model with TTPat and CWINCA for EEG artifact classification","authors":"Turker Tuncer ,&nbsp;Sengul Dogan ,&nbsp;Mehmet Baygin ,&nbsp;Irem Tasci ,&nbsp;Bulent Mungen ,&nbsp;Burak Tasci ,&nbsp;Prabal Datta Barua ,&nbsp;U.R. Acharya","doi":"10.1016/j.knosys.2024.112555","DOIUrl":"10.1016/j.knosys.2024.112555","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Electroencephalography (EEG) signals are crucial to decipher various brain activities. However, these EEG signals are subtle and contain various artifacts, which can happen due to various reasons. The main aim of this paper is to develop an explainable novel machine learning model that can identify the cause of these artifacts.</div></div><div><h3>Material and method</h3><div>A new EEG signal dataset was collected to classify various types of artifacts. This dataset contains eight classes: seven are artifacts, and one is the EEG signal without artifacts. A novel feature engineering model has been proposed to classify these artifact classes automatically. This model contains three main steps: (i) feature generation with the proposed transition table pattern (TTPat), (ii) the proposed cumulative weight-based iterative neighborhood component analysis (CWINCA)-based feature selection, and (iii) classification using t algorithm-based k-nearest neighbors (tkNN). The novelty of this work is TTPat feature extractor and CWINCA feature selector. Channel-based transformation is performed using the proposed TTPat, which extracts 392 features from the transformed EEG signal. A novel CWINCA feature selector is proposed. The artifacts are classified using tkNN algorithm.</div></div><div><h3>Results</h3><div>The proposed TTPat and CWINCA-based feature engineering model obtained a classification accuracy ranging from 66.39% to 97.69% for 30 cases. 
We presented the explainable results using a new symbolic language termed Directed Lobish.</div></div><div><h3>Conclusions</h3><div>The results and findings demonstrated that the proposed explainable feature engineering (EFE) model is good at artifact detection and classification. Directed Lobish has been presented to obtain explainable results and is a new symbolic language.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"305 ","pages":"Article 112555"},"PeriodicalIF":7.2,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142529337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
USMDA: Unsupervised Multisource Domain Adaptive ADHD prediction model using neuroimaging
IF 7.2 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-10-15 DOI: 10.1016/j.knosys.2024.112615
Mehak Mengi, Deepti Malhotra
Large-scale cross-site collections of neuroimaging markers (sMRI and fMRI) for studying neurodevelopmental disorders (NDDs) are growing in number. Although a huge amount of data favors machine-learning-based categorization algorithms, the unique heterogeneity of each site can impair cross-site generalization capacity. Creating unsupervised domain adaptation methods for NDDs is critical because obtaining appropriate diagnoses or labels for NDDs can be problematic. In our work, we focus on attention-deficit/hyperactivity disorder (ADHD), the most common and most frequently co-occurring NDD. We present an unsupervised multisource domain adaptation network (USMDA) with four primary components: a Domain Alignment Module, a Discrepancy Estimator, a Pre-trained Model Generator, and an Unsupervised Network. The Domain Alignment Module incrementally and effectively aligns graph representations of the source and target domains. In addition, a binary cross-entropy regularizer is introduced for the first time during the training of a model learned on multiple source domains; it improves existing feature alignment methods such as Transfer Joint Matching (TJM) and Joint Distribution Adaptation (JDA) by learning good unsupervised features. In the Unsupervised Network, a grid-search optimization technique generates the optimal pseudo-labels for unlabeled target data. We validate the proposed technique first against existing feature-level domain adaptation methods such as JDA, TJM, and Correlation Alignment (CORAL) on the publicly accessible ADHD-200 dataset, and then by applying binary cross-entropy within existing DA methods, yielding TJMCE and JDACE. The experimental results show that our proposed USMDA-JDACE method, when applied to multisite sMRI and fMRI ADHD data, significantly outperforms competitive methods for multi-center ADHD diagnosis.
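Of the baseline feature-level methods named above, Correlation Alignment (CORAL) is compact enough to sketch: whiten the source features, then re-color them with the target covariance so second-order statistics match across sites. The two "sites" below are synthetic Gaussian data, purely illustrative:

```python
import numpy as np

def coral(Xs, Xt, reg=1e-3):
    """CORrelation ALignment: whiten source features, then re-color
    them with the target covariance so second-order statistics match."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + reg * np.eye(d)  # regularized source cov
    Ct = np.cov(Xt, rowvar=False) + reg * np.eye(d)  # regularized target cov

    def mat_pow(C, p):
        # Symmetric matrix power via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** p) @ V.T

    return Xs @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5)

rng = np.random.default_rng(0)
Xs = rng.normal(size=(500, 4))                                   # "source site"
Xt = rng.normal(size=(500, 4)) @ np.diag([1.0, 2.0, 0.5, 1.5])   # "target site"
Xs_aligned = coral(Xs, Xt)
# np.cov(Xs_aligned, rowvar=False) now approximates np.cov(Xt, rowvar=False)
```

After alignment, the empirical covariance of Xs_aligned approximates that of Xt, which is exactly the kind of cross-site distributional discrepancy the abstract says harms generalization.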
{"title":"USMDA: Unsupervised Multisource Domain Adaptive ADHD prediction model using neuroimaging","authors":"Mehak Mengi,&nbsp;Deepti Malhotra","doi":"10.1016/j.knosys.2024.112615","DOIUrl":"10.1016/j.knosys.2024.112615","url":null,"abstract":"<div><div>There is an increasing number of large-scale cross-site database collections of neuroimaging markers (sMRI and fMRI) for studying neurodevelopmental illnesses (NDDs). Although a huge amount of data favors machine learning-based categorization algorithms, the unique heterogeneity of each site can impair cross-site generalization capacity. It is critical to create Unsupervised domain adaption methods for NDDs because obtaining appropriate diagnoses or labeling for NDDs might be problematic. In our work, we focus on Attention-deficit/hyperactivity disorder, which is the most common and frequently co-occurring NDD. We present an unsupervised multisource domain adaptation network (USMDA) with four primary components: Domain Alignment Module, Discrepancy Estimator, Pre-trained Model Generator, and Unsupervised Network. The Domain Alignment module is intended to incrementally and effectively align graph representations of the source and target domains. At the same time, the binary cross entropy regularizer is introduced for the first time during the training of a model learned on multiple source domains to improve existing feature alignment methods such as Transfer Joint Matching (TJM) and Joint Distribution Adaptation (JDA) by learning good unsupervised features. In an unsupervised network, the grid search optimization technique generates the optimal pseudo labels for unlabeled target data. We validate our proposed technique first on existing feature-level DA methods such as JDA, TJM, and Correlation alignment (CORAL), on the publically accessible dataset ADHD-200 and then by using binary cross-entropy in existing DA methods such as TJMCE and JDACE. 
The experimental results show that our proposed <span><math><mrow><mi>U</mi><mi>S</mi><mi>M</mi><mi>D</mi><msub><mrow><mi>A</mi></mrow><mrow><mi>J</mi><mi>D</mi><mi>A</mi><mi>C</mi><mi>E</mi></mrow></msub></mrow></math></span> method when applied to multisite sMRI and fMRI ADHD data, can significantly outperform competitive methods for multi-center ADHD diagnosis.</div></div>","PeriodicalId":49939,"journal":{"name":"Knowledge-Based Systems","volume":"305 ","pages":"Article 112615"},"PeriodicalIF":7.2,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142529402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}