
Latest Publications in Knowledge and Information Systems

Caption matters: a new perspective for knowledge-based visual question answering
IF 2.7 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-22 DOI: 10.1007/s10115-024-02166-8
Bin Feng, Shulan Ruan, Likang Wu, Huijie Liu, Kai Zhang, Kun Zhang, Qi Liu, Enhong Chen

Knowledge-based visual question answering (KB-VQA) requires answering questions about a given image with the assistance of external knowledge. Recent work generally designs different multimodal networks to extract visual and textual semantic features for KB-VQA. Despite significant progress, 'caption' information, a textual form of image semantics that can provide visually non-obvious cues for the reasoning process, is often ignored. In this paper, we introduce a novel framework, the Knowledge Based Caption Enhanced Net (KBCEN), designed to integrate caption information into the KB-VQA process. Specifically, for better knowledge reasoning, we exploit caption information comprehensively from both explicit and implicit perspectives. For the former, we explicitly link caption entities, object tags, and question entities to a knowledge graph. For the latter, a pre-trained multimodal BERT carrying natural implicit knowledge is leveraged to co-represent caption tokens, object regions, and question tokens. Moreover, we develop a mutual correlation module to discern intricate correlations between the explicit and implicit representations, thereby facilitating knowledge integration and final prediction. We conduct extensive experiments on three publicly available datasets (OK-VQA v1.0, OK-VQA v1.1, and A-OKVQA). Both quantitative and qualitative results demonstrate the superiority and rationality of the proposed KBCEN.
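The mutual correlation module is described only at a high level in the abstract. The sketch below illustrates one plausible reading, cross-attention between the explicit (knowledge-graph-linked) and implicit (multimodal BERT) token representations followed by a joint classifier; the class `MutualCorrelationFusion`, its dimensions, and the pooling choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MutualCorrelationFusion(nn.Module):
    """Toy cross-correlation fusion of explicit (KG-based) and implicit
    (multimodal-BERT) token representations. Names and dimensions are
    illustrative assumptions, not the authors' code."""

    def __init__(self, dim: int, num_answers: int):
        super().__init__()
        self.proj_explicit = nn.Linear(dim, dim)
        self.proj_implicit = nn.Linear(dim, dim)
        self.classifier = nn.Linear(2 * dim, num_answers)

    def forward(self, explicit: torch.Tensor, implicit: torch.Tensor) -> torch.Tensor:
        # explicit: (batch, n_e, dim) -- e.g. caption/question/object KG entities
        # implicit: (batch, n_i, dim) -- e.g. multimodal BERT token states
        e = self.proj_explicit(explicit)
        i = self.proj_implicit(implicit)
        # Pairwise correlation between the two views: (batch, n_e, n_i).
        corr = torch.matmul(e, i.transpose(1, 2)) / e.size(-1) ** 0.5
        # Each explicit token attends over implicit tokens, and vice versa.
        e_enriched = torch.matmul(torch.softmax(corr, dim=-1), i).mean(dim=1)
        i_enriched = torch.matmul(torch.softmax(corr.transpose(1, 2), dim=-1), e).mean(dim=1)
        return self.classifier(torch.cat([e_enriched, i_enriched], dim=-1))


scores = MutualCorrelationFusion(dim=768, num_answers=3000)(
    torch.randn(2, 12, 768), torch.randn(2, 40, 768)
)
print(scores.shape)  # torch.Size([2, 3000])
```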

Citations: 0
An adaptive and late multifusion framework in contextual representation based on evidential deep learning and Dempster–Shafer theory
IF 2.7 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-22 DOI: 10.1007/s10115-024-02150-2
Doaa Mohey El-Din, Aboul Ella Hassanein, Ehab E. Hassanien

There is growing interest in multidisciplinary research on multimodal synthesis technology to stimulate diversity of modal interpretation in different application contexts. The requirement for modality diversity across multiple contextual representation fields stems from the conflicting nature of data from multitarget sensors, which introduces further obstacles for multiobject classification, including ambiguity, uncertainty, imbalance, and redundancy. This paper proposes a new adaptive, late multimodal fusion framework that uses evidence-enhanced deep learning guided by Dempster–Shafer theory and a concatenation strategy to interpret multiple modalities and contextual representations, yielding a larger number of features for interpreting unstructured multimodal data types through late fusion. Furthermore, it is designed as a multifusion learning solution to the modality- and context-based fusion problem, leading to improved decisions. It creates a fully automated, selective deep neural network and constructs an adaptive fusion model for all modalities based on the input type. The proposed framework is implemented in five layers: a software-defined fusion layer, a preprocessing layer, a dynamic classification layer, an adaptive fusion layer, and an evaluation layer. The framework formalizes the modality/context-based problem as an adaptive multifusion framework operating at the late fusion level. Particle swarm optimization is used in multiple smart context systems to improve the final classification layer with optimal parameters, tracking 30 changes in the hyperparameters of the deep learning training models. Multiple experiments with multimodal inputs in multiple contexts illustrate the behavior of the proposed multifusion framework. Experimental results on four challenging datasets, covering military, agricultural, COVID-19, and food health data, compare favorably with other state-of-the-art multiple fusion models. The main strengths of the proposed adaptive fusion framework are that it automatically classifies multiple objects with a reduced feature set and resolves ambiguity and inconsistency in the fused data. In addition, it increases certainty, reduces redundancy, and mitigates data imbalance. The multimodal, multicontext experiments with the proposed fusion framework achieve 98.45% accuracy.
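Since the framework's evidential fusion is guided by Dempster–Shafer theory, a minimal sketch of Dempster's rule of combination may help make the late-fusion step concrete. The function below combines two mass functions whose focal elements are sets of class labels; the example labels and mass values are hypothetical, and the paper's adaptive, concatenation-based variant is not reproduced here.

```python
from itertools import product


def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of class labels. Illustrative sketch only."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalize by the non-conflicting mass (1 - K).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}


# Two hypothetical modality-level classifiers expressing belief over {tank, truck}:
m_visual = {frozenset({"tank"}): 0.7, frozenset({"tank", "truck"}): 0.3}
m_thermal = {frozenset({"tank"}): 0.5, frozenset({"truck"}): 0.3,
             frozenset({"tank", "truck"}): 0.2}
print(dempster_combine(m_visual, m_thermal))
# {'tank'} ends up with roughly 0.81 of the combined belief.
```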

Citations: 0
Optimal intelligent information retrieval and reliable storage scheme for cloud environment and E-learning big data analytics
IF 2.7 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-22 DOI: 10.1007/s10115-024-02152-0
Chandrasekar Venkatachalam, Shanmugavalli Venkatachalam

Online learning systems in the education sector are now widely used and have become a new trend, generating large amounts of educational data from students' activities. Improving online learning experiences requires sophisticated data analysis techniques, and Big Data methods make it possible to add value to E-learning platforms through efficient processing of large volumes of learning data. Over time, the E-learning management system's repository expands into a rich source of learning materials. Subject matter experts can draw on these resources to reuse previously created content when authoring new online material, and students benefit from access to the documents most pertinent to their learning objectives. This paper proposes an optimal intelligent information retrieval and reliable storage (OIIRS) scheme for E-learning based on hybrid deep learning techniques, assuming that relevant E-learning documents are stored in the cloud and updated dynamically according to users' status. First, we present a highly robust, lightweight cipher, an optimized CLEFIA, for securely storing data in local repositories, which improves the reliability of data loading; an improved butterfly optimization algorithm provides an optimal solution for CLEFIA by selecting private keys. In addition, a hybrid deep learning method, a backward diagonal search-based deep recurrent neural network (BD-DRNN), is introduced for intelligent information retrieval based on keywords rather than semantics. Feature extraction and key-feature matching are performed by a modified Hungarian optimization (MHO) algorithm that improves search accuracy. Finally, we evaluate the proposed OIIRS scheme on several benchmark datasets and assess its performance through simulation.
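The retrieval step pairs query keywords with document features through Hungarian-style assignment. The sketch below uses the standard Hungarian algorithm from SciPy (scipy.optimize.linear_sum_assignment) over cosine similarities to rank documents; the function `match_score`, the embedding dimensions, and the scoring rule are assumptions for illustration, not the paper's modified Hungarian optimization.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """Score one document against a keyword query by optimally pairing each
    query keyword vector with a distinct document feature vector."""
    # Cosine similarity matrix: rows = query keywords, cols = document features.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T
    # The Hungarian algorithm minimizes cost, so negate similarities to maximize.
    rows, cols = linear_sum_assignment(-sim)
    return float(sim[rows, cols].mean())


rng = np.random.default_rng(0)
query = rng.normal(size=(3, 64))                         # 3 keyword embeddings
corpus = [rng.normal(size=(10, 64)) for _ in range(5)]   # 5 candidate documents
ranking = sorted(range(len(corpus)),
                 key=lambda i: match_score(query, corpus[i]), reverse=True)
print("documents ranked by match score:", ranking)
```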

Citations: 0
Minimum spanning tree clustering approach for effective feature partitioning in multi-view ensemble learning
IF 2.7 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-18 DOI: 10.1007/s10115-024-02182-8
Aditya Kumar, Jainath Yadav

This paper introduces a novel approach to feature set partitioning in multi-view ensemble learning (MVEL) that utilizes the minimum spanning tree clustering (MSTC) algorithm. The proposed method aims to generate informative and diverse feature subsets to enhance classification performance in the MVEL framework. The MSTC algorithm constructs a minimum spanning tree based on correlation measures and divides features into non-overlapping clusters, each representing a distinct view used to improve ensemble learning. We evaluate the effectiveness of the MSTC-based MVEL framework on ten high-dimensional datasets using support vector machines. Results indicate significant improvements in classification performance compared with single-view learning and other cutting-edge feature partitioning approaches. Statistical analysis confirms that the proposed MVEL framework achieves classification accuracy that is both reliable and competitive.
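A minimal sketch of the general MST-based feature partitioning idea follows: build a correlation-based distance between features, take its minimum spanning tree, and cut the heaviest edges to obtain non-overlapping views. The function `mst_feature_views` and its cutting rule are assumptions about how such a partitioner could look, not the paper's exact MSTC algorithm.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components


def mst_feature_views(X: np.ndarray, n_views: int) -> list:
    """Partition feature indices into n_views clusters by building a minimum
    spanning tree over a correlation-based distance and cutting its heaviest
    edges. A sketch of the general MSTC idea, not the paper's algorithm."""
    corr = np.corrcoef(X, rowvar=False)          # feature-feature correlation
    dist = 1.0 - np.abs(np.nan_to_num(corr))     # low distance = strongly correlated
    mst = minimum_spanning_tree(dist).tocoo()
    # Drop the (n_views - 1) heaviest MST edges to split the tree into n_views parts.
    keep = np.argsort(mst.data)[: len(mst.data) - (n_views - 1)]
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=mst.shape)
    _, labels = connected_components(pruned, directed=False)
    return [np.where(labels == c)[0] for c in np.unique(labels)]


X = np.random.default_rng(1).normal(size=(200, 30))   # 200 samples, 30 features
views = mst_feature_views(X, n_views=4)
print([v.tolist() for v in views])                    # disjoint feature index sets
```

Each returned index set would then feed one base classifier (e.g., an SVM per view), with the ensemble combining their predictions.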

Citations: 0
Robust anomaly detection via adversarial counterfactual generation
IF 2.7 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-17 DOI: 10.1007/s10115-024-02172-w
Angelica Liguori, Ettore Ritacco, Francesco Sergio Pisani, Giuseppe Manco

The capability to devise robust outlier and anomaly detection tools is an important research topic in machine learning and data mining. Recent techniques have been focusing on reinforcing detection with sophisticated data generation tools that successfully refine the learning process by generating variants of the data that expand the recognition capabilities of the outlier detector. In this paper, we propose ARN, a semi-supervised anomaly detection and generation method based on adversarial counterfactual reconstruction. ARN exploits a regularized autoencoder to optimize the reconstruction of variants of normal examples with minimal differences that are recognized as outliers. The combination of regularization and counterfactual reconstruction helps to stabilize the learning process, which results in both realistic outlier generation and substantially extended detection capability. In fact, the counterfactual generation enables a smart exploration of the search space by successfully relating small changes in all the actual samples from the true distribution to high anomaly scores. Experiments on several benchmark datasets show that our model improves the current state of the art by valuable margins because of its ability to model the true boundaries of the data manifold.
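ARN's adversarial counterfactual generation is not reproducible from the abstract alone, but the underlying scoring principle, reconstruction error from a regularized autoencoder trained on normal data, can be sketched. The code below is a minimal illustration under that assumption; the architecture, the weight_decay regularization, and the training setup are placeholders rather than the authors' model.

```python
import torch
import torch.nn as nn


class TinyAutoencoder(nn.Module):
    """Minimal regularized autoencoder for reconstruction-based anomaly scoring.
    ARN's adversarial counterfactual generation is not reproduced here; this only
    illustrates the 'score = reconstruction error' idea."""

    def __init__(self, dim: int, latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train_and_score(normal_data: torch.Tensor, test_data: torch.Tensor) -> torch.Tensor:
    model = TinyAutoencoder(normal_data.size(1))
    # weight_decay supplies the L2 regularization of the autoencoder.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    for _ in range(200):                       # train on normal examples only
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(normal_data), normal_data)
        loss.backward()
        opt.step()
    with torch.no_grad():                      # anomaly score = reconstruction error
        return ((model(test_data) - test_data) ** 2).mean(dim=1)


normal = torch.randn(512, 16)
test = torch.cat([torch.randn(8, 16), torch.randn(8, 16) + 4.0])  # last 8 are shifted
print(train_and_score(normal, test))           # shifted rows get higher scores
```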

Citations: 0
Optimizing subgraph retrieval and matching with an efficient indexing scheme
IF 2.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-16 DOI: 10.1007/s10115-024-02175-7
Jiezhong He, Yixin Chen, Zhouyang Liu, Dongsheng Li
{"title":"Optimizing subgraph retrieval and matching with an efficient indexing scheme","authors":"Jiezhong He, Yixin Chen, Zhouyang Liu, Dongsheng Li","doi":"10.1007/s10115-024-02175-7","DOIUrl":"https://doi.org/10.1007/s10115-024-02175-7","url":null,"abstract":"","PeriodicalId":54749,"journal":{"name":"Knowledge and Information Systems","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141642985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An online ensemble classification algorithm for multi-class imbalanced data stream
IF 2.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-16 DOI: 10.1007/s10115-024-02184-6
Meng Han, Chunpeng Li, Fanxing Meng, Feifei He, Ruihua Zhang
{"title":"An online ensemble classification algorithm for multi-class imbalanced data stream","authors":"Meng Han, Chunpeng Li, Fanxing Meng, Feifei He, Ruihua Zhang","doi":"10.1007/s10115-024-02184-6","DOIUrl":"https://doi.org/10.1007/s10115-024-02184-6","url":null,"abstract":"","PeriodicalId":54749,"journal":{"name":"Knowledge and Information Systems","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141642026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A joint knowledge representation learning of sentence vectors weighting and primary neighbor constraints
IF 2.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-16 DOI: 10.1007/s10115-024-02174-8
Erping Zhao, Bailin Chen, BianBaDroMa, Ngodrup
{"title":"A joint knowledge representation learning of sentence vectors weighting and primary neighbor constraints","authors":"Erping Zhao, Bailin Chen, BianBaDroMa, Ngodrup","doi":"10.1007/s10115-024-02174-8","DOIUrl":"https://doi.org/10.1007/s10115-024-02174-8","url":null,"abstract":"","PeriodicalId":54749,"journal":{"name":"Knowledge and Information Systems","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141640406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction: An analysis of large language models: their impact and potential applications
IF 2.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-16 DOI: 10.1007/s10115-024-02157-9
G. Bharathi Mohan, R. Prasanna Kumar, P. Vishal Krishh, A. Keerthinathan, G. Lavanya, Meka Kavya Uma Meghana, Sheba Sulthana, Srinath Doss
{"title":"Correction: An analysis of large language models: their impact and potential applications","authors":"G. Bharathi Mohan, R. Prasanna Kumar, P. Vishal Krishh, A. Keerthinathan, G. Lavanya, Meka Kavya Uma Meghana, Sheba Sulthana, Srinath Doss","doi":"10.1007/s10115-024-02157-9","DOIUrl":"https://doi.org/10.1007/s10115-024-02157-9","url":null,"abstract":"","PeriodicalId":54749,"journal":{"name":"Knowledge and Information Systems","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141643486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Knowledge filter contrastive learning for recommendation
IF 2.5 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-16 DOI: 10.1007/s10115-024-02158-8
Boshen Xia, Jiwei Qin, Lu Han, Aohua Gao, Chao Ma
{"title":"Knowledge filter contrastive learning for recommendation","authors":"Boshen Xia, Jiwei Qin, Lu Han, Aohua Gao, Chao Ma","doi":"10.1007/s10115-024-02158-8","DOIUrl":"https://doi.org/10.1007/s10115-024-02158-8","url":null,"abstract":"","PeriodicalId":54749,"journal":{"name":"Knowledge and Information Systems","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141643203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0