
Latest publications in Data & Knowledge Engineering

Multi-Granularity History Graph Network for temporal knowledge graph reasoning
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-08-05; DOI: 10.1016/j.datak.2025.102496
Jun Zhu, Yan Fu, Junlin Zhou, Duanbing Chen
Reasoning on knowledge graphs (KGs) falls into two main categories: predicting missing facts and predicting unknown future facts. However, when it comes to future prediction, it becomes crucial to incorporate temporal information and add timestamps to KGs, thereby forming temporal knowledge graphs (TKGs). The key aspect of reasoning lies in treating a TKG as a sequence of static KGs in order to effectively grasp temporal information. Additionally, it is equally important to consider the evolution of facts from various perspectives. Existing models tend to replicate the original time granularity of the data while modeling TKGs, often disregarding the impact of the minimum time period in the evolution process. Furthermore, historical information is typically perceived as a single sequence of facts, with a lack of diversity in strategies (e.g., modeling sequences with varying granularities or lengths) to capture complex temporal dynamics. This unified approach may lead to the loss of critical information during the modeling process. Moreover, the process of historical evolution often exhibits complex periodic transformation characteristics, and associated events do not necessarily follow a fixed time period. Therefore, a single granularity is insufficient to model periodic events with dynamic changes in history. Consequently, we propose the Multi-Granularity History Graph Network (MGHGN), an innovative model for TKG reasoning. MGHGN dynamically models various event evolution periods by constructing representations with multiple time granularities, and integrates various modeling methods to reason about potential future facts. Our model adeptly captures valuable insights from multi-granularity history and employs diverse approaches to model historical information. Experimental results on six benchmark datasets demonstrate that MGHGN outperforms state-of-the-art methods.
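The abstract stays at the architectural level; as a rough, hypothetical illustration of the multi-granularity idea (not the authors' implementation), the Python sketch below buckets timestamped quadruples into coarser static-KG snapshots so that the same history can be encoded at several granularities and the resulting sequences fused by a downstream model.

```python
from collections import defaultdict

# Illustrative (subject, relation, object, timestamp) quadruples; timestamps are
# integers in the dataset's base unit (e.g. days). Entity names are made up.
quadruples = [
    ("s1", "meets",  "o1", 0),
    ("s1", "meets",  "o2", 3),
    ("s2", "visits", "o1", 7),
    ("s2", "visits", "o3", 14),
]

def snapshots(quads, granularity):
    """Group facts into static-KG snapshots, one per time bucket of `granularity` units."""
    buckets = defaultdict(set)
    for s, r, o, t in quads:
        buckets[t // granularity].add((s, r, o))
    return [buckets[k] for k in sorted(buckets)]

# The same history viewed at three granularities (e.g. daily, weekly, bi-weekly);
# a multi-granularity model can encode each sequence and combine the representations.
for g in (1, 7, 14):
    print(f"granularity={g}:", [sorted(kg) for kg in snapshots(quadruples, g)])
```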
Citations: 0
Advancing credit risk assessment in the retail banking industry: A hybrid approach using time series and supervised learning models
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-07-23; DOI: 10.1016/j.datak.2025.102490
Sebastian H. Goldmann, Marcos R. Machado, Joerg R. Osterrieder
Credit risk assessment remains a central challenge in retail banking, with conventional models often falling short in predictive accuracy and adaptability to granular customer behavior. This study explores the potential of Time Series Classification (TSC) algorithms to enhance credit risk modeling by analyzing customers’ historical end-of-day balance data. We compare traditional Machine Learning (ML) models – including Logistic Regression and XGBoost – with advanced TSC methods such as Shapelets, Long Short-Term Memory (LSTM) networks, and Canonical Interval Forests (CIF). Our results show that TSC algorithms, particularly CIF and Shapelet-based methods, significantly outperform traditional approaches. When using CIF-derived Probability of Default (PD) estimates as additional features in an XGBoost model, predictive performance improved notably: the combined model achieved an Area under the Curve (AUC) of 0.81, compared to 0.79 for CIF alone and 0.77 for XGBoost without the CIF input. These findings underscore the value of integrating temporal features into credit risk assessment frameworks. Moreover, the complementary strengths of the TSC and XGBoost models across different Receiver Operating Characteristic (ROC) curve regions demonstrate the practical benefits of model stacking. However, performance dropped when using aggregated monthly data, highlighting the importance of preserving high-frequency behavioral signals. This research contributes to more accurate, interpretable, and robust credit risk models and offers a pathway for banks to leverage time series data for improved risk forecasting.
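The stacking step described in the abstract can be illustrated generically. In the sketch below, a random forest on synthetic balance histories stands in for the CIF stage (the paper's exact feature pipeline is not given here), and its probability output is appended as an extra feature for XGBoost; all data and dimensions are made up. In practice the stage-1 probabilities for the training fold would be produced out-of-fold to avoid target leakage.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Toy data: 500 customers, 90 end-of-day balances each, plus 10 tabular features.
X_series = rng.normal(size=(500, 90))
X_tabular = rng.normal(size=(500, 10))
y = rng.integers(0, 2, size=500)

Xs_tr, Xs_te, Xt_tr, Xt_te, y_tr, y_te = train_test_split(
    X_series, X_tabular, y, test_size=0.3, random_state=0)

# Stage 1: a time-series model (stand-in for CIF) produces a PD estimate.
ts_model = RandomForestClassifier(n_estimators=100, random_state=0)
ts_model.fit(Xs_tr, y_tr)
pd_tr = ts_model.predict_proba(Xs_tr)[:, 1]
pd_te = ts_model.predict_proba(Xs_te)[:, 1]

# Stage 2: XGBoost on tabular features plus the stage-1 PD as an extra column.
xgb = XGBClassifier(n_estimators=200, max_depth=3)
xgb.fit(np.column_stack([Xt_tr, pd_tr]), y_tr)
proba = xgb.predict_proba(np.column_stack([Xt_te, pd_te]))[:, 1]
print("AUC:", roc_auc_score(y_te, proba))
```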
Citations: 0
TEDA-driven adaptive stream clustering for concept drift detection
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-07-22; DOI: 10.1016/j.datak.2025.102484
Zahra Rezaei, Hedieh Sajedi
The rapid growth of data-driven applications has underlined the need for robust methods to analyze and cluster streaming data. Data stream clustering aims to uncover interesting knowledge concealed within data streams, which are typically fast and whose structures and patterns evolve over time. However, most current methods face significant challenges, such as the inability to detect arbitrarily shaped clusters, difficulty handling outliers, poor adaptation to concept drift, and strong dependency on predefined parameters. To tackle these challenges, we propose a novel Typicality and Eccentricity Data Analysis (TEDA)-based concept drift detection stream clustering algorithm, which divides the clustering problem into two subproblems: micro-clusters and macro-clusters. Our methodology utilizes a TEDA-based concept drift detection approach to enhance data stream clustering. Our method employs two models to monitor the data stream, retaining the information of a previous concept while tracking the emergence of a new one. The models represent two distinct concepts when the intersection of their data samples, as measured by the Jaccard index, is significantly low. TEDA-CDD is compared to known methods from the literature in experiments using synthetic and real-world datasets that simulate real-world applications. By dynamically updating clusters through model reuse or creation, our algorithm ensures adaptability to real-time changes in data distributions. The proposed algorithm was comprehensively evaluated using the KDDCup-99 dataset, an intrusion detection benchmark, under diverse scenarios including concept drifts, evolving data distributions, varying cluster sizes, and outlier conditions. Empirical results demonstrated the algorithm's superiority over baseline approaches such as DenStream, DStream, ClusTree, and DGStream, achieving perfect performance metrics. These findings emphasize the effectiveness of our algorithm in addressing real-world streaming data challenges, combining high sensitivity to concept drift with computational efficiency, adaptability, and robust clustering capabilities.
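For readers unfamiliar with TEDA, the sketch below implements the recursive typicality and eccentricity updates in the commonly cited Angelov-style formulation; it is a minimal building block only, not the TEDA-CDD algorithm, and the paper's variant may differ in detail.

```python
import numpy as np

class TEDA:
    """Recursive typicality/eccentricity estimator (Angelov-style recursions)."""

    def __init__(self):
        self.k = 0          # number of samples seen so far
        self.mean = None    # running mean vector
        self.var = 0.0      # running mean squared distance to the mean

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.k += 1
        if self.k == 1:
            self.mean = x.copy()
            return 1.0, 0.0                      # first point: fully typical by convention
        self.mean = (self.k - 1) / self.k * self.mean + x / self.k
        d2 = float(np.dot(x - self.mean, x - self.mean))
        self.var = (self.k - 1) / self.k * self.var + d2 / (self.k - 1)
        ecc = 1.0 / self.k + (d2 / (self.k * self.var) if self.var > 0 else 0.0)
        return 1.0 - ecc, ecc                    # (typicality, eccentricity)

teda = TEDA()
stream = np.vstack([np.random.default_rng(1).normal(size=(50, 2)),
                    [[8.0, 8.0]]])               # last point is an obvious outlier
for x in stream:
    typ, ecc = teda.update(x)

# A sample is usually flagged when its normalized eccentricity ecc/2 exceeds
# (m**2 + 1) / (2k), the Chebyshev-style threshold for an m-sigma condition.
m = 3
print("last point anomalous:", ecc / 2 > (m**2 + 1) / (2 * teda.k))
```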
Citations: 0
Supporting Sound Multi-Level Modeling—Specification and Implementation of a Multi-Dimensional Modeling Approach
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-07-19; DOI: 10.1016/j.datak.2025.102481
Thomas Kühne, Manfred A. Jeusfeld
Multiple levels of classification naturally occur in many domains. Several multi-level modeling approaches account for this, and a subset of them attempt to provide their users with sanity-checking mechanisms in order to guard them against conceptually ill-formed models. Historically, the respective multi-level well-formedness schemes have either been overly restrictive or too lax. Orthogonal Ontological Classification has been proposed as a foundation for sound multi-level modeling that combines the selectivity of strict schemes with the flexibility afforded by laxer schemes. In this article, we present the second iteration of a formalization of Orthogonal Ontological Classification, which we empirically validated to demonstrate some of its hitherto only postulated claims using an implementation in ConceptBase. We discuss the expressiveness of the formal language used, ConceptBase’s evaluation efficiency, and the usability of our realization based on a digital twin example model.
Citations: 0
Inference-based schema discovery for RDF data
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-07-19; DOI: 10.1016/j.datak.2025.102491
Redouane Bouhamoum, Zoubida Kedad, Stéphane Lopes
The Semantic Web represents a huge information space where an increasing number of datasets, described in RDF, are made available to users and applications. In this context, the data is not constrained by a predefined schema. In RDF datasets, the schema may be incomplete or even missing. While this offers high flexibility in creating data sources, it also makes their use difficult. Several works have addressed the problem of automatic schema discovery for RDF datasets, but existing approaches rely only on the explicit information provided by the data source, which may limit the quality of the results. Indeed, in an RDF data source, an entity is described by explicitly declared properties, but also by implicit properties that can be derived using reasoning rules. These implicit properties are not considered by existing schema discovery approaches.
In this work, we propose a first contribution towards a hybrid schema discovery approach capable of exploiting all the semantics of a data source, which is represented not only by the explicitly declared triples, but also by the ones that can be inferred through reasoning. By considering both explicit and implicit properties, the quality of the generated schema is improved. We provide a scalable design of our approach to enable the processing of large RDF data sources while improving the quality of the results. We present some experiments which demonstrate the efficiency of our proposal and the quality of the discovered schema.
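As a minimal illustration of the principle (hypothetical data, not the authors' algorithm), the rdflib sketch below materializes one simple entailment, rdfs:subPropertyOf, so that implicit properties become explicit triples, and then groups entities by their property sets, the kind of profile a schema discovery step can cluster on.

```python
from rdflib import Graph, RDFS

data = """
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:headOf rdfs:subPropertyOf ex:worksFor .
ex:alice  ex:headOf   ex:lab1 .
ex:bob    ex:worksFor ex:lab1 .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Materialize rdfs:subPropertyOf entailments so implicit properties become explicit.
changed = True
while changed:
    changed = False
    for sub, _, sup in g.triples((None, RDFS.subPropertyOf, None)):
        for s, _, o in list(g.triples((None, sub, None))):
            if (s, sup, o) not in g:
                g.add((s, sup, o))
                changed = True

# Group entities by the set of properties that describe them (explicit + inferred);
# without the inference step, ex:alice and ex:bob would not share ex:worksFor.
profiles = {}
for s in set(g.subjects()):
    props = frozenset(p for p in g.predicates(s) if p != RDFS.subPropertyOf)
    if props:
        profiles.setdefault(props, set()).add(s)

for props, entities in profiles.items():
    print(sorted(str(p) for p in props), "->", sorted(str(e) for e in entities))
```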
Citations: 0
Data and knowledge engineering: Insights from forty years of publication
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-07-17; DOI: 10.1016/j.datak.2025.102492
Jacky Akoka, Isabelle Comyn-Wattiau, Nicolas Prat, Veda C. Storey
The journal, Data and Knowledge Engineering (DKE), first published by Elsevier in 1985, has now been in existence for forty years. This journal has evolved and matured to play an important role in establishing and progressing research on conceptual modeling and related areas. To accurately characterize the history and current state of the research contributions and their impact, we analyze its publications in three phases, employing the bibliometric techniques of co-citation, bibliographic coupling, main path analysis, and topic modeling. Using descriptive bibliometrics, the first phase provides an overview of the articles that have been published in the journal and analyzes the dynamics and trend patterns of publications, specifically their main topics and contributions. Using bibliometric mapping, the second phase identifies the journal's intellectual structure, its primary research themes, and the pathways through which knowledge is disseminated between the most influential articles. The third phase entails a comparison of DKE with other scientific journals that share at least some of its scope. In addition to delineating the strengths of DKE, we provide insights into how DKE might continue to evolve and progress its contributions to the field.
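For readers unfamiliar with the two relatedness measures used in the bibliometric mapping phase, here is a toy numeric illustration (a synthetic citation matrix, not the journal's actual data): with A[i, j] = 1 when paper i cites paper j, bibliographic coupling counts shared references (A times A transposed) and co-citation counts shared citers (A transposed times A).

```python
import numpy as np

# Toy citation matrix for 4 papers: A[i, j] = 1 if paper i cites paper j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

coupling   = A @ A.T   # bibliographic coupling: papers citing the same sources
cocitation = A.T @ A   # co-citation: papers cited together by the same sources

np.fill_diagonal(coupling, 0)
np.fill_diagonal(cocitation, 0)
print("bibliographic coupling:\n", coupling)
print("co-citation:\n", cocitation)
```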
Citations: 0
Detecting and repairing anomaly patterns in business process event logs
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-07-16; DOI: 10.1016/j.datak.2025.102488
Jonghyeon Ko, Marco Comuzzi, Fabrizio Maria Maggi
Event log anomaly detection and log repairing concern the identification of anomalous traces in an event log and the reconstruction of a correct trace for the anomalous ones, respectively. Trace-level anomalies in event logs often appear according to specific patterns, such as events being inserted, repeated, or skipped. This paper proposes P-BEAR (Pattern-Based Event Log Anomaly Reconstruction), a semi-supervised pattern-based anomaly detection and log repairing approach that exploits the pattern-based nature of trace-level anomalies in event logs. P-BEAR captures, in a set of ad-hoc graphs, the behaviour of clean traces in a log and uses these graphs to identify anomalous traces, determine the specific anomaly pattern that applies to them, and then reconstruct the correct trace. The proposed approach is evaluated using artificial and real event logs against traditional trace alignment in conformance checking, the edit distance-based alignment method, and an unsupervised method based on deep learning. Overall, the proposed method outperforms the alignment method in anomalous trace reconstruction while providing interpretability with respect to anomaly pattern classification. P-BEAR is also quicker to execute, and its performance is more balanced across different types of anomaly patterns.
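P-BEAR itself works on a set of ad-hoc graphs built from clean traces; the sketch below is a deliberately simplified illustration of the underlying intuition (learn directly-follows pairs from clean traces, then flag traces containing unseen transitions) and is not the authors' algorithm.

```python
clean_traces = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
]

# Directly-follows pairs observed in the clean log.
follows = {pair for trace in clean_traces for pair in zip(trace, trace[1:])}

def anomalous_transitions(trace):
    """Return the transitions of `trace` never observed in the clean log."""
    return [pair for pair in zip(trace, trace[1:]) if pair not in follows]

# A trace where "check" was skipped: the ("register", "approve") transition is unseen,
# which also hints at the repair (re-insert the skipped activity).
print(anomalous_transitions(["register", "approve", "notify"]))
```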
Citations: 0
Behavior Driven Development for 3D games
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-07-09; DOI: 10.1016/j.datak.2025.102486
Fernando Pastor Ricós, Beatriz Marín, I.S.W.B. Prasetya, Tanja E.J. Vos, Joseph Davidson, Karel Hovorka
Computer 3D games are complex software environments that require novel testing processes to ensure high-quality standards. The Intelligent Verification/Validation for Extended Reality Based Systems (iv4XR) framework addresses this need by enabling the implementation of autonomous agents to automate game testing scenarios. This framework facilitates the automation of regression test cases for complex 3D games like Space Engineers. Nevertheless, the technical expertise required to define test scripts using iv4XR can constrain seamless collaboration between developers and testers. This paper reports how integrating a Behavior-Driven Development (BDD) approach with the iv4XR framework allows the industrial company behind Space Engineers to automate regression testing. The success of this industrial collaboration has inspired the iv4XR team to integrate the BDD approach to improve the automation of play-testing for the experimental 3D game LabRecruits. Furthermore, the iv4XR framework has been extended with tactical programming to enable the automation of long-play test scenarios in Space Engineers. These results underscore the versatility of the iv4XR framework in supporting diverse testing approaches while showcasing how BDD empowers users to create, manage, and execute automated game tests using comprehensive and human-readable statements.
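The iv4XR and Space Engineers step implementations are not reproduced here; as a generic, hypothetical illustration of how a human-readable BDD scenario maps onto executable steps, the Python sketch below uses the behave library with a stub agent standing in for the real game-driving layer.

```python
# steps/game_steps.py -- step definitions for a Gherkin scenario such as:
#   Scenario: Button opens the adjacent door
#     Given the agent is placed near "button1"
#     When the agent interacts with "button1"
#     Then the door "door1" should be open
from behave import given, when, then

class StubGameAgent:
    """Stand-in for the real test-agent layer (e.g. an iv4XR agent behind an adapter)."""
    def __init__(self):
        self.open_doors = set()
    def navigate_to(self, entity):
        pass                                  # real agent: path-find to the entity
    def interact(self, entity):
        if entity == "button1":               # real agent: press the in-game button
            self.open_doors.add("door1")
    def is_open(self, entity):
        return entity in self.open_doors      # real agent: query the game state

@given('the agent is placed near "{entity}"')
def step_place_agent(context, entity):
    context.agent = StubGameAgent()
    context.agent.navigate_to(entity)

@when('the agent interacts with "{entity}"')
def step_interact(context, entity):
    context.agent.interact(entity)

@then('the door "{entity}" should be open')
def step_door_open(context, entity):
    assert context.agent.is_open(entity)
```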
Citations: 0
Conceptual modeling: Foundations, a historical perspective, and a vision for the future
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-07-07; DOI: 10.1016/j.datak.2025.102483
John Mylopoulos, Giancarlo Guizzardi, Nicola Guarino
We recount the foundations of Conceptual Modeling in Computer Science, Philosophy and Cognitive Science, and their implications for what concepts, conceptualizations, and conceptual models are. We then review the history of the field, considering earlier work by the three co-authors, and highlight some of the contributions that made it what it is. Finally, we propose three research directions whose solutions could advance the field and will hopefully be addressed in the future. Our study is intended to help to circumscribe and characterize the field. It draws ideas from Philosophy, Cognitive Science, Engineering and the Social Sciences, as well as several areas within Computer Science, including Programming Languages, Artificial Intelligence, Databases, Software Engineering, and Information Systems Engineering.
Citations: 0
IDL-BiGRU: Integrated deep learning assisted smart scheduling of big data over cloud environment
IF 2.7; CAS Tier 3 (Computer Science); Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; Pub Date: 2025-07-06; DOI: 10.1016/j.datak.2025.102489
Rama Satish K V, Vibha M B, Lovely Sasidharan
The rapid expansion of Internet of Things (IoT) applications generates a continuous and massive flow of data, creating significant challenges in both data processing and storage management. Cloud computing offers scalable infrastructure to handle such data-intensive workloads, but optimal task scheduling remains critical to ensure performance and resource efficiency. Traditional scheduling algorithms often fall short due to limited adaptability and consideration of only a few system parameters. In this paper, a novel integrated deep learning-assisted scheduling framework is proposed for scheduling big data over a cloud environment. The proposed framework integrates deep reinforcement learning with a bidirectional gated recurrent unit (the IDL-BiGRU model) to intelligently schedule tasks based on real-time system states. The IDL-BiGRU model leverages the advantage of deep Q-learning for decision making and BiGRU's ability to capture bidirectional temporal dependencies in task and resource usage patterns. In this work, RAM, CPU, network bandwidth utilization, and disk storage are considered for scheduling purposes. The suggested method aims to shorten the makespan and increase resource utilization. Experimental verification is conducted using a Java-based tool. The performance of the suggested deep learning framework is analyzed and compared with current methods. For 1000 tasks, the proposed method attains a degree of imbalance of 0.90, a downtime of 291.17 ms, a throughput of 1050 ms, and a makespan of 721.58. The performance analysis demonstrates that the suggested strategy outperforms previous methods.
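The abstract gives no architectural details; the PyTorch sketch below shows one plausible reading of the idea (a bidirectional GRU encodes a sequence of resource-utilization snapshots and a linear head emits one Q-value per candidate placement), with all dimensions hypothetical and no training loop.

```python
import torch
import torch.nn as nn

class BiGRUQNetwork(nn.Module):
    """Q-network: a BiGRU encodes recent system-state snapshots, a linear head scores actions."""

    def __init__(self, n_features=4, hidden=64, n_vms=10):
        super().__init__()
        # n_features per snapshot: e.g. RAM, CPU, network bandwidth, disk utilization.
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_vms)     # one Q-value per candidate VM

    def forward(self, state_seq):
        out, _ = self.gru(state_seq)                 # (batch, time, 2 * hidden)
        return self.head(out[:, -1, :])              # Q-values from the final timestep

qnet = BiGRUQNetwork()
batch = torch.randn(8, 20, 4)                        # 8 tasks, 20 snapshots, 4 features each
q_values = qnet(batch)                               # shape (8, 10)
print(q_values.shape, q_values.argmax(dim=1))        # greedy scheduling decision per task
```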
Citations: 0