
Data & Knowledge Engineering: Latest Publications

Knowledge graph question generation based on crucial semantic information
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-17 · DOI: 10.1016/j.datak.2025.102529
Mingtao Zhou, Juxiang Zhou, Jianhou Gan, Jun Wang, Jiatian Mei
The aim of the knowledge graph-based question generation (KGQG) task is to generate an answerable, fluent question from a ternary knowledge graph and a target answer. Existing KGQG methods, which generate questions from knowledge graph subgraphs and target answers, do not effectively capture the critical semantic information between tokens within the nodes/edges of subgraphs and fail to make full use of target answers and answer markers. This leads to disfluent and unanswerable questions. To address these problems, we propose a model called knowledge graph question generation based on crucial semantic information (KGQG-CSI). Our proposed model uses a critical semantic information encoding module to dynamically learn the degree of significance of tokens within the answer-fused edges and nodes, capturing critical semantic information that remedies disfluency. In addition, the target answers and answer markers are sufficiently integrated with the nodes to make the generated questions answerable. First, an attention mechanism allows the nodes to interact with the target answers, thereby expressing answer-related semantic information more accurately. The nodes processed by the critical semantic information encoding module are then spliced with the answer markers to reduce ambiguous information. Experimental results on two public datasets show that the proposed model outperforms existing methods.
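To make the answer-aware node encoding concrete, the following is a minimal PyTorch sketch of the two ideas in the abstract: node states attending over target-answer tokens, and the result being spliced with an answer-marker embedding. The module name, dimensions, and toy inputs are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of answer-aware node encoding:
# (1) node/edge token states attend over the target-answer tokens,
# (2) the result is concatenated ("spliced") with an answer-marker embedding.
# All names, dimensions, and inputs below are illustrative assumptions.
import torch
import torch.nn as nn

class AnswerAwareNodeEncoder(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.marker_emb = nn.Embedding(2, d_model)   # 0 = ordinary node, 1 = answer node
        self.proj = nn.Linear(2 * d_model, d_model)  # fuse attended state with the marker

    def forward(self, node_states, answer_states, answer_mask):
        # node_states:   (batch, n_nodes, d_model) token-level node/edge representations
        # answer_states: (batch, n_ans, d_model)   target-answer token representations
        # answer_mask:   (batch, n_nodes) long tensor, 1 where a node carries the answer
        ans_ctx, _ = self.attn(node_states, answer_states, answer_states)
        fused = torch.cat([ans_ctx, self.marker_emb(answer_mask)], dim=-1)
        return self.proj(fused)  # answer-aware node states for the question decoder

encoder = AnswerAwareNodeEncoder()
nodes = torch.randn(2, 5, 128)
answer = torch.randn(2, 3, 128)
mask = torch.zeros(2, 5, dtype=torch.long)
mask[:, 0] = 1
print(encoder(nodes, answer, mask).shape)  # torch.Size([2, 5, 128])
```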
Citations: 0
A framework for purpose-guided event logs generation
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-15 · DOI: 10.1016/j.datak.2025.102526
Andrea Burattin, Barbara Re, Lorenzo Rossi, Francesco Tiezzi
Process mining is a prominent discipline in business process management. It collects a variety of techniques for gathering information from event logs, each fulfilling a different mining purpose. Event logs are always necessary for assessing and validating mining techniques in relation to specific purposes. Unfortunately, event logs are hard to find and usually contain noise that can influence the validity of the results of a mining technique. In this paper, we propose a framework, named purple, for generating, through business model simulation, event logs tailored to different mining purposes, i.e., discovery, what-if analysis, and conformance checking. It supports the simulation of models specified in different languages by projecting their execution onto a common behavioral model, i.e., a labeled transition system. We present eleven instantiations of the framework, implemented in a software tool that accompanies this paper. The framework is validated against reference log generators through experiments on the purposes presented in the paper.
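As a rough illustration of the projection idea, the pure-Python sketch below simulates a hand-written labeled transition system and writes the resulting traces to a CSV event log; the LTS, activity names, and file name are invented for the example and are not part of the purple tool.

```python
# Toy illustration (not the purple tool): once a model is projected onto a
# labeled transition system, an event log can be produced by random walks
# from the initial state to a final state. The LTS below is invented.
import csv
import random
from datetime import datetime, timedelta

LTS = {  # state -> list of (activity label, next state)
    "s0": [("register", "s1")],
    "s1": [("check", "s2"), ("skip check", "s3")],
    "s2": [("approve", "s3"), ("reject", "s4")],
    "s3": [("archive", "s4")],
    "s4": [],  # final state: no outgoing transitions
}

def simulate_trace(lts, start="s0"):
    state, trace = start, []
    while lts[state]:
        activity, state = random.choice(lts[state])
        trace.append(activity)
    return trace

def write_log(path, n_cases=100):
    t0 = datetime(2025, 1, 1)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["case_id", "activity", "timestamp"])
        for case in range(n_cases):
            for i, activity in enumerate(simulate_trace(LTS)):
                writer.writerow([f"case_{case}", activity, t0 + timedelta(hours=case, minutes=i)])

write_log("toy_log.csv")
```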
Citations: 0
A fine-grained multi-lingual opinion mining method on social media texts using multi-scale fused features-based adaptive residual convolutional LSTM with attention mechanism
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-06 · DOI: 10.1016/j.datak.2025.102524
D. Kavitha, Ashwin Kumar S, Divya Priya B A, M V Guru Prasadh, Sri Krishna S, Sidh Parakh, Shriram. V, Tabish Rashid
Multilingual opinion mining has grown within Natural Language Processing (NLP), particularly in the setting of social media. Social networks, which offer valuable data from content provided by users, have grown exponentially in recent years, enabling people to express their opinions and share their ideas on a variety of subjects. Many Sentiment Analysis (SA) strategies have subsequently been developed to gather emotional information from this feedback. The primary limitations of these methods include longer training time and decreased accuracy. To solve this issue, this work introduces a multi-lingual opinion mining model for analyzing public opinions, which is useful to businesses and organizations. Firstly, the texts from social media are accumulated from standard data sources. The collected texts are pre-processed to remove unnecessary content, including promotional content and spam texts. Next, features are extracted from the pre-processed text using techniques including Bidirectional Encoder Representations from Transformers (BERT), N-gram and word2vec. Here, BERT understands the sentiment of a word based on its surrounding words, allowing for more accurate sentiment detection, particularly in the complex linguistic structures present in different languages. Further, N-grams extract features from multilingual datasets, where different languages may have unique syntactic and semantic structures. Word2Vec can effectively capture phrases and idioms that convey specific sentiments, which is particularly beneficial in multilingual contexts where expressions may vary significantly between languages. The features extracted by BERT, N-gram and word2vec are given to the developed Multi-scale Fused Features based Adaptive Residual Convolutional Long Short-Term Memory with Attention mechanism (MARCLA) for analyzing public opinion in different languages. Here, the sentiments expressed in the complex and varied text data are accurately interpreted by the Residual Conv-LSTM. The multiscale mechanism captures micro- and macro-level linguistic features, and the residual connections combat the vanishing-gradient issue, which aids in effectively training the deep network. In addition, the parameters of the suggested MARCLA are optimized using a Modified Random Function-based Parrot Optimizer (MRFPO). This model helps in understanding public sentiment more effectively. The suggested opinion mining model's performance is compared with conventional techniques to validate its ability. The accuracy of the designed MRFPO-MARCLA framework is 95.12 %, higher than conventional frameworks such as CNN, LSTM, CoNBiLSTM and MARCLA. Thus, the experimental findings demonstrate that the developed multi-lingual opinion mining approach can effectively help organizations monitor sentiment changes and public reactions across different languages.
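As a hedged sketch of the feature-fusion stage only (BERT embeddings omitted for brevity), the snippet below computes n-gram TF-IDF vectors and averaged word2vec vectors per post and concatenates them; the corpus, vector sizes, and parameter values are toy assumptions, not the paper's settings.

```python
# Sketch of the feature-fusion stage only (BERT omitted for brevity): n-gram
# TF-IDF vectors and averaged word2vec vectors are computed per post and
# concatenated before being passed to a downstream classifier. The corpus,
# dimensions, and parameters are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import Word2Vec

posts = [
    "love this phone, battery lasts forever",
    "terrible service, never ordering again",
    "la entrega fue rápida y el producto excelente",
]
tokenized = [p.lower().split() for p in posts]

# N-gram features (unigrams and bigrams).
tfidf = TfidfVectorizer(ngram_range=(1, 2))
ngram_feats = tfidf.fit_transform(posts).toarray()

# Distributional features: average of word2vec vectors per post.
w2v = Word2Vec(tokenized, vector_size=50, window=3, min_count=1, epochs=50)
w2v_feats = np.vstack([np.mean([w2v.wv[t] for t in toks], axis=0) for toks in tokenized])

fused = np.hstack([ngram_feats, w2v_feats])  # would feed the Conv-LSTM stage
print(fused.shape)
```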
Citations: 0
Temporal knowledge graph recommendation with sequence-aware and path reasoning
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-03 · DOI: 10.1016/j.datak.2025.102522
Yuanming Zhang, Ziyou He, Yongbiao Lou, Haixia Long, Fei Gao
Knowledge graph recommendation (KGRec) models not only alleviate the data sparsity and cold start problems encountered by traditional models but also enhance interpretability and credibility by providing explicit recommendation rationales. Nonetheless, existing KGRec models predominantly concentrate on extracting static structural features of user preferences from the KG, often neglecting dynamic temporal features such as purchase time and click time. This oversight results in considerable limitations in recommendation performance. In response to this challenge, this paper introduces a novel temporal knowledge graph recommendation model (TKGRec), which fully utilizes both dynamic temporal features and static structural features for better recommendation. We specifically construct a temporal KG that encapsulates both static and dynamic user–item interactions. Based on this construction, we propose a sequence-aware and path reasoning framework, in which the sequence-aware module employs a dual-attention mechanism to distill temporal features from interactions, whereas the path reasoning module utilizes reinforcement learning to extract path features. These two modules are seamlessly fused and iteratively refined to capture a more holistic understanding of user preferences. Experimental results on three real-world datasets demonstrate that the proposed model significantly outperforms existing state-of-the-art baseline models.
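The snippet below is a minimal, illustrative PyTorch sketch of one ingredient of such a sequence-aware module: bucketed interaction timestamps are embedded and added to item embeddings before self-attention, so recency can influence the attention weights. Class and parameter names are assumptions, not the TKGRec code.

```python
# Illustrative sketch (not the TKGRec implementation): time-aware attention
# over a user's interaction sequence. Timestamps are bucketed, embedded, and
# added to item embeddings before self-attention. Names/sizes are assumptions.
import torch
import torch.nn as nn

class TimeAwareEncoder(nn.Module):
    def __init__(self, n_items, n_time_buckets=32, d=64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        self.time_emb = nn.Embedding(n_time_buckets, d)
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

    def forward(self, item_ids, time_buckets):
        x = self.item_emb(item_ids) + self.time_emb(time_buckets)
        out, weights = self.attn(x, x, x)
        return out.mean(dim=1), weights  # user representation and attention map

encoder = TimeAwareEncoder(n_items=1000)
items = torch.randint(0, 1000, (4, 10))  # 4 users, 10 interactions each
times = torch.randint(0, 32, (4, 10))    # bucketed time deltas per interaction
user_vec, attn_map = encoder(items, times)
print(user_vec.shape)  # torch.Size([4, 64])
```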
Citations: 0
An optimization enabled Hierarchical Attention-Deep LSTM model for sentiment analysis on cloth products from customer rating
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-29 · DOI: 10.1016/j.datak.2025.102523
Zhijun Chen, Tsungshun Hsieh, Ze Chen
The primary aim of this study is to introduce a deep learning approach augmented with optimization techniques to conduct sentiment analysis on apparel products, utilizing customer reviews and ratings as foundational data. A review of a clothing item is used as input, which undergoes pre-processing involving the elimination of stop words and stemming to eradicate superfluous information. Critical features are then extracted from the pre-processed data to facilitate effective categorization: feature extraction covers Term frequency-inverse document frequency (TF-IDF), SentiWordNet features, positive sentiment scores, negative sentiment scores, the count of capitalized words, and hashtags. Subsequently, feature fusion is conducted using the proposed Trend factor smoothing-Siberian Tiger Optimization (TS-STO), which is designed by integrating trend factor smoothing into the update process of Siberian Tiger Optimization (STO). Ultimately, sentiment analysis is conducted with HA-Deep LSTM, which merges a Hierarchical Attention Network with a Deep LSTM. Experimental analysis shows that the presented approach achieves an accuracy of 95.9 %, a sensitivity of 96.1 % and a specificity of 94.2 %.
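For the hand-crafted feature step, the following is a small illustrative sketch combining TF-IDF with simple counts (capitalized words, hashtags) and lexicon-based positive/negative scores; the tiny word lists stand in for SentiWordNet, and all reviews and values are invented.

```python
# Illustrative sketch of the hand-crafted feature step: TF-IDF plus simple
# counts (capitalized words, hashtags) and lexicon-based positive/negative
# scores. The tiny word lists stand in for SentiWordNet; data is invented.
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

POSITIVE = {"great", "love", "comfortable"}
NEGATIVE = {"bad", "itchy", "awful"}

def handcrafted(review):
    tokens = review.split()
    return [
        sum(t.lower().strip(".,!") in POSITIVE for t in tokens),  # positive score
        sum(t.lower().strip(".,!") in NEGATIVE for t in tokens),  # negative score
        sum(t.isupper() and len(t) > 1 for t in tokens),          # capitalized words
        len(re.findall(r"#\w+", review)),                         # hashtag count
    ]

reviews = ["LOVE this dress #summer", "The fabric feels itchy and bad"]
tfidf = TfidfVectorizer().fit_transform(reviews).toarray()
features = np.hstack([tfidf, np.array([handcrafted(r) for r in reviews], dtype=float)])
print(features.shape)  # TF-IDF columns plus 4 hand-crafted columns
```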
Citations: 0
Secure data storage in multi-cloud environments using lattice-based saber with Diffie-Hellman cryptography and authenticate based on PUF-ECC
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-16 · DOI: 10.1016/j.datak.2025.102512
R. Iyswarya, R. Anitha
Human life has become highly dependent on data in recent decades, across almost every facet of daily activity, leading to its storage in multi-cloud environments. To ensure data integrity, confidentiality, and privacy, it is essential to protect data from unauthorized access. This paper proposes a novel approach for securing data in multi-cloud environments, covering user authentication and data storage, using Lattice-Based Saber Cryptography combined with PUF-ECC and the Enhanced Goose Optimization Algorithm (EGOA). Initial user authentication is achieved through the PUF-ECC digital signature algorithm, which verifies both the user's and the device's identity. Once authenticated, user data is securely transmitted to the cloud server using Lattice-Based Saber post-quantum cryptography combined with the Diffie-Hellman key exchange protocol. The encrypted data is then stored across multiple cloud storage services through a cloud controller using RAM-based chunking. For efficient data retrieval, the Enhanced Goose Optimization Algorithm (EGOA) is employed to extract encrypted data from the clouds. Finally, the data is decrypted using the Lattice-Based Saber decryption algorithm and securely retrieved by the authenticated user. This method enhances both the security and efficiency of cloud data management and retrieval. Experiments are carried out with the proposed methodology and compared with existing technologies. The proposed approach achieves encryption times of 9.68 ms, key generation times of 4.84 ms, and block creation times of 1.59 ms, while maintaining a 93.7 % confidentiality rate, a 98 % packet delivery ratio, a transmission delay of 0.026 ms, throughput of 407.33 MB/s, jitter of 3.26 ms, and an RTT of 0.17 ms, demonstrating its effectiveness for secure data storage and retrieval in multi-cloud environments.
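The Saber KEM and PUF-ECC components are beyond a short snippet, but the Diffie-Hellman step the scheme builds on can be shown in textbook form; the sketch below uses deliberately tiny, insecure toy parameters (p = 23, g = 5) purely to make the key-agreement and key-derivation idea concrete, and is not the paper's protocol.

```python
# Textbook finite-field Diffie-Hellman with toy, insecure parameters, shown
# only to make the key-agreement idea concrete; the lattice-based Saber KEM
# and the PUF-ECC authentication used in the paper are not reproduced here.
import hashlib
import secrets

p, g = 23, 5                      # toy prime and generator -- illustration only

a = secrets.randbelow(p - 3) + 2  # client's private exponent
b = secrets.randbelow(p - 3) + 2  # server's private exponent
A = pow(g, a, p)                  # client's public value, sent to the server
B = pow(g, b, p)                  # server's public value, sent to the client

shared_client = pow(B, a, p)
shared_server = pow(A, b, p)
assert shared_client == shared_server  # both sides derive the same secret

# Hash the shared secret into a symmetric key for the encrypted storage layer.
key = hashlib.sha256(str(shared_client).encode()).hexdigest()
print("derived key:", key[:16], "...")
```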
Citations: 0
A graph-based model for semantic textual similarity measurement
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-12 · DOI: 10.1016/j.datak.2025.102509
Van-Tan Bui, Quang-Minh Nguyen, Van-Vinh Nguyen, Duc-Toan Nguyen
Measuring semantic similarity between sentence pairs is a fundamental problem in Natural Language Processing with applications in various domains, including machine translation, speech recognition, automatic question answering, and text summarization. Despite its significance, accurately assessing semantic similarity remains a challenging task, particularly for underrepresented languages such as Vietnamese. Existing methods have yet to fully leverage the unique linguistic characteristics of Vietnamese for semantic similarity measurement. To address this limitation, we propose GBNet-STS (Graph-Based Network for Semantic Textual Similarity), a novel framework for measuring the semantic similarity of Vietnamese sentence pairs. GBNet-STS integrates lexical-grammatical similarity scores and distributional semantic similarity scores within a multi-layered graph-based model. By capturing different semantic perspectives through multiple interconnected layers, our approach provides a more comprehensive and robust similarity estimation. Experimental results demonstrate that GBNet-STS outperforms traditional methods, achieving state-of-the-art performance in Vietnamese semantic similarity tasks.
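As a rough, hedged sketch of the general idea of combining a lexical-grammatical score with a distributional score for a sentence pair, the snippet below mixes token-overlap and TF-IDF cosine similarity with a fixed weight; the weighting and the bag-of-words vectors are illustrative stand-ins, not the GBNet-STS graph model.

```python
# Rough stand-in for the idea of combining lexical and distributional scores;
# the fixed weight and TF-IDF vectors below are illustrative assumptions and
# do not reproduce the multi-layered graph model of GBNet-STS.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexical_score(s1, s2):
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b)  # Jaccard overlap of tokens

def distributional_score(s1, s2):
    vecs = TfidfVectorizer().fit_transform([s1, s2])
    return float(cosine_similarity(vecs[0], vecs[1])[0, 0])

def similarity(s1, s2, alpha=0.5):
    return alpha * lexical_score(s1, s2) + (1 - alpha) * distributional_score(s1, s2)

print(similarity("the cat sits on the mat", "a cat is sitting on a mat"))
```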
Citations: 0
ASF: A novel associative scoring function for embedded knowledge graph reasoning
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-11 · DOI: 10.1016/j.datak.2025.102511
MVPT Lakshika, HA Caldera
One of the most important tools for knowledge management is the Knowledge Graph (KG), a multi-relational graph that depicts rich factual information across entities. A KG represents entities as nodes and relations as edges, with each edge represented by a triplet: (head entity, relation, tail entity). The Scoring Function (SF) in a KG quantifies the plausibility of these triplets and is often derived from KG embeddings. However, due to the distinct relational patterns across KGs, an SF that performs well on one KG might fail on another, making the design of optimal SFs a challenging task. This study introduces the concept of an Associative Scoring Function (ASF), which leverages Association Rule Mining (ARM) to discover and incorporate patterns and characteristics of symmetric, asymmetric, inverse, and other relational types within embedded KGs. The ARM technique in ASF uses the FP-Growth algorithm to extract meaningful associations, which is enhanced further through hyperparameter tuning. Extensive experiments on benchmark datasets demonstrate that ASF is KG-independent and performs better than state-of-the-art SFs. These results highlight ASF's potential to generalize across diverse KGs, offering a significant advancement in the KG link prediction task.
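To illustrate the association-rule-mining ingredient, the sketch below derives simple relation co-occurrence rules with support and confidence by plain counting; it is a stand-in for the FP-Growth step, the toy transactions are invented, and how ASF folds such rules into an embedding-based scoring function is not reproduced.

```python
# Toy stand-in (plain counting, not a real FP-Growth implementation) for the
# rule-mining step: derive relation rules r1 -> r2 with support/confidence,
# the kind of pattern ASF folds into its scoring. Data and thresholds invented.
from collections import Counter
from itertools import permutations

# Each "transaction" lists the relation types observed around one entity.
transactions = [
    {"born_in", "citizen_of"},
    {"born_in", "citizen_of", "speaks"},
    {"married_to", "spouse_of"},
    {"married_to", "spouse_of"},
    {"born_in", "speaks"},
]

n = len(transactions)
single = Counter(r for t in transactions for r in t)
pairs = Counter(p for t in transactions for p in permutations(sorted(t), 2))

MIN_SUPPORT, MIN_CONFIDENCE = 0.3, 0.8
for (a, b), count in pairs.items():
    support, confidence = count / n, count / single[a]
    if support >= MIN_SUPPORT and confidence >= MIN_CONFIDENCE:
        print(f"{a} -> {b}  support={support:.2f}  confidence={confidence:.2f}")
```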
Citations: 0
An integrated requirements framework for analytical and AI projects
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-06 · DOI: 10.1016/j.datak.2025.102493
Juan Trujillo, Ana Lavalle, Alejandro Reina-Reina, Jorge García-Carrasco, Alejandro Maté, Wolfgang Maaß
To this day, the requirements of data warehouses, user visualizations and ML projects have been tackled in an independent manner, ignoring the possible cross-requirements, collective constraints and dependencies between the outputs of the different systems that should be taken into account to ensure a successful analytical project. In this work, we take a holistic approach and propose a methodology that supports modeling and subsequent analysis while taking into account these three aspects. This methodology has several advantages, mainly that (i) it enables us to identify possible conflicts between actors on different tasks that are overlooked if the systems are treated in an isolated manner and (ii) this holistic view enables modeling multi-company systems, where the information or even the analytical results can be provided by third-parties, identifying key participants in federated environments. After presenting the required formalism to carry out this kind of analysis, we showcase it on a real-world running example of the tourism sector.
Citations: 0
Rule-guided process discovery
IF 2.7 · CAS Tier 3, Computer Science · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-02 · DOI: 10.1016/j.datak.2025.102508
Ali Norouzifar, Marcus Dees, Wil van der Aalst
Event data extracted from information systems serves as the foundation for process mining, enabling the extraction of insights and identification of improvements. Process discovery focuses on deriving descriptive process models from event logs, which form the basis for conformance checking, performance analysis, and other applications. Traditional process discovery techniques predominantly rely on event logs, often overlooking supplementary information such as domain knowledge and process rules. These rules, which define relationships between activities, can be obtained through automated techniques like declarative process discovery or provided by domain experts based on process specifications. When used as an additional input alongside event logs, such rules have significant potential to guide process discovery. However, leveraging rules to discover high-quality imperative process models, such as BPMN models and Petri nets, remains an underexplored area in the literature. To address this gap, we propose an enhanced framework, IMr, which integrates discovered or user-defined rules into the process discovery workflow via a novel recursive approach. The IMr framework employs a divide-and-conquer strategy, using rules to guide the selection of process structures at each recursion step in combination with the input event log. We evaluate our approach on several real-world event logs and demonstrate that the discovered models better align with the provided rules without compromising their conformance to the event log. Additionally, we show that high-quality rules can improve model quality across well-known conformance metrics. This work highlights the importance of integrating domain knowledge into process discovery, enhancing the quality, interpretability, and applicability of the resulting process models.
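The sketch below shows, in plain Python, the kind of rule signal that can guide such a recursion: checking how many traces of a log satisfy declarative constraints such as response(a, b) and not-co-existence(a, b). The traces, rule set, and constraint templates are toy assumptions, not the IMr implementation.

```python
# Toy sketch (not the IMr implementation) of rule checking over an event log:
# response(a, b) means every a is eventually followed by b; not_coexistence
# means a and b never occur in the same trace. Log and rules are invented.
def response(trace, a, b):
    return all(b in trace[i + 1:] for i, x in enumerate(trace) if x == a)

def not_coexistence(trace, a, b):
    return not (a in trace and b in trace)

CHECKS = {"response": response, "not_coexistence": not_coexistence}

RULES = [
    ("response", "register", "archive"),
    ("not_coexistence", "approve", "reject"),
]

log = [
    ["register", "check", "approve", "archive"],
    ["register", "skip check", "archive"],
    ["register", "check", "reject"],
]

for name, a, b in RULES:
    satisfied = sum(CHECKS[name](t, a, b) for t in log)
    print(f"{name}({a}, {b}): satisfied in {satisfied}/{len(log)} traces")
```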
Citations: 0