
Data & Knowledge Engineering: Latest Publications

“Detectors Lead, LLMs Follow”: Integrating LLMs and traditional models on implicit hate speech detection to generate faithful and plausible explanations
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-24 | DOI: 10.1016/j.datak.2025.102535
Greta Damo, Nicolás Benjamín Ocampo, Elena Cabrio, Serena Villata
Social media platforms face a growing challenge in addressing abusive content and hate speech, particularly as traditional natural language processing methods often struggle with detecting nuanced and implicit instances. To tackle this issue, our study enhances Large Language Models (LLMs) in the detection and explanation of implicit hate speech, outperforming classical approaches. We focus on two key objectives: (1) determining whether jointly predicting and generating explanations for why a message is hateful improves LLMs’ accuracy, especially for implicit cases, and (2) evaluating whether incorporating information from BERT-based models can further boost detection and explanation performance. Our method evaluates and enhances LLMs’ ability to detect hate speech and explain their predictions. By combining binary classification (Hate Speech vs. Non-Hate Speech) with natural language explanations, our approach provides clearer insights into why a message is considered hateful, advancing the accuracy and interpretability of hate speech detection.
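A minimal, illustrative sketch of the “detector leads, LLM follows” idea described above: a BERT-based detector's verdict is injected into the prompt from which the LLM produces the final label and explanation. This is not the authors' code; `bert_detect` and `llm_generate` are hypothetical stand-ins for real models.

```python
# Sketch only: detector output conditions the LLM prompt (label + explanation).

def bert_detect(message: str) -> tuple[str, float]:
    """Stand-in for a fine-tuned BERT hate-speech detector (dummy logic)."""
    score = 0.87 if "dogwhistle" in message.lower() else 0.12
    return ("HS" if score >= 0.5 else "Non-HS"), score

def llm_generate(prompt: str) -> str:
    """Stand-in for an LLM call (e.g., an API or a local model)."""
    return "Label: HS\nExplanation: The message implies ..."

def classify_and_explain(message: str) -> str:
    label, score = bert_detect(message)  # the detector "leads"
    prompt = (
        "You are a hate-speech analyst.\n"
        f"Message: {message}\n"
        f"A fine-tuned detector predicts {label} (confidence {score:.2f}).\n"
        "Taking this signal into account, output a final label (HS or Non-HS) "
        "and explain why the message is or is not hateful."
    )
    return llm_generate(prompt)  # the LLM "follows" with label + explanation

print(classify_and_explain("a dogwhistle-laden post"))
```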
Citations: 0
Human vs. Automated data annotation: Labeling the data set for an ML-driven support ticket classifier
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-23 | DOI: 10.1016/j.datak.2025.102534
Simon Fuchs, Janik Schnellbach, Holger Wittges, Helmut Krcmar
In general, supervised Machine Learning approaches using labeled training data currently promise the best results with respect to classification accuracy. Data annotation is therefore a key component of most implemented Machine Learning projects. However, creating labels for a training data set is often an elaborate undertaking involving arduous and repetitive work, which is why data scientists often try to minimize the annotation effort by automating the annotation process itself. In this paper, we present a case study of two data annotation projects on the same data set of support tickets and compare them: one using human annotators and the other using algorithmic Learning Functions in a combination of Active Learning and Weak Supervision. We achieved a weighted confidence score of >94 % for the human-created labels, while also achieving up to 92 % agreement between the labels of our automated project and those created by human annotators, with only 10 % of the data requiring human annotation as the starting input of the automated approach. Additionally, we reproduced the 85 % initial human classification accuracy for support ticket distribution reported in previous papers. We close with a reflection on the value of business understanding in data annotation projects and on the problem of ticket ambiguity, together with proposed solutions.
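The 92 % figure above is an agreement rate between label sets. A minimal sketch of that check, with invented toy labels (not the paper's data):

```python
# Compare automatically generated labels against human labels and report
# the share of matching tickets.

human = ["billing", "login", "billing", "outage", "login", "billing"]
auto  = ["billing", "login", "billing", "outage", "billing", "billing"]

agreement = sum(h == a for h, a in zip(human, auto)) / len(human)
print(f"Label agreement: {agreement:.1%}")  # 83.3% for this toy data
```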
Citations: 0
Quality matters: A decadal systematic exploration of data quality in IoT environment
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-20 | DOI: 10.1016/j.datak.2025.102533
Tarandeep Kaur, Pankaj Deep Kaur
The proliferation of the Internet of Things (IoT) has led to an unprecedented surge in data generation, making the enhancement of data quality indispensable for unlocking IoT's full potential and enabling intelligent, data-driven decision-making. This systematic literature review (SLR) examines scholarly research from the past decade to unravel the complexities of data quality in IoT. Seven research questions were formulated, an extensive search of relevant academic databases was conducted, and criteria for inclusion and exclusion were defined. Key insights from the selected studies address the research questions, analyzing the multi-faceted landscape of IoT data and its quality. The study explores the data quality dimensions that play a pivotal role in assessing overall data quality, identifies critical gaps and limitations, and offers a roadmap for future research. The comprehensive overview provides a nuanced understanding of the factors influencing data quality and highlights the contributions of various researchers to ensuring data quality. The study consolidates perspectives on the significance of data quality as perceived by users, while also emphasizing the paramount importance of security and privacy in managing IoT data. The findings of this SLR will provide valuable insights to researchers and practitioners, advancing efforts to maintain robust data quality across IoT ecosystems.
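To make the notion of "data quality dimensions" concrete, here is a hedged toy illustration of two dimensions commonly cited for IoT readings, completeness and timeliness; the records and the freshness threshold are invented for demonstration and are not drawn from the reviewed studies.

```python
# Toy scoring of two data quality dimensions over a batch of sensor readings.
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
readings = [
    {"sensor": "t1", "value": 21.3, "ts": now - timedelta(seconds=5)},
    {"sensor": "t2", "value": None, "ts": now - timedelta(minutes=9)},   # missing value
    {"sensor": "t3", "value": 20.8, "ts": now - timedelta(minutes=1)},
]

completeness = sum(r["value"] is not None for r in readings) / len(readings)
fresh_window = timedelta(minutes=2)  # assumed freshness requirement
timeliness = sum(now - r["ts"] <= fresh_window for r in readings) / len(readings)

print(f"completeness={completeness:.2f}, timeliness={timeliness:.2f}")
```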
Citations: 0
Optimized adaptive depression state prediction and severity estimation from twitter data
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-19 | DOI: 10.1016/j.datak.2025.102532
Pavani Chirasani, Gatram Rama Mohan Babu
Social media sites like Twitter, which offer an abundance of content supplemented with emojis, can be used to identify and treat depression, a widespread mental health issue. Several methods exist for predicting depression; however, their results are often poor due to inaccurate predictions. Input Twitter emoji data also contains many erroneous features, which increases the complexity of depression prediction. These drawbacks result in poor predictions and low accuracy. The proposed work therefore designs a novel Zebra-based Longformer Emoji Analysis (ZLEA) for predicting depression. The Twitter emoji database is first collected from the standard website and provided to the Python environment as input. A preprocessing step then removes the noisy features present in the training database. Next, the necessary features are extracted and the depression condition is predicted from the current emojis. Finally, the depression severity state is assessed based on the emoji's grade level, and the performance is confirmed against other conventional research using standard metrics.
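Since the abstract only names the pipeline stages (preprocess, extract features, predict, grade severity), here is a purely illustrative sketch of that flow; the emoji lexicon, grade mapping, and rules are invented stand-ins, not the ZLEA model.

```python
# Toy pipeline: preprocess -> extract features -> predict state -> grade severity.
import re

SEVERITY_BY_GRADE = {0: "none", 1: "mild", 2: "moderate", 3: "severe"}  # assumed grades
NEGATIVE_EMOJIS = {"😢", "😭", "💔"}  # toy lexicon, not from the paper

def preprocess(tweet: str) -> str:
    # Drop URLs and mentions as "noisy features" (illustrative choice).
    return re.sub(r"http\S+|@\w+", "", tweet).strip()

def extract_features(tweet: str) -> dict:
    return {"neg_emoji_count": sum(ch in NEGATIVE_EMOJIS for ch in tweet)}

def predict(features: dict) -> tuple[bool, str]:
    grade = min(features["neg_emoji_count"], 3)
    return grade > 0, SEVERITY_BY_GRADE[grade]

depressed, severity = predict(extract_features(preprocess("so tired 😢😭 https://t.co/x")))
print(depressed, severity)  # True moderate
```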
Citations: 0
Query-based automatic text summarization using query expansion approach
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-17 | DOI: 10.1016/j.datak.2025.102531
Hiteshwar Kumar Azad
The amount of information available on the Web has grown dramatically and continues to grow daily. This massive amount of Web data poses significant challenges to the reliability and accuracy of current information retrieval systems. The purpose of information retrieval is to discover relevant documents, within a huge collection, whose contents match a user-initiated query. Because most users struggle to formulate well-defined queries, the query expansion technique is critical for retrieving the most relevant information. Obtaining relevant results in a concise manner is a significant challenge in this scenario. Automatic text summarization can condense a lengthy document while retaining its informative content and key concepts, making it a potential solution to information overload. This paper proposes a query-based automatic text summarization technique that employs query expansion to improve text summarization and provide the relevant information in a concise manner. To produce a relevant text summary, the article employs a query-based extractive summarization method, selecting sentences based on the four best features retrieved from each sentence. In this process, words are scored by the expanded query's score, and sentences are scored by four important features: sentence terms, position, similarity to the first sentence, and proper nouns. Extensive experiments with different ROUGE variants on various evaluation metrics, including precision, recall, and F-score, were carried out on the DUC 2007 dataset, with gains of approximately 44%, 46%, and 45%, respectively, in the best scenario. The suggested approach outperforms both DUC participating systems and cutting-edge approaches in summary generation.
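A minimal sketch of the four-feature sentence scoring the abstract describes (expanded-query term overlap, position, similarity to the first sentence, proper nouns). The feature weights and the Jaccard similarity are assumptions for illustration; the paper's exact formulas are not given here.

```python
# Score each sentence by four features and a weighted sum (weights assumed).

def tokens(text: str) -> set[str]:
    return set(text.lower().replace(".", "").split())

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def score_sentences(sentences: list[str], expanded_query: set[str]) -> list[float]:
    first = tokens(sentences[0])
    scores = []
    for i, s in enumerate(sentences):
        words = tokens(s)
        f_query = jaccard(words, expanded_query)             # expanded-query overlap
        f_pos = 1.0 - i / len(sentences)                     # earlier = higher
        f_first = jaccard(words, first)                      # similarity to lead sentence
        f_proper = sum(w[:1].isupper() for w in s.split()[1:]) / max(len(s.split()), 1)
        scores.append(0.4 * f_query + 0.2 * f_pos + 0.2 * f_first + 0.2 * f_proper)
    return scores

sents = ["NASA launched a probe.", "The probe studies Mars.", "Weather was fine."]
print(score_sentences(sents, {"mars", "probe", "mission"}))
```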
Citations: 0
Debiasing judgmental decisions by providing individual error pattern feedback
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-15 | DOI: 10.1016/j.datak.2025.102530
Nathalie Balla, Thomas Setzer
We present a Decision Support System (DSS) that provides experts with feedback on their personal potential bias based on their previous error pattern. Feedback is calculated using a knowledge database containing a library of biases and the typical error patterns that suggest them. An error pattern means any identifiable structure of errors. For instance, an inference engine might detect that an expert's forecasts, submitted via a user interface, are consistently too high, regularly exceeding the actual quantities observed later. The engine might then positively evaluate a rule indicating an overestimation bias and provide feedback on the detected error pattern and/or the presumed bias, potentially including further explanations. As the feedback stems from an expert's own error pattern, it is intended to enhance self-reflection and support wise consideration of the feedback. We assume that this allows experts to acquire knowledge about their own flawed judgmental heuristics, and that experts are able to apply the feedback systematically and selectively to different decision tasks, thereby reducing their potential bias and error. To test these assumptions, we conduct experiments with the DSS. Therein, subjects provide point estimations as well as certainty intervals and subsequently receive machine-generated error feedback based on their previous answers. After the feedback, subjects answer further questions. Results indicate that subjects reflect on their own error pattern, apply the feedback selectively to further, upcoming estimations, and reduce overall bias and error.
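A hedged sketch of the kind of rule such an inference engine might evaluate: flag an overestimation bias when forecasts exceed actuals in most recent periods. The threshold and feedback wording are assumptions, not the paper's rule base.

```python
# Toy overestimation-bias rule over a forecast/actual history.

def overestimation_feedback(forecasts: list[float], actuals: list[float],
                            threshold: float = 0.8) -> str | None:
    overs = sum(f > a for f, a in zip(forecasts, actuals))
    if overs / len(forecasts) >= threshold:
        mean_gap = sum(f - a for f, a in zip(forecasts, actuals)) / len(forecasts)
        return (f"{overs}/{len(forecasts)} of your forecasts exceeded the actual "
                f"values (mean gap {mean_gap:+.1f}); this pattern suggests an "
                "overestimation bias.")
    return None  # no feedback if the pattern is not pronounced

print(overestimation_feedback([110, 120, 105, 130], [100, 100, 100, 110]))
```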
Citations: 0
An integrated approach to GDPR-compliant data sharing employing consent, contracts, and licenses
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-24 | DOI: 10.1016/j.datak.2025.102510
Amar Tauqeer, Tek Raj Chhetri, Robert David, Albin Ahmeti, Anna Fensel
GDPR defines six legal bases, at least one of which needs to be followed in order to process (or share) personally identifiable data in a lawful manner. Most of the research today is centered around the legal bases of consent and contracts. This limits the options for legal bases that one can select (or use) for data sharing, especially in circumstances where there is a need to use multiple legal bases. For example, one can consent to share data but may want to place restrictions on how it can be used, which requires a license (an extension/add-on to data sharing contracts) in scenarios where digital asset licensing is involved. Overcoming these limitations and enabling data sharing via multiple legal bases require combining multiple legal bases. However, incorporating additional (or multiple) legal bases, such as licenses (as an add-on to contracts), in a GDPR-compliant manner remains a challenging task. This is because combining multiple legal bases requires an understanding of each individual legal basis, a challenging task in itself, and designing a system in a manner that is both compliant with regulatory requirements and practically pertinent. Therefore, in this paper, we present our semantic-based approach and tool that enables GDPR-compliant data sharing via multiple legal bases, consent, and contracts (using licenses as an add-on). This work extends our previous work, the GDPR Contract Compliance Verification (CCV) tool, which enables GDPR-compliant data sharing via consent and contracts only. We add licenses as a further add-on to contracts, make our previous work more semantically compliant by utilizing SHACL validation for compliance checking, secure the contract signing process with digital signatures, introduce SHACL repairs to automatically fix data inconsistencies, and evaluate the performance of the tool and the SHACL components. Through performance testing, we demonstrate the effectiveness of SHACL and the tool's enhancement of GDPR-compliant data sharing based on multiple legal bases.
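To illustrate SHACL-based compliance checking of the kind described above, here is a small sketch using the pySHACL library; the shape ("a contract must state a purpose") and the data are invented toy examples, not the authors' shapes.

```python
# Validate a toy RDF contract against a SHACL shape with pySHACL.
from pyshacl import validate

shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:ContractShape a sh:NodeShape ;
    sh:targetClass ex:Contract ;
    sh:property [ sh:path ex:purpose ; sh:minCount 1 ] .
"""

data_ttl = """
@prefix ex: <http://example.org/> .
ex:c1 a ex:Contract .   # missing ex:purpose -> non-compliant
"""

conforms, _, report = validate(data_ttl, shacl_graph=shapes_ttl,
                               data_graph_format="turtle",
                               shacl_graph_format="turtle")
print(conforms)   # False: the contract lacks a stated purpose
print(report)
```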
Citations: 0
FKQG: Few-shot question generation from knowledge graph via large language model in-context learning
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-24 | DOI: 10.1016/j.datak.2025.102528
Ruishen Liu, Shaorong Xie, Xinzhi Wang, Xiangfeng Luo, Hang Yu
Knowledge Base Question Generation (KBQG) focuses on generating natural language questions from a set of triplets and answers, playing a crucial role in applications such as personalized question-answering systems and educational evaluation tools. Most previous work trains models on large-scale datasets, leading to degraded performance in low-resource scenarios. To address this issue, we propose a novel framework for KBQG using in-context learning with large language models (LLMs) in few-shot settings (FKQG). The key to in-context learning lies in selecting relevant examples to construct prompts that guide the LLM. We therefore introduce two strategies for example selection: (1) extracting the semantic paths of triplets as seeds to organize the data, and (2) using the relation linked to the answer as an additional seed. Based on these seeds, examples are retrieved from the data and reranked by graph edit distance, optimizing the prompt structure. This approach ensures contextually relevant question generation. We evaluate FKQG through extensive experiments on two benchmark datasets. Our framework outperforms existing KBQG models in few-shot scenarios, achieving up to a 4.73% improvement in ROUGE-L. Additionally, FKQG enhances the performance of knowledge-based question-answering systems, yielding a 1.2% increase in Hit@1.
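A hedged sketch of the reranking step: candidate in-context examples, each represented as a small triple graph, are ordered by graph edit distance to the query's graph. The graphs below are toy stand-ins, and networkx's generic `graph_edit_distance` substitutes for whatever variant the paper uses.

```python
# Rerank candidate examples by structural similarity to the query graph.
import networkx as nx

def triple_graph(triples):
    g = nx.DiGraph()
    for s, p, o in triples:
        g.add_edge(s, o, relation=p)
    return g

query_g = triple_graph([("Q", "director", "Nolan"), ("Q", "genre", "sci-fi")])
candidates = {
    "ex1": triple_graph([("A", "director", "Nolan")]),
    "ex2": triple_graph([("B", "author", "Orwell"), ("B", "genre", "novel")]),
}

ranked = sorted(candidates,
                key=lambda k: nx.graph_edit_distance(query_g, candidates[k]))
print(ranked)  # examples most structurally similar to the query come first
```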
Citations: 0
A survey of processing methods for different types of concept drift
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-22 | DOI: 10.1016/j.datak.2025.102525
Shurong Yang, Meng Han, Shineng Zhu, Wenyan Yang, Zhenlong Dai, Juan Li, Jian Ding
In large, dynamic, and continuously changing data streams, the characteristics and categories of data samples change over time, a phenomenon called concept drift. Concept drift affects the stability and performance of a model; if ignored, predictions become inaccurate. The study of multiple data streams has attracted growing attention, and group concept drift is more complicated than concept drift in a single data stream. This paper studies processing methods for group concept drift and, for the first time, classifies them along three directions: single type, multiple type, and group type. For methods designed to address a single type of concept drift, the paper provides a systematic summary based on the classification of drift types, distinguishing approaches that target real concept drift from those that address virtual concept drift. Methods for real concept drift include techniques for abrupt, gradual, recurrent, and local drift. Methods for multiple types of drift are introduced in terms of handling combinations such as gradual and abrupt; abrupt, gradual, and incremental; and abrupt, gradual, and recurrent drift. The paper then analyzes and summarizes methods for the group-type concept drift problem. The methods considered are compared and summarized with respect to processing technology, applicable drift type, comparison methods, and advantages and disadvantages.
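As a concrete anchor for the drift types discussed above, here is a minimal sketch of one common way to detect abrupt real concept drift: compare a model's error rate in a recent window against a reference window and flag a significant jump. The window size and ratio are assumptions, not taken from the survey.

```python
# Toy window-based drift detector over a stream of 0/1 prediction errors.
from collections import deque

class WindowDriftDetector:
    def __init__(self, window: int = 100, ratio: float = 1.5):
        self.ref = deque(maxlen=window)   # fixed reference window
        self.cur = deque(maxlen=window)   # rolling recent window
        self.ratio = ratio

    def add(self, error: bool) -> bool:
        """Feed one prediction error; return True if drift is suspected."""
        if len(self.ref) < self.ref.maxlen:
            self.ref.append(error)
            return False
        self.cur.append(error)
        if len(self.cur) < self.cur.maxlen:
            return False
        ref_rate = sum(self.ref) / len(self.ref) or 1e-9
        return sum(self.cur) / len(self.cur) > self.ratio * ref_rate

det = WindowDriftDetector(window=50)
stream = [0] * 100 + [1] * 60            # error spike simulates abrupt drift
flags = [det.add(bool(e)) for e in stream]
print(flags.index(True) if True in flags else "no drift")
```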
Citations: 0
An interpretable knowledge recommendation method for civil dispute mediation
IF 2.7 | CAS Tier 3, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-17 | DOI: 10.1016/j.datak.2025.102527
Ning Wang, Shibo Cui, Jing Zhang, Runzhe Wang, Yongping Yu
The demand for efficient and fair civil dispute mediation is driving the development of intelligent knowledge recommendation technologies. However, existing approaches face challenges in interpretability and in reasoning over complex relationships. This study proposes an Interpretable Knowledge Recommendation Method (IKRM) that integrates deep learning and multi-hop reasoning to provide precise and transparent decision support for online mediation platforms. First, to address the extraction of specialized terms and intricate relationships in legal texts, we propose a pre-trained-model-based semi-joint extraction method combined with ontology design, constructing a civil dispute knowledge graph that enables hierarchical semantic modeling of legal concepts. Second, we design a hybrid multi-hop reasoning framework that combines neural logic programming for numerical rule-based latent relation mining with cognitive graphs for multi-path reasoning, dynamically generating traceable explanations during path expansion. In experiments on multi-source Chinese legal datasets, IKRM outperforms mainstream baseline models on all key evaluation indicators and exhibits greater reasoning robustness on difficult queries. This study establishes a modular, interpretable, and effective paradigm for legal knowledge recommendation systems, and contributes to broader equitable social governance by offering accurate decision assistance for civil dispute mediation in China.
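An illustrative sketch (assumed, not the IKRM implementation) of multi-hop reasoning with a traceable explanation: a breadth-first path search over a toy legal knowledge graph whose traversed edge sequence doubles as the explanation for the recommendation.

```python
# Multi-hop path search over toy triples; the path is the explanation.
from collections import deque

TRIPLES = [  # invented toy facts
    ("noise_complaint", "is_a", "neighbor_dispute"),
    ("neighbor_dispute", "governed_by", "civil_code_art_288"),
    ("civil_code_art_288", "suggests", "mediated_settlement"),
]

def explain_path(start: str, goal: str, max_hops: int = 3):
    queue = deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path  # the edge sequence doubles as the explanation
        if len(path) < max_hops:
            for s, p, o in TRIPLES:
                if s == node:
                    queue.append((o, path + [(s, p, o)]))
    return None

for s, p, o in explain_path("noise_complaint", "mediated_settlement"):
    print(f"{s} --{p}--> {o}")
```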
Citations: 0