
Latest publications in Data & Knowledge Engineering

Ensemble model with combined feature set for Big data classification in IoT scenario
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-17 | DOI: 10.1016/j.datak.2025.102447
Harivardhagini S (Professor), Pranavanand S (Associate Professor), Raghuram A (Professor)
Sensor nodes that are wirelessly connected to the internet and to several systems make up an Internet of Things system. Large volumes of data are often stored as big data, which complicates the classification process. Many Big data classification strategies are in use, but the main issues are the management of secure information and computational time. This paper proposes a novel classification system for big data in Internet of Things networks that operates in four main phases. In particular, healthcare data is considered from the Big data perspective to address the classification problem. Since healthcare Big data is a revolutionary tool in this industry, it is becoming a vital element of patient-centric care. Different data sources are aggregated in this Big data healthcare ecosystem. The first stage is data acquisition, which takes place via Internet of Things sensors. The second stage is improved DSig normalization for input data preprocessing. The third stage is MapReduce framework-based feature extraction for handling the Big data; it extracts features such as raw data, mutual information, information gain, and improved Renyi entropy. Finally, the fourth stage is an ensemble disease classification model that combines a Recurrent Neural Network, a Neural Network, and an Improved Support Vector Machine to predict normal and abnormal diseases. The suggested work is implemented in Python, and the effectiveness, specificity, sensitivity, precision, and other factors of the results are assessed. The proposed ensemble model achieves a superior precision of 0.9573 at a training rate of 90 % when compared to traditional models.
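As a rough illustration of two building blocks named in the abstract, the sketch below computes a standard Rényi entropy feature for a discrete distribution and combines three classifiers' class-probability outputs by weighted soft voting. All function names, weights, and the choice of soft voting are assumptions for illustration; the paper's improved Rényi entropy, Improved SVM, and exact ensemble rule are not reproduced here.

```python
import numpy as np

def renyi_entropy(counts, alpha=2.0):
    """Standard Renyi entropy of order alpha for a discrete distribution
    given as raw counts (not the paper's 'improved' variant)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # drop empty bins
    if np.isclose(alpha, 1.0):        # order 1 reduces to Shannon entropy
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def soft_vote_ensemble(prob_outputs, weights=None):
    """Average class-probability matrices from several base models
    (e.g., an RNN, a feed-forward NN, and an SVM with probability
    estimates) and return the predicted class per sample."""
    probs = np.stack(prob_outputs)            # shape: (n_models, n_samples, n_classes)
    w = np.ones(len(prob_outputs)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    avg = np.tensordot(w, probs, axes=1)      # weighted mean over models
    return avg.argmax(axis=1)

# toy usage: three models scoring 4 samples on 2 classes (normal / abnormal)
m1 = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
m2 = np.array([[0.8, 0.2], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]])
m3 = np.array([[0.7, 0.3], [0.3, 0.7], [0.5, 0.5], [0.4, 0.6]])
print(soft_vote_ensemble([m1, m2, m3]))       # e.g. [0 1 0 1]
print(renyi_entropy([5, 3, 2], alpha=2.0))
```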
Citations: 0
Releasing differentially private event logs using generative models
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-15 | DOI: 10.1016/j.datak.2025.102450
Frederik Wangelik, Majid Rafiei, Mahsa Pourbafrani, Wil M.P. van der Aalst
In recent years, the industry has been witnessing an extended usage of process mining and automated event data analysis. Consequently, there is a rising significance in addressing privacy apprehensions related to the inclusion of sensitive and private information within event data utilized by process mining algorithms. State-of-the-art research mainly focuses on providing quantifiable privacy guarantees, e.g., via differential privacy, for trace variants that are used by the main process mining techniques, e.g., process discovery. However, privacy preservation techniques designed for the release of trace variants are still insufficient to meet all the demands of industry-scale utilization. Moreover, ensuring privacy guarantees in situations characterized by a high occurrence of infrequent trace variants remains a challenging endeavor. In this paper, we introduce two novel approaches for releasing differentially private trace variants based on trained generative models. With TraVaG, we leverage Generative Adversarial Networks (GANs) to sample from a privatized implicit variant distribution. Our second method employs Denoising Diffusion Probabilistic Models that reconstruct artificial trace variants from noise via trained Markov chains. Both methods offer industry-scale benefits and elevate the degree of privacy assurances, particularly in scenarios featuring a substantial prevalence of infrequent variants. Also, they overcome the shortcomings of conventional privacy preservation techniques, such as bounding the length of variants and introducing fake variants. Experimental results on real-life event data demonstrate that our approaches surpass state-of-the-art techniques in terms of privacy guarantees and utility preservation.
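For context, a conventional baseline for releasing trace variants under differential privacy, which generative approaches like those above aim to improve on, perturbs the variant frequency counts directly with the Laplace mechanism. The sketch below illustrates only that baseline under stated assumptions (the function name, epsilon value, and toy log are hypothetical); it is not the TraVaG or diffusion-based method from the paper.

```python
import numpy as np

def laplace_release(variant_counts, epsilon=0.5, seed=None):
    """Release noisy trace-variant counts under epsilon-differential privacy.
    Adding or removing one case changes exactly one count by 1, so the L1
    sensitivity of the count vector is 1 and the noise scale is 1/epsilon."""
    rng = np.random.default_rng(seed)
    noisy = {}
    for variant, count in variant_counts.items():
        value = count + rng.laplace(loc=0.0, scale=1.0 / epsilon)
        noisy[variant] = max(0, int(round(value)))   # post-processing keeps the DP guarantee
    return noisy

# toy event log: trace variants (activity sequences) with their frequencies
log = {("register", "check", "pay"): 120,
       ("register", "pay"): 45,
       ("register", "check", "reject"): 3}   # infrequent variant: hardest to protect
print(laplace_release(log, epsilon=0.5, seed=1))
```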
Citations: 0
A conceptual model for attributions in event-centric knowledge graphs
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-15 | DOI: 10.1016/j.datak.2025.102449
Florian Plötzky, Katarina Britz, Wolf-Tilo Balke
The use of narratives as a means of fusing information from knowledge graphs (KGs) into a coherent line of argumentation has been the subject of recent investigation. Narratives are especially useful in event-centric knowledge graphs in that they provide a means to connect different real-world events and categorize them by well-known narrations. However, specifically for controversial events, a problem in information fusion arises, namely, multiple viewpoints regarding the validity of certain event aspects, e.g., regarding the role a participant takes in an event, may exist. Expressing those viewpoints in KGs is challenging because disputed information provided by different viewpoints may introduce inconsistencies. Hence, most KGs only feature a single view on the contained information, hampering the effectiveness of narrative information access. This paper is an extension of our original work and introduces attributions, i.e., parameterized predicates that allow for the representation of facts that are only valid in a specific viewpoint. For this, we develop a conceptual model that allows for the representation of viewpoint-dependent information. As an extension, we enhance the model by a conception of viewpoint-compatibility. Based on this, we deepen our original deliberations on the model’s effects on information fusion and provide additional grounding in the literature.
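To make the notion of attributions more tangible, the sketch below models a fact whose validity is parameterized by a viewpoint, so two viewpoints can assert conflicting values for the same event aspect without the graph itself becoming inconsistent. The class and field names are hypothetical illustrations, not the paper's actual conceptual model.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Attribution:
    """A statement about an event that is only asserted to hold
    from the perspective of a particular viewpoint."""
    event: str          # event identifier in the KG
    subject: str        # participant the statement is about
    predicate: str      # e.g. "hasRole"
    value: str          # disputed value, e.g. the role taken
    viewpoint: str      # source / perspective asserting the statement

@dataclass
class EventNode:
    event_id: str
    attributions: list[Attribution] = field(default_factory=list)

    def values_for(self, subject: str, predicate: str) -> dict[str, str]:
        """Collect the (possibly conflicting) values per viewpoint."""
        return {a.viewpoint: a.value
                for a in self.attributions
                if a.subject == subject and a.predicate == predicate}

# two viewpoints disagree on the role of the same participant
node = EventNode("ex:ProtestEvent42")
node.attributions.append(Attribution("ex:ProtestEvent42", "ex:GroupA", "hasRole", "organizer", "ViewpointX"))
node.attributions.append(Attribution("ex:ProtestEvent42", "ex:GroupA", "hasRole", "bystander", "ViewpointY"))
print(node.values_for("ex:GroupA", "hasRole"))
# {'ViewpointX': 'organizer', 'ViewpointY': 'bystander'}
```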
Citations: 0
Collaboration with GenAI in engineering research design
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-10 | DOI: 10.1016/j.datak.2025.102445
Fazel Naghdy
Over the past five years, the fast development and use of generative artificial intelligence (GenAI) and large language models (LLMs) have ushered in a new era of study, teaching, and learning in many domains. The role that GenAIs can play in engineering research is addressed here. Related previous works report on the potential of GenAIs in the literature review process; however, such potential is not demonstrated through case studies and practical examples. The previous works also do not address how GenAIs can assist with all the steps traditionally taken to design research. This study examines the effectiveness of collaboration with GenAIs at various stages of research design. It explores whether collaboration with GenAIs can result in more focused and comprehensive outcomes. A generalised approach for collaboration with AI tools in research design is proposed. A case study that develops a research design on the concept of “shared machine-human driving” is deployed to show the validity of the articulated concepts. The case study demonstrates both the pros and cons of collaboration with GenAIs. The results generated at each stage are rigorously validated and thoroughly examined to ensure they remain free from inaccuracies or hallucinations and align with the original research objectives. When necessary, the results are manually adjusted and refined to uphold their integrity and accuracy. The findings produced by the various GenAI models utilized in this study highlight the key attributes of generative artificial intelligence, namely speed, efficiency, and scope. However, they also underscore the critical importance of researcher oversight, as unexamined inferences and interpretations can render the results irrelevant or meaningless.
Citations: 0
Derived multi-objective function for latency sensitive-based cloud object storage system using hybrid heuristic algorithm
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-10 | DOI: 10.1016/j.datak.2025.102448
N Nataraj, RV Nataraj
Cloud Object Storage System (COSS) is capable of storing and retrieving vast numbers of unstructured data items called objects, and it acts as a core cloud service for contemporary web-based applications. While sharing data among different parties, privacy preservation becomes challenging. Research Problem: A high volume of requests is served every day by routine activities, which leads to latency issues. In a cloud storage system, adopting a holistic approach helps the user identify sensitive information and analyze unwanted files/data. Evolving Internet of Things (IoT) applications are latency-sensitive and do not function well with the new ideas and platforms available today. Overall Purpose of the Study: Therefore, a novel latency-aware COSS is implemented with the aid of multi-objective functionalities to allocate and reallocate data efficiently in order to sustain the storage process in the cloud environment. Design of the Study: This goal is accomplished by implementing a hybrid meta-heuristic approach that integrates the Mother Optimization Algorithm (MOA) with the Dolphin Swarm Optimization (DSO) algorithm. The implemented hybrid optimization algorithm is called the Hybrid Dolphin Swarm-based Mother Optimization Algorithm (HDS-MOA). The HDS-MOA builds the objective function from constraints such as throughput, latency, resource usage, and active servers during the data allocation process. For the data reallocation process, the developed HDS-MOA algorithm also considers multi-objective constraints such as cost, makespan, and energy. Diverse experimental tests are conducted to prove its effectiveness by comparing it with other existing methods for storing data efficiently across cloud networks. Major findings of results: In configuration 3, the proposed HDS-MOA attains improvements of 31.11 %, 55.71 %, 55.71 %, and 68.21 % over OSSperf, queuing theory, the scheduling technique, and Monte Carlo-PSO based on the latency analysis. Overview of Interpretations and Conclusions: The developed HDS-MOA ensures better performance: data is preserved in optimal locations with appropriate access time and lower latency, which is highly essential for cloud object storage. This enhances the overall user experience by boosting data retrieval. Limitations of this Study with Solutions: The proposed algorithm needs to improve its balancing of multiple objectives such as performance, cost, and fault tolerance so that operations are performed optimally in real time, making the system more efficient and responsive to dynamic variations in demand.
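As a rough illustration of how multi-objective constraints such as latency, throughput, cost, energy, and makespan are commonly folded into a single fitness value that a metaheuristic can minimize, consider the sketch below. The weights, the normalization assumption, and all names are hypothetical; the paper's derived HDS-MOA objective is not reproduced here.

```python
def allocation_fitness(metrics, weights=None):
    """Combine normalized objectives into one scalar to minimize.
    'metrics' holds values already scaled to [0, 1], where lower is better
    for costs (latency, cost, energy, makespan) and higher is better for
    benefits (throughput), which is why throughput enters as (1 - value)."""
    weights = weights or {"latency": 0.3, "throughput": 0.25,
                          "cost": 0.2, "energy": 0.15, "makespan": 0.1}
    score = 0.0
    for name, w in weights.items():
        value = metrics[name]
        score += w * ((1.0 - value) if name == "throughput" else value)
    return score

# candidate data placements produced by the search (toy numbers)
candidates = [
    {"latency": 0.20, "throughput": 0.80, "cost": 0.50, "energy": 0.40, "makespan": 0.30},
    {"latency": 0.35, "throughput": 0.90, "cost": 0.30, "energy": 0.45, "makespan": 0.25},
]
best = min(candidates, key=allocation_fitness)
print(allocation_fitness(best), best)
```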
Citations: 0
ECS-KG: An event-centric semantic knowledge graph for event-related news articles
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-08 | DOI: 10.1016/j.datak.2025.102451
MVPT Lakshika, HA Caldera, TNK De Zoysa
Recent advances in deep learning techniques and contextual understanding render Knowledge Graphs (KGs) valuable tools for enhancing accessibility and news comprehension. Conventional and news-specific KGs frequently lack the specificity for efficient news-related tasks, leading to limited relevance and static data representation. To fill the gap, this study proposes an Event-Centric Semantic Knowledge Graph (ECS-KG) model that combines deep learning approaches with contextual embeddings to improve the procedural and dynamic knowledge representation observed in news articles. The ECS-KG incorporates several information extraction techniques, a temporal Graph Neural Network (GNN), and a Graph Attention Network (GAT), yielding significant improvements in news representation. Several gold-standard datasets, comprising CNN/Daily Mail, TB-Dense, and ACE 2005, revealed that the proposed model outperformed the most advanced models. By integrating temporal reasoning and semantic insights, ECS-KG not only enhances user understanding of news significance but also meets the evolving demands of news consumers. This model advances the field of event-centric semantic KGs and provides valuable resources for applications in news information processing.
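The Graph Attention Network component mentioned in the abstract scores each neighbor of a node and normalizes those scores with a softmax. The numpy sketch below shows only that standard single-head attention-coefficient computation; the weights are random placeholders, and the paper's temporal GNN and full ECS-KG pipeline are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def gat_attention(h, neighbors, W, a, leaky_slope=0.2):
    """Single-head GAT attention coefficients for node 0 over its neighbors.
    e_ij = LeakyReLU(a^T [W h_i || W h_j]);  alpha = softmax_j(e_ij)."""
    hi = W @ h[0]
    scores = []
    for j in neighbors:
        hj = W @ h[j]
        z = np.concatenate([hi, hj])
        e = a @ z
        scores.append(e if e > 0 else leaky_slope * e)   # LeakyReLU
    scores = np.array(scores)
    scores -= scores.max()                               # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()        # softmax over neighbors
    return alpha

# toy event graph: node 0 (an event) attends over 3 neighboring entities
h = rng.normal(size=(4, 8))        # 4 nodes, 8-dim input features
W = rng.normal(size=(4, 8))        # projection to 4-dim hidden space
a = rng.normal(size=8)             # attention vector over the concatenated pair
print(gat_attention(h, neighbors=[1, 2, 3], W=W, a=a))   # weights sum to 1
```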
Citations: 0
Overcoming the hurdle of legal expertise: A reusable model for smartwatch privacy policies
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-04-01 | DOI: 10.1016/j.datak.2025.102443
Constantin Buschhaus, Arvid Butting, Judith Michael, Verena Nitsch, Sebastian Pütz, Bernhard Rumpe, Carolin Stellmacher, Sabine Theis
Regulations for privacy protection aim to protect individuals from the unauthorized storage, processing, and transfer of their personal data but oftentimes fail in providing helpful support for understanding these regulations. To better communicate privacy policies for smartwatches, we need an in-depth understanding of their concepts and provide better ways to enable developers to integrate them when engineering systems. Up to now, no conceptual model exists covering privacy statements from different smartwatch manufacturers that is reusable for developers. This paper introduces such a conceptual model for privacy policies of smartwatches and shows its use in a model-driven software engineering approach to create a platform for data visualization of wearable privacy policies from different smartwatch manufacturers. We have analyzed the privacy policies of various manufacturers and extracted the relevant concepts. Moreover, we have checked the model with lawyers for its correctness, instantiated it with concrete data, and used it in a model-driven software engineering approach to create a platform for data visualization. This reusable privacy policy model can enable developers to easily represent privacy policies in their systems. This provides a foundation for more structured and understandable privacy policies which, in the long run, can increase the data sovereignty of application users.
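To suggest what a reusable, machine-readable privacy-policy model could look like in code, the sketch below captures a few typical concepts (data categories, processing purposes, legal basis, retention, third-party sharing) as plain classes that a visualization front end could consume. The class names and fields are hypothetical simplifications and do not reflect the structure of the paper's actual conceptual model.

```python
from dataclasses import dataclass, field

@dataclass
class DataProcessing:
    """One statement from a policy: which data is processed, why, and for how long."""
    data_category: str          # e.g. "heart rate", "GPS location"
    purpose: str                # e.g. "fitness tracking", "advertising"
    legal_basis: str            # e.g. "consent", "contract"
    retention_days: int | None  # None if the policy gives no concrete period
    shared_with: list[str] = field(default_factory=list)

@dataclass
class PrivacyPolicy:
    manufacturer: str
    last_updated: str
    processings: list[DataProcessing] = field(default_factory=list)

    def categories_shared_with_third_parties(self) -> set[str]:
        return {p.data_category for p in self.processings if p.shared_with}

policy = PrivacyPolicy(
    manufacturer="ExampleWatch Inc.",
    last_updated="2025-01-01",
    processings=[
        DataProcessing("heart rate", "fitness tracking", "consent", 365),
        DataProcessing("GPS location", "advertising", "consent", None, ["AdPartner Ltd."]),
    ],
)
print(policy.categories_shared_with_third_parties())   # {'GPS location'}
```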
Citations: 0
Editorial preface to the special issue on research challenges in information science (RCIS’2023)
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-31 | DOI: 10.1016/j.datak.2025.102446
Selmin Nurcan, Andreas L. Opdahl
{"title":"Editorial preface to the special issue on research challenges in information science (RCIS’2023)","authors":"Selmin Nurcan,&nbsp;Andreas L. Opdahl","doi":"10.1016/j.datak.2025.102446","DOIUrl":"10.1016/j.datak.2025.102446","url":null,"abstract":"","PeriodicalId":55184,"journal":{"name":"Data & Knowledge Engineering","volume":"158 ","pages":"Article 102446"},"PeriodicalIF":2.7,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143911765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Customized long short-term memory architecture for multi-document summarization with improved text feature set
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-25 | DOI: 10.1016/j.datak.2025.102440
Satya Deo, Debajyoty Banik, Prasant Kumar Pattnaik
One of the most crucial concerns in the domain of Natural Language Processing (NLP) is Multi-Document Summarization (MDS), and in recent decades the focus on this issue has risen massively. Hence, it is vital for the NLP community to provide effective and reliable MDS methods. Current deep learning-based MDS techniques rely on the extraordinary capacity of neural networks to extract distinctive features. Motivated by this fact, we introduce a novel MDS technique, named Customized Long Short-Term Memory-based Multi-Document Summarization using IBi-GRU (CLSTM-MDS+IBi-GRU), which includes the following working processes. Firstly, the input data is converted into tokens by the Bi-directional Transformer (BERT) tokenizer. Features such as Term Frequency-Inverse Document Frequency (TF-IDF), Bag of Words (BoW), thematic features, and an improved aspect term-based feature are then extracted. Finally, the summarization process takes place by concatenating the Customized Long Short-Term Memory (CLSTM) with a pre-eminent layer. An accurate and high-quality summary is produced by introducing this layer in the LSTM module and the Bi-GRU-based Inception module (IBi-GRU), which can capture long-range dependencies through parallel convolution. The outcomes of this work prove the superiority of our CLSTM-MDS in the Multi-Document Summarization task.
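Two of the feature types listed in the abstract, TF-IDF and Bag of Words, can be extracted with standard tooling. The sketch below shows one common way to do so with scikit-learn on a toy document set, under default settings; it does not cover the paper's improved aspect-term feature, thematic features, or the CLSTM/IBi-GRU model itself.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = [
    "The ensemble model summarizes multiple documents.",
    "Multi-document summarization condenses several documents into one summary.",
    "Feature sets such as TF-IDF and bag of words feed the summarizer.",
]

# Bag-of-Words: raw term counts per document
bow = CountVectorizer()
bow_matrix = bow.fit_transform(docs)            # sparse (n_docs, n_terms)

# TF-IDF: counts reweighted by inverse document frequency
tfidf = TfidfVectorizer()
tfidf_matrix = tfidf.fit_transform(docs)

print(bow_matrix.shape, tfidf_matrix.shape)
print(sorted(tfidf.vocabulary_)[:5])            # first few vocabulary terms
```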
Citations: 0
Application of digital shadows on different levels in the automation pyramid
IF 2.7 | Tier 3 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-24 | DOI: 10.1016/j.datak.2025.102442
Malte Heithoff, Christian Hopmann, Thilo Köbel, Judith Michael, Bernhard Rumpe, Patrick Sapel
The concept of digital shadows helps to move from handling large amounts of heterogeneous data in production to the handling of task- and context-dependent aggregated data sets supporting a specific purpose. Current research lacks further investigations of characteristics digital shadows may have when they are applied to different levels of the automation pyramid. Within this paper, we describe the application of the digital shadow concept for two use cases in injection molding, namely geometry-dependent process configuration, and optimal production planning of jobs on an injection molding machine. In detail, we describe the creation process of digital shadows, relevant data needs for the specific purpose, as well as relevant models. Based on their usage, we describe specifics of their characteristics and discuss commonalities and differences. These aspects can be taken into account when creating digital shadows for further use cases.
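As a small illustration of the digital shadow idea, that is, a purpose-driven, aggregated view on production data rather than the full heterogeneous data set, the sketch below bundles a stated purpose with the data traces and model references it aggregates. The class names, fields, and toy values are hypothetical and far simpler than the digital shadow concept applied in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class DataTrace:
    source: str                 # e.g. "injection molding machine 3, cavity pressure"
    values: list[float] = field(default_factory=list)

@dataclass
class DigitalShadow:
    """Task- and context-specific aggregate of selected data and models."""
    purpose: str                # e.g. "geometry-dependent process configuration"
    asset: str                  # the physical asset the shadow refers to
    traces: list[DataTrace] = field(default_factory=list)
    models: list[str] = field(default_factory=list)   # references to process models

    def aggregate(self) -> dict[str, float]:
        """Reduce each trace to a single purpose-relevant figure (here: mean)."""
        return {t.source: sum(t.values) / len(t.values) for t in self.traces if t.values}

shadow = DigitalShadow(
    purpose="optimal production planning",
    asset="injection molding machine IM-03",
    traces=[DataTrace("cycle time [s]", [42.1, 41.8, 42.5]),
            DataTrace("melt temperature [°C]", [231.0, 232.5])],
    models=["cooling time model"],
)
print(shadow.aggregate())
```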
Citations: 0