
Latest publications in Information Systems

CrossER: A robust and adaptable generalized entity resolution framework for diverse and heterogeneous datasets
IF 3.4 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-14 | DOI: 10.1016/j.is.2025.102609
Yunong Tian , Ning Wang , Anshun Zhou
Entity Resolution (ER) is a critical task in data cleaning and integration, traditionally focusing on structured relational tables with aligned schemas. However, real-world applications often involve diverse data formats, leading to the emergence of Generalized Entity Resolution, which addresses structured, semi-structured, and unstructured data. While prompt-based methods have shown promise in improving entity resolution, they suffer from significant limitations such as sensitivity to prompt design and instability across heterogeneous data formats. To address these challenges, we propose CrossER, a novel framework that integrates cross-attention mechanisms, contrastive learning, and data augmentation. CrossER employs a cross-attention module to dynamically align attributes across heterogeneous data sources, enabling accurate entity resolution. To enhance robustness, contrastive learning constructs discriminative feature representations, and data augmentation introduces variability to improve adaptability to noisy and complex datasets. Experimental results on multiple real-world datasets demonstrate that CrossER significantly outperforms state-of-the-art Generalized Entity Resolution methods in F1 scores while maintaining computational efficiency. Furthermore, CrossER exhibits minimal dependency on specific pre-trained language models and delivers superior recall rates compared to baseline methods, especially in challenging heterogeneous datasets.
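As a rough illustration of the attribute-alignment idea above, the following sketch (not the authors' implementation) applies cross-attention between the attribute embeddings of two candidate records and feeds the aligned representations to a match classifier. The module name AttributeCrossAligner, the embedding dimension, and the pooling are assumptions for illustration; the abstract does not specify them.

```python
# Minimal sketch (not the authors' implementation): cross-attention between the
# attribute embeddings of two candidate records, one way to align attributes
# across heterogeneous sources before a match/no-match decision.
import torch
import torch.nn as nn

class AttributeCrossAligner(nn.Module):  # hypothetical name
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.classifier = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, left, right):
        # left:  (batch, n_attrs_left,  dim) attribute embeddings of record A
        # right: (batch, n_attrs_right, dim) attribute embeddings of record B
        aligned_l, _ = self.attn(query=left, key=right, value=right)  # A attends to B
        aligned_r, _ = self.attn(query=right, key=left, value=left)   # B attends to A
        pooled = torch.cat([aligned_l.mean(dim=1), aligned_r.mean(dim=1)], dim=-1)
        return self.classifier(pooled).squeeze(-1)  # match logit per pair

if __name__ == "__main__":
    model = AttributeCrossAligner()
    a = torch.randn(8, 5, 256)   # e.g. 5 attributes from a relational row
    b = torch.randn(8, 12, 256)  # e.g. 12 fields/tokens from a semi-structured record
    print(model(a, b).shape)     # torch.Size([8])
```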
Cited by: 0
Density based learned spatial index for clustered data
IF 3.4 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-11 | DOI: 10.1016/j.is.2025.102606
Xiaofei Zhao, Kam-Yiu Lam
Retrieving spatial points, such as GPS records or Points of Interest, that satisfy specific location-based query criteria is a core operation in location-based services. Recent studies have shown that learned indexes can outperform traditional indexing methods in both query performance and space efficiency by leveraging data distribution to construct compact predictive models. On the other hand, traditional indexes typically make minimal assumptions about the underlying data distribution. In real-world spatial databases, data is often non-uniformly distributed and tends to cluster in specific regions or along road networks. Adaptivity to such data patterns may bring performance benefits.
In this paper, we explore the construction of efficient learned indexes that exploit the clustering characteristics of spatial datasets. Specifically, we propose a Density-based Grid Learning Spatial Index (DGLSI), which partitions the spatial domain based on point density and utilizes learned models, including multiple recursive model indexes to predict the grid cell IDs of query points. We evaluate DGLSI’s performance on real-world GPS datasets and demonstrate that the proposed methods outperform analogous grid-based indexes across various query workloads, including nearest point queries and range queries while maintaining high space efficiency.
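A simplified stand-in for the density-based partitioning idea (not the paper's DGLSI): each axis is split at data-driven quantiles, so dense regions receive more and narrower cells, and a query point is mapped to a grid-cell ID. Here numpy.searchsorted over the learned boundaries plays the role of the paper's recursive model index; the class name DensityGrid and all parameters are hypothetical.

```python
# Simplified sketch (not the paper's DGLSI): partition each axis by point density
# using quantiles, then map query points to grid-cell IDs.
import numpy as np

class DensityGrid:  # hypothetical helper, for illustration only
    def __init__(self, points, cells_per_axis=16):
        qs = np.linspace(0.0, 1.0, cells_per_axis + 1)[1:-1]
        # Per-axis cell boundaries taken from the empirical distribution of the data.
        self.bx = np.quantile(points[:, 0], qs)
        self.by = np.quantile(points[:, 1], qs)
        self.k = cells_per_axis

    def cell_id(self, p):
        ix = int(np.searchsorted(self.bx, p[0]))
        iy = int(np.searchsorted(self.by, p[1]))
        return iy * self.k + ix

rng = np.random.default_rng(0)
# Clustered synthetic data: two dense blobs, mimicking road-network-like distributions.
pts = np.vstack([rng.normal([0, 0], 0.2, (5000, 2)), rng.normal([5, 5], 0.5, (5000, 2))])
grid = DensityGrid(pts)
print(grid.cell_id(np.array([0.1, -0.05])), grid.cell_id(np.array([5.2, 4.9])))
```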
Cited by: 0
Multi-source data outlier detection based on secure multi-party computation
IF 3.4 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-05 | DOI: 10.1016/j.is.2025.102597
Lin Yao , Zhaolong Zheng , Tian Wei , Guowei Wu
Outlier detection, an important technology for discovering abnormal data, has been applied to many fields such as financial fraud, fault detection, and health diagnosis. Performing outlier detection on multi-source data requires data sharing. However, sharing data among multiple sources generally discloses private information embedded within the data, such as sensitive patient information. With the increasing emphasis on personal privacy, it is necessary to study how to achieve outlier detection for multi-source data while preserving privacy. Secure Multi-Party Computation (SMPC) is a privacy-preserving technology that enables secure computation among multiple sources in the absence of a trusted third party, but its frequent data interaction and complex calculations lead to high complexity and low practicability. In this paper, we propose a secure multi-source data outlier detection scheme based on SMPC. Our scheme uses homomorphic encryption and perturbation to protect the critical step of calculating the global distance matrix, which greatly reduces the complexity of the secure calculation process. Besides, we design an outlier determination strategy that reduces the steps of searching reverse neighbors and calculating the final local outlier factor. By comparison, our scheme outperforms existing schemes in terms of accuracy, running time, and efficiency.
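To make the pipeline concrete, the sketch below shows only the plaintext computation that the scheme protects: pooling two parties' points into a global distance matrix and deriving an LOF-style score from each point's k nearest neighbours. The homomorphic encryption, perturbation, and reverse-neighbour optimizations described in the abstract are deliberately omitted, and the function name and scoring formula are illustrative assumptions.

```python
# Illustrative sketch only: the plaintext computation that the SMPC scheme protects.
import numpy as np

def knn_outlier_scores(points, k=5):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn_d = np.sort(d, axis=1)[:, :k]           # distances to k nearest neighbours
    reach = knn_d.mean(axis=1)                  # average k-NN distance per point
    neighbors = np.argsort(d, axis=1)[:, :k]
    # Ratio of a point's k-NN distance to its neighbours' (LOF-like score).
    return reach / reach[neighbors].mean(axis=1)

rng = np.random.default_rng(1)
party_a = rng.normal(0, 1, (100, 2))
party_b = rng.normal(0, 1, (100, 2))
pooled = np.vstack([party_a, party_b, [[8.0, 8.0]]])  # one injected outlier
scores = knn_outlier_scores(pooled)
print("most anomalous index:", int(np.argmax(scores)))  # the injected point (index 200)
```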
Cited by: 0
On the evaluation and optimization of LabeledPAM
IF 3.4 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-22 | DOI: 10.1016/j.is.2025.102580
Miriama Jánošová , Andreas Lang , Petra Budikova , Erich Schubert , Vlastislav Dohnal
The analysis of complex and weakly labeled data is increasingly popular. Traditional unsupervised clustering aims to uncover interrelated sets of objects based on feature-based similarity. This approach often reaches its limits when dealing with complex multimedia data due to the curse of dimensionality, presenting unique challenges. Semi-supervised clustering, which leverages small amounts of labeled data, has the potential to cope with this problem.
In this work, we delve into LabeledPAM, a semi-supervised clustering method, which extends FasterPAM, a state-of-the-art k-medoids clustering algorithm. Our algorithm is designed for both semi-supervised classification, where labels are assigned to clusters with minimal labeled data, and semi-supervised clustering, where new clusters with unknown labels are identified. We propose an optimization to the original LabeledPAM algorithm that reduces its computational complexity. Additionally, we provide an implementation in Rust, which integrates seamlessly with Python libraries.
To assess LabeledPAM’s performance, we empirically evaluate its properties by comparing it against a range of semi-supervised clustering algorithms, including density-based ones. We conduct experiments on a collection of real-world datasets. Our results demonstrate that LabeledPAM achieves competitive clustering quality while maintaining efficiency across various scenarios, showing its versatility for real-world applications.
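A minimal sketch of the semi-supervised k-medoids idea, assuming a precomputed distance matrix: medoids are seeded from the labeled points (one per known label, plus optional extra medoids for clusters with unknown labels) and refined with a plain PAM swap loop. This is not the optimized LabeledPAM/FasterPAM algorithm and is far less efficient; the function name and parameters are hypothetical.

```python
# Minimal sketch, not the optimized LabeledPAM/FasterPAM algorithm.
import numpy as np

def seeded_k_medoids(dist, labeled, n_extra=0, max_iter=20):
    # dist: (n, n) precomputed distance matrix; labeled: {point_index: label}
    medoids = []
    for lab in sorted(set(labeled.values())):            # one seed medoid per known label
        members = [i for i, l in labeled.items() if l == lab]
        medoids.append(min(members, key=lambda i: dist[i, members].sum()))
    rng = np.random.default_rng(0)
    free = [i for i in range(len(dist)) if i not in medoids]
    medoids += list(rng.choice(free, size=n_extra, replace=False))  # clusters with unknown labels

    def cost(meds):
        return dist[:, meds].min(axis=1).sum()

    for _ in range(max_iter):                            # plain PAM swap loop
        improved = False
        for mi, _ in enumerate(medoids):
            for cand in range(len(dist)):
                if cand in medoids:
                    continue
                trial = medoids.copy()
                trial[mi] = cand
                if cost(trial) < cost(medoids):
                    medoids, improved = trial, True
        if not improved:
            break
    return medoids, dist[:, medoids].argmin(axis=1)      # medoids and cluster assignment
```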
Cited by: 0
Comprehensive characterization of concept drifts in process mining
IF 3.4 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-19 | DOI: 10.1016/j.is.2025.102584
Alexander Kraus , Han van der Aa
Business processes are subject to changes due to the dynamic environments in which they are executed. These process changes can lead to concept drifts, which are situations when the characteristics of a business process have undergone significant changes, resulting in event logs that contain data on different versions of a process. The accuracy and usefulness of process mining results derived from such event logs may be compromised because they rely on historical data that no longer reflects the current process behavior, or because the results do not distinguish between different process versions. Therefore, concept drift detection in process mining aims to identify drifts recorded in an event log by detecting when they occurred, localizing process modifications, and characterizing how they manifest over time. This paper focuses on the latter task, i.e., drift characterization, which seeks to understand whether changes unfolded suddenly or gradually and if they form complex patterns like incremental or recurring drifts. However, current solutions for automatically detecting concept drifts from event logs lack comprehensive characterization capabilities. Instead, they mainly focus on drift detection and characterization of isolated process changes. This leads to an incomplete understanding of more complex concept drifts, like incremental and recurring drifts, when several process changes are inter-connected. This paper overcomes such limitations by introducing an improved taxonomy for characterizing concept drifts and a three-step framework that provides an automatic characterization of concept drifts from event logs. We evaluated our framework through elaborate evaluation experiments conducted using a large collection of synthetic event logs. The results highlight the effectiveness and accuracy of our proposed framework and show that it outperforms state-of-the-art techniques.
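As a toy illustration of the window-based analysis that drift characterization builds on (not the paper's three-step framework), the sketch below compares activity-frequency profiles of adjacent trace windows and reports positions where the profile changes sharply; tracking whether a later window reverts to an earlier profile is what would distinguish, for example, recurring from sudden drifts. All thresholds and names are assumptions.

```python
# Hedged sketch: window-based change detection on a trace log, the kind of signal
# that drift characterization would then classify as sudden, gradual, or recurring.
from collections import Counter

def profile(traces):
    counts = Counter(a for t in traces for a in t)
    total = sum(counts.values()) or 1
    return {a: c / total for a, c in counts.items()}

def l1(p, q):
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def candidate_drifts(log, window=50, threshold=0.3):
    points = []
    for i in range(window, len(log) - window, window):
        before = profile(log[i - window:i])
        after = profile(log[i:i + window])
        if l1(before, after) > threshold:
            points.append(i)
    return points

# Toy log: the process switches from variant <a,b,c> to variant <a,d,c> halfway through.
log = [["a", "b", "c"]] * 200 + [["a", "d", "c"]] * 200
print(candidate_drifts(log))  # a single candidate change point at trace 200
```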
Cited by: 0
Low-code solutions for business process dataflows: From modeling to execution
IF 3.0 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-18 | DOI: 10.1016/j.is.2025.102577
Ali Nour Eldin , Jonathan Baudot , Benjamin Dalmas , Walid Gaaloul
Business Process Modeling and Notation (BPMN) is a widely adopted standard for modeling business workflows. However, the increasing complexity and integration of data within business processes demand a modeling language capable of clearly expressing both process and data perspectives. While BPMN effectively represents process control flows, it inadequately addresses critical data-related aspects such as data flow, data dependencies, and data transformations. Moreover, communication gaps and differing interpretations of process requirements frequently arise between developers and business analysts, leading to errors and delays in process implementation and execution.
To address these limitations, this paper introduces an extension of BPMN, termed the Business Process and Data Modeling Language (BPDML). BPDML is a low-code modeling language specifically designed to capture, model, and execute data-driven business processes. By adopting a low-code approach, BPDML bridges the gap between business analysts and developers, facilitating faster development and delivery of business applications with reduced effort and minimal manual coding. In addition, a specialized modeling tool has been developed to support the creation, validation, and execution of models using BPDML. Both quantitative and qualitative evaluations demonstrate that BPDML significantly enhances the clarity, efficiency, and overall effectiveness of business process modeling and implementation compared to traditional BPMN.
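Since BPDML's concrete syntax is not given in the abstract, the following is a purely hypothetical illustration of what a low-code process-plus-dataflow specification could look like: each task carries both its control-flow successor and the data objects it reads and writes, which makes simple dataflow checks possible.

```python
# Purely illustrative, not actual BPDML syntax: a declarative process definition that
# pairs control flow with an explicit dataflow (what each task reads and writes).
order_process = {
    "tasks": {
        "receive_order":  {"next": "check_stock", "writes": ["order"]},
        "check_stock":    {"next": "ship_or_reject", "reads": ["order", "inventory"],
                           "writes": ["availability"]},
        "ship_or_reject": {"gateway": "exclusive",
                           "branches": {"available": "ship", "unavailable": "reject"},
                           "reads": ["availability"]},
        "ship":           {"next": None, "reads": ["order"], "writes": ["shipment"]},
        "reject":         {"next": None, "writes": ["rejection_notice"]},
    },
    "data_objects": ["order", "inventory", "availability", "shipment", "rejection_notice"],
}

def data_dependencies(process):
    """For each task, list which tasks produce the data it consumes (a simple dataflow check)."""
    writers = {}
    for name, t in process["tasks"].items():
        for obj in t.get("writes", []):
            writers.setdefault(obj, []).append(name)
    return {name: {obj: writers.get(obj, []) for obj in t.get("reads", [])}
            for name, t in process["tasks"].items()}

print(data_dependencies(order_process)["check_stock"])
# {'order': ['receive_order'], 'inventory': []}  -> 'inventory' has no producing task
```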
Cited by: 0
Temporal relational algebras supporting preferences in temporal relational databases: Definition, properties and evaluation
IF 3.0 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-17 | DOI: 10.1016/j.is.2025.102583
Luca Anselma , Antonella Coviello , Davide Cerotti , Erica Raina , Paolo Terenziani
Although numerous approaches address the treatment of time within relational contexts, temporal preferences remain unexplored. Many tasks and applications, such as planning, scheduling, workflows, and guidelines, involve scenarios where the exact timing of events is not known, referred to as indeterminate time. In such cases, preferences can be assigned to different possible temporal outcomes. In a recent study, we established the theoretical foundation for handling preferential indeterminate time in temporal relational databases. This includes proposing a temporal relational representation and a corresponding temporal relational algebra, along with an analysis of their theoretical properties, such as correctness and reducibility.
The contributions of this paper are twofold. First, we extend the above theoretical framework to deal with a more expressive representation of temporal preferences. Second, we assess both theoretical frameworks in terms of performance evaluation along different dimensions, and study the overhead added to cope with preferences with respect to relational approaches without time, with exact time, and with indeterminate time but no preferences.
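A minimal data-model sketch of the idea (not the paper's algebra): a tuple with indeterminate valid time carries several alternative intervals, each with a preference weight, and a preference-aware temporal selection returns the tuples whose alternatives may overlap the query interval together with the best preference among them. All names and the scoring rule are illustrative.

```python
# Minimal sketch: indeterminate valid time as alternative intervals with preferences,
# plus a preference-aware temporal selection operator.
from dataclasses import dataclass

@dataclass
class TempTuple:
    key: str
    alternatives: list  # [(start, end, preference in [0, 1]), ...]

def overlaps(a_start, a_end, b_start, b_end):
    return a_start <= b_end and b_start <= a_end

def preferred_selection(relation, q_start, q_end):
    out = []
    for t in relation:
        prefs = [p for (s, e, p) in t.alternatives if overlaps(s, e, q_start, q_end)]
        if prefs:
            out.append((t.key, max(prefs)))  # best preference among matching alternatives
    return out

surgery = TempTuple("surgery", [(9, 10, 0.8), (14, 15, 0.2)])  # morning slot preferred
therapy = TempTuple("therapy", [(16, 17, 1.0)])
print(preferred_selection([surgery, therapy], 8, 11))  # [('surgery', 0.8)]
```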
Cited by: 0
A comprehensive approach to improving CLIP-based image retrieval while maintaining joint-embedding alignment
IF 3.0 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-08 | DOI: 10.1016/j.is.2025.102581
Konstantin Schall , Kai Uwe Barthel , Nico Hezel , Andre Moelle
Contrastive Language–Image Pre-training (CLIP) jointly optimizes an image encoder and a text encoder, yet its semantic supervision can blur the distinction between visually different images that share similar captions, hurting instance-level image retrieval. We study two strategies, two-stage fine-tuning (2SFT) and multi-caption-image pairing (MCIP), that strengthen CLIP models for content-based image retrieval while preserving their cross-modal strengths. 2SFT first adapts the image encoder for retrieval and then realigns the text encoder. MCIP injects multiple pseudo-captions per image so that class labels sharpen retrieval and the extra captions keep text alignment. This extended version augments the original SISAP24 study with experiments on additional models, a systematic investigation of key hyperparameters of the presented approach, insights into the effects of the methods on the model, and a more detailed report on training settings and costs. Across four CLIP model families, the proposed methods boost image-to-image retrieval accuracy without sacrificing text-to-image performance, simplifying large-scale multimodal search systems by allowing them to store one embedding per image while being effective in image-to-image and text-to-image search.
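The following schematic sketch shows the two-stage fine-tuning (2SFT) idea in generic PyTorch: stage one tunes the image encoder with a CLIP-style contrastive loss while the text tower is frozen, and stage two freezes the tuned image tower and realigns the text encoder to it. The encoders are placeholder modules rather than a specific CLIP library's API, and the single-batch "training" only demonstrates the freezing pattern.

```python
# Schematic sketch of 2SFT with placeholder encoders; not a specific CLIP library's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))               # placeholder
text_encoder = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(), nn.Linear(64 * 16, 256))

def clip_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(len(logits))
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

images = torch.randn(8, 3, 32, 32)
captions = torch.randint(0, 1000, (8, 16))

# Stage 1: adapt the image encoder for retrieval, text tower frozen.
for p in text_encoder.parameters():
    p.requires_grad_(False)
opt1 = torch.optim.AdamW(image_encoder.parameters(), lr=1e-4)
loss1 = clip_loss(image_encoder(images), text_encoder(captions))
loss1.backward()
opt1.step()

# Stage 2: freeze the tuned image tower and realign the text encoder to it.
for p in text_encoder.parameters():
    p.requires_grad_(True)
for p in image_encoder.parameters():
    p.requires_grad_(False)
opt2 = torch.optim.AdamW(text_encoder.parameters(), lr=1e-4)
loss2 = clip_loss(image_encoder(images).detach(), text_encoder(captions))
loss2.backward()
opt2.step()
print(float(loss1), float(loss2))
```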
Cited by: 0
Refining the process picture: Unstructured data in object-centric process mining
IF 3.0 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-08 | DOI: 10.1016/j.is.2025.102582
Andreas Egger , Tobias Fehrer , Wolfgang Kratsch , Niklas Wördehoff , Fabian König , Maximilian Röglinger
Process mining aims to discover, monitor, and improve processes. To this end, process mining techniques use event data, typically extracted from information systems and organized along process instances. The inherent complexity of real-world processes has driven the recent introduction of object-centric process mining, allowing for a more comprehensive view of processes. Another avenue of research contributing to more complete process analyses is integrating unstructured data, which can enhance traditional event logs by extracting hitherto unidentified process information. Although combining the object-centric perspective with event log enrichment from unstructured data sources holds promising potential, such investigation remains in its infancy. Against this background, this study presents the OCRAUD, a reference architecture that provides guidance on using unstructured data sources and traditional event logs for object-centric process mining. A design science research process was employed to design and evaluate the OCRAUD. This involved conducting a total of 20 expert interviews over two rounds, comparing the OCRAUD to competing artifacts, instantiating the artifact for the use of video and sensor data, developing a software prototype, and applying the prototype to real-world data. This work contributes to process mining by guiding the combination of unstructured data with traditional event logs, incorporating an object-centric representation of event data. The instantiation targets video and sensor data, thereby demonstrating the use of the artifact. This enables researchers and practitioners to instantiate the artifact for other data types or specific use cases. The published code of the software prototype allows for further development of the implemented algorithms.
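As a small illustration of the enrichment step (hypothetical, not the OCRAUD reference architecture itself), the sketch below derives events from an unstructured sensor stream with a simple threshold rule and merges them into an object-centric event log in which each event references one or more objects. Field names and the detection rule are assumptions.

```python
# Hedged sketch: merging events derived from a sensor stream into an object-centric
# event log, where each event can reference several objects.
from datetime import datetime, timedelta

ocel_events = [
    {"activity": "start assembly", "timestamp": datetime(2025, 1, 1, 8, 0),
     "objects": {"order": "o1", "machine": "m7"}},
    {"activity": "end assembly", "timestamp": datetime(2025, 1, 1, 8, 30),
     "objects": {"order": "o1", "machine": "m7"}},
]

# Unstructured side: a vibration sensor on machine m7, sampled once per minute.
t0 = datetime(2025, 1, 1, 8, 0)
sensor = [(t0 + timedelta(minutes=i), v) for i, v in
          enumerate([0.2, 0.3, 0.2, 1.9, 2.1, 0.3, 0.2, 0.2])]

def sensor_to_events(stream, machine_id, threshold=1.5):
    """Turn threshold crossings in the raw stream into derived events."""
    events = []
    for ts, value in stream:
        if value > threshold:
            events.append({"activity": "abnormal vibration", "timestamp": ts,
                           "objects": {"machine": machine_id}})
    return events

enriched = sorted(ocel_events + sensor_to_events(sensor, "m7"),
                  key=lambda e: e["timestamp"])
for e in enriched:
    print(e["timestamp"].time(), e["activity"], e["objects"])
```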
Cited by: 0
Pose estimation analysis and fine-tuning on the REHAB24-6 rehabilitation dataset
IF 3.0 | CAS Tier 2 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-07-07 | DOI: 10.1016/j.is.2025.102579
Andrej Černek , Jan Sedmidubsky , Petra Budikova
Human motion analysis is a key enabler for remote healthcare applications, particularly in physical rehabilitation. In this context, mobile devices equipped with RGB cameras seem to be a promising technology for monitoring patients during home-based exercises and providing real-time feedback. This relies on pose estimation algorithms that extract spatio-temporal features of human motion from video data. While state-of-the-art models can estimate body pose from mobile video streams, their effectiveness in rehabilitation scenarios remains underexplored. To address this, we introduce the REHAB24-6 dataset, which includes untrimmed RGB videos, 2D and 3D skeletal ground truth annotations, and temporal segmentation for six common rehabilitation exercises. We also propose an evaluation protocol for assessing different aspects of quality of pose estimation methods, dealing with challenges that arise when different skeleton formats are compared. Additionally, we show how fine-tuning of existing models on our dataset leads to improved quality. Our experimental results compare several state-of-the-art approaches and highlight their key limitations – particularly in depth estimation – offering practical insights for selecting and improving pose estimation systems for rehabilitation monitoring.
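One standard building block such an evaluation protocol could use is PCK (Percentage of Correct Keypoints), shown below for 2D poses; this is not the paper's exact protocol, and the bounding-box-based threshold is just one common normalization choice.

```python
# Not the paper's exact protocol: a standard PCK computation for comparing estimated
# 2D poses against ground-truth skeletons, normalized by the person's bounding-box size.
import numpy as np

def pck(pred, gt, alpha=0.1):
    # pred, gt: (n_frames, n_joints, 2) arrays of 2D joint coordinates
    bbox = gt.max(axis=1) - gt.min(axis=1)                # per-frame bounding box (w, h)
    scale = np.linalg.norm(bbox, axis=-1, keepdims=True)  # per-frame reference size
    dist = np.linalg.norm(pred - gt, axis=-1)             # per-joint error
    return float((dist < alpha * scale).mean())

rng = np.random.default_rng(42)
gt = rng.uniform(0, 480, size=(100, 17, 2))               # e.g. 17 COCO-style joints
pred = gt + rng.normal(0, 5, size=gt.shape)               # estimator with ~5 px noise
print(f"PCK@0.1: {pck(pred, gt):.3f}")
```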
Cited by: 0